Funding: Supported by the National "863" High-Tech Program (No. 2006AA04Z231), the Foundation of the State Key Laboratory of Robotics and Systems (No. SKLRS-200801A02), the College Discipline Innovation Wisdom Plan (No. B07018), and the Natural Science Foundation of Heilongjiang Province (No. ZJG0709).
Abstract: A method of topology synthesis based on graph theory and mechanism combination theory was applied to the configuration design of locomotion systems for lunar exploration rovers (LERs). Through topological combination of the wheel structural unit, the suspension unit, and the unit connecting the suspension to the load platform, several new locomotion system configurations were proposed, and the metrics and indexes for evaluating their performance were analyzed. The performance of two LERs with locomotion systems of different configurations was then evaluated and compared. The results indicate that the new locomotion system configuration offers good trafficability.
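To illustrate the combination step, here is a minimal Python sketch of enumerating candidate configurations from unit catalogs; the unit names are hypothetical stand-ins, since the paper derives its units from graph-theoretic topology synthesis rather than a fixed list.

```python
from itertools import product

# Hypothetical unit catalogs; the paper derives its units from
# graph-theoretic enumeration of kinematic topologies instead.
wheel_units = ["rigid_wheel", "elastic_wheel", "wheel_leg"]
suspension_units = ["rocker_bogie", "four_bar_linkage", "torsion_arm"]
connector_units = ["fixed_joint", "differential", "averaging_linkage"]

# Topology combination: each admissible triple of units is a candidate
# locomotion-system configuration to be screened by the evaluation metrics.
configurations = [
    {"wheel": w, "suspension": s, "connector": c}
    for w, s, c in product(wheel_units, suspension_units, connector_units)
]
print(len(configurations), "candidate configurations;", configurations[0])
```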
Abstract: Medical image fusion is considered the best method for obtaining a single image with rich details for efficient medical diagnosis and therapy. Deep learning provides high performance for several medical image analysis applications. This paper proposes a deep learning model for the medical image fusion process, based on a Convolutional Neural Network (CNN). The basic idea of the proposed model is to extract features from both the CT and MR images, fuse the extracted features, and then reconstruct the fused feature map to obtain the resulting fused image. Finally, the quality of the fused image is enhanced by various enhancement techniques, such as Histogram Matching (HM), Histogram Equalization (HE), fuzzy techniques, and Contrast Limited Adaptive Histogram Equalization (CLAHE). The performance of the proposed fusion-based CNN model is measured by various metrics of fusion and enhancement quality. The model is evaluated on realistic datasets spanning different modalities and diseases, and real datasets are also used in the simulation analysis.
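The following is a minimal, untrained PyTorch sketch of the fusion pipeline described above (shared feature extraction, elementwise-max fusion of the feature maps, convolutional reconstruction); the layer sizes and the max-fusion rule are illustrative assumptions, not the paper's exact architecture. An enhancement stage such as CLAHE (e.g., cv2.createCLAHE in OpenCV) would then be applied to the reconstructed image.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Untrained sketch: shared encoder, elementwise-max feature fusion,
    convolutional reconstruction. Layer sizes are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, ct, mr):
        fused = torch.maximum(self.encode(ct), self.encode(mr))  # fuse feature maps
        return self.decode(fused)                                # reconstruct image

model = FusionCNN()
ct = torch.rand(1, 1, 128, 128)  # stand-ins for registered CT and MR slices
mr = torch.rand(1, 1, 128, 128)
print(model(ct, mr).shape)       # torch.Size([1, 1, 128, 128])
```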
Abstract: Weeds are plants that grow alongside nearly all field crops, including rice, wheat, cotton, millets, and sugar cane, affecting crop yield and quality. Classifying and accurately identifying all types of weeds is a challenging task for farmers in the early stages of crop growth because of their similarity. To address this issue, an efficient weed classification model is proposed based on a Deep Convolutional Neural Network (CNN) that performs automatic feature extraction and complex feature learning for image classification. In this work, weed images were trained using the proposed CNN model with an evolutionary computing approach to classify weeds from two publicly available weed datasets. The Tamil Nadu Agricultural University (TNAU) dataset, used as the first dataset, consists of 40 classes of weed images; the second dataset, from the Indian Council of Agricultural Research–Directorate of Weed Research (ICAR-DWR), contains 50 classes of weed images. An effective Particle Swarm Optimization (PSO) technique is applied in the proposed CNN to automatically evolve the network and improve its classification accuracy. The proposed model was evaluated and compared with pre-trained transfer learning models such as GoogLeNet, AlexNet, Residual Neural Network (ResNet), and Visual Geometry Group Network (VGGNet) for weed classification. This work shows that the PSO-assisted CNN model significantly improves the success rate, reaching 98.58% on the TNAU dataset and 97.79% on the ICAR-DWR dataset.
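A compact NumPy sketch of the Particle Swarm Optimization loop is shown below; the fitness function is a stand-in for the CNN's validation accuracy, and the inertia/acceleration constants are conventional defaults rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Stand-in objective; in the paper this would be the validation
    # accuracy of a CNN trained with the hyperparameters encoded in x.
    return -np.sum((x - 0.3) ** 2)

n_particles, dim, iters = 20, 4, 50
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("best hyperparameter vector:", gbest)
```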
Funding: Supported by Phase 4, Software Engineering (Software Service Engineering), under Grant No. XXKZD1301.
Abstract: When dealing with ratings from users, traditional collaborative filtering algorithms do not consider the credibility of the rating data, which affects the accuracy of similarity computation. To address this issue, this paper proposes an improved algorithm based on classification and user trust. It first classifies all ratings by item category. Then, for each category, it evaluates the trustworthiness of each user in that category and applies this degree of trust to the user's ratings. Finally, the algorithm computes similarities between users, finds the nearest neighbors, and makes recommendations within each category. Simulations show that the improved algorithm outperforms traditional collaborative filtering algorithms and improves recommendation accuracy.
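The sketch below illustrates the per-category trust weighting on a toy rating matrix; the trust measure used here (closeness of a user's ratings to the item means) is an assumption for illustration, since the abstract does not reproduce the paper's exact trust formula.

```python
import numpy as np

# Toy rating matrix (users x items), 0 = unrated; columns grouped by category.
R = np.array([[5, 4, 0, 1, 0],
              [4, 0, 3, 1, 2],
              [1, 2, 5, 0, 4]], dtype=float)
categories = {"books": [0, 1, 2], "movies": [3, 4]}

def trust(Rc, u):
    # Illustrative trust measure (an assumption, not the paper's formula):
    # how closely user u's ratings in this category track the item means.
    rated = Rc[u] > 0
    if not rated.any():
        return 0.0
    means = Rc.sum(axis=0) / np.maximum((Rc > 0).sum(axis=0), 1)
    return 1.0 / (1.0 + np.abs(Rc[u, rated] - means[rated]).mean())

def cosine(a, b):
    # Similarity over co-rated items only.
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]))

# Within each category: weight each user's ratings by their trust degree,
# then compute user-user similarities on the weighted ratings.
for cat, cols in categories.items():
    Rc = R[:, cols]
    w = np.array([trust(Rc, u) for u in range(len(Rc))])
    weighted = Rc * w[:, None]
    print(f"{cat}: sim(user0, user1) = {cosine(weighted[0], weighted[1]):.3f}")
```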
Abstract: While optimizing model parameters with respect to evaluation metrics has recently proven to benefit end-to-end neural machine translation (NMT), the evaluation metrics used in training are restricted to those defined at the sentence level, to facilitate online learning algorithms. This is undesirable because the final evaluation metrics used in the testing phase are usually non-decomposable (i.e., they are defined at the corpus level and cannot be expressed as a sum of sentence-level metrics). To minimize the discrepancy between training and testing, we propose to extend the minimum risk training (MRT) algorithm to take non-decomposable corpus-level evaluation metrics into consideration while still keeping the advantages of online training. This is done by calculating corpus-level evaluation metrics on a subset of training data at each step of online training. Experiments on Chinese-English and English-French translation show that our approach improves the correlation between training and testing and significantly outperforms the MRT algorithm using decomposable evaluation metrics.
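A schematic sketch of the risk computation follows: the corpus-level metric is evaluated on a small sampled subset, with one sentence's hypothesis varied over its candidate translations. The corpus_metric here is a toy token-overlap stand-in for corpus BLEU, and the candidate probabilities are assumed already renormalized, as is usual in MRT.

```python
def corpus_metric(hyps, refs):
    # Toy corpus-level metric (token-set overlap over the whole subset);
    # a real system would compute corpus BLEU here.
    match = sum(len(set(h) & set(r)) for h, r in zip(hyps, refs))
    total = sum(len(set(r)) for r in refs)
    return match / max(total, 1)

def mrt_risk(candidates, probs, idx, subset_hyps, refs):
    # Expected corpus-level loss when sentence `idx` varies over its
    # candidate translations while the rest of the subset stays fixed.
    risk = 0.0
    for cand, p in zip(candidates, probs):
        hyps = list(subset_hyps)
        hyps[idx] = cand
        risk += p * (1.0 - corpus_metric(hyps, refs))
    return risk

# A tiny subset of training data sampled at one online-training step.
refs = [["the", "cat", "sat"], ["a", "dog", "ran"]]
subset_hyps = [["the", "cat", "sat"], ["a", "dog", "walked"]]
candidates = [["a", "dog", "ran"], ["a", "dog", "walked"], ["dogs", "run"]]
probs = [0.5, 0.3, 0.2]  # model probabilities renormalized over the candidates

print("MRT risk for sentence 1:",
      round(mrt_risk(candidates, probs, 1, subset_hyps, refs), 3))
```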
Abstract: With the explosive growth of information, more and more organizations are deploying private cloud systems or renting public cloud systems to process big data. However, no existing benchmark suite evaluates cloud performance at the whole-system level. To the best of our knowledge, this paper proposes the first benchmark suite, CloudRank-D, to benchmark and rank cloud computing systems that are shared for running big data applications. We analyze the limitations of previous metrics, e.g., floating-point operations, for evaluating a cloud computing system, and propose two simple, complementary metrics: data processed per second and data processed per joule. We detail the design of CloudRank-D, which considers representative applications, diversity of data characteristics, and dynamic behaviors of both applications and system software platforms. Through experiments, we demonstrate the advantages of our proposed metrics. In several case studies, we evaluate two small-scale deployments of cloud computing systems using CloudRank-D.
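The two proposed metrics follow directly from their definitions; a trivial sketch with made-up measurements:

```python
def data_per_second(bytes_processed: float, wall_seconds: float) -> float:
    # Throughput metric: how much data the system processes per second.
    return bytes_processed / wall_seconds

def data_per_joule(bytes_processed: float, energy_joules: float) -> float:
    # Energy-efficiency metric: how much data is processed per joule consumed.
    return bytes_processed / energy_joules

# Hypothetical run: 2 TB processed in 1800 s at an average draw of 4 kW.
b = 2e12
print(data_per_second(b, 1800))        # bytes per second
print(data_per_joule(b, 4000 * 1800))  # bytes per joule
```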
Funding: Project supported by the National Natural Science Foundation of China (No. 61702074), the Liaoning Provincial Natural Science Foundation of China (No. 20170520196), and the Fundamental Research Funds for the Central Universities, China (Nos. 3132019205 and 3132019354).
Abstract: In underwater scenes, the quality of the video and images acquired by an underwater imaging system suffers from severe degradation, influencing target detection and recognition. Thus, restoring real scenes from blurred videos and images is of great significance. Owing to light absorption and scattering by suspended particles, the acquired images often have poor visibility, including color shift, low contrast, noise, and blurring issues. This paper aims to classify and compare some of the significant technologies in underwater image defogging, presenting a comprehensive picture of the current research landscape for researchers. First, we analyze the causes of degradation of underwater images and the underwater optical imaging model. Then, we classify underwater image defogging technologies into three categories: image restoration approaches, image enhancement approaches, and deep learning approaches. Afterward, we present the objective evaluation metrics and analyze the state-of-the-art approaches. Finally, we summarize the shortcomings of defogging approaches for underwater images and propose seven research directions.
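As a concrete instance of the image enhancement category, the sketch below applies CLAHE to the luminance channel of a stand-in underwater frame with OpenCV; this is a generic enhancement step commonly used on low-contrast frames, not a specific method surveyed in the paper.

```python
import cv2
import numpy as np

# Stand-in for a degraded underwater frame (BGR, uint8).
img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)

# Apply CLAHE to the luminance channel only, leaving chrominance untouched.
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge([clahe.apply(l), a, b])
out = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
print(out.shape, out.dtype)
```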
Funding: Supported by the National Natural Science Foundation of China (Nos. 61772242, 61976106, and 61572239), the China Postdoctoral Science Foundation (No. 2017M611737), the Six Talent Peaks Project in Jiangsu Province (No. DZXX-122), and the Key Special Project of Health and Family Planning Science and Technology in Zhenjiang City (No. SHW2017019).
Abstract: Abdominal organ segmentation is the segregation of one or more abdominal organs into semantic image segments of pixels identified by homogeneous features such as color, texture, and intensity. Abdominal organ conditions are associated with considerable morbidity and mortality. Many patients have asymptomatic abdominal conditions whose symptoms are recognized late; hence, the abdomen has been the third most common cause of damage to the human body. Nevertheless, outcomes may improve when the condition of an abdominal organ is detected earlier. Over the years, supervised and semi-supervised machine learning methods have been used to segment abdominal organs in order to detect such conditions. Supervised methods perform well when the training data represent the target data, but they require large amounts of manually annotated data and have adaptation problems. Semi-supervised methods are fast but perform worse than supervised methods when their assumptions about the data fail to hold. Current state-of-the-art supervised segmentation methods are largely based on deep learning techniques, owing to their good accuracy and success in real-world applications. However, because deep learning requires a large amount of training data for automatic feature extraction, it can hardly be used where such data are unavailable. Among semi-supervised segmentation methods, self-training and graph-based techniques have attracted much research attention. Self-training can be used with any classifier but has no mechanism to rectify mistakes early. Graph-based techniques benefit from their convexity, scalability, and effectiveness in application, but suffer from an out-of-sample problem. In this review paper, supervised and semi-supervised methods for abdominal organ segmentation are studied: current approaches are surveyed, their connections and gaps are identified, and prospective future research opportunities are enumerated.
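To make the self-training weakness concrete, here is a generic self-training loop on synthetic data with scikit-learn (an illustration, not a method from the reviewed papers); note that a confident but wrong pseudo-label added early is never revisited.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 5))
y_lab = (X_lab[:, 0] > 0).astype(int)    # toy labels
X_unl = rng.normal(size=(200, 5))        # unlabeled pool

clf = LogisticRegression()
for _ in range(5):                       # self-training rounds
    clf.fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)
    pick = proba.max(axis=1) > 0.95      # keep only high-confidence pseudo-labels
    if not pick.any():
        break
    X_lab = np.vstack([X_lab, X_unl[pick]])
    y_lab = np.concatenate([y_lab, proba[pick].argmax(axis=1)])
    X_unl = X_unl[~pick]                 # wrong pseudo-labels are never revisited

print("final training-set size:", len(X_lab))
```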
Funding: Supported by the National Natural Science Foundation of China (Nos. 61071131 and 61271388), the Beijing Natural Science Foundation (No. 4122040), a Research Project of Tsinghua University (No. 2012Z01011), and the Doctoral Fund of the Ministry of Education of China (No. 20120002110036).
Abstract: Feature selection is a key task in statistical pattern recognition. Most feature selection algorithms have been proposed based on specific objective functions that are usually intuitively reasonable but can sometimes be far from the more basic objectives of feature selection. This paper describes how to select features such that the basic objectives, e.g., classification or clustering accuracy, can be optimized in a more direct way. The analysis requires that the contribution of each feature to the evaluation metrics can be quantitatively described by some score function. Motivated by the conditional independence structure in probabilistic distributions, the analysis uses a leave-one-out feature selection algorithm, which provides an approximate solution. The leave-one-out algorithm improves on the conventional greedy backward elimination algorithm by preserving more interactions among features in the selection process, so that the various feature selection objectives can be optimized in a unified way. Experiments on six real-world datasets with different feature evaluation metrics show that this algorithm outperforms popular feature selection algorithms in most situations.
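A minimal sketch of leave-one-out backward elimination is given below; the score function is a synthetic stand-in for the classification or clustering accuracy the paper optimizes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 8
utility = rng.uniform(size=n_features)   # toy per-feature utilities

def score(subset):
    # Stand-in score function; in the paper this would be a classification
    # or clustering accuracy evaluated on the candidate feature subset.
    if not subset:
        return 0.0
    return utility[list(subset)].sum() - 0.1 * len(subset) ** 1.5

def loo_backward_select(k):
    """Leave-one-out backward elimination: repeatedly drop the feature whose
    removal leaves the remaining subset's score highest."""
    selected = list(range(n_features))
    while len(selected) > k:
        drop = max(selected,
                   key=lambda f: score([g for g in selected if g != f]))
        selected.remove(drop)
    return sorted(selected)

print("selected features:", loo_backward_select(4))
```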
Abstract: Six criteria were used to evaluate 12 metrics for their sensitivity to effluent flowing from the Ferris-Haggarty copper mine into Haggarty Creek and then into Battle Creek West Fork. Through the evaluation process, we found that the Shannon-Wiener index, the random runs value, and Ephemeroptera taxa richness appeared to best reflect the impacts that have occurred in both Haggarty Creek and Battle Creek West Fork. In addition, Ephemeroptera/Plecoptera/Trichoptera taxa richness, total taxa richness, and Plecoptera taxa richness were useful in reflecting those impacts. In contrast, we found that the abundance ratios, the Hilsenhoff Biotic Index, and Trichoptera taxa richness did not reflect the impacts that occurred in Haggarty Creek and Battle Creek West Fork. Finally, this study provided information about the benthic insect communities present in the impacted reaches of Haggarty Creek. Such information is needed to assess the potential of those reaches as habitat for the Colorado River cutthroat trout, Oncorhynchus clarki pleuriticus, a species of special concern to the Wyoming Department of Game and Fish.
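For reference, the Shannon-Wiener diversity index is H' = -Σ p_i ln p_i over taxon proportions p_i; a worked example on hypothetical abundance counts:

```python
import math

# Hypothetical abundance counts by taxon from one benthic sample.
counts = {"Baetis": 34, "Ephemerella": 12, "Brachycentrus": 7, "Chironomidae": 47}

n = sum(counts.values())
shannon = -sum((c / n) * math.log(c / n) for c in counts.values())
print(f"Shannon-Wiener H' = {shannon:.3f}, taxa richness = {len(counts)}")
```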
Abstract: A recommender system is employed to accurately recommend items that are expected to attract the user's attention. Over-emphasis on the accuracy of recommendations can cause information over-specialization and make recommendations boring and even predictable. Novelty and diversity are two partly useful solutions to these problems. However, novel and diverse recommendations cannot by themselves ensure that users are attracted, since such recommendations may not be relevant to the user's interests. Hence, it is necessary to consider other criteria, such as unexpectedness and relevance. Serendipity is a criterion for making appealing and useful recommendations, and the usefulness of serendipitous recommendations is the main superiority of this criterion over novelty and diversity. A growing body of recommender system studies has focused on serendipity in recent years. Thus, a systematic literature review of previous studies of serendipity-oriented recommender systems is conducted in this paper. The paper focuses on the contextual convergence of serendipity definitions, datasets, serendipitous recommendation methods, and their evaluation techniques. Finally, trends and existing potentials of serendipity-oriented recommender systems are discussed for future studies. The results of the systematic literature review show that both the quality and the quantity of articles on serendipity-oriented recommender systems are increasing.
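As background, one common formulation of a serendipity metric from the literature, the fraction of recommendations that are both unexpected relative to an obvious baseline and useful, is sketched below; the surveyed papers differ in their exact definitions, so this is illustrative only.

```python
def serendipity(recommended, baseline, useful):
    """Fraction of recommendations that are both unexpected (absent from an
    obvious baseline recommender's list) and useful to the user. This is one
    formulation from the literature; the surveyed papers vary."""
    unexpected = set(recommended) - set(baseline)
    return len(unexpected & set(useful)) / max(len(recommended), 1)

recs = ["i1", "i2", "i3", "i4"]       # system's recommendations
popular = ["i1", "i2"]                # what a popularity baseline would show
relevant = ["i2", "i3"]               # items the user actually found useful
print(serendipity(recs, popular, relevant))  # 0.25 (only i3 is serendipitous)
```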