Determining homogeneous domains statistically is helpful for engineering geological modeling and rock mass stability evaluation. In this study, a technique that can integrate lithological, geotechnical, and structural information is proposed to delineate homogeneous domains. This technique is then applied to a high and steep slope along a road. First, geological and geotechnical domains were described based on lithology, faults, and shear zones. Next, topological manifolds were used to eliminate the incompatibility between orientations and other parameters (i.e., trace length and roughness) so that the data concerning the various properties of each discontinuity can be matched and characterized in the same Euclidean space. Thus, the influence of the implicit combined effect between parameter sequences on the homogeneous domains could be considered. A deep learning technique was employed to quantify abstract features of the characterization images of discontinuity properties and to assess the similarity of rock mass structures. The results show that the technique can effectively distinguish structural variations and outperforms conventional methods. It can handle multisource engineering geological information and multiple discontinuity parameters. The technique can also minimize the interference of human factors and delineate homogeneous domains based on orientations or multiple parameters with arbitrary distributions to satisfy different engineering requirements.
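The similarity assessment above relies on a trained deep network, but the underlying idea, encoding each candidate domain's discontinuity parameters as a fixed-length vector and comparing vectors, can be illustrated without it. The sketch below is ours, not the authors' method: it represents each domain by a normalized histogram of one discontinuity parameter (the bin count and value range are assumptions) and scores structural similarity with cosine similarity.

```python
import numpy as np

def domain_signature(values, bins=12, value_range=(0.0, 360.0)):
    """Normalized histogram of one discontinuity parameter (e.g. dip direction)."""
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)

def structural_similarity(sig_a, sig_b):
    """Cosine similarity between two domain signatures (1.0 = identical structure)."""
    return float(sig_a @ sig_b / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))
```

Two domains whose discontinuity orientations occupy the same sectors score near 1, while domains with disjoint orientation sets score 0, which is the kind of contrast a homogeneous-domain delineation needs to detect.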
This study aimed to address the challenge of accurately and reliably detecting tomatoes in dense planting environments, a critical prerequisite for automated robotic harvesting. However, the heavy reliance on extensive manually annotated datasets for training deep learning models still poses significant limitations to their application in real-world agricultural production environments. To overcome these limitations, we employed a domain-adaptive learning approach combined with the YOLOv5 model to develop a novel tomato detection model called TDA-YOLO (tomato detection domain adaptation). We designated normal-illumination scenes in dense planting environments as the source domain and various other illumination scenes as the target domain. To construct a bridge between the source and target domains, neural preset for color style transfer was introduced to generate a pseudo-dataset, which served to reduce the domain discrepancy. Furthermore, this study combined a semi-supervised learning method to enable the model to extract domain-invariant features more fully, and used knowledge distillation to improve the model's ability to adapt to the target domain. Additionally, to increase inference speed and lower the computational demand, the lightweight FasterNet network was integrated into YOLOv5's C3 module, creating a modified C3_Faster module. The experimental results demonstrated that the proposed TDA-YOLO model significantly outperformed the original YOLOv5s model, achieving a mAP (mean average precision) of 96.80% for tomato detection across diverse scenarios in dense planting environments, an increase of 7.19 percentage points; compared with the latest YOLOv8 and YOLOv9, it was also 2.17 and 1.19 percentage points higher, respectively. The model's average detection time per image was 15 ms, with a computational cost of 13.8 GFLOPs (floating-point operations). After acceleration, the TDA-YOLO model achieved a detection accuracy of 90.95% and an mAP of 91.35% on the Jetson Xavier NX development board, with a detection time of 21 ms per image, which still meets the requirements of real-time tomato detection in dense planting environments. These results show that the proposed TDA-YOLO model can accurately and quickly detect tomatoes in dense planting environments while avoiding the need for large amounts of annotated data, providing technical support for the development of automatic harvesting systems for tomatoes and other fruits.
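The knowledge distillation step mentioned above transfers a teacher model's soft predictions to the student. The abstract does not give the exact loss, so the following is a generic, hedged sketch of the standard temperature-scaled distillation objective (function names and the temperature value are ours).

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T yields softer probabilities."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 to keep gradient magnitudes comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q))) * T * T
```

The loss is zero when the student reproduces the teacher's distribution and grows as the two diverge, which is what drives the student toward the teacher's target-domain behavior.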
Background: Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analysis. There is growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software developed based on deep learning can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training owing to challenges such as insufficient training data. This study attempts to solve the problems caused by small datasets and to improve model detection performance. Methods: We propose a breast cancer detection framework based on deep learning (a transfer learning method based on cross-organ cancer detection) and a contrastive learning method based on the breast imaging reporting and data system (BI-RADS). Results: When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by a maximum of 16.05%. Conclusion: Our experiments demonstrate that the parameters and experience of cross-organ cancer detection can be mutually referenced, and a contrastive learning method based on BI-RADS can improve the detection performance of the model.
Accurate displacement prediction is critical for the early warning of landslides. The complexity of the coupling relationship between multiple influencing factors and displacement makes accurate prediction difficult. Moreover, in engineering practice, insufficient monitoring data limit the performance of prediction models. To alleviate this problem, a displacement prediction method based on multisource domain transfer learning, which helps accurately predict data in the target domain through the knowledge of one or more source domains, is proposed. First, an optimized variational mode decomposition model based on the minimum sample entropy is used to decompose the cumulative displacement into trend, periodic, and stochastic components. The trend component is predicted by an autoregressive model, and the periodic component is predicted by a long short-term memory network. The stochastic component, because it is affected by uncertainties, is predicted by a combination of a Wasserstein generative adversarial network and multisource domain transfer learning for improved prediction accuracy. The proposed method was validated on a real mine slope as a case study. This study thus provides new insights that can be applied to scenarios lacking sample data.
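The decomposition above selects the variational mode decomposition setting that minimizes sample entropy. As a point of reference, here is a simplified sample-entropy sketch (not the authors' implementation; the tolerance factor and template handling are common textbook defaults): a regular series such as a periodic displacement component should score lower than an irregular, noise-like one.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Simplified sample entropy SampEn(m, r) of a 1-D series.
    Counts template matches of length m and m+1 under the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()  # tolerance as a fraction of the series std
    n = len(x)

    def count_matches(m):
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])  # embedded templates
        count = 0
        for i in range(len(emb)):
            d = np.max(np.abs(emb - emb[i]), axis=1)
            count += np.sum(d <= r) - 1  # exclude the self-match
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B)
```

Lower values indicate more regular, predictable series, which is why minimizing sample entropy is a reasonable criterion for picking a clean decomposition.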
Zero-shot learning enables the recognition of new-class samples by migrating models learned from semantic features and existing sample features to things that have never been seen before. Consistency between different types of features and the domain shift problem are two of the critical issues in zero-shot learning. To address both of these issues, this paper proposes a new modeling structure. The traditional approach maps semantic features and visual features into the same feature space; building on this, a dual-discriminator approach is used in the proposed model. This dual-discriminator approach can further enhance the consistency between semantic and visual features. At the same time, this approach can also align unseen-class semantic features and training-set samples, providing a portion of the information about the unseen classes. In addition, a new feature fusion method is proposed in the model. This method is equivalent to adding perturbation to the seen-class features, which can reduce the degree to which the classification results are biased towards the seen classes. At the same time, this feature fusion method can provide part of the information of the unseen classes, improving classification accuracy in generalized zero-shot learning and reducing domain bias. The proposed method is validated and compared with other methods on four datasets, and the experimental results show that it achieves promising results.
This article studies the effective traffic signal control problem of multiple intersections in a city-level traffic system. A novel regional multi-agent cooperative reinforcement learning algorithm called RegionSTLight is proposed to improve traffic efficiency. First, a regional multi-agent Q-learning framework is proposed, which can equivalently decompose the global Q value of the traffic system into the local values of several regions. Based on this framework and the idea of human-machine cooperation, a dynamic zoning method is designed to divide the traffic network into several strongly coupled regions according to real-time traffic flow densities. To achieve better cooperation inside each region, a lightweight spatio-temporal fusion feature extraction network is designed. Experiments in synthetic, real-world, and city-level scenarios show that the proposed RegionSTLight converges more quickly, is more stable, and obtains better asymptotic performance compared to state-of-the-art models.
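The core decomposition idea, a global Q value expressed as the sum of regional local values, can be sketched in tabular form. This is a minimal illustration under assumed names, not the RegionSTLight algorithm itself (which uses learned spatio-temporal features rather than tables).

```python
import numpy as np

def regional_q_step(q_tables, states, actions, rewards, next_states,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update per region; the global value is
    decomposed as the sum of the regional Q values."""
    for q, s, a, r, s2 in zip(q_tables, states, actions, rewards, next_states):
        q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
    # global Q of the joint (state, action) = sum of regional local values
    return sum(q[s, a] for q, s, a in zip(q_tables, states, actions))
```

Each region updates independently from its local state, reward, and next state, which is what makes the decomposition scale to city-level networks.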
The wear of metal cutting tools progressively increases as cutting time goes on. Heavy wear on the tool generates significant noise and vibration, negatively impacting the forming accuracy and the surface integrity of the workpiece. Hence, during the cutting process, it is imperative to continually monitor the tool wear state and promptly replace any heavily worn tools to guarantee the quality of the cut. Conventional tool wear monitoring models based on machine learning are built specifically for the intended cutting conditions; however, these models require retraining whenever the cutting conditions change, which leaves them of little practical value when conditions change frequently. This manuscript proposes a method for monitoring tool wear based on unsupervised deep transfer learning. Owing to the similarity of the tool wear process under varying working conditions, a tool wear recognition model that can adapt to both current and previous working conditions has been developed by utilizing historical cutting monitoring data. To extract and classify cutting vibration signals, the unsupervised deep transfer learning network comprises a one-dimensional (1D) convolutional neural network (CNN) with a multi-layer perceptron (MLP). To achieve distribution alignment of deep features through the maximum mean discrepancy algorithm, a domain-adaptive layer is embedded in the penultimate layer of the network. A platform for monitoring tool wear during end milling has been constructed. The proposed method was verified through a full life test of end milling under multiple working conditions with a Cr12MoV steel workpiece. Our experiments demonstrate that the transfer learning model maintains a classification accuracy of over 80%. In comparison with the most advanced tool wear monitoring methods, the presented model guarantees superior performance in the target domains.
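The maximum mean discrepancy (MMD) used for distribution alignment above is a standard kernel statistic. A minimal sketch with an RBF kernel (the bandwidth choice here is an assumption; the paper's exact kernel is not stated):

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between
    samples X and Y under an RBF kernel with bandwidth sigma."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

MMD is zero when the two samples come from the same distribution and grows with the distribution gap, so minimizing it at the domain-adaptive layer pulls source-condition and target-condition deep features together.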
Despite the big success of transfer learning techniques in anomaly detection, it is still challenging to achieve a good transfer of detection rules based merely on the preferred data in one-class anomaly detection, especially for data with a large distribution difference. To address this challenge, a novel deep one-class transfer learning algorithm with domain-adversarial training is proposed in this paper. First, by integrating a hypersphere adaptation constraint into a domain-adversarial neural network, a new hypersphere adversarial training mechanism is designed. Second, an alternative optimization method is derived to seek the optimal network parameters while pushing the hyperspheres built in the source domain and target domain to be as identical as possible. By transferring the one-class detection rule during the adaptive extraction of domain-invariant feature representations, end-to-end one-class anomaly detection is then enhanced. Furthermore, a theoretical analysis of the model's reliability, as well as a strategy for avoiding invalid and negative transfer, is provided. Experiments are conducted on two typical anomaly detection problems, i.e., image recognition detection and online early fault detection of rolling bearings. The results demonstrate that the proposed algorithm outperforms state-of-the-art methods in terms of detection accuracy and robustness.
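The one-class hypersphere rule itself is simple: enclose the normal data in a sphere and flag points that fall outside. The following is a hedged, non-adversarial sketch of that rule only (the paper learns the sphere jointly with domain-adversarial features; here the center and radius come from plain statistics of fixed features).

```python
import numpy as np

def fit_hypersphere(features, quantile=0.95):
    """Center = mean of normal-class features; radius = a distance quantile,
    so a small fraction of training points may fall outside."""
    c = features.mean(axis=0)
    d = np.linalg.norm(features - c, axis=1)
    return c, np.quantile(d, quantile)

def is_anomaly(x, center, radius):
    """One-class detection rule: anomalous iff outside the hypersphere."""
    return np.linalg.norm(x - center) > radius
```

The transfer problem the paper addresses is precisely that a sphere fitted on source-domain features need not enclose target-domain normals, hence the constraint pushing the two domains' hyperspheres to coincide.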
Most existing domain adaptation (DA) methods aim to achieve favorable performance under complicated environments by sampling. However, three unsolved problems limit their efficiency: (i) they adopt global sampling but neglect to exploit global and local sampling simultaneously; (ii) they transfer knowledge from either a global perspective or a local perspective, overlooking the transmission of confident knowledge from both; and (iii) they apply repeated sampling during iteration, which takes a lot of time. To address these problems, knowledge transfer learning via dual density sampling (KTL-DDS) is proposed in this study, which consists of three parts: (i) dual density sampling (DDS), which jointly leverages two sampling methods associated with different views, i.e., global density sampling that extracts representative samples with the most common features, and local density sampling that selects representative samples with critical boundary information; (ii) consistent maximum mean discrepancy (CMMD), which reduces intra- and cross-domain risks and guarantees high consistency of knowledge by shortening the distances between every two of the four subsets collected by DDS; and (iii) knowledge dissemination (KD), which transmits confident and consistent knowledge from the representative target samples with global and local properties to the whole target domain by preserving the neighboring relationships of the target domain. Mathematical analyses show that DDS avoids repeated sampling during iteration. With the above three actions, confident knowledge with both global and local properties is transferred, and the memory and running time are greatly reduced. In addition, a general framework named dual density sampling approximation (DDSA) is extended, which can be easily applied to other DA algorithms. Extensive experiments on five datasets in clean, label corruption (LC), feature missing (FM), and LC&FM environments demonstrate the encouraging performance of KTL-DDS.
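The global/local split in dual density sampling can be illustrated with a k-NN density estimate: "global" representatives sit where the density is highest (the most common features), while "local" representatives sit where it is lowest (boundary information). This sketch is an assumption-laden stand-in, not the paper's DDS procedure.

```python
import numpy as np

def density_sample(X, k=5, n_pick=10, mode="global"):
    """k-NN density estimate over X; 'global' picks the densest samples,
    'local' picks the sparsest (boundary-like) samples."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    # mean distance to the k nearest neighbors, skipping the zero self-distance
    density = 1.0 / (d[:, 1:k + 1].mean(axis=1) + 1e-12)
    order = np.argsort(-density if mode == "global" else density)
    return order[:n_pick]
```

On data with one tight cluster plus scattered points, the global mode selects from the cluster and the local mode selects the scattered boundary points, the two complementary views that KTL-DDS combines.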
The performance of state-of-the-art deep reinforcement learning algorithms such as Proximal Policy Optimization, Twin Delayed Deep Deterministic Policy Gradient, and Soft Actor-Critic for generating a quadruped walking gait in a virtual environment was presented in previous research work titled "A Comparison of PPO, TD3, and SAC Reinforcement Algorithms for Quadruped Walking Gait Generation". We demonstrated that the Soft Actor-Critic algorithm had the best performance generating the walking gait for a quadruped in certain sensor configurations in the virtual environment. In this work, we present the performance analysis of the same deep reinforcement learning algorithms for quadruped walking gait generation in a physical environment. The performance is determined in the physical environment by transfer learning augmented by real-time reinforcement learning for gait generation on a physical quadruped. The performance is analyzed on a quadruped equipped with a range of sensors: position tracking using a stereo camera, contact sensing of each robot leg through force-resistive sensors, and proprioceptive information of the robot body and legs using nine inertial measurement units. The performance comparison is presented using the metrics associated with the walking gait: average forward velocity (m/s), average forward velocity variance, average lateral velocity (m/s), average lateral velocity variance, and quaternion root mean square deviation. The strengths and weaknesses of each algorithm for the given task on the physical quadruped are discussed.
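The gait metrics listed above are all straightforward to compute from logged trajectories. A hedged sketch (input shapes and the simple Euclidean quaternion RMSD are our assumptions; the paper may define the orientation deviation differently):

```python
import numpy as np

def gait_metrics(positions, dt, quats, quat_ref):
    """positions: (T, 2) with x = forward, y = lateral; dt: sample period (s);
    quats: (T, 4) unit quaternions; quat_ref: reference orientation."""
    vel = np.diff(positions, axis=0) / dt
    fwd, lat = vel[:, 0], vel[:, 1]
    # Euclidean RMS deviation of the quaternion components from the reference
    q_rmsd = np.sqrt(((quats - quat_ref) ** 2).sum(axis=1).mean())
    return {"fwd_mean": fwd.mean(), "fwd_var": fwd.var(),
            "lat_mean": lat.mean(), "lat_var": lat.var(),
            "quat_rmsd": q_rmsd}
```

A steady gait shows high forward mean velocity with low variance, near-zero lateral drift, and small quaternion RMSD.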
Because failure data are usually scarce in practice under preventive maintenance strategies, transfer learning provides a fundamental solution for enhancing the generalization of data-driven methods in the prognostics and health management (PHM) domain. In this paper, we briefly discuss the general ideas and advances of various transfer learning techniques in the PHM domain, including domain adaptation, domain generalization, federated learning, and knowledge-driven transfer learning. Based on observations of the state of the art, we provide extensive discussions on the possible challenges and opportunities of transfer learning in the PHM domain to direct future development.
The majority of big data analytics applied to transportation datasets suffer from being too domain-specific; that is, they draw conclusions for a dataset based on analytics on the same dataset. This makes models trained on one domain (e.g., taxi data) apply poorly to a different domain (e.g., Uber data). To achieve accurate analyses on a new domain, substantial amounts of data must be available, which limits practical applications. To remedy this, we propose to use semi-supervised and active learning of big data to accomplish the domain adaptation task: selectively choosing a small number of datapoints from a new domain while achieving performance comparable to using all the datapoints. We choose the New York City (NYC) transportation data of taxi and Uber as our dataset, simulating different domains with 90% as the source data domain for training and the remaining 10% as the target data domain for evaluation. We propose semi-supervised and active learning strategies and apply them to the source domain for selecting datapoints. Experimental results show that our adaptation achieves performance comparable to using all datapoints while using only a fraction of them, substantially reducing the amount of data required. Our approach has two major advantages: it can make accurate analytics and predictions when big datasets are not available, and even when big datasets are available, it chooses the most informative datapoints out of the dataset, making the process much more efficient without having to process huge amounts of data.
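A common way to implement "choosing the most informative datapoints" in active learning is uncertainty sampling: rank candidates by the entropy of the current model's predicted class probabilities and label the most uncertain ones first. The abstract does not specify its selection criterion, so this is a generic sketch, not the authors' strategy.

```python
import numpy as np

def select_informative(probs, budget):
    """Pick the `budget` points whose predicted class distributions have the
    highest Shannon entropy (i.e., the model is least certain about them)."""
    p = np.clip(probs, 1e-12, 1.0)  # guard log(0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(-entropy)[:budget]
```

A point predicted at (0.5, 0.5) is selected before one predicted at (0.99, 0.01), which is exactly the behavior that lets a small labeled fraction stand in for the full dataset.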
Cross-project defect prediction (CPDP) aims to predict defects in a target project by using a prediction model built on source projects. The main problem in CPDP is the huge distribution gap between the source project and the target project, which prevents the prediction model from performing well; moreover, most existing methods overlook the class discrimination of the learned features, so seeking an effective transferable model from the source project to the target project is challenging. In this paper, we propose an unsupervised domain adaptation approach based on discriminative subspace learning (DSL) for CPDP. DSL treats the data from the two projects as coming from two domains and maps the data into a common feature space. It employs cross-domain alignment with discriminative information from different projects to reduce the distribution difference of the data between projects and incorporates class-discriminative information. Specifically, DSL first utilizes subspace-learning-based domain adaptation to reduce the distribution gap between projects. Then, it makes full use of the class label information of the source project and transfers the discrimination ability of the source project to the target project in the common space. Comprehensive experiments on five projects verify that DSL can build an effective prediction model and improve the performance over related competing methods by at least 7.10% and 11.08% in terms of G-measure and AUC, respectively.
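The subspace-alignment step, reducing the distribution gap before classification, has well-known simple instances. The sketch below is correlation alignment (CORAL-style re-coloring), a related but different technique from DSL, shown only to make the "align source features to target statistics" idea concrete.

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-6):
    """Whiten source features with the source covariance, then re-color them
    with the target covariance and shift to the target mean."""
    def cov(X):
        return np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])

    def mat_pow(C, p):
        # symmetric matrix power via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * np.maximum(w, eps) ** p) @ V.T

    Cs, Ct = cov(Xs), cov(Xt)
    return (Xs - Xs.mean(0)) @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5) + Xt.mean(0)
```

After alignment, the transformed source data share the target project's first- and second-order statistics, so a classifier trained on them transfers with a smaller distribution gap.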
Recent works have shown that neural networks are promising parameter-free limiters for a variety of numerical schemes (Morgan et al., A machine learning approach for detecting shocks with high-order hydrodynamic methods; J Comput Phys 367:166-191, 2018; Veiga et al., European Conference on Computational Mechanics and VII European Conference on Computational Fluid Dynamics, vol. 1, pp. 2525-2550, ECCM, 2018). Following this trend, we train a neural network to serve as a shock-indicator function using simulation data from a Runge-Kutta discontinuous Galerkin (RKDG) method and a modal high-order limiter (Krivodonova, J Comput Phys 226:879-896, 2007). With this methodology, we obtain one- and two-dimensional black-box shock indicators, which are then coupled to a standard limiter. Furthermore, we describe a strategy to transfer the shock indicator to a residual distribution (RD) scheme without the need for a full training cycle and a large dataset, by finding a mapping between the solution feature spaces of an RD scheme and an RKDG scheme, in both one- and two-dimensional problems and on Cartesian and unstructured meshes. We report on the quality of the numerical solutions when using the neural-network shock indicator coupled to a limiter, comparing its performance to traditional limiters for both RKDG and RD schemes.
A method for fast l-fold cross validation is proposed for the regularized extreme learning machine (RELM). The computational time of fast l-fold cross validation increases as the fold number decreases, which is the opposite of naive l-fold cross validation. Thus, fast l-fold cross validation has an advantage over the naive version in terms of computational time, especially for large fold numbers such as l > 20. To corroborate the efficacy and feasibility of fast l-fold cross validation, experiments on five benchmark regression datasets are evaluated.
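For context, the RELM output weights have a closed form, and naive l-fold cross validation refits that closed form once per fold, which is the baseline cost the fast method avoids. A hedged sketch of the naive baseline (the fast algorithm itself is not reproduced here):

```python
import numpy as np

def relm_fit(H, y, lam=1e-2):
    """Closed-form RELM output weights: beta = (H'H + lam*I)^{-1} H'y,
    where H is the hidden-layer output matrix."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ y)

def lfold_cv_mse(H, y, l=5, lam=1e-2):
    """Naive l-fold CV: one full refit per fold (l solves in total)."""
    folds = np.array_split(np.arange(len(y)), l)
    errs = []
    for fold in folds:
        mask = np.ones(len(y), bool)
        mask[fold] = False
        beta = relm_fit(H[mask], y[mask], lam)
        errs.append(np.mean((H[fold] @ beta - y[fold]) ** 2))
    return float(np.mean(errs))
```

Since each fold triggers a separate solve, naive CV gets cheaper as l decreases; the fast method inverts this trend, which is why it pays off precisely for large l.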
Population-based algorithms have been used in many real-world problems. The bat algorithm (BA) is one of the state-of-the-art approaches. Because of the super bat, BA can converge quickly on the one hand; on the other hand, it easily falls into local optima. Therefore, for typical BA algorithms, the exploration and exploitation abilities are not strong enough, and it is hard to find a precise result. In this paper, we propose a novel bat algorithm based on cross-boundary learning (CBL) and a uniform explosion strategy (UES), named BABLUE for short, to avoid the above contradiction and achieve both fast convergence and high quality. Different from previous opposition-based learning, the proposed CBL can expand the search area of the population and thus maintain the ability of global exploration during fast convergence. To enhance the local exploitation ability of the proposed algorithm, we propose UES, which can achieve almost the same search precision as the fireworks explosion algorithm while consuming fewer computational resources. BABLUE is tested with numerous experiments on unimodal, multimodal, one-dimensional, high-dimensional, and discrete problems, and then compared with other typical intelligent optimization algorithms. The results show that the proposed algorithm outperforms the others.
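CBL is contrasted above with classic opposition-based learning, which the following minimal sketch illustrates: each candidate is reflected through the midpoint of the search bounds, and the better of the pair is kept (CBL differs by expanding the search area beyond this reflection; that variant is not shown).

```python
import numpy as np

def opposition_point(x, lb, ub):
    """Classic opposition-based learning: reflect x through the midpoint of [lb, ub]."""
    return lb + ub - x

def opposed_population(pop, lb, ub, fitness):
    """Keep the better of each individual and its opposite (minimization)."""
    opp = lb + ub - pop
    keep = fitness(pop) <= fitness(opp)
    return np.where(keep[:, None], pop, opp)
```

On the sphere function, a bat stuck near one boundary is instantly teleported toward the opposite region if that region scores better, which is the exploration boost opposition-style operators provide.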
In recent years, spiking neural networks (SNNs) have received increasing research attention in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and abundant spatio-temporal information. However, the non-differentiable spike activity makes SNNs harder to train in a supervised manner. Most existing methods focus on introducing an approximate derivative to replace it, but they are often based on static surrogate functions. In this paper, we propose progressive surrogate gradient learning for the backpropagation of SNNs, which approximates the step function gradually and reduces information loss. Furthermore, memristor crossbar arrays are used to speed up calculation and reduce system energy consumption owing to their hardware advantages. The proposed algorithm is evaluated on both static and neuromorphic datasets using fully connected and convolutional network architectures, and the experimental results indicate that our approach achieves high performance compared with previous research.
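The surrogate-gradient idea is easy to state concretely: the forward pass uses the non-differentiable step (spike) function, while the backward pass substitutes a smooth approximation of its derivative. The sigmoid-derivative surrogate below is one common choice, not necessarily the paper's; in a progressive scheme, the sharpness k would be increased over training so the surrogate gradually approaches the true step.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: a neuron fires (1) when its membrane potential v
    reaches the threshold; the derivative is zero almost everywhere."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, k=5.0):
    """Backward pass: sigmoid-derivative surrogate used in place of the
    step's gradient; larger k sharpens it toward the true step."""
    s = 1.0 / (1.0 + np.exp(-k * (v - threshold)))
    return k * s * (1.0 - s)
```

The surrogate peaks at the firing threshold and decays away from it, so weight updates concentrate on neurons whose potentials sit near the decision boundary.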
Thermoelectric and thermal materials are essential in achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have led to the inefficient development of thermoelectric materials. In this study, we propose a two-stage machine learning framework with physical interpretability that incorporates domain knowledge to rapidly screen for high/low thermal conductivity. Specifically, a crystal graph convolutional neural network (CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on these physical parameters, an interpretable machine learning model, the sure independence screening and sparsifying operator (SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the Open Quantum Materials Database (OQMD) (https://www.oqmd.org/). The proposed approach guides the next step of searching for materials with ultra-high or ultra-low lattice thermal conductivity and promotes the development of new thermal insulation and thermoelectric materials.
基金the National Natural Science Foundation of China(Grant Nos.41941017 and U1702241).
文摘Determining homogeneous domains statistically is helpful for engineering geological modeling and rock mass stability evaluation.In this text,a technique that can integrate lithology,geotechnical and structural information is proposed to delineate homogeneous domains.This technique is then applied to a high and steep slope along a road.First,geological and geotechnical domains were described based on lithology,faults,and shear zones.Next,topological manifolds were used to eliminate the incompatibility between orientations and other parameters(i.e.trace length and roughness)so that the data concerning various properties of each discontinuity can be matched and characterized in the same Euclidean space.Thus,the influence of implicit combined effect in between parameter sequences on the homogeneous domains could be considered.Deep learning technique was employed to quantify abstract features of the characterization images of discontinuity properties,and to assess the similarity of rock mass structures.The results show that the technique can effectively distinguish structural variations and outperform conventional methods.It can handle multisource engineering geological information and multiple discontinuity parameters.This technique can also minimize the interference of human factors and delineate homogeneous domains based on orientations or multi-parameter with arbitrary distributions to satisfy different engineering requirements.
基金The National Natural Science Foundation of China (32371993)The Natural Science Research Key Project of Anhui Provincial University(2022AH040125&2023AH040135)The Key Research and Development Plan of Anhui Province (202204c06020022&2023n06020057)。
Abstract: This study aimed to address the challenge of accurately and reliably detecting tomatoes in dense planting environments, a critical prerequisite for automated robotic harvesting. However, the heavy reliance on extensive manually annotated datasets for training deep learning models still limits their application in real-world agricultural production. To overcome these limitations, we combined a domain-adaptive learning approach with the YOLOv5 model to develop a novel tomato detection model called TDA-YOLO (tomato detection domain adaptation). We designated normal-illumination scenes in dense planting environments as the source domain and various other illumination scenes as the target domain. To build a bridge between the source and target domains, a neural preset for color style transfer was introduced to generate a pseudo-dataset that mitigates the domain discrepancy. Furthermore, the study combines semi-supervised learning to help the model extract domain-invariant features more fully, and uses knowledge distillation to improve the model's ability to adapt to the target domain. Additionally, to increase inference speed and reduce computational demand, the lightweight FasterNet network was integrated into YOLOv5's C3 module, creating a modified C3_Faster module. The experimental results demonstrated that the proposed TDA-YOLO model significantly outperformed the original YOLOv5s model, achieving a mAP (mean average precision) of 96.80% for tomato detection across diverse scenarios in dense planting environments, an increase of 7.19 percentage points; compared with the more recent YOLOv8 and YOLOv9, it is also 2.17 and 1.19 percentage points higher, respectively. The model's average detection time was 15 ms per image, with 13.8 G FLOPs (floating-point operations). After acceleration, the TDA-YOLO model achieves 90.95% detection accuracy on the Jetson Xavier NX development board, a mAP of 91.35%, and a detection time of 21 ms per image, which still meets the requirements of real-time tomato detection in dense planting environments. The results show that the proposed TDA-YOLO model can detect tomatoes in dense planting environments accurately and quickly while avoiding the need for large amounts of annotated data, providing technical support for the development of automatic harvesting systems for tomatoes and other fruits.
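The knowledge distillation step described above can be illustrated with the standard temperature-scaled distillation loss (Hinton-style KD). This is a generic sketch, not the paper's exact formulation; the function names and temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation (illustrative)."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)

teacher = np.array([[2.0, 0.5, -1.0]])
# A student whose logits match the teacher incurs zero loss;
# a mismatched student incurs a positive loss.
print(distillation_loss(teacher, teacher))                        # 0.0
print(distillation_loss(np.array([[0.0, 2.0, 0.0]]), teacher) > 0)  # True
```

In the TDA-YOLO setting, the teacher would operate on source-domain (normal-illumination) inputs and the student on target-domain inputs, with this loss encouraging consistent predictions.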
Funding: Macao Polytechnic University Grants (RP/FCSD-01/2022, RP/FCA-05/2022); Science and Technology Development Fund of Macao (0105/2022/A).
Abstract: Background: Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analysis. There is growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software developed on deep learning can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training, owing to challenges such as insufficient training data. This study attempts to solve the problems caused by small datasets and to improve model detection performance. Methods: We propose a breast cancer detection framework based on deep learning, combining a transfer learning method based on cross-organ cancer detection with a contrastive learning method based on the Breast Imaging Reporting and Data System (BI-RADS). Results: When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by up to 16.05%. Conclusion: Our experiments demonstrate that the parameters and experience of cross-organ cancer detection can be mutually referenced, and that a BI-RADS-based contrastive learning method can improve the detection performance of the model.
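The contrastive learning idea above can be sketched with a generic InfoNCE-style loss: embeddings of images from the same category (here, hypothetically, the same BI-RADS class) are pulled together while others are pushed apart. This is an illustrative stand-in, not the paper's exact loss.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE contrastive loss on cosine similarities:
    pull the positive embedding toward the anchor, push negatives away."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    return float(-np.log(pos / (pos + neg)))

a = np.array([1.0, 0.0])                      # anchor embedding
same = np.array([0.9, 0.1])                   # e.g. same BI-RADS category
diff = [np.array([-1.0, 0.2]), np.array([0.0, 1.0])]
loss_good = info_nce(a, same, diff)           # positive is truly similar
loss_bad = info_nce(a, diff[0], [same, diff[1]])
print(loss_good < loss_bad)  # True: aligned pairs give lower loss
```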
Funding: supported by the National Natural Science Foundation of China (Grant No. 51674169); Department of Education of Hebei Province of China (Grant No. ZD2019140); Natural Science Foundation of Hebei Province of China (Grant No. F2019210243); S&T Program of Hebei (Grant No. 22375413D); School of Electrical and Electronics Engineering.
Abstract: Accurate displacement prediction is critical for the early warning of landslides. The complexity of the coupling between multiple influencing factors and displacement makes accurate prediction difficult. Moreover, in engineering practice, insufficient monitoring data limit the performance of prediction models. To alleviate this problem, a displacement prediction method based on multisource domain transfer learning, which helps accurately predict data in the target domain using the knowledge of one or more source domains, is proposed. First, an optimized variational mode decomposition model based on the minimum sample entropy is used to decompose the cumulative displacement into trend, periodic, and stochastic components. The trend component is predicted by an autoregressive model, and the periodic component by a long short-term memory network. The stochastic component, because it is affected by uncertainties, is predicted by a combination of a Wasserstein generative adversarial network and multisource domain transfer learning for improved accuracy. The proposed method was validated with a real mine slope as a case study. This study therefore provides new insights applicable to scenarios lacking sample data.
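The first step above, decomposing cumulative displacement into trend, periodic, and stochastic parts, can be sketched in a toy form. The paper uses sample-entropy-optimized VMD; the moving-average decomposition below is a much simpler stand-in, with the window length chosen arbitrarily for illustration.

```python
import numpy as np

def decompose(displacement, window=12):
    """Toy decomposition of a cumulative displacement series into
    trend, periodic, and stochastic components. A centered moving
    average stands in for the paper's optimized VMD."""
    n = len(displacement)
    pad = window // 2
    padded = np.pad(displacement, pad, mode="edge")
    trend = np.convolve(padded, np.ones(window) / window, "valid")[:n]
    residual = displacement - trend
    # crude periodic estimate: average residual at each phase of the cycle
    periodic = np.array([residual[i % window::window].mean() for i in range(n)])
    stochastic = residual - periodic
    return trend, periodic, stochastic

t = np.arange(120, dtype=float)
series = (0.5 * t + 3 * np.sin(2 * np.pi * t / 12)
          + np.random.default_rng(0).normal(0, 0.2, 120))
trend, periodic, stochastic = decompose(series)
print(np.allclose(trend + periodic + stochastic, series))  # True: components sum back
```

Each component would then be fed to its own predictor (autoregressive model, LSTM, and the WGAN-based transfer model, respectively).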
Abstract: Zero-shot learning enables the recognition of new class samples by transferring models learned from semantic features and existing sample features to things that have never been seen before. Consistency between different types of features and the domain shift problem are two of the critical issues in zero-shot learning. To address both of these issues, this paper proposes a new model structure. The traditional approach maps semantic features and visual features into the same feature space; building on this, a dual discriminator approach is used in the proposed model. This dual discriminator approach can further enhance the consistency between semantic and visual features. At the same time, it can align unseen-class semantic features with training set samples, providing partial information about the unseen classes. In addition, a new feature fusion method is proposed in the model. This method is equivalent to adding a perturbation to the seen-class features, which reduces the degree to which the model's classification results are biased towards the seen classes. At the same time, the feature fusion method provides partial information about the unseen classes, improving classification accuracy in generalized zero-shot learning and reducing domain bias. The proposed method is validated and compared with other methods on four datasets, and the experimental results show that it achieves promising results.
Funding: supported by the National Science and Technology Major Project (2021ZD0112702); the National Natural Science Foundation (NNSF) of China (62373100, 62233003); the Natural Science Foundation of Jiangsu Province of China (BK20202006).
Abstract: This article studies the effective traffic signal control problem of multiple intersections in a city-level traffic system. A novel regional multi-agent cooperative reinforcement learning algorithm called RegionSTLight is proposed to improve traffic efficiency. First, a regional multi-agent Q-learning framework is proposed, which can equivalently decompose the global Q value of the traffic system into the local values of several regions. Based on this framework and the idea of human-machine cooperation, a dynamic zoning method is designed to divide the traffic network into several strongly coupled regions according to real-time traffic flow densities. To achieve better cooperation inside each region, a lightweight spatio-temporal fusion feature extraction network is designed. Experiments in synthetic, real-world, and city-level scenarios show that the proposed RegionSTLight converges more quickly, is more stable, and obtains better asymptotic performance than state-of-the-art models.
Funding: the National Key Research and Development Program of China (No. 2020YFB1713500); the Natural Science Basic Research Program of Shaanxi (Grant No. 2023JCYB289); the National Natural Science Foundation of China (Grant No. 52175112); the Fundamental Research Funds for the Central Universities (Grant No. ZYTS23102).
Abstract: The wear of metal cutting tools progressively rises as cutting time goes on. Heavy tool wear generates significant noise and vibration, negatively impacting forming accuracy and the surface integrity of the workpiece. Hence, during the cutting process, it is imperative to continually monitor the tool wear state and promptly replace any heavily worn tools to guarantee cutting quality. Conventional tool wear monitoring models based on machine learning are built specifically for the intended cutting conditions; they require retraining whenever the cutting conditions change, and so have little application value when conditions change frequently. This manuscript proposes a method for monitoring tool wear based on unsupervised deep transfer learning. Exploiting the similarity of the tool wear process under varying working conditions, a tool wear recognition model that can adapt to both current and previous working conditions has been developed by utilizing historical cutting monitoring data. To extract and classify cutting vibration signals, the unsupervised deep transfer learning network comprises a one-dimensional (1D) convolutional neural network (CNN) with a multi-layer perceptron (MLP). To achieve distribution alignment of deep features through the maximum mean discrepancy algorithm, a domain-adaptive layer is embedded in the penultimate layer of the network. A platform for monitoring tool wear during end milling has been constructed. The proposed method was verified through a full life test of end milling under multiple working conditions with a Cr12MoV steel workpiece. Our experiments demonstrate that the transfer learning model maintains a classification accuracy of over 80%. In comparison with the most advanced tool wear monitoring methods, the presented model guarantees superior performance in the target domains.
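The maximum mean discrepancy (MMD) criterion used by the domain-adaptive layer above can be sketched directly. This is the standard RBF-kernel squared MMD between two feature batches; the kernel bandwidth and batch sizes are illustrative assumptions, and in the paper the statistic is minimized between deep features of different working conditions rather than on raw Gaussian data.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=0.1):
    """Squared maximum mean discrepancy between two feature batches
    with an RBF kernel, as used for distribution alignment in
    domain-adaptive layers (illustrative sketch)."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, (64, 8))       # features, working condition A
tgt_near = rng.normal(0.1, 1.0, (64, 8))  # similar condition
tgt_far = rng.normal(3.0, 1.0, (64, 8))   # very different condition
print(rbf_mmd2(src, tgt_near) < rbf_mmd2(src, tgt_far))  # True
```

During training, this quantity would be added to the classification loss so that the network learns features for which source and target distributions are indistinguishable.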
Funding: supported by the National Natural Science Foundation of China (NSFC) (U1704158); Henan Province Technologies Research and Development Project of China (212102210103); the NSFC Development Funding of Henan Normal University (2020PL09); the University of Manitoba Research Grants Program (URGP).
Abstract: Despite the great success of transfer learning techniques in anomaly detection, it is still challenging to achieve a good transition of detection rules based merely on the preferred data in one-class anomaly detection, especially for data with a large distribution difference. To address this challenge, a novel deep one-class transfer learning algorithm with domain-adversarial training is proposed in this paper. First, by integrating a hypersphere adaptation constraint into a domain-adversarial neural network, a new hypersphere adversarial training mechanism is designed. Second, an alternating optimization method is derived to seek the optimal network parameters while pushing the hyperspheres built in the source and target domains to be as identical as possible. By transferring the one-class detection rule in the adaptive extraction of domain-invariant feature representations, end-to-end one-class anomaly detection is then enhanced. Furthermore, a theoretical analysis of model reliability, as well as strategies for avoiding invalid and negative transfer, is provided. Experiments are conducted on two typical anomaly detection problems, i.e., image recognition detection and online early fault detection of rolling bearings. The results demonstrate that the proposed algorithm outperforms state-of-the-art methods in terms of detection accuracy and robustness.
Funding: supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B010166006); the National Natural Science Foundation of China (61972102); the Guangzhou Science and Technology Plan Project (023A04J1729); the Science and Technology Development Fund (FDCT), Macao SAR (015/2020/AMJ).
Abstract: Most existing domain adaptation (DA) methods aim to achieve favorable performance under complicated environments by sampling. However, three unsolved problems limit their efficiency: ⅰ) they adopt global sampling but neglect to exploit global and local sampling simultaneously; ⅱ) they transfer knowledge from either a global perspective or a local perspective, overlooking the transmission of confident knowledge from both; and ⅲ) they apply repeated sampling during iteration, which takes a lot of time. To address these problems, knowledge transfer learning via dual density sampling (KTL-DDS) is proposed in this study, which consists of three parts: ⅰ) dual density sampling (DDS), which jointly leverages two sampling methods associated with different views, i.e., global density sampling that extracts representative samples with the most common features and local density sampling that selects representative samples with critical boundary information; ⅱ) consistent maximum mean discrepancy (CMMD), which reduces intra- and cross-domain risks and guarantees high consistency of knowledge by shortening the distances between every two of the four subsets collected by DDS; and ⅲ) knowledge dissemination (KD), which transmits confident and consistent knowledge from the representative target samples with global and local properties to the whole target domain by preserving the neighboring relationships of the target domain. Mathematical analyses show that DDS avoids repeated sampling during iteration. With the above three actions, confident knowledge with both global and local properties is transferred, and memory and running time are greatly reduced. In addition, a general framework named dual density sampling approximation (DDSA) is derived, which can easily be applied to other DA algorithms. Extensive experiments on five datasets in clean, label corruption (LC), feature missing (FM), and LC&FM environments demonstrate the encouraging performance of KTL-DDS.
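The dual-view sampling idea above can be illustrated with a simple kNN-density heuristic: high-density points stand in for "global" representatives with common features, and low-density points approximate "local" boundary information. This is a deliberate simplification of the paper's DDS; the parameters and the density estimator are illustrative assumptions.

```python
import numpy as np

def dual_density_sample(X, k=5, n_global=3, n_local=3):
    """Illustrative dual sampling: rank points by kNN density, then
    return the indices of dense-core ('global') and sparse-rim
    ('local') representatives. A simplified sketch of DDS."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    knn = np.sort(d, axis=1)[:, 1:k + 1]       # skip the zero self-distance
    density = 1.0 / (knn.mean(axis=1) + 1e-12)
    order = np.argsort(-density)               # descending density
    return order[:n_global], order[-n_local:]

rng = np.random.default_rng(0)
core = rng.normal(0, 0.3, (40, 2))                       # dense cluster
rim = np.array([[5.0, 5.0], [-5.0, 5.0], [5.0, -5.0], [-5.0, -5.0], [6.0, 0.0]])
X = np.vstack([core, rim])
g, b = dual_density_sample(X)
print(all(i < 40 for i in g), all(i >= 40 for i in b))  # True True
```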
Abstract: The performance of state-of-the-art deep reinforcement learning algorithms such as Proximal Policy Optimization, Twin Delayed Deep Deterministic Policy Gradient, and Soft Actor-Critic for generating a quadruped walking gait in a virtual environment was presented in previous research work titled "A Comparison of PPO, TD3, and SAC Reinforcement Algorithms for Quadruped Walking Gait Generation". We demonstrated that the Soft Actor-Critic algorithm had the best performance generating the walking gait for a quadruped in certain sensor configurations in the virtual environment. In this work, we present a performance analysis of the same algorithms for quadruped walking gait generation in a physical environment. Performance is determined in the physical environment by transfer learning augmented by real-time reinforcement learning for gait generation on a physical quadruped. Performance is analyzed on a quadruped equipped with a range of sensors: position tracking using a stereo camera, contact sensing of each robot leg through force-resistive sensors, and proprioceptive information of the robot body and legs using nine inertial measurement units. The comparison is presented using metrics associated with the walking gait: average forward velocity (m/s), average forward velocity variance, average lateral velocity (m/s), average lateral velocity variance, and quaternion root mean square deviation. The strengths and weaknesses of each algorithm for the given task on the physical quadruped are discussed.
Abstract: As failure data are usually scarce in practice under preventive maintenance strategies, transfer learning provides a fundamental way to enhance the generalization of data-driven methods in the prognostics and health management (PHM) domain. In this paper, we briefly discuss the general ideas and advances of various transfer learning techniques in the PHM domain, including domain adaptation, domain generalization, federated learning, and knowledge-driven transfer learning. Based on observations from the state of the art, we provide extensive discussion of the possible challenges and opportunities of transfer learning in PHM to direct future development.
Abstract: Most big data analytics applied to transportation datasets suffer from being too domain-specific, that is, they draw conclusions for a dataset based on analytics performed on that same dataset. As a result, models trained on one domain (e.g., taxi data) transfer poorly to a different domain (e.g., Uber data). To achieve accurate analyses on a new domain, substantial amounts of data must be available, which limits practical applications. To remedy this, we propose to use semi-supervised and active learning on big data to accomplish the domain adaptation task: selectively choosing a small number of datapoints from a new domain while achieving performance comparable to using all the datapoints. We choose the New York City (NYC) transportation data of taxi and Uber as our dataset, simulating different domains with 90% as the source data domain for training and the remaining 10% as the target data domain for evaluation. We propose semi-supervised and active learning strategies and apply them to the source domain for selecting datapoints. Experimental results show that our adaptation achieves performance comparable to using all datapoints while using only a fraction of them, substantially reducing the amount of data required. Our approach has two major advantages: it can make accurate analytics and predictions when big datasets are not available, and even when they are, it chooses the most informative datapoints, making the process much more efficient without having to process huge amounts of data.
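The core of pool-based active learning, selectively choosing the most informative datapoints, can be sketched with uncertainty sampling: pick the points where the current model's predictive entropy is highest. This is a generic sketch under stated assumptions, not the paper's exact selection strategy.

```python
import numpy as np

def select_informative(probs, budget):
    """Pick the `budget` most uncertain datapoints (highest predictive
    entropy) to label from a new domain -- uncertainty sampling,
    a simple active learning strategy (illustrative)."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:budget]

# Model is confident on rows 0-1, uncertain on rows 2-3.
probs = np.array([
    [0.98, 0.02],
    [0.95, 0.05],
    [0.55, 0.45],
    [0.50, 0.50],
])
print(sorted(map(int, select_informative(probs, 2))))  # [2, 3]
```

Labeling only these selected points, rather than the whole pool, is what allows comparable accuracy with a fraction of the data.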
Funding: supported by the National Natural Science Foundation of China (61772286, 61802208, and 61876089); China Postdoctoral Science Foundation (Grant 2019M651923); Natural Science Foundation of Jiangsu Province of China (BK0191381).
Abstract: Cross-project defect prediction (CPDP) aims to predict defects in a target project using a prediction model built on source projects. The main problem in CPDP is the large distribution gap between the source and target projects, which prevents the prediction model from performing well. Most existing methods also overlook the class discrimination of the learned features, so seeking an effective transferable model from the source project to the target project remains challenging. In this paper, we propose an unsupervised domain adaptation approach based on discriminative subspace learning (DSL) for CPDP. DSL treats the data from the two projects as coming from two domains and maps them into a common feature space. It employs cross-domain alignment with discriminative information from different projects to reduce the distribution difference of the data between projects and incorporates class-discriminative information. Specifically, DSL first utilizes subspace-learning-based domain adaptation to reduce the distribution gap between projects. It then makes full use of the class label information of the source project and transfers the discrimination ability of the source project to the target project in the common space. Comprehensive experiments on five projects verify that DSL can build an effective prediction model and improve performance over related competing methods by at least 7.10% and 11.08% in terms of G-measure and AUC, respectively.
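The subspace-learning step above can be illustrated with the classic subspace alignment baseline: learn a low-dimensional PCA basis for each domain, then map source features through the aligned operator. This is a simpler relative of the paper's DSL (it ignores the discriminative label term); the subspace dimension and data are illustrative assumptions.

```python
import numpy as np

def subspace_align(Xs, Xt, d=2):
    """Classic subspace alignment: learn d-dim PCA bases Ps, Pt for
    source and target, then project the source through the aligned
    basis Ps @ Ps.T @ Pt so both domains live in the target subspace."""
    def pca_basis(X, d):
        Xc = X - X.mean(0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T                      # columns = principal directions
    Ps, Pt = pca_basis(Xs, d), pca_basis(Xt, d)
    M = Ps @ Ps.T @ Pt                       # alignment operator
    return (Xs - Xs.mean(0)) @ M, (Xt - Xt.mean(0)) @ Pt

rng = np.random.default_rng(2)
Xs = rng.normal(0, 1, (50, 5))                               # source project metrics
Xt = rng.normal(0, 1, (50, 5)) @ np.diag([3, 2, 1, 0.5, 0.1])  # shifted target
Zs, Zt = subspace_align(Xs, Xt)
print(Zs.shape, Zt.shape)  # (50, 2) (50, 2)
```

A defect classifier trained on `Zs` with source labels could then be applied to `Zt`, which is the CPDP workflow the abstract describes.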
Abstract: Recent works have shown that neural networks are promising parameter-free limiters for a variety of numerical schemes (Morgan et al., A machine learning approach for detecting shocks with high-order hydrodynamic methods, 2018; J Comput Phys 367:166-191, 2018; Veiga et al., European Conference on Computational Mechanics and VII European Conference on Computational Fluid Dynamics, vol. 1, pp. 2525-2550, ECCM, 2018). Following this trend, we train a neural network to serve as a shock-indicator function using simulation data from a Runge-Kutta discontinuous Galerkin (RKDG) method and a modal high-order limiter (Krivodonova, J Comput Phys 226:879-896, 2007). With this methodology, we obtain one- and two-dimensional black-box shock-indicators which are then coupled to a standard limiter. Furthermore, we describe a strategy to transfer the shock-indicator to a residual distribution (RD) scheme without the need for a full training cycle and a large dataset, by finding a mapping between the solution feature spaces of an RD scheme and an RKDG scheme, in both one- and two-dimensional problems, and on Cartesian and unstructured meshes. We report on the quality of the numerical solutions when using the neural network shock-indicator coupled to a limiter, comparing its performance to traditional limiters, for both RKDG and RD schemes.
Funding: supported by the National Natural Science Foundation of China (51006052); the NUST Outstanding Scholar Supporting Program.
Abstract: A method for fast l-fold cross validation is proposed for the regularized extreme learning machine (RELM). The computational time of fast l-fold cross validation increases as the fold number decreases, which is the opposite of naive l-fold cross validation. Fast l-fold cross validation therefore has an advantage over the naive version in computational time, especially for large fold numbers such as l > 20. To corroborate the efficacy and feasibility of fast l-fold cross validation, experiments on five benchmark regression datasets are evaluated.
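The baseline the abstract refers to, naive l-fold cross validation for an RELM, can be sketched as a random hidden layer followed by ridge-regularized output weights, refit once per fold. The paper's contribution is a fast reformulation of this procedure; the network size, regularization strength, and data below are illustrative assumptions.

```python
import numpy as np

def relm_cv_mse(X, y, n_hidden=20, lam=1e-2, l=5, seed=0):
    """Naive l-fold cross validation for a regularized ELM:
    fixed random hidden layer + ridge solve per training fold.
    (This is the baseline that fast l-fold CV accelerates.)"""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                     # hidden-layer activations
    folds = np.array_split(np.arange(len(y)), l)
    errs = []
    for f in folds:
        tr = np.setdiff1d(np.arange(len(y)), f)
        Ht, yt = H[tr], y[tr]
        beta = np.linalg.solve(Ht.T @ Ht + lam * np.eye(n_hidden), Ht.T @ yt)
        errs.append(np.mean((H[f] @ beta - y[f]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (100, 3))
y = np.sin(X.sum(axis=1))
print(relm_cv_mse(X, y) < np.var(y))  # True: beats predicting the mean
```

Note the hidden layer `H` is computed once and only the ridge solve is repeated per fold; the fast method exploits this structure much further.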
Funding: supported by the National Natural Science Foundation of China (61472289); the Open Project Program of the State Key Laboratory of Digital Manufacturing Equipment and Technology (DMETKF2017016).
Abstract: Population-based algorithms have been used in many real-world problems, and the bat algorithm (BA) is one of the state-of-the-art approaches. Because of the "super bat", BA can converge quickly on the one hand; on the other, it easily falls into local optima. Therefore, for typical BA algorithms, the exploration and exploitation abilities are not strong enough, and it is hard to find a precise result. In this paper, we propose a novel bat algorithm based on cross boundary learning (CBL) and a uniform explosion strategy (UES), BABLUE in short, to avoid the above contradiction and achieve both fast convergence and high quality. Different from previous opposition-based learning, the proposed CBL can expand the search area of the population and thus maintain the ability of global exploration during fast convergence. To enhance the local exploitation ability of the proposed algorithm, we propose UES, which achieves almost the same search precision as the firework explosion algorithm while consuming fewer computational resources. BABLUE is tested through numerous experiments on unimodal, multimodal, one-dimensional, high-dimensional, and discrete problems, and then compared with other typical intelligent optimization algorithms. The results show that the proposed algorithm outperforms the others.
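For context, the baseline BA that BABLUE extends can be sketched in its standard form: velocities pulled toward the best ("super") bat by a random frequency, plus a local random walk around the best solution, with loudness-gated acceptance. This is the textbook algorithm, not BABLUE itself (CBL and UES are omitted), and the parameter values are illustrative.

```python
import numpy as np

def bat_algorithm(f, dim=2, n=20, iters=200, seed=0):
    """Minimal standard bat algorithm minimizing f over [-5, 5]^dim.
    (The baseline that BABLUE augments with CBL and UES.)"""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    fit = np.apply_along_axis(f, 1, x)
    best = x[fit.argmin()].copy()
    A, r = 0.9, 0.5                               # loudness, pulse rate
    for _ in range(iters):
        freq = rng.uniform(0, 2, (n, 1))          # random frequency draw
        v += (x - best) * freq                    # pull toward the super bat
        cand = np.clip(x + v, -5, 5)
        walk = rng.random(n) > r                  # some bats do a local walk
        cand[walk] = np.clip(best + 0.1 * A * rng.normal(size=(int(walk.sum()), dim)),
                             -5, 5)
        cf = np.apply_along_axis(f, 1, cand)
        accept = (cf < fit) & (rng.random(n) < A)  # greedy, loudness-gated
        x[accept], fit[accept] = cand[accept], cf[accept]
        best = x[fit.argmin()].copy()
    return best, float(fit.min())

sphere = lambda z: float((z ** 2).sum())
_, val = bat_algorithm(sphere)
print(f"best value found: {val:.4f}")
```

The quick pull toward `best` is exactly the source of both the fast convergence and the premature-convergence risk the abstract discusses.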
Funding: supported by the Natural Science Foundation of Chongqing (Grant No. cstc2021jcyj-msxmX0565); the Fundamental Research Funds for the Central Universities (Grant No. SWU021002); the Graduate Research Innovation Project of Chongqing (Grant No. CYS22242).
Abstract: In recent years, spiking neural networks (SNNs) have received increasing research attention in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and rich spatio-temporal information. However, the non-differentiable spike activity makes SNNs difficult to train with supervision. Most existing methods focus on introducing an approximate derivative to replace it, but they are often based on static surrogate functions. In this paper, we propose progressive surrogate gradient learning for the backpropagation of SNNs, which approximates the step function gradually and reduces information loss. Furthermore, memristor crossbar arrays are used to speed up calculation and reduce system energy consumption, owing to their hardware advantages. The proposed algorithm is evaluated on both static and neuromorphic datasets using fully connected and convolutional network architectures, and the experimental results indicate that our approach performs well compared with previous research.
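The surrogate gradient idea above can be sketched concretely: the forward pass uses the non-differentiable Heaviside spike, while the backward pass substitutes a smooth surrogate whose sharpness parameter can be raised over training ("progressively" approaching the true step). The sigmoid-derivative surrogate and its schedule are illustrative assumptions, not the paper's exact functions.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, k=2.0):
    """Backward pass: sigmoid-derivative surrogate for the spike
    nonlinearity. Increasing k sharpens the surrogate toward the true
    step function; a progressive schedule raises k during training."""
    s = 1.0 / (1.0 + np.exp(-k * (v - threshold)))
    return k * s * (1.0 - s)

v = np.linspace(0, 2, 5)
print(spike(v))  # [0. 0. 1. 1. 1.]
# The surrogate concentrates more gradient mass at the threshold as k grows.
print(surrogate_grad(np.array([1.0]), k=8.0) > surrogate_grad(np.array([1.0]), k=2.0))
```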
Funding: support from the National Natural Science Foundation of China (Grant Nos. 12104356 and 52250191); China Postdoctoral Science Foundation (Grant No. 2022M712552); the Opening Project of the Shanghai Key Laboratory of Special Artificial Microstructure Materials and Technology (Grant No. Ammt2022B-1); the Fundamental Research Funds for the Central Universities; support from the HPC Platform, Xi'an Jiaotong University.
Abstract: Thermoelectric and thermal materials are essential for achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have made the development of thermoelectric materials inefficient. In this study, we propose a two-stage machine learning framework with physical interpretability that incorporates domain knowledge to identify high/low thermal conductivity rapidly. Specifically, a crystal graph convolutional neural network (CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on these physical parameters, an interpretable machine learning model, the sure independence screening and sparsifying operator (SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the Open Quantum Materials Database (OQMD) (https://www.oqmd.org/). The proposed approach guides the next step in the search for materials with ultrahigh or ultralow lattice thermal conductivity and promotes the development of new thermal insulation and thermoelectric materials.