Recent research in cross-domain intelligent fault diagnosis of machinery still has some problems, such as relatively ideal speed and sample conditions. In engineering practice, the rotational speed of the machine is often transient and time-varying, which makes sample annotation increasingly expensive. Meanwhile, the number of samples collected from different health states is often unbalanced. To deal with the above challenges, a complementary-label (CL) adversarial domain adaptation fault diagnosis network (CLADAN) is proposed for time-varying rotational speed and weakly supervised conditions. Under the weakly supervised learning condition, machine prior information is used for sample annotation via cost-friendly complementary-label learning. A diagnostic model learning strategy with discretized category probabilities is designed to avoid multi-peak distributions of the prediction results. In the adversarial training process, we developed a virtual adversarial regularization (VAR) strategy, which further enhances the robustness of the model by adding adversarial perturbations in the target domain. Comparative experiments on two case studies validated the superior performance of the proposed method.
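The CLADAN implementation is not given in the abstract; as a minimal sketch of the complementary-label idea it relies on, a loss can drive down the probability of the class a sample is known *not* to belong to. The function names and the exact loss form below are illustrative assumptions, not the paper's definition.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw scores.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def complementary_label_loss(logits, comp_label):
    # A complementary label names a class the sample does NOT belong to,
    # so the loss pushes that class's predicted probability toward zero.
    p = softmax(logits)
    return -math.log(max(1.0 - p[comp_label], 1e-12))
```

The loss is large when the model concentrates mass on the forbidden class and near zero otherwise, which is what makes such labels cheap but still informative.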
Intelligent diagnosis driven by big data for mechanical faults is an important means of ensuring the safe operation of equipment. Among these methods, deep learning-based machinery fault diagnosis approaches have received increasing attention and achieved some results. Using transfer learning alone may lead to insufficient performance, and domain bias may cause misclassification of target samples when building deep models to learn domain-invariant features. To address the above problems, a deep discriminative adversarial domain adaptation neural network for bearing fault diagnosis (DDADAN) is proposed. In this method, the raw vibration data are first converted into frequency-domain data by the Fast Fourier Transform, and an improved deep convolutional neural network with wide first-layer kernels is used as a feature extractor to extract deep fault features. Then, domain-invariant features are learned from the fault data with correlation alignment-based domain adversarial training. Furthermore, to enhance the discriminative property of the features, discriminative feature learning is embedded into the network to make the features compact within each class and separable between classes. Finally, the performance and anti-noise capability of the proposed method are evaluated on two bearing fault datasets. The results demonstrate that the proposed method is capable of handling domain offset caused by different working conditions and maintains more than 97.53% accuracy on various transfer tasks. Furthermore, the proposed method achieves high diagnostic accuracy under varying noise levels.
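The correlation alignment the abstract refers to is the standard CORAL idea: penalize the distance between source and target feature covariances. A dependency-free sketch (the paper's own network code is not reproduced here):

```python
def covariance(X):
    # Sample covariance of a list of feature rows.
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for row in X:
        c = [row[j] - mean[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += c[i] * c[j] / (n - 1)
    return cov

def coral_loss(Xs, Xt):
    # Squared Frobenius distance between source and target covariances,
    # with the usual 1/(4 d^2) normalization from the CORAL formulation.
    d = len(Xs[0])
    Cs, Ct = covariance(Xs), covariance(Xt)
    return sum((Cs[i][j] - Ct[i][j]) ** 2
               for i in range(d) for j in range(d)) / (4.0 * d * d)
```

In training, this term is added to the classification loss so the feature extractor aligns second-order statistics across domains.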
Domain adaptation (DA) aims to find a subspace where the discrepancies between the source and target domains are reduced. Based on this subspace, a classifier trained on the labeled source samples can classify unlabeled target samples well. Existing approaches leverage graph embedding learning to explore such a subspace. Unfortunately, due to 1) the interaction of consistency and specificity between samples, and 2) the joint impact of degenerated features and incorrect labels in the samples, existing approaches may assign unsuitable similarities, which restricts their performance. In this paper, we propose an approach called adaptive graph embedding with consistency and specificity (AGE-CS) to cope with these issues. AGE-CS consists of two methods: graph embedding with consistency and specificity (GECS) and adaptive graph embedding (AGE). GECS jointly learns the similarity of samples under geometric distance and semantic similarity metrics, while AGE adaptively adjusts the relative importance between the geometric distance and semantic similarity during the iterations. By AGE-CS, neighborhood samples with the same label are rewarded, while neighborhood samples with different labels are punished. As a result, compact structures are preserved, and advanced performance is achieved. Extensive experiments on five benchmark datasets demonstrate that the proposed method performs better than other graph embedding methods.
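The core mechanism described, blending a geometric similarity with a semantic (label-based) one under an adaptive trade-off, can be sketched as follows. The kernel choice, the -1/+1 reward-punish encoding, and the name `combined_similarity` are illustrative assumptions, not AGE-CS's exact formulation.

```python
import math

def combined_similarity(xi, xj, yi, yj, alpha):
    # Geometric term: Gaussian kernel on the squared Euclidean distance.
    geometric = math.exp(-sum((a - b) ** 2 for a, b in zip(xi, xj)))
    # Semantic term: reward same-label neighbours, punish different-label ones.
    semantic = 1.0 if yi == yj else -1.0
    # alpha is the adaptive trade-off, re-estimated across iterations in AGE.
    return alpha * geometric + (1.0 - alpha) * semantic
```

With any alpha strictly below 1, two equally distant neighbours receive different similarities depending on label agreement, which is the reward/punish behaviour the abstract describes.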
Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinicians' expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, a classification task based on images acquired with different hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the central learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
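The parameter-exchange step at the heart of federated learning can be illustrated with the classic size-weighted averaging rule (FedAvg-style); this sketch is an editorial illustration, not code from the review.

```python
def federated_average(client_params, client_sizes):
    # Size-weighted average of per-site parameter vectors; only these
    # updates cross site boundaries, never the patient data itself.
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(params[j] * n for params, n in zip(client_params, client_sizes)) / total
        for j in range(dim)
    ]
```

A central server would broadcast the averaged vector back to the sites, which train locally and submit new updates, iterating until convergence.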
Data fusion can effectively process multi-sensor information to obtain more accurate and reliable results than a single sensor. Water quality data in the environment come from different sensors, so the data must be fused. In our research, a self-adaptive weighted data fusion method is used to integrate the pH value, temperature, dissolved oxygen, and NH3 concentration measurements of the water quality environment. Based on the fusion, the Grubbs method is used to detect abnormal data so as to provide data support for estimation, prediction, and early warning of water quality.
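A common realization of self-adaptive weighted fusion is inverse-variance weighting, paired here with the Grubbs test statistic the abstract mentions. The exact weighting scheme of the paper is not specified, so treat this as a plausible sketch.

```python
import math

def adaptive_weighted_fusion(sensor_readings):
    # Each sensor is weighted by the inverse of its sample variance,
    # so noisier sensors contribute less to the fused value.
    means, inv_vars = [], []
    for readings in sensor_readings:
        n = len(readings)
        m = sum(readings) / n
        v = sum((x - m) ** 2 for x in readings) / (n - 1)
        means.append(m)
        inv_vars.append(1.0 / v)
    return sum(m * w for m, w in zip(means, inv_vars)) / sum(inv_vars)

def grubbs_statistic(data):
    # G = max |x - mean| / s; compared against a critical value (from
    # tables, depending on n and significance level) to flag an outlier.
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return max(abs(x - mean) for x in data) / s
```

A gross sensor fault inflates G well beyond the critical threshold, letting the abnormal reading be excluded before fusion.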
Most existing domain adaptation (DA) methods aim to explore favorable performance under complicated environments by sampling. However, three unsolved problems limit their efficiency: i) they adopt global sampling but neglect to exploit global and local sampling simultaneously; ii) they transfer knowledge from either a global or a local perspective, overlooking the transmission of confident knowledge from both; and iii) they apply repeated sampling during iteration, which takes a lot of time. To address these problems, knowledge transfer learning via dual density sampling (KTL-DDS) is proposed in this study, which consists of three parts: i) dual density sampling (DDS), which jointly leverages two sampling methods associated with different views, namely global density sampling, which extracts representative samples with the most common features, and local density sampling, which selects representative samples with critical boundary information; ii) consistent maximum mean discrepancy (CMMD), which reduces intra- and cross-domain risks and guarantees high consistency of knowledge by shortening the distances between every two of the four subsets collected by DDS; and iii) knowledge dissemination (KD), which transmits confident and consistent knowledge from the representative target samples with global and local properties to the whole target domain by preserving the neighboring relationships of the target domain. Mathematical analyses show that DDS avoids repeated sampling during the iterations. With the above three actions, confident knowledge with both global and local properties is transferred, and the memory and running time are greatly reduced. In addition, a general framework named dual density sampling approximation (DDSA) is extended, which can be easily applied to other DA algorithms. Extensive experiments on five datasets in clean, label corruption (LC), feature missing (FM), and LC&FM environments demonstrate the encouraging performance of KTL-DDS.
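The maximum mean discrepancy that CMMD builds on measures how far two samples drift apart in a kernel space. A minimal one-dimensional sketch with a Gaussian kernel (the paper's consistent variant over four subsets is not reproduced):

```python
import math

def _mean_kernel(a, b, sigma):
    # Average Gaussian kernel value over all cross pairs of the two samples.
    return sum(
        math.exp(-(x - y) ** 2 / (2.0 * sigma ** 2)) for x in a for y in b
    ) / (len(a) * len(b))

def mmd_squared(source, target, sigma=1.0):
    # Biased estimate of squared MMD: zero when the two samples coincide,
    # growing as the distributions drift apart.
    return (
        _mean_kernel(source, source, sigma)
        + _mean_kernel(target, target, sigma)
        - 2.0 * _mean_kernel(source, target, sigma)
    )
```

CMMD, as described, applies such a discrepancy pairwise among the four DDS subsets so that shrinking it enforces consistency between the global and local views.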
The state of health (SOH) is a critical factor in evaluating the performance of lithium-ion batteries (LIBs). Due to various end-user behaviors, LIBs exhibit different degradation modes, which makes it challenging to estimate their SOH in a personalized way. In this article, we present a novel particle swarm optimization-assisted deep domain adaptation (PSO-DDA) method to estimate the SOH of LIBs in a personalized manner, where a new domain adaptation strategy is put forward to reduce the cross-domain distribution discrepancy. The standard PSO algorithm is exploited to automatically adjust the chosen hyperparameters of the developed DDA-based method. The proposed PSO-DDA method is validated by extensive experiments on two LIB datasets with different battery chemistries, ambient temperatures, and charge-discharge configurations. Experimental results indicate that the proposed PSO-DDA method surpasses the convolutional neural network-based method and the standard DDA-based method. The PyTorch implementation of the proposed PSO-DDA method is available at https://github.com/mxt0607/PSO-DDA.
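The standard PSO loop the abstract refers to can be sketched in a few lines for a single hyperparameter; the coefficients (inertia 0.5, cognitive/social 1.5) and the toy objective are illustrative, not the paper's settings.

```python
import random

def pso_minimize(f, lo, hi, n_particles=12, iters=60, seed=0):
    # Minimal particle swarm: each particle is pulled toward its personal
    # best and the swarm-wide best position found so far.
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pos, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.5 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
            if f(pos[i]) < f(gbest):
                gbest = pos[i]
    return gbest

# Toy stand-in for a validation loss over one hyperparameter.
best = pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In PSO-DDA, `f` would be the validation performance of the DDA model as a function of its hyperparameters, evaluated by training or partially training the network.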
This paper examines the prediction of film ratings. First, in the data feature engineering, feature construction is performed based on the original features of the film dataset. Second, a clustering algorithm is utilized to remove singular film samples, and feature selection is carried out. Because the film samples of the target domain are unlabeled, a model cannot be trained on them directly, and the feature dimensions of film samples from the source domain are inconsistent with those of the target domain. Therefore, a domain-adaptive transfer learning model combined with dimensionality-reduction algorithms is adopted in this paper. At the same time, in order to reduce the prediction error of the models, a stacking ensemble learning model for regression is also used. Finally, comparative experiments verify the effectiveness of the proposed method, which proves better at predicting film ratings in the target domain.
Labeled data scarcity in a domain of interest is often a serious problem in machine learning. It is a consensus that leveraging labeled data from a semantically related yet covariate-shifted source domain can facilitate the domain of interest. To resolve the domain shift between domains and reduce learning ambiguity, unsupervised domain adaptation (UDA) greatly promotes the transferability of model parameters. However, the dilemma of over-fitting (negative transfer) and under-fitting (under-adaptation) remains an overlooked challenge and potential risk. In this paper, we rethink the shallow learning paradigm and this intractable over/under-fitting problem, and propose a safer UDA model, coined Bilateral Co-Transfer (BCT), which goes beyond the previous well-known unilateral transfer. With bilateral co-transfer between domains, the risk of over/under-fitting is largely reduced. Technically, the proposed BCT is a symmetrical structure, with joint distribution discrepancy (JDD) modeled for domain alignment and category discrimination. Specifically, a symmetrical bilateral transfer (SBT) loss between the source and target domains is proposed under the philosophy of mutual checks and balances. First, each target sample is represented by source samples under a low-rankness constraint in a common subspace, such that the most informative and transferable source data are used to alleviate negative transfer. Second, each source sample is symmetrically and sparsely represented by target samples, such that the most reliable target samples are exploited to tackle under-adaptation. Experiments on various benchmarks show that BCT outperforms many previous outstanding works.
Accurate multi-source fusion depends on the reliability, quantity, and fusion mode of the sources. The problem of selecting the optimal set of sources to participate in the fusion process is NP-hard and is neither sub-modular nor super-modular. Furthermore, for the Kalman filter (KF) fusion algorithm, accurate statistical characteristics of the noise are difficult to obtain, which leads to unsatisfactory fusion results. To settle these cases, a distributed and adaptive weighted fusion algorithm based on the KF is proposed in this paper. In this method, based on the pseudo prior probability of the estimated state of each source, the reliability of the sources is evaluated and the optimal set is selected by a certain threshold. Experiments were performed on multi-source pedestrian dead reckoning to verify the proposed algorithm. The results indicate that the optimal set can be selected accurately with minimal computation, and the fusion error is reduced by 16.6% compared with the corresponding value from the algorithm without the improvements. The proposed adaptive source reliability and fusion weight evaluation is effective for varied-noise multi-source fusion systems, and the fusion error caused by inaccurate statistical characteristics of the noise is reduced by the adaptive weight evaluation. The proposed algorithm exhibits good robustness, adaptability, and application value.
In machinery fault diagnosis, labeled data are often difficult or even impossible to obtain. Transfer learning can leverage related fault diagnosis knowledge from a fully labeled source domain to enhance fault diagnosis performance in a sparsely labeled or unlabeled target domain, and has been widely used for cross-domain fault diagnosis. However, existing methods focus on either marginal distribution adaptation (MDA) or conditional distribution adaptation (CDA). In practice, marginal and conditional distribution discrepancies both have significant but different influences on the domain divergence. In this paper, a dynamic distribution adaptation-based transfer network (DDATN) is proposed for cross-domain bearing fault diagnosis. DDATN utilizes the proposed instance-weighted dynamic maximum mean discrepancy (IDMMD) for dynamic distribution adaptation (DDA), which can dynamically estimate the influences of the marginal and conditional distributions and adapt the target domain to the source domain. Experimental evaluation on cross-domain bearing fault diagnosis demonstrates that DDATN outperforms state-of-the-art cross-domain fault diagnosis methods.
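The dynamic blending of marginal and conditional discrepancies can be sketched as a convex combination with an adaptive factor mu. The mu estimator below (relative magnitude of the two discrepancies) is a common heuristic in the dynamic distribution adaptation literature, assumed here for illustration; IDMMD's instance weighting is not reproduced.

```python
def dynamic_discrepancy(d_marginal, d_conditionals):
    # Estimate the adaptive factor mu from the relative magnitude of the
    # marginal and (class-averaged) conditional discrepancies, then blend.
    d_cond = sum(d_conditionals) / len(d_conditionals)
    mu = d_cond / (d_marginal + d_cond)
    return (1.0 - mu) * d_marginal + mu * d_cond, mu
```

When the conditional gap dominates, mu rises and the adaptation objective shifts its weight toward per-class alignment, which is the dynamic behaviour the abstract describes.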
Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the application of the affective brain-computer interface (BCI) in practice. We attempt to use multi-modal data from past sessions to realize emotion recognition with only a small number of calibration samples. To solve this problem, we propose a multi-modal domain adaptive variational autoencoder (MMDA-VAE) method, which learns shared cross-domain latent representations of the multi-modal data. Our method builds a multi-modal variational autoencoder (MVAE) to project the data of multiple modalities into a common space. Through adversarial learning and cycle-consistency regularization, our method can reduce the distribution difference of each domain on the shared latent representation layer and realize the transfer of knowledge. Extensive experiments are conducted on two public datasets, SEED and SEED-IV, and the results show the superiority of our proposed method. Our work can effectively improve the performance of emotion recognition with a small amount of labeled multi-modal data.
Multi-source domain adaptation (MSDA) utilizes multiple source domains to learn knowledge and transfer it to an unlabeled target domain. To address the problem, most existing methods aim to minimize the domain shift with auxiliary distribution alignment objectives, which reduces the effect of domain-specific features. However, without explicitly modeling the domain-specific features, it is hard to guarantee that the domain-invariant representation extracted from the input domains contains as little domain-specific information as possible. In this work, we present a different perspective on MSDA, which employs the idea of feature elimination to reduce the influence of domain-specific features. We design two different ways to extract domain-specific features and total features, and construct the domain-invariant representations by eliminating the domain-specific features from the total features. Experimental results on several domain adaptation datasets demonstrate the effectiveness of our method and the generalization ability of our model.
A fast image segmentation algorithm based on a salient features model and a spatial-frequency domain adaptive kernel is proposed to accurately discriminate objects in online visual detection scenes with variable sample morphology, low contrast, and complex background texture. First, by analyzing the spectral component distribution and the spatial contour features of the image, a salient feature model is established in the spatial-frequency domain. Then, a salient object detection method based on a Gaussian band-pass filter and a design criterion for the adaptive convolution kernel are proposed to extract the salient contour features of the target in the spatial and frequency domains. Finally, the selection and growth rules for seed points are improved by integrating the gray level and contour features of the target, and the target is segmented by seeded region growing. Experiments have been performed on the Berkeley Segmentation Data Set, as well as on sample images from online detection, to verify the effectiveness of the algorithm. The experimental results show that the Jaccard similarity coefficient of the segmentation exceeds 90%, which indicates that the proposed algorithm can effectively extract the target feature information, suppress background texture, and resist noise interference. Moreover, the Hausdorff distance of the segmentation is less than 10, which indicates that the proposed algorithm preserves the target contour well. The results also show that the proposed algorithm significantly improves operational efficiency while achieving segmentation performance comparable to other algorithms.
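The seeded region growing step at the end of the pipeline follows a standard pattern: starting from a seed pixel, absorb neighbours whose intensity is close to the seed's. This is a textbook sketch with a plain intensity criterion, not the paper's improved gray-level-plus-contour rule.

```python
def region_grow(image, seed, threshold):
    # Grow a region from the seed pixel, absorbing 4-connected neighbours
    # whose gray level stays within `threshold` of the seed value.
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, stack = set(), [seed]
    while stack:
        r, c = stack.pop()
        if (r, c) in region or not (0 <= r < h and 0 <= c < w):
            continue
        if abs(image[r][c] - seed_val) <= threshold:
            region.add((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return region
```

The paper's contribution lies in how seeds are chosen and how the growth criterion also uses contour features; the loop structure itself stays the same.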
Detection and recognition of a stairway as upstairs, downstairs, or negative (e.g., ladder, level ground) are fundamental to assisting the visually impaired in traveling independently through unfamiliar environments. Previous studies have focused on using massive amounts of RGB-D scene data to train traditional machine learning (ML)-based models to detect and recognize stationary stairways and escalator stairways separately. Nevertheless, none of them consider jointly training on these two similar but different datasets to achieve better performance. This paper applies an adversarial learning algorithm to this unsupervised domain adaptation scenario to transfer knowledge learned from a labeled RGB-D escalator stairway dataset to an unlabeled RGB-D stationary stairway dataset. With the developed method, a feedforward convolutional neural network (CNN)-based feature extractor with five convolution layers achieves 100% classification accuracy on the labeled escalator stairway data distribution and 80.6% classification accuracy on the unlabeled stationary data distribution. The success of the developed approach is demonstrated for classifying stairways across these two domains with a limited amount of data. To further demonstrate the effectiveness of the proposed method, the same CNN model is evaluated without domain adaptation and the results are compared with those of the presented architecture.
In a non-homogeneous environment, traditional space-time adaptive processing does not effectively suppress interference and detect targets, because the secondary data do not exactly reflect the statistical characteristics of the range cell under test. A novel methodology utilizing the direct data domain approach to space-time adaptive processing (STAP) in airborne radar non-homogeneous environments is presented. The deterministic least-squares adaptive signal processing technique operates on a snapshot-by-snapshot basis to determine the adaptive weights for nulling interference and estimating the signal of interest (SOI). Furthermore, this approach eliminates the requirement of estimating the covariance from the data of neighboring range cells, which avoids computing the inverse of the covariance matrix and can be implemented to operate in real time. Simulation results illustrate the efficiency of interference suppression in a non-homogeneous environment.
Previous studies have shown that Raman spectroscopy can be used in the encoding of suspension array technology. However, almost all existing convolutional neural network-based decoding approaches rely on supervision with ground truth and may not generalize well to unseen datasets collected under different experimental conditions with the same coding material. In this study, we propose an improved model based on CyCADA, named the Detail-constraint Cycle Domain Adaptive Model (DCDA). DCDA implements the classification of unseen datasets through domain adaptation, adapts representations at the encoder level with a shared decoder, and enforces coding features while leveraging a feature loss. To improve detailed structural constraints, DCDA adopts downsampling connections and skip connections. Our model improves the poor generalization of existing models and saves the cost of the labeling process for unseen target datasets. Compared with other models, extensive experiments and ablation studies show the superiority of DCDA in terms of classification stability and generalization. The proposed model achieves 100% classification accuracy on datasets in which the spectra in the source domain are far fewer than those in the target domain.
Thoracic Electrical Bioimpedance (TEB) helps to determine the stroke volume during cardiac arrest. While measuring the cardiac signal, it is contaminated with artifacts. The commonly encountered artifacts are baseline wander (BW) and muscle artifact (MA), which are physiological and non-stationary. As the nature of these artifacts is random, adaptive filtering is needed rather than conventional fixed-coefficient filtering techniques. To address this, a new block-based adaptive learning scheme is proposed to remove artifacts from TEB signals in clinical scenarios. The proposed block least mean square (BLMS) algorithm is mathematically normalized with respect to the data and the error. This normalization leads to the block normalized LMS (BNLMS) and block error normalized LMS (BENLMS) algorithms. Various adaptive artifact cancellers are developed in both the time and frequency domains and applied to real TEB quantities contaminated with physiological signals. The ability of these techniques is measured by calculating the signal-to-noise ratio improvement (SNRI), excess mean square error (EMSE), and misadjustment (Mad). Among the considered algorithms, the frequency-domain version of the BENLMS algorithm removes the physiological artifacts more effectively than its counterparts. Hence, this adaptive artifact canceller is suitable for real-time applications such as wearable, remote health care monitoring units.
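The normalization idea behind the proposed BNLMS variant can be seen in the sample-by-sample normalized LMS canceller below: the step size is divided by the reference signal's energy so adaptation stays stable across signal levels. This is a textbook NLMS sketch, not the paper's block-processing formulation; names and parameters are illustrative.

```python
import math

def nlms_cancel(contaminated, reference, order=4, mu=0.5, eps=1e-8):
    # Normalized LMS artifact canceller: predict the artifact from the
    # reference input, subtract it, and adapt the weights on the error.
    w = [0.0] * order
    errors = []
    for n in range(len(contaminated)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = contaminated[n] - y
        norm = eps + sum(xi * xi for xi in x)   # data normalization
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        errors.append(e)
    return errors

# Toy demo: the "contaminated" signal is a pure scaled artifact, so the
# canceller's residual should shrink as the weights converge.
ref = [math.sin(0.1 * n) for n in range(400)]
artifact_only = [0.8 * r for r in ref]
residual = nlms_cancel(artifact_only, ref)
```

The block variants in the paper update the weights once per block of samples rather than per sample, which lowers the update rate and enables efficient frequency-domain implementations.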
Cross-project defect prediction (CPDP) aims to predict defects in a target project by using a prediction model built on source projects. The main problem in CPDP is the huge distribution gap between the source project and the target project, which prevents the prediction model from performing well. Most existing methods overlook the class discrimination of the learned features, so seeking an effective transferable model from the source project to the target project for CPDP is challenging. In this paper, we propose an unsupervised domain adaptation approach based on discriminative subspace learning (DSL) for CPDP. DSL treats the data from the two projects as coming from two domains and maps them into a common feature space. It employs cross-domain alignment with discriminative information from different projects to reduce the distribution difference of the data between projects and incorporates class-discriminative information. Specifically, DSL first utilizes subspace learning-based domain adaptation to reduce the distribution gap between the projects. Then, it makes full use of the class label information of the source project and transfers the discrimination ability of the source project to the target project in the common space. Comprehensive experiments on five projects verify that DSL can build an effective prediction model and improves performance over the related competing methods by at least 7.10% and 11.08% in terms of G-measure and AUC.
The segmentation of unlabeled medical images is troublesome due to the high cost of annotation, and unsupervised domain adaptation is one solution. In this paper, an improved unsupervised domain adaptation method is proposed that considers both global alignment and category-wise alignment. First, we align the appearance of the two domains by image transformation. Second, we align the output maps of the two domains in a global way. Then, we decompose the semantic prediction map by category, aligning the prediction maps in a category-wise manner. Finally, we evaluate the proposed method on the 2017 Multi-Modality Whole Heart Segmentation Challenge dataset, obtaining 82.1 on the dice similarity coefficient and 4.6 on the average symmetric surface distance, demonstrating the effectiveness of combining global alignment with category-wise alignment.
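The dice similarity coefficient reported above (presumably as a percentage) is computed from the overlap of predicted and ground-truth masks; a minimal sketch on flattened binary masks:

```python
def dice_coefficient(pred, target):
    # Dice similarity between two binary masks, flattened to 0/1 lists:
    # 2|A ∩ B| / (|A| + |B|), equal to 1.0 for a perfect match.
    intersection = sum(p * t for p, t in zip(pred, target))
    return 2.0 * intersection / (sum(pred) + sum(target))
```

For multi-class segmentation such as whole-heart structures, the score is typically computed per class on the corresponding binary mask and then averaged.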
Funding: Shanxi Scholarship Council of China (2022-141); Fundamental Research Program of Shanxi Province (202203021211096).
Funding: Natural Science Foundation of Henan Province (232300420094); Science and Technology Research Project of Henan Province (222102220092).
Abstract: Intelligent diagnosis driven by big data for mechanical faults is an important means to ensure the safe operation of equipment. Among these methods, deep learning-based machinery fault diagnosis approaches have received increasing attention and achieved some results. However, using transfer learning alone may lead to insufficient performance, and domain bias may cause misclassification of target samples when building deep models to learn domain-invariant features. To address the above problems, a deep discriminative adversarial domain adaptation neural network for bearing fault diagnosis (DDADAN) is proposed. In this method, the raw vibration data are first converted into frequency-domain data by the Fast Fourier Transform, and an improved deep convolutional neural network with wide first-layer kernels is used as a feature extractor to extract deep fault features. Then, domain-invariant features are learned from the fault data with correlation alignment-based domain adversarial training. Furthermore, to enhance the discriminative property of the features, discriminative feature learning is embedded into the network to make the features compact within each class and separable between classes. Finally, the performance and anti-noise capability of the proposed method are evaluated on two bearing fault datasets. The results demonstrate that the proposed method can handle the domain shift caused by different working conditions, maintaining more than 97.53% accuracy on various transfer tasks, and can achieve high diagnostic accuracy under varying noise levels.
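The correlation-alignment component mentioned above is usually the CORAL loss of Sun and Saenko (2016): the squared Frobenius distance between the source and target feature covariances. A minimal sketch of that loss, assuming plain NumPy feature matrices (how DDADAN integrates it into adversarial training is not shown here):

```python
import numpy as np

def coral_loss(Xs, Xt):
    """CORAL: squared Frobenius distance between source and target
    feature covariances, normalized by 4*d^2 (Sun & Saenko, 2016)."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return np.sum((Cs - Ct) ** 2) / (4.0 * d * d)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 8))
Xt_same = rng.normal(size=(200, 8))         # same second-order statistics
Xt_shift = 3.0 * rng.normal(size=(200, 8))  # scaled covariance
print(coral_loss(Xs, Xt_same) < coral_loss(Xs, Xt_shift))  # → True
```

Minimizing this term pulls the second-order statistics of the two domains together, which is the "correlation alignment" part of the training objective.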
Funding: Supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B010166006); the National Natural Science Foundation of China (61972102); the Guangzhou Science and Technology Plan Project (023A04J1729); and the Science and Technology Development Fund (FDCT), Macao SAR (015/2020/AMJ).
Abstract: Domain adaptation (DA) aims to find a subspace where the discrepancies between the source and target domains are reduced. Based on this subspace, a classifier trained on the labeled source samples can classify unlabeled target samples well. Existing approaches leverage graph embedding learning to explore such a subspace. Unfortunately, due to 1) the interaction of consistency and specificity between samples, and 2) the joint impact of degenerated features and incorrect labels in the samples, the existing approaches might assign unsuitable similarity, which restricts their performance. In this paper, we propose an approach called adaptive graph embedding with consistency and specificity (AGE-CS) to cope with these issues. AGE-CS consists of two methods, i.e., graph embedding with consistency and specificity (GECS) and adaptive graph embedding (AGE). GECS jointly learns the similarity of samples under geometric distance and semantic similarity metrics, while AGE adaptively adjusts the relative importance of geometric distance and semantic similarity during the iterations. With AGE-CS, neighborhood samples with the same label are rewarded, while neighborhood samples with different labels are punished. As a result, compact structures are preserved and advanced performance is achieved. Extensive experiments on five benchmark datasets demonstrate that the proposed method performs better than other graph embedding methods.
Funding: This work was supported in part by the National Natural Science Foundation of China (82260360) and the Foreign Young Talent Program (QN2021033002L).
Abstract: Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and the requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinician's expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained on and applied across multiple domains, for example, a classification task based on images acquired on different acquisition hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the central learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
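The federated learning process described in point 3) can be sketched with a FedAvg-style loop: each site takes local gradient steps on its own data and only parameter vectors cross site boundaries, never the data. A toy NumPy sketch on a least-squares problem; the function names and the three-site setup are illustrative, not from the review:

```python
import numpy as np

def fedavg(global_w, site_data, local_step, rounds=200, lr=0.1):
    """FedAvg sketch: each round, every site updates a copy of the global
    model on its own data and sends back only the parameters; the server
    averages them, weighted by site size. Raw data never leaves a site."""
    sizes = np.array([len(X) for X, _ in site_data], dtype=float)
    for _ in range(rounds):
        local_ws = [local_step(global_w.copy(), X, y, lr) for X, y in site_data]
        global_w = np.average(local_ws, axis=0, weights=sizes)
    return global_w

def lsq_step(w, X, y, lr):
    """One local gradient step on a least-squares objective."""
    return w - lr * X.T @ (X @ w - y) / len(X)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
sites = []
for _ in range(3):                       # three "hospitals"; data never pooled
    X = rng.normal(size=(40, 2))
    sites.append((X, X @ true_w))
w = fedavg(np.zeros(2), sites, lsq_step)
print(np.allclose(w, true_w, atol=0.01))  # → True
```

The privacy property rests on the fact that only `local_ws` (parameter updates) are exchanged; real deployments add secure aggregation and differential privacy on top of this basic loop.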
Funding: This study was supported by the National Key Research and Development Project (2017YFD0301506); the National Social Science Foundation (71774052); and the Hunan Education Department Scientific Research Project (17K04417A092).
Abstract: Data fusion can effectively process multi-sensor information to obtain more accurate and reliable results than a single sensor. Environmental water-quality data come from different sensors, so the data must be fused. In our research, a self-adaptive weighted data fusion method is used to integrate the pH value, temperature, dissolved oxygen, and NH3 concentration data of the water-quality environment. Based on the fusion, the Grubbs method is used to detect abnormal data so as to provide data support for the estimation, prediction, and early warning of water quality.
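The two building blocks named above are standard: self-adaptive weighted fusion typically weights each sensor inversely to its estimated variance (the minimum-variance combination), and the Grubbs test flags outliers via the maximum studentized deviation. A minimal sketch under those standard definitions; the pH example and all function names are illustrative:

```python
import numpy as np

def adaptive_weighted_fusion(readings):
    """Self-adaptive weighted fusion: each sensor's weight is inversely
    proportional to its estimated variance, which minimizes the variance
    of the fused estimate (classic minimum-variance weighting)."""
    readings = np.asarray(readings, dtype=float)  # shape (n_sensors, n_samples)
    var = readings.var(axis=1, ddof=1)
    w = (1.0 / var) / np.sum(1.0 / var)
    return float(np.sum(w * readings.mean(axis=1))), w

def grubbs_statistic(x):
    """Grubbs test statistic G = max|x - mean| / std; compare against the
    critical value for the chosen significance level to flag an outlier."""
    x = np.asarray(x, dtype=float)
    return float(np.max(np.abs(x - x.mean())) / x.std(ddof=1))

rng = np.random.default_rng(1)
noisy = 7.0 + 0.5 * rng.normal(size=50)      # high-variance pH sensor
precise = 7.0 + 0.05 * rng.normal(size=50)   # low-variance pH sensor
fused, w = adaptive_weighted_fusion([noisy, precise])
print(w[1] > w[0])  # precise sensor dominates → True
```

In the paper's pipeline the Grubbs screening and the weighted fusion are applied per quantity (pH, temperature, dissolved oxygen, NH3); here only the per-quantity core is shown.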
Funding: Supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B010166006); the National Natural Science Foundation of China (61972102); the Guangzhou Science and Technology Plan Project (023A04J1729); and the Science and Technology Development Fund (FDCT), Macao SAR (015/2020/AMJ).
Abstract: Most existing domain adaptation (DA) methods aim to explore favorable performance under complicated environments by sampling. However, three unsolved problems limit their efficiency: i) they adopt global sampling but neglect to exploit global and local sampling simultaneously; ii) they transfer knowledge from either a global or a local perspective, overlooking the transmission of confident knowledge from both perspectives; and iii) they apply repeated sampling during iteration, which takes a lot of time. To address these problems, knowledge transfer learning via dual density sampling (KTL-DDS) is proposed in this study, which consists of three parts: i) dual density sampling (DDS), which jointly leverages two sampling methods associated with different views, i.e., global density sampling that extracts representative samples with the most common features and local density sampling that selects representative samples with critical boundary information; ii) consistent maximum mean discrepancy (CMMD), which reduces intra- and cross-domain risks and guarantees high consistency of knowledge by shortening the distances between every two of the four subsets collected by DDS; and iii) knowledge dissemination (KD), which transmits confident and consistent knowledge from the representative target samples with global and local properties to the whole target domain by preserving the neighboring relationships of the target domain. Mathematical analyses show that DDS avoids repeated sampling during the iteration. With the above three actions, confident knowledge with both global and local properties is transferred, and the memory and running time are greatly reduced. In addition, a general framework named dual density sampling approximation (DDSA) is extended, which can easily be applied to other DA algorithms. Extensive experiments on five datasets in clean, label-corruption (LC), feature-missing (FM), and LC&FM environments demonstrate the encouraging performance of KTL-DDS.
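CMMD above is built on the maximum mean discrepancy (MMD), the kernel two-sample distance that most DA objectives of this family minimize. For reference, a plain RBF-kernel MMD² estimator is sketched below (the paper's CMMD applies this kind of distance pairwise across the four DDS subsets; that wiring is not reproduced here, and the function names are illustrative):

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel
    k(a, b) = exp(-gamma * ||a - b||^2); the usual biased estimator."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
Y_same = rng.normal(size=(100, 4))            # same distribution as X
Y_far = rng.normal(loc=2.0, size=(100, 4))    # shifted distribution
print(mmd2(X, Y_same) < mmd2(X, Y_far))  # → True
```

A small MMD² between two sample sets indicates their distributions are close in the kernel's feature space, which is exactly the "consistency" that CMMD enforces among its subsets.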
Funding: Supported in part by the National Natural Science Foundation of China (92167201, 62273264, 61933007).
Abstract: The state of health (SOH) is a critical factor in evaluating the performance of lithium-ion batteries (LIBs). Due to various end-user behaviors, LIBs exhibit different degradation modes, which makes it challenging to estimate their SOHs in a personalized way. In this article, we present a novel particle swarm optimization-assisted deep domain adaptation (PSO-DDA) method to estimate the SOH of LIBs in a personalized manner, where a new domain adaptation strategy is put forward to reduce cross-domain distribution discrepancy. The standard PSO algorithm is exploited to automatically adjust the chosen hyperparameters of the developed DDA-based method. The proposed PSO-DDA method is validated by extensive experiments on two LIB datasets with different battery chemistry materials, ambient temperatures, and charge-discharge configurations. Experimental results indicate that the proposed PSO-DDA method surpasses both the convolutional neural network-based method and the standard DDA-based method. The PyTorch implementation of the proposed PSO-DDA method is available at https://github.com/mxt0607/PSO-DDA.
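The hyperparameter-tuning role of PSO here is the standard one: treat validation loss as a black-box function of the hyperparameters and let the swarm search it. A minimal 1-D PSO sketch with the textbook inertia/cognitive/social update; the quadratic stand-in objective and all names are illustrative, not the actual SOH validation loss:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm shares a global best, and velocities blend inertia,
    cognitive pull, and social pull (standard PSO update rule)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)
    v = np.zeros(n_particles)
    pbest, pbest_f = x.copy(), np.array([f(xi) for xi in x])
    g = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(xi) for xi in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()]
    return g

# stand-in for "validation loss as a function of one hyperparameter"
loss = lambda lr: (lr - 0.3) ** 2 + 0.1
best = pso_minimize(loss, bounds=(0.0, 1.0))
print(abs(best - 0.3) < 0.05)  # → True
```

In PSO-DDA the same loop would run over a vector of hyperparameters, with each evaluation of `f` being a (much more expensive) training-and-validation cycle of the DDA model.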
Funding: Supported by the Scientific Research Foundation of the Liaoning Provincial Department of Education (No. LJKZ0139).
Abstract: This paper examines the prediction of film ratings. First, in data feature engineering, feature construction is performed based on the original features of the film dataset. Second, a clustering algorithm is used to remove singular film samples, and feature selection is carried out. Because film samples in the target domain are unlabelled, a model cannot be trained on them directly, and the feature dimensions of film samples from the source domain are inconsistent with the target domain. Therefore, a domain-adaptive transfer learning model combined with dimensionality reduction algorithms is adopted in this paper. At the same time, in order to reduce the prediction error of the models, a stacking ensemble learning model for regression is also used. Finally, comparative experiments verify the effectiveness of the proposed method, which proves better at predicting film ratings in the target domain.
Funding: Supported by the National Key R&D Program of China (2021YFB3100800); the National Natural Science Foundation of China (62271090); the Chongqing Natural Science Fund (cstc2021jcyjjqX0023); and the Huawei computational power of the Chongqing Artificial Intelligence Innovation Center.
Abstract: Labeled-data scarcity in a domain of interest is often a serious problem in machine learning. Leveraging labeled data from a semantically related yet covariate-shifted source domain to facilitate the domain of interest is a consensus approach. To resolve the domain shift between domains and reduce learning ambiguity, unsupervised domain adaptation (UDA) greatly promotes the transferability of model parameters. However, the dilemma of over-fitting (negative transfer) and under-fitting (under-adaptation) is always an overlooked challenge and potential risk. In this paper, we rethink the shallow learning paradigm and this intractable over/under-fitting problem, and propose a safer UDA model, coined Bilateral Co-Transfer (BCT), which goes essentially beyond previous well-known unilateral transfer. With bilateral co-transfer between domains, the risk of over/under-fitting is largely reduced. Technically, the proposed BCT is a symmetrical structure, with joint distribution discrepancy (JDD) modeled for domain alignment and category discrimination. Specifically, a symmetrical bilateral transfer (SBT) loss between the source and target domains is proposed under the philosophy of mutual checks and balances. First, each target sample is represented by source samples under a low-rankness constraint in a common subspace, such that the most informative and transferable source data can be used to alleviate negative transfer. Second, each source sample is symmetrically and sparsely represented by target samples, such that the most reliable target samples can be exploited to tackle under-adaptation. Experiments on various benchmarks show that our BCT outperforms many previous outstanding works.
Abstract: Accurate multi-source fusion is based on the reliability, quantity, and fusion mode of the sources. The problem of selecting the optimal set to participate in the fusion process is nondeterministic-polynomial-time-hard and is neither sub-modular nor super-modular. Furthermore, in the case of the Kalman filter (KF) fusion algorithm, accurate statistical characteristics of the noise are difficult to obtain, which leads to an unsatisfactory fusion result. To settle these cases, a distributed and adaptive weighted fusion algorithm based on the KF is proposed in this paper. In this method, on the basis of the pseudo prior probability of the estimated state of each source, the reliability of the sources is evaluated and the optimal set is selected by a certain threshold. Experiments were performed on multi-source pedestrian dead reckoning to verify the proposed algorithm. The results indicate that the optimal set can be selected accurately with minimal computation, and the fusion error is reduced by 16.6% compared with the corresponding value from the algorithm without the improvements. The proposed adaptive source reliability and fusion weight evaluation is effective for varied-noise multi-source fusion systems, and the fusion error caused by inaccurate statistical characteristics of the noise is reduced by the adaptive weight evaluation. The proposed algorithm exhibits good robustness, adaptability, and application value.
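The weighted-fusion core of such distributed KF schemes is the minimum-variance combination of per-source state estimates: weight each source by the inverse of its error variance. A minimal scalar sketch (the paper's pseudo-prior-based reliability evaluation and threshold selection are not reproduced; names and numbers are illustrative):

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Variance-weighted fusion of per-source state estimates: weight each
    source by the inverse of its error variance. This is the
    minimum-variance combination used in distributed KF fusion."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = (1.0 / variances) / np.sum(1.0 / variances)
    fused = float(np.sum(w * estimates))
    fused_var = 1.0 / np.sum(1.0 / variances)  # never worse than the best source
    return fused, fused_var

# two precise sources and one noisy one; the noisy source barely moves the result
fused, var = fuse_estimates([10.2, 9.8, 10.0], [0.04, 0.04, 1.0])
print(round(fused, 2), var < 0.04)  # → 10.0 True
```

Note that the fused variance `1 / Σ(1/σᵢ²)` is smaller than every individual source variance, which is why adding even an unreliable source (with an honest variance estimate) cannot hurt; the hard part, as the abstract notes, is estimating those variances when the noise statistics are unknown.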
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51875208, 51475170) and the National Key Research and Development Program of China (Grant No. 2018YFB1702400).
Abstract: In machinery fault diagnosis, labeled data are always difficult or even impossible to obtain. Transfer learning can leverage related fault diagnosis knowledge from a fully labeled source domain to enhance fault diagnosis performance in a sparsely labeled or unlabeled target domain, and has been widely used for cross-domain fault diagnosis. However, existing methods focus on either marginal distribution adaptation (MDA) or conditional distribution adaptation (CDA). In practice, marginal and conditional distribution discrepancies both have significant but different influences on the domain divergence. In this paper, a dynamic distribution adaptation-based transfer network (DDATN) is proposed for cross-domain bearing fault diagnosis. DDATN utilizes the proposed instance-weighted dynamic maximum mean discrepancy (IDMMD) for dynamic distribution adaptation (DDA), which can dynamically estimate the influences of the marginal and conditional distributions and adapt the target domain to the source domain. Experimental evaluation on cross-domain bearing fault diagnosis demonstrates that DDATN outperforms state-of-the-art cross-domain fault diagnosis methods.
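The dynamic balancing of marginal versus conditional discrepancies can be sketched in the spirit of dynamic distribution adaptation: measure both discrepancies (marginal over all samples, conditional per class using target pseudo-labels) and weight them by a factor μ estimated from their relative magnitudes. This is a generic DDA sketch with a linear-kernel MMD, not the paper's IDMMD; all names are illustrative:

```python
import numpy as np

def linear_mmd2(X, Y):
    """Linear-kernel MMD^2: squared distance between domain feature means."""
    return float(np.sum((X.mean(axis=0) - Y.mean(axis=0)) ** 2))

def dynamic_adaptation_loss(Xs, ys, Xt, yt_pseudo, n_classes):
    """Weight the marginal term and the per-class conditional terms by a
    factor mu estimated from their relative magnitudes (in the spirit of
    dynamic distribution adaptation; target pseudo-labels stand in for
    the unknown target labels)."""
    d_m = linear_mmd2(Xs, Xt)                               # marginal discrepancy
    d_c = np.mean([linear_mmd2(Xs[ys == c], Xt[yt_pseudo == c])
                   for c in range(n_classes)])              # conditional discrepancy
    mu = d_m / (d_m + d_c + 1e-12)                          # dynamic balance factor
    return mu * d_m + (1.0 - mu) * d_c, mu

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 5)); ys = rng.integers(0, 2, 100)
Xt = rng.normal(loc=1.0, size=(100, 5)); yt = rng.integers(0, 2, 100)
loss, mu = dynamic_adaptation_loss(Xs, ys, Xt, yt, n_classes=2)
print(0.0 <= mu <= 1.0 and loss > 0)  # → True
```

The point of the dynamic factor is that μ is re-estimated during training rather than fixed in advance, so the objective shifts between marginal and conditional alignment as whichever discrepancy currently dominates.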
Funding: Supported in part by the National Natural Science Foundation of China (61976209, 62020106015, U21A20388); the CAS International Collaboration Key Project (173211KYSB20190024); and the Strategic Priority Research Program of CAS (XDB32040000).
Abstract: Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the application of the affective brain-computer interface (BCI) in practice. We attempt to use multi-modal data from past sessions to realize emotion recognition with only a small number of calibration samples. To solve this problem, we propose a multi-modal domain adaptive variational autoencoder (MMDA-VAE) method, which learns shared cross-domain latent representations of the multi-modal data. Our method builds a multi-modal variational autoencoder (MVAE) to project the data of multiple modalities into a common space. Through adversarial learning and cycle-consistency regularization, our method can reduce the distribution difference of each domain on the shared latent representation layer and realize knowledge transfer. Extensive experiments are conducted on two public datasets, SEED and SEED-IV, and the results show the superiority of our proposed method. Our work can effectively improve the performance of emotion recognition with a small amount of labelled multi-modal data.
基金supported by the National Natural Science Foundation of China(NSFC)(Grant Nos.61876130 and 61932009).
Abstract: Multi-source domain adaptation (MSDA) utilizes multiple source domains to learn knowledge and transfers it to an unlabeled target domain. To address the problem, most existing methods aim to minimize the domain shift with auxiliary distribution alignment objectives, which reduces the effect of domain-specific features. However, without explicitly modeling the domain-specific features, it is not easy to guarantee that the domain-invariant representation extracted from the input domains contains as little domain-specific information as possible. In this work, we present a different perspective on MSDA, which employs the idea of feature elimination to reduce the influence of domain-specific features. We design two different ways to extract the domain-specific features and the total features, and construct the domain-invariant representations by eliminating the domain-specific features from the total features. Experimental results on different domain adaptation datasets demonstrate the effectiveness of our method and the generalization ability of our model.
Funding: Supported by the National Natural Science Foundation of China [grant number 61573233]; the Natural Science Foundation of Guangdong, China [grant number 2021A1515010661]; Special Projects in Key Fields of Colleges and Universities in Guangdong Province [grant number 2020ZDZX2005]; and the Innovation Team Project of Universities in Guangdong Province [grant number 2015KCXTD018].
Abstract: A fast image segmentation algorithm based on a salient-features model and a spatial-frequency-domain adaptive kernel is proposed to solve the problem of accurately discriminating objects in online visual detection under scenes with variable sample morphological characteristics, low contrast, and complex background texture. First, by analyzing the spectral component distribution and the spatial contour features of the image, a salient-feature model is established in the spatial-frequency domain. Then, a salient-object detection method based on a Gaussian band-pass filter and a design criterion for the adaptive convolution kernel are proposed to extract the salient contour features of the target in the spatial and frequency domains. Finally, the selection and growth rules for seed points are improved by integrating the gray-level and contour features of the target, and the target is segmented by seeded region growing. Experiments have been performed on the Berkeley Segmentation Data Set, as well as on sample images from online detection, to verify the effectiveness of the algorithm. The experimental results show that the Jaccard similarity coefficient of the segmentation is more than 90%, which indicates that the proposed algorithm can effectively extract the target feature information, suppress the background texture, and resist noise interference. Besides, the Hausdorff distance of the segmentation is less than 10, which indicates that the proposed algorithm preserves the target contour well. The experimental results also show that the proposed algorithm significantly improves operational efficiency while obtaining segmentation performance comparable to that of other algorithms.
Abstract: Detection and recognition of a stairway as upstairs, downstairs, or negative (e.g., ladder, level ground) are fundamental to assisting the visually impaired to travel independently in unfamiliar environments. Previous studies have focused on using massive amounts of RGB-D scene data to train traditional machine learning (ML)-based models to detect and recognize stationary stairways and escalator stairways separately. Nevertheless, none of them considered jointly training on these two similar but different datasets to achieve better performance. This paper applies an adversarial learning algorithm to the indicated unsupervised domain adaptation scenario to transfer knowledge learned from the labeled RGB-D escalator stairway dataset to the unlabeled RGB-D stationary dataset. By utilizing the developed method, a feedforward convolutional neural network (CNN)-based feature extractor with five convolution layers can achieve 100% classification accuracy on the labeled escalator stairway test distribution and 80.6% classification accuracy on the unlabeled stationary test distribution. The success of the developed approach is demonstrated for classifying stairways in these two domains with a limited amount of data. To further demonstrate the effectiveness of the proposed method, the same CNN model is evaluated without domain adaptation and the results are compared with those of the presented architecture.
Abstract: In a non-homogeneous environment, traditional space-time adaptive processing does not effectively suppress interference and detect targets, because the secondary data do not exactly reflect the statistical characteristics of the range cell under test. A novel methodology utilizing the direct data domain approach to space-time adaptive processing (STAP) in airborne radar non-homogeneous environments is presented. The deterministic least-squares adaptive signal processing technique operates on a snapshot-by-snapshot basis to determine the adaptive weights for nulling interference and estimating the signal of interest (SOI). Furthermore, this approach eliminates the requirement to estimate the covariance from the data of neighboring range cells, which eliminates computing the inverse of the covariance matrix, and can be implemented to operate in real time. Simulation results illustrate the efficiency of the interference suppression in a non-homogeneous environment.
Funding: The authors gratefully acknowledge financial support from the National Natural Science Foundation of China under Grant 81871395.
Abstract: Previous studies have already shown that Raman spectroscopy can be used in the encoding of suspension array technology. However, almost all existing convolutional neural network-based decoding approaches rely on supervision with ground truth and may not generalize well to unseen datasets collected under different experimental conditions with the same coding material. In this study, we propose an improved model based on CyCADA, named the Detail-constraint Cycle Domain Adaptive Model (DCDA). DCDA implements the classification of unseen datasets through domain adaptation, adapts representations at the encoder level with decoder sharing, and enforces coding features while leveraging a feat loss. To improve detailed structural constraints, DCDA uses downsample connections and skip connections. Our model improves on the poor generalization of existing models and saves the cost of the labeling process for unseen target datasets. Compared with other models, extensive experiments and ablation studies show the superiority of DCDA in terms of classification stability and generalization. The proposed model achieves 100% classification accuracy when applied to datasets in which the spectra in the source domain are far fewer than in the target domain.
Abstract: Thoracic electrical bioimpedance (TEB) helps to determine the stroke volume during cardiac arrest. While measuring the cardiac signal, it is contaminated with artifacts. The commonly encountered artifacts are baseline wander (BW) and muscle artifact (MA), which are physiological and non-stationary. As the nature of these artifacts is random, adaptive filtering is needed rather than conventional fixed-coefficient filtering techniques. To address this, a new block-based adaptive learning scheme is proposed to remove artifacts from TEB signals in a clinical scenario. The proposed block least mean square (BLMS) algorithm is mathematically normalized with reference to the data and the error. This normalization leads to the block normalized LMS (BNLMS) and block error normalized LMS (BENLMS) algorithms. Various adaptive artifact cancellers are developed in both the time and frequency domains and applied to real TEB quantities contaminated with physiological signals. The ability of these techniques is measured by calculating the signal-to-noise ratio improvement (SNRI), excess mean square error (EMSE), and misadjustment (Mad). Among the considered algorithms, the frequency-domain version of the BENLMS algorithm removes the physiological artifacts more effectively than the other counterparts. Hence, this adaptive artifact canceller is suitable for real-time applications like wearable, remote health care monitoring units.
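The block LMS idea underlying all of the above variants is that the filter weights are updated once per block using the gradient accumulated over that block, rather than once per sample. A minimal time-domain sketch on a toy system-identification problem; this is the generic BLMS, not the normalized BNLMS/BENLMS variants, and all names are illustrative:

```python
import numpy as np

def block_lms(x, d, L=8, block=16, mu=0.01):
    """Block LMS adaptive filter: accumulate the instantaneous gradients
    e[n]*u[n] over each block, then apply a single weight update."""
    w = np.zeros(L)
    n_blocks = (len(x) - L) // block
    for b in range(n_blocks):
        grad = np.zeros(L)
        for n in range(L - 1 + b * block, L - 1 + (b + 1) * block):
            u = x[n - L + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-L+1]]
            e = d[n] - w @ u               # error against the desired signal
            grad += e * u                  # accumulate within the block
        w += mu * grad                     # one update per block
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=4000)                              # input signal
h = np.array([0.5, -0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0])  # unknown system
d = np.convolve(x, h)[:4000]                           # desired (reference) signal
w = block_lms(x, d)
print(np.allclose(w[:3], [0.5, -0.3, 0.2], atol=0.05))  # → True
```

In the artifact-cancellation setting, `d` would be the contaminated TEB signal and `x` a reference correlated with the artifact; the filter output then tracks the artifact so that the error `e` becomes the cleaned signal. The block structure is also what makes the efficient frequency-domain (FFT-based) implementations mentioned in the abstract possible.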
Funding: This paper was supported by the National Natural Science Foundation of China (61772286, 61802208, and 61876089); China Postdoctoral Science Foundation Grant 2019M651923; and the Natural Science Foundation of Jiangsu Province of China (BK0191381).
Abstract: Cross-project defect prediction (CPDP) aims to predict defects in a target project by using a prediction model built on source projects. The main problem in CPDP is the huge distribution gap between the source project and the target project, which prevents the prediction model from performing well. Most existing methods overlook the class discrimination of the learned features. Seeking an effective transferable model from the source project to the target project for CPDP is challenging. In this paper, we propose an unsupervised domain adaptation based on discriminative subspace learning (DSL) approach for CPDP. DSL treats the data from the two projects as being from two domains and maps the data into a common feature space. It employs cross-domain alignment with discriminative information from different projects to reduce the distribution difference of the data between projects and incorporates class-discriminative information. Specifically, DSL first utilizes subspace learning-based domain adaptation to reduce the distribution gap between projects. Then, it makes full use of the class label information of the source project and transfers the discrimination ability of the source project to the target project in the common space. Comprehensive experiments on five projects verify that DSL can build an effective prediction model and improve performance over the related competing methods by at least 7.10% and 11.08% in terms of G-measure and AUC.