Traditional feature-based image stitching techniques often encounter obstacles when dealing with images lacking unique attributes or suffering from quality degradation. The scarcity of annotated datasets in real-life scenes severely undermines the reliability of supervised learning methods in image stitching. Furthermore, existing deep learning architectures designed for image stitching are often too bulky to be deployed on mobile and peripheral computing devices. To address these challenges, this study proposes a novel unsupervised image stitching method based on the YOLOv8 (You Only Look Once version 8) framework that introduces deep homography networks and attention mechanisms. The methodology is partitioned into three distinct stages. The initial stage combines the attention mechanism with a pooling pyramid model to enhance the detection and recognition of compact objects in images; the deep homography network module estimates the global homography of the input images across multiple viewpoints. The second stage involves preliminary stitching of the masks generated in the initial stage and further enhancement through weighted computation to eliminate common stitching artifacts. The final stage is characterized by adaptive reconstruction and careful refinement of the initial stitching results. Comprehensive experiments across multiple datasets are executed to meticulously assess the proposed model. Our method's Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) improved by 10.6% and 6%, respectively. These experimental results confirm the efficacy and utility of the model presented in this paper.
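The reported gains are in PSNR and SSIM. As a reference for how these two metrics are computed (not code from the paper; the SSIM here is the simplified single-window form, whereas the standard metric averages local windows), a sketch:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # Peak Signal-to-Noise Ratio between a reference and a test image.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=255.0):
    # Simplified single-window SSIM: one set of global statistics instead of
    # the usual average over sliding local windows.
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give infinite PSNR and SSIM of 1; a uniform +1 offset on 8-bit data gives roughly 48 dB.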
Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within the time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
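The FFT-based 1D-to-2D conversion can be illustrated with a minimal sketch (an assumption about the general recipe, not CAFFN's actual code): estimate the dominant period from the amplitude spectrum, then fold the series into a period-by-period matrix suitable for 2D feature extraction.

```python
import numpy as np

def to_2d_by_dominant_period(x):
    # Find the strongest non-DC frequency bin, derive the corresponding
    # period, and reshape the 1D series into (num_periods, period).
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freq = np.argmax(spec[1:]) + 1          # skip the DC bin
    period = max(1, len(x) // freq)
    rows = len(x) // period
    return x[: rows * period].reshape(rows, period)

t = np.arange(400)
x = np.sin(2 * np.pi * t / 25)              # synthetic series, period 25
grid = to_2d_by_dominant_period(x)          # 16 rows of one period each
</```

Each row of the resulting matrix holds one period, so periodic structure lines up column-wise — which is what makes 2D convolutions over it meaningful.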
In Unsupervised Domain Adaptation (UDA) for person re-identification (re-ID), the primary challenge is reducing the distribution discrepancy between the source and target domains. This can be achieved by implicitly or explicitly constructing an appropriate intermediate domain to enhance recognition capability on the target domain. Implicit construction is difficult due to the absence of intermediate state supervision, making smooth knowledge transfer from the source to the target domain a challenge. To explicitly construct the most suitable intermediate domain for the model to gradually adapt to the feature distribution changes from the source to the target domain, we propose the Minimal Transfer Cost Framework (MTCF). MTCF considers all scenarios of the intermediate domain during the transfer process, ensuring smoother and more efficient domain alignment. Our framework mainly includes three modules: the Intermediate Domain Generator (IDG), the Cross-domain Feature Constraint Module (CFCM), and the Residual Channel Space Module (RCSM). First, the IDG module is introduced to generate all possible intermediate domains, ensuring a smooth transition of knowledge from the source to the target domain. To reduce the cross-domain feature distribution discrepancy, we propose the CFCM module, which quantifies the difficulty of knowledge transfer and ensures the diversity of intermediate domain features and their semantic relevance, achieving alignment between the source and target domains by incorporating mutual information and maximum mean discrepancy. We also design the RCSM, which utilizes an attention mechanism to enhance the model's focus on personnel features in low-resolution images, improving the accuracy and efficiency of person re-ID. Our proposed method outperforms existing technologies in all common UDA re-ID tasks and improves the Mean Average Precision (mAP) by 2.3% in the Market-to-Duke task compared to state-of-the-art (SOTA) methods.
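The maximum mean discrepancy used for source-target alignment in the CFCM module has a compact empirical estimator. A hedged sketch with an RBF kernel (the kernel choice and `gamma` value are assumptions, not taken from the paper):

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    # Squared Maximum Mean Discrepancy between two samples, using the
    # RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (64, 4))       # "source-domain" features
tgt_near = rng.normal(0.0, 1.0, (64, 4))  # same distribution
tgt_far = rng.normal(3.0, 1.0, (64, 4))   # shifted distribution
```

The estimator is zero for identical samples and grows with the distribution gap, which is exactly what makes it usable as an alignment loss.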
Image classification and unsupervised image segmentation can be achieved using the Gaussian mixture model. Although the Gaussian mixture model enhances the flexibility of image segmentation, it does not reflect spatial information and is sensitive to the segmentation parameter. In this study, we first present an efficient algorithm that incorporates spatial information into the Gaussian mixture model (GMM) without parameter estimation. The proposed model highlights the residual region with considerable information and constructs color saliency. Second, we incorporate the content-based color saliency as spatial information in the Gaussian mixture model. The segmentation is performed by clustering each pixel into an appropriate component according to the expectation-maximization and maximum criteria. Finally, the random color histogram assigns a unique color to each cluster and creates an attractive color by default for segmentation. A random color histogram serves as an effective tool for data visualization and is instrumental in the creation of generative art, facilitating both analytical and aesthetic objectives. For experiments, we used the Berkeley segmentation dataset BSDS-500 and the Microsoft Research Cambridge (MSRC) dataset. The proposed model showcases notable advancements in unsupervised image segmentation, with probabilistic rand index (PRI) values reaching 0.80, BDE scores as low as 12.25 and 12.02, compactness variations at 0.59 and 0.7, and variation of information (VI) reduced to 2.0 and 1.49 for the BSDS-500 and MSRC datasets, respectively, outperforming current leading-edge methods and yielding more precise segmentations.
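The "expectation-maximization and maximum" clustering rule can be sketched as a plain diagonal-covariance EM loop with hard assignment by maximum responsibility (a minimal stand-in on feature vectors, not the paper's saliency-weighted model):

```python
import numpy as np

def gmm_em(X, k, iters=60):
    # Diagonal-covariance Gaussian mixture fitted with EM; samples are then
    # hard-assigned to the component with maximum responsibility.
    n, d = X.shape
    # farthest-point initialisation of the component means
    mu = [X[0].astype(float)]
    for _ in range(1, k):
        d2 = np.min([((X - m) ** 2).sum(axis=1) for m in mu], axis=0)
        mu.append(X[np.argmax(d2)].astype(float))
    mu = np.array(mu)
    var = np.tile(X.var(axis=0) + 1e-6, (k, 1))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        log_p = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                         + np.log(2 * np.pi * var)).sum(-1) + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0) + 1e-12
        pi, mu = nk / n, (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return r.argmax(axis=1)
```

For pixel clustering, each row of `X` would be a pixel's color (optionally augmented with the saliency feature the paper adds as spatial information).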
Underwater image enhancement aims to restore a clean appearance and thus improve the quality of underwater degraded images. Current methods feed the whole image directly into the model for enhancement. However, they ignore that the R, G and B channels of underwater degraded images present varied degrees of degradation due to the selective absorption of light. To address this issue, we propose an unsupervised multi-expert learning model that considers the enhancement of each color channel. Specifically, an unsupervised architecture based on a generative adversarial network is employed to alleviate the need for paired underwater images. Based on this, we design a generator, including a multi-expert encoder, a feature fusion module and a feature fusion-guided decoder, to generate the clear underwater image. Accordingly, a multi-expert discriminator is proposed to verify the authenticity of the R, G and B channels, respectively. In addition, a content perceptual loss and an edge loss are introduced into the loss function to further improve the content and details of the enhanced images. Extensive experiments on public datasets demonstrate that our method achieves more pleasing results in visual quality, and various metrics (PSNR, SSIM, UIQM and UCIQE) evaluated on our enhanced images improve noticeably.
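The premise that the R, G and B channels degrade unevenly is easy to check numerically; the sketch below (illustrative only, with made-up attenuation values) compares per-channel means:

```python
import numpy as np

def channel_means(img):
    # Per-channel means of an RGB image. Underwater, selective absorption
    # attenuates red far more than green and blue, so comparing the three
    # means quantifies the uneven degradation the multi-expert model targets.
    return dict(zip("RGB", img.reshape(-1, 3).mean(axis=0)))

# toy "underwater" image: red strongly attenuated, blue least (assumed values)
img = np.zeros((4, 4, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.1, 0.5, 0.7
stats = channel_means(img)
```

A per-channel gap like this is what motivates giving each channel its own expert encoder and discriminator head.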
Feature selection (FS) is a process to select features which are more informative, and it is one of the important steps in knowledge discovery. The problem is that not all features are important: some may be redundant, and others may be irrelevant and noisy. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features which are related to the decision classes of the data under consideration. However, for many data mining applications, decision class labels are often unknown or incomplete, which indicates the significance of unsupervised feature selection, where no decision class labels are provided. In this paper, we propose a new unsupervised quick reduct (QR) algorithm using rough set theory. The quality of the reduced data is measured by classification performance, evaluated using the WEKA classifier tool. The method is compared with existing supervised methods, and the results demonstrate the efficiency of the proposed algorithm.
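The rough-set machinery behind quick reduct rests on the dependency degree γ_B(D). A sketch of the supervised form is below; the paper's unsupervised variant (an assumption on our part about its mechanics) applies the same measure with attributes taking turns in the decision role:

```python
from collections import defaultdict

def partition(rows, attrs):
    # Equivalence classes of the universe induced by an attribute subset:
    # objects with identical values on `attrs` fall in the same block.
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, attrs, decision):
    # Rough-set dependency degree gamma_B(D): fraction of objects whose
    # B-equivalence class is consistent on the decision attribute.
    pos = 0
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)
```

Quick reduct greedily adds the attribute that raises this degree most, stopping when it matches the dependency of the full attribute set.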
This paper presents a fuzzy logic approach to efficiently perform unsupervised character classification for improvement in robustness, correctness and speed of a character recognition system. The characters are first split into eight typographical categories. The classification scheme uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. The fuzzy unsupervised character classification, which is natural in the repre…
The performance of traditional vibration-based fault diagnosis methods greatly depends on hand-crafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to obtain diagnosis results. Our proposed approach is evaluated using two case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features and achieves better performance, with higher accuracy and stability, than the traditional approaches.
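The coarse-grained procedure for producing multiple scale signals is typically a non-overlapping moving average; a sketch under that assumption:

```python
import numpy as np

def coarse_grain(x, scale):
    # Non-overlapping moving average: each output point is the mean of
    # `scale` consecutive raw samples, giving a coarser-scale signal.
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def multiscale_signals(x, scales=(1, 2, 3)):
    # One coarse-grained copy of the raw vibration signal per scale; each
    # would then be fed to sparse filtering for feature learning.
    return [coarse_grain(np.asarray(x, float), s) for s in scales]

out = multiscale_signals([1, 2, 3, 4, 5, 6])
```

Scale 1 reproduces the raw signal; larger scales progressively smooth out fast temporal structure, which is what lets the downstream features complement each other.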
Traditional unsupervised seismic facies analysis techniques need to assume that seismic data obey a mixed Gaussian distribution. However, field seismic data may not meet this condition, leading to wrong classifications when this technology is applied. This paper introduces a spectral clustering technique for unsupervised seismic facies analysis. The algorithm clusters the data based on the idea of a graph: seismic data are regarded as points in space, points are connected by edges to construct a graph, and when the graph is partitioned, the weights of the edges between different subgraphs should be as low as possible, whereas the weights of the inner edges of each subgraph should be as high as possible. The spectral clustering algorithm, however, has high computational complexity and entails large memory consumption. To solve this problem, this paper introduces the idea of sparse representation into spectral clustering. Through the selection of a small number of local sparse representation points, the spectral clustering matrix of all sample points is approximately represented, reducing the cost of the spectral clustering operation. Verification on a physical model and field data shows that the proposed approach can obtain more accurate seismic facies classification results without requiring the data to meet any distributional hypothesis. The computational efficiency of this new method is better than that of the conventional spectral clustering method, thereby meeting the application needs of field seismic data.
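The graph view described here can be condensed into a minimal two-cluster spectral step (a toy sketch, without the paper's sparse-representation speedup): RBF edge weights, a normalized Laplacian, and a cut by the sign of the Fiedler vector.

```python
import numpy as np

def spectral_bipartition(X, gamma=0.5):
    # Samples are graph nodes, RBF values are edge weights; the sign of the
    # Fiedler vector (second eigenvector of the normalized Laplacian) gives
    # a low-weight cut between two subgraphs.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)
    D = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(D, D))
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)  # sign of the Fiedler vector
```

Extending this to k facies uses the first k eigenvectors plus a simple clustering of the embedded rows; the O(n^2) affinity matrix is exactly the cost the paper's sparse representation attacks.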
Color quantization is bound to lose spatial information of the color distribution. If too much necessary spatial color distribution information is lost in JSEG, it is difficult or even impossible for JSEG to segment an image correctly. Inspired by segmentation based on fuzzy theories, a soft class-map is constructed to solve that problem. The definitions of J values and other related quantities are adjusted according to the soft class-map. With the more detailed J values obtained from the soft class-map, more color distribution information is preserved. Experiments on a synthetic image and many other color images illustrate that JSEG with a soft class-map can efficiently handle regions where color varies gradually in a smooth transition. It is a more robust method, especially for images whose underlying region boundaries have not been heavily blurred.
To make the quantitative results of nuclear magnetic resonance (NMR) transverse relaxation (T2) spectra reflect the type and pore structure of a reservoir more directly, an unsupervised clustering method was developed to obtain quantitative pore structure information from the NMR T2 spectra based on the Gaussian mixture model (GMM). First, we conducted principal component analysis on the T2 spectra to reduce the data dimension and the dependence among the original variables. Second, the dimension-reduced data were fitted using the GMM probability density function, and the model parameters and optimal clustering numbers were obtained according to the expectation-maximization algorithm and the change of the Akaike information criterion. Finally, the T2 spectrum features and pore structure types of the different clustering groups were analyzed and compared with the T2 geometric mean and the T2 arithmetic mean. The effectiveness of the algorithm has been verified with numerical simulation and field NMR logging data. The research shows that the clustering results based on the GMM method correlate well with the shape and distribution of the T2 spectrum, pore structure, and petroleum productivity, providing a new means for quantitative identification of pore structure, reservoir grading, and oil and gas productivity evaluation.
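The first step, PCA on the T2 spectra, can be sketched via an SVD of the mean-centred data (generic PCA, not the paper's exact pipeline); the scores would then be fitted by the GMM, with the cluster count chosen by the Akaike information criterion.

```python
import numpy as np

def pca_scores(X, n_components):
    # PCA via SVD of the mean-centred data: returns the low-dimensional
    # scores fed to the GMM, plus the explained-variance ratios.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, (S ** 2 / (S ** 2).sum())[:n_components]

# synthetic "spectra": one dominant direction of variation plus noise
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = t @ np.array([[3.0, 1.0, 0.5]]) + rng.normal(0, 0.05, (200, 3))
scores, ratio = pca_scores(X, 2)
```

Because nearby T2 bins are strongly correlated, a handful of components typically captures nearly all the spectral variance, which is what makes the subsequent GMM fit stable.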
In this letter, a new method is proposed for unsupervised classification of terrain types and man-made objects using POLarimetric Synthetic Aperture Radar (POLSAR) data. This technique combines the polarimetric information of SAR images with an unsupervised classification method based on fuzzy set theory. Image quantization and image enhancement are used to preprocess the POLSAR data. Then the polarimetric information and the Fuzzy C-Means (FCM) clustering algorithm are used to classify the preprocessed images. The advantages of this algorithm are automated classification, high classification accuracy, fast convergence and high stability. The effectiveness of this algorithm is demonstrated by experiments using SIR-C/X-SAR (Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar) data.
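A minimal Fuzzy C-Means loop (generic FCM with farthest-point initialisation, not the letter's preprocessing-aware variant) alternates membership and centre updates:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=80):
    # Fuzzy C-Means: memberships in [0, 1] (rows sum to 1) and centres are
    # updated alternately; hard labels come from the largest membership.
    centres = [X[0].astype(float)]
    for _ in range(1, c):   # farthest-point initialisation
        d2 = np.min([((X - p) ** 2).sum(axis=1) for p in centres], axis=0)
        centres.append(X[np.argmax(d2)].astype(float))
    centres = np.array(centres)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))     # membership update
        U /= U.sum(axis=1, keepdims=True)
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]  # centre update
    return U.argmax(axis=1)
```

The fuzzifier `m` controls how soft the memberships are; `m = 2` is the conventional default, and `m → 1` recovers hard k-means behaviour.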
As a generative model, the Latent Dirichlet Allocation (LDA) model focuses on how to generate data and lacks optimization of the topics' discrimination capability. This paper aims to improve the discrimination capability through unsupervised feature selection. Theoretical analysis shows that the discrimination capability of a topic is limited by the discrimination capability of its representative words. The discrimination capability of a word is approximated by the information gain of the word for topics, which is used to distinguish between "general words" and "special words" in LDA topics. Therefore, we add a constraint to the LDA objective function to let the "general words" appear only in "general topics" rather than "special topics". A heuristic algorithm is then presented to obtain the solution. Experiments show that this method can not only improve the information gain of topics, but also make the topics easier for humans to understand.
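The information gain of a word for topics, used to separate "general" from "special" words, can be computed as H(topic) minus the expected entropy after splitting on word presence. A toy sketch (document-level presence is our simplifying assumption):

```python
import math

def information_gain(doc_topics, doc_words, word):
    # IG(word) = H(T) - sum over {present, absent} of p * H(T | split):
    # a "special" word concentrates in one topic (high IG), a "general"
    # word appears everywhere (IG near zero).
    def entropy(labels):
        n = len(labels)
        if n == 0:
            return 0.0
        return -sum((labels.count(t) / n) * math.log2(labels.count(t) / n)
                    for t in set(labels))
    has = [t for t, ws in zip(doc_topics, doc_words) if word in ws]
    not_ = [t for t, ws in zip(doc_topics, doc_words) if word not in ws]
    n = len(doc_topics)
    return (entropy(doc_topics)
            - (len(has) / n) * entropy(has)
            - (len(not_) / n) * entropy(not_))

topics = ["sports", "sports", "politics", "politics"]    # topic per document
words = [{"match", "game"}, {"match", "game"},
         {"vote", "game"}, {"vote", "game"}]             # words per document
```

Here "match" perfectly predicts the topic while "game" occurs in every document, so their gains sit at the two extremes.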
Anomaly detection in high-dimensional data is a critical research issue with serious implications for real-world problems. Many issues in this field remain unsolved, so several modern anomaly detection methods struggle to maintain adequate accuracy due to the highly descriptive nature of big data. Such a phenomenon is referred to as the "curse of dimensionality", which affects traditional techniques in terms of both accuracy and performance. Thus, this research proposes a hybrid model based on a Deep Autoencoder Neural Network (DANN) with five layers to reduce the difference between the input and output. The proposed model was applied to a real-world gas turbine (GT) dataset that contains 87620 columns and 56 rows. During the experiments, two issues were investigated and solved to enhance the results. The first is the dataset class imbalance, which was solved using the SMOTE technique. The second is poor performance, which can be addressed with an optimization algorithm. Several optimization algorithms were investigated and tested, including stochastic gradient descent (SGD), RMSprop, Adam and Adamax; the Adamax optimization algorithm showed the best results when employed to train the DANN model. The experimental results show that our proposed model can detect anomalies by efficiently reducing the high dimensionality of the dataset, with an accuracy of 99.40%, an F1-score of 0.9649, an Area Under the Curve (AUC) of 0.9649, and a minimal loss function during training of the hybrid model.
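The reconstruction-error idea behind the DANN detector can be sketched with a linear stand-in: project onto a few principal components and back, and score each sample by its reconstruction error (the real model replaces this projection with nonlinear encoder/decoder layers):

```python
import numpy as np

def reconstruction_scores(X, n_components=2):
    # Linear stand-in for an autoencoder: "encode" by projecting onto the
    # top principal components, "decode" by projecting back; the per-sample
    # input-output difference is the anomaly score.
    mu = X.mean(axis=0)
    Xc = X - mu
    Vt = np.linalg.svd(Xc, full_matrices=False)[2][:n_components]
    recon = (Xc @ Vt.T) @ Vt + mu
    return np.linalg.norm(X - recon, axis=1)

# normal data lives on a 2-D plane inside 5-D; the outlier leaves the plane
rng = np.random.default_rng(0)
normal = np.zeros((100, 5))
normal[:, :2] = rng.normal(size=(100, 2))
outlier = np.array([[0.0, 0.0, 5.0, 0.0, 0.0]])
scores = reconstruction_scores(np.vstack([normal, outlier]))
```

Normal samples reconstruct almost perfectly while the off-manifold sample cannot, so a simple threshold on the score flags it.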
An implementation of adaptive filtering, composed of an unsupervised adaptive filter (UAF), a multi-step forward linear predictor (FLP), and an unsupervised multi-step adaptive predictor (UMAP), is built for suppressing impulsive noise in unknown circumstances. This filtering scheme, called the unsupervised robust adaptive filter (URAF), possesses a switching structure, which ensures robustness against impulsive noise. The FLP is used to detect possible impulsive noise added to the signal; if the signal is "impulse-free", the UAF can estimate the clean signal. If impulsive noise exists, the impulse-corrupted samples are replaced by predicted ones from the FLP, and the UMAP then estimates the clean signal. Both simulation and experimental results show that the URAF has a better rate of convergence than the most recent universal filter, and is effective in restricting large disturbances such as impulsive noise when the universal filter fails.
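The detect-and-replace switching can be caricatured in a few lines: predict each sample from its recent past (here a simple mean predictor, a deliberate simplification of a trained forward linear predictor) and replace samples whose residual is implausibly large:

```python
import numpy as np

def suppress_impulses(x, order=4, thresh=5.0):
    # FLP-style impulse screening: predict each sample from the previous
    # `order` samples and substitute the prediction when the residual
    # exceeds `thresh` times a robust (median-based) noise scale.
    y = np.asarray(x, float).copy()
    resid = np.abs(np.diff(y, prepend=y[0]))
    sigma = np.median(resid) + 1e-9          # robust scale estimate
    for i in range(order, len(y)):
        pred = y[i - order:i].mean()
        if abs(y[i] - pred) > thresh * sigma:
            y[i] = pred                      # replace the corrupted sample
    return y

x = np.linspace(0.0, 1.0, 100)
x[50] += 10.0                                # one impulsive outlier
y = suppress_impulses(x)
```

The median-based scale is what keeps the detector itself robust: a single impulse barely moves the median, so the threshold stays calibrated to the clean signal.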
Considering the sparsity of hyperspectral images (HSIs), dictionary learning frameworks have been widely used in the field of unsupervised spectral unmixing. However, existing unmixing methods based on dictionary learning are short of robustness in noisy contexts. To improve the performance, this study puts forward a new unsupervised spectral unmixing solution. Because the solution functions only under the condition that both the endmembers and the abundances meet non-negativity constraints, a model is built to solve the unsupervised spectral unmixing problem on the basis of the dictionary learning method. To raise the screening accuracy of endmembers, a new form of the target function is introduced into the dictionary learning practice, which is conducive to increasing robustness to noisy HSI statistics. Then, by introducing total variation (TV) terms into the proposed spectral unmixing based on robust nonnegative dictionary learning (RNDLSU), the context information in HSI space is cited as prior knowledge to compute the abundances when performing sparse unmixing operations. According to the final experimental results, this method performs favorably under varying noise conditions, especially under low signal-to-noise conditions.
Cross-project defect prediction (CPDP) aims to predict defects in a target project by using a prediction model built on source projects. The main problem in CPDP is the huge distribution gap between the source project and the target project, which prevents the prediction model from performing well. Most existing methods overlook the class discrimination of the learned features, so seeking an effective transferable model from the source project to the target project for CPDP is challenging. In this paper, we propose an unsupervised domain adaptation approach for CPDP based on discriminative subspace learning (DSL). DSL treats the data from the two projects as coming from two domains and maps them into a common feature space. It employs cross-domain alignment with discriminative information from different projects to reduce the distribution difference of the data between projects and incorporates class discriminative information. Specifically, DSL first utilizes subspace-learning-based domain adaptation to reduce the distribution gap between the data of different projects. Then, it makes full use of the class label information of the source project and transfers the discrimination ability of the source project to the target project in the common space. Comprehensive experiments on five projects verify that DSL can build an effective prediction model and improve the performance over the related competing methods by at least 7.10% and 11.08% in terms of G-measure and AUC, respectively.
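Subspace-learning-based domain adaptation in its simplest form is subspace alignment: PCA bases for each project, with the source basis mapped into the target subspace. This generic sketch (not the paper's DSL, which adds discriminative terms) shows the shape of the computation:

```python
import numpy as np

def subspace_alignment(Xs, Xt, d):
    # Learn d-dimensional PCA bases Ps, Pt for source and target features,
    # align the source basis via Ps @ (Ps.T @ Pt), and project both domains
    # into comparable d-dimensional coordinates.
    def pca_basis(X):
        Xc = X - X.mean(axis=0)
        return np.linalg.svd(Xc, full_matrices=False)[2][:d].T  # (features, d)
    Ps, Pt = pca_basis(Xs), pca_basis(Xt)
    Zs = (Xs - Xs.mean(axis=0)) @ (Ps @ (Ps.T @ Pt))  # aligned source
    Zt = (Xt - Xt.mean(axis=0)) @ Pt                  # target coordinates
    return Zs, Zt

rng = np.random.default_rng(0)
Xs = rng.normal(size=(40, 6))            # "source project" metric vectors
Xt = rng.normal(size=(50, 6)) + 2.0      # shifted "target project"
Zs, Zt = subspace_alignment(Xs, Xt, 3)
```

A classifier trained on `Zs` with the source labels can then be applied to `Zt`; DSL's contribution is shaping this common space so the defect/non-defect classes stay separated.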
In this paper, the IHSL transform and the Fuzzy C-Means (FCM) segmentation algorithm are combined to perform unsupervised classification of fully polarimetric Synthetic Aperture Radar (SAR) data. We apply the IHSL colour transform to the H/α/SPAN space to obtain a new space (an RGB colour space) which has uniform distinguishability among its inner parameters and contains the whole polarimetric information of H/α/SPAN. The FCM algorithm is then applied to this RGB space to finish the classification procedure. The main advantages of this method are that the parameters in the colour space have similar interclass distinguishability, so it can achieve high performance in pixel-based segmentation, and that, since the parameters can be treated in the same way, the segmentation procedure is simplified. The experiments show that it provides an improved classification result compared with the method that uses the H/α/SPAN space directly during the segmentation procedure.
Image segmentation denotes a process for partitioning an image into distinct regions and plays an important role in interpretation and decision making. A large variety of segmentation methods has been developed; among them, multidimensional histogram methods have been investigated, but their implementation remains difficult due to the large size of the histograms. We present an original method for segmenting n-D images (where n is the number of components in the image), or multidimensional images, in an unsupervised way using a fuzzy neighbourhood model. It is based on the hierarchical analysis of full n-D compact histograms integrating a fuzzy connected-components labelling algorithm that we have realized in this work. Each peak of the histogram constitutes a class kernel as soon as it encloses a number of pixels greater than or equal to a secondary arbitrary threshold, knowing that a first threshold was set to define the degree of binary fuzzy similarity between pixels. The use of a lossless compact n-D histogram allows a drastic reduction of the memory space necessary for coding it. As a consequence, the segmentation can be achieved without reducing the colour population of images in the classification step. It is shown that using n-D compact histograms, instead of 1-D and 2-D ones, leads to better segmentation results. Various images were segmented; evaluation of the segmentation quality, in both supervised and unsupervised settings, shows that the proposed segmentation method gives better results than the k-means classification method. This highlights the relevance of our approach, which can be used for solving many segmentation problems.
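A lossless compact n-D histogram is essentially a hash map over occupied colour cells, which is what makes full-colour-resolution analysis affordable; a sketch including the secondary peak threshold (the fuzzy connected-components labelling over peaks is omitted):

```python
from collections import Counter

def compact_histogram(pixels):
    # Lossless compact n-D histogram: store only the colour cells that are
    # actually occupied, instead of allocating a dense 256^n array.
    return Counter(map(tuple, pixels))

def class_kernels(hist, min_count):
    # Histogram peaks enclosing at least `min_count` pixels seed class
    # kernels (the "secondary arbitrary threshold" in the text).
    return {cell: n for cell, n in hist.items() if n >= min_count}

pixels = [(255, 0, 0)] * 5 + [(0, 255, 0)] * 3 + [(7, 7, 7)]
hist = compact_histogram(pixels)
```

An image rarely uses more than a tiny fraction of the 256^n possible cells, so the map stays small even for large n while preserving every occupied colour exactly.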
This study aimed at a comparison of the effectiveness of social skills training and anger management on the adjustment of unsupervised girl adolescents between 15 and 18 years old in Tehran. This research was an experimental study with a pre-test/post-test control-group design. The statistical universe of this research consisted of all unsupervised girl adolescents between 15 and 18 years old in Tehran. The subjects were 35 unsupervised girl adolescents who were assigned to two groups: an experimental group and a control group. Data were collected using the Adjustment Inventory for School Students (AISS). Multivariate analysis of covariance showed that the social skills training and anger management significantly increased social, emotional and educational adjustment in the experimental group (P < 0.05). However, Tukey's follow-up test showed no significant difference between the effectiveness of anger-management training and that of social skills training on individuals' total adjustment. The findings showed that both trainings could be used to the same extent to enhance the level of adjustment.
Funding: Science and Technology Research Project of the Henan Province (222102240014).
Funding: Supported in part by the National Natural Science Foundation of China (Grants 62376172, 62006163, 62376043); in part by the National Postdoctoral Program for Innovative Talents (Grant BX20200226); and in part by the Sichuan Science and Technology Planning Project (Grants 2022YFSY0047, 2022YFQ0014, 2023ZYD0143, 2022YFH0021, 2023YFQ0020, 24QYCX0354, 24NSFTD0025).
Abstract: Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within the time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
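The 1D-to-2D conversion mentioned above can be sketched as follows: the FFT reveals the dominant period of the series, and the series is folded into a 2D array whose rows are consecutive cycles. This is a minimal sketch of the general idea; CAFFN's exact layout and any multi-period handling are not specified in the abstract.

```python
import numpy as np

def series_to_2d(x):
    """Reshape a 1D series into 2D using its dominant FFT period (a sketch)."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    spec[0] = 0.0                       # ignore the DC component
    freq = np.argmax(spec)              # dominant frequency index
    period = max(1, n // max(freq, 1))  # corresponding period length
    rows = n // period
    return x[:rows * period].reshape(rows, period)

# Toy series with period 24: the 2D layout stacks one cycle per row.
t = np.arange(240)
img = series_to_2d(np.sin(2 * np.pi * t / 24))
print(img.shape)  # (10, 24)
```

Once folded this way, ordinary 2D convolutions can extract intra-period and inter-period patterns simultaneously.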
Abstract: In Unsupervised Domain Adaptation (UDA) for person re-identification (re-ID), the primary challenge is reducing the distribution discrepancy between the source and target domains. This can be achieved by implicitly or explicitly constructing an appropriate intermediate domain to enhance recognition capability on the target domain. Implicit construction is difficult due to the absence of intermediate-state supervision, making smooth knowledge transfer from the source to the target domain a challenge. To explicitly construct the most suitable intermediate domain for the model to gradually adapt to the feature distribution changes from the source to the target domain, we propose the Minimal Transfer Cost Framework (MTCF). MTCF considers all scenarios of the intermediate domain during the transfer process, ensuring smoother and more efficient domain alignment. Our framework mainly includes three modules: the Intermediate Domain Generator (IDG), the Cross-domain Feature Constraint Module (CFCM), and the Residual Channel Space Module (RCSM). First, the IDG module is introduced to generate all possible intermediate domains, ensuring a smooth transition of knowledge from the source to the target domain. To reduce the cross-domain feature distribution discrepancy, we propose the CFCM module, which quantifies the difficulty of knowledge transfer and ensures the diversity of intermediate domain features and their semantic relevance, achieving alignment between the source and target domains by incorporating mutual information and maximum mean discrepancy. We also design the RCSM, which utilizes an attention mechanism to enhance the model's focus on personnel features in low-resolution images, improving the accuracy and efficiency of person re-ID. Our proposed method outperforms existing technologies in all common UDA re-ID tasks and improves the Mean Average Precision (mAP) by 2.3% in the Market-to-Duke task compared to state-of-the-art (SOTA) methods.
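The maximum mean discrepancy (MMD) used by the CFCM module for source-target alignment can be computed compactly with an RBF kernel. This is a generic, minimal version; the bandwidth gamma and the biased estimator are free choices here, not details from the paper.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared MMD between samples X and Y under an RBF kernel (biased estimator)."""
    def k(A, B):
        # Pairwise squared distances -> RBF kernel matrix.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = rbf_mmd2(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)))
shifted = rbf_mmd2(rng.normal(size=(100, 2)), rng.normal(3.0, 1.0, size=(100, 2)))
print(same < shifted)  # matched distributions give a smaller MMD
```

Minimizing such a term over network features pulls the source and target feature distributions together.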
Fund: Supported by the MOE (Ministry of Education of China) Project of Humanities and Social Sciences (23YJAZH169), the Hubei Provincial Department of Education Outstanding Youth Scientific Innovation Team Support Foundation (T2020017), and Henan Foreign Experts Project No. HNGD2023027.
Abstract: Image classification and unsupervised image segmentation can be achieved using the Gaussian mixture model. Although the Gaussian mixture model enhances the flexibility of image segmentation, it does not reflect spatial information and is sensitive to the segmentation parameter. In this study, we first present an efficient algorithm that incorporates spatial information into the Gaussian mixture model (GMM) without parameter estimation. The proposed model highlights the residual region with considerable information and constructs color saliency. Second, we incorporate the content-based color saliency as spatial information in the Gaussian mixture model. The segmentation is performed by clustering each pixel into an appropriate component according to the expectation maximization and maximum criteria. Finally, the random color histogram assigns a unique color to each cluster and creates an attractive color by default for segmentation. A random color histogram serves as an effective tool for data visualization and is instrumental in the creation of generative art, facilitating both analytical and aesthetic objectives. For experiments, we used the Berkeley segmentation dataset BSDS-500 and the Microsoft Research Cambridge (MSRC) dataset. The proposed model showcases notable advancements in unsupervised image segmentation, with probabilistic rand index (PRI) values reaching 0.80, BDE scores as low as 12.25 and 12.02, compactness variations at 0.59 and 0.7, and variation of information (VI) reduced to 2.0 and 1.49 for the BSDS-500 and MSRC datasets, respectively, outperforming current leading-edge methods and yielding more precise segmentations.
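The core clustering step (assigning each pixel to a GMM component) can be sketched in a few lines. This sketch clusters raw color vectors only; the paper's color-saliency features would be appended as extra columns, which is omitted here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two-region synthetic RGB image: dark left half, bright right half.
rng = np.random.default_rng(0)
img = np.zeros((20, 20, 3))
img[:, :10] = 0.2 + 0.05 * rng.normal(size=(20, 10, 3))
img[:, 10:] = 0.8 + 0.05 * rng.normal(size=(20, 10, 3))

# Cluster each pixel's color vector with a 2-component GMM.
pixels = img.reshape(-1, 3)
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(pixels)
seg = labels.reshape(20, 20)
print(seg[0, 0] != seg[0, -1])  # the two halves land in different components
```

Each component's posterior plays the role of a soft segment membership; the final map takes the maximum-probability component per pixel.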
Fund: Supported in part by the National Key Research and Development Program of China (2020YFB1313002), the National Natural Science Foundation of China (62276023, U22B2055, 62222302, U2013202), the Fundamental Research Funds for the Central Universities (FRF-TP-22-003C1), and the Postgraduate Education Reform Project of Henan Province (2021SJGLX260Y).
Abstract: Underwater image enhancement aims to restore a clean appearance and thus improve the quality of underwater degraded images. Current methods feed the whole image directly into the model for enhancement. However, they ignore that the R, G, and B channels of underwater degraded images present varied degrees of degradation owing to the selective absorption of light. To address this issue, we propose an unsupervised multi-expert learning model that considers the enhancement of each color channel. Specifically, an unsupervised architecture based on a generative adversarial network is employed to alleviate the need for paired underwater images. Based on this, we design a generator, including a multi-expert encoder, a feature fusion module, and a feature fusion-guided decoder, to generate the clear underwater image. Accordingly, a multi-expert discriminator is proposed to verify the authenticity of the R, G, and B channels, respectively. In addition, content perceptual loss and edge loss are introduced into the loss function to further improve the content and details of the enhanced images. Extensive experiments on public datasets demonstrate that our method achieves more pleasing results in visual quality. Various metrics (PSNR, SSIM, UIQM, and UCIQE) evaluated on our enhanced images improve markedly.
Fund: Supported by the UGC, SERO, Hyderabad under FDP during the XI plan period, and by the UGC, New Delhi, for financial assistance under major research project Grant No. F-34-105/2008.
Abstract: Feature selection (FS) is a process to select the features which are more informative. It is one of the important steps in knowledge discovery. The problem is that not all features are important: some of the features may be redundant, and others may be irrelevant and noisy. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features which are related to the decision classes of the data under consideration. However, for many data mining applications, decision class labels are often unknown or incomplete, which indicates the significance of unsupervised feature selection. In unsupervised learning, however, decision class labels are not provided. In this paper, we propose a new unsupervised quick reduct (QR) algorithm using rough set theory. The quality of the reduced data is measured by the classification performance, evaluated using the WEKA classifier tool. The method is compared with existing supervised methods, and the results demonstrate the efficiency of the proposed algorithm.
Abstract: This paper presents a fuzzy logic approach to efficiently perform unsupervised character classification for improvement in the robustness, correctness, and speed of a character recognition system. The characters are first split into eight typographical categories. The classification scheme uses pattern matching to classify the characters in each category into a set of fuzzy prototypes based on a nonlinear weighted similarity function. The fuzzy unsupervised character classification, which is natural in the repre...
Fund: Supported by the Hebei Provincial Natural Science Foundation of China (Grant No. F2016203421).
Abstract: The performance of traditional vibration-based fault diagnosis methods greatly depends on hand-crafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided an alternative promising solution to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to achieve diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features and achieves better performance with higher accuracy and stability compared to the traditional approaches.
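The coarse-grained procedure above is the standard one from multiscale analysis: at scale s, the signal is averaged over non-overlapping windows of length s. A minimal sketch (the scale set and downstream sparse filtering are omitted):

```python
import numpy as np

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale` to get one scale signal."""
    n = len(signal) // scale
    return signal[:n * scale].reshape(n, scale).mean(axis=1)

x = np.arange(12, dtype=float)         # toy stand-in for a vibration signal
scales = [coarse_grain(x, s) for s in (1, 2, 4)]
print([len(s) for s in scales])        # [12, 6, 3]
```

Each scale signal is then fed to an unsupervised feature learner, and the per-scale features are concatenated into the multiscale representation.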
Fund: This work was supported by the National Natural Science Foundation of China (Nos. U1562218, 41604107, and 41804126).
Abstract: Traditional unsupervised seismic facies analysis techniques need to assume that seismic data obey a mixed Gaussian distribution. However, field seismic data may not meet this condition, leading to wrong classification when this technology is applied. This paper introduces a spectral clustering technique for unsupervised seismic facies analysis. This algorithm clusters the data based on the idea of a graph: seismic data are regarded as points in space, points are connected by edges, and a graph is constructed. When the graph is partitioned, the weights of the edges between different subgraphs should be as low as possible, whereas the weights of the edges within a subgraph should be as high as possible. However, the spectral clustering algorithm has high computational complexity and entails large memory consumption. To solve this problem, this paper introduces the idea of sparse representation into spectral clustering. Through the selection of a small number of local sparse representation points, the spectral clustering matrix of all sample points is approximately represented, reducing the cost of the spectral clustering operation. Verification on a physical model and field data shows that the proposed approach can obtain more accurate seismic facies classification results without assuming that the data satisfy any particular distribution. The computing efficiency of this new method is better than that of the conventional spectral clustering method, thereby meeting the application needs of field seismic data.
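The graph-partitioning idea itself, without the paper's sparse-landmark speedup, can be demonstrated with an off-the-shelf spectral clustering routine on data that a single Gaussian would describe poorly. The attribute vectors below are synthetic stand-ins, not seismic data.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Two well-separated groups of "attribute vectors"; spectral clustering
# partitions the RBF similarity graph with no Gaussian-mixture assumption.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
labels = SpectralClustering(n_clusters=2, affinity="rbf",
                            random_state=0).fit_predict(X)
print(len(np.unique(labels)))  # 2
```

The sparse-representation variant in the paper replaces the full n x n affinity matrix with similarities to a few landmark points, cutting both memory and eigendecomposition cost.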
Abstract: Color quantization is bound to lose spatial information of the color distribution. If too much necessary spatial distribution information of color is lost in JSEG, it is difficult or even impossible for JSEG to segment an image correctly. Inspired by segmentation based on fuzzy theories, a soft class-map is constructed to solve this problem. The definitions of the values and other related quantities are adjusted according to the soft class-map. With the more detailed values obtained from the soft class-map, more color distribution information is preserved. Experiments on a synthetic image and many other color images illustrate that JSEG with a soft class-map can efficiently handle regions in which color varies gradually in a smooth transition. It is a more robust method, especially for images that have not been heavily blurred near the boundaries of the underlying regions.
Fund: Supported by the National Natural Science Foundation of China (42174142), the National Science and Technology Major Project (2017ZX05039-002), the Operation Fund of the China National Petroleum Corporation Logging Key Laboratory (2021DQ20210107-11), the Fundamental Research Funds for Central Universities (19CX02006A), and the Major Science and Technology Project of China National Petroleum Corporation (ZD2019-183-006).
Abstract: To make the quantitative results of nuclear magnetic resonance (NMR) transverse relaxation (T2) spectra reflect the type and pore structure of the reservoir more directly, an unsupervised clustering method was developed to obtain quantitative pore structure information from NMR T2 spectra based on the Gaussian mixture model (GMM). First, we conducted principal component analysis on the T2 spectra to reduce the data dimension and the dependence among the original variables. Second, the dimension-reduced data were fitted using the GMM probability density function, and the model parameters and optimal number of clusters were obtained according to the expectation-maximization algorithm and the change of the Akaike information criterion. Finally, the T2 spectrum features and pore structure types of the different clustering groups were analyzed and compared with the T2 geometric mean and T2 arithmetic mean. The effectiveness of the algorithm has been verified by numerical simulation and field NMR logging data. The research shows that the clustering results based on the GMM method correlate well with the shape and distribution of the T2 spectrum, pore structure, and petroleum productivity, providing a new means for quantitative identification of pore structure, reservoir grading, and oil and gas productivity evaluation.
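The PCA-then-GMM workflow with AIC-based model selection can be sketched on synthetic data. The "spectra" below are toy stand-ins for NMR T2 amplitude curves, not real logging data, and the candidate cluster counts are an arbitrary choice.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for T2 spectra: two amplitude shapes plus noise.
rng = np.random.default_rng(0)
bins = np.linspace(-3, 3, 32)
group_a = np.exp(-(bins - 1) ** 2) + 0.05 * rng.normal(size=(40, 32))
group_b = np.exp(-(bins + 1) ** 2) + 0.05 * rng.normal(size=(40, 32))
spectra = np.vstack([group_a, group_b])

# Step 1: PCA for dimension reduction, as in the workflow above.
Z = PCA(n_components=2).fit_transform(spectra)

# Step 2: fit GMMs via EM and pick the cluster count with the lowest AIC.
aics = {k: GaussianMixture(k, random_state=0).fit(Z).aic(Z) for k in (1, 2, 3, 4)}
best_k = min(aics, key=aics.get)
print(best_k)
```

The AIC trades fit quality against parameter count, so the selected cluster number is not hand-tuned, mirroring the unsupervised spirit of the method.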
Fund: Supported by the University Doctorate Special Research Fund (No. 20030614001) and the Youth Scholarship Leader Fund of Univ. of Electro. Sci. and Tech. of China.
Abstract: In this letter, a new method is proposed for unsupervised classification of terrain types and man-made objects using POLarimetric Synthetic Aperture Radar (POLSAR) data. This technique combines the polarimetric information of SAR images with an unsupervised classification method based on fuzzy set theory. Image quantization and image enhancement are used to preprocess the POLSAR data. Then the polarimetric information and the Fuzzy C-Means (FCM) clustering algorithm are used to classify the preprocessed images. The advantages of this algorithm are its automated classification, high classification accuracy, fast convergence, and high stability. The effectiveness of this algorithm is demonstrated by experiments using SIR-C/X-SAR (Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar) data.
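The FCM algorithm at the heart of this method alternates between updating fuzzy memberships and cluster centers. Below is a generic textbook FCM sketch on toy 2-D points; the paper instead feeds preprocessed polarimetric features, and its quantization/enhancement steps are omitted.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-Means: returns the membership matrix U and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                                # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))        # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

X = np.vstack([np.zeros((30, 2)), np.ones((30, 2)) * 5.0])
U, centers = fcm(X, c=2)
print(np.sort(centers[:, 0]))  # centers near 0 and 5
```

Unlike hard k-means, each pixel keeps a graded membership in every class, which is what makes the scheme robust for mixed terrain pixels.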
Fund: Supported by the National Natural Science Foundation of China under Grant Nos. 60905017 and 61072061, the National High Technical Research and Development Program of China (863 Program) under Grant No. 2009AA01A346, the 111 Project of China under Grant No. B08004, and the Special Project for Innovative Young Researchers of Beijing University of Posts and Telecommunications.
Abstract: As a generative model, the Latent Dirichlet Allocation model focuses on how to generate data and lacks optimization of the topics' discrimination capability. This paper aims to improve the discrimination capability through unsupervised feature selection. Theoretical analysis shows that the discrimination capability of a topic is limited by the discrimination capability of its representative words. The discrimination capability of a word is approximated by the information gain of the word for topics, which is used to distinguish between "general words" and "special words" in LDA topics. Therefore, we add a constraint to the LDA objective function so that "general words" appear only in "general topics" rather than in "special topics". A heuristic algorithm is then presented to obtain the solution. Experiments show that this method not only improves the information gain of topics, but also makes the topics easier for humans to understand.
Fund: This research was fully supported by Universiti Teknologi PETRONAS under the Yayasan Universiti Teknologi PETRONAS (YUTP) Fundamental Research Grant Scheme (YUTP-015LC0-123).
Abstract: Anomaly detection in high-dimensional data is a critical research issue with serious implications for real-world problems. Many issues in this field remain unsolved, so several modern anomaly detection methods struggle to maintain adequate accuracy due to the highly descriptive nature of big data. Such a phenomenon is referred to as the "curse of dimensionality", which affects traditional techniques in terms of both accuracy and performance. Thus, this research proposes a hybrid model based on a Deep Autoencoder Neural Network (DANN) with five layers to reduce the difference between the input and output. The proposed model was applied to a real-world gas turbine (GT) dataset that contains 87620 columns and 56 rows. During the experiments, two issues were investigated and solved to enhance the results. The first is the dataset class imbalance, which was solved using the SMOTE technique. The second issue is poor performance, which can be solved using an optimization algorithm. Several optimization algorithms were investigated and tested, including stochastic gradient descent (SGD), RMSprop, Adam, and Adamax; the Adamax optimization algorithm showed the best results when employed to train the DANN model. The experimental results show that our proposed model can detect anomalies by efficiently reducing the high dimensionality of the dataset, with an accuracy of 99.40%, an F1-score of 0.9649, an Area Under the Curve (AUC) rate of 0.9649, and a minimal loss function during hybrid model training.
Fund: Supported by the National Science Fund for Distinguished Young Scholars of China (60925011).
Abstract: An implementation of adaptive filtering, composed of an unsupervised adaptive filter (UAF), a multi-step forward linear predictor (FLP), and an unsupervised multi-step adaptive predictor (UMAP), is built for suppressing impulsive noise in unknown circumstances. This filtering scheme, called the unsupervised robust adaptive filter (URAF), possesses a switching structure, which ensures robustness against impulsive noise. The FLP is used to detect possible impulsive noise added to the signal; if the signal is impulse-free, the UAF can estimate the clean signal. If impulsive noise exists, the impulse-corrupted samples are replaced by ones predicted from the FLP, and then the UMAP estimates the clean signal. Both the simulation and experimental results show that the URAF has a better rate of convergence than the most recent universal filter, and is effective in restricting large disturbances such as impulsive noise when the universal filter fails.
Fund: Supported by the National Natural Science Foundation of China (61801513).
Abstract: Considering the sparsity of hyperspectral images (HSIs), dictionary learning frameworks have been widely used in the field of unsupervised spectral unmixing. However, it is worth mentioning that existing dictionary learning-based unmixing methods are found to lack robustness in noisy contexts. To improve the performance, this study puts forward a new unsupervised spectral unmixing solution. Because the solution only functions under the condition that both the endmembers and the abundances meet non-negative constraints, a model is built to solve the unsupervised spectral unmixing problem on the basis of the dictionary learning method. To raise the screening accuracy of the final endmembers, a new form of the target function is introduced into dictionary learning practice, which is conducive to greater robustness against noisy HSI statistics. Then, by introducing total variation (TV) terms into the proposed spectral unmixing based on robust nonnegative dictionary learning (RNDLSU), the context information of the HSI space is used as prior knowledge to compute the abundances when performing sparse unmixing operations. According to the final results of the experiments, this method performs favorably under varying noise conditions, especially at low signal-to-noise ratios.
Fund: This paper was supported by the National Natural Science Foundation of China (61772286, 61802208, and 61876089), the China Postdoctoral Science Foundation (Grant 2019M651923), and the Natural Science Foundation of Jiangsu Province of China (BK0191381).
Abstract: Cross-project defect prediction (CPDP) aims to predict the defects of a target project by using a prediction model built on source projects. The main problem in CPDP is the huge distribution gap between the source project and the target project, which prevents the prediction model from performing well. Most existing methods overlook the class discrimination of the learned features. Seeking an effective transferable model from the source project to the target project for CPDP is challenging. In this paper, we propose an unsupervised domain adaptation approach based on discriminative subspace learning (DSL) for CPDP. DSL treats the data from two projects as being from two domains and maps the data into a common feature space. It employs cross-domain alignment with discriminative information from different projects to reduce the distribution difference of the data between projects and incorporates class-discriminative information. Specifically, DSL first utilizes subspace learning-based domain adaptation to reduce the distribution gap between projects. Then, it makes full use of the class label information of the source project and transfers the discrimination ability of the source project to the target project in the common space. Comprehensive experiments on five projects verify that DSL can build an effective prediction model and improve the performance over the related competing methods by at least 7.10% and 11.08% in terms of G-measure and AUC, respectively.
Abstract: In this paper, the IHSL transform and the Fuzzy C-Means (FCM) segmentation algorithm are combined to perform unsupervised classification of fully polarimetric Synthetic Aperture Radar (SAR) data. We apply the IHSL colour transform to the H/α/SPAN space to obtain a new space (an RGB colour space) which has uniform distinguishability among its inner parameters and contains the whole polarimetric information of H/α/SPAN. Then the FCM algorithm is applied to this RGB space to complete the classification procedure. The main advantages of this method are that the parameters in the colour space have similar interclass distinguishability, so it can achieve high performance in a pixel-based segmentation algorithm, and since the parameters can be treated in the same way, the segmentation procedure is simplified. The experiments show that it provides an improved classification result compared with the method that uses the H/α/SPAN space directly during the segmentation procedure.
Abstract: Image segmentation denotes a process for partitioning an image into distinct regions; it plays an important role in interpretation and decision making. A large variety of segmentation methods has been developed; among them, multidimensional histogram methods have been investigated, but their implementation remains difficult due to the large size of the histograms. We present an original method for segmenting n-D images (where n is the number of components in the image), or multidimensional images, in an unsupervised way using a fuzzy neighbourhood model. It is based on the hierarchical analysis of full n-D compact histograms, integrating a fuzzy connected-components labelling algorithm that we have realized in this work. Each peak of the histogram constitutes a class kernel as soon as it encloses a number of pixels greater than or equal to a secondary arbitrary threshold, a first threshold having been set to define the degree of binary fuzzy similarity between pixels. The use of a lossless compact n-D histogram allows a drastic reduction of the memory space necessary for coding it. As a consequence, the segmentation can be achieved without reducing the colour population of the images in the classification step. It is shown that using n-D compact histograms, instead of 1-D and 2-D ones, leads to better segmentation results. Various images were segmented; supervised and unsupervised evaluation of the segmentation quality of the proposed method, compared with the k-means classification method, gives better results. This highlights the relevance of our approach, which can be used for solving many segmentation problems.
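The compact-histogram idea above (store only the occupied cells of an otherwise enormous n-D histogram, losslessly) can be sketched in a few lines for n = 3 colour components. This is a generic illustration of the data structure, not the paper's hierarchical peak analysis.

```python
import numpy as np

# A compact 3-D colour histogram: record only colours that actually occur,
# instead of allocating a dense 256^3 array.
rng = np.random.default_rng(0)
img = rng.integers(0, 4, size=(64, 64, 3)) * 64   # image with few distinct colours
colors, counts = np.unique(img.reshape(-1, 3), axis=0, return_counts=True)
print(len(colors), counts.sum())  # occupied cells only; counts cover every pixel
```

For a 3-component 8-bit image, the dense histogram has 16.7 million cells, while the compact form stores at most one entry per distinct colour present, which is what makes full n-D analysis tractable.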
Abstract: This study compared the effectiveness of social skills training and anger management training on the adjustment of unsupervised girl adolescents aged 15-18 in Tehran. The research used an experimental design with pre-test and post-test control groups. The statistical universe of this research consisted of all unsupervised girl adolescents aged 15-18 in Tehran. The subjects were 35 unsupervised girl adolescents assigned to two groups: an experimental group and a control group. Data were collected using the Adjustment Inventory for School Students (AISS). Multivariate analysis of covariance showed that the social skills training and anger management training significantly increased social, emotional, and educational adjustment in the experimental group (P < 0.05). However, Tukey's follow-up test showed no significant difference between the effectiveness of anger-control training and that of social skills training on the individuals' total adjustment. The findings showed that both trainings can be used to the same extent to enhance adjustment levels.