Unsupervised methods based on density representation have shown their effectiveness in anomaly detection, but detection performance still needs to be improved. Specifically, approaches using normalizing flows can accurately evaluate sample distributions, mapping normal features to the normal distribution and anomalous features outside it. Consequently, this paper proposes a Normalizing Flow-based Bidirectional Mapping Residual Network (NF-BMR). It utilizes pre-trained Convolutional Neural Networks (CNNs) and normalizing flows to construct discriminative source- and target-domain feature spaces. Additionally, to better learn feature information in both domain spaces, we propose the Bidirectional Mapping Residual Network (BMR), which maps sample features to these two spaces for anomaly detection. The two detection spaces effectively complement each other's deficiencies and provide a comprehensive feature evaluation from two perspectives, which improves detection performance. Comparative experiments on the MVTec AD and DAGM datasets against the Bidirectional Pre-trained Feature Mapping Network (B-PFM) and other state-of-the-art methods demonstrate that the proposed approach achieves superior performance. On the MVTec AD dataset, NF-BMR achieves an average AUROC of 98.7% across all 15 categories; notably, it achieves 100% detection performance in five categories. On the DAGM dataset, the average AUROC across ten categories is 98.7%, which is very close to that of supervised methods.
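The density-to-score idea behind flow-based detectors can be sketched as follows. This is a toy sketch, not the paper's NF-BMR: a fixed elementwise affine map stands in for the learned flow, and all names are illustrative. The key point is that a flow's log-likelihood combines the base-density term with the change-of-variables (log-determinant) correction, and the anomaly score is the negative log-likelihood.

```python
import numpy as np

def flow_log_likelihood(x, shift, log_scale):
    """Log-likelihood of a feature vector under a toy invertible affine flow.

    z = (x - shift) * exp(-log_scale) maps normal features toward N(0, I);
    the change-of-variables correction is -sum(log_scale).
    """
    z = (x - shift) * np.exp(-log_scale)
    d = z.shape[-1]
    log_base = -0.5 * np.sum(z ** 2, axis=-1) - 0.5 * d * np.log(2.0 * np.pi)
    return log_base - np.sum(log_scale)

# Anomaly score = negative log-likelihood: low for in-distribution features,
# high for features the flow maps far outside the standard normal.
shift, log_scale = np.zeros(4), np.zeros(4)
score_normal = -flow_log_likelihood(np.zeros(4), shift, log_scale)
score_anomaly = -flow_log_likelihood(5.0 * np.ones(4), shift, log_scale)
```

In the paper the affine parameters would be produced by a learned coupling network conditioned on CNN features; here they are constants purely to show the scoring mechanics.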
In recent years, various adversarial defense methods have been proposed to improve the robustness of deep neural networks. Adversarial training is one of the most potent methods of defending against adversarial attacks. However, the difference in the feature space between natural and adversarial examples hinders the accuracy and robustness of the model in adversarial training. This paper proposes a learnable distribution adversarial training method that aims to construct the same distribution for the training data using a Gaussian mixture model. A distribution centroid is built to classify samples and constrain the distribution of the sample features. The natural and adversarial examples are pushed toward the same distribution centroid to improve the accuracy and robustness of the model. The proposed method generates adversarial examples that close the distribution gap between natural and adversarial examples through an attack algorithm explicitly designed for adversarial training; this algorithm gradually increases the accuracy and robustness of the model by scaling the perturbation. Finally, the proposed method outputs the predicted labels and the distance between each sample and the distribution centroid. The distribution characteristics of the samples can be used to detect adversarial cases that could potentially evade the model's defense. The effectiveness of the proposed method is demonstrated through comprehensive experiments.
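The two mechanics described above — pulling natural and adversarial features toward a shared class centroid, and flagging samples that end up far from every centroid — can be sketched in a few lines. This is a minimal illustration with hypothetical 2-D features and fixed centroids, not the paper's learnable Gaussian-mixture formulation.

```python
import numpy as np

def centroid_loss(features, labels, centroids):
    """Mean squared distance of each sample's features to its class centroid;
    minimizing this on both natural and adversarial features pushes them
    toward the same distribution centroid."""
    diff = features - centroids[labels]
    return float(np.mean(np.sum(diff ** 2, axis=1)))

def flag_adversarial(feature, centroids, threshold):
    """Detection heuristic: a sample far from every centroid is suspicious."""
    dists = np.linalg.norm(centroids - feature, axis=1)
    return bool(dists.min() > threshold)

centroids = np.array([[0.0, 0.0], [4.0, 4.0]])   # one centroid per class
natural = np.array([[0.1, -0.1], [3.9, 4.2]])
labels = np.array([0, 1])
loss = centroid_loss(natural, labels, centroids)
```

In training, `centroid_loss` would be one term of the objective alongside the classification loss, with the centroids themselves learnable.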
Modeling soil salinity in an arid salt-affected ecosystem is a difficult task when using remote sensing data because of the complicated soil context (vegetation cover, moisture, surface roughness, and organic matter) and the weak spectral features of salinized soil. Therefore, an index such as the salinity index (SI) that only uses soil spectra may not detect soil salinity effectively and quantitatively. The use of vegetation reflectance as an indirect indicator can avoid limitations associated with the direct use of soil reflectance. The normalized difference vegetation index (NDVI), the most common vegetation index, was found to be responsive to salinity but may not be suitable for retrieving sparse vegetation because of its sensitivity to background soil in arid areas. Therefore, the arid fraction integrated index (AFII) was created, supported by spectral mixture analysis (SMA), which is more appropriate for analyzing variations in vegetation cover (particularly halophytes) than NDVI in the study area. Because using soil or vegetation alone to detect salinity may not be feasible, we developed a new and operational model, the soil salinity detecting model (SDM), that combines AFII and SI to quantitatively estimate the salt content of the surface soil. The SDMs, including SDM1 and SDM2, were constructed by analyzing the spatial characteristics of soils with different salinization degrees, integrating AFII and SI in a scatterplot. The SDMs were then compared to the combined spectral response index (COSRI) using field measurements of soil salt content. The results indicate that the SDM values are highly correlated with soil salinity, in contrast to the performance of COSRI: strong exponential relationships were observed between soil salinity and the SDMs (R² > 0.86, RMSE < 6.86) compared to COSRI (R² = 0.71, RMSE = 16.21). These results suggest that a feature space related to biophysical properties, combining AFII and SI, can effectively provide information on soil salinity.
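The building blocks of such a two-index feature space can be sketched as follows. NDVI is the standard formulation; the SI shown is one common variant (the square root of the blue-red band product), and the paper's exact SI and AFII formulations may differ — AFII in particular comes from spectral mixture analysis, which is not reproduced here.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index (standard formulation)."""
    return (nir - red) / (nir + red + 1e-12)

def salinity_index(blue, red):
    """One common salinity-index variant, SI = sqrt(blue * red); the
    paper's exact SI formulation may differ."""
    return np.sqrt(blue * red)

# Hypothetical per-pixel band values: each pixel becomes a point in a
# (vegetation, SI) scatter space, from which an SDM-style model is fitted.
nir  = np.array([0.5, 0.4, 0.2])
red  = np.array([0.1, 0.2, 0.3])
blue = np.array([0.1, 0.2, 0.4])
veg = ndvi(nir, red)
si = salinity_index(blue, red)
```

In the paper, the scatterplot of the two axes is what the SDM1/SDM2 constructions are derived from; here the three pixels simply illustrate vegetation decreasing as the salinity signal increases.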
The critical technical problem of underwater bottom object detection is finding a stable feature space for echo signal classification. Past literature focuses mostly on the characteristics of object echoes in feature space, while reverberation is treated only as interference. In this paper, reverberation is instead considered a kind of signal with steady characteristics, and the clustering of reverberation in the frequency discrete wavelet transform (FDWT) feature space is studied. In order to extract the identifying information of echo signals, feature compression and cluster analysis are adopted, and a criterion of separability between object echoes and reverberation is given. Experimental data processing results show that reverberation has a steady pattern in the FDWT feature space which differs from that of object echoes. It is proven that there is separability between reverberation and object echoes.
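A simplified version of the pipeline — wavelet-band energy features plus a scatter-based separability criterion — can be sketched with a hand-rolled Haar transform. This is an illustrative stand-in: the paper uses FDWT features and its own separability criterion, while here a plain Fisher-style between/within scatter ratio is used.

```python
import numpy as np

def haar_band_energies(signal, levels=2):
    """Energy per sub-band of a Haar discrete wavelet transform — a simple
    stand-in feature vector for the FDWT features used in the paper.
    The signal length must be divisible by 2**levels."""
    a = np.asarray(signal, float)
    feats = []
    for _ in range(levels):
        pairs = a.reshape(-1, 2)
        d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # detail band
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)   # approximation
        feats.append(np.sum(d ** 2))
    feats.append(np.sum(a ** 2))                          # final approximation
    return np.array(feats)

def fisher_separability(class_a, class_b):
    """Between-class over within-class scatter: a simple separability
    criterion for two feature clusters (object echoes vs. reverberation)."""
    ma, mb = class_a.mean(axis=0), class_b.mean(axis=0)
    within = class_a.var(axis=0).sum() + class_b.var(axis=0).sum()
    return float(np.sum((ma - mb) ** 2) / (within + 1e-12))
```

Because the Haar transform is orthonormal, the band energies conserve the signal energy (Parseval), which makes them a stable descriptor of where a signal's energy lives across scales.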
Spectral and spatial features in remotely sensed data play an irreplaceable role in classifying crop types for precision agriculture. Despite the thriving establishment of handcrafted features, designing or selecting features valid for specific crop types requires prior knowledge and thus remains an open challenge. Convolutional neural networks (CNNs) can effectively overcome this issue with their advanced ability to generate high-level features automatically, but they are still inadequate at mining spectral features compared with spatial features. This study proposes an enhanced spectral feature called the Stacked Spectral Feature Space Patch (SSFSP) for CNN-based crop classification. An SSFSP is a stack of two-dimensional (2D) gridded spectral feature images that record the spatial and intensity distribution characteristics of various crop types in a 2D feature space consisting of two spectral bands. SSFSPs can be input into 2D-CNNs to support the simultaneous mining of spectral and spatial features, as the spectral features are converted to 2D images that a CNN can process. We tested the performance of SSFSP by using it as the input to seven CNN models and one multilayer perceptron model for crop type classification, compared with using conventional spectral features as input. Using high-spatial-resolution hyperspectral datasets at three sites, the comparative study demonstrated that SSFSP outperforms conventional spectral features in classification accuracy, robustness, and training efficiency. The theoretical analysis summarizes three reasons for its excellent performance. First, SSFSP mines the spectral interrelationship with feature generality, which reduces the required number of training samples. Second, intra-class variance can be largely reduced by grid partitioning. Third, SSFSP is a highly sparse feature, which reduces the dependence on the CNN model structure and enables early and fast convergence in model training. In conclusion, SSFSP has great potential for practical crop classification in precision agriculture.
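The core construction — gridding a two-band feature space into a 2D image that a CNN can consume — reduces, in its simplest form, to a normalized 2D histogram. This is a simplified sketch of the idea, not the paper's full SSFSP stack (which layers many such band-pair images); names and bin counts are illustrative.

```python
import numpy as np

def ssfsp_patch(band1, band2, bins=16, value_range=(0.0, 1.0)):
    """Simplified sketch of one Stacked Spectral Feature Space Patch layer:
    a gridded 2-D image over a (band1, band2) feature space in which each
    pixel of the input patch votes into one cell."""
    patch, _, _ = np.histogram2d(
        np.ravel(band1), np.ravel(band2),
        bins=bins, range=[value_range, value_range])
    return patch / max(patch.sum(), 1.0)   # normalized intensity distribution

band1 = np.full((8, 8), 0.2)   # a spectrally uniform "crop" patch
band2 = np.full((8, 8), 0.7)
patch = ssfsp_patch(band1, band2)
```

A spectrally uniform patch concentrates all its mass in a single cell, which illustrates why the representation is highly sparse: most cells of a real patch are zero, with mass only where the crop's two-band signature actually lies.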
Precise control of machining deformation is crucial for improving the manufacturing quality of structural aerospace components. In the machining process, different batches of blanks have different residual stress distributions, which poses a significant challenge to machining deformation control. In this study, a reinforcement learning method for machining deformation control based on a meta-invariant feature space was developed. The proposed method uses a reinforcement learning model to dynamically control the machining process by monitoring the deformation force. Moreover, combined with the meta-invariant feature space, the method learns the internal relationship of the deformation control approaches under different stress distributions, achieving machining deformation control across different batches of blanks. Experimental results show that the proposed method achieves better deformation control than two existing benchmark methods.
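The control loop can be illustrated with a minimal tabular Q-learning sketch. Everything here is a toy stand-in: the states are hypothetical discretized deformation-force levels, the actions are hypothetical parameter tweaks, and the surrogate `step` function replaces the real machining process (and the meta-invariant feature space is not modeled at all).

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3            # deformation-force levels x parameter tweaks
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

def step(state, action):
    """Toy surrogate process: action 1 holds the current deformation level
    (reward 1); actions 0 and 2 drift it down/up with a small penalty
    proportional to the distance from the nominal level."""
    if action == 1:
        return state, 1.0
    nxt = min(n_states - 1, max(0, state + (1 if action == 2 else -1)))
    return nxt, -0.1 * abs(nxt - n_states // 2)

state = 2
for _ in range(500):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())
    nxt, r = step(state, a)
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt
```

After training, the highest-valued entry corresponds to the "hold" action, i.e. the policy learns to keep the monitored deformation force stable rather than drift it.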
With the development of large-scale text processing, the dimension of the text feature space has become larger and larger, which has added many difficulties to natural language processing. How to reduce this dimension has become a practical problem in the field. Here we present two clustering methods, i.e., concept association and concept abstraction, to achieve this goal. The first refers to keyword clustering based on the co-occurrence of ...
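The co-occurrence-based keyword clustering that the abstract begins to describe can be sketched as follows: build a keyword-by-keyword co-occurrence matrix over documents, then merge each keyword toward its strongest associate. The documents and vocabulary here are hypothetical toy data.

```python
import numpy as np

# Toy documents as keyword sets; concept association clusters keywords
# that frequently co-occur within the same documents.
docs = [{"neural", "network", "training"},
        {"neural", "network", "layers"},
        {"soil", "salinity", "index"},
        {"soil", "salinity", "mapping"}]
vocab = sorted(set().union(*docs))
col = {w: i for i, w in enumerate(vocab)}

# Co-occurrence matrix: counts of keyword pairs sharing a document.
cooc = np.zeros((len(vocab), len(vocab)))
for d in docs:
    for a in d:
        for b in d:
            if a != b:
                cooc[col[a], col[b]] += 1

def strongest_associate(word):
    """Keyword with the highest co-occurrence count — the natural merge
    partner in a co-occurrence-based clustering pass."""
    return vocab[int(cooc[col[word]].argmax())]
```

Repeatedly merging keywords with their strongest associates collapses the vocabulary into concept clusters, each of which can then serve as a single dimension of the reduced feature space.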
A novel Mercer kernel-based fuzzy clustering self-adaptive algorithm is presented. The Mercer kernel method is introduced into fuzzy c-means clustering; it implicitly maps the input data into a high-dimensional feature space through a nonlinear transformation. In fuzzy c-means and its variants, the number of clusters must be determined first. Here, a self-adaptive algorithm is proposed in which the number of clusters, which is not given in advance, is obtained automatically by a validity measure function. Finally, experiments show the better performance of the kernel-based fuzzy c-means self-adaptive algorithm.
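The kernel trick at the heart of this family of algorithms is that distances in the implicit feature space never require the mapping itself, only kernel evaluations. The sketch below shows the kernel-space distance identity and the standard fuzzy c-means membership update; the self-adaptive cluster-number selection via a validity measure is omitted.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) Mercer kernel."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def kernel_dist2(x, v, gamma=0.5):
    """Squared distance in the implicit feature space:
    ||phi(x) - phi(v)||^2 = K(x,x) - 2 K(x,v) + K(v,v)."""
    return rbf(x, x, gamma) - 2.0 * rbf(x, v, gamma) + rbf(v, v, gamma)

def memberships(x, centers, m=2.0, gamma=0.5):
    """Fuzzy c-means membership update, with distances taken in kernel space:
    u_k is inversely proportional to d_k^(2/(m-1)), normalized to sum to 1."""
    d2 = np.array([max(kernel_dist2(x, v, gamma), 1e-12) for v in centers])
    inv = d2 ** (-1.0 / (m - 1.0))
    return inv / inv.sum()

centers = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
u = memberships(np.array([0.2, 0.1]), centers)
```

A full kernel FCM would alternate this membership update with a (kernel-weighted) center update until convergence; the fragment above shows only the geometry that makes nonlinearly separable clusters tractable.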
Using the viewpoint of nonlinear science and drawing on pattern recognition methods for selecting numerical features, the physical and numerical features of precursory ground tilt data are employed synthetically. The dynamic changes of the data series are described by numerical features in a multi-dimensional space and their distributive relations, instead of by a single factor. The relationship between ground tilt data and earthquakes is examined through recognition and classification.
In medical research and clinical diagnosis, automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. To facilitate decision-making in health care and related areas, this paper proposes a two-step content-based medical image retrieval algorithm. First, in the preprocessing step, image segmentation is performed to distinguish image objects, and on the basis of the ...
This paper presents a nonlinear multidimensional scaling model, called kernelized fourth quantification theory, which is an integration of kernel techniques and the fourth quantification theory. The model can deal with the problem of mineral prediction without defining a training area. In mineral target prediction, pre-defined statistical cells, such as grid cells, can be implicitly transformed using kernel techniques from the input space to a high-dimensional feature space, where the nonlinearly separable clusters in the input space are expected to become linearly separable. Then, the transformed cells in the feature space are mapped by the fourth quantification theory onto a low-dimensional scaling space, where the scaled cells can be visually clustered according to their spatial locations. At the same time, cells that are far away from the cluster center of the majority of the scaled cells are recognized as anomaly cells. Finally, whether the anomaly cells can serve as mineral potential target cells can be tested by spatially superimposing the known mineral occurrences onto the anomaly cells. A case study shows that nearly all the known mineral occurrences spatially coincide with the anomaly cells with nearly the smallest scaled coordinates in a one-dimensional scaling space. In the case study, the mineral target cells delineated by the new model are similar to those predicted by the well-known WofE model.
Gene selection (feature selection) is generally performed in gene space (feature space), where a very serious curse-of-dimensionality problem always exists because the number of genes is much larger than the number of samples in gene space (G-space). This results in difficulty in modeling the data set in this space and low confidence in the result of gene selection. How to find a gene subset in this case is a challenging subject. In this paper, the above G-space is transformed into its dual space, referred to as class space (C-space), such that the number of dimensions is the number of classes of the samples in G-space and the number of samples in C-space is the number of genes in G-space. It is obvious that the curse of dimensionality does not exist in C-space. A new gene selection method based on the principle of separating different classes as far as possible is presented with the help of Principal Component Analysis (PCA). The experimental results on gene selection for a real data set are evaluated with the Fisher criterion, the weighted Fisher criterion, and leave-one-out cross-validation, showing that the method presented here is effective and efficient.
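The G-space-to-C-space transform can be sketched concretely. In this simplified reading (one plausible interpretation, not necessarily the paper's exact construction), each gene's C-space coordinates are its mean expression per class, so a dataset with thousands of genes but two classes becomes a cloud of points in just two dimensions.

```python
import numpy as np

def to_class_space(X, y):
    """Dual transform: rows of the result are genes, coordinates are the
    per-class mean expression, so dimensionality equals the number of
    classes rather than the number of genes."""
    classes = np.unique(y)
    return np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)

def pca_scores(Z, k=1):
    """Principal component scores via SVD (no external dependency)."""
    Zc = Z - Z.mean(axis=0)
    _, _, vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ vt[:k].T

# Toy data: 6 samples x 5 genes, two classes; gene 0 separates the classes.
X = np.array([[5., 1., 1., 0., 2.],
              [6., 1., 0., 0., 2.],
              [5., 0., 1., 1., 2.],
              [0., 1., 1., 0., 2.],
              [1., 1., 0., 1., 2.],
              [0., 0., 1., 0., 2.]])
y = np.array([0, 0, 0, 1, 1, 1])
C = to_class_space(X, y)          # shape: (n_genes, n_classes) = (5, 2)
```

Genes far from the diagonal of C-space (large per-class mean difference) are exactly those that separate the classes, which is what a PCA-based selection in this space exploits.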
Recognition and counting of greenhouse pests are important for monitoring and forecasting pest population dynamics. This study used image processing techniques to recognize and count whiteflies and thrips on sticky traps in a greenhouse environment. Digital images of the sticky traps were collected using an image-acquisition system under different greenhouse conditions. If a single color space is used, it is difficult to segment the small pests correctly because of the detrimental effects of non-uniform illumination in complex scenarios. Therefore, a method was proposed that first segments pests in two color spaces, using the Prewitt operator on the I component of the hue-saturation-intensity (HSI) color space and the Canny operator on the B component of the Lab color space. The segmented results from the two color spaces were then combined, achieving 91.57% segmentation accuracy. Next, because different features of pests contribute differently to the classification of pest species, the study extracted multiple features (e.g., color and shape features) in different color spaces for each segmented pest region to improve recognition performance. Twenty decision trees were used to form a strong ensemble learning classifier with a majority voting mechanism, obtaining 95.73% recognition accuracy. The proposed method is a feasible and effective way to process greenhouse pest images: the system accurately recognized and counted pests in sticky trap images captured under real greenhouse conditions.
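The Prewitt half of the segmentation step can be sketched directly in numpy. This is a minimal stand-in: a synthetic step edge replaces a real trap image, the color-space conversions and the Canny branch are omitted, and in the paper the two color-space masks would then be combined.

```python
import numpy as np

def prewitt_magnitude(channel):
    """Prewitt gradient magnitude on one color-space component (e.g. the I
    component of HSI), computed by direct 3x3 correlation ('valid' output)."""
    kx = np.array([[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]])
    ky = kx.T
    h, w = channel.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for i in range(h - 2):
        for j in range(w - 2):
            win = channel[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

# A vertical step edge: dark trap surface on the left, a bright region on
# the right, as a stand-in for a pest boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = prewitt_magnitude(img)
mask = edges > 1.0   # binary segmentation; masks from two color spaces
                     # would then be combined (summed / OR-ed)
```

The response is strongest at windows straddling the step and zero inside the uniform regions, which is what makes a simple threshold on the magnitude usable as a segmentation mask.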
Traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets, and thus cannot make full use of time-domain information. This paper presents a new space moving-target detection method based on time-domain features. We first construct the time-spectral data of the star map, then analyze the time-domain features of the main objects (targets, stars, and the background) in star maps, and finally detect moving targets using the single-pulse feature of the time-domain signal. Experiments on real star map sequences show that the proposed method can effectively detect the trajectories of moving targets, with a detection probability of 99% at a false alarm rate of about 8×10^(-5), outperforming the compared algorithms.
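The single-pulse idea can be sketched as a per-pixel time-series test over the frame stack. This is a toy illustration (synthetic frames, a simple peak-over-median statistic standing in for the paper's detector): a star is bright in every frame and the background is dim in every frame, so both have a small peak-over-baseline, while a moving target lights a given pixel only briefly.

```python
import numpy as np

def moving_target_mask(stack, pulse_thresh=5.0):
    """Flag pixels whose time series over the star-map sequence looks like
    a single pulse: a large peak relative to a robust (median) baseline."""
    peak = stack.max(axis=0)
    baseline = np.median(stack, axis=0)
    return (peak - baseline) > pulse_thresh

frames = np.zeros((10, 5, 5))
frames[:, 1, 1] = 20.0   # a fixed star: bright in every frame
frames[6, 3, 3] = 20.0   # a moving target: bright in one frame only
mask = moving_target_mask(frames)
```

Linking the flagged pixels across frames then yields the target trajectory; the fragment above covers only the per-pixel pulse test.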
Facial emotions have great significance in human-computer interaction, virtual reality, and interpersonal communication. Existing methods for facial emotion privacy mainly concentrate on perturbing facial emotion images. However, cryptography-based perturbation algorithms are computationally expensive, and transformation-based perturbation algorithms only target specific recognition models. In this paper, we propose a universal feature-vector-based privacy-preserving perturbation algorithm for facial emotion. Our method protects facial emotion images in the feature space by computing tiny perturbations and adding them to the original images. In addition, the proposed algorithm can cause expression images to be recognized as specific labels. Experiments show that the protection success rate of our method is above 95% and the image quality evaluation degrades by no more than 0.003. The quantitative and qualitative results show that our proposed method strikes a balance between privacy and usability.
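The feature-space perturbation idea can be sketched with a linear feature extractor standing in for a recognition network's embedding layer (everything here is hypothetical toy data, not the paper's algorithm): take small sign-gradient steps that move the image's feature vector toward a target feature, while a per-pixel budget keeps the visible change tiny.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))     # linear extractor standing in for a CNN embedding
x = rng.normal(size=8)          # "image" pixels (flattened)
target = rng.normal(size=4)     # feature vector of the desired target label

def perturb_toward_feature(x, W, target, eps=0.002, steps=20, budget=0.03):
    """Iterative sign-gradient steps that move the feature W @ x toward a
    target feature while clipping the pixel change to a small budget, so
    image quality barely degrades."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ (x + delta) - target)   # d/dx ||Wx - t||^2
        delta = np.clip(delta - eps * np.sign(grad), -budget, budget)
    return x + delta

x_adv = perturb_toward_feature(x, W, target)
d_before = np.linalg.norm(W @ x - target)
d_after = np.linalg.norm(W @ x_adv - target)
```

With a real network the gradient would come from backpropagation rather than the closed form above, but the trade-off is the same: the feature moves toward the target label while the pixel change stays within the budget.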
Under the pressure of SDG 15.3.1 compliance, it is imperative to solve the land salinization degradation problem in the Yellow River Basin, China's granary. From the viewpoint of geographical scale, six zoning units were delineated in the Yellow River Basin with 'climate-meteorology-geomorphology' as the main controlling factors, and a salinization inversion model was constructed for each zoning unit. Appropriate surface parameters were selected to construct a three-dimensional feature space for each geographical zone. Based on the cloud data processing capability of the Google Earth Engine platform, a feature space inversion process was applied for automatic inversion of salinization. Salinization distribution maps of the Yellow River Basin in 2015 and 2020 were obtained at 30 m resolution by classifying the salinization inversion results. The distribution and spatiotemporal variation of salinization, as well as its causes, were analyzed, and reasonable prevention and control suggestions were subsequently proposed. This approach could also be scaled up to larger and more complex geographical regions.
In this paper, man-computer interactive classification of clouds based on bispectral satellite imagery is improved by combining the maximum likelihood automatic clustering (MLAC) and unit feature space classification (UFSC) approaches. The improved classification not only shortens the sample-training time of the UFSC method, but also eliminates the inherent shortcomings of the MLAC method: (1) sample selection and training are confined to a single cloud image; (2) the clustering result is quite sensitive to the selection of the initial cluster center; (3) the actual classification generally cannot satisfy the normal-distribution assumption required by the MLAC method; and (4) classification errors are difficult to correct. Moreover, it makes full use of professionals' accumulated knowledge and experience of visual cloud classification and of ground-observation cloud reports, ensuring both higher classification accuracy and wide applicability.
This paper proposes a structure-aware nonlocal energy optimization framework for interactive image colorization with sparse scribbles. Our colorization technique propagates colors to both local intensity-continuous regions and remote texture-similar regions without explicit image segmentation. We implement the nonlocal principle by computing k nearest neighbors in a high-dimensional feature space. The feature space contains not only image coordinates and intensities but also statistical texture features obtained with a direction-aligned Gabor wavelet filter. Structure maps are used to scale the texture features to avoid artifacts along high-contrast boundaries. We show various experimental results and comparisons on image colorization, selective recoloring and decoloring, and progressive color editing to demonstrate the effectiveness of the proposed approach.
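The nonlocal propagation step can be sketched with a distance-weighted k-nearest-neighbor average in feature space. This is a reduced illustration: the feature vector here is just (row, col, intensity), whereas the paper additionally uses Gabor texture features scaled by structure maps, and solves a global energy rather than a one-shot kNN average.

```python
import numpy as np

def knn_colorize(features, scribble_idx, scribble_colors, query, k=3):
    """Propagate scribble colors to an unlabeled pixel through its k
    nearest neighbors in feature space, weighted by inverse distance."""
    d = np.linalg.norm(features[scribble_idx] - features[query], axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] + 1e-8)
    return (w[:, None] * scribble_colors[nn]).sum(axis=0) / w.sum()

# Features: (row, col, intensity); two scribbled regions, one query pixel.
feats = np.array([[0., 0., 0.10],
                  [0., 1., 0.10],
                  [9., 9., 0.90],
                  [9., 8., 0.90],
                  [0., 2., 0.12]])     # index 4 = the unlabeled query pixel
scribble_idx = np.array([0, 1, 2, 3])
scribble_colors = np.array([[1., 0., 0.],   # red scribbles (dark region)
                            [1., 0., 0.],
                            [0., 0., 1.],   # blue scribbles (bright region)
                            [0., 0., 1.]])
color = knn_colorize(feats, scribble_idx, scribble_colors, query=4)
```

Because the query pixel is close to the dark region in both coordinates and intensity, its propagated color is dominated by the red scribbles; adding texture features to the vector is what lets the full method also reach remote texture-similar regions.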
China's Yellow River Delta represents a typical area of moist semi-humid soil salinization, and this salinization has seriously affected the sustainable use of local resources. Using remote sensing technology to understand changes in the spatial and temporal patterns of salinization is key to combating regional land degradation. In this study, a feature space model for remote sensing monitoring of land salinization was constructed using Landsat 8 OLI multi-spectral images. The feature parameters were paired to construct feature space models; a total of eight models were obtained. An accuracy analysis was conducted by combining salt-tolerant vegetation data with field measurements, and the model with the highest accuracy was selected to develop salinization inversion maps for 2015 and 2020. The results showed that: (1) the total salinization area of the Yellow River Delta displayed a slight upward trend, increasing from 4244 km² in 2015 to 4629 km² in 2020, although the degree of salinization decreased substantially, with the areas of saline soil and severe salinization shrinking; (2) the areas with reduced salinization severity were mainly concentrated around cities, and primarily comprised wetlands and some regions around the Bohai Sea; (3) numerous factors, such as the implementation of the 'Bohai Granary' cultivation plan, increased human activity in greening local residential environments, and seawater intrusion caused by reduced sediment loads, have affected the distribution of salinized areas in the Yellow River Delta; and (4) the feature space method of salinization monitoring has good applicability and can be extended to other humid and semi-humid regions.
In this paper, we target a similarity search among data supply chains, which plays an essential role in optimizing the supply chain and extending its value. This problem is very challenging for application-oriented data supply chains because their high complexity makes the computation of similarity extremely complex and inefficient. We propose a feature space representation model based on key points, which can extract the key features from the subsequences of the original data supply chain and simplify it into a feature vector form. We then formulate the similarity computation of the subsequences based on multiscale features. Further, we propose an improved hierarchical clustering algorithm for similarity search over data supply chains. The main idea is to separate the subsequences into disjoint groups such that each group meets one specific clustering criterion; the cluster containing the query object is then the similarity search result. Experimental results show that the proposed approach is both effective and efficient for data supply chain retrieval.
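The two-stage pipeline — compress each subsequence into a key-point feature vector, then group the vectors so the query's cluster is the search result — can be sketched as follows. The feature design (per-segment means plus overall slope) and the greedy threshold clustering are simplified stand-ins for the paper's key-point extraction and improved hierarchical clustering.

```python
import numpy as np

def keypoint_features(series, n_segments=4):
    """Compress a subsequence into a fixed-length vector: per-segment means
    plus the overall slope — a simplified stand-in for the paper's
    key-point feature extraction."""
    s = np.asarray(series, float)
    means = np.array([seg.mean() for seg in np.array_split(s, n_segments)])
    slope = (s[-1] - s[0]) / max(len(s) - 1, 1)
    return np.append(means, slope)

def threshold_cluster(vectors, tau):
    """Greedy agglomeration: a vector joins the first cluster whose centroid
    lies within distance tau, otherwise it starts a new cluster; the cluster
    holding the query vector is the similarity-search result."""
    clusters, labels = [], []
    for v in vectors:
        for i, members in enumerate(clusters):
            if np.linalg.norm(np.mean(members, axis=0) - v) <= tau:
                members.append(v)
                labels.append(i)
                break
        else:
            clusters.append([v])
            labels.append(len(clusters) - 1)
    return labels

chains = [[1, 2, 3, 4, 5, 6, 7, 8],
          [1, 2, 3, 4, 5, 6, 7, 9],     # near-duplicate of the first
          [8, 6, 4, 2, 0, -2, -4, -6]]  # very different trend
labels = threshold_cluster([keypoint_features(c) for c in chains], tau=1.0)
```

Because similarity is computed on the short feature vectors rather than the raw subsequences, the grouping step scales with the number of subsequences instead of their length.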
基金This work was supported in part by the National Key R&D Program of China 2021YFE0110500in part by the National Natural Science Foundation of China under Grant 62062021in part by the Guiyang Scientific Plan Project[2023]48-11.
文摘Unsupervised methods based on density representation have shown their abilities in anomaly detection,but detection performance still needs to be improved.Specifically,approaches using normalizing flows can accurately evaluate sample distributions,mapping normal features to the normal distribution and anomalous features outside it.Consequently,this paper proposes a Normalizing Flow-based Bidirectional Mapping Residual Network(NF-BMR).It utilizes pre-trained Convolutional Neural Networks(CNN)and normalizing flows to construct discriminative source and target domain feature spaces.Additionally,to better learn feature information in both domain spaces,we propose the Bidirectional Mapping Residual Network(BMR),which maps sample features to these two spaces for anomaly detection.The two detection spaces effectively complement each other’s deficiencies and provide a comprehensive feature evaluation from two perspectives,which leads to the improvement of detection performance.Comparative experimental results on the MVTec AD and DAGM datasets against the Bidirectional Pre-trained Feature Mapping Network(B-PFM)and other state-of-the-art methods demonstrate that the proposed approach achieves superior performance.On the MVTec AD dataset,NF-BMR achieves an average AUROC of 98.7%for all 15 categories.Especially,it achieves 100%optimal detection performance in five categories.On the DAGM dataset,the average AUROC across ten categories is 98.7%,which is very close to supervised methods.
基金supported by the National Natural Science Foundation of China(No.U21B2003,62072250,62072250,62172435,U1804263,U20B2065,61872203,71802110,61802212)the National Key R&D Program of China(No.2021QY0700)+4 种基金the Key Laboratory of Intelligent Support Technology for Complex Environments(Nanjing University of Information Science and Technology),Ministry of Education,and the Natural Science Foundation of Jiangsu Province(No.BK20200750)Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness(No.HNTS2022002)Post Graduate Research&Practice Innvoation Program of Jiangsu Province(No.KYCX200974)Open Project Fund of Shandong Provincial Key Laboratory of Computer Network(No.SDKLCN-2022-05)the Priority Academic Program Development of Jiangsu Higher Education Institutions(PAPD)Fund and Graduate Student Scientific Research Innovation Projects of Jiangsu Province(No.KYCX231359).
文摘In recent years,various adversarial defense methods have been proposed to improve the robustness of deep neural networks.Adversarial training is one of the most potent methods to defend against adversarial attacks.However,the difference in the feature space between natural and adversarial examples hinders the accuracy and robustness of the model in adversarial training.This paper proposes a learnable distribution adversarial training method,aiming to construct the same distribution for training data utilizing the Gaussian mixture model.The distribution centroid is built to classify samples and constrain the distribution of the sample features.The natural and adversarial examples are pushed to the same distribution centroid to improve the accuracy and robustness of the model.The proposed method generates adversarial examples to close the distribution gap between the natural and adversarial examples through an attack algorithm explicitly designed for adversarial training.This algorithm gradually increases the accuracy and robustness of the model by scaling perturbation.Finally,the proposed method outputs the predicted labels and the distance between the sample and the distribution centroid.The distribution characteristics of the samples can be utilized to detect adversarial cases that can potentially evade the model defense.The effectiveness of the proposed method is demonstrated through comprehensive experiments.
基金financially supported by the National Basic Research Program of China (2009CB825105)the National Natural Science Foundation of China (41261090)
文摘Modeling soil salinity in an arid salt-affected ecosystem is a difficult task when using remote sensing data because of the complicated soil context (vegetation cover, moisture, surface roughness, and organic matter) and the weak spectral features of salinized soil. Therefore, an index such as the salinity index (SI) that only uses soil spectra may not detect soil salinity effectively and quantitatively. The use of vegetation reflectance as an indirect indicator can avoid limitations associated with the direct use of soil reflectance. The normalized difference vegetation index (NDVI), as the most common vegetation index, was found to be responsive to salinity but may not be available for retrieving sparse vegetation due to its sensitivity to background soil in arid areas. Therefore, the arid fraction integrated index (AFⅡ) was created as supported by the spectral mixture analysis (SMA), which is more appropriate for analyzing variations in vegetation cover (particularly halophytes) than NDVI in the study area. Using soil and vegetation separately for detecting salinity perhaps is not feasible. Then, we developed a new and operational model, the soil salinity detecting model (SDM) that combines AFⅡ and SI to quantitatively estimate the salt content in the surface soil. SDMs, including SDM1 and SDM2, were constructed through analyzing the spatial characteristics of soils with different salinization degree by integrating AFⅡ and SI using a scatterplot. The SDMs were then compared to the combined spectral response index (COSRI) from field measurements with respect to the soil salt content. The results indicate that the SDM values are highly correlated with soil salinity, in contrast to the performance of COSRI. Strong exponential relationships were observed between soil salinity and SDMs (R2〉0.86, RMSE〈6.86) compared to COSRI (R2=0.71, RMSE=16.21). 
These results suggest that the feature space related to biophysical properties combined with AFII and SI can effectively provide information on soil salinity.
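Since the model above combines NDVI with a salinity index, a minimal NumPy sketch of both may be useful; note that SI has several published variants and the form used here (SI = √(blue·red)) is only an assumption, as are all variable names and toy reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized difference vegetation index, in [-1, 1]."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def salinity_index(blue, red):
    """One common salinity index variant: SI = sqrt(blue * red)."""
    return np.sqrt(np.asarray(blue, dtype=float) * np.asarray(red, dtype=float))

# Toy reflectances for two pixels: a vegetated one and a bare saline one
nir = np.array([0.50, 0.30])
red = np.array([0.10, 0.25])
blue = np.array([0.05, 0.20])

v = ndvi(nir, red)            # vegetated pixel should score higher
s = salinity_index(blue, red) # saline pixel should score higher
```

In a scatterplot-based model such as the SDM, per-pixel pairs like (AFII, SI) would then be placed in a 2D feature space and related to measured salt content.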
Fund: Supported by the National Natural Science Foundation of China under Grant No. 51279033.
Abstract: The critical technical problem in underwater bottom object detection is establishing a stable feature space for echo signal classification. Past literature focuses mainly on the characteristics of object echoes in the feature space, treating reverberation only as interference. In this paper, reverberation is instead considered a signal with steady characteristics, and the clustering of reverberation in the frequency discrete wavelet transform (FDWT) feature space is studied. To extract the identifying information of echo signals, feature compression and cluster analysis are adopted, and a criterion for the separability between object echoes and reverberation is given. Results from processing experimental data show that reverberation has a steady pattern in the FDWT feature space that differs from that of object echoes, proving that reverberation and object echoes are separable.
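For illustration only, here is a tiny NumPy sketch of wavelet-based feature extraction in the spirit of the FDWT feature space above: a single-level Haar transform applied recursively, with subband energies as features. The paper's actual filter bank, decomposition depth, and feature-compression step are not specified, so this is an assumed stand-in:

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                        # pad to even length
        x = np.append(x, x[-1])
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass (detail)
    return a, d

def subband_energies(x, levels=3):
    """Energy of each detail subband plus the final approximation."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(float(np.sum(d ** 2)))
    feats.append(float(np.sum(a ** 2)))
    return np.array(feats)

sig = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 64))  # toy "echo" signal
feats = subband_energies(sig)
```

Because the Haar transform is orthonormal, signal energy is preserved across subbands, which is what makes such energies usable as stable classification features.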
Fund: Supported by the National Natural Science Foundation of China (67441830108 and 41871224).
Abstract: Spectral and spatial features in remotely sensed data play an irreplaceable role in classifying crop types for precision agriculture. Despite the thriving development of handcrafted features, designing or selecting features valid for specific crop types requires prior knowledge and thus remains an open challenge. Convolutional neural networks (CNNs) can effectively overcome this issue with their ability to generate high-level features automatically, but are still inadequate at mining spectral features compared to spatial features. This study proposed an enhanced spectral feature called the Stacked Spectral Feature Space Patch (SSFSP) for CNN-based crop classification. SSFSP is a stack of two-dimensional (2D) gridded spectral feature images that record various crop types' spatial and intensity distribution characteristics in a 2D feature space consisting of two spectral bands. SSFSP can be input into 2D-CNNs to support the simultaneous mining of spectral and spatial features, as the spectral features are converted to 2D images that a CNN can process. We tested the performance of SSFSP by using it as the input to seven CNN models and one multilayer perceptron model for crop type classification, compared to using conventional spectral features as input. Using high-spatial-resolution hyperspectral datasets at three sites, the comparative study demonstrated that SSFSP outperforms conventional spectral features in classification accuracy, robustness, and training efficiency. The theoretical analysis summarizes three reasons for its excellent performance. First, SSFSP mines the spectral interrelationship with feature generality, which reduces the required number of training samples. Second, intra-class variance can be largely reduced by grid partitioning. Third, SSFSP is a highly sparse feature, which reduces the dependence on the CNN model structure and enables early and fast convergence in model training.
In conclusion, SSFSP has great potential for practical crop classification in precision agriculture.
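The core of SSFSP, turning a patch's pixels into a 2D image over a two-band feature space, can be sketched roughly as follows. The bin count, value range, and use of raw bin counts as image intensity are assumptions, not the paper's exact construction:

```python
import numpy as np

def ssfsp_channel(band_a, band_b, bins=16, rng=(0.0, 1.0)):
    """A rough sketch of one SSFSP channel: a 2D gridded image over the
    feature space spanned by two spectral bands, recording how the patch's
    pixels are distributed in that space (bin counts used as intensity)."""
    img, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(),
                               bins=bins, range=[rng, rng])
    return img

rng = np.random.default_rng(0)
patch_a = rng.uniform(0.2, 0.4, size=(8, 8))   # toy reflectances, band A
patch_b = rng.uniform(0.6, 0.8, size=(8, 8))   # toy reflectances, band B
grid = ssfsp_channel(patch_a, patch_b)
```

The resulting image is highly sparse (pixels of one crop type occupy a compact region of the two-band space), which matches the sparsity argument given above; stacking such images for several band pairs yields the multi-channel input a 2D-CNN can consume.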
Fund: This work was supported by the National Key R&D Program of China (No. 2021YFB3301302), the National Natural Science Foundation of China (No. 52175467), and the National Science Fund of China for Distinguished Young Scholars (No. 51925505).
Abstract: Precise control of machining deformation is crucial for improving the manufacturing quality of structural aerospace components. In the machining process, different batches of blanks have different residual stress distributions, which poses a significant challenge to machining deformation control. In this study, a reinforcement learning method for machining deformation control based on a meta-invariant feature space was developed. The proposed method uses a reinforcement-learning model to dynamically control the machining process by monitoring the deformation force. Moreover, combined with the meta-invariant feature space, the proposed method learns the internal relationships among the deformation control approaches under different stress distributions to achieve machining deformation control for different batches of blanks. Finally, the experimental results show that the proposed method achieves better deformation control than two existing benchmark methods.
Abstract: With the development of large-scale text processing, the dimension of the text feature space has become larger and larger, adding considerable difficulty to natural language processing. How to reduce the dimension has become a practical problem in the field. Here we present two clustering methods, i.e., concept association and concept abstraction, to achieve this goal. The first refers to keyword clustering based on the co-occurrence of ...
Abstract: A novel Mercer kernel-based fuzzy clustering self-adaptive algorithm is presented. The Mercer kernel method is introduced into fuzzy c-means clustering; it implicitly maps the input data into a high-dimensional feature space through a nonlinear transformation. In standard fuzzy c-means and its variants, the number of clusters must be determined in advance. Here, a self-adaptive algorithm is proposed in which the number of clusters, not given in advance, is obtained automatically by a validity measure function. Finally, experiments show the better performance of the kernel-based fuzzy c-means self-adaptive algorithm.
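A hedged sketch of the kernelized membership update at the heart of such a method, assuming an RBF (Gaussian) Mercer kernel and cluster prototypes kept in input space; the paper's exact formulation and its validity-measure-driven selection of the cluster number are not reproduced here:

```python
import numpy as np

def rbf(x, v, sigma=1.0):
    """Gaussian (Mercer) kernel."""
    return np.exp(-np.sum((x - v) ** 2, axis=-1) / (2 * sigma ** 2))

def kfcm_memberships(X, V, m=2.0, sigma=1.0, eps=1e-12):
    """Membership update of kernelized fuzzy c-means.
    With an RBF kernel, the feature-space distance reduces to
    d^2(x, v) = K(x,x) - 2K(x,v) + K(v,v) = 2 * (1 - K(x, v))."""
    d2 = 2.0 * (1.0 - rbf(X[:, None, :], V[None, :, :], sigma)) + eps
    inv = d2 ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
V = np.array([[0.05, 0.0], [5.05, 5.0]])   # two toy cluster prototypes
U = kfcm_memberships(X, V)                 # rows sum to 1
```

A full algorithm would alternate this membership step with a prototype update and, per the abstract, vary the cluster count while scoring each partition with a validity function.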
Abstract: Using the viewpoint of nonlinear science and drawing on the method of selecting numerical features in pattern recognition, the physical and numerical features of precursory ground tilt data are synthetically employed. The dynamic changes of the data series are described by the numerical features in a multi-dimensional space and their distributive relations, instead of by a single factor. The relationship between ground tilt data and earthquakes is examined through recognition and classification.
Abstract: In medical research and clinical diagnosis, automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. To facilitate decision-making in health care and related areas, this paper proposes a two-step content-based medical image retrieval algorithm. First, in the preprocessing step, image segmentation is performed to distinguish image objects, and on the basis of the ...
Fund: Supported by the National Natural Science Foundation of China (No. 40872193).
Abstract: This paper presents a nonlinear multidimensional scaling model, called kernelized fourth quantification theory, which is an integration of kernel techniques and the fourth quantification theory. The model can deal with the problem of mineral prediction without defining a training area. In mineral target prediction, pre-defined statistical cells, such as grid cells, can be implicitly transformed using kernel techniques from the input space to a high-dimensional feature space, where the nonlinearly separable clusters in the input space are expected to be linearly separable. Then, the transformed cells in the feature space are mapped by the fourth quantification theory onto a low-dimensional scaling space, where the scaled cells can be visually clustered according to their spatial locations. At the same time, cells that are far away from the cluster center of the majority of the scaled cells are recognized as anomaly cells. Finally, whether the anomaly cells can serve as mineral potential target cells can be tested by spatially superimposing the known mineral occurrences onto the anomaly cells. A case study shows that nearly all the known mineral occurrences spatially coincide with the anomaly cells having nearly the smallest scaled coordinates in the one-dimensional scaling space. In the case study, the mineral target cells delineated by the new model are similar to those predicted by the well-known WofE model.
Abstract: Gene selection (feature selection) is generally performed in gene space (feature space), where a very serious curse-of-dimensionality problem always exists because the number of genes is much larger than the number of samples in gene space (G-space). This makes it difficult to model the data set in this space and lowers the confidence of the gene selection results. How to find a gene subset in this case is a challenging subject. In this paper, the above G-space is transformed into its dual space, referred to as class space (C-space), such that the number of dimensions is exactly the number of classes of the samples in G-space, and the number of samples in C-space is the number of genes in G-space. Obviously, the curse of dimensionality does not exist in C-space. A new gene selection method based on the principle of separating different classes as far as possible is presented with the help of Principal Component Analysis (PCA). The experimental results on gene selection for a real data set are evaluated with the Fisher criterion, the weighted Fisher criterion, and leave-one-out cross-validation, showing that the method presented here is effective and efficient.
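The G-space-to-C-space duality can be illustrated with a toy sketch: each gene becomes a point whose coordinates are its per-class mean expressions. The separability score below is a simple stand-in for the paper's PCA-based criterion, and all sizes and names are illustrative:

```python
import numpy as np

# Toy expression matrix: rows = samples, columns = genes (G-space),
# with two classes of samples. Sizes are illustrative only.
rng = np.random.default_rng(1)
n_genes = 200
labels = np.array([0] * 10 + [1] * 10)
X = rng.normal(0.0, 1.0, size=(20, n_genes))
X[labels == 1, :5] += 3.0          # make the first 5 genes informative

# C-space: each gene is a point whose coordinates are its per-class
# mean expressions (dimension = number of classes, here 2).
C = np.stack([X[labels == c].mean(axis=0) for c in (0, 1)], axis=1)

# Rank genes by how far apart their class means are -- a crude
# separability score standing in for the PCA-based criterion.
score = np.abs(C[:, 0] - C[:, 1])
selected = np.argsort(score)[::-1][:5]
```

Note the dimensionality flip: 200 points in a 2-dimensional C-space replace 20 points in a 200-dimensional G-space, which is why the curse of dimensionality disappears there.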
Fund: This work was financially supported by the National Natural Science Foundation of China (Grant No. 61601034) and the National Natural Science Foundation of China (Grant No. 31871525). The authors acknowledge Kimberly Moravec, PhD, from Liwen Bianji, Edanz Editing China (www.liwenbianji.cn/ac), for editing the English text of a draft of this manuscript.
Abstract: Recognition and counting of greenhouse pests are important for monitoring and forecasting pest population dynamics. This study used image processing techniques to recognize and count whiteflies and thrips on sticky traps located in a greenhouse environment. The digital images of the sticky traps were collected using an image-acquisition system under different greenhouse conditions. If a single color space is used, it is difficult to segment the small pests correctly because of the detrimental effects of non-uniform illumination in complex scenarios. Therefore, a method was proposed that first segments the pests in two color spaces, using the Prewitt operator on the I component of the hue-saturation-intensity (HSI) color space and the Canny operator on the B component of the Lab color space. The segmented results for the two color spaces were then combined, achieving 91.57% segmentation accuracy. Next, because different features of pests contribute differently to the classification of pest species, the study extracted multiple features (e.g., color and shape features) in different color spaces for each segmented pest region to improve recognition performance. Twenty decision trees were used to form a strong ensemble learning classifier with a majority voting mechanism, obtaining 95.73% recognition accuracy. The proposed method is a feasible and effective way to process greenhouse pest images. The system accurately recognized and counted pests in sticky trap images captured under real greenhouse conditions.
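To illustrate just the first segmentation stage, here is a NumPy-only sketch of the Prewitt gradient magnitude on a toy grayscale channel (standing in for the I component of HSI); the Canny stage, the Lab color space, and the two-space fusion are omitted, and the threshold is arbitrary:

```python
import numpy as np

def prewitt_magnitude(img):
    """Gradient magnitude with 3x3 Prewitt kernels (zero padding)."""
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

# Toy intensity channel: a dark "pest" blob on a bright trap background
img = np.full((9, 9), 200.0)
img[3:6, 3:6] = 20.0
edges = prewitt_magnitude(img)
mask = edges > edges.max() / 2     # crude threshold, for illustration only
```

In the paper's pipeline, a mask like this from the HSI I component would be combined with a Canny-based mask from the Lab B component before feature extraction.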
Fund: Supported by the National High Technology Research and Development Program of China (No. 2011AAXXX2035) and the Third Phase of Innovative Engineering Projects Foundation of the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences (No. 065X32CN60).
Abstract: Traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets and cannot make full use of time-domain information. This paper presents a new space moving-target detection method based on time-domain features. We first construct the time spectral data of the star map, then analyze the time-domain features of the main objects (target, stars, and background) in star maps, and finally detect the moving targets using the single-pulse feature of the time-domain signal. Experimental results on real star map target detection show that the proposed method can effectively detect the trajectories of moving targets in a star map sequence, achieving a detection probability of 99% at a false alarm rate of about 8×10^(-5), which outperforms the compared algorithms.
Fund: Supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (62121001).
Abstract: Facial emotions have great significance in human-computer interaction, virtual reality, and interpersonal communication. Existing methods for facial emotion privacy mainly concentrate on perturbing facial emotion images. However, cryptography-based perturbation algorithms are highly computationally expensive, and transformation-based perturbation algorithms only target specific recognition models. In this paper, we propose a universal feature-vector-based privacy-preserving perturbation algorithm for facial emotion. Our method protects facial emotion images in the feature space by computing tiny perturbations and adding them to the original images. In addition, the proposed algorithm can cause expression images to be recognized as specific labels. Experiments show that the protection success rate of our method is above 95% and the image quality evaluation degrades by no more than 0.003. The quantitative and qualitative results show that our proposed method balances privacy and usability.
Abstract: Under the pressure of SDG 15.3.1 compliance, it is imperative to solve the land salinization degradation problem in the Yellow River Basin, China's granary. From the view of geographical scale, six zoning units were delineated in the Yellow River Basin with 'climate-meteorology-geomorphology' as the main controlling factors, and a salinization inversion model was constructed for each zoning unit. Appropriate surface parameters were selected to construct a three-dimensional feature space for each geographical zone. Based on the cloud data processing capability of the Google Earth Engine platform, a feature space inversion process was applied for automatic inversion of salinization. Salinization distribution maps of the Yellow River Basin in 2015 and 2020 were obtained at 30 m resolution by classifying the salinization inversion results. The distribution and spatiotemporal variation of salinization, as well as its causes, were analyzed, and reasonable prevention and control suggestions were subsequently proposed. This approach could also be scaled up to larger and more complex geographical regions.
Abstract: In this paper, improvements to man-computer interactive classification of clouds based on bispectral satellite imagery are synthesized by combining the maximum likelihood automatic clustering (MLAC) and unit feature space classification (UFSC) approaches. The improved classification not only shortens the sample-training time of the UFSC method, but also eliminates the inherent shortcomings of the MLAC method, e.g.: (1) sample selection and training are confined to a single cloud image; (2) the clustering result is quite sensitive to the selection of the initial cluster center; (3) the actual classification generally cannot satisfy the normal-distribution assumption required by the MLAC method; and (4) classification errors are difficult to correct. Moreover, it makes full use of professionals' accumulated knowledge and experience in visual cloud classification and ground-observation cloud reports, ensuring both high classification accuracy and wide applicability.
Fund: This work was supported by the National Natural Science Foundation of China under Grant Nos. 61100146 and 61472351, and the Zhejiang Provincial Natural Science Foundation of China under Grant Nos. LY15F020019 and LQ14F020006. Pan was supported by the National Key Technology Research and Development Program of the Ministry of Science and Technology of China under Grant No. 2013BAH24F01. We would like to thank our CVM 2015 anonymous reviewers for their constructive and helpful comments, which definitely improved the quality of the paper.
Abstract: This paper proposes a structure-aware nonlocal energy optimization framework for interactive image colorization with sparse scribbles. Our colorization technique propagates colors to both local intensity-continuous regions and remote texture-similar regions without explicit image segmentation. We implement the nonlocal principle by computing k nearest neighbors in a high-dimensional feature space. The feature space contains not only image coordinates and intensities but also statistical texture features obtained with a direction-aligned Gabor wavelet filter. Structure maps are utilized to scale texture features to avoid artifacts along high-contrast boundaries. We show various experimental results and comparisons on image colorization, selective recoloring and decoloring, and progressive color editing to demonstrate the effectiveness of the proposed approach.
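The nonlocal principle, k nearest neighbors in a mixed feature space, can be sketched as a brute-force search. The four toy feature axes below only mimic the coordinates/intensity/texture mix; they are not the paper's actual Gabor-based features, and real systems would use an approximate-NN index rather than exhaustive distances:

```python
import numpy as np

def knn_indices(features, query_idx, k=3):
    """Brute-force k nearest neighbors in a feature space whose axes mix
    coordinates, intensity, and a (fake) texture response."""
    q = features[query_idx]
    d = np.linalg.norm(features - q, axis=1)
    d[query_idx] = np.inf               # exclude the query itself
    return np.argsort(d)[:k]

# Each row: (x, y, intensity, texture response) -- illustrative values only
feats = np.array([
    [0.0, 0.0, 0.5, 0.1],
    [0.1, 0.0, 0.5, 0.1],   # near pixel 0 in every respect
    [9.0, 9.0, 0.5, 0.1],   # spatially remote but texture/intensity-similar
    [0.2, 0.1, 0.9, 0.8],   # spatially nearby but texturally different
])
nn = knn_indices(feats, 0, k=2)
```

How far the "remote but similar" pixel ranks depends entirely on how the coordinate axes are scaled relative to the texture axes, which is the role the structure maps play in the paper.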
Fund: Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA19040501) and the Construction Project of the China Knowledge Center for Engineering Sciences and Technology (CKCEST-2021-2-18).
Abstract: China's Yellow River Delta represents a typical area of moist semi-humid soil salinization, and its salinization has seriously affected the sustainable use of local resources. Using remote sensing technology to understand changes in the spatial and temporal patterns of salinization is key to combating regional land degradation. In this study, a feature space model was constructed for remote sensing monitoring of land salinization using Landsat 8 OLI multi-spectral images. The feature parameters were paired to construct feature space models, eight in total. An accuracy analysis was conducted by combining salt-loving vegetation data with measured data, and the most accurate model was selected to develop salinization inversion maps for 2015 and 2020. The results showed that: (1) The total salinization area of the Yellow River Delta displayed a slight upward trend, increasing from 4244 km² in 2015 to 4629 km² in 2020; however, the degree of salting decreased substantially, and the areas of saline soil and severe salinization shrank. (2) The areas with reduced salinization severity were mainly concentrated around cities, comprising primarily wetlands and some regions around the Bohai Sea. (3) Numerous factors, such as the implementation of the 'Bohai Granary' cultivation plan, the increase in human activities to green local residential environments, and seawater intrusion caused by reduced sediment loads, have affected the distribution of salinization in the Yellow River Delta. (4) The feature space method of salinization monitoring has good applicability and can be promoted in humid and semi-humid regions.
基金partly supported by the National Natural Science Foundation of China(Nos.61532012,61370196,and 61672109)
Abstract: In this paper, we target similarity search among data supply chains, which plays an essential role in optimizing the supply chain and extending its value. This problem is very challenging for application-oriented data supply chains because their high complexity makes the computation of similarity extremely complex and inefficient. We propose a feature space representation model based on key points, which extracts the key features from the subsequences of the original data supply chain and simplifies each into a feature vector. We then formulate the similarity computation of the subsequences based on multiscale features. Further, we propose an improved hierarchical clustering algorithm for similarity search over data supply chains. The main idea is to separate the subsequences into disjoint groups such that each group meets one specific clustering criterion; the cluster containing the query object is then the similarity search result. The experimental results show that the proposed approach is both effective and efficient for data supply chain retrieval.
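As a rough illustration of hierarchical clustering over key-point feature vectors, here is a naive single-link agglomerative sketch. The paper's improved algorithm and its specific clustering criteria are not reproduced; the stopping distance and toy vectors are assumptions:

```python
import numpy as np

def single_link_clusters(vecs, stop_dist):
    """Naive agglomerative (single-link) clustering: repeatedly merge the
    two closest clusters until the closest pair is farther apart than
    stop_dist. A stand-in for the paper's improved hierarchical method."""
    clusters = [[i] for i in range(len(vecs))]
    while len(clusters) > 1:
        best = (np.inf, None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(vecs[i] - vecs[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        if d > stop_dist:
            break
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]           # b > a, so index a is unaffected
    return clusters

# Toy multiscale feature vectors extracted from subsequence key points
vecs = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.2, 5.1], [9.0, 0.0]])
groups = single_link_clusters(vecs, stop_dist=1.0)
```

For a similarity query, one would map the query subsequence into the same feature space and return the group whose members fall within the stopping distance of it.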