The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction of software bugs remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique comprises two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics, measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault prediction is performed by Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function provides the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The results demonstrate the superior performance of the proposed SQADEN technique, with accuracy, sensitivity, and specificity higher by 3%, 3%, 2% and 3%, and time and space lower by 13% and 15%, when compared with two state-of-the-art methods.
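As a rough illustration of the Dice-coefficient step in the metric selection stage, the following Python sketch binarizes candidate software metrics and ranks them by Dice similarity with the defect labels. The binarization threshold, the top-k value, and the random data are assumptions for illustration; this is not the SQADEN paper's Torgerson–Gower scaling procedure.

```python
# Hedged sketch: Dice-coefficient feature screening for defect metrics.
# The median threshold, k, and random data are illustrative assumptions.
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary vectors: 2|A n B| / (|A| + |B|)."""
    intersection = np.sum(a & b)
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 0.0

def select_metrics_by_dice(X: np.ndarray, y: np.ndarray, k: int = 5):
    """Binarize each software metric at its median and rank metrics by
    Dice similarity with the defect label vector y (1 = defective)."""
    X_bin = (X > np.median(X, axis=0)).astype(int)
    scores = np.array([dice_coefficient(X_bin[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k], scores

# Toy usage with random metric values and defect labels.
rng = np.random.default_rng(0)
X = rng.random((200, 12))          # 200 modules, 12 candidate metrics
y = rng.integers(0, 2, size=200)   # defect labels
top_idx, dice_scores = select_metrics_by_dice(X, y, k=5)
print("selected metric indices:", top_idx)
```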
Traditional meteorological downscaling methods face limitations due to the complex distribution of meteorological variables, which can lead to unstable forecasting results, especially in extreme scenarios. To overcome this issue, we propose a convolutional graph neural network (CGNN) model, which we enhance with multilayer feature fusion and a squeeze-and-excitation block. Additionally, we introduce a spatially balanced mean squared error (SBMSE) loss function to address the imbalanced distribution and spatial variability of meteorological variables. The CGNN is capable of extracting essential spatial features and aggregating them from a global perspective, thereby improving prediction accuracy and enhancing the model's generalization ability. Based on the experimental results, the CGNN has certain advantages in terms of bias distribution, exhibiting a smaller variance. For precipitation, both UNet and AE also demonstrate relatively small biases. For temperature, AE and CNNdense perform outstandingly during the winter. The time correlation coefficients show an improvement of at least 10% at daily and monthly scales for both temperature and precipitation. Furthermore, the SBMSE loss function displays an advantage over existing loss functions in predicting the 98th percentile and identifying areas where extreme events occur. However, the SBMSE tends to overestimate the distribution of extreme precipitation, which may be due to theoretical assumptions about the posterior distribution of the data that partially limit the effectiveness of the loss function. In future work, we will further optimize the SBMSE to improve prediction accuracy.
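One plausible reading of a spatially balanced squared-error loss is to up-weight grid cells whose target values fall into rare intensity bins, so that extremes are not dominated by the background. The numpy sketch below implements that reading; the binning and weighting scheme and the synthetic fields are assumptions rather than the paper's actual SBMSE definition.

```python
# Hedged sketch of a spatially weighted MSE in the spirit of the SBMSE idea.
# The inverse-frequency weighting is an assumption, not the paper's formula.
import numpy as np

def balanced_mse(pred: np.ndarray, target: np.ndarray, n_bins: int = 10) -> float:
    """MSE where each pixel is weighted by the inverse frequency of the
    intensity bin its target value falls into."""
    edges = np.quantile(target, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.digitize(target, edges[1:-1]), 0, n_bins - 1)
    counts = np.bincount(bins.ravel(), minlength=n_bins).astype(float)
    weights = (counts.sum() / np.maximum(counts, 1.0))[bins]  # rare bins get large weights
    weights /= weights.mean()                                 # keep the scale comparable to MSE
    return float(np.mean(weights * (pred - target) ** 2))

# Toy grids: heavy-tailed "precipitation" field and a noisy prediction.
rng = np.random.default_rng(1)
target = rng.gamma(shape=0.5, scale=5.0, size=(64, 64))
pred = target + rng.normal(0.0, 1.0, size=target.shape)
print("plain MSE:    ", np.mean((pred - target) ** 2))
print("balanced MSE: ", balanced_mse(pred, target))
```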
In recent years, interest in damage identification of structural components through innovative techniques has grown significantly. Damage identification has always been a crucial concern in quality assessment and load capacity rating of infrastructure. Researchers therefore focus on proposing efficient tools that identify damage at an early stage to prevent sudden failure of structural components, ensuring public safety and reducing asset management costs. Sensing technologies, together with data analysis through various techniques and machine learning approaches, have been the area of interest for these innovative techniques. The purpose of this research is to develop a robust method for automatic condition assessment of real-life concrete structures capable of detecting relatively small cracks at early stages. A damage identification algorithm is proposed that uses hybrid approaches to analyze the sensor data. The data were obtained from transducers mounted on concrete beams under static loading in the laboratory and are used as the input parameters. The method relies only on the measured time responses. After filtering and normalization of the data, damage-sensitive statistical features are extracted from the signals and used as inputs to a Self-Advising Support Vector Machine (SA-SVM) for classification in the civil engineering area. Finally, the results are compared with traditional methods to investigate the feasibility of the proposed hybrid algorithm. It is demonstrated that the presented method can reliably detect cracks in the structure and thereby enable real-time infrastructure health monitoring.
Thermal images, or thermograms, have become a new type of signal for machine condition monitoring and fault diagnosis due to their capability to display real-time temperature distribution and the possibility of indicating a machine's operating condition through its temperature. In this paper, the use of second-order statistical features of thermograms in association with minimum redundancy maximum relevance (mRMR) feature selection and simplified fuzzy ARTMAP (SFAM) classification is investigated for rotating machinery fault diagnosis. The thermograms of different machine conditions are first preprocessed to improve the image contrast, remove noise, and crop the regions of interest (ROIs). Then, an enhanced algorithm based on bi-dimensional empirical mode decomposition is applied to further increase the quality of the ROIs before the second-order statistical features are extracted from their gray-level co-occurrence matrix (GLCM). The features most relevant to the machine condition are selected from the total feature set by mRMR and are fed into SFAM to accomplish the fault diagnosis. To verify this investigation, thermograms acquired from different conditions of a fault simulator, including normal, misalignment, faulty bearing, and mass unbalance, are used. This investigation also provides a comparative study of SFAM and other traditional methods such as back-propagation and probabilistic neural networks. The results show that the second-order statistical features used in this framework can provide a plausible accuracy in fault diagnosis of rotating machinery.
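For readers who want to reproduce the second-order (GLCM) feature step, the sketch below uses scikit-image's graycomatrix and graycoprops on a stand-in ROI; the pixel distance, angles, and property list are common defaults, not necessarily the settings used in the paper.

```python
# Hedged sketch: second-order (GLCM) texture features from a grayscale ROI,
# using scikit-image (>= 0.19, where the functions are named graycomatrix /
# graycoprops). The random ROI stands in for a preprocessed thermogram region.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)
roi = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

# Co-occurrence matrices for one pixel distance and four directions.
glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

# A typical second-order feature vector, averaged over directions.
props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
features = {p: float(graycoprops(glcm, p).mean()) for p in props}
print(features)
```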
On the basis of the Arctic monthly mean sea ice extent data set for 1953-1984, the Arctic region is divided into eight subregions, and analyses of empirical orthogonal functions, power spectrum, and maximum entropy spectrum are made to identify the major spatial and temporal features of the sea ice fluctuations within the 32-year period. A brief, tentative physical explanation is then suggested. The results show that both seasonal and non-seasonal variations of the sea ice extent are remarkable, and its mean annual peripheral positions as well as their interannual shifting amplitudes differ considerably among the subregions. These features are primarily affected by solar radiation, ocean circulation, sea surface temperature, and maritime-continental contrast, while the non-seasonal variations are most likely affected by cosmic-geophysical factors such as Earth pole shift, Earth rotation oscillation, and solar activity.
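The EOF part of this analysis is a standard singular value decomposition of the anomaly matrix. The following sketch shows that decomposition on placeholder data shaped like the study's eight-subregion monthly series; the numbers are random, not the actual sea-ice record.

```python
# Minimal sketch of an empirical orthogonal function (EOF) analysis via SVD.
# The data are random placeholders with the study's dimensions (384 months x 8 subregions).
import numpy as np

rng = np.random.default_rng(3)
ice_extent = rng.normal(10.0, 2.0, size=(384, 8))        # 32 years x 12 months, 8 subregions

# Remove the mean annual cycle to obtain non-seasonal anomalies.
monthly_mean = ice_extent.reshape(32, 12, 8).mean(axis=0)   # climatology, 12 x 8
anomalies = ice_extent - np.tile(monthly_mean, (32, 1))

# EOFs are the right singular vectors; principal components are the projections.
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
explained_variance = s**2 / np.sum(s**2)
eofs = vt                      # each row: spatial pattern over the 8 subregions
pcs = u * s                    # each column: time series of one mode
print("variance explained by first two EOFs:", explained_variance[:2])
```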
With the increasing popularity of high-resolution remote sensing images, remote sensing image retrieval (RSIR) has become a topic of major interest. A combination of global non-subsampled shearlet transform (NSST)-domain statistical features (NSSTds) and local three-dimensional local ternary pattern (3D-LTP) features is proposed for high-resolution remote sensing images. We model the NSST image coefficients of the detail subbands using a 2-state Laplacian mixture (LM) distribution, whose three parameters are estimated using the Expectation-Maximization (EM) algorithm. We also calculate statistical parameters such as subband kurtosis and skewness from the detail subbands, along with the mean and standard deviation calculated from the approximation subband, and concatenate all of them with the 2-state LM parameters to describe the global features of the image. The various properties of the NSST, such as multiscale analysis, localization, and flexible directional sensitivity, make it a suitable choice to provide an effective approximation of an image. To extract dense local features, a new 3D-LTP is proposed in which dimension reduction is performed via the selection of 'uniform' patterns. The 3D-LTP is calculated from the spatial RGB planes of the input image. The proposed inter-channel 3D-LTP not only exploits the local texture information but also captures the color information. Finally, a fused feature representation (NSSTds-3DLTP) is proposed using the new global (NSSTds) and local (3D-LTP) features to enhance the discriminativeness of the features. The retrieval performance of the proposed NSSTds-3DLTP features is tested on three challenging remote sensing image datasets, WHU-RS19, the Aerial Image Dataset (AID), and PatternNet, in terms of mean average precision (MAP), average normalized modified retrieval rank (ANMRR), and precision-recall (P-R) graphs. The experimental results are encouraging, and the NSSTds-3DLTP features lead to superior retrieval performance compared with many well-known existing descriptors such as Gabor RGB, Granulometry, local binary pattern (LBP), Fisher vector (FV), vector of locally aggregated descriptors (VLAD), and median robust extended local binary pattern (MRELBP). For the WHU-RS19 dataset, in terms of {MAP, ANMRR}, the NSSTds-3DLTP improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD, and MRELBP descriptors by {41.93%, 20.87%}, {92.30%, 32.68%}, {86.14%, 31.97%}, {18.18%, 15.22%}, {8.96%, 19.60%}, and {15.60%, 13.26%}, respectively. For AID, in terms of {MAP, ANMRR}, the NSSTds-3DLTP improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD, and MRELBP descriptors by {152.60%, 22.06%}, {226.65%, 25.08%}, {185.03%, 23.33%}, {80.06%, 12.16%}, {50.58%, 10.49%}, and {62.34%, 3.24%}, respectively. For PatternNet, the NSSTds-3DLTP improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD, and MRELBP descriptors by {32.79%, 10.34%}, {141.30%, 24.72%}, {17.47%, 10.34%}, {83.20%, 19.07%}, {21.56%, 3.60%}, and {19.30%, 0.48%}, respectively, in terms of {MAP, ANMRR}. The moderate dimensionality of the simple NSSTds-3DLTP allows the system to run in real time.
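To make the global statistical part of the descriptor concrete, the sketch below computes subband kurtosis and skewness plus approximation mean and standard deviation. Because the NSST is not available in standard Python packages, a PyWavelets 2-D wavelet decomposition stands in for the multiscale transform, and the Laplacian-mixture parameters are omitted.

```python
# Hedged sketch of the global statistical features: kurtosis/skewness of detail
# subbands and mean/std of the approximation subband. A wavelet decomposition
# is used as a stand-in for the NSST; the image is a random placeholder.
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(4)
image = rng.random((256, 256))                     # stand-in for a remote sensing band

coeffs = pywt.wavedec2(image, wavelet="db2", level=3)
approx, detail_levels = coeffs[0], coeffs[1:]

features = [float(approx.mean()), float(approx.std())]      # approximation statistics
for level in detail_levels:                                  # (horizontal, vertical, diagonal)
    for band in level:
        features.append(float(kurtosis(band, axis=None)))
        features.append(float(skew(band, axis=None)))

print("global feature vector length:", len(features))       # 2 + 3 levels x 3 bands x 2 stats
```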
Rock failure can cause serious geological disasters, and the non-extensive statistical features of electric potential (EP) are expected to provide valuable information for disaster prediction. In this paper, uniaxial compression experiments with EP monitoring were carried out on fine sandstone, marble, and granite samples under four displacement rates. The Tsallis entropy q value of the EPs is used to analyze the self-organization evolution of rock failure. The influences of displacement rate and rock type on the q value are then explored through mineral structure and fracture modes. A self-organized critical prediction method based on the q value is proposed. The results show that the probability density function (PDF) of the EPs follows a q-Gaussian distribution. The displacement rate is positively correlated with the q value. As the displacement rate increases, the fracture mode changes, the damage degree intensifies, and the microcrack network becomes denser. The influence of rock type on the q value is related to the burst intensity of energy release and the crack fracture mode. The q value of EPs can be used as an effective prediction index for rock failure, like the b value of acoustic emission (AE). The results provide a useful reference and method for the monitoring and early warning of geological disasters.
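A simple way to estimate a Tsallis q value from fluctuation data is to fit a q-Gaussian shape to the signal's histogram. The sketch below does this with scipy's curve_fit on a heavy-tailed synthetic signal; the unnormalized functional form, the bounds, and the stand-in data are assumptions, not the paper's estimation procedure.

```python
# Hedged sketch: fit a q-Gaussian shape to the histogram of a fluctuation signal
# to estimate the Tsallis q value. Synthetic Student-t data stand in for EP fluctuations.
import numpy as np
from scipy.optimize import curve_fit

def q_gaussian(x, amp, beta, q):
    """Unnormalized q-Gaussian: amp * [1 + (q - 1) * beta * x^2]^(1 / (1 - q))."""
    base = 1.0 + (q - 1.0) * beta * x**2
    return amp * np.power(np.clip(base, 1e-12, None), 1.0 / (1.0 - q))

rng = np.random.default_rng(5)
signal = rng.standard_t(df=4, size=20000)          # heavy-tailed stand-in signal

counts, edges = np.histogram(signal, bins=101, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(q_gaussian, centers, counts, p0=[0.4, 1.0, 1.5],
                    bounds=([0.0, 1e-6, 1.001], [10.0, 100.0, 2.999]))
print("estimated q value:", popt[2])
```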
Cleats are the dominant micro-fracture network controlling the macro-mechanical behavior of coal. Improved understanding of the spatial characteristics of cleat networks is therefore important to the coal mining industry. Discrete fracture networks (DFNs) are increasingly used in engineering analyses to spatially model fractures at various scales. The reliability of coal DFNs largely depends on the confidence in the input cleat statistics. Estimates of these parameters can be made from image-based three-dimensional (3D) characterization of coal cleats using X-ray micro-computed tomography (μCT). One key step in this process, after cleat extraction, is the separation of individual cleats; without it, the cleats form a connected network and statistics for different cleat sets cannot be measured. In this paper, a feature extraction-based image processing method is introduced to identify and separate distinct cleat groups from 3D X-ray μCT images. Kernels (filters) representing explicit cleat features of coal are built, and cleat separation is successfully achieved by convolutional operations on the 3D coal images. The new method is applied to a coal specimen 80 mm in diameter and 100 mm in length acquired from an Anglo American Steelmaking Coal mine in the Bowen Basin, Queensland, Australia. It is demonstrated that the new method produces reliable cleat separation capable of defining individual cleats and preserving 3D topology after separation. Bedding-parallel fractures are also identified and separated, which has historically been challenging to delineate and rarely reported. A variety of cleat/fracture statistics is measured, which not only quantitatively characterizes the cleat/fracture system but can also be used for DFN modeling. Finally, variability and heterogeneity with respect to the core axis are investigated. Significant heterogeneity is observed, suggesting that the representative elementary volume (REV) of the cleat groups for engineering purposes may be a complex problem requiring careful consideration.
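The core idea of orientation-selective kernels can be illustrated with small planar filters convolved over a binary fracture volume. The sketch below does this with scipy.ndimage on a synthetic volume; the kernel shapes and threshold are illustrative, whereas the paper builds its kernels from explicit cleat features of the imaged coal.

```python
# Hedged sketch of orientation-selective filtering of a binary fracture volume
# with hand-built planar kernels. The synthetic planes, kernels, and threshold
# are illustrative assumptions, not the paper's actual filters.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(6)
volume = np.zeros((64, 64, 64), dtype=float)
volume[:, :, 30] = 1.0                       # synthetic face cleat (plane normal to z)
volume[20, :, :] = 1.0                       # synthetic butt cleat (plane normal to x)
volume += (rng.random(volume.shape) < 0.01)  # sparse salt noise

# Planar kernels: each responds strongly to voxels lying on a plane of one orientation.
kernel_z = np.zeros((5, 5, 5)); kernel_z[:, :, 2] = 1.0 / 25.0   # plane normal to z
kernel_x = np.zeros((5, 5, 5)); kernel_x[2, :, :] = 1.0 / 25.0   # plane normal to x

response_z = convolve(volume, kernel_z, mode="constant")
response_x = convolve(volume, kernel_x, mode="constant")

# Assign each fracture voxel to the orientation with the strongest response.
face_cleats = (volume > 0) & (response_z > 0.8) & (response_z > response_x)
butt_cleats = (volume > 0) & (response_x > 0.8) & (response_x > response_z)
print("face-cleat voxels:", int(face_cleats.sum()), "butt-cleat voxels:", int(butt_cleats.sum()))
```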
In the literature, features based on First and Second Order Statistics that characterize textures are used for the classification of images. Features based on texture statistics provide a far smaller number of relevant and distinguishable features than existing methods based on wavelet transformation. In this paper, we investigate the performance of texture-based features in comparison to wavelet-based features with commonly used classifiers for the classification of Alzheimer's disease based on T2-weighted MRI brain images. The performance is evaluated in terms of sensitivity, specificity, accuracy, and training and testing time. Experiments are performed on publicly available medical brain images. The experimental results show that the performance with First and Second Order Statistics based features is significantly better than that of existing methods based on wavelet transformation in terms of all performance measures for all classifiers.
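As an example of the first-order statistics referred to here, the sketch below computes histogram-based texture features (mean, variance, skewness, kurtosis, energy, entropy) from a stand-in image slice; this feature list is a common choice and may differ from the exact set used in the study.

```python
# Hedged sketch: first-order statistical texture features from an image's
# gray-level histogram. The random image stands in for a T2-weighted MRI slice.
import numpy as np

def first_order_features(image: np.ndarray, levels: int = 256) -> dict:
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist.astype(float) / hist.sum()                 # gray-level probabilities
    g = np.arange(levels)
    mean = np.sum(g * p)
    var = np.sum((g - mean) ** 2 * p)
    std = np.sqrt(var)
    skewness = np.sum(((g - mean) / std) ** 3 * p) if std > 0 else 0.0
    kurt = np.sum(((g - mean) / std) ** 4 * p) - 3.0 if std > 0 else 0.0
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"mean": mean, "variance": var, "skewness": skewness,
            "kurtosis": kurt, "energy": energy, "entropy": entropy}

rng = np.random.default_rng(7)
slice_img = rng.integers(0, 256, size=(176, 176))
print(first_order_features(slice_img))
```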
Log anomaly detection is an important paradigm for system troubleshooting. Existing log anomaly detection based on Long Short-Term Memory (LSTM) networks is time-consuming when handling long sequences. The Transformer model has been introduced to improve efficiency. However, most existing Transformer-based log anomaly detection methods convert unstructured log messages into structured templates by log parsing, which introduces parsing errors. They extract only simple semantic features, ignoring other features, and are generally supervised, relying on the amount of labeled data. To overcome the limitations of existing methods, this paper proposes a novel unsupervised log anomaly detection method based on multiple features (UMFLog). UMFLog includes two sub-models that consider two kinds of features: semantic features and statistical features, respectively. UMFLog applies the original log content with detailed parameters, instead of templates or template IDs, to avoid log parsing errors. In the first sub-model, UMFLog uses Bidirectional Encoder Representations from Transformers (BERT), instead of random initialization, to extract effective semantic features, and an unsupervised hypersphere-based Transformer model to learn compact log sequence representations and obtain anomaly candidates. In the second sub-model, UMFLog exploits a statistical-feature-based Variational Autoencoder (VAE) over word occurrence counts to identify the final anomalies from the anomaly candidates. Extensive experiments and evaluations are conducted on three real public log datasets. The results show that UMFLog significantly improves F1-scores compared with the state-of-the-art (SOTA) methods because of the multi-feature design.
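The hypersphere step can be illustrated without the Transformer itself: estimate a center from embeddings of normal log sequences and flag test sequences whose squared distance exceeds a radius learned from the training scores. In the numpy sketch below, random vectors stand in for the BERT-based sequence representations, and the 99th-percentile threshold is an assumption.

```python
# Hedged sketch of hypersphere-style anomaly-candidate scoring on sequence
# embeddings. Random vectors stand in for learned log-sequence representations.
import numpy as np

rng = np.random.default_rng(8)
train_emb = rng.normal(0.0, 1.0, size=(1000, 64))            # normal log sequences
test_emb = np.vstack([rng.normal(0.0, 1.0, size=(95, 64)),   # mostly normal...
                      rng.normal(4.0, 1.0, size=(5, 64))])   # ...plus a few anomalies

center = train_emb.mean(axis=0)                               # hypersphere center
train_scores = np.sum((train_emb - center) ** 2, axis=1)
threshold = np.quantile(train_scores, 0.99)                   # radius from normal data

test_scores = np.sum((test_emb - center) ** 2, axis=1)
candidates = np.flatnonzero(test_scores > threshold)          # anomaly candidates
print("flagged sequence indices:", candidates)
```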
A tyre pressure monitoring system (TPMS) is compulsory in most countries, such as the United States and the European Union. Existing systems depend on pressure sensors strapped onto the tyre or on wheel speed sensor data; a difference in wheel speed triggers an alarm based on the implemented algorithm. In this paper, a machine learning approach is proposed as a new method to monitor tyre pressure by extracting the vertical vibrations from the wheel hub of a moving vehicle using an accelerometer. Statistical features and histogram features are computed from the acquired signals in the feature extraction process. The LMT (Logistic Model Tree) was used as the classifier and attained a classification accuracy of 92.5% with 10-fold cross validation for statistical features and 90.5% with 10-fold cross validation for histogram features. The proposed model can be used to monitor automobile tyre pressure successfully.
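A minimal version of the statistical-feature path looks like the sketch below: per-window descriptive statistics from vibration signals, classified with 10-fold cross validation. Since scikit-learn has no Logistic Model Tree, a decision tree stands in for the LMT, and the synthetic windows are placeholders for real wheel-hub data.

```python
# Hedged sketch: statistical features per vibration window plus a classifier
# with 10-fold cross validation. A DecisionTree substitutes for the LMT used
# in the paper; the signals are synthetic placeholders.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def window_features(w: np.ndarray) -> list:
    return [w.mean(), w.std(), w.min(), w.max(),
            skew(w), kurtosis(w), np.sqrt(np.mean(w**2))]   # incl. RMS

rng = np.random.default_rng(9)
X, y = [], []
for label, scale in enumerate([1.0, 1.6]):          # 0: normal pressure, 1: low pressure
    for _ in range(100):
        w = rng.normal(0.0, scale, size=512)         # one vibration window
        X.append(window_features(w)); y.append(label)

scores = cross_val_score(DecisionTreeClassifier(random_state=0), np.array(X), np.array(y), cv=10)
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```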
Forecasting the movement of the stock market is a long-standing and attractive topic. This paper implements different statistical learning models to predict the movement of the S&P 500 index. The S&P 500 index is influenced by other important financial indexes across the world, such as commodity prices and financial technical indicators. This paper systematically investigates four supervised learning models, namely Logistic Regression, Gaussian Discriminant Analysis (GDA), Naive Bayes, and Support Vector Machine (SVM), in the forecast of the S&P 500 index. After several experiments optimizing features and models, especially the SVM kernel selection and feature selection for the different models, this paper concludes that an SVM model with a Radial Basis Function (RBF) kernel can achieve an accuracy rate of 62.51% for the future market trend of the S&P 500 index.
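The final model choice, an RBF-kernel SVM for up/down classification, can be sketched with scikit-learn as below. The lagged-return features and the synthetic series are illustrative stand-ins for the commodity prices and technical indicators used in the paper.

```python
# Hedged sketch: an RBF-kernel SVM predicting next-day up/down movement from
# lagged returns of a synthetic series (placeholder for real index features).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(10)
returns = rng.normal(0.0, 0.01, size=2000)                    # stand-in daily returns

lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)                          # next-day direction

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```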
Statistical Signal Transmission (SST) is a technique based on orthogonal frequency-division multiplexing (OFDM) that adopts cyclostationary features and can transmit extra information without additional bandwidth. However, the more complicated environment in 5G communication systems, especially fast time-varying scenarios, dramatically degrades the performance of SST. In this paper, we propose a fragmental weight-conservation combining (FWCC) scheme for SST to overcome its performance degradation under fast time-varying channels. The proposed FWCC scheme consists of the following phases: (1) slice the received OFDM stream into pieces; (2) assign different weights to fine and contaminated pieces, respectively; (3) combine the cyclic autocorrelation function energies of all the pieces; and (4) compute the final feature and demodulate the SST data. Through these procedures, the detection accuracy of SST is theoretically refined under fast time-varying channels. This inference is confirmed through numerical results in this paper, which demonstrate that the BER performance of the proposed scheme outperforms that of the original scheme under both ideal and imperfect channel estimation conditions. In addition, we also find an empirical optimal weight distribution strategy for the proposed FWCC scheme, which facilitates practical applications.
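The cyclic autocorrelation function (CAF) at the cyclic frequency induced by the OFDM cyclic prefix is the quantity being combined across pieces. The sketch below computes a CAF magnitude on a toy cyclic-prefix OFDM stream and combines piece-wise energies with example weights; the weights, piece count, and OFDM parameters are assumptions, not the FWCC paper's settings.

```python
# Hedged sketch: cyclic autocorrelation of a cyclic-prefix OFDM stream and a
# simple weighted combination of piece-wise CAF energies (the FWCC idea).
import numpy as np

def caf(x: np.ndarray, alpha: float, lag: int) -> complex:
    """R_x^alpha(lag) = (1/N) * sum_n x[n+lag] * conj(x[n]) * exp(-j*2*pi*alpha*n)."""
    n = np.arange(len(x) - lag)
    return np.sum(x[lag:] * np.conj(x[:len(x) - lag]) * np.exp(-2j * np.pi * alpha * n)) / len(x)

# Build a toy cyclic-prefix OFDM stream: N subcarriers, CP of length L.
rng = np.random.default_rng(11)
N, L, n_symbols = 64, 16, 40
symbols = []
for _ in range(n_symbols):
    data = (rng.integers(0, 2, N) * 2 - 1) + 1j * (rng.integers(0, 2, N) * 2 - 1)
    time_sym = np.fft.ifft(data)
    symbols.append(np.concatenate([time_sym[-L:], time_sym]))     # prepend cyclic prefix
stream = np.concatenate(symbols) + 0.1 * (rng.normal(size=(N + L) * n_symbols)
                                          + 1j * rng.normal(size=(N + L) * n_symbols))

# Split into pieces, weight each piece, and combine CAF energies.
pieces = np.array_split(stream, 8)
weights = np.ones(8); weights[3] = 0.2            # e.g. down-weight one "contaminated" piece
alpha = 1.0 / (N + L)                             # a cyclic frequency of the CP feature
energy = sum(w * abs(caf(p, alpha, lag=N)) ** 2 for w, p in zip(weights, pieces))
print("combined CAF energy at the CP cyclic frequency:", energy)
```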
Hydraulic brakes in automobiles are an important control component, used not only for the safety of the passengers but also for others moving on the road. Therefore, monitoring the condition of the brake components is essential. The brake elements can be monitored by studying the vibration characteristics obtained from the brake system using a proper signal processing technique through machine learning approaches. The vibration signals were captured using an accelerometer sensor under various fault conditions. The acquired vibration signals were processed to extract meaningful information as features. The condition of the brake system can be predicted with a feature-based machine learning approach using the extracted features. This study focuses on a mechatronics system for data acquisition and on signal processing techniques for extracting features such as statistical, histogram, and wavelet features. Comparative results from an experimental study are presented to determine the effectiveness of the suggested signal processing techniques for monitoring the condition of the brake system.
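As one example of the wavelet features mentioned here, the sketch below computes relative band energies of a discrete wavelet decomposition of a vibration window with PyWavelets; the wavelet family and decomposition depth are illustrative choices.

```python
# Hedged sketch: relative energies of discrete wavelet decomposition bands as
# features of a vibration window. Wavelet family and level are assumptions.
import numpy as np
import pywt

def wavelet_energy_features(signal: np.ndarray, wavelet: str = "db4", level: int = 4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)        # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()                           # relative energy per band

rng = np.random.default_rng(12)
window = rng.normal(0.0, 1.0, size=1024) + 0.5 * np.sin(2 * np.pi * 50 * np.arange(1024) / 1000.0)
print("relative wavelet band energies:", np.round(wavelet_energy_features(window), 3))
```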
Emanating from the idea of reinvestigating the ancient medical system of Ayurveda, or Traditional Indian Medicine (TIM), our recent study showed significant applications of the analysis of arterial pulse waveforms for non-invasive diagnosis of cardiovascular functions. Here we present the results of further investigations analyzing the relation of pulse characteristics to some clinical and pathological parameters and to other features that are of diagnostic importance in Ayurveda.
The motivation for this article is to propose new damage classifiers based on a supervised learning problem for locating and quantifying damage. A new feature extraction approach using time series analysis is introduced to extract damage-sensitive features from auto-regressive (AR) models. This approach sets out to improve current feature extraction techniques in the context of time series modeling. The coefficients and residuals of the AR model obtained from the proposed approach are selected as the main features and are applied to the proposed supervised learning classifiers, which are categorized as coefficient-based and residual-based classifiers. These classifiers compute the relative errors in the extracted features between the undamaged and damaged states. Eventually, the abilities of the proposed methods to localize and quantify single and multiple damage scenarios are verified by applying experimental data from a laboratory frame and a four-story steel structure. Comparative analyses are performed to validate the superiority of the proposed methods over some existing techniques. The results show that the proposed classifiers, with the aid of features extracted by the proposed feature extraction approach, are able to locate and quantify damage; however, the residual-based classifiers yield better results than the coefficient-based classifiers. Moreover, these methods are superior to some classical techniques.
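A stripped-down version of the coefficient-based and residual-based indices might look like the sketch below: fit an AR(p) model by least squares to the undamaged and damaged responses and compare coefficients and residual variance. The AR order, the synthetic signals, and the relative-error scores are illustrative assumptions, not the article's exact classifiers.

```python
# Hedged sketch: AR(p) coefficients and residual variance as damage-sensitive
# features, compared between baseline and "damaged" synthetic signals.
import numpy as np

def ar_features(x: np.ndarray, p: int = 8):
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])  # lagged regressors
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coeffs
    return coeffs, float(np.var(residuals))

def relative_error(ref: np.ndarray, cur: np.ndarray) -> float:
    return float(np.linalg.norm(cur - ref) / np.linalg.norm(ref))

rng = np.random.default_rng(13)
t = np.arange(4096)
undamaged = np.sin(2 * np.pi * 0.03 * t) + 0.3 * rng.normal(size=t.size)
damaged = np.sin(2 * np.pi * 0.027 * t) + 0.3 * rng.normal(size=t.size)   # shifted frequency

coef_u, var_u = ar_features(undamaged)
coef_d, var_d = ar_features(damaged)
print("coefficient-based damage index:", relative_error(coef_u, coef_d))
print("residual-based damage index:   ", abs(var_d - var_u) / var_u)
```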
A novel method to extract conic blending features in reverse engineering is presented. Different from methods that recover constant- and variable-radius blends from unorganized points, it contains not only novel segmentation and feature recognition techniques, but also a bias-correction technique to capture a more reliable distribution of feature parameters along the spine curve. The segmentation, based on point classification, separates the points in the conic blend region from the input point cloud. The available feature parameters of the cross-sectional curves are extracted through the processes of slicing the point cloud with planes, conic curve fitting, and parameter estimation and compensation. The extracted parameters and their distribution laws are refined according to statistical theory, using tools such as regression analysis and hypothesis testing. The proposed method can accurately capture the original design intent and conveniently guide the reverse modeling process. Application examples are presented to verify the high precision and stability of the proposed method.
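The conic-fitting step for one cross-sectional slice can be sketched as a linear least-squares problem on the general conic equation, solved via SVD. The noisy circular-arc points below stand in for a sliced blend section, and the circle-specific radius recovery at the end is an added simplification.

```python
# Hedged sketch: least-squares fit of a general conic
# a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to a 2-D cross-sectional slice.
import numpy as np

def fit_conic(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Return conic coefficients [a, b, c, d, e, f] with unit norm."""
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return vt[-1]                                   # direction minimizing ||D @ coeffs||

# Noisy points on a circular arc of radius 5 centered at (2, -1), standing in
# for one blend cross-section obtained by slicing the point cloud with a plane.
rng = np.random.default_rng(14)
theta = rng.uniform(0.2, 2.0, size=200)
x = 2.0 + 5.0 * np.cos(theta) + 0.01 * rng.normal(size=200)
y = -1.0 + 5.0 * np.sin(theta) + 0.01 * rng.normal(size=200)

coeffs = fit_conic(x, y)
a, b, c, d, e, f = coeffs / coeffs[0]               # normalize so a = 1
radius = np.sqrt((d / 2) ** 2 + (e / 2) ** 2 - f)   # valid when the conic is ~a circle (a=c, b=0)
print("estimated center:", (-d / 2, -e / 2), "estimated radius:", radius)
```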