Iris recognition is a biometric method of person authentication based on the unique pattern of the human iris, applied in access-control systems that require a high level of security. It is a popular approach to recognizing humans, and it is essential to understand it. The objective of the method is to assign a unique subject to each iris image for person authentication and to provide an effective feature representation for iris recognition through image analysis. This paper proposes a new optimization and recognition process for iris feature selection using a Modified ADMM and Deep Learning Algorithm (MADLA). To improve security performance through feature extraction, the proposed algorithm extracts strong identifying features of a person's iris in less time and with better accuracy, improving performance in access control and in the level of security. Evaluations on iris data demonstrate the improvement in recognition accuracy. The proposed methodology improves the recognition of iris features and can be incorporated into iris recognition systems.
Numerous vibration-based techniques are rarely used in diesel engine fault diagnosis in a direct way, because the surface vibration signals of diesel engines have complex non-stationary and nonlinear time-varying features. To investigate the fault diagnosis of diesel engines, fractal correlation dimension, wavelet energy, and wavelet entropy (features reflecting the fractal and energy characteristics of diesel engine faults) are extracted from the decomposed signals by analyzing vibration acceleration signals derived from the cylinder head in seven different valve-train states. An intelligent fault detector, FastICA-SVM, is applied for diesel engine fault diagnosis and classification. The results demonstrate that FastICA-SVM achieves higher classification accuracy and better generalization performance in small-sample recognition. Moreover, when the fractal correlation dimension and the wavelet energy and entropy of the diesel engine vibration signal are used as input vectors to the FastICA-SVM classifier, they produce excellent classification results. The proposed methodology improves the accuracy of feature extraction and the fault diagnosis of diesel engines.
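The FastICA-plus-SVM idea can be sketched with scikit-learn. The data below is a synthetic stand-in for windowed cylinder-head vibration features in two engine states; the dimensions, state labels, and signal statistics are illustrative assumptions, not the paper's measurements.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed vibration features from two engine states.
n = 60
state_normal = rng.normal(0.0, 1.0, (n, 20))
state_faulty = rng.normal(0.8, 1.0, (n, 20))
X = np.vstack([state_normal, state_faulty])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)

# FastICA unmixes the measured channels into independent components; the SVM
# then classifies the engine state from those components.
clf = make_pipeline(FastICA(n_components=8, random_state=0, max_iter=1000),
                    SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

On real signals, the ICA step would be applied to features such as the wavelet energies and fractal dimensions described above rather than to raw windows.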
Objective and quantitative assessment of skin conditions is essential for cosmeceutical studies and research on skin aging and skin regeneration. Various handcraft-based image processing methods have been proposed to evaluate skin conditions objectively, but they have unavoidable disadvantages when used to analyze skin features accurately. This study proposes a hybrid segmentation scheme consisting of Deeplab v3+ with an Inception-ResNet-v2 backbone, LightGBM, and morphological processing (MP) to overcome the shortcomings of handcraft-based approaches. First, we apply Deeplab v3+ with an Inception-ResNet-v2 backbone for pixel segmentation of skin wrinkles and cells. Then, LightGBM and MP are used to enhance the pixel segmentation quality. Finally, we determine several skin features based on the results of wrinkle and cell segmentation. Our proposed segmentation scheme achieved a mean accuracy of 0.854, a mean intersection over union of 0.749, and a mean boundary F1 score of 0.852, improvements of 1.1%, 6.7%, and 14.8%, respectively, over the panoptic-based semantic segmentation method.
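The MP refinement step can be illustrated with a minimal sketch: removing small false-positive components from a binary wrinkle mask. The mask below is invented for illustration; the paper's actual pipeline uses Deeplab v3+ and LightGBM outputs as input to this kind of clean-up.

```python
import numpy as np
from scipy import ndimage

# A noisy binary wrinkle mask: one line-like structure plus isolated
# speckles, standing in for raw pixel-segmentation output.
mask = np.zeros((32, 32), dtype=bool)
mask[16, 4:28] = True                      # wrinkle-like structure (24 px)
for r, c in [(2, 2), (5, 30), (29, 7)]:
    mask[r, c] = True                      # false-positive speckles

# Morphological clean-up: drop connected components smaller than 5 pixels.
labeled, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labeled, range(1, n + 1))
keep_labels = 1 + np.flatnonzero(sizes >= 5)
clean = np.isin(labeled, keep_labels)
```

The size threshold is a tunable assumption; in practice it would be chosen against validation annotations.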
A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). Consequently, network interruptions and loss of sensitive data have occurred, which has led to an active research area for improving NIDS technologies. In an analysis of related works, it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques on NIDS datasets. However, these datasets differ in feature sets, attack types, and network design. Therefore, this paper aims to discover whether these techniques can be generalised across various datasets. Six ML models are utilised: a Deep Feed Forward (DFF) network, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). Three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), are evaluated using three benchmark datasets: UNSW-NB15, ToN-IoT, and CSE-CIC-IDS2018. Although PCA and AE algorithms have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no single FE method or ML model achieves the best scores on all datasets. The optimal number of extracted dimensions has been identified for each dataset, and LDA degrades the performance of the ML models on two datasets. The variance is used to analyse the extracted dimensions of LDA and PCA. Finally, this paper concludes that the choice of datasets significantly alters the performance of the applied techniques. We believe that a universal (benchmark) feature set is needed to facilitate further advancement and progress of research in this field.
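The core experiment, sweeping the number of extracted dimensions for a feature extractor and scoring a downstream classifier, can be sketched as follows. The dataset is a synthetic stand-in generated by scikit-learn, not UNSW-NB15, ToN-IoT, or CSE-CIC-IDS2018, and the dimension grid is an arbitrary assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a labeled flow-record dataset (binary attack label).
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           n_classes=2, random_state=0)

# Sweep the number of extracted PCA dimensions, since this choice is easy to
# overlook; LDA is capped at n_classes - 1 components (here, one).
scores = {}
for k in (2, 5, 10, 20):
    Xk = PCA(n_components=k, random_state=0).fit_transform(X)
    scores[f"pca_{k}"] = cross_val_score(
        LogisticRegression(max_iter=1000), Xk, y, cv=5).mean()

X_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)
scores["lda_1"] = cross_val_score(
    LogisticRegression(max_iter=1000), X_lda, y, cv=5).mean()
```

Comparing the resulting scores per dataset is exactly the kind of analysis the paper argues cannot be generalised across benchmarks.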
Cultural relic line graphics serve as a crucial form of traditional artifact documentation; they are simple, intuitive products with a low display cost compared with 3D models. Dimensionality reduction is therefore necessary for line drawings. However, most existing methods for artifact drawing rely on the principles of orthographic projection, which cannot avoid angle occlusion and data overlapping when the surface of a cultural relic is complex. Conformal mapping is therefore introduced as a dimensionality reduction approach to compensate for the limitations of orthographic projection. Based on given criteria for assessing surface complexity, this paper proposes a three-dimensional feature guideline extraction method for complex cultural relic surfaces. A combined 2D and 3D factor, the vertex weight, is designed to measure the importance of points in describing surface features. The selection threshold for feature guideline extraction is then determined based on the differences between the vertex weight and shape index distributions. Feasibility and stability were verified through experiments conducted on real cultural relic surface data. The results demonstrate the ability of the method to address the challenges associated with the automatic generation of line drawings for complex surfaces. The extraction method and the obtained results will be useful for drawing, displaying, and publicizing line graphics of cultural relics.
To solve the problems of low precision of weak feature extraction, heavy reliance on labor, and low efficiency of weak feature extraction in X-ray weld detection images of key parts of ultra-high voltage (UHV) equipment, an automatic feature extraction algorithm is proposed. First, the original weld image is denoised while retaining the characteristic information of weak defects by the proposed monostable stochastic resonance method. Then, binarization is achieved by combining Laplacian edge detection and Otsu threshold segmentation. Finally, automatic identification of the weld defect area is realized based on a sequential traversal of a binary tree. Several characteristic analysis dimensions are established for weld defects of UHV key parts, including defect area, perimeter, slenderness ratio, and duty cycle. Experiments using weld detection images from an actual production site show that the proposed method can effectively extract the weak feature information of weld defects and provide a reference for decision-making.
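The Otsu step of the binarization stage can be sketched in plain NumPy. The synthetic "weld image" below (dark parent material, brighter defect patch) is an assumption for illustration; the paper additionally combines this with Laplacian edge detection, which is omitted here.

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu: choose the gray level maximizing between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# Synthetic weld image: dark parent material with a brighter defect region.
rng = np.random.default_rng(1)
img = rng.normal(60, 10, (64, 64))
img[20:30, 20:40] = rng.normal(180, 10, (10, 20))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
binary = img > t
```

The resulting binary mask is what a connected-region traversal (the paper's binary-tree step) would then label and measure for area, perimeter, and slenderness ratio.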
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
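The combined-feature-set idea (text features concatenated with tabular key features before a boosted classifier) can be sketched as below. TF-IDF stands in for DistilBERT embeddings, scikit-learn's gradient boosting stands in for XGBoost, and the toy incident records and labels are entirely invented, not GTD data.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

# Toy incident records: free-text summaries plus numeric "key features".
texts = ["bomb detonated near market", "armed assault on convoy",
         "hostage taken at embassy", "explosive device found near market",
         "gunmen attacked checkpoint", "kidnapping of aid worker"] * 20
labels = np.array([0, 1, 2, 0, 1, 2] * 20)          # attack-type classes
key_feats = np.tile(np.array([[3, 0], [1, 4], [0, 2],
                              [2, 0], [1, 5], [0, 3]], dtype=float), (20, 1))

# Text features are concatenated with the tabular key features before the
# final boosted classifier, mirroring the combined-feature-set idea.
X_text = TfidfVectorizer().fit_transform(texts)
X = hstack([X_text, csr_matrix(key_feats)]).tocsr()

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0, stratify=labels)
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_tr.toarray(), y_tr)
accuracy = clf.score(X_te.toarray(), y_te)
```

Swapping the TF-IDF step for a transformer encoder and the classifier for XGBoost recovers the structure the paper describes.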
In the IoT (Internet of Things) domain, the increased use of encryption protocols such as SSL/TLS, VPNs (Virtual Private Networks), and Tor has led to a rise in attacks leveraging encrypted traffic. While research on anomaly detection using AI (Artificial Intelligence) is actively progressing, the encrypted nature of the data poses challenges for labeling, resulting in data imbalance and biased feature extraction toward specific nodes. This study proposes a reconstruction error-based anomaly detection method using an autoencoder (AE) that utilizes packet metadata, excluding specific node information. The proposed method omits biased packet metadata such as IP and port and trains the detection model using only normal data, leveraging a small amount of packet metadata. This makes it well suited for direct application in IoT environments due to its low resource consumption. In experiments comparing feature extraction methods for AE-based anomaly detection, we found that using flow-based features significantly improves accuracy, precision, F1 score, and AUC (Area Under the Receiver Operating Characteristic Curve) compared to packet-based features. Additionally, for flow-based features, the proposed method showed a 30.17% increase in F1 score and improved false positive rates compared to Isolation Forest and One-Class SVM. Furthermore, the proposed method demonstrated a 32.43% higher AUC when using packet features and a 111.39% higher AUC when using flow features, compared to previously proposed oversampling methods. This study highlights the impact of feature extraction methods on attack detection in imbalanced, encrypted traffic environments and emphasizes that the one-class method using an AE is more effective for attack detection and reducing false positives than traditional oversampling methods.
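The one-class, reconstruction-error scheme can be sketched as follows: train an autoencoder on normal traffic only, then flag samples whose reconstruction error exceeds a threshold set from the normal distribution. The flow-metadata features here are synthetic assumptions (normal traffic on a low-dimensional subspace), and a linear bottleneck regressor stands in for the paper's AE architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic flow-level metadata (no IP/port fields): normal traffic lies on a
# low-dimensional subspace; attack traffic does not.
W = rng.normal(0, 1, (4, 8))
def make_normal(n):
    return rng.normal(0, 1, (n, 4)) @ W + rng.normal(0, 0.1, (n, 8))

normal = make_normal(500)
test_normal = make_normal(50)
test_attack = rng.normal(3, 1, (50, 8))   # off-manifold traffic

scaler = StandardScaler().fit(normal)
Xn = scaler.transform(normal)

# A bottleneck regressor trained to reproduce its input acts as a linear AE.
ae = MLPRegressor(hidden_layer_sizes=(4,), activation="identity",
                  solver="lbfgs", max_iter=2000, random_state=0)
ae.fit(Xn, Xn)

def recon_error(batch):
    Z = scaler.transform(batch)
    return np.mean((ae.predict(Z) - Z) ** 2, axis=1)

# Threshold from the normal training distribution; flag by reconstruction error.
threshold = np.quantile(recon_error(normal), 0.99)
flags_normal = recon_error(test_normal) > threshold
flags_attack = recon_error(test_attack) > threshold
```

Because only normal data is needed for training, no attack labels (and no oversampling of a minority class) are required, which is the property the study emphasizes.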
In minimally invasive surgery, endoscopes or laparoscopes equipped with miniature cameras and tools enter the human body for therapeutic purposes through small incisions or natural cavities. However, in clinical operating environments, endoscopic images often suffer from challenges such as low texture, uneven illumination, and non-rigid structures, which affect feature observation and extraction. Missing feature points in endoscopic images can severely impact surgical navigation or clinical diagnosis, leading to treatment and postoperative recovery issues for patients. To address these challenges, this paper introduces, for the first time, a Cross-Channel Multi-Modal Adaptive Spatial Feature Fusion (ASFF) module based on the lightweight EfficientViT architecture, together with a novel lightweight attention-based feature extraction and matching network. This network dynamically adjusts attention weights for cross-modal information from grayscale images and optical flow images through a dual-branch Siamese network. It extracts static and dynamic information features ranging from low-level to high-level and from local to global, ensuring robust feature extraction across different widths, noise levels, and blur scenarios. Global and local matching are performed through a multi-level cascaded attention mechanism, with cross-channel attention introduced to simultaneously extract low-level and high-level features. Extensive ablation experiments and comparative studies are conducted on the HyperKvasir, EAD, M2caiSeg, CVC-ClinicDB, and UCL synthetic datasets. Experimental results demonstrate that the proposed network improves upon the baseline EfficientViT-B3 model by 75.4% in accuracy (Acc) while also enhancing runtime performance and storage efficiency. Compared with the complex DenseDescriptor feature extraction network, the difference in Acc is less than 7.22%, and IoU results on specific datasets outperform complex dense models. Furthermore, the method increases the F1 score by 33.2% and accelerates runtime by 70.2%. Notably, the speed of CMMCAN surpasses that of comparative lightweight models, with feature extraction and matching performance comparable to existing complex models but with faster speed and higher cost-effectiveness.
Cleats are the dominant micro-fracture network controlling the macro-mechanical behavior of coal, so an improved understanding of the spatial characteristics of cleat networks is important to the coal mining industry. Discrete fracture networks (DFNs) are increasingly used in engineering analyses to spatially model fractures at various scales. The reliability of coal DFNs largely depends on the confidence in the input cleat statistics. Estimates of these parameters can be made from image-based three-dimensional (3D) characterization of coal cleats using X-ray micro-computed tomography (μCT). One key step in this process, after cleat extraction, is the separation of individual cleats; without it, the cleats form a connected network and statistics for different cleat sets cannot be measured. In this paper, a feature extraction-based image processing method is introduced to identify and separate distinct cleat groups from 3D X-ray μCT images. Kernels (filters) representing explicit cleat features of coal are built, and cleat separation is achieved by convolutional operations on the 3D coal images. The new method is applied to a coal specimen 80 mm in diameter and 100 mm in length acquired from an Anglo American Steelmaking Coal mine in the Bowen Basin, Queensland, Australia. It is demonstrated that the new method produces reliable cleat separation capable of defining individual cleats and preserving 3D topology after separation. Bedding-parallel fractures are also identified and separated, which has historically been challenging to delineate and is rarely reported. A variety of cleat/fracture statistics is measured, which not only quantitatively characterizes the cleat/fracture system but can also be used for DFN modeling. Finally, variability and heterogeneity with respect to the core axis are investigated. Significant heterogeneity is observed, suggesting that determining the representative elementary volume (REV) of the cleat groups for engineering purposes may be a complex problem requiring careful consideration.
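The idea of orientation-selective kernels convolved over a 3D volume can be sketched on a toy binary volume. The two planar "fractures" and the 3x3x3 kernels below are illustrative assumptions, far simpler than the explicit cleat-feature kernels built in the paper.

```python
import numpy as np
from scipy import ndimage

# Toy 3D binary volume with one bedding-parallel (horizontal) plane and one
# vertical plane, standing in for an extracted cleat network.
vol = np.zeros((20, 20, 20), dtype=float)
vol[10, :, :] = 1.0            # horizontal fracture
vol[:, :, 5] = 1.0             # vertical fracture

# Orientation-selective kernels: each responds strongly where its plane
# orientation matches the local fracture geometry.
k_horizontal = np.zeros((3, 3, 3)); k_horizontal[1, :, :] = 1.0
k_vertical = np.zeros((3, 3, 3));   k_vertical[:, :, 1] = 1.0

resp_h = ndimage.convolve(vol, k_horizontal, mode="constant")
resp_v = ndimage.convolve(vol, k_vertical, mode="constant")

# Assign each fracture voxel to the orientation with the larger response.
horizontal_set = (vol > 0) & (resp_h > resp_v)
vertical_set = (vol > 0) & (resp_v > resp_h)
```

Separated orientation sets like these are what per-set statistics (spacing, persistence, aperture) would then be measured on for DFN input.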
This article introduces the concept of load aggregation, which involves a comprehensive analysis of loads to acquire their external characteristics for the purpose of modeling and analyzing power systems. Online identification is a computer-based approach to data collection, processing, and system identification, commonly used for adaptive control and prediction. This paper proposes a method for dynamically aggregating large-scale adjustable loads to support high proportions of new-energy integration, aiming to study the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction methods. The experiment selected 300 central air conditioners as the research subject and analyzed their regulation characteristics, economic efficiency, and comfort. The experimental results show that as the adjustment time of the air conditioners increases from 5 minutes to 35 minutes, the stable adjustment quantity during the adjustment period decreases from 28.46 to 3.57, indicating that air conditioning loads can be controlled over a long period and have better adjustment effects in the short term. Overall, the experimental results demonstrate that analyzing the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction algorithms is effective.
Maintaining a steady power supply requires accurate forecasting of solar irradiance, since clean energy resources do not provide steady power. Existing forecasting studies have examined only limited effects of weather conditions on solar radiation, such as temperature and precipitation, using convolutional neural networks (CNNs), but no comprehensive study has considered concentrations of air pollutants along with weather conditions. This paper proposes a hybrid deep-learning approach that expands the feature set by adding air pollutant concentrations and ranks these features to select and reduce their number to improve efficiency. To improve the accuracy of feature selection, a maximum-dependency and minimum-redundancy (mRMR) criterion is applied to the constructed feature space to identify and rank the features. Combining air pollution data with weather data has enabled the prediction of solar irradiance with higher accuracy. The proposed approach is evaluated in Istanbul over 12 months for 43,791 discrete times, with the main purpose of analyzing air data, including particulate matter (PM10 and PM2.5), carbon monoxide (CO), nitric oxide (NOx), nitrogen dioxide (NO2), ozone (O3), and sulfur dioxide (SO2), using a CNN, a long short-term memory network (LSTM), and mRMR feature extraction. Compared with benchmark models with root mean square error (RMSE) results of 76.2, 60.3, 41.3, and 32.4, there is a significant improvement, with an RMSE of 5.536. The hybrid model presented here offers high prediction accuracy, a wider feature set, and a novel approach based on air pollutant concentrations combined with weather conditions for solar irradiance prediction.
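A greedy mRMR ranking can be sketched with scikit-learn's mutual-information estimator. The feature columns (temperature, PM10, a redundant temperature copy, noise) and the target model are invented stand-ins for the paper's pollutant-plus-weather feature space.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 400

# Toy feature table: two informative drivers of irradiance, one redundant
# copy of the first, and one pure-noise column (names are illustrative).
temp = rng.normal(size=n)
pm10 = rng.normal(size=n)
temp_copy = temp + rng.normal(scale=0.05, size=n)   # redundant with temp
noise = rng.normal(size=n)
irradiance = 2.0 * temp + 1.5 * pm10 + rng.normal(scale=0.3, size=n)
X = np.column_stack([temp, pm10, temp_copy, noise])

def mrmr_rank(X, y, k):
    """Greedy mRMR: pick the feature maximizing relevance MI(f, y)
    minus mean redundancy MI(f, already-selected)."""
    relevance = mutual_info_regression(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = []
        for j in remaining:
            red = (np.mean([mutual_info_regression(X[:, [s]], X[:, j],
                                                   random_state=0)[0]
                            for s in selected]) if selected else 0.0)
            scores.append(relevance[j] - red)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

order = mrmr_rank(X, irradiance, k=3)
```

Note how the redundancy penalty pushes the near-duplicate temperature column down the ranking even though its raw relevance is high; the ranked subset would then feed the CNN/LSTM forecaster.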
Addressing the challenges posed by the nonlinear and non-stationary vibrations in rotating machinery, where weak fault characteristic signals hinder accurate fault state representation, we propose a novel feature extraction method that combines the Flexible Analytic Wavelet Transform (FAWT) with Nonlinear Quantum Permutation Entropy. FAWT, leveraging fractional orders and arbitrary scaling and translation factors, exhibits superior translational invariance and adjustable fundamental oscillatory characteristics. This flexibility enables FAWT to provide well-suited wavelet shapes, effectively matching subtle fault components and avoiding the performance degradation associated with fixed frequency partitioning and low-oscillation bases in detecting weak faults. In our approach, gearbox vibration signals undergo FAWT to obtain sub-bands. Quantum theory is then introduced into permutation entropy to propose Nonlinear Quantum Permutation Entropy, a feature that more accurately characterizes the operational state of vibration simulation signals. The nonlinear quantum permutation entropy extracted from the sub-bands is used to characterize the operating state of rotating machinery. A comprehensive analysis of vibration signals from rolling bearings and gearboxes validates the feasibility of the proposed method. Comparative assessments against parameters derived from traditional permutation entropy, sample entropy, the wavelet transform (WT), and empirical mode decomposition (EMD) underscore the superior effectiveness of this approach in fault detection and classification for rotating machinery.
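The baseline the paper builds on, standard Bandt-Pompe permutation entropy, can be sketched in a few lines; the quantum-theoretic modification and the FAWT sub-band decomposition are not reproduced here. The test signals are generic stand-ins for sub-band vibration signals.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D signal."""
    x = np.asarray(x)
    n = len(x) - (order - 1) * delay
    # Ordinal pattern of each embedded vector.
    patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(factorial(order)))

t = np.linspace(0, 10 * np.pi, 1000)
pe_regular = permutation_entropy(np.sin(t))       # smooth, predictable signal
pe_noise = permutation_entropy(
    np.random.default_rng(0).normal(size=1000))   # irregular signal
```

A regular signal concentrates on a few ordinal patterns (low entropy), while noise spreads over all of them (entropy near 1), which is why per-sub-band entropy can separate healthy from faulty operating states.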
This paper proposes a novel open set recognition method, the Spatial Distribution Feature Extraction Network (SDFEN), to address the problem of electromagnetic signal recognition in an open environment. The spatial distribution feature extraction layer in SDFEN replaces the convolutional output of the neural network with spatial distribution features that focus more on inter-sample information by incorporating class center vectors. The designed hybrid loss function considers both intra-class and inter-class distance, thereby enhancing the similarity among samples of the same class and increasing the dissimilarity between samples of different classes during training. Consequently, this method allows unknown classes to occupy a larger region of the feature space, which reduces the possibility of overlap with known class samples and makes the boundaries between known and unknown samples more distinct. Additionally, a feature comparator threshold can be used to reject unknown samples. For signal open set recognition, seven methods, including the proposed one, are applied to two kinds of electromagnetic signal data: modulation signals and real-world emitters. The experimental results demonstrate that the proposed method outperforms the other six methods overall in a simulated open environment. Specifically, compared to the state-of-the-art Openmax method, it achieves micro-F-measures up to 8.87% and 5.25% higher on the two datasets, respectively.
Biometric recognition is a widely used technology for user authentication. In the application of this technology, biometric security and recognition accuracy are two important issues that should be considered. In terms of biometric security, cancellable biometrics is an effective technique for protecting biometric data. Regarding recognition accuracy, feature representation plays a significant role in the performance and reliability of cancellable biometric systems. How to design good feature representations for cancellable biometrics is a challenging topic that has attracted a great deal of attention from the computer vision community, especially from researchers of cancellable biometrics. Feature extraction and learning in cancellable biometrics aims to find suitable feature representations that achieve satisfactory recognition performance while the privacy of biometric data is protected. This survey reports the progress, trends, and challenges of feature extraction and learning for cancellable biometrics, thus shedding light on the latest developments and future research in this area.
A topic studied in cartography is how to make the extraction of cartographic features, which provides the updating of cartographic maps, easier. For this reason, many automatic routines have been created with the intent of performing feature extraction. Despite all the studies on this subject, some features cannot be found by the algorithms, or extraneous pixels may be extracted unduly. This article therefore presents results from software that uses the original and a reference image to calculate statistics about the extraction process. These statistics can then be used to evaluate the extraction process.
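Comparing an extracted feature mask against a reference image reduces to pixel-level counting. A minimal sketch of two standard extraction-quality statistics, correctness and completeness, on invented masks (the metric names are common in cartographic evaluation; the abstract does not specify which statistics its software computes):

```python
import numpy as np

# Reference (ground-truth) feature mask and an automatically extracted mask.
reference = np.zeros((10, 10), dtype=bool)
reference[2:8, 4] = True                   # a road-like linear feature
extracted = np.zeros((10, 10), dtype=bool)
extracted[3:8, 4] = True                   # misses one reference pixel...
extracted[0, 0] = True                     # ...and adds one false pixel

tp = int(np.sum(extracted & reference))    # correctly extracted pixels
fp = int(np.sum(extracted & ~reference))   # unduly extracted pixels
fn = int(np.sum(~extracted & reference))   # missed pixels

correctness = tp / (tp + fp)    # fraction of extracted pixels that are right
completeness = tp / (tp + fn)   # fraction of reference pixels that were found
```

Here both statistics come out to 5/6, quantifying exactly the two failure modes the article names: features the algorithm misses and pixels it extracts unduly.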
The coronavirus (COVID-19), also known as the novel coronavirus, first appeared in December 2019 in Wuhan, China. It then quickly spread throughout the world and became a pandemic, significantly impacting our everyday lives, national and international economies, and public health. Early diagnosis is critical for prompt treatment and for reducing strain on the healthcare system. Clinical radiologists primarily use chest X-rays and computerized tomography (CT) scans to test for pneumonia infection. In this study, we used chest CT scans to distinguish COVID-19 pneumonia from healthy scans. We propose a joint prediction framework based on classical feature fusion and PSO-based optimization. We begin by extracting standard features such as discrete wavelet transforms (DWT), discrete cosine transforms (DCT), and dominant rotated local binary patterns (DRLBP), along with Shannon entropy and kurtosis features. In the next step, a max-covariance-based maximization approach for feature fusion is proposed. The fused features are optimized in a preliminary phase using Particle Swarm Optimization (PSO) with an ELM fitness function. For the final prediction, PSO is used to obtain robust features, which are then fed into a Support Vector Data Description (SVDD) classifier. The experiment is carried out using available COVID-19 chest CT scans and scans from healthy patients, taken from the Radiopaedia website. For the proposed scheme, the fusion and selection processes achieve accuracies of 88.6% and 93.1%, respectively. A detailed analysis is conducted, which supports the efficiency of the proposed system.
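Two of the classical features named above, Shannon entropy and kurtosis, can be sketched directly; the DWT/DCT/DRLBP extraction, fusion, and PSO stages are not reproduced. The image patches below are synthetic stand-ins for CT regions.

```python
import numpy as np
from scipy.stats import kurtosis

def shannon_entropy(img, bins=32):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
textured = rng.integers(0, 256, (64, 64)).astype(float)   # texture-rich patch
flat = np.full((64, 64), 128.0)                           # texture-free patch

features_textured = (shannon_entropy(textured),
                     float(kurtosis(textured.ravel())))   # excess kurtosis
entropy_flat = shannon_entropy(flat)
```

A uniform gray patch has zero histogram entropy, while a texture-rich patch approaches the maximum (log2 of the bin count); such scalar descriptors are what get fused with the transform-based features.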
A novel class of periodically changing features hidden in the radar pulse sequence environment, named G features, is proposed. Combining fractal theory and the Hilbert-Huang transform, the features are extracted using the changing characteristics of pulse parameters in radar emitter signals. The features can be applied in the modern complex electronic warfare environment to address the issue of signal sorting when radar emitter pulse signal parameters severely or even completely overlap. Experimental results show that the proposed feature class and feature extraction method can discriminate periodically changing pulse sequence sorting features from a radar pulse signal flow with complex variant features, and therefore provide a new methodology for signal sorting.
In stock market forecasting, the identification of critical features that affect the performance of machine learning (ML) models is crucial to achieving accurate stock price predictions. Several review papers in the literature have focused on the various ML, statistical, and deep learning-based methods used in stock market forecasting. However, no survey study has explored feature selection and extraction techniques for stock market forecasting. This survey presents a detailed analysis of 32 research works that use a combination of feature study and ML approaches in various stock market applications. We conducted a systematic search for articles in the Scopus and Web of Science databases for the years 2011-2022. We review a variety of feature selection and feature extraction approaches that have been successfully applied in the stock market analyses presented in these articles. We also describe the combinations of feature analysis techniques and ML methods and evaluate their performance. Moreover, we present other survey articles, stock market input and output data, and analyses based on various factors. We find that correlation criteria, random forest, principal component analysis, and autoencoders are the most widely used feature selection and extraction techniques with the best prediction accuracy for various stock market applications.
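The simplest of the techniques the survey highlights, the correlation criterion, can be sketched in NumPy. The indicator names and the return model below are invented for illustration, not drawn from any of the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Toy daily indicator matrix: next-day return driven by two indicators.
momentum = rng.normal(size=n)
volume_z = rng.normal(size=n)
noise_ind = rng.normal(size=n)
next_return = 0.8 * momentum - 0.5 * volume_z + rng.normal(scale=0.5, size=n)
X = np.column_stack([momentum, volume_z, noise_ind])
names = ["momentum", "volume_z", "noise_ind"]

# Correlation criterion: rank indicators by |corr| with the target.
corrs = np.array([abs(np.corrcoef(X[:, j], next_return)[0, 1])
                  for j in range(X.shape[1])])
ranked = [names[j] for j in np.argsort(corrs)[::-1]]
```

Random forest importances, PCA loadings, or an autoencoder bottleneck, the other techniques the survey singles out, would replace the correlation score in the same ranking-and-selection loop.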
Breast cancer is the most prevalent cancer among women, and diagnosing it early is vital for successful treatment. The examination of images captured during biopsies plays an important role in determining whether a patient has cancer. However, the stochastic patterns, varying color intensities, and large sizes of these images make it challenging to identify and mark malignant regions in them. Against this backdrop, this study proposes an approach to pixel categorization based on the genetic algorithm (GA) and principal component analysis (PCA). The spatial features of the images are extracted using various filters, and the most relevant ones are selected using the GA and fed into the classifiers for pixel-level categorization. Three classifiers, random forest (RF), decision tree (DT), and extra trees (ET), were used in the proposed model. The parameters of all models were separately tuned, and their performance was tested. The results show that the features extracted using the GA+PCA in the proposed model are influential and reliable for pixel-level classification in service of image annotation and tumor identification. Further, images from the benign, malignant, and normal classes were randomly selected and used to test the proposed model. The proposed GA-PCA-DT model delivered accuracies between 0.99 and 1.0 on a reduced feature set. The predicted pixel sets were also compared with their respective ground-truth values to assess the overall performance of the method on two metrics: the universal image quality index (UIQI) and the structural similarity index (SSI). Both quality measures delivered excellent results.
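The UIQI used for the final evaluation has a compact closed form (Wang and Bovik's universal image quality index), sketched here globally over the whole image; practical implementations often average it over sliding windows, and the masks below are synthetic stand-ins for the study's predicted and ground-truth pixel sets.

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index (Wang & Bovik), computed globally:
    4 * cov * mean_x * mean_y / ((var_x + var_y) * (mean_x^2 + mean_y^2))."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (4 * cov * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, (32, 32)).astype(float) * 255      # ground-truth mask
identical = uiqi(truth, truth.copy())                          # perfect match
degraded = uiqi(truth, np.clip(truth + rng.normal(0, 40, truth.shape), 0, 255))
```

The index equals 1 only for a perfect match and decreases as luminance, contrast, or correlation between prediction and ground truth degrade, which is why it complements plain pixel accuracy.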
Abstract: Iris recognition is a biometric method that verifies a person's identity from the unique pattern of the human iris, and it is widely applied in access-control systems requiring a high level of security. It is a popular system for recognizing humans, and it is essential to understand it. The objective of this method is to assign a unique subject to each iris image for person authentication and to provide an effective feature representation for iris recognition through image analysis. This paper proposes a new optimization and recognition process for iris feature selection using a proposed Modified ADMM and Deep Learning Algorithm (MADLA). To improve security performance through feature extraction, the proposed algorithm is designed to extract strong, discriminative iris features in less time and with better accuracy, improving performance in access control and in the level of security. Evaluations on iris data demonstrate the improvement in recognition accuracy. With the proposed methodology, the recognition of iris features is improved, and it can be incorporated into iris recognition systems.
Funding: Supported by the National Science and Technology Support Program of China (Grant No. 2015BAF07B04).
Abstract: Vibration-based techniques are rarely used directly in diesel engine fault diagnosis, because the surface vibration signals of diesel engines have complex non-stationary and nonlinear time-varying features. To investigate the fault diagnosis of diesel engines, fractal correlation dimension, wavelet energy, and wavelet entropy, as features reflecting the fractal and energy characteristics of engine faults, are extracted from the decomposed signals by analyzing vibration acceleration signals derived from the cylinder head in seven different valve-train states. An intelligent fault detector, FastICA-SVM, is applied for diesel engine fault diagnosis and classification. The results demonstrate that FastICA-SVM achieves higher classification accuracy and better generalization performance in small-sample recognition. Moreover, the fractal correlation dimension and the wavelet energy and entropy, as special features of the diesel engine vibration signal, are used as input vectors of the FastICA-SVM classifier and produce excellent classification results. The proposed methodology improves the accuracy of feature extraction and the fault diagnosis of diesel engines.
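As a rough illustration of the wavelet energy and entropy features described above, the sketch below computes a multi-level Haar decomposition in plain Python and derives the relative band energies and the wavelet entropy. This is a generic textbook construction, not the authors' exact processing chain (which decomposes real cylinder-head acceleration signals).

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_entropy(signal, levels=3):
    """Relative wavelet energies per band and the wavelet entropy."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energies.append(sum(d * d for d in detail))  # detail-band energy
    energies.append(sum(a * a for a in approx))      # final approximation band
    total = sum(energies) or 1.0
    p = [e / total for e in energies]
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return p, entropy

# A smooth signal concentrates energy in the approximation band;
# the fastest oscillation concentrates it in the finest detail band.
p_flat, h_flat = wavelet_energy_entropy([1.0] * 32)
p_alt, h_alt = wavelet_energy_entropy([1.0, -1.0] * 16)
```

Both extreme signals have zero wavelet entropy (all energy in one band); a broadband fault signature spreads energy across bands and raises the entropy, which is what makes it a useful discriminative feature.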
Funding: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1F1A1074885) and by the Brain Korea 21 Project in 2021 (No. 4199990114242).
Abstract: Objective and quantitative assessment of skin conditions is essential for cosmeceutical studies and research on skin aging and skin regeneration. Various handcraft-based image processing methods have been proposed to evaluate skin conditions objectively, but they have unavoidable disadvantages when used to analyze skin features accurately. This study proposes a hybrid segmentation scheme consisting of Deeplab v3+ with an Inception-ResNet-v2 backbone, LightGBM, and morphological processing (MP) to overcome the shortcomings of handcraft-based approaches. First, we apply Deeplab v3+ with an Inception-ResNet-v2 backbone for pixel segmentation of skin wrinkles and cells. Then, LightGBM and MP are used to enhance the pixel segmentation quality. Finally, we determine several skin features based on the results of the wrinkle and cell segmentation. Our proposed segmentation scheme achieved a mean accuracy of 0.854, a mean intersection over union of 0.749, and a mean boundary F1 score of 0.852, which are improvements of 1.1%, 6.7%, and 14.8%, respectively, over the panoptic-based semantic segmentation method.
Abstract: A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). Consequently, network interruptions and loss of sensitive data have occurred, which has led to an active research area aimed at improving NIDS technologies. In an analysis of related works, it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques on NIDS datasets. However, these datasets differ in feature sets, attack types, and network design. Therefore, this paper aims to discover whether these techniques can be generalized across various datasets. Six ML models are utilized: a Deep Feed Forward (DFF) network, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). The accuracy of three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), is evaluated on three benchmark datasets: UNSW-NB15, ToN-IoT, and CSE-CIC-IDS2018. Although PCA and AE algorithms have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no single FE method or ML model achieves the best scores for all datasets. The optimal number of extracted dimensions is identified for each dataset, and LDA degrades the performance of the ML models on two datasets. The variance is used to analyze the extracted dimensions of LDA and PCA. Finally, this paper concludes that the choice of dataset significantly alters the performance of the applied techniques. We believe that a universal (benchmark) feature set is needed to facilitate further advancement and progress of research in this field.
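The "optimal number of extracted dimensions" question for PCA is usually answered with an explained-variance rule. The sketch below is a generic illustration, not the paper's procedure: the synthetic three-feature "flow" data and the 95% cutoff are invented, and the covariance eigenvalues are estimated by power iteration with deflation in plain Python.

```python
import math
import random

random.seed(1)

def covariance(rows):
    """Sample covariance matrix of row-major data."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    return [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / (n - 1)
             for j in range(d)] for i in range(d)]

def top_eigen(A, iters=300):
    """Dominant eigenpair of a symmetric matrix via power iteration."""
    v = [1.0] * len(A)
    for _ in range(iters):
        w = [sum(a * x for a, x in zip(row, v)) for row in A]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(a * x for a, x in zip(A[i], v)) for i in range(len(A)))
    return lam, v

def n_components_for(A, cutoff=0.95):
    """Eigenvalues by repeated deflation; smallest k reaching the cutoff."""
    d = len(A)
    eigs = []
    for _ in range(d):
        lam, v = top_eigen(A)
        eigs.append(max(lam, 0.0))
        A = [[A[i][j] - lam * v[i] * v[j] for j in range(d)] for i in range(d)]
    total = sum(eigs)
    acc = 0.0
    for k, lam in enumerate(eigs, start=1):
        acc += lam
        if acc / total >= cutoff:
            return k, eigs
    return d, eigs

# Synthetic "flows": feature 2 is almost a copy of feature 0.
rows = []
for _ in range(2000):
    a = random.gauss(0, 2.0)
    b = random.gauss(0, 1.0)
    rows.append([a, b, a + random.gauss(0, 0.01)])

k, eigs = n_components_for(covariance(rows))
print(k)  # 2 components carry >= 95% of the variance here
```

The same cumulative-variance reasoning is what lets one compare how aggressively PCA can compress different NIDS datasets before classification accuracy degrades.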
Funding: National Natural Science Foundation of China (Nos. 42071444, 42101444).
Abstract: Cultural relic line graphics serve as a crucial form of traditional artifact documentation: compared with 3D models, they are simple and intuitive to display at low cost. Dimensionality reduction is undoubtedly necessary for line drawings. However, most existing methods for artifact drawing rely on the principles of orthographic projection, which cannot avoid angle occlusion and data overlapping when the surface of a cultural relic is complex. Therefore, conformal mapping is introduced as a dimensionality reduction approach to compensate for the limitations of orthographic projection. Based on given criteria for assessing surface complexity, this paper proposes a three-dimensional feature guideline extraction method for complex cultural relic surfaces. A combined 2D-3D factor that measures the importance of points in describing surface features, the vertex weight, is designed. The selection threshold for feature guideline extraction is then determined from the differences between the vertex weight and shape index distributions. The feasibility and stability of the method are verified through experiments on real cultural relic surface data. The results demonstrate the ability of the method to address the challenges associated with the automatic generation of line drawings for complex surfaces. The extraction method and the obtained results will be useful for drawing, displaying, and publicizing line graphics of cultural relics.
Abstract: To solve the problems of low precision, heavy reliance on labor, and low efficiency in extracting weak features from X-ray weld inspection images of key ultra-high-voltage (UHV) equipment parts, an automatic feature extraction algorithm is proposed. First, the original weld image is denoised, while the characteristic information of weak defects is retained, by the proposed monostable stochastic resonance method. Then, binarization is achieved by combining Laplacian edge detection and Otsu threshold segmentation. Finally, automatic identification of weld defect areas is realized based on a sequential traversal of a binary tree. Several characteristic analysis dimensions are established for the weld defects of key UHV parts, including defect area, perimeter, slenderness ratio, duty cycle, etc. Experiments using weld inspection images from an actual production site show that the proposed method can effectively extract the weak feature information of weld defects and provide a reference for decision-making.
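Of the two binarization ingredients mentioned above, Otsu's threshold selection is compact enough to sketch. This is a generic implementation of the method, not the authors' code: it scans all grey levels and keeps the one that maximizes the between-class variance of the resulting foreground/background split.

```python
def otsu_threshold(pixels, bins=256):
    """Otsu's method: pick the grey level that maximises between-class variance."""
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(bins):
        w0 += hist[t]                 # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0               # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                # background mean
        m1 = (sum_all - sum0) / w1    # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy image: a dark cluster (38..42) and a bright cluster (198..202).
pixels = [38, 39, 40, 41, 42] * 100 + [198, 199, 200, 201, 202] * 100
t = otsu_threshold(pixels)
print(t)  # 42: the last grey level of the dark cluster
binary = [1 if p > t else 0 for p in pixels]
```

In the pipeline above, a map like `binary` would then be intersected with the Laplacian edge response before the binary-tree traversal over defect regions.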
Abstract: One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from the text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00235509, Development of Security Monitoring Technology Based Network Behavior against Encrypted Cyber Threats in ICT Convergence Environment).
Abstract: In the IoT (Internet of Things) domain, the increased use of encryption protocols such as SSL/TLS, VPN (Virtual Private Network), and Tor has led to a rise in attacks leveraging encrypted traffic. While research on anomaly detection using AI (Artificial Intelligence) is actively progressing, the encrypted nature of the data poses challenges for labeling, resulting in data imbalance and biased feature extraction toward specific nodes. This study proposes a reconstruction-error-based anomaly detection method using an autoencoder (AE) that utilizes packet metadata, excluding specific node information. The proposed method omits biased packet metadata such as IP and Port and trains the detection model using only normal data, leveraging a small amount of packet metadata. This makes it well suited for direct application in IoT environments due to its low resource consumption. In experiments comparing feature extraction methods for AE-based anomaly detection, we found that using flow-based features significantly improves accuracy, precision, F1 score, and AUC (Area Under the Receiver Operating Characteristic Curve) compared to packet-based features. Additionally, for flow-based features, the proposed method showed a 30.17% increase in F1 score and improved false-positive rates compared to Isolation Forest and One-Class SVM. Furthermore, the proposed method demonstrated a 32.43% higher AUC when using packet features and a 111.39% higher AUC when using flow features, compared to previously proposed oversampling methods. This study highlights the impact of feature extraction methods on attack detection in imbalanced, encrypted traffic environments and emphasizes that the one-class method using an AE is more effective for attack detection and for reducing false positives than traditional oversampling methods.
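The reconstruction-error idea can be miniaturized to a tied-weight linear autoencoder with a single latent unit, trained only on "normal" two-feature flow records. Everything here is a simplification for illustration: the data is synthetic, and the Oja-rule update stands in for gradient training of the paper's (nonlinear) AE.

```python
import math
import random

random.seed(7)

def train_linear_ae(data, lr=0.01, epochs=50):
    """One-latent-unit, tied-weight linear autoencoder trained with Oja's
    rule; w converges to the first principal direction of the normal data."""
    w = [1.0, 0.0]
    for _ in range(epochs):
        for x in data:
            s = w[0] * x[0] + w[1] * x[1]                              # encode
            w = [wi + lr * s * (xi - s * wi) for wi, xi in zip(w, x)]  # Oja update
    return w

def recon_error(w, x):
    """Distance between x and its reconstruction w * (w . x)."""
    s = w[0] * x[0] + w[1] * x[1]
    return math.sqrt((x[0] - s * w[0]) ** 2 + (x[1] - s * w[1]) ** 2)

# "Normal" traffic: two flow features that move together (e.g. bytes vs packets).
normal = []
for _ in range(300):
    t = random.uniform(-1.0, 1.0)
    normal.append((t + random.uniform(-0.05, 0.05),
                   2.0 * t + random.uniform(-0.05, 0.05)))

w = train_linear_ae(normal)
threshold = 2.0 * max(recon_error(w, x) for x in normal)

def flag(x):
    """True -> anomalous: the record does not fit the normal manifold."""
    return recon_error(w, x) > threshold
```

A record lying on the learned normal correlation, such as `(0.5, 1.0)`, reconstructs almost perfectly, while an off-manifold record such as `(2.0, -2.0)` produces a large error and is flagged, with no attack labels ever used in training.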
Funding: This work was supported by the Science and Technology Cooperation Special Project of Shijiazhuang (SJZZXA23005).
Abstract: In minimally invasive surgery, endoscopes or laparoscopes equipped with miniature cameras and tools are used to enter the human body for therapeutic purposes through small incisions or natural cavities. However, in clinical operating environments, endoscopic images often suffer from challenges such as low texture, uneven illumination, and non-rigid structures, which affect feature observation and extraction. This can severely impact surgical navigation or clinical diagnosis due to missing feature points in endoscopic images, leading to treatment and postoperative recovery issues for patients. To address these challenges, this paper introduces, for the first time, a Cross-Channel Multi-Modal Adaptive Spatial Feature Fusion (ASFF) module based on the lightweight EfficientViT architecture. Additionally, a novel lightweight, attention-based feature extraction and matching network, CMMCAN, is proposed. This network dynamically adjusts attention weights for cross-modal information from grayscale images and optical flow images through a dual-branch Siamese network. It extracts static and dynamic information features ranging from low-level to high-level and from local to global, ensuring robust feature extraction across different widths, noise levels, and blur scenarios. Global and local matching are performed through a multi-level cascaded attention mechanism, with cross-channel attention introduced to extract low-level and high-level features simultaneously. Extensive ablation experiments and comparative studies are conducted on the HyperKvasir, EAD, M2caiSeg, CVC-ClinicDB, and UCL synthetic datasets. Experimental results demonstrate that the proposed network improves upon the baseline EfficientViT-B3 model by 75.4% in accuracy (Acc), while also enhancing runtime performance and storage efficiency. When compared with the complex DenseDescriptor feature extraction network, the difference in Acc is less than 7.22%, and the IoU results on specific datasets outperform complex dense models. Furthermore, this method increases the F1 score by 33.2% and accelerates runtime by 70.2%. Notably, the speed of CMMCAN surpasses that of comparative lightweight models, with feature extraction and matching performance comparable to existing complex models but with faster speed and higher cost-effectiveness.
Abstract: Cleats are the dominant micro-fracture network controlling the macro-mechanical behavior of coal. Improved understanding of the spatial characteristics of cleat networks is therefore important to the coal mining industry. Discrete fracture networks (DFNs) are increasingly used in engineering analyses to spatially model fractures at various scales. The reliability of coal DFNs largely depends on the confidence in the input cleat statistics. Estimates of these parameters can be made from image-based three-dimensional (3D) characterization of coal cleats using X-ray micro-computed tomography (μCT). One key step in this process, after cleat extraction, is the separation of individual cleats, without which the cleats form a connected network and statistics for different cleat sets cannot be measured. In this paper, a feature-extraction-based image processing method is introduced to identify and separate distinct cleat groups from 3D X-ray μCT images. Kernels (filters) representing explicit cleat features of coal are built, and cleat separation is successfully achieved by convolutional operations on 3D coal images. The new method is applied to a coal specimen 80 mm in diameter and 100 mm in length acquired from an Anglo American Steelmaking Coal mine in the Bowen Basin, Queensland, Australia. It is demonstrated that the new method produces reliable cleat separation capable of defining individual cleats and preserving 3D topology after separation. Bedding-parallel fractures are also identified and separated, which has historically been challenging to delineate and is rarely reported. A variety of cleat/fracture statistics is measured, which not only can quantitatively characterize the cleat/fracture system but can also be used for DFN modeling. Finally, variability and heterogeneity with respect to the core axis are investigated. Significant heterogeneity is observed, suggesting that determining the representative elementary volume (REV) of the cleat groups for engineering purposes may be a complex problem requiring careful consideration.
Funding: Supported by the State Grid Science & Technology Project (5100-202114296A-0-0-00).
Abstract: This article introduces the concept of load aggregation, which involves a comprehensive analysis of loads to acquire their external characteristics for the purpose of modeling and analyzing power systems. Online identification is a computer-based approach to data collection, processing, and system identification, commonly used for adaptive control and prediction. This paper proposes a method for dynamically aggregating large-scale adjustable loads to support high proportions of new energy integration, aiming to study the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction methods. The experiment selected 300 central air conditioners as the research subject and analyzed their regulation characteristics, economic efficiency, and comfort. The experimental results show that as the adjustment time of the air conditioners increases from 5 minutes to 35 minutes, the stable adjustment quantity during the adjustment period decreases from 28.46 to 3.57, indicating that air-conditioning loads can be controlled over a long period and have better adjustment effects in the short term. Overall, the experimental results demonstrate that analyzing the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction algorithms is effective.
Abstract: Maintaining a steady power supply requires accurate forecasting of solar irradiance, since clean energy resources do not provide steady power. Existing forecasting studies have examined the limited effects of weather conditions on solar radiation, such as temperature and precipitation, using convolutional neural networks (CNNs), but no comprehensive study has been conducted on concentrations of air pollutants together with weather conditions. This paper proposes a hybrid deep-learning approach that expands the feature set by adding new air pollutant concentrations and ranks these features to select a reduced subset and improve efficiency. To improve the accuracy of feature selection, a maximum-dependency and minimum-redundancy (mRMR) criterion is applied to the constructed feature space to identify and rank the features. The combination of air pollution data with weather conditions data has enabled the prediction of solar irradiance with higher accuracy. The proposed approach is evaluated in Istanbul over 12 months for 43791 discrete times, with the main purpose of analyzing air data, including particulate matter (PM10 and PM2.5), carbon monoxide (CO), nitric oxide (NOx), nitrogen dioxide (NO2), ozone (O3), and sulfur dioxide (SO2), using a CNN, a long short-term memory network (LSTM), and mRMR feature extraction. Compared with the benchmark models, whose root mean square error (RMSE) results are 76.2, 60.3, 41.3, and 32.4, there is a significant improvement, with an RMSE of 5.536. The hybrid model presented here offers high prediction accuracy, a wider feature set, and a novel approach based on air pollutant concentrations combined with weather conditions for solar irradiance prediction.
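A toy version of mRMR-style ranking may help. The true criterion uses mutual information; the sketch below substitutes absolute Pearson correlation for both the relevance and redundancy terms (a common simplification, not the paper's exact formulation), and all feature names and the synthetic "irradiance" target are invented for illustration.

```python
import math
import random

random.seed(5)

def corr(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def mrmr_rank(features, target, k):
    """Greedy max-relevance / min-redundancy selection (correlation proxy)."""
    selected = []
    while len(selected) < k:
        best, best_score = None, -float("inf")
        for name, values in features.items():
            if name in selected:
                continue
            relevance = abs(corr(values, target))
            redundancy = (sum(abs(corr(values, features[s])) for s in selected)
                          / len(selected)) if selected else 0.0
            if relevance - redundancy > best_score:
                best, best_score = name, relevance - redundancy
        selected.append(best)
    return selected

# Hypothetical inputs: irradiance depends on PM10 and temperature;
# "pm10_dup" is a near-copy of pm10, "wind" is unrelated noise.
n = 400
pm10 = [random.gauss(0, 1) for _ in range(n)]
temp = [random.gauss(0, 1) for _ in range(n)]
irradiance = [-p + t for p, t in zip(pm10, temp)]
features = {
    "pm10": pm10,
    "pm10_dup": [p + random.gauss(0, 0.05) for p in pm10],
    "temp": temp,
    "wind": [random.gauss(0, 1) for _ in range(n)],
}
print(mrmr_rank(features, irradiance, 3))
```

The redundancy penalty is what pushes the near-duplicate pollutant series to the bottom of the ranking even though its individual relevance is high, which is exactly why mRMR shrinks the feature set without losing predictive signal.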
Funding: Supported financially by the Fundamental Research Program of Shanxi Province (No. 202103021223056).
Abstract: Addressing the challenges posed by nonlinear and non-stationary vibrations in rotating machinery, where weak fault characteristic signals hinder accurate fault state representation, we propose a novel feature extraction method that combines the Flexible Analytic Wavelet Transform (FAWT) with nonlinear quantum permutation entropy. FAWT, leveraging fractional orders and arbitrary scaling and translation factors, exhibits superior translational invariance and adjustable fundamental oscillatory characteristics. This flexibility enables FAWT to provide well-suited wavelet shapes, effectively matching subtle fault components and avoiding the performance degradation associated with fixed frequency partitioning and low-oscillation bases in detecting weak faults. In our approach, gearbox vibration signals undergo FAWT to obtain sub-bands. Quantum theory is then introduced into permutation entropy to propose nonlinear quantum permutation entropy, a feature that more accurately characterizes the operational state of vibration simulation signals. The nonlinear quantum permutation entropy extracted from the sub-bands is used to characterize the operating state of rotating machinery. A comprehensive analysis of vibration signals from rolling bearings and gearboxes validates the feasibility of the proposed method. Comparative assessments with parameters derived from traditional permutation entropy, sample entropy, the wavelet transform (WT), and empirical mode decomposition (EMD) underscore the superior effectiveness of this approach for fault detection and classification in rotating machinery.
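Classical permutation entropy, which the paper's quantum variant builds on, is easy to state in code. The sketch below is the standard Bandt-Pompe definition applied to a 1-D signal, not the proposed nonlinear quantum version.

```python
import math
import random

def permutation_entropy(signal, order=3, delay=1):
    """Normalised (classical) permutation entropy: Shannon entropy of the
    distribution of ordinal patterns of length `order` in the signal."""
    counts = {}
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = [signal[i + j * delay] for j in range(order)]
        # Ordinal pattern: the argsort of the window values.
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))  # normalise to [0, 1]

# A monotone signal has a single ordinal pattern (entropy 0);
# white noise uses all patterns nearly uniformly (entropy near 1).
monotone = permutation_entropy(list(range(200)))
rng = random.Random(2)
noisy = permutation_entropy([rng.random() for _ in range(2000)])
```

Applied per FAWT sub-band, such an entropy value becomes one scalar feature per band: healthy, regular vibration gives low values, while fault-induced irregularity raises them.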
Abstract: This paper proposes a novel open set recognition method, the Spatial Distribution Feature Extraction Network (SDFEN), to address the problem of electromagnetic signal recognition in an open environment. The spatial distribution feature extraction layer in SDFEN replaces the convolutional output neural network with spatial distribution features that focus more on inter-sample information by incorporating class center vectors. The designed hybrid loss function considers both intra-class distance and inter-class distance, thereby enhancing the similarity among samples of the same class and increasing the dissimilarity between samples of different classes during training. Consequently, this method allows unknown classes to occupy a larger space in the feature space, which reduces the possibility of overlap with known-class samples and makes the boundaries between known and unknown samples more distinct. Additionally, a feature comparator threshold can be used to reject unknown samples. For signal open set recognition, seven methods, including the proposed one, are applied to two kinds of electromagnetic signal data: modulation signals and real-world emitters. The experimental results demonstrate that the proposed method outperforms the other six methods overall in a simulated open environment. Specifically, compared to the state-of-the-art OpenMax method, it achieves up to 8.87% and 5.25% higher micro-F-measures on the two datasets, respectively.
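The class-center idea with threshold rejection reduces to a few lines. This sketch is purely illustrative: the 2-D "signal features" and class names are invented, and plain Euclidean distance stands in for distances in the learned SDFEN feature space.

```python
import math

def centers(train):
    """Per-class mean vectors (the 'class center vectors')."""
    out = {}
    for label, xs in train.items():
        dim = len(xs[0])
        out[label] = [sum(x[d] for x in xs) / len(xs) for d in range(dim)]
    return out

def classify_open_set(x, ctrs, threshold):
    """Assign x to the nearest class center; reject as 'unknown' if even
    the nearest center is farther away than the threshold."""
    best_label, best_dist = None, float("inf")
    for label, c in ctrs.items():
        d = math.dist(x, c)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else "unknown"

# Two known emitter classes clustered around different feature centers.
train = {
    "modA": [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.1)],
    "modB": [(5.0, 5.0), (5.2, 4.9), (4.8, 5.1)],
}
ctrs = centers(train)
print(classify_open_set((10.0, -10.0), ctrs, threshold=2.0))  # unknown
```

The threshold plays the role of the paper's feature comparator: anything far from every known center falls into the open space reserved for unknown classes.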
Funding: Australian Research Council, Grant/Award Numbers: DP190103660, DP200103207, LP180100663; UniSQ Capacity Building Grants, Grant/Award Number: 1008313.
Abstract: Biometric recognition is a widely used technology for user authentication. In the application of this technology, biometric security and recognition accuracy are two important issues to consider. In terms of biometric security, cancellable biometrics is an effective technique for protecting biometric data. Regarding recognition accuracy, feature representation plays a significant role in the performance and reliability of cancellable biometric systems. How to design good feature representations for cancellable biometrics is a challenging topic that has attracted a great deal of attention from the computer vision community, especially from researchers of cancellable biometrics. Feature extraction and learning in cancellable biometrics aims to find suitable feature representations that achieve satisfactory recognition performance while the privacy of biometric data is protected. This survey reports the progress, trends, and challenges of feature extraction and learning for cancellable biometrics, thus shedding light on the latest developments and future research in this area.
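One classic feature-transformation approach in this area is BioHashing-style random projection, sketched below as a generic illustration rather than any specific scheme from the survey: the biometric feature vector is projected onto directions seeded by a user-specific key and binarized, so a leaked template can be revoked simply by issuing a new key.

```python
import hashlib
import random

def cancellable_template(feature_vec, user_key, out_bits=32):
    """BioHashing-style sketch: project the feature vector onto random
    directions derived from the user's key, then binarise each projection."""
    rng = random.Random(hashlib.sha256(user_key.encode()).digest())
    bits = []
    for _ in range(out_bits):
        direction = [rng.gauss(0, 1) for _ in feature_vec]
        proj = sum(f * d for f, d in zip(feature_vec, direction))
        bits.append(1 if proj >= 0 else 0)
    return bits

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 6-dimensional biometric feature vector.
probe = [0.9, -0.2, 0.5, 1.1, -0.7, 0.3]
t1 = cancellable_template(probe, "key-alice")
t2 = cancellable_template(probe, "key-alice")              # same key: identical
t3 = cancellable_template(probe, "key-alice-v2")           # revoked: new template
t4 = cancellable_template([f + 0.01 for f in probe], "key-alice")  # noisy capture
```

The desirable properties show up directly: the template is deterministic per key, a small measurement perturbation flips few bits (so matching still works), and a key change produces a template that is uncorrelated with the old one.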
Abstract: A recurring topic in cartography is how to ease the extraction of the cartographic features that support the updating of cartographic maps. For this reason, many automatic routines have been created with the intent of performing feature extraction. Despite all these studies, some features cannot be found by the algorithms, or some pixels may be extracted unduly. The current article therefore presents the results of developing software that uses the original image and a reference image to calculate statistics about the extraction process. Furthermore, the calculated statistics can be used to evaluate the extraction process.
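In the same spirit, pixel-wise agreement statistics between an extracted binary image and its reference can be computed as below. This is a generic sketch of such evaluation statistics (completeness/correctness/quality, i.e., recall, precision, and their combination), not the article's actual software.

```python
def extraction_stats(extracted, reference):
    """Pixel-wise agreement between an automatically extracted binary image
    and a reference: completeness (recall), correctness (precision), quality."""
    tp = fp = fn = 0
    for e_row, r_row in zip(extracted, reference):
        for e, r in zip(e_row, r_row):
            tp += 1 if (e and r) else 0          # extracted and in reference
            fp += 1 if (e and not r) else 0      # extracted unduly
            fn += 1 if (not e and r) else 0      # missed by the algorithm
    completeness = tp / (tp + fn) if tp + fn else 1.0
    correctness = tp / (tp + fp) if tp + fp else 1.0
    quality = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    return completeness, correctness, quality

# Tiny example: one false positive and one missed reference pixel.
extracted = [[1, 1, 0],
             [0, 1, 0]]
reference = [[1, 0, 0],
             [0, 1, 1]]
comp, corr, qual = extraction_stats(extracted, reference)
print(comp, corr, qual)  # 2/3, 2/3, 0.5
```

Reporting all three numbers separates the two failure modes the abstract mentions: features the algorithm cannot find lower completeness, while unduly extracted pixels lower correctness.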
Funding: Supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and by the Soonchunhyang University Research Fund.
Abstract: The coronavirus disease (COVID-19), also known as the novel coronavirus disease, first appeared in December 2019 in Wuhan, China. After that, it quickly spread throughout the world and became a pandemic. It has significantly impacted our everyday lives, national and international economies, and public health. Early diagnosis is critical for prompt treatment and for reducing the burden on the healthcare system. Clinical radiologists primarily use chest X-rays and computerized tomography (CT) scans to test for pneumonia infection. In this study, we used chest CT scans to distinguish COVID-19 pneumonia from healthy scans. We propose a joint framework for prediction based on classical feature fusion and PSO-based optimization. We begin by extracting standard features such as discrete wavelet transforms (DWT), discrete cosine transforms (DCT), and dominant rotated local binary patterns (DRLBP). In addition, we extract Shannon entropy and kurtosis features. In the following step, a max-covariance-based maximization approach for feature fusion is proposed. The fused features are optimized in the preliminary phase using particle swarm optimization (PSO) with the ELM fitness function. For the final prediction, PSO is used to obtain robust features, which are then fed into a Support Vector Data Description (SVDD) classifier. The experiment is carried out using available COVID-19 chest CT scans and scans from healthy patients; these images are from the Radiopaedia website. For the proposed scheme, the fusion and selection accuracies are 88.6% and 93.1%, respectively. A detailed analysis is conducted, which supports the efficiency of the proposed system.
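Particle swarm optimization itself is compact enough to sketch. The code below minimizes a stand-in objective; the paper's actual fitness is ELM classification error over the fused DWT/DCT/DRLBP features, which is out of scope here, so the quadratic `sphere` function and all constants are purely illustrative.

```python
import random

random.seed(3)

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise `fitness` with a standard global-best particle swarm."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best position
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in objective with a known optimum at (1, 1, 1, 1).
def sphere(x):
    return sum((xi - 1.0) ** 2 for xi in x)

best, val = pso(sphere, dim=4)
```

In the paper's setting, a particle would instead encode a candidate feature weighting, and `fitness` would train/evaluate the ELM on the corresponding features; the swarm dynamics are unchanged.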
Funding: Supported by the National Natural Science Foundation of China (60872108), the Postdoctoral Science Foundation of China (200902411, 20080430903), Heilongjiang Postdoctoral Financial Assistance (LBH-Z08129), the Scientific and Technological Creative Talents Special Research Foundation of Harbin Municipality (2008RFQXG030), and the Central University Basic Research Professional Expenses Special Fund Project.
Abstract: A novel class of periodically changing features hidden in the radar pulse sequence environment, named G features, is proposed. Combining fractal theory and the Hilbert-Huang transform, the features are extracted using the changing characteristics of pulse parameters in radar emitter signals. The features can be applied in the modern complex electronic warfare environment to address the issue of signal sorting when radar emitter pulse signal parameters severely or even completely overlap. Experimental results show that the proposed feature class and feature extraction method can discriminate periodically changing pulse sequence sorting features from a radar pulse signal flow with complex variant features, and therefore provide a new methodology for signal sorting.
Funding: Funded by the University of Groningen and the Prospect Burma organization.
Abstract: In stock market forecasting, the identification of critical features that affect the performance of machine learning (ML) models is crucial to achieve accurate stock price predictions. Several review papers in the literature have focused on various ML, statistical, and deep learning-based methods used in stock market forecasting. However, no survey study has explored feature selection and extraction techniques for stock market forecasting. This survey presents a detailed analysis of 32 research works that use a combination of feature study and ML approaches in various stock market applications. We conduct a systematic search for articles in the Scopus and Web of Science databases for the years 2011–2022. We review a variety of feature selection and feature extraction approaches that have been successfully applied in the stock market analyses presented in the articles. We also describe the combination of feature analysis techniques and ML methods and evaluate their performance. Moreover, we present other survey articles, stock market input and output data, and analyses based on various factors. We find that correlation criteria, random forest, principal component analysis, and autoencoder are the most widely used feature selection and extraction techniques with the best prediction accuracy for various stock market applications.
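Among the techniques this survey highlights, the correlation criterion is the simplest to sketch. The snippet below is an illustrative toy (not from any surveyed paper; the feature names and synthetic data are invented): it ranks candidate features by the absolute Pearson correlation with the prediction target and keeps the top k.

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy) if sx and sy else 0.0

def rank_features(features, target, k):
    """Rank features by |correlation with target|; keep the top k names."""
    scored = sorted(features.items(),
                    key=lambda kv: abs(pearson(kv[1], target)),
                    reverse=True)
    return [name for name, _ in scored[:k]]

random.seed(0)
target = [random.gauss(0, 1) for _ in range(200)]     # stand-in price series
features = {
    "lagged_price": [t + random.gauss(0, 0.1) for t in target],  # strongly related
    "inverse": [-t for t in target],                             # anti-correlated
    "volume_noise": [random.gauss(0, 1) for _ in range(200)],    # unrelated
}
print(rank_features(features, target, 2))
```

Using the absolute value matters: a perfectly anti-correlated feature is just as predictive as a perfectly correlated one, so filtering on signed correlation would wrongly discard it.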
Abstract: Breast cancer is the most prevalent cancer among women, and diagnosing it early is vital for successful treatment. The examination of images captured during biopsies plays an important role in determining whether a patient has cancer or not. However, the stochastic patterns, varying color intensities, and large sizes of these images make it challenging to identify and mark malignant regions in them. Against this backdrop, this study proposes an approach to pixel categorization based on the genetic algorithm (GA) and principal component analysis (PCA). The spatial features of the images are extracted using various filters, and the most prevalent ones are selected using the GA and fed into the classifiers for pixel-level categorization. Three classifiers, random forest (RF), decision tree (DT), and extra tree (ET), were used in the proposed model. The parameters of all models were separately tuned, and their performance was tested. The results show that the features extracted using GA+PCA in the proposed model are influential and reliable for pixel-level classification in service of image annotation and tumor identification. Further, an image from each of the benign, malignant, and normal classes was randomly selected and used to test the proposed model. The proposed model GA-PCA-DT delivered accuracies between 0.99 and 1.0 on a reduced feature set. The predicted pixel sets were also compared with their respective ground-truth values to assess the overall performance of the method on two metrics: the universal image quality index (UIQI) and the structural similarity index (SSI). Both quality measures delivered excellent results.
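A minimal sketch of GA-driven feature selection may clarify the mechanism. Everything here is hypothetical: the fitness function is a stand-in for the classifier accuracy the study would compute, and the "informative" index set exists only so the toy has a known optimum.

```python
import random

random.seed(42)
N_FEATURES = 12
INFORMATIVE = {1, 4, 7}  # hypothetical "truly useful" filter indices

def fitness(mask):
    """Stand-in for classifier accuracy: reward informative features and
    penalise subset size, as a GA wrapper around a real classifier would."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.05 * len(chosen)

def crossover(a, b):
    """Single-point crossover of two bitmask chromosomes."""
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in mask]

def evolve(pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(i for i, bit in enumerate(best) if bit))
```

In the actual pipeline, each chromosome would mask a bank of filter responses and the fitness would be the cross-validated accuracy of the downstream DT/RF/ET classifier on the selected subset.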