Journal Articles
7,512 articles found
1. Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4379-4397 (19 pages)
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues such as ambiguous, incomplete, or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, existing studies are neither generalized nor efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which reduces the dimensionality and redundancy of features and retains only the relevant ones; transfer learning, which trains and tests the model on different datasets to analyze how much of the learning transfers to other datasets; and an ensemble method, which explores the performance gain from combining multiple classifiers in one model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance through better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The results reveal that combining the techniques used in this study, feature selection, transfer learning, and ensemble methods, helps optimize software bug prediction models and yields high-performing, useful end models.
Keywords: natural language processing; software bug prediction; transfer learning; ensemble learning; feature selection
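An illustrative sketch (not the authors' code) of the kind of pipeline the abstract describes: a mutual-information filter for feature selection feeding a soft-voting ensemble, trained on one defect dataset and scored by AUC-ROC on another to mimic the cross-dataset (transfer) setting. Dataset loading and the value of k are assumptions.

```python
# Minimal sketch: feature selection + voting ensemble, cross-dataset AUC-ROC evaluation.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

def cross_dataset_auc(X_src, y_src, X_tgt, y_tgt, k=10):
    """Train on a source project, evaluate AUC-ROC on a target project (k <= n_features)."""
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=200)),
                    ("dt", DecisionTreeClassifier(max_depth=5))],
        voting="soft",
    )
    model = make_pipeline(StandardScaler(),
                          SelectKBest(mutual_info_classif, k=k),  # drop redundant features
                          ensemble)
    model.fit(X_src, y_src)
    return roc_auc_score(y_tgt, model.predict_proba(X_tgt)[:, 1])
```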
2. Olive Leaf Disease Detection via Wavelet Transform and Feature Fusion of Pre-Trained Deep Learning Models
Authors: Mahmood A. Mahmood, Khalaf Alsalem. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 3431-3448 (18 pages)
Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses. Early detection of these diseases is essential for effective management. We propose a novel transformed-wavelet, feature-fused, pre-trained deep learning model for detecting olive leaf diseases. The proposed model combines wavelet transforms with pre-trained deep learning models to extract discriminative features from olive leaf images. The model has four main phases: preprocessing using data augmentation, three-level wavelet transformation, learning using pre-trained deep learning models, and a fused deep learning model. In the preprocessing phase, the image dataset is augmented using techniques such as resizing, rescaling, flipping, rotation, zooming, and contrasting. In wavelet transformation, the augmented images are decomposed into three frequency levels. Three pre-trained deep learning models, EfficientNet-B7, DenseNet-201, and ResNet-152-V2, are used in the learning phase. The models were trained using the approximation images of the third-level sub-band of the wavelet transform. In the fused phase, the fused model consists of a merge layer, three dense layers, and two dropout layers. The proposed model was evaluated using a dataset of images of healthy and infected olive leaves. It achieved an accuracy of 99.72% in the diagnosis of olive leaf diseases, which exceeds the accuracy of other methods reported in the literature. This finding suggests that our proposed method is a promising tool for the early detection of olive leaf diseases.
Keywords: olive leaf diseases; wavelet transform; deep learning; feature fusion
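A hedged sketch of the feature-extraction idea described above: a three-level wavelet decomposition of a leaf image whose level-3 approximation is fed to the three named pre-trained backbones, with pooled features concatenated. The Haar wavelet, the 224x224 resize, and the omission of per-backbone preprocessing are assumptions made for brevity.

```python
# Sketch: level-3 wavelet approximation -> pre-trained backbones -> concatenated features.
import numpy as np
import pywt
import tensorflow as tf

def level3_approximation(gray_img: np.ndarray) -> np.ndarray:
    coeffs = pywt.wavedec2(gray_img, "haar", level=3)
    return coeffs[0]  # approximation sub-band at the third level

def fused_features(rgb_img: np.ndarray) -> np.ndarray:
    approx = level3_approximation(rgb_img.mean(axis=-1))   # grayscale approximation
    approx = np.repeat(approx[..., None], 3, axis=-1)      # back to 3 channels
    x = tf.image.resize(approx[None, ...], (224, 224))     # batch of one image
    backbones = [
        tf.keras.applications.EfficientNetB7(include_top=False, pooling="avg"),
        tf.keras.applications.DenseNet201(include_top=False, pooling="avg"),
        tf.keras.applications.ResNet152V2(include_top=False, pooling="avg"),
    ]
    # Per-model preprocess_input is omitted here for brevity.
    return np.concatenate([b.predict(x, verbose=0) for b in backbones], axis=1)
```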
3. A Power Data Anomaly Detection Model Based on Deep Learning with Adaptive Feature Fusion
Authors: Xiu Liu, Liang Gu, Xin Gong, Long An, Xurui Gao, Juying Wu. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4045-4061 (17 pages)
With the popularisation of intelligent power, power devices vary in shape, number, and specification. As a result, power data exhibit distributional variability, and the model learning process cannot extract data features sufficiently, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. To address the distributional variability of power data, a sliding window-based data adjustment method is developed for this model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address the problem of insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the anomaly detection accuracy of the model. To verify the effectiveness of the proposed method, we conducted comparative ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in model accuracy but also reduces the parameter computation during feature matching and improves detection speed.
Keywords: data alignment; dimension reduction; feature fusion; data anomaly detection; deep learning
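An illustrative sketch only, standing in for the sliding-window adjustment and dimension-reduction steps described above; PCA is used here in place of the paper's adaptive fusion, and the window length, stride, and component count are assumptions.

```python
# Sketch: sliding-window reshaping of a power time series + PCA dimension reduction.
import numpy as np
from sklearn.decomposition import PCA

def sliding_windows(series: np.ndarray, window: int = 64, stride: int = 8) -> np.ndarray:
    """Cut a 1-D series into overlapping windows (one window per row)."""
    starts = range(0, len(series) - window + 1, stride)
    return np.stack([series[s:s + window] for s in starts])

def reduced_features(series: np.ndarray, n_components: int = 8) -> np.ndarray:
    X = sliding_windows(series)
    return PCA(n_components=n_components).fit_transform(X)
```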
4. Machine Learning Security Defense Algorithms Based on Metadata Correlation Features
Authors: Ruchun Jia, Jianwei Zhang, Yi Lin. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2391-2418 (28 pages)
With the popularization of the Internet and the development of technology, cyber threats are increasing day by day. Threats such as malware, hacking, and data breaches have had a serious impact on cybersecurity. The network security environment in the era of big data is characterized by large data volumes, high diversity, and high real-time requirements, and traditional security defense methods and tools are unable to cope with complex and changing network security threats. This paper proposes a machine learning security defense algorithm based on metadata association features that emphasizes control over unauthorized users through privacy, integrity, and availability. A user model is established and the mapping between the user model and the metadata of the data sources is generated. By analyzing the user model and its corresponding mapping relationship, a query over the user model can be decomposed into queries over various heterogeneous data sources, and the integration of heterogeneous data sources based on metadata association characteristics can be realized. Customer information is defined and classified, sensitive data are automatically identified and perceived, and a behavior audit and analysis platform is built to analyze user behavior trajectories, completing the construction of a machine learning customer information security defense system. The experimental results show that when the data volume is 5×10³ bits, the data storage integrity of the proposed method is 92%, the data accuracy is 98%, and the success rate of data intrusion is only 2.6%. It can be concluded that the data storage method in this paper is safe, the data accuracy remains at a high level, and the data disaster recovery performance is good. The method can effectively resist data intrusion and provides high air traffic control security. It can not only detect all viruses in user data storage but also realize integrated virus processing, further optimizing the security defense of user big data.
Keywords: data-oriented architecture; metadata; correlation features; machine learning; security defense; data source integration
5. Terrorism Attack Classification Using Machine Learning: The Effectiveness of Using Textual Features Extracted from GTD Dataset
Authors: Mohammed Abdalsalam, Chunlin Li, Abdelghani Dahou, Natalia Kryvinska. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1427-1467 (41 pages)
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the classifier's learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
Keywords: artificial intelligence; machine learning; natural language processing; data analytics; DistilBERT; feature extraction; terrorism classification; GTD dataset
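A minimal sketch of the feature pipeline the abstract describes: DistilBERT encodes the text attributes, the resulting vectors are concatenated with the tabular "key features", and an extreme gradient boosting classifier performs the final classification. The mean pooling over tokens and the XGBoost hyperparameters are assumptions, not the authors' settings.

```python
# Sketch: DistilBERT contextual features + key features -> XGBoost classifier.
import numpy as np
import torch
from transformers import DistilBertTokenizer, DistilBertModel
from xgboost import XGBClassifier

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")

@torch.no_grad()
def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state      # (batch, tokens, 768)
    return hidden.mean(dim=1).numpy()                # mean-pooled sentence vectors

def train_classifier(texts, key_features, labels):
    X = np.hstack([embed(texts), key_features])      # contextual + key features
    clf = XGBClassifier(n_estimators=300, learning_rate=0.1)
    clf.fit(X, labels)
    return clf
```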
6. Novel Machine Learning-Based Approach for Arabic Text Classification Using Stylistic and Semantic Features (Cited by: 1)
Authors: Fethi Fkih, Mohammed Alsuhaibani, Delel Rhouma, Ali Mustafa Qamar. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5871-5886 (16 pages)
Text classification is an essential task for many applications in the Natural Language Processing domain. It can be applied in many fields, such as Information Retrieval, Knowledge Extraction, and Knowledge Modeling. Despite the importance of this task, Arabic text classification tools still suffer from many problems and remain incapable of handling the increasing volume of Arabic content that circulates on the web or resides in large databases. This paper introduces a novel machine learning-based approach that exclusively uses hybrid (stylistic and semantic) features. First, we clean the Arabic documents and translate them to English using translation tools. The semantic features are then automatically extracted from the translated documents using an existing database of English topics. In addition, the model automatically extracts from the textual content a set of stylistic features such as word and character frequencies and punctuation. We therefore obtain three types of features: semantic, stylistic, and hybrid. Using each feature type in turn, we performed an in-depth comparison of nine well-known machine learning models on a standard Arabic corpus to evaluate our approach. The obtained results show that the Neural Network outperforms the other models and provides good performance using hybrid features (F1-score = 0.88).
Keywords: Arabic text classification; machine learning; stylistic features; semantic features; topics
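A hedged illustration of the stylistic features mentioned above (word and character frequencies, punctuation); the exact feature set used by the authors is not specified here, so these particular counts are assumptions.

```python
# Sketch: simple stylistic features of a document (counts, ratios).
import string
from collections import Counter

def stylistic_features(text: str) -> dict:
    words = text.split()
    chars = [c for c in text if not c.isspace()]
    punct = sum(1 for c in text if c in string.punctuation)
    return {
        "n_words": len(words),
        "n_chars": len(chars),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "punct_ratio": punct / max(len(chars), 1),
        "type_token_ratio": len(Counter(words)) / max(len(words), 1),
    }
```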
7. Feature Selection with Deep Reinforcement Learning for Intrusion Detection System (Cited by: 1)
Authors: S. Priya, K. Pradeep Mohan Kumar. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 9, pp. 3339-3353 (15 pages)
An intrusion detection system (IDS) has become an important tool for ensuring security in the network. In recent times, machine learning (ML) and deep learning (DL) models have been applied effectively to identify intrusions over the network. To resolve security issues, this paper presents a new Binary Butterfly Optimization algorithm-based Feature Selection with DRL technique, called BBOFS-DRL, for intrusion detection. The proposed BBOFS-DRL model mainly accomplishes the recognition of intrusions in the network. To attain this, the BBOFS-DRL model initially designs the BBOFS algorithm based on the traditional butterfly optimization algorithm (BOA) to elect feature subsets. Besides, a DRL model is employed for the proper identification and classification of intrusions that exist in the network. Furthermore, the beetle antenna search (BAS) technique is applied to tune the DRL parameters for enhanced intrusion detection efficiency. To ensure the superior intrusion detection outcomes of the BBOFS-DRL model, a wide-ranging experimental analysis is performed against a benchmark dataset. The simulation results report the supremacy of the BBOFS-DRL model over recent state-of-the-art approaches.
Keywords: intrusion detection; security; reinforcement learning; machine learning; feature selection; beetle antenna search
8. MSEs Credit Risk Assessment Model Based on Federated Learning and Feature Selection (Cited by: 1)
Authors: Zhanyang Xu, Jianchun Cheng, Luofei Cheng, Xiaolong Xu, Muhammad Bilal. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5573-5595 (23 pages)
Federated learning has been used extensively in business innovation scenarios in various industries. This research adopts the federated learning approach for the first time to address the issue of bank-enterprise information asymmetry in the credit assessment scenario. First, this research designs a credit risk assessment model based on federated learning and feature selection for micro and small enterprises (MSEs) using multi-dimensional enterprise data and multi-perspective enterprise information. The proposed model includes four main processes: encrypted entity alignment, hybrid feature selection, secure multi-party computation, and global model updating. Second, a two-step feature selection algorithm based on wrapper and filter methods is designed to construct the optimal feature set from multi-source heterogeneous data, providing excellent accuracy and interpretability. In addition, a local update screening strategy is proposed to select trustworthy model parameters for aggregation each round to ensure the quality of the global model. The results show that the model error rate is reduced by 6.22% and the recall rate is improved by 11.03% compared to algorithms commonly used in credit risk research, significantly improving the ability to identify defaulters. Finally, the business operations of commercial banks are used to confirm the potential of the proposed model for real-world implementation.
Keywords: federated learning; feature selection; credit risk assessment; MSEs
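A sketch of a generic two-step (filter then wrapper) feature selection, in the spirit of the hybrid selection step described above; the estimators, scoring function, and feature counts are assumptions, and the federated aspects are not modeled here.

```python
# Sketch: filter step (mutual information) followed by a wrapper step (forward selection).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

def two_step_selection(X: np.ndarray, y: np.ndarray, k_filter: int = 30, k_wrapper: int = 10):
    # Step 1 (filter): keep the features with the highest mutual information with the label.
    filt = SelectKBest(mutual_info_classif, k=min(k_filter, X.shape[1])).fit(X, y)
    X_f = filt.transform(X)
    # Step 2 (wrapper): greedy forward selection driven by a classifier's validation score.
    wrap = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=k_wrapper).fit(X_f, y)
    return filt, wrap
```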
9. Deep Learning for Wind Speed Forecasting Using Bi-LSTM with Selected Features (Cited by: 1)
Authors: Siva Sankari Subbiah, Senthil Kumar Paramasivan, Karmel Arockiasamy, Saminathan Senthivel, Muthamilselvan Thangavel. Intelligent Automation & Soft Computing (SCIE), 2023, No. 3, pp. 3829-3844 (16 pages)
Wind speed forecasting is important for wind energy forecasting. In the modern era, the increase in energy demand can be managed effectively by forecasting the wind speed accurately. The main objective of this research is to improve the performance of wind speed forecasting by handling uncertainty, the curse of dimensionality, overfitting, and non-linearity issues. The curse of dimensionality and overfitting issues are handled by using Boruta feature selection. The uncertainty and non-linearity issues are addressed by using the deep learning-based Bi-directional Long Short Term Memory (Bi-LSTM). In this paper, Bi-LSTM with Boruta feature selection, named BFS-Bi-LSTM, is proposed to improve the performance of wind speed forecasting. The model identifies relevant features for wind speed forecasting from the meteorological features using Boruta wrapper feature selection (BFS). Bi-LSTM then predicts the wind speed by considering the wind speed from past and future time steps. The proposed BFS-Bi-LSTM model is compared against the Multilayer Perceptron (MLP), MLP with Boruta (BFS-MLP), Long Short Term Memory (LSTM), LSTM with Boruta (BFS-LSTM), and Bi-LSTM in terms of Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Square Error (MSE), and R2. The BFS-Bi-LSTM surpassed the other models by producing an RMSE of 0.784, MAE of 0.530, MSE of 0.615, and R2 of 0.8766. The experimental results show that BFS-Bi-LSTM produced better forecasting results than the others.
Keywords: bi-directional long short term memory; Boruta feature selection; deep learning; machine learning; wind speed forecasting
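A hedged sketch of the BFS-Bi-LSTM idea described above: Boruta selects relevant meteorological features, then a Bi-LSTM forecasts wind speed from windows of those features. The random forest base estimator, window length, units, and other hyperparameters are illustrative assumptions.

```python
# Sketch: Boruta feature selection + a Bi-LSTM regressor for wind speed.
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestRegressor
from tensorflow.keras import layers, models

def select_features(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    rf = RandomForestRegressor(n_estimators=200, n_jobs=-1)
    boruta = BorutaPy(rf, n_estimators="auto", random_state=42)
    boruta.fit(X, y)
    return boruta.support_            # boolean mask of relevant features

def build_bilstm(timesteps: int, n_features: int) -> models.Model:
    model = models.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(1),              # wind speed at the next time step
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```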
10. Deep Learning-Based Semantic Feature Extraction: A Literature Review and Future Directions (Cited by: 1)
Authors: DENG Letian, ZHAO Yanru. ZTE Communications, 2023, No. 2, pp. 11-17 (7 pages)
Semantic communication, as a critical component of artificial intelligence (AI), has gained increasing attention in recent years due to its significant impact on various fields. In this paper, we focus on the applications of semantic feature extraction, a key step in semantic communication, in several areas of artificial intelligence, including natural language processing, medical imaging, remote sensing, autonomous driving, and other image-related applications. Specifically, we discuss how semantic feature extraction can enhance the accuracy and efficiency of natural language processing tasks, such as text classification, sentiment analysis, and topic modeling. In the medical imaging field, we explore how semantic feature extraction can be used for disease diagnosis, drug development, and treatment planning. In addition, we investigate the applications of semantic feature extraction in remote sensing and autonomous driving, where it can facilitate object detection, scene understanding, and other tasks. By providing an overview of the applications of semantic feature extraction in various fields, this paper aims to provide insights into the potential of this technology to advance the development of artificial intelligence.
Keywords: semantic feature extraction; semantic communication; deep learning
11. Human Gait Recognition Based on Sequential Deep Learning and Best Features Selection
Authors: Ch Avais Hanif, Muhammad Ali Mughal, Muhammad Attique Khan, Usman Tariq, Ye Jin Kim, Jae-Hyuk Cha. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5123-5140 (18 pages)
Gait recognition is an active research area that uses a walking theme to identify the subject correctly. Human Gait Recognition (HGR) is performed without any cooperation from the individual. However, in practice, it remains a challenging task under diverse walking sequences due to covariant factors such as normal walking and walking while wearing a coat. Researchers have, over the years, successfully identified subjects using different techniques, but there is still room for improvement in accuracy due to these covariant factors. This paper proposes an automated model-free framework for human gait recognition. There are a few critical steps in the proposed method. First, optical flow-based motion region estimation and dynamic coordinates-based cropping are performed. The second step involves training a fine-tuned pre-trained MobileNetV2 model on both the original and the optical-flow cropped frames; the training is conducted using static hyperparameters. The third step proposes a fusion technique known as normal-distribution serial fusion. In the fourth step, a better optimization algorithm is applied to select the best features, which are then classified using a bi-layered neural network. Three publicly available datasets, CASIA A, CASIA B, and CASIA C, were used in the experimental process, obtaining average accuracies of 99.6%, 91.6%, and 95.02%, respectively. The proposed framework achieved improved accuracy compared to the other methods.
Keywords: human gait recognition; optical flow; deep learning features; fusion; feature selection
12. Solving Geometry Problems via Feature Learning and Contrastive Learning of Multimodal Data
Authors: Pengpeng Jian, Fucheng Guo, Yanli Wang, Yang Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 8, pp. 1707-1728 (22 pages)
This paper presents an end-to-end deep learning method to solve geometry problems via feature learning and contrastive learning of multimodal data. A key challenge in solving geometry problems using deep learning is to automatically adapt to the task of understanding single-modal and multimodal problems. Existing methods focus either on single-modal or on multimodal problems, and they cannot fit each other. A general geometry problem solver should obviously be able to process problems of various modalities at the same time. In this paper, a shared feature-learning model of multimodal data is adopted to learn a unified feature representation of text and image, which can solve the heterogeneity issue between multimodal geometry problems. A contrastive learning model of multimodal data enhances the semantic relevance between multimodal features and maps them into a unified semantic space, which can effectively adapt to both single-modal and multimodal downstream tasks. Based on the feature extraction and fusion of multimodal data, the proposed geometry problem solver uses relation extraction, theorem reasoning, and problem solving to present solutions in a readable way. Experimental results show the effectiveness of the method.
Keywords: geometry problems; multimodal feature learning; multimodal contrastive learning; automatic solver
13. Adaptive Kernel Firefly Algorithm Based Feature Selection and Q-Learner Machine Learning Models in Cloud
Authors: I. Mettildha Mary, K. Karuppasamy. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 9, pp. 2667-2685 (19 pages)
Cloud computing (CC) networks are distributed and dynamic as signals appear, disappear, or lose significance. Machine learning techniques (MLTs) are trained on datasets that are sometimes inadequate, in terms of samples, for inferring information. A dynamic strategy, DevMLOps (Development Machine Learning Operations), used for automatic selection and tuning of MLTs results in significant performance differences. However, the scheme has many disadvantages, including continuous training, larger sample and training-time requirements during feature selection, and increased classification execution times. Recursive Feature Eliminations (RFEs) are computationally very expensive, as they traverse each feature without considering correlations between them. This problem can be overcome by the use of wrappers, as they select better features by accounting for test and train datasets. The aim of this paper is to use DevQLMLOps for automated tuning and selection based on orchestration and messaging between containers. The proposed Adaptive Kernel Firefly Algorithm (AKFA) selects features for Cloud Network Monitoring (CNM) operations. The AKFA methodology is demonstrated using the Cloud Network Security Dataset (CNSD) with satisfactory results in performance metrics such as precision, recall, F-measure, and accuracy.
Keywords: cloud analytics; machine learning; ensemble learning; distributed learning; clustering; classification; auto selection; auto tuning; decision feedback; cloud DevOps; feature selection; wrapper feature selection; Adaptive Kernel Firefly Algorithm (AKFA); Q-learning
14. Crops Leaf Diseases Recognition: A Framework of Optimum Deep Learning Features
Authors: Shafaq Abbas, Muhammad Attique Khan, Majed Alhaisoni, Usman Tariq, Ammar Armghan, Fayadh Alenezi, Arnab Majumdar, Orawit Thinnukool. Computers, Materials & Continua (SCIE, EI), 2023, No. 1, pp. 1139-1159 (21 pages)
Manual diagnosis of crop diseases is not an easy process; thus, computerized methods are widely used. Over the past couple of years, advancements in the domain of machine learning, such as deep learning, have shown substantial success. However, they still face challenges such as similarity in disease symptoms and irrelevant feature extraction. In this article, we propose a new deep learning architecture with an optimization algorithm for cucumber and potato leaf disease recognition. The proposed architecture consists of five steps. In the first step, data augmentation is performed to increase the number of training samples. In the second step, a pre-trained DarkNet19 deep model is selected and fine-tuned, and later utilized for training through transfer learning. Deep features are extracted from the global pooling layer in the next step and refined using an Improved Cuckoo Search algorithm. The best selected features are finally classified using machine learning classifiers such as SVM, among others, for the final classification results. The proposed architecture is tested using publicly available datasets: the Cucumber National Dataset and Plant Village. The proposed architecture achieved accuracies of 100.0%, 92.9%, and 99.2%, respectively. A comparison with recent techniques is also performed, revealing that the proposed method achieves improved accuracy while consuming less computational time.
Keywords: crops diseases; preprocessing; convolutional neural network; features optimization; machine learning
15. Graph-Based Feature Learning for Cross-Project Software Defect Prediction
Authors: Ahmed Abdu, Zhengjun Zhai, Hakim A. Abdo, Redhwan Algabri, Sungon Lee. Computers, Materials & Continua (SCIE, EI), 2023, No. 10, pp. 161-180 (20 pages)
Cross-project software defect prediction (CPDP) aims to enhance defect prediction in target projects with limited or no historical data by leveraging information from related source projects. Existing CPDP approaches rely on static metrics or dynamic syntactic features, which have shown limited effectiveness in CPDP due to their inability to capture higher-level system properties, such as complex design patterns, relationships between multiple functions, and dependencies in different software projects, that are important for CPDP. This paper introduces a novel approach, a graph-based feature learning model for CPDP (GB-CPDP), that utilizes NetworkX to extract features and learn representations of program entities from control flow graphs (CFGs) and data dependency graphs (DDGs). These graphs capture the structural and data dependencies within the source code. The proposed approach employs Node2Vec to transform CFGs and DDGs into numerical vectors and leverages Long Short-Term Memory (LSTM) networks to learn predictive models. The process involves graph construction, feature learning through graph embedding and LSTM, and defect prediction. Experimental evaluation using nine open-source Java projects from the PROMISE dataset demonstrates that GB-CPDP outperforms state-of-the-art CPDP methods in terms of F1-measure and Area Under the Curve (AUC). The results showcase the effectiveness of GB-CPDP in improving the performance of cross-project defect prediction.
Keywords: cross-project defect prediction; graph features; deep learning; graph embedding
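A hedged sketch of graph feature learning in the spirit of GB-CPDP: a NetworkX control-flow graph is embedded with Node2Vec, and the node-embedding sequence feeds an LSTM classifier. Graph construction from real source code, padding, and all hyperparameters are assumptions.

```python
# Sketch: NetworkX graph -> Node2Vec node embeddings -> LSTM defect classifier.
import networkx as nx
import numpy as np
from node2vec import Node2Vec
from tensorflow.keras import layers, models

def embed_graph(cfg: nx.DiGraph, dim: int = 64) -> np.ndarray:
    n2v = Node2Vec(cfg, dimensions=dim, walk_length=20, num_walks=50, workers=1)
    w2v = n2v.fit(window=5, min_count=1)                    # gensim Word2Vec over random walks
    return np.stack([w2v.wv[str(n)] for n in cfg.nodes])    # (n_nodes, dim)

def build_defect_model(max_nodes: int, dim: int = 64) -> models.Model:
    model = models.Sequential([
        layers.Input(shape=(max_nodes, dim)),
        layers.Masking(mask_value=0.0),                     # zero-padded node sequences
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),              # defective vs. clean
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```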
16. Learning Noise-Assisted Robust Image Features for Fine-Grained Image Retrieval
Authors: Vidit Kumar, Hemant Petwal, Ajay Krishan Gairola, Pareshwar Prasad Barmola. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 9, pp. 2711-2724 (14 pages)
Fine-grained image search is one of the most challenging tasks in computer vision; it aims to retrieve similar images at the fine-grained level for a given query image. The key objective is to learn discriminative fine-grained features by training deep models such that similar images are clustered, and dissimilar images are separated, in the low-dimensional embedding space. Previous works primarily focused on defining local structure loss functions such as triplet loss and pairwise loss. However, training via these approaches takes a long time, and they have poor accuracy. Additionally, the representations learned through them tend to tighten up in the embedded space and lose generalizability to unseen classes. This paper proposes a noise-assisted representation learning method for fine-grained image retrieval to mitigate these issues. In the proposed work, class manifold learning is performed in which positive pairs are created with a noise insertion operation instead of tightening class clusters, and other instances within the same cluster are treated as negatives. A loss function is then defined to penalize the case where the distance between instances of the same class becomes too small relative to the noise pair of that class in the embedded space. The proposed approach is validated on the CARS-196 and CUB-200 datasets and achieves better retrieval results (85.38% recall@1 for CARS-196 and 70.13% recall@1 for CUB-200) compared with other existing methods.
Keywords: convolutional network; zero-shot learning; fine-grained image retrieval; image representation; image retrieval; intra-class diversity; feature learning
17. Attentive Neighborhood Feature Augmentation for Semi-supervised Learning
Authors: Qi Liu, Jing Li, Xianmin Wang, Wenpeng Zhao. Intelligent Automation & Soft Computing (SCIE), 2023, No. 8, pp. 1753-1771 (19 pages)
Recent state-of-the-art semi-supervised learning (SSL) methods usually use data augmentations as core components. Such methods, however, are limited to simple transformations such as augmentations of the instance's naive representations or of its semantic representations. To tackle this problem, we offer a unique insight into data augmentations and propose a novel data-augmentation-based semi-supervised learning method, called Attentive Neighborhood Feature Augmentation (ANFA). The motivation of our method lies in the observation that the relationship between a given feature and its neighborhood may contribute to constructing more reliable transformations for the data, further helping the classifier distinguish ambiguous features from low-density regions. Specifically, we first project the labeled and unlabeled data points into an embedding space and then construct a neighbor graph that serves as a similarity measure based on the similar representations in the embedding space. We then employ an attention mechanism to transform the target features into augmented ones based on the neighbor graph. Finally, we formulate a novel semi-supervised loss by encouraging the predictions of interpolations of augmented features to be consistent with the corresponding interpolations of the predictions of the target features. We carried out experiments on the SVHN and CIFAR-10 benchmark datasets, and the results demonstrate that our method outperforms state-of-the-art methods when the number of labeled examples is limited.
Keywords: semi-supervised learning; attention mechanism; feature augmentation; consistency regularization
18. Entropy Based Feature Fusion Using Deep Learning for Waste Object Detection and Classification Model
Authors: Ehab Bahaudien Ashary, Sahar Jambi, Rehab B. Ashari, Mahmoud Ragab. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 12, pp. 2953-2969 (17 pages)
Object detection is the task of localizing and classifying objects in a video or image. In recent times, because of its widespread applications, it has gained importance. In the modern world, waste pollution is a significant environmental problem. The prominence of recycling is well known for both ecological and economic reasons, and the industry needs higher efficiency. Waste object detection using deep learning (DL) involves training a machine-learning method to classify and detect various types of waste in videos or images. This technology is used for several purposes: recycling and sorting waste, enhancing waste management, and reducing environmental pollution. Recent studies of automatic waste detection are difficult to compare because of the lack of benchmarks and broadly accepted standards concerning the employed data and metrics. Therefore, this study designs an Entropy-based Feature Fusion using Deep Learning for Waste Object Detection and Classification (EFFDL-WODC) algorithm. The presented EFFDL-WODC system inherits the concepts of feature fusion and DL techniques for the effectual recognition and classification of various kinds of waste objects. The system contains two major procedures: waste object detection and waste object classification. For object detection, the EFFDL-WODC technique uses a YOLOv7 object detector with a fusion-based backbone network. In addition, entropy feature fusion-based models such as VGG-16, SqueezeNet, and NASNet are used. Finally, the EFFDL-WODC technique uses a graph convolutional network (GCN) model for the classification of detected waste objects. The performance of the EFFDL-WODC approach was validated on a benchmark database. The comprehensive comparative results demonstrate the improved performance of the EFFDL-WODC technique over recent approaches.
Keywords: object detection; object classification; waste management; deep learning; feature fusion
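An illustrative sketch only, not the paper's exact formulation: one plausible reading of entropy-based fusion, in which feature vectors from several backbones are concatenated with weights inversely related to the Shannon entropy of their normalized activations.

```python
# Sketch: entropy-weighted concatenation of feature vectors from multiple backbones.
import numpy as np

def shannon_entropy(v: np.ndarray) -> float:
    p = np.abs(v) / (np.abs(v).sum() + 1e-12)       # normalize activations to a distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_fused(features: list) -> np.ndarray:
    entropies = np.array([shannon_entropy(f) for f in features])
    weights = 1.0 / (entropies + 1e-12)             # lower entropy -> larger weight
    weights /= weights.sum()
    return np.concatenate([w * f for w, f in zip(weights, features)])
```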
19. Feature Fusion-Based Deep Learning Network to Recognize Table Tennis Actions
Authors: Chih-Ta Yen, Tz-Yun Chen, Un-Hung Chen, Guo-Chang Wang, Zong-Xian Chen. Computers, Materials & Continua (SCIE, EI), 2023, No. 1, pp. 83-99 (17 pages)
A system for classifying four basic table tennis strokes using wearable devices and deep learning networks is proposed in this study. The wearable device consisted of a six-axis sensor, a Raspberry Pi 3, and a power bank. Multiple kernel sizes were used in a convolutional neural network (CNN) to evaluate their performance for extracting features. Moreover, a multi-scale CNN with two kernel sizes was used to perform feature fusion at different scales in a concatenated manner. The CNN achieved recognition of the four table tennis strokes. Experimental data were obtained from 20 research participants who wore sensors on the back of their hands while performing the four table tennis strokes in a laboratory environment. The data were collected to verify the performance of the proposed models for wearable devices. Finally, the sensor and multi-scale CNN designed in this study achieved accuracy and F1 scores of 99.58% and 99.16%, respectively, for the four strokes. The accuracy for five-fold cross-validation was 99.87%, which also shows that the multi-scale convolutional neural network has better robustness after five-fold cross-validation.
Keywords: wearable devices; deep learning; six-axis sensor; feature fusion; multi-scale convolutional neural networks; action recognition
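A hedged sketch of a two-branch multi-scale 1-D CNN over six-axis sensor windows, fusing features from two kernel sizes by concatenation as described above; the window length, kernel sizes, and filter counts are assumptions.

```python
# Sketch: two parallel Conv1D branches (two kernel sizes), concatenated feature fusion.
from tensorflow.keras import layers, models

def build_multiscale_cnn(timesteps: int = 128, channels: int = 6, n_classes: int = 4):
    inp = layers.Input(shape=(timesteps, channels))
    branches = []
    for k in (3, 7):                                   # two kernel sizes = two scales
        x = layers.Conv1D(32, k, padding="same", activation="relu")(inp)
        x = layers.MaxPooling1D(2)(x)
        x = layers.GlobalAveragePooling1D()(x)
        branches.append(x)
    fused = layers.Concatenate()(branches)             # feature fusion across scales
    out = layers.Dense(n_classes, activation="softmax")(fused)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```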
20. Hybrid Metaheuristics Feature Selection with Stacked Deep Learning-Enabled Cyber-Attack Detection Model
Authors: Mashael M. Asiri, Heba G. Mohamed, Mohamed K. Nour, Mesfer Al Duhayyim, Amira Sayed A. Aziz, Abdelwahed Motwakel, Abu Sarwar Zamani, Mohamed I. Eldesouki. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 5, pp. 1679-1694 (16 pages)
Due to the exponential increase in smart resource-limited devices and high-speed communication technologies, the Internet of Things (IoT) has received significant attention in different application areas. However, the IoT environment is highly susceptible to cyber-attacks because of memory, processing, and communication restrictions. Since traditional models are not adequate for accomplishing security in the IoT environment, recent developments in deep learning (DL) models prove beneficial. This study introduces a novel hybrid metaheuristics feature selection with stacked deep learning enabled cyber-attack detection (HMFS-SDLCAD) model. The major intention of the HMFS-SDLCAD model is to recognize the occurrence of cyberattacks in the IoT environment. At the preliminary stage, data pre-processing is carried out to transform the input data into a useful format. In addition, a salp swarm optimization based on particle swarm optimization (SSOPSO) algorithm is used for the feature selection process. Besides, a stacked bidirectional gated recurrent unit (SBiGRU) model is utilized for the identification and classification of cyberattacks. Finally, the whale optimization algorithm (WOA) is employed for optimal hyperparameter tuning. The experimental analysis of the HMFS-SDLCAD model is validated using a benchmark dataset and the results are assessed under several aspects. The simulation outcomes point out the improvements of the HMFS-SDLCAD model over recent approaches.
Keywords: cyberattacks; security; deep learning; Internet of Things; feature selection; data classification