Journal Articles

The following journal articles were found:

730 articles found in total.
1. Intrusion Detection System for Smart Industrial Environments with Ensemble Feature Selection and Deep Convolutional Neural Networks (Cited: 1)
Authors: Asad Raza, Shahzad Memon, Muhammad Ali Nizamani, Mahmood Hussain Shah. Intelligent Automation & Soft Computing, 2024, No. 3, pp. 545-566 (22 pages)
Smart industrial environments use the Industrial Internet of Things (IIoT) for their routine operations and transform their industrial operations with intelligent, data-driven approaches. However, IIoT devices are vulnerable to cyber threats and exploits due to their connectivity to the internet. Traditional signature-based IDS are effective in detecting known attacks but are unable to detect unknown, emerging attacks. Therefore, there is a need for an IDS that can learn from data and detect new threats. Ensemble Machine Learning (ML) and individual Deep Learning (DL) based IDS have been developed; these individual models achieved low accuracy, but their performance can be improved with the ensemble stacking technique. In this paper, we propose a Deep Stacked Neural Network (DSNN) based IDS, which consists of two stacked Convolutional Neural Network (CNN) models as base learners and Extreme Gradient Boosting (XGB) as the meta learner. The proposed DSNN model was trained and evaluated with the next-generation dataset TON_IoT. Several pre-processing techniques were applied to prepare the dataset for the model, including ensemble feature selection and the SMOTE technique. Accuracy, precision, recall, F1-score, and false positive rate were used to evaluate the performance of the proposed ensemble model. Our experimental results showed that the accuracy for binary classification is 99.61%, which is better than the baseline individual DL and ML models. In addition, the proposed IDS model has been compared with similar models, and the proposed DSNN achieved better performance metrics than the others. The proposed DSNN model will be used to develop enhanced IDS for threat mitigation in smart industrial environments.
Keywords: Industrial Internet of Things; smart industrial environment; cyber-attacks; convolutional neural network; ensemble learning
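As a rough illustration of the SMOTE oversampling step mentioned in this abstract (a generic NumPy sketch, not the authors' pipeline; the function and array names are hypothetical), a synthetic minority-class sample is interpolated between a minority point and one of its nearest minority neighbors:

```python
import numpy as np

def smote_sample(X_min, k=2, rng=None):
    """Generate one synthetic minority-class sample (the basic SMOTE step)."""
    rng = rng or np.random.default_rng(0)
    i = int(rng.integers(len(X_min)))        # pick a random minority sample
    x = X_min[i]
    d = np.linalg.norm(X_min - x, axis=1)    # distances to other minority points
    d[i] = np.inf                            # exclude the point itself
    nn = np.argsort(d)[:k]                   # its k nearest minority neighbors
    neighbor = X_min[nn[int(rng.integers(k))]]
    gap = rng.random()                       # random point on the segment
    return x + gap * (neighbor - x)
```

Repeating this until the classes are balanced is the essence of SMOTE; production code would typically use `imblearn.over_sampling.SMOTE` rather than a hand-rolled loop.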
2. Secure Digital Image Watermarking Technique Based on ResNet-50 Architecture
Authors: Satya Narayan Das, Mrutyunjaya Panda. Intelligent Automation & Soft Computing, 2024, No. 6, pp. 1073-1100 (28 pages)
In today's world of massive data and interconnected networks, it is crucial to develop a secure and efficient digital watermarking method to protect the copyrights of digital content. Existing research primarily focuses on deep learning-based approaches to improve the quality of watermarked images, but they have some flaws. To overcome this, a deep learning digital image watermarking model with highly secure algorithms is proposed to secure the digital image. Recently, quantum logistic maps, which combine the concept of quantum computing with traditional techniques, have been considered a niche and promising area of research that has attracted researchers' attention to further research in digital watermarking. This research uses the chaotic behaviour of the quantum logistic map with the Rivest-Shamir-Adleman (RSA) and Secure Hash (SHA-3) algorithms for a robust watermark embedding process, where a watermark is embedded into the host image. This way, the quantum chaos method not only helps limit the chance of tampering with the image content through reverse engineering but also assists in maintaining a high level of imperceptibility and strong robustness with efficient extraction or detection of watermark images. Lifting Wavelet Transformation (LWT) is a powerful and computationally efficient version of the traditional Discrete Wavelet Transform (DWT), where the host image is divided into four sub-bands to offer a multi-resolution view of an image with greater flexibility in watermarking methodologies. Furthermore, considering robustness against attacks, a pre-trained Residual Neural Network (ResNet-50), a convolutional neural network 50 layers deep, is used to better learn the complex features and efficiently extract the watermark from the image. By integrating the RSA and SHA-3 algorithms, the proposed model demonstrates improved imperceptibility, robustness, and accuracy in watermark extraction compared to traditional methods. It achieves a Peak Signal-to-Noise Ratio (PSNR) of 49.83 dB, a Structural Similarity Index Measure (SSIM) of 0.98, and a Number of Pixels Change Rate (NPCR) of 99.79%. These results reflect the model's effectiveness in delivering superior quality and security. Consequently, our proposed approach offers accurate results, exceptional invisibility, and enhanced robustness compared to existing digital image watermarking techniques.
Keywords: image watermarking; quantum logistics; Rivest-Shamir-Adleman (RSA); Secure Hash (SHA-3); Lifting Wavelet Transformation (LWT); ResNet-50; deep learning; secure communication
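The "lifting" idea behind LWT can be shown in one dimension with the Haar wavelet (an illustrative sketch only; the paper applies a 2-D transform to images): the signal is split into even and odd samples, a predict step yields the detail coefficients, and an update step yields the approximation sub-band. The scheme is exactly invertible.

```python
import numpy as np

def haar_lift(x):
    """One level of the 1-D Haar lifting scheme (split, predict, update)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: detail coefficients
    s = even + d / 2        # update: approximation (pairwise average)
    return s, d

def haar_unlift(s, d):
    """Undo the lifting steps to reconstruct the signal exactly."""
    even = s - d / 2
    odd = d + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each step only adds a function of the other half of the samples, lifting needs no extra memory and inverts by reversing the steps with flipped signs, which is what makes it cheaper than the classical convolution-based DWT.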
3. Performance Evaluation of Multi-Agent Reinforcement Learning Algorithms
Authors: Abdulghani M. Abdulghani, Mokhles M. Abdulghani, Wilbur L. Walters, Khalid H. Abed. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 337-352 (16 pages)
Multi-Agent Reinforcement Learning (MARL) has proven to be successful in cooperative assignments. MARL is used to investigate how autonomous agents with the same interests can connect and act in one team. MARL cooperation scenarios are explored in recreational cooperative augmented reality environments, as well as real-world scenarios in robotics. In this paper, we explore the realm of MARL and its potential applications in cooperative assignments. Our focus is on developing a multi-agent system that can collaborate to attack or defend against enemies and achieve victory with minimal damage. To accomplish this, we utilize the StarCraft Multi-Agent Challenge (SMAC) environment and train four MARL algorithms: Q-learning with Mixtures of Experts (QMIX), Value-Decomposition Network (VDN), Multi-Agent Proximal Policy Optimizer (MAPPO), and Multi-Agent Actor Attention Critic (MAA2C). These algorithms allow multiple agents to cooperate in a specific scenario to achieve the targeted mission. Our results show that the QMIX algorithm outperforms the other three algorithms in the attacking scenario, while the VDN algorithm achieves the best results in the defending scenario. Specifically, the VDN algorithm reaches the highest value of battle-won mean and the lowest value of dead-allies mean. Our research demonstrates the potential for MARL algorithms to be used in real-world applications, such as controlling multiple robots to provide helpful services or coordinating teams of agents to accomplish tasks that would be impossible for a human to do. The SMAC environment provides a unique opportunity to test and evaluate MARL algorithms in a challenging and dynamic environment, and our results show that these algorithms can be used to achieve victory with minimal damage.
Keywords: reinforcement learning; RL; multi-agent; MARL; SMAC; VDN; QMIX; MAPPO
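Of the four algorithms compared above, VDN rests on the simplest idea: the team value Q_tot is modeled as the sum of per-agent utilities, so the greedy joint action decomposes into each agent's individual argmax. A toy sketch of that additivity assumption (not the SMAC training code; the Q-values below are made up):

```python
import numpy as np

def vdn_greedy(per_agent_q):
    """VDN decomposition: Q_tot(s, a1..an) = sum_i Q_i(s, a_i), so the
    greedy joint action is just each agent's own best action."""
    actions = [int(np.argmax(q)) for q in per_agent_q]          # per-agent argmax
    q_tot = float(sum(q[a] for q, a in zip(per_agent_q, actions)))
    return actions, q_tot
```

This additivity is what makes VDN cheap to train with a single team reward, and it is also its main limitation: QMIX relaxes it by mixing the per-agent values through a monotonic network instead of a plain sum.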
4. Systematic Cloud-Based Optimization: Twin-Fold Moth Flame Algorithm for VM Deployment and Load-Balancing
Authors: Umer Nauman, Yuhong Zhang, Zhihui Li, Tong Zhen. Intelligent Automation & Soft Computing, 2024, No. 3, pp. 477-510 (34 pages)
Cloud computing has gained significant recognition due to its ability to provide a broad range of online services and applications. Nevertheless, existing commercial cloud computing models demonstrate an appropriate design by concentrating computational assets, such as preservation and server infrastructure, in a limited number of large-scale worldwide data facilities. Optimizing the deployment of virtual machines (VMs) is crucial in this scenario to ensure system dependability, performance, and minimal latency. A significant barrier in the present scenario is load distribution, particularly when striving for improved energy consumption in a hypothetical grid computing framework. This design employs load-balancing techniques to allocate different user workloads across several virtual machines. To address this challenge, we propose using the twin-fold moth flame technique, which serves as a very effective optimization technique. The twin-fold moth flame method was intentionally designed to consider various restrictions, including energy efficiency, lifespan analysis, and resource expenditures. It provides a thorough approach to evaluating total costs in the cloud computing environment. When assessing the efficacy of our suggested strategy, the study analyzes significant metrics such as energy efficiency, lifespan analysis, and resource expenditures. This investigation aims to enhance cloud computing techniques by developing a new optimization algorithm that considers multiple factors for effective virtual machine placement and load balancing. The proposed work demonstrates notable improvements of 12.15%, 10.68%, 8.70%, 13.29%, 18.46%, and 33.39% for 40-node data over the artificial bee colony-bat algorithm, ant colony optimization, crow search algorithm, krill herd, whale optimization genetic algorithm, and improved Lévy-based whale optimization algorithm, respectively.
Keywords: optimizing cloud computing; deployment of virtual machines; load-balancing; twin-fold moth flame algorithm; grid computing; computational resource distribution; data virtualization
5. Systematic Review: Load Balancing in Cloud Computing by Using Metaheuristic-Based Dynamic Algorithms
Authors: Darakhshan Syed, Ghulam Muhammad, Safdar Rizvi. Intelligent Automation & Soft Computing, 2024, No. 3, pp. 437-476 (40 pages)
Cloud computing has the ability to provide on-demand access to a shared resource pool. It has completely changed the way businesses are managed, applications are implemented, and services are provided. The rise in popularity has led to a significant increase in user demand for services. However, in cloud environments, efficient load balancing is essential to ensure optimal performance and resource utilization. This systematic review targets a detailed description of load balancing techniques, including static and dynamic load balancing algorithms. Specifically, metaheuristic-based dynamic load balancing algorithms are identified as the optimal solution in the case of increased traffic. In a cloud-based context, this paper describes load balancing measurements, including the benefits and drawbacks associated with the selected load balancing techniques. It also summarizes the algorithms based on implementation, time complexity, adaptability, associated issue(s), and targeted QoS parameters. Additionally, the analysis evaluates the tools and instruments utilized in each investigated study. Moreover, a comparative analysis among static, traditional dynamic, and metaheuristic algorithms based on response time, using the CloudSim simulation tool, is also performed. Finally, the key open problems and potential directions for state-of-the-art metaheuristic-based approaches are also addressed.
Keywords: cloud computing; load balancing; metaheuristic algorithm; dynamic algorithm; load balancer; QoS
6. Robot Vision over CosGANs to Enhance Performance with Source-Free Domain Adaptation Using Advanced Loss Function
Authors: Laviza Falak Naz, Rohail Qamar, Raheela Asif, Muhammad Imran, Saad Ahmed. Intelligent Automation & Soft Computing, 2024, No. 5, pp. 855-887 (33 pages)
Domain shift occurs when the data used in training does not match the data the model is later applied to under similar conditions. Domain shift reduces accuracy. To prevent this, domain adaptation is performed, which adapts the pre-trained model to the target domain. In real scenarios, labels for target data are rarely available, resulting in unsupervised domain adaptation. Herein, we propose an innovative approach in which source-free domain adaptation models and Generative Adversarial Networks (GANs) are integrated to improve the performance of computer vision and robotic vision-based systems. The Cosine Generative Adversarial Network (CosGAN) is developed as a GAN that uses cosine embedding loss to handle issues associated with unsupervised source-free domain adaptation. For a less complex architecture, the CosGAN training process has two steps that produce results almost comparable to other state-of-the-art techniques. The efficiency of CosGAN was compared by conducting experiments on benchmark datasets. The approach was evaluated on different datasets, and experimental results show superiority over existing state-of-the-art methods in terms of accuracy as well as generalization ability. This technique has numerous applications, including wheeled robots, autonomous vehicles, warehouse automation, and all image-processing-based automation tasks, so it can reshape the field of robotic vision with its ability to make robots adapt to new tasks and environments efficiently without requiring additional labeled data. It lays the groundwork for future expansions in robotic vision and applications. Although GANs provide a variety of outstanding features, they also increase the risk of instability and over-fitting of the training data, making the training difficult to converge.
Keywords: cosine generative adversarial network; cosine embedding loss; generative adversarial networks; source-free domain adaptation; unsupervised learning; hyper-parameter
7. A Deep Learning-Based Automated Approach of Schizophrenia Detection from Facial Micro-Expressions
Authors: Anum Saher, Ghulam Gilanie, Sana Cheema, Akkasha Latif, Syeda Naila Batool, Hafeez Ullah. Intelligent Automation & Soft Computing, 2024, No. 6, pp. 1053-1071 (19 pages)
Schizophrenia is a severe mental illness responsible for many of the world's disabilities. It significantly impacts human society; thus, rapid and efficient identification is required. This research aims to diagnose schizophrenia directly from a high-resolution camera, which can capture the subtle micro facial expressions that are difficult to spot with the naked eye. In a clinical study by a team of experts at Bahawal Victoria Hospital (BVH), Bahawalpur, Pakistan, there were 300 people with schizophrenia and 299 healthy subjects. Videos of these participants were captured and converted into frames using the OpenFace tool. Additionally, pose, gaze, Action Units (AUs), and landmarked features were extracted into Comma Separated Values (CSV) files. Aligned faces were used to detect schizophrenia by the proposed and the pre-trained Convolutional Neural Network (CNN) models, i.e., VGG16, MobileNet, EfficientNet, GoogLeNet, and ResNet50. Moreover, the Vision Transformer, Swin Transformer, big transformer, and vision transformer without attention were also used to train models on the customized dataset. The CSV files were used to train a model using logistic regression, decision tree, random forest, gradient boosting, and support vector machine classifiers. Moreover, the parameters of the proposed CNN architecture were optimized using the Particle Swarm Optimization algorithm. The experimental results showed a validation accuracy of 99.6% for the proposed CNN model. The results demonstrated that the reported method is superior to previous methodologies. The model can be deployed in a real-time environment.
Keywords: schizophrenia; deep learning; machine learning; facial expressions; transformers; particle swarm optimization (PSO) algorithm
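The final step in this abstract, tuning the CNN's parameters with Particle Swarm Optimization, can be illustrated with a minimal global-best PSO minimizing a toy sphere function in place of validation loss (a generic sketch; the swarm size, inertia weight 0.7, and acceleration constants 1.5 are conventional defaults, not the study's settings):

```python
import numpy as np

def pso(f, dim=2, n=20, iters=200, seed=0):
    """Minimal global-best particle swarm optimizer for minimizing f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))            # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest = x.copy()                            # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)    # personal best values
    for _ in range(iters):
        g = pbest[np.argmin(pbest_val)]         # global best position
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
    return pbest[np.argmin(pbest_val)], float(pbest_val.min())
```

In a hyperparameter-tuning setting like the paper's, each particle position would encode candidate CNN settings (learning rate, filter counts, etc.) and `f` would return the validation error of a model trained with them.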
8. Malware Attacks Detection in IoT Using Recurrent Neural Network (RNN)
Authors: Abeer Abdullah Alsadhan, Abdullah A. Al-Atawi, Hanen Karamti, Abid Jameel, Islam Zada, Tan N. Nguyen. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 135-155 (21 pages)
IoT (Internet of Things) devices are being used more and more in a variety of businesses and for a variety of tasks, such as environmental data collection in both civilian and military situations. They are a desirable attack target for malware intended to infect specific IoT devices due to their growing use in a variety of applications and their increasing computational and processing power. In this study, we investigate the possibility of detecting IoT malware using recurrent neural networks (RNNs). In the proposed method, an RNN is used to investigate the execution operation codes (OpCodes) of ARM-based Internet of Things applications. To train our algorithms, we employ a dataset of IoT applications that includes 281 malicious and 270 benign pieces of software. The trained model is then put to the test using 100 brand-new IoT malware samples, to which the model had not previously been exposed, across three separate LSTM configurations. Detecting newly crafted malware samples with 2-layer neurons had the highest accuracy (98.18%) in the 10-fold cross-validation experiment. A comparison of the LSTM technique to other machine learning classifiers shows that it yields the best results.
Keywords: malware; malicious code; code obfuscation; IoT; machine learning; deep learning
9. Evaluating the Effectiveness of Graph Convolutional Network for Detection of Healthcare Polypharmacy Side Effects
Authors: Omer Nabeel Dara, Tareq Abed Mohammed, Abdullahi Abdu Ibrahim. Intelligent Automation & Soft Computing, 2024, No. 6, pp. 1007-1033 (27 pages)
Healthcare polypharmacy is routinely used to treat numerous conditions; however, it often leads to unanticipated adverse consequences owing to complicated medication interactions. This paper provides a graph convolutional network (GCN)-based model for identifying adverse effects in polypharmacy by integrating pharmaceutical data from electronic health records (EHR). The GCN framework analyzes the complicated links between drugs to forecast the possibility of harmful drug interactions. Experimental assessments reveal that the proposed GCN model surpasses existing machine learning approaches, reaching an accuracy (ACC) of 91%, an area under the receiver operating characteristic curve (AUC) of 0.88, and an F1-score of 0.83. Furthermore, the overall accuracy of the model reached 98.47%. These findings imply that the GCN model is helpful for monitoring individuals receiving polypharmacy. Future research should concentrate on improving the model and extending datasets for therapeutic applications.
Keywords: polypharmacy; side effects; drug-drug interactions; graph convolutional networks; deep learning; medication network
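A single propagation step of the graph convolutional network described above follows the standard Kipf-Welling rule, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W). Below is a minimal NumPy sketch over a hypothetical 4-drug interaction graph (not the paper's EHR data or architecture):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees (with self-loops)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

Stacking a few such layers lets each drug's representation absorb information from its interaction neighborhood, which is what the model then scores for adverse-interaction likelihood.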
10. A Study on Optimizing the Double-Spine Type Flow Path Design for the Overhead Transportation System Using Tabu Search Algorithm
Authors: Nguyen Huu Loc Khuu, Thuy Duy Truong, Quoc Dien Le, Tran Thanh Cong Vu, Hoa Binh Tran, Tuong Quan Vo. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 255-279 (25 pages)
Optimizing Flow Path Design (FPD) is a popular research area in transportation system design, but its application to Overhead Transportation Systems (OTSs) has been limited. This study focuses on optimizing a double-spine flow path design for OTSs with 10 stations by minimizing the total travel distance for both loaded and empty flows. We employ transportation methods, specifically the North-West Corner and Stepping-Stone methods, to determine empty vehicle travel flows. Additionally, the Tabu Search (TS) algorithm is applied to branch the 10 stations into two main layout branches. The results obtained from our proposed method demonstrate a reduction in the objective function value compared to the initial feasible solution. Furthermore, we explore how changes in the parameters of the TS algorithm affect the optimal result. We validate the feasibility of our approach by comparing it with relevant literature and conducting additional tests on layouts with 20 and 30 stations.
Keywords: overhead transportation systems; tabu search; double-spine layout; transportation method; empty travel; flow path design
11. A Deep Transfer Learning Approach for Addressing Yaw Pose Variation to Improve Face Recognition Performance
Authors: M. Jayasree, K. A. Sunitha, A. Brindha, Punna Rajasekhar, G. Aravamuthan, G. Joselin Retnakumar. Intelligent Automation & Soft Computing, 2024, No. 4, pp. 745-764 (20 pages)
Identifying faces in non-frontal poses presents a significant challenge for face recognition (FR) systems. In this study, we delved into the impact of yaw pose variations on these systems and devised a robust method for detecting faces across a wide range of angles from 0° to ±90°. We initially selected the most suitable feature vector size by integrating the Dlib, FaceNet (Inception-v2), and Support Vector Machine (SVM) + K-Nearest Neighbors (KNN) algorithms. To train and evaluate this feature vector, we used two datasets: the Labeled Faces in the Wild (LFW) benchmark data and the Robust Shape-Based FR System (RSBFRS) real-time data, which contained face images with varying yaw poses. After selecting the best feature vector, we developed a real-time FR system to handle yaw poses. The proposed FaceNet architecture achieved recognition accuracies of 99.7% and 99.8% for the LFW and RSBFRS datasets, respectively, with 128 feature vector dimensions and minimum Euclidean distance thresholds of 0.06 and 0.12. The FaceNet + SVM and FaceNet + KNN classifiers achieved classification accuracies of 99.26% and 99.44%, respectively. The 128-dimensional embedding vector showed the highest recognition rate among all dimensions. These results demonstrate the effectiveness of our proposed approach in enhancing FR accuracy, particularly in real-world scenarios with varying yaw poses.
Keywords: face recognition; pose variations; transfer learning method; yaw poses; FaceNet; Inception-v2
12. Machine Learning Empowered Security and Privacy Architecture for IoT Networks with the Integration of Blockchain
Authors: Sohaib Latif, M. Saad Bin Ilyas, Azhar Imran, Hamad Ali Abosaq, Abdulaziz Alzubaidi, Vincent Karovic Jr. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 353-379 (27 pages)
The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection, owing to their vulnerability to single points of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices. Previous machine learning approaches were also unable to detect denial-of-service (DoS) attacks. This study introduces a novel decentralized and secure framework for blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify the signature for access. Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to effectively validate devices with intelligent contracts as trusted or untrusted, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results show that the suggested framework achieves accuracy, precision, sensitivity, recall, and F-measure of 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Keywords: machine learning; Internet of Things; blockchain; data privacy; security; Industry 4.0
13. ABMRF: An Ensemble Model for Author Profiling Based on Stylistic Features Using Roman Urdu
Authors: Aiman, Muhammad Arshad, Bilal Khan, Khalil Khan, Ali Mustafa Qamar, Rehan Ullah Khan. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 301-317 (17 pages)
This study explores the area of Author Profiling (AP) and its importance in several industries, including forensics, security, marketing, and education. A key component of AP is the extraction of useful information from text, with an emphasis on the writers' ages and genders. To improve the accuracy of AP tasks, the study develops an ensemble model dubbed ABMRF that combines AdaBoostM1 (ABM1) and Random Forest (RF). The work uses an extensive technique that involves text-message dataset pretreatment, model training, and assessment. Several machine learning (ML) algorithms, including Composite Hypercube on Random Projection (CHIRP), Decision Trees (J48), Naïve Bayes (NB), K-Nearest Neighbor, AdaBoostM1, NB-Updatable, RF, and ABMRF, are compared to evaluate their effectiveness in classifying age and gender. The findings demonstrate that ABMRF regularly beats the competition, with a gender classification accuracy of 71.14% and an age classification accuracy of 54.29%. Additional metrics like precision, recall, F-measure, Matthews Correlation Coefficient (MCC), and accuracy support ABMRF's outstanding performance in age and gender profiling tasks. This study demonstrates the usefulness of ABMRF as an ensemble model for author profiling and highlights its possible uses in marketing, law enforcement, and education. The results emphasize the effectiveness of ensemble approaches in enhancing author profiling task accuracy, particularly when it comes to age and gender identification.
Keywords: machine learning; author profiling; AdaBoostM1; random forest; ensemble learning; text classification
14. A New Framework for Scholarship Predictor Using a Machine Learning Approach
Authors: Bushra Kanwal, Rana Saud Shoukat, Saif Ur Rehman, Mahwish Kundi, Tahani AlSaedi, Abdulrahman Alahmadi. Intelligent Automation & Soft Computing, 2024, No. 5, pp. 829-854 (26 pages)
Education is the basis of the survival and growth of any state, but due to resource scarcity, students, particularly at the university level, are forced into a difficult situation. Scholarships are the most significant financial aid mechanisms developed to overcome such obstacles and assist students in continuing their higher studies. In this study, the convoluted situation of scholarship eligibility criteria, including parental income, responsibilities, and academic achievements, is addressed. In an attempt to optimize the scholarship selection process, numerous machine learning algorithms, including Support Vector Machines, Neural Networks, K-Nearest Neighbors, and the C4.5 algorithm, were applied. The C4.5 algorithm, owing to its efficiency in predicting scholarship beneficiaries from these factors, achieved a phenomenal 95.62% prediction accuracy on extensive data from a well-esteemed government-sector university in Pakistan. This percentage is 4% and 15% better than the other methods tested, and it depicts the potential of the technique to enhance the scholarship selection process. The resulting Decision Support System (DSS) would not only save administrative cost but would also put a fair and transparent process in place. In a world where access to education is key, this research provides data-oriented consolidation to ensure that deserving students are helped and allowed to get the financial assistance they need to reach higher studies and bridge the gap between the demands of the day and the institutions of intellect.
Keywords: education; data mining; C4.5 algorithm; decision support system; scholarship guarantee; machine learning
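The C4.5 algorithm credited with the 95.62% result selects decision-tree splits by gain ratio, i.e., information gain normalized by the split information of the candidate feature. A minimal pure-Python sketch of that criterion, using hypothetical feature values rather than the study's scholarship dataset:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gain_ratio(feature, labels):
    """C4.5 split criterion: information gain / split information."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - cond          # information gain of the split
    split_info = entropy(feature)          # entropy of the feature itself
    return gain / split_info if split_info else 0.0
```

Dividing by the split information is what distinguishes C4.5 from its predecessor ID3: it penalizes features with many distinct values (e.g., an applicant ID), which plain information gain would otherwise favor.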
15. A Health State Prediction Model Based on Belief Rule Base and LSTM for Complex Systems
Authors: Yu Zhao, Zhijie Zhou, Hongdong Fan, Xiaoxia Han, Jie Wang, Manlin Chen. Intelligent Automation & Soft Computing, 2024, No. 1, pp. 73-91 (19 pages)
In industrial production and engineering operations, the health state of complex systems is critical, and predicting it can ensure normal operation. Complex systems have many monitoring indicators, complex coupling structures, and non-linear and time-varying characteristics, so it is a challenge to establish a reliable prediction model. The belief rule base (BRB) can fuse observed data and expert knowledge to establish a nonlinear relationship between input and output and has good modeling capabilities. Since each indicator of a complex system can reflect the health state to some extent, the BRB is built based on the causal relationship between system indicators and the health state to achieve the prediction. A health state prediction model based on BRB and long short-term memory (LSTM) for complex systems is proposed in this paper. Firstly, the LSTM is introduced to predict the trend of the indicators in the system. Secondly, the Density Peak Clustering (DPC) algorithm is used to determine referential values of indicators for the BRB, which effectively offsets the lack of expert knowledge. Then, the predicted values and expert knowledge are fused to construct the BRB, which predicts the health state of the system by inference. Finally, the effectiveness of the model is verified by a case study of a vehicle hydraulic pump.
关键词 Health state predicftion complex systems belief rule base expert knowledge LSTM density peak clustering
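The DPC step above, picking referential values for the BRB from monitoring data, can be sketched for a 1-D indicator. This follows the standard density-peaks idea (local density times distance to a denser point); the cutoff distance `d_c`, the number of referential values `k`, and the synthetic data are illustrative choices, not the paper's settings.

```python
import numpy as np

def dpc_referential_values(x, k=3, d_c=0.5):
    """Pick k referential values as density-peak centers of 1-D data x."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    rho = (dist < d_c).sum(axis=1) - 1       # local density (cutoff kernel)
    order = np.argsort(-rho)                 # indices by decreasing density
    delta = np.empty_like(x)
    delta[order[0]] = dist[order[0]].max()   # densest point: max distance
    for rank in range(1, len(x)):
        i = order[rank]
        # distance to the nearest point of higher (or equal, earlier) density
        delta[i] = dist[i, order[:rank]].min()
    gamma = rho * delta                      # density-peak score
    centers = x[np.argsort(gamma)[-k:]]
    return np.sort(centers)

rng = np.random.default_rng(1)
# Indicator samples clustered around three hypothetical operating levels
data = np.concatenate([rng.normal(m, 0.2, 100) for m in (1.0, 3.0, 5.0)])
print(dpc_referential_values(data, k=3))
```

The returned centers then serve as the referential values of that indicator in the BRB's antecedent attributes.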
Chase,Pounce,and Escape Optimization Algorithm
16
Authors: Adel Sabry Eesa. Intelligent Automation & Soft Computing, 2024, Issue 4, pp. 697-723 (27 pages)
While many metaheuristic optimization algorithms strive to address optimization challenges, they often grapple with the delicate balance between exploration and exploitation, leading to issues such as premature convergence, sensitivity to parameter settings, and difficulty in maintaining population diversity. In response to these challenges, this study introduces the Chase, Pounce, and Escape (CPE) algorithm, drawing inspiration from predator-prey dynamics. Unlike traditional optimization approaches, the CPE algorithm divides the population into two groups, each independently exploring the search space, to efficiently navigate complex problem domains and avoid local optima. By incorporating a unique search mechanism that integrates both the average of the best solution and the current solution, the CPE algorithm demonstrates superior convergence properties, and the inclusion of a pouncing process facilitates rapid movement towards optimal solutions. The effectiveness of the CPE algorithm is demonstrated through comprehensive evaluations across various optimization scenarios, including standard test functions, Congress on Evolutionary Computation (CEC)-2017 benchmarks, and real-world engineering challenges. Results consistently highlight the algorithm's performance, surpassing that of other well-known optimization techniques and achieving remarkable outcomes in terms of mean, best, and standard deviation values across different problem domains, underscoring its robustness and versatility.
Keywords: bio-inspired optimization, metaheuristic, chase pounce and escape optimizer, collective behavior, engineering design problems
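The two-group idea behind CPE can be illustrated with a minimal population-based minimizer. To be clear, the update rules below are hedged stand-ins, not the authors' exact equations: one group steps toward a blend of the best and current solutions (the "chase" mechanism the abstract describes), the other perturbs randomly to preserve diversity ("escape"); the pouncing process is omitted.

```python
import numpy as np

def sphere(x):
    """Standard test function: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def cpe_like_optimize(f, dim=5, pop=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(x) for x in X])
    best_i = int(fit.argmin())
    best, best_fit = X[best_i].copy(), float(fit[best_i])
    half = pop // 2
    for _ in range(iters):
        for i in range(pop):
            if i < half:
                # "chase": step toward the average of the best and current solutions
                target = (best + X[i]) / 2.0
                cand = X[i] + rng.random(dim) * (target - X[i])
            else:
                # "escape": random perturbation to maintain diversity
                cand = X[i] + rng.normal(0.0, 0.5, dim)
            fc = f(cand)
            if fc < fit[i]:  # greedy replacement
                X[i], fit[i] = cand, fc
                if fc < best_fit:
                    best, best_fit = cand.copy(), fc
    return best, best_fit

best, val = cpe_like_optimize(sphere)
print(f"best sphere value: {val:.3e}")
```

Even this simplified two-group split shows the intended division of labor: the chasing half intensifies around the incumbent best while the escaping half keeps exploring.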
Overfitting in Machine Learning:A Comparative Analysis of Decision Trees and Random Forests
17
Authors: Erblin Halabaku, Eliot Bytyçi. Intelligent Automation & Soft Computing, 2024, Issue 6, pp. 987-1006 (20 pages)
In an era of abundant data, machine learning has emerged as a pivotal tool for deciphering and managing this excess of information. This paper presents a comprehensive analysis of machine learning algorithms, focusing on the structure and efficacy of random forests in mitigating overfitting, a prevalent issue in decision tree models. It also introduces a novel approach to enhancing decision tree performance through an optimized pruning method called Adaptive Cross-Validated Alpha CCP (ACV-CCP). This method refines traditional cost-complexity pruning by streamlining the selection of the alpha parameter, leveraging cross-validation within the pruning process to achieve a reliable, computationally efficient alpha selection that generalizes well to unseen data. By enhancing computational efficiency and balancing model complexity, ACV-CCP allows decision trees to maintain predictive accuracy while minimizing overfitting, effectively narrowing the performance gap between decision trees and random forests. Our findings illustrate how ACV-CCP contributes to the robustness and applicability of decision trees, providing a valuable perspective on achieving computationally efficient and generalized machine learning models.
Keywords: artificial intelligence, decision tree, random forest, prune, overfitting
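The baseline that ACV-CCP refines, cost-complexity pruning with a cross-validated alpha, can be sketched directly with scikit-learn. The exact selection rule in the paper may differ; this sketch simply enumerates the pruning path's candidate alphas and keeps the one with the best mean cross-validation accuracy.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Candidate alphas from the cost-complexity pruning path of the full tree
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
# Drop the last alpha (prunes to a single node) and clip tiny negative
# values that can appear from floating-point round-off
alphas = np.clip(path.ccp_alphas[:-1], 0.0, None)

# Cross-validate each candidate alpha and keep the best
scores = [
    cross_val_score(
        DecisionTreeClassifier(random_state=0, ccp_alpha=a), X, y, cv=5
    ).mean()
    for a in alphas
]
best_alpha = float(alphas[int(np.argmax(scores))])
print(f"best ccp_alpha={best_alpha:.5f}, CV accuracy={max(scores):.3f}")
```

This brute-force loop is exactly the cost that an adaptive scheme like ACV-CCP aims to reduce while keeping the generalization benefit of cross-validated alpha selection.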
Optimal Scheduling of Multiple Rail Cranes in Rail Stations with Interference Crane Areas
18
Authors: Nguyen Vu Anh Duy, Nguyen Le Thai, Nguyen Huu Tho. Intelligent Automation & Soft Computing, 2024, Issue 1, pp. 15-31 (17 pages)
In this paper, we consider a multi-crane scheduling problem in rail stations, because crane operations directly influence the throughput of the stations. In particular, jobs are not only assigned to cranes but also sequenced for each crane to minimize the cranes' makespan, and a dual cycle of cranes is used to minimize the number of crane working cycles. The rail crane scheduling problems in this study are based on the movement of containers: we consider not only gantry moves but also trolley moves, and rehandle cases are included as well. A mathematical model of multi-crane scheduling is developed, and traditional and parallel simulated annealing (SA) are adapted to determine optimal scheduling solutions. Numerical examples are conducted to evaluate the applicability of the proposed algorithms, and the proposed parallel SA is verified by comparison with existing previous works. The results of the numerical computation highlight that the parallel SA algorithm outperformed the SA and gave better solutions than the other considered algorithms.
Keywords: multi-crane scheduling, logistics, containers, makespan, rail stations
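A single-crane slice of the SA approach above can be sketched as follows. The cost model (one-dimensional gantry travel between pickup and dropoff bays), the swap neighborhood, and the cooling schedule are illustrative simplifications; the paper's model additionally covers trolley moves, rehandles, dual cycling, and multiple interfering cranes.

```python
import math
import random

random.seed(42)
# (pickup, dropoff) bay positions for each job; values are illustrative
jobs = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(15)]

def makespan(seq):
    """Total crane travel: empty move to each pickup plus the loaded move."""
    pos, total = 0.0, 0.0
    for j in seq:
        pick, drop = jobs[j]
        total += abs(pos - pick) + abs(pick - drop)
        pos = drop
    return total

def anneal(t0=100.0, cooling=0.995, iters=20000):
    cur = list(range(len(jobs)))
    random.shuffle(cur)
    cur_cost = makespan(cur)
    best, best_cost, t = cur[:], cur_cost, t0
    for _ in range(iters):
        i, k = random.sample(range(len(jobs)), 2)
        cand = cur[:]
        cand[i], cand[k] = cand[k], cand[i]      # swap two jobs in the sequence
        cand_cost = makespan(cand)
        delta = cand_cost - cur_cost
        # accept improvements always, worse moves with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        t *= cooling
    return best, best_cost

seq, cost = anneal()
print(f"best makespan: {cost:.1f}")
```

A parallel SA variant, as in the paper, would run several such chains concurrently and periodically exchange their best sequences.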
Distributed Federated Split Learning Based Intrusion Detection System
19
Authors: Rasha Almarshdi, Etimad Fadel, Nahed Alowidi, Laila Nassef. Intelligent Automation & Soft Computing, 2024, Issue 5, pp. 949-983 (35 pages)
The Internet of Medical Things (IoMT) is one of the critical emerging applications of the Internet of Things (IoT). The huge increases in data generation and transmission across distributed networks make security one of the most important challenges facing IoMT networks, and Distributed Denial of Service (DDoS) attacks impact the availability of services to legitimate users. Intrusion Detection Systems (IDSs) based on Centralized Learning (CL) suffer from high training time and communication overhead, so IDSs based on distributed learning, such as Federated Learning (FL) or Split Learning (SL), have recently been used for intrusion detection. FL preserves data privacy while enabling collaborative model development; however, it suffers from high training time and communication overhead. SL, on the other hand, offers advantages in terms of computational resources, but it faces challenges such as communication overhead and potential security vulnerabilities at the split point. Federated Split Learning (FSL) has been proposed to overcome the problems of both FL and SL and to offer more secure, efficient, and scalable distributed systems. This paper proposes a novel distributed FSL (DFSL) system to detect DDoS attacks. The proposed DFSL enhances detection accuracy and reduces training time by designing an adaptive aggregation method based on an early stopping strategy. However, an increased number of clients leads to increased communication overhead, so we further propose a Multi-Node Selection (MNS) based Best Channel-Best l2-Norm (BC-BN2) selection scheme to reduce it. Two DL models are used to test the effectiveness of the proposed system, a Convolutional Neural Network (CNN) and a CNN with Long Short-Term Memory (LSTM), on two modern datasets. The performance of the proposed system is compared with three baseline distributed approaches: the FedAvg, Vanilla SL, and SplitFed algorithms. The proposed system outperforms the baselines with accuracies of 99.70% and 99.87% on the CICDDoS2019 and LITNET-2020 datasets, respectively, and its training time and communication overhead are 30% and 20% lower than those of the baseline algorithms.
Keywords: IDS, DFSL, DDoS attacks, CNN, CNN+LSTM
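The server-side aggregation that FL-family systems such as the one above build on can be sketched as FedAvg-style parameter averaging: each client trains locally, and the server averages parameters weighted by client sample counts. The model shapes and client sizes below are illustrative, not the paper's CNN or CNN-LSTM, and the paper's adaptive early-stopping aggregation is not reproduced here.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Sample-count-weighted average of per-client parameter lists."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two clients, each holding the same two parameter tensors locally
w1 = [np.ones((2, 2)), np.zeros(3)]
w2 = [3 * np.ones((2, 2)), np.ones(3)]
agg = fed_avg([w1, w2], client_sizes=[100, 300])
print(agg[0])  # every entry is 0.25 * 1 + 0.75 * 3 = 2.5
```

In an FSL setting the same averaging is applied only to the client-side portion of the split model, which is what keeps per-client computation low.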
Chaotic Elephant Herd Optimization with Machine Learning for Arabic Hate Speech Detection
20
Authors: Badriyya B. Al-onazi, Jaber S. Alzahrani, Najm Alotaibi, Hussain Alshahrani, Mohamed Ahmed Elfaki, Radwa Marzouk, Heba Mohsen, Abdelwahed Motwakel. Intelligent Automation & Soft Computing, 2024, Issue 3, pp. 567-583 (17 pages)
In recent years, the usage of social networking sites has increased considerably in the Arab world. It has empowered individuals to express their opinions, especially in politics. Furthermore, various organizations operating in Arab countries have embraced social media in their day-to-day business activities at different scales, which is attributed to business owners' understanding of social media's importance for business development. However, Arabic morphology is complicated to process due to the availability of nearly 10,000 roots and more than 900 patterns that act as the basis for verbs and nouns. Hate speech on online social networking sites has become a worldwide issue that reduces the cohesion of civil societies. Against this background, the current study develops a Chaotic Elephant Herd Optimization with Machine Learning for Hate Speech Detection (CEHOML-HSD) model for the Arabic language. The presented CEHOML-HSD model concentrates on identifying and categorising Arabic text as hate speech or normal. To attain this, the CEHOML-HSD model follows several sub-processes. At the initial stage, the model undergoes data pre-processing with the help of a TF-IDF vectorizer. Secondly, the Support Vector Machine (SVM) model is utilized to detect and classify hate speech texts written in Arabic. Lastly, the CEHO approach is employed to fine-tune the parameters of the SVM. The CEHO approach is developed by combining chaotic functions with the classical EHO algorithm, and the design of the CEHO algorithm for parameter tuning constitutes the novelty of this work. A widespread experimental analysis was executed to validate the enhanced performance of the proposed CEHOML-HSD approach, and the comparative study outcomes established its supremacy over other approaches.
Keywords: Arabic language, machine learning, elephant herd optimization, TF-IDF vectorizer, hate speech detection
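The TF-IDF plus SVM stages of the pipeline above can be sketched with scikit-learn. The CEHO parameter tuning is omitted (a fixed `C` stands in for the tuned value), and the tiny English toy corpus below is purely illustrative; the paper works on Arabic text, whose morphological complexity is precisely why pre-processing matters there.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Illustrative toy corpus with "hate" vs "normal" labels
texts = [
    "you are wonderful and kind",
    "have a great day friend",
    "I hate you and your people",
    "those people are worthless",
]
labels = ["normal", "normal", "hate", "hate"]

# TF-IDF vectorization feeding a linear-kernel SVM classifier
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear", C=1.0))
model.fit(texts, labels)
print(model.predict(["you people are worthless and I hate you"]))
```

In the paper's setting, CEHO would search over SVM hyperparameters such as `C` (and kernel parameters) instead of fixing them by hand.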