With new developments in Internet of Things (IoT), wearable, and sensing technology, the value of healthcare services has been enhanced. This evolution has brought significant changes from conventional medicine-based healthcare to real-time observation-based healthcare. Biomedical Electrocardiogram (ECG) signals are generally utilized in the examination and diagnosis of Cardiovascular Diseases (CVDs) since the procedure is quick and non-invasive. Due to the increasing number of patients in recent years, classifier efficiency is reduced by the high variance observed in ECG signal patterns obtained from patients. In such a scenario, computer-assisted automated diagnostic tools are important for the classification of ECG signals. The current study devises an Improved Bat Algorithm with Deep Learning Based Biomedical ECG Signal Classification (IBADL-BECGC) approach. To accomplish this, the proposed IBADL-BECGC model initially pre-processes the input signals. Besides, the IBADL-BECGC model applies the NASNet model to derive the features from test ECG signals. In addition, the Improved Bat Algorithm (IBA) is employed to optimally fine-tune the hyperparameters related to the NASNet approach. Finally, the Extreme Learning Machine (ELM) classification algorithm is executed to perform ECG classification. The presented IBADL-BECGC model was experimentally validated using a benchmark dataset. The comparison study outcomes established the improved performance of the IBADL-BECGC model over other existing methodologies, since the former achieved a maximum accuracy of 97.49%.
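The ELM classifier named in this abstract has a standard closed-form training recipe: random, untrained hidden-layer weights followed by a least-squares solve for the output weights. Below is a minimal sketch of that recipe; the toy data, hidden-layer width, and tanh activation are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    """Train a basic Extreme Learning Machine: random input weights,
    output weights solved in closed form via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem with one-hot targets (stand-in for ECG feature vectors)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
Y = np.zeros((100, 2)); Y[:50, 0] = 1; Y[50:, 1] = 1
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
```

Because only `beta` is fitted, training reduces to a single pseudoinverse, which is why ELMs are popular as fast final-stage classifiers on deep features.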
In recent years, the need for fast, efficient, and reliable wireless networks has increased dramatically. Numerous 5G networks have already been tested, while a few are in the early stages of deployment. In noncooperative communication scenarios, the recognition of digital signal modulations assists people in identifying communication targets and ensures effective management over them. The recent advancements in both Machine Learning (ML) and Deep Learning (DL) models demand the development of effective modulation recognition models with self-learning capability. In this background, the current research article designs a Deep Learning enabled Intelligent Modulation Recognition of Communication Signal (DLIMR-CS) technique for next-generation networks. The aim of the proposed DLIMR-CS technique is to classify different kinds of digitally-modulated signals. In addition, the fractal feature extraction process is applied with the help of the Sevcik Fractal Dimension (SFD) approach. Then, the extracted features are fed into the Deep Variational Autoencoder (DVAE) model for the classification of the modulated signals. In order to improve the classification performance of the DVAE model, the Tunicate Swarm Algorithm (TSA) is used to fine-tune the hyperparameters involved in the DVAE model. A wide range of simulations was conducted to establish the enhanced performance of the proposed DLIMR-CS model. The experimental outcomes confirmed the superior recognition rate of the DLIMR-CS model over recent state-of-the-art methods under different evaluation parameters.
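The Sevcik fractal dimension used here for feature extraction has a simple textbook form: map the waveform into the unit square, measure its arc length L over N samples, and compute FD = 1 + ln(L) / ln(2(N-1)). A minimal sketch (the two test signals are synthetic examples, not modulated waveforms from the paper):

```python
import math

def sevcik_fd(signal):
    """Sevcik fractal dimension: normalise the waveform into the unit
    square, measure its length L, then FD = 1 + ln(L)/ln(2*(N-1))."""
    n = len(signal)
    lo, hi = min(signal), max(signal)
    ys = [(v - lo) / (hi - lo) for v in signal]   # amplitude into [0, 1]
    xs = [i / (n - 1) for i in range(n)]          # time into [0, 1]
    L = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
            for i in range(n - 1))
    return 1.0 + math.log(L) / math.log(2 * (n - 1))

line = [i / 99 for i in range(100)]                               # smooth ramp
noise = [(37 * i * i + 11 * i) % 101 / 100 for i in range(100)]   # jagged signal
```

A smooth signal yields an FD near 1, while an irregular one pushes the value higher, which is what makes the measure useful as a one-number complexity feature.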
Mobile communication and Internet of Things (IoT) technologies have recently been established to collect data from human beings and the environment. The data collected can be leveraged to provide intelligent services through different applications. It is an extreme challenge to monitor disabled people from remote locations, because day-to-day events like falls often result in accidents. For a person with disabilities, a fall event is an important cause of mortality and post-traumatic complications. Therefore, detecting the fall events of disabled persons in smart homes at early stages is essential to provide the necessary support and increase their survival rate. The current study introduces a Whale Optimization Algorithm Deep Transfer Learning-Driven Automated Fall Detection (WOADTL-AFD) technique to improve the quality of life for persons with disabilities. The primary aim of the presented WOADTL-AFD technique is to identify and classify fall events to help disabled individuals. To attain this, the proposed WOADTL-AFD model initially uses a modified SqueezeNet feature extractor which proficiently extracts the feature vectors. In addition, the WOADTL-AFD technique classifies the fall events using an eXtreme Gradient Boosting (XGBoost) classifier. In the presented WOADTL-AFD technique, the WOA approach is used to fine-tune the hyperparameters involved in the modified SqueezeNet model. The proposed WOADTL-AFD technique was experimentally validated using benchmark datasets, and the results confirmed the superior performance of the proposed WOADTL-AFD method compared to other recent approaches.
Cloud Computing (CC) is the most promising and advanced technology for storing data and offering online services in an effective manner. When such fast-evolving technologies are used to protect computer-based systems from cyberattacks, they bring several advantages compared to conventional data protection methods. Some of the computer-based systems that effectively protect data include Cyber-Physical Systems (CPS), Internet of Things (IoT), mobile devices, desktop and laptop computers, and critical systems. Malicious software (malware) is a type of software that targets computer-based systems in order to launch cyberattacks and threaten the integrity, secrecy, and accessibility of information. The current study focuses on the design of an Optimal Bottleneck driven Deep Belief Network-enabled Cybersecurity Malware Classification (OBDDBN-CMC) model. The presented OBDDBN-CMC model intends to recognize and classify the malware that exists in IoT-based cloud platforms. To attain this, Z-score data normalization is utilized to scale the data into a uniform format. In addition, the BDDBN model is exploited for recognition and categorization of malware. To effectually fine-tune the hyperparameters related to the BDDBN model, the Grasshopper Optimization Algorithm (GOA) is applied. This scenario enhances the classification results and also shows the novelty of the current study. The experimental analysis was conducted on the OBDDBN-CMC model for validation, and the results confirmed the enhanced performance of the OBDDBN-CMC model over recent approaches.
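The Z-score normalization step mentioned above is a standard transform: each feature column is shifted to zero mean and rescaled to unit standard deviation. A minimal sketch (the sample column is illustrative):

```python
def zscore(column):
    """Z-score scaling: (x - mean) / std, giving zero mean, unit variance."""
    n = len(column)
    mean = sum(column) / n
    std = (sum((x - mean) ** 2 for x in column) / n) ** 0.5
    return [(x - mean) / std for x in column]

# Example column: mean 5, population std 2
scaled = zscore([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```

Unlike min-max scaling, Z-score normalization is not bounded to a fixed interval, but it is less distorted by a single extreme value.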
In recent times, cities are getting smart and can be managed effectively through diverse architectures and services. Smart cities have the ability to support smart medical systems that can cover distinct settings (i.e., smart hospitals, smart homes, and community health centres) and scenarios (e.g., rehabilitation, abnormal behavior monitoring, clinical decision-making, disease prevention and diagnosis, post-marketing surveillance, and prescription recommendation). The integration of Artificial Intelligence (AI) with recent technologies, for instance medical screening gadgets, is significant enough to deliver maximum performance and improved management services to handle chronic diseases. With the latest developments in digital data collection, AI techniques can be employed in the clinical decision-making process. On the other hand, Cardiovascular Disease (CVD) is one of the major illnesses that increase the mortality rate across the globe. Generally, wearables can be employed in healthcare systems, which instigates the development of CVD detection and classification. With this motivation, the current study develops an Artificial Intelligence Enabled Decision Support System for CVD Detection and Classification in an e-healthcare environment, abbreviated as the AIDSS-CDDC technique. The proposed AIDSS-CDDC model enables Internet of Things (IoT) devices for healthcare data collection. Then, the collected data is saved in a cloud server for examination. Following this, training and testing processes are executed to determine the patient's health condition. To accomplish this, the presented AIDSS-CDDC model employs data preprocessing and an Improved Sine Cosine Optimization based Feature Selection (ISCO-FS) technique. In addition, the Adam optimizer with an Autoencoder Gated Recurrent Unit (AE-GRU) model is employed for detection and classification of CVD. The experimental results highlight that the proposed AIDSS-CDDC model is a promising performer compared to other existing models.
The text classification process has been extensively investigated in various languages, especially English. Text classification models are vital in several Natural Language Processing (NLP) applications. The Arabic language has a lot of significance; for instance, it is the fourth most-used language on the internet and the sixth official language of the United Nations. However, there are few studies on the text classification process in Arabic, and only a handful of text classification studies have been published earlier for the Arabic language. In general, researchers face two challenges in the Arabic text classification process: low accuracy and high dimensionality of the features. In this study, an Automated Arabic Text Classification using Hyperparameter Tuned Hybrid Deep Learning (AATC-HTHDL) model is proposed. The major goal of the proposed AATC-HTHDL method is to identify different class labels for Arabic text. The first step in the proposed model is to pre-process the input data to transform it into a useful format. The Term Frequency-Inverse Document Frequency (TF-IDF) model is applied to extract the feature vectors. Next, the Convolutional Neural Network with Recurrent Neural Network (CRNN) model is utilized to classify the Arabic text. In the final stage, the Crow Search Algorithm (CSA) is applied to fine-tune the CRNN model's hyperparameters, showing the work's novelty. The proposed AATC-HTHDL model was experimentally validated under different parameters, and the outcomes established the supremacy of the proposed AATC-HTHDL model over other approaches.
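The TF-IDF step above weights each term's in-document frequency by how rare the term is across the corpus. Below is the plain textbook form; production libraries usually add smoothing and L2 normalization, and the tiny English corpus is purely illustrative.

```python
import math

def tf_idf(docs):
    """Plain TF-IDF vectors: term frequency times log(N / document frequency)."""
    vocab = sorted({w for d in docs for w in d.split()})
    n = len(docs)
    df = {w: sum(w in d.split() for d in docs) for w in vocab}
    vectors = []
    for d in docs:
        words = d.split()
        vectors.append([(words.count(w) / len(words)) * math.log(n / df[w])
                        for w in vocab])
    return vocab, vectors

docs = ["the cat sat", "the dog sat", "the cat ran"]
vocab, vecs = tf_idf(docs)
```

A word appearing in every document ("the") gets IDF log(1) = 0 and so carries no weight, while a word unique to one document ("dog") scores highest there; this is exactly the behavior that makes TF-IDF useful as a feature extractor before a classifier.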
Sign language includes the motion of the arms and hands to communicate with people with hearing disabilities. Several models have been available in the literature for sign language detection and classification with enhanced outcomes, but the latest advancements in computer vision enable us to perform sign/gesture recognition using deep neural networks. This paper introduces an Arabic Sign Language Gesture Classification using Deer Hunting Optimization with Machine Learning (ASLGC-DHOML) model. The presented ASLGC-DHOML technique mainly concentrates on recognizing and classifying sign language gestures. The presented ASLGC-DHOML model primarily pre-processes the input gesture images and generates feature vectors using the densely connected network (DenseNet169) model. For gesture recognition and classification, a multilayer perceptron (MLP) classifier is exploited to recognize and classify the existence of sign language gestures. Lastly, the DHO algorithm is utilized for parameter optimization of the MLP model. The experimental results of the ASLGC-DHOML model were tested and the outcomes inspected under distinct aspects. The comparison analysis highlighted that the ASLGC-DHOML method results in enhanced gesture classification over other techniques, with a maximum accuracy of 92.88%.
Developments in data storage and sensor technologies have allowed the accumulation of a large volume of data from industrial systems. Both structural and non-structural data of industrial systems are collected, covering data formats such as time series, text, images, sound, etc. Several earlier studies on gearbox condition monitoring were mostly qualitative, and certain techniques need expert guidance to conclude on the condition of gearboxes. In this study, an improved symbiotic organism search with deep learning enabled fault diagnosis (ISOSDL-FD) model is proposed for gearbox fault detection in industrial systems. The proposed ISOSDL-FD technique majorly concentrates on the identification and classification of faults in gearbox data. In addition, a Fast kurtogram based time-frequency analysis is used for revealing the energy present in the machinery signals in the time-frequency representation. Moreover, the deep bidirectional recurrent neural network (DBiRNN) is applied for fault detection and classification. At last, the ISOS approach is derived for optimal hyperparameter tuning of the DL method so that the classification performance is improved. To illustrate the improved performance of the ISOSDL-FD algorithm, a comprehensive experimental analysis was performed. The experimental results stated the betterment of the ISOSDL-FD algorithm over current techniques.
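The Fast kurtogram mentioned above builds on spectral kurtosis: impulsive fault signatures (periodic impacts from a damaged tooth) drive the fourth standardized moment far above that of smooth vibration. A minimal illustration of the underlying statistic on synthetic signals; the full kurtogram additionally sweeps this measure across filter banks at multiple bandwidths.

```python
import math

def kurtosis(x):
    """Sample kurtosis: fourth standardized moment (3 for a Gaussian;
    impulsive fault signatures push it much higher)."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m4 / (m2 ** 2)

# Healthy-style smooth vibration vs. periodic impacts (gearbox-fault style)
sine = [math.sin(2 * math.pi * k / 50) for k in range(500)]
impulsive = [0.0] * 500
for k in range(0, 500, 50):
    impulsive[k] = 1.0
```

A pure sine has kurtosis 1.5, while the sparse impact train scores an order of magnitude higher, which is why kurtosis-based band selection highlights fault-carrying frequency bands.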
There has been a drastic increase in the production of vehicles in recent years across the globe. In this scenario, vehicle classification systems play a vital part in designing Intelligent Transportation Systems (ITS) for automatic highway toll collection, autonomous driving, and traffic management. Recently, computer vision and pattern recognition models have proven useful in designing effective vehicle classification systems. But these models are trained using a small number of hand-engineered features derived from small datasets, so they cannot be applied to real-time road traffic conditions. Recent developments in Deep Learning (DL)-enabled vehicle classification models are highly helpful in resolving the issues that exist in traditional models. In this background, the current study develops a Lightning Search Algorithm with Deep Transfer Learning-based Vehicle Classification Model for ITS, named the LSADTL-VCITS model. The key objective of the presented LSADTL-VCITS model is to automatically detect and classify the types of vehicles. To accomplish this, the presented LSADTL-VCITS model initially employs the You Only Look Once (YOLO)-v5 object detector with a Capsule Network (CapsNet) as the baseline model. In addition, the proposed LSADTL-VCITS model applies the LSA with a Multilayer Perceptron (MLP) for detection and classification of the vehicles. The performance of the proposed LSADTL-VCITS model was experimentally validated using a benchmark dataset, and the outcomes were examined under several measures. The experimental outcomes established the superiority of the proposed LSADTL-VCITS model compared to existing approaches.
Visual impairment is one of the major problems among people of all age groups across the globe. Visually Impaired Persons (VIPs) require help from others to carry out their day-to-day tasks. Since they experience several problems in their daily lives, technical intervention can help them resolve these challenges. In this background, an automatic object detection tool is the need of the hour to empower VIPs with safe navigation, and the recent advances in the Internet of Things (IoT) and Deep Learning (DL) techniques make it possible. The current study proposes an IoT-assisted Transient Search Optimization with a Lightweight RetinaNet-based object detection (TSOLWR-ODVIP) model to help VIPs. The primary aim of the presented TSOLWR-ODVIP technique is to identify different objects surrounding VIPs and to convey the information to them via audio messages. For data acquisition, IoT devices are used in this study. Then, the Lightweight RetinaNet (LWR) model is applied to detect objects accurately. Next, the TSO algorithm is employed for fine-tuning the hyperparameters involved in the LWR model. Finally, the Long Short-Term Memory (LSTM) model is exploited for classifying objects. The performance of the proposed TSOLWR-ODVIP technique was evaluated using a set of objects, and the results were examined under distinct aspects. The comparison study outcomes confirmed that the TSOLWR-ODVIP model could effectually detect and classify the objects, enhancing the quality of life of VIPs.
The major environmental hazard in this pandemic is the unhygienic disposal of medical waste. If medical wastage is not properly managed, it will become a hazard to the environment and to humans. Managing medical wastage is a major issue for cities and municipalities in terms of the environment and logistics. An efficient supply chain with edge computing technology is used in managing medical waste. The supply chain operations include the processing of waste collection, transportation, and disposal of waste. Many research works have been proposed to improve the management of wastage. The main issues in the existing techniques are that they are ineffective, expensive, and rely on centralized edge computing, which leads to failure in providing security, trustworthiness, and transparency. To overcome these issues, in this paper we implement an efficient Naive Bayes classifier algorithm and Q-Learning algorithm in decentralized edge computing technology with a binary bat optimization algorithm (NBQ-BBOA). This proposed work is used to track, detect, and manage medical waste. To minimize the transfer cost of medical wastage between various nodes, the Q-Learning algorithm is used. The accuracy obtained for the Naive Bayes algorithm is 88%, for the Q-Learning algorithm it is 82%, and for NBQ-BBOA it is 98%. The error rates in terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) for the proposed NBQ-BBOA are 0.012 and 0.045, respectively.
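Q-Learning, used above to minimize transfer cost between nodes, learns a state-action value table from per-hop rewards. The sketch below runs tabular Q-learning on a toy chain of waste-collection nodes where every hop costs one unit; the topology, costs, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import random

def q_learning(n_states=5, episodes=400, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain of nodes: each hop earns reward -1
    (unit cost), and reaching the terminal node ends the episode.
    The greedy policy that emerges minimises total transfer cost."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            done = s2 == n_states - 1
            target = -1.0 + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])   # temporal-difference update
            s = s2
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(4)]
```

After training, the greedy policy moves right at every non-terminal node, i.e., it takes the cheapest route to the terminal; the same update rule applies unchanged to arbitrary cost graphs.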
Human-Computer Interaction (HCI) is a sub-area within computer science focused on the study of communication between people (users) and computers, and on the evaluation, implementation, and design of user interfaces for computer systems. HCI has accomplished effective incorporation of human factors and software engineering of computing systems through the methods and concepts of cognitive science. Usability is an aspect of HCI dedicated to guaranteeing that human-computer communication is, amongst other things, efficient, effective, and sustaining for the user. Simultaneously, the aim of Human Activity Recognition (HAR) is to identify actions from a sequence of observations on the activities of subjects and the environmental conditions. Vision-based HAR study is the basis of several applications involving health care, HCI, and video surveillance. This article develops a Fire Hawk Optimizer with Deep Learning Enabled Activity Recognition (FHODL-AR) technique for HCI-driven usability. In the presented FHODL-AR technique, the input images are investigated for the identification of different human activities. For feature extraction, a modified SqueezeNet model is introduced through the inclusion of a few bypass connections to the SqueezeNet among Fire modules. Besides, the FHO algorithm is utilized as a hyperparameter optimization algorithm, which in turn boosts the classification performance. To detect and categorize different kinds of activities, a probabilistic neural network (PNN) classifier is applied. The experimental validation of the FHODL-AR technique was tested using benchmark datasets, and the outcomes reported the improvements of the FHODL-AR technique over other recent approaches.
Sign language is mainly utilized in communication with people who have hearing disabilities. Sign language is also used to communicate with people having developmental impairments who have some or no interaction skills. Interaction via sign language becomes a fruitful means of communication for hearing- and speech-impaired persons. A hand gesture recognition system is helpful for such people by making use of a human-computer interface (HCI) and convolutional neural networks (CNN) for identifying the static indications of Indian Sign Language (ISL). This study introduces a shark smell optimization with deep learning based automated sign language recognition (SSODL-ASLR) model for hearing- and speech-impaired people. The presented SSODL-ASLR technique majorly concentrates on the recognition and classification of sign language provided by hearing- and speech-impaired people. The presented SSODL-ASLR model encompasses a two-stage process, namely sign language detection and sign language classification. In the first stage, the Mask Region based Convolutional Neural Network (Mask RCNN) model is exploited for sign language recognition. Secondly, the SSO algorithm with a soft margin support vector machine (SM-SVM) model is utilized for sign language classification. To assure the enhanced classification performance of the SSODL-ASLR model, a brief set of simulations was carried out. The extensive results portrayed the supremacy of the SSODL-ASLR model over other techniques.
The Internet of Things (IoT) environment plays a crucial role in the design of smart environments. Security and privacy are the major challenges in the design of IoT-enabled real-time environments. Security susceptibilities in IoT-based systems pose security threats which affect smart environment applications. Intrusion detection systems (IDS) can be used in IoT environments to mitigate IoT-related security attacks which exploit such security vulnerabilities. This paper introduces a modified garden balsam optimization-based machine learning model for intrusion detection (MGBO-MLID) in the IoT cloud environment. The presented MGBO-MLID technique focuses on the identification and classification of intrusions in the IoT cloud atmosphere. Initially, the presented MGBO-MLID model applies min-max normalization, which is utilized for scaling the features into a uniform format. In addition, the MGBO-MLID model exploits the MGBO algorithm to choose the optimal subset of features. Moreover, the attention-based bidirectional long short-term memory (ABiLSTM) method is utilized for the detection and classification of intrusions. At the final level, the Aquila optimization (AO) algorithm is applied as a hyperparameter optimizer to fine-tune the ABiLSTM method. The experimental validation of the MGBO-MLID method was tested using a benchmark dataset. The extensive comparative study reported the betterment of the MGBO-MLID algorithm over recent approaches.
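The min-max normalization step above maps each feature linearly onto a fixed interval, so no single large-valued feature dominates the learned model. A minimal sketch (the sample column is illustrative):

```python
def min_max(column, lo=0.0, hi=1.0):
    """Min-max scaling: map each value linearly into [lo, hi]."""
    mn, mx = min(column), max(column)
    return [lo + (x - mn) * (hi - lo) / (mx - mn) for x in column]

scaled = min_max([10.0, 20.0, 15.0, 30.0])
```

In practice the minimum and maximum are computed on the training split only and reused for test data, since recomputing them at test time would leak information and shift the scale.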
Massive Multiple-Input Multiple-Output (M-MIMO) is considered one of the standard techniques for improving the performance of Fifth Generation (5G) radio. 5G signal detection with low propagation delay and high throughput at minimum computational intricacy is one of the serious concerns in the deployment of 5G. The evolution of 5G promises a high quality of service (QoS), a high data rate, low latency, and spectral efficiency, enabling several applications that will improve services in every sector. The existing detection techniques cannot be utilised in 5G and beyond-5G systems due to the high complexity of their implementation. In this article, the Approximate Message Passing (AMP) detector is implemented and compared with the existing Minimum Mean Square Error (MMSE) and Message Passing Detector (MPD) algorithms. The outcomes of the work show that the Bit Error Rate (BER) performance is improved with minimal complexity.
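The MMSE baseline compared above has a closed linear form: x̂ = (HᴴH + σ²I)⁻¹ Hᴴ y, which regularizes the zero-forcing inverse by the noise variance. Below is a sketch on a toy 16-antenna, 4-stream BPSK system; the dimensions, channel model, and noise level are illustrative, not the paper's simulation setup.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE detection: x_hat = (H^H H + sigma^2 I)^-1 H^H y."""
    n_tx = H.shape[1]
    G = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
    return G @ y

rng = np.random.default_rng(0)
n_rx, n_tx = 16, 4                         # more antennas than streams, massive-MIMO style
H = rng.normal(size=(n_rx, n_tx)) / np.sqrt(n_rx)   # toy Rayleigh-like channel
x = np.array([1.0, -1.0, 1.0, -1.0])       # transmitted BPSK symbols
noise_var = 0.01
y = H @ x + rng.normal(scale=np.sqrt(noise_var), size=n_rx)
x_hat = np.sign(mmse_detect(H, y, noise_var))   # hard BPSK decisions
```

The matrix inverse is what makes MMSE costly at massive-MIMO scale (cubic in the number of streams), which motivates iterative alternatives like AMP and MPD that the article evaluates.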
Text-To-Speech (TTS) is a speech processing tool that is highly helpful for visually-challenged people. The TTS tool is applied to transform texts into human-like sounds. However, it is highly challenging to accomplish TTS outcomes for non-diacritized Arabic text since the language has multiple unique features and rules. Some special characters, like gemination and diacritic signs that respectively indicate consonant doubling and short vowels, greatly impact the precise pronunciation of the Arabic language. But such signs are not frequently used in texts written in Arabic, since its speakers and readers can guess them from the context itself. In this background, the current research article introduces an Optimal Deep Learning-driven Arab Text-to-Speech Synthesizer (ODLD-ATSS) model to help the visually-challenged people in the Kingdom of Saudi Arabia. The prime aim of the presented ODLD-ATSS model is to convert text into speech signals for visually-challenged people. To attain this, the presented ODLD-ATSS model initially designs a Gated Recurrent Unit (GRU)-based prediction model for diacritic and gemination signs. Besides, the Buckwalter code is utilized to capture, store, and display Arabic texts. To improve the TTS performance of the GRU method, the Aquila Optimization Algorithm (AOA) is used, which shows the novelty of the work. To illustrate the enhanced performance of the proposed ODLD-ATSS model, further experimental analyses were conducted. The proposed model achieved a maximum accuracy of 96.35%, and the experimental outcomes infer the improved performance of the proposed ODLD-ATSS model over other DL-based TTS models.
Wireless Sensor Networks (WSN) interlink numerous Sensor Nodes (SN) to support Internet of Things (IoT) services. But the data gathered from SNs can be divulged, tampered with, and forged. Conventional WSN data processing manages the data in a centralized format at terminal gadgets. These devices are prone to attacks, and the security of such systems can get compromised. Blockchain is a distributed and decentralized technique that has the ability to handle security issues in WSN, where transactions are copied and spread across numerous nodes in a peer-to-peer network system. This preserves mutual trust and provides data immutability, which in turn permits the network to keep operating. In some instances, a few nodes die or get compromised due to heavy power utilization. The current article develops an Energy Aware Chaotic Pigeon Inspired Optimization based Clustering scheme for Blockchain assisted WSN, abbreviated as the EACPIO-CB technique. The primary objective of the proposed EACPIO-CB model is to proficiently group the sensor nodes into clusters and exploit Blockchain (BC) for inter-cluster communication in the network. To select Cluster Heads (CHs) and organize the clusters, the presented EACPIO-CB model designs a fitness function that involves distinct input parameters. Further, BC technology enables the communication between one CH and another, and with the Base Station (BS) in the network. The authors conducted a comprehensive set of simulations, and the outcomes were investigated under different aspects. The simulation results confirmed the better performance of the EACPIO-CB method over recent methodologies.
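Cluster-head selection fitness functions of the kind described above typically combine residual energy with distance terms. The sketch below is an illustrative stand-in only: the weights and the three terms (residual energy, distance to the base station, average distance to members) are assumptions, not the paper's actual EACPIO-CB fitness function.

```python
import math

def ch_fitness(candidate, nodes, bs, w1=0.5, w2=0.3, w3=0.2):
    """Toy cluster-head fitness: reward high residual energy, a short hop
    to the base station, and centrality among members. All terms are
    normalised to [0, 1] so the weights are comparable."""
    def dist(a, b):
        return math.hypot(a["x"] - b["x"], a["y"] - b["y"])
    e_max = max(n["energy"] for n in nodes)
    d_bs_max = max(dist(n, bs) for n in nodes)
    d_max = max(dist(m, n) for m in nodes for n in nodes)
    avg_d = sum(dist(candidate, n) for n in nodes) / len(nodes)
    return (w1 * candidate["energy"] / e_max
            + w2 * (1 - dist(candidate, bs) / d_bs_max)
            + w3 * (1 - avg_d / d_max))

bs = {"x": 50.0, "y": 120.0}
nodes = [
    {"x": 10.0, "y": 10.0, "energy": 0.9},
    {"x": 40.0, "y": 35.0, "energy": 1.0},   # central, fully charged
    {"x": 70.0, "y": 20.0, "energy": 0.4},
]
best = max(nodes, key=lambda n: ch_fitness(n, nodes, bs))
```

An optimizer such as the chaotic pigeon-inspired scheme would then search over candidate CH assignments to maximize a score of this shape across the whole network.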
Nowadays, vehicular ad hoc networks (VANET) have turned out to be a core portion of intelligent transportation systems (ITSs), which mainly focus on achieving continual Internet connectivity amongst vehicles on the road. VANETs are utilized to enhance driving safety and build ITSs in modern cities. Since driving safety is a main portion of VANET, the privacy and security of these messages should be protected. In this aspect, this article presents a blockchain with sunflower optimization enabled route planning scheme (BCSFO-RPS) for secure VANET. The presented BCSFO-RPS model focuses on the identification of routes in such a way that vehicular communication is secure. In addition, the BCSFO-RPS model employs the SFO algorithm with a fitness function for effectual identification of routes. Besides, the proposed BCSFO-RPS model derives an intrusion detection system (IDS) encompassing two processes, namely feature selection and classification. To detect intrusions, a correlation based feature selection (CFS) technique and a kernel extreme learning machine (KELM) classifier are applied. The performance of the BCSFO-RPS model was tested using a series of experiments, and the results reported the enhancements of the BCSFO-RPS model over other approaches, with a maximum accuracy of 98.70%.
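The CFS step named above scores a feature subset by Hall's merit: k·r_cf / sqrt(k + k(k-1)·r_ff), rewarding high feature-class correlation r_cf and low inter-feature correlation r_ff. A minimal sketch using Pearson correlation; the toy target and features are illustrative, not traffic data.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def cfs_merit(features, target):
    """CFS merit: k*r_cf / sqrt(k + k*(k-1)*r_ff); high class correlation
    with low inter-feature redundancy scores best."""
    k = len(features)
    r_cf = sum(abs(pearson(f, target)) for f in features) / k
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_ff = (sum(abs(pearson(features[i], features[j])) for i, j in pairs) / len(pairs)
            if pairs else 0.0)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

target = [0, 0, 1, 1, 1, 0, 1, 0]
f_good = [0.1, 0.2, 0.9, 0.8, 0.7, 0.15, 0.85, 0.05]      # tracks the class
f_noise = [0.9, 0.1, 0.9, 0.1, 0.5, 0.5, 0.1, 0.9]        # unrelated to the class
merit_single = cfs_merit([f_good], target)
merit_noisy = cfs_merit([f_good, f_noise], target)
```

Adding the irrelevant feature lowers the merit, so a subset search guided by this score naturally discards it; that pruning is what keeps the downstream KELM classifier compact.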
Funding: This work was supported by the Deanship of Scientific Research at King Khalid University through the Large Groups Project under Grant Number (71/43); by Princess Nourah Bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R203), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia; and by the Deanship of Scientific Research at Umm Al-Qura University, Grant Code (22UQU4310373DSR29).
文摘With new developments experienced in Internet of Things(IoT),wearable,and sensing technology,the value of healthcare services has enhanced.This evolution has brought significant changes from conventional medicine-based healthcare to real-time observation-based healthcare.Biomedical Electrocardiogram(ECG)signals are generally utilized in examination and diagnosis of Cardiovascular Diseases(CVDs)since it is quick and non-invasive in nature.Due to increasing number of patients in recent years,the classifier efficiency gets reduced due to high variances observed in ECG signal patterns obtained from patients.In such scenario computer-assisted automated diagnostic tools are important for classification of ECG signals.The current study devises an Improved Bat Algorithm with Deep Learning Based Biomedical ECGSignal Classification(IBADL-BECGC)approach.To accomplish this,the proposed IBADL-BECGC model initially pre-processes the input signals.Besides,IBADL-BECGC model applies NasNet model to derive the features from test ECG signals.In addition,Improved Bat Algorithm(IBA)is employed to optimally fine-tune the hyperparameters related to NasNet approach.Finally,Extreme Learning Machine(ELM)classification algorithm is executed to perform ECG classification method.The presented IBADL-BECGC model was experimentally validated utilizing benchmark dataset.The comparison study outcomes established the improved performance of IBADL-BECGC model over other existing methodologies since the former achieved a maximum accuracy of 97.49%.
Abstract: In recent years, the need for fast, efficient, and reliable wireless networks has increased dramatically. Numerous 5G networks have already been tested, while a few are in the early stages of deployment. In non-cooperative communication scenarios, the recognition of digital signal modulations helps identify communication targets and ensures effective management over them. The recent advancements in both Machine Learning (ML) and Deep Learning (DL) models demand the development of effective modulation recognition models with self-learning capability. Against this background, the current research article designs a Deep Learning-enabled Intelligent Modulation Recognition of Communication Signal (DLIMR-CS) technique for next-generation networks. The aim of the proposed DLIMR-CS technique is to classify different kinds of digitally-modulated signals. In addition, the fractal feature extraction process is applied with the help of the Sevcik Fractal Dimension (SFD) approach. Then, the extracted features are fed into the Deep Variational Autoencoder (DVAE) model for the classification of the modulated signals. In order to improve the classification performance of the DVAE model, the Tunicate Swarm Algorithm (TSA) is used to fine-tune the hyperparameters involved in the DVAE model. A wide range of simulations was conducted to establish the enhanced performance of the proposed DLIMR-CS model. The experimental outcomes confirmed the superior recognition rate of the DLIMR-CS model over recent state-of-the-art methods under different evaluation parameters.
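The Sevcik Fractal Dimension used for feature extraction has a simple closed form: the waveform is normalised into the unit square, its curve length L is measured, and the dimension estimate is D = 1 + ln(L) / ln(2(N - 1)). A small NumPy sketch on synthetic signals (not the paper's data) illustrates why it works as a complexity feature:

```python
import numpy as np

def sevcik_fd(signal):
    """Sevcik fractal dimension: normalise the waveform into the unit
    square, measure its curve length L, and map it to the estimate
    D = 1 + ln(L) / ln(2 * (N - 1))."""
    y = np.asarray(signal, dtype=float)
    n = y.size
    x_star = np.linspace(0.0, 1.0, n)
    y_star = (y - y.min()) / (y.max() - y.min())      # assumes a non-constant signal
    seg = np.hypot(np.diff(x_star), np.diff(y_star))  # per-segment curve lengths
    L = seg.sum()
    return 1.0 + np.log(L) / np.log(2.0 * (n - 1))

t = np.linspace(0, 1, 1000)
smooth = np.sin(2 * np.pi * 5 * t)
noisy = smooth + 0.5 * np.random.default_rng(1).normal(size=t.size)
print(sevcik_fd(smooth), sevcik_fd(noisy))  # the noisier trace scores higher
```

A straight line yields L = 1 and hence D = 1, while a rougher waveform has a longer normalised curve and scores closer to 2, which is what makes the SFD discriminative across modulation types.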
Funding: The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-030.
Abstract: Mobile communication and Internet of Things (IoT) technologies have recently been established to collect data from human beings and the environment. The collected data can be leveraged to provide intelligent services through different applications. Monitoring disabled people from remote locations is an extreme challenge, because day-to-day events such as falls frequently result in accidents. For a person with disabilities, a fall event is an important cause of mortality and post-traumatic complications. Therefore, detecting the fall events of disabled persons in smart homes at an early stage is essential to provide the necessary support and increase their survival rate. The current study introduces a Whale Optimization Algorithm Deep Transfer Learning-Driven Automated Fall Detection (WOADTL-AFD) technique to improve the quality of life of persons with disabilities. The primary aim of the presented WOADTL-AFD technique is to identify and classify fall events to help disabled individuals. To attain this, the proposed WOADTL-AFD model initially uses a modified SqueezeNet feature extractor, which proficiently extracts feature vectors. In addition, the WOADTL-AFD technique classifies the fall events using an extreme Gradient Boosting (XGBoost) classifier. In the presented WOADTL-AFD technique, the Whale Optimization Algorithm (WOA) is used to fine-tune the hyperparameters involved in the modified SqueezeNet model. The proposed WOADTL-AFD technique was experimentally validated using benchmark datasets, and the results confirmed its superior performance compared to other recent approaches.
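The whale-optimization step can be sketched generically: agents either encircle the best solution found so far, explore around a random agent, or follow a logarithmic spiral toward the best, with a control parameter shrinking linearly from 2 to 0. The following is a compact, illustrative WOA on a toy objective, not the paper's exact variant or its SqueezeNet hyperparameter space; the population size, bounds, and test function are assumptions.

```python
import numpy as np

def woa_minimize(f, dim, n_whales=20, iters=200, bounds=(-5.0, 5.0), seed=0):
    """A compact Whale Optimization Algorithm: encircling, random-agent
    exploration, and a spiral bubble-net move, with `a` shrinking 2 -> 0."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))
    fit = np.apply_along_axis(f, 1, X)
    best = X[fit.argmin()].copy()
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)                  # linearly decreasing control
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):            # exploit: encircle the best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                 # explore: move toward a random whale
                    Xr = X[rng.integers(n_whales)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                                     # spiral bubble-net update (b = 1)
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
        fit = np.apply_along_axis(f, 1, X)
        if fit.min() < f(best):
            best = X[fit.argmin()].copy()
    return best, f(best)

sphere = lambda x: float(np.sum(x ** 2))
best, val = woa_minimize(sphere, dim=5)
print(val)  # typically a small value near 0
```

In the paper's setting, `f` would instead score a hyperparameter vector by validation loss of the modified SqueezeNet; the optimizer loop itself is unchanged.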
Funding: The Deanship of Scientific Research at King Khalid University funded this work through the Large Groups Project under grant number (61/43); Princess Nourah Bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R319), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR24).
Abstract: Cloud Computing (CC) is the most promising and advanced technology for storing data and offering online services in an effective manner. When such fast-evolving technologies are used to protect computer-based systems from cyberattacks, they bring several advantages over conventional data protection methods. The computer-based systems that must be protected include Cyber-Physical Systems (CPS), Internet of Things (IoT) devices, mobile devices, desktop and laptop computers, and critical systems. Malicious software (malware) is software that targets computer-based systems in order to launch cyberattacks and threaten the integrity, secrecy, and accessibility of information. The current study focuses on the design of an Optimal Bottleneck-driven Deep Belief Network-enabled Cybersecurity Malware Classification (OBDDBN-CMC) model. The presented OBDDBN-CMC model intends to recognize and classify the malware that exists in IoT-based cloud platforms. To attain this, Z-score data normalization is utilized to scale the data into a uniform format. In addition, the BDDBN model is exploited for the recognition and categorization of malware. To effectually fine-tune the hyperparameters of the BDDBN model, the Grasshopper Optimization Algorithm (GOA) is applied; this step enhances the classification results and shows the novelty of the current study. An experimental analysis was conducted to validate the OBDDBN-CMC model, and the results confirmed its enhanced performance over recent approaches.
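Z-score normalization, the preprocessing step named above, is a one-line transform per feature column: subtract the column mean and divide by the column standard deviation. A small NumPy sketch (the matrix is illustrative, not the paper's data):

```python
import numpy as np

def zscore(X):
    """Z-score normalisation: rescale each feature column to zero mean
    and unit standard deviation so no feature dominates by scale alone."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)   # guard constant columns
    return (X - mu) / sigma

X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
Z = zscore(X)
print(Z.mean(axis=0), Z.std(axis=0))  # ~[0. 0.] and [1. 1.]
```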
Funding: The Deanship of Scientific Research at King Khalid University funded this work through the Large Groups Project under Grant Number (71/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R114), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR26).
Abstract: In recent times, cities are getting smart and can be managed effectively through diverse architectures and services. Smart cities can support smart medical systems that reach distinct settings (i.e., smart hospitals, smart homes, and community health centres) and scenarios (e.g., rehabilitation, abnormal behavior monitoring, clinical decision-making, disease prevention and diagnosis, post-marketing surveillance, and prescription recommendation). The integration of Artificial Intelligence (AI) with recent technologies, for instance medical screening gadgets, is significant enough to deliver maximum performance and improved management services for handling chronic diseases. With the latest developments in digital data collection, AI techniques can be employed in the clinical decision-making process. On the other hand, Cardiovascular Disease (CVD) is one of the major illnesses that increase the mortality rate across the globe. Wearables can be employed in healthcare systems to support the detection and classification of CVD. With this motivation, the current study develops an Artificial Intelligence-Enabled Decision Support System for CVD Detection and Classification in an e-healthcare environment, abbreviated as the AIDSS-CDDC technique. The proposed AIDSS-CDDC model enables Internet of Things (IoT) devices to collect healthcare data, which is then saved on a cloud server for examination. Afterwards, training and testing processes are executed to determine the patient's health condition. To accomplish this, the presented AIDSS-CDDC model employs data preprocessing and an Improved Sine Cosine Optimization-based Feature Selection (ISCO-FS) technique. In addition, the Adam optimizer with an Autoencoder Gated Recurrent Unit (AE-GRU) model is employed for the detection and classification of CVD. The experimental results highlight that the proposed AIDSS-CDDC model is a promising performer compared to other existing models.
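The Adam optimizer used to train the AE-GRU follows a standard update rule: bias-corrected estimates of the gradient's first and second moments drive a per-coordinate adaptive step. A self-contained sketch on a toy quadratic (the objective, learning rate, and step count are illustrative, not the paper's training setup):

```python
import numpy as np

def adam_minimize(grad, theta0, lr=0.05, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=1000):
    """Plain Adam: m and v track the first and second moments of the
    gradient; bias correction compensates for their zero initialisation."""
    theta = np.asarray(theta0, dtype=float).copy()
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g          # first moment (momentum)
        v = beta2 * v + (1 - beta2) * g ** 2     # second moment (scale)
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Minimise f(x) = (x - 3)^2 elementwise; the gradient is 2 * (x - 3).
theta = adam_minimize(lambda x: 2 * (x - 3), np.zeros(4))
print(theta)  # each coordinate settles close to 3
```

In the paper's pipeline the same update would be applied to the AE-GRU weights, with `grad` supplied by backpropagation rather than a closed-form expression.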
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR31).
Abstract: The text classification process has been extensively investigated in various languages, especially English. Text classification models are vital in several Natural Language Processing (NLP) applications. The Arabic language is highly significant: for instance, it is the fourth most-used language on the internet and the sixth official language of the United Nations. However, few studies on the text classification process have been published for Arabic. In general, researchers face two challenges in Arabic text classification: low accuracy and high feature dimensionality. In this study, an Automated Arabic Text Classification using Hyperparameter-Tuned Hybrid Deep Learning (AATC-HTHDL) model is proposed. The major goal of the proposed AATC-HTHDL method is to identify different class labels for Arabic text. The first step in the proposed model is to pre-process the input data into a useful format. The Term Frequency-Inverse Document Frequency (TF-IDF) model is then applied to extract the feature vectors. Next, the Convolutional Recurrent Neural Network (CRNN) model is utilized to classify the Arabic text. In the final stage, the Crow Search Algorithm (CSA) is applied to fine-tune the CRNN model's hyperparameters, showing the novelty of the work. The proposed AATC-HTHDL model was experimentally validated under different parameters, and the outcomes established its supremacy over other approaches.
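The TF-IDF weighting named above can be sketched from first principles: term frequency within each document, weighted by log(N / document-frequency) across the corpus. This is the classic form; production libraries such as scikit-learn use a smoothed idf variant, and the toy English corpus below is purely illustrative.

```python
import numpy as np

def tfidf(docs):
    """Classic TF-IDF: per-document term frequency times log(N / df)."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: j for j, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for i, d in enumerate(docs):
        words = d.split()
        for w in words:
            tf[i, index[w]] += 1.0 / len(words)   # normalised term frequency
    df = (tf > 0).sum(axis=0)                     # docs containing each term
    idf = np.log(len(docs) / df)
    return tf * idf, vocab

docs = ["the cat sat", "the dog sat", "the dog barked loudly"]
M, vocab = tfidf(docs)
# "the" appears in every document, so its idf (and hence weight) is zero.
print(vocab, M[:, vocab.index("the")])
```

The zero weight for corpus-wide terms is exactly the dimensionality-taming behaviour the abstract alludes to: uninformative function words contribute nothing to the feature vectors.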
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4310373DSR54.
Abstract: Sign language uses the motion of the arms and hands to communicate with people with hearing disabilities. Several models are available in the literature for sign language detection and classification with enhanced outcomes, but the latest advancements in computer vision enable sign/gesture recognition using deep neural networks. This paper introduces an Arabic Sign Language Gesture Classification using Deer Hunting Optimization with Machine Learning (ASLGC-DHOML) model. The presented ASLGC-DHOML technique mainly concentrates on recognising and classifying sign language gestures. The presented ASLGC-DHOML model primarily pre-processes the input gesture images and generates feature vectors using the densely connected network (DenseNet169) model. For gesture recognition and classification, a multilayer perceptron (MLP) classifier is exploited to recognize and classify the existence of sign language gestures. Lastly, the DHO algorithm is utilized for parameter optimization of the MLP model. The experimental results of the ASLGC-DHOML model were tested and inspected under distinct aspects. The comparison analysis highlighted that the ASLGC-DHOML method achieved enhanced gesture classification results compared to other techniques, with a maximum accuracy of 92.88%.
Abstract: Developments in data storage and sensor technologies have allowed the accumulation of a large volume of data from industrial systems. Both structured and unstructured data of industrial systems are collected, covering formats such as time series, text, images, and sound. Most earlier studies were qualitative, and certain techniques need expert guidance to draw conclusions on the condition of gearboxes. In this study, an improved symbiotic organism search with deep learning-enabled fault diagnosis (ISOSDL-FD) model is proposed for gearbox fault detection in industrial systems. The proposed ISOSDL-FD technique majorly concentrates on the identification and classification of faults in gearbox data. In addition, fast kurtogram-based time-frequency analysis is used to reveal the energy present in machinery signals in the time-frequency representation. Moreover, a deep bidirectional recurrent neural network (DBiRNN) is applied for fault detection and classification. At last, the ISOS approach is used for optimal hyperparameter tuning of the DL method so that classification performance is improved. To illustrate the improved performance of the ISOSDL-FD algorithm, a comprehensive experimental analysis was performed. The experimental results stated the betterment of the ISOSDL-FD algorithm over current techniques.
Abstract: There has been a drastic increase in the production of vehicles in recent years across the globe. In this scenario, a vehicle classification system plays a vital part in designing Intelligent Transportation Systems (ITS) for automatic highway toll collection, autonomous driving, and traffic management. Recently, computer vision and pattern recognition models have proven useful in designing effective vehicle classification systems. But these models are trained using a small number of hand-engineered features derived from small datasets, so they cannot be applied in real-time road traffic conditions. Recent developments in Deep Learning (DL)-enabled vehicle classification models are highly helpful in resolving the issues that exist in traditional models. Against this background, the current study develops a Lightning Search Algorithm with Deep Transfer Learning-based Vehicle Classification model for ITS, named the LSADTL-VCITS model. The key objective of the presented LSADTL-VCITS model is to automatically detect and classify the types of vehicles. To accomplish this, the presented LSADTL-VCITS model initially employs the You Only Look Once (YOLO)-v5 object detector with a Capsule Network (CapsNet) as the baseline model. In addition, the proposed LSADTL-VCITS model applies the LSA with a Multilayer Perceptron (MLP) for the detection and classification of vehicles. The performance of the proposed LSADTL-VCITS model was experimentally validated using a benchmark dataset, and the outcomes were examined under several measures. The experimental outcomes established the superiority of the proposed LSADTL-VCITS model compared to existing approaches.
Funding: The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-030.
Abstract: Visual impairment is one of the major problems among people of all age groups across the globe. Visually Impaired Persons (VIPs) require help from others to carry out their day-to-day tasks. Since they experience several problems in their daily lives, technical intervention can help them resolve these challenges. In this background, an automatic object detection tool is the need of the hour to empower VIPs with safe navigation, and recent advances in the Internet of Things (IoT) and Deep Learning (DL) techniques make it possible. The current study proposes an IoT-assisted Transient Search Optimization with a Lightweight RetinaNet-based object detection (TSOLWR-ODVIP) model to help VIPs. The primary aim of the presented TSOLWR-ODVIP technique is to identify different objects surrounding VIPs and convey the information to them via audio messages. For data acquisition, IoT devices are used in this study. Then, the Lightweight RetinaNet (LWR) model is applied to detect objects accurately. Next, the TSO algorithm is employed to fine-tune the hyperparameters involved in the LWR model. Finally, the Long Short-Term Memory (LSTM) model is exploited for classifying objects. The performance of the proposed TSOLWR-ODVIP technique was evaluated using a set of objects, and the results were examined under distinct aspects. The comparison study outcomes confirmed that the TSOLWR-ODVIP model could effectually detect and classify objects, enhancing the quality of life of VIPs.
Abstract: A major environmental hazard during the pandemic has been the unhygienic disposal of medical waste. If medical waste is not properly managed, it becomes a hazard to the environment and to humans. Managing medical waste is a major issue for cities and municipalities in terms of the environment and logistics, and an efficient supply chain with edge computing technology can be used to manage it. The supply chain operations include the processing of waste collection, transportation, and disposal. Many research works have sought to improve waste management. The main issues in the existing techniques are that they are ineffective and expensive, and their centralized edge computing fails to provide security, trustworthiness, and transparency. To overcome these issues, this paper implements an efficient Naive Bayes classifier algorithm and Q-learning algorithm in decentralized edge computing technology with a binary bat optimization algorithm (NBQ-BBOA). The proposed work is used to track, detect, and manage medical waste. To minimize the cost of transferring medical waste between various nodes, the Q-learning algorithm is used. The accuracy obtained for the Naive Bayes algorithm is 88%, for the Q-learning algorithm it is 82%, and for NBQ-BBOA it is 98%. The Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) of the proposed NBQ-BBOA are 0.012 and 0.045, respectively.
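The tabular Q-learning rule behind the cost-minimisation step is Q(s, a) ← Q(s, a) + α(r + γ·max_a' Q(s', a') − Q(s, a)). A toy sketch on a 5-node line illustrates it; the environment, rewards, and hyperparameters here are illustrative, not the paper's waste-transfer network.

```python
import numpy as np

# Tabular Q-learning on a toy 5-node line: actions move left/right,
# reward 1 for reaching terminal node 4, zero elsewhere.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for _ in range(300):                 # episodes
    s, done = 0, False
    while not done:
        if rng.random() < eps or Q[s, 0] == Q[s, 1]:
            a = int(rng.integers(n_actions))   # explore / break ties randomly
        else:
            a = int(Q[s].argmax())             # exploit the learned values
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q[:4].argmax(axis=1))  # -> [1 1 1 1]: always move right toward the goal
```

In the paper's setting, states would be waste-holding nodes and the reward would encode (negative) transfer cost, but the update rule is identical.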
Abstract: Human-Computer Interaction (HCI) is a sub-area within computer science focused on the study of communication between people (users) and computers, and the evaluation, implementation, and design of user interfaces for computer systems. HCI has accomplished the effective incorporation of human factors and software engineering into computing systems through the methods and concepts of cognitive science. Usability is an aspect of HCI dedicated to guaranteeing that human-computer communication is, amongst other things, efficient, effective, and sustainable for the user. Meanwhile, the aim of Human Activity Recognition (HAR) is to identify actions from a sequence of observations of subjects' activities and the environmental conditions. Vision-based HAR research is the basis of several applications involving health care, HCI, and video surveillance. This article develops a Fire Hawk Optimizer with Deep Learning-Enabled Activity Recognition (FHODL-AR) technique for HCI-driven usability. In the presented FHODL-AR technique, the input images are investigated for the identification of different human activities. For feature extraction, a modified SqueezeNet model is introduced by including a few bypass connections among the Fire modules of SqueezeNet. Besides, the FHO algorithm is utilized as a hyperparameter optimization algorithm, which in turn boosts the classification performance. To detect and categorize different kinds of activities, a probabilistic neural network (PNN) classifier is applied. The FHODL-AR technique was experimentally validated using benchmark datasets, and the outcomes reported its improvements over other recent approaches.
Abstract: Sign language is mainly utilized in communication with people who have hearing disabilities. It is used to communicate with people with developmental impairments who have some or no interaction skills; interaction via sign language thus becomes a fruitful means of communication for hearing- and speech-impaired persons. A hand gesture recognition system is helpful for such users: it makes use of a human-computer interface (HCI) and convolutional neural networks (CNNs) to identify the static signs of Indian Sign Language (ISL). This study introduces a Shark Smell Optimization with Deep Learning-based Automated Sign Language Recognition (SSODL-ASLR) model for hearing- and speech-impaired people. The presented SSODL-ASLR technique majorly concentrates on the recognition and classification of sign language. The presented SSODL-ASLR model encompasses a two-stage process, namely sign language detection and sign language classification. In the first stage, the Mask Region-based Convolutional Neural Network (Mask R-CNN) model is exploited for sign language recognition. Secondly, the SSO algorithm with a soft-margin support vector machine (SM-SVM) model is utilized for sign language classification. To assure the enhanced classification performance of the SSODL-ASLR model, a brief set of simulations was carried out. The extensive results portrayed the supremacy of the SSODL-ASLR model over other techniques.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R114), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4340237DSR48.
Abstract: The Internet of Things (IoT) environment plays a crucial role in the design of smart environments. Security and privacy are the major challenges in the design of IoT-enabled real-time environments: security susceptibilities in IoT-based systems pose threats that affect smart-environment applications. Intrusion detection systems (IDS) can be used in IoT environments to mitigate IoT-related security attacks, which exploit such security vulnerabilities. This paper introduces a modified Garden Balsam Optimization-based machine learning model for intrusion detection (MGBO-MLID) in the IoT cloud environment. The presented MGBO-MLID technique focuses on the identification and classification of intrusions in the IoT cloud atmosphere. Initially, the presented MGBO-MLID model applies min-max normalization to scale the features into a uniform format. In addition, the MGBO-MLID model exploits the MGBO algorithm to choose the optimal subset of features. Moreover, the attention-based bidirectional long short-term memory (ABiLSTM) method is utilized for the detection and classification of intrusions. At the final level, the Aquila Optimization (AO) algorithm is applied as a hyperparameter optimizer to fine-tune the ABiLSTM method. The MGBO-MLID method was experimentally validated using a benchmark dataset, and the extensive comparative study reported the betterment of the MGBO-MLID algorithm over recent approaches.
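Min-max normalization, the first step above, maps each feature column linearly onto a common range so that all features are comparable in scale. A small NumPy sketch with an illustrative matrix:

```python
import numpy as np

def minmax_scale(X, lo=0.0, hi=1.0):
    """Min-max normalisation: map each feature column linearly onto
    [lo, hi] so all features share a uniform range."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)   # guard constant columns
    return lo + (X - xmin) / span * (hi - lo)

X = np.array([[10.0, 0.5], [20.0, 1.0], [40.0, 3.0]])
S = minmax_scale(X)
print(S.min(axis=0), S.max(axis=0))  # [0. 0.] [1. 1.]
```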
Funding: This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/98), Taif University, Taif, Saudi Arabia.
Abstract: Massive Multiple-Input Multiple-Output (M-MIMO) is considered one of the standard techniques for improving the performance of Fifth Generation (5G) radio. 5G signal detection with low propagation delay and high throughput at minimum computational intricacy is one of the serious concerns in the deployment of 5G. The evolution of 5G promises a high Quality of Service (QoS), a high data rate, low latency, and spectral efficiency, enabling several applications that will improve services in every sector. The existing detection techniques cannot be utilised in 5G and beyond-5G systems due to the high complexity of their implementation. In this article, Approximate Message Passing (AMP) is implemented and compared with the existing Minimum Mean Square Error (MMSE) and Message Passing Detector (MPD) algorithms. The outcomes of the work show that the Bit Error Rate (BER) performance is improved with minimal complexity.
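For reference, the MMSE baseline the article compares against has the closed form x̂ = (HᴴH + σ²I)⁻¹ Hᴴ y for unit-power symbols. A small NumPy sketch on an illustrative 32×8 Rayleigh channel with QPSK symbols; the dimensions and noise level are assumptions, not the paper's simulation configuration.

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE detection for y = H x + n with unit-power symbols:
    x_hat = (H^H H + sigma^2 I)^(-1) H^H y."""
    n_tx = H.shape[1]
    G = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T
    return G @ y

rng = np.random.default_rng(2)
n_rx, n_tx = 32, 8        # many more receive than transmit antennas, uplink-style
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)
bits = rng.integers(0, 2, size=(n_tx, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)   # QPSK symbols
noise_var = 0.01
n = np.sqrt(noise_var / 2) * (rng.normal(size=n_rx) + 1j * rng.normal(size=n_rx))
y = H @ x + n
x_hat = mmse_detect(H, y, noise_var)
detected = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print(np.array_equal(detected, x))  # at this SNR every symbol is recovered
```

The matrix inverse is exactly the complexity bottleneck the article targets: it scales cubically in the number of transmit antennas, which is what motivates iterative detectors such as AMP for truly massive arrays.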
Funding: The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-030.
Abstract: Text-To-Speech (TTS) is a speech processing tool that is highly helpful for visually-challenged people; it transforms text into human-like speech. However, it is highly challenging to produce good TTS output for non-diacritized Arabic text, since the language has multiple unique features and rules. Special characters such as gemination and diacritic signs, which respectively indicate consonant doubling and short vowels, greatly affect the precise pronunciation of Arabic, but such signs are not frequently used in written Arabic texts since speakers and readers can guess them from context. In this background, the current research article introduces an Optimal Deep Learning-driven Arabic Text-to-Speech Synthesizer (ODLD-ATSS) model to help visually-challenged people in the Kingdom of Saudi Arabia. The prime aim of the presented ODLD-ATSS model is to convert text into speech signals for visually-challenged people. To attain this, the presented ODLD-ATSS model initially designs a Gated Recurrent Unit (GRU)-based prediction model for diacritic and gemination signs. Besides, the Buckwalter code is utilized to capture, store, and display the Arabic texts. To improve the TTS performance of the GRU method, the Aquila Optimization Algorithm (AOA) is used, which shows the novelty of the work. Further experimental analyses were conducted to illustrate the enhanced performance of the proposed ODLD-ATSS model. The proposed model achieved a maximum accuracy of 96.35%, and the experimental outcomes infer its improved performance over other DL-based TTS models.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number (142/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R238), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4340237DSR24).
Abstract: Wireless Sensor Networks (WSN) interlink numerous Sensor Nodes (SN) to support Internet of Things (IoT) services. But the data gathered from SNs can be divulged, tampered with, and forged. Conventional WSN data processing manages the data in a centralized format at terminal gadgets; these devices are prone to attacks, and the security of the systems can be compromised. Blockchain is a distributed and decentralized technique that has the ability to handle the security issues in WSN. These security issues include transactions that may be copied and spread across numerous nodes in a peer-to-peer network system; this breaches mutual trust, while data immutability in turn permits the network to go on. In some instances, a few nodes die or get compromised due to heavy power utilization. The current article develops an Energy-Aware Chaotic Pigeon-Inspired Optimization-based Clustering scheme for Blockchain-assisted WSN, abbreviated as the EACPIO-CB technique. The primary objective of the proposed EACPIO-CB model is to proficiently group the sensor nodes into clusters and exploit Blockchain (BC) for inter-cluster communication in the network. To select Cluster Heads (CHs) and organize the clusters, the presented EACPIO-CB model designs a fitness function that involves distinct input parameters. Further, BC technology enables communication between one CH and another, and with the Base Station (BS), in the network. The authors conducted a comprehensive set of simulations, and the outcomes were investigated under different aspects. The simulation results confirmed the better performance of the EACPIO-CB method over recent methodologies.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under Grant Number (25/43); Taif University Researchers Supporting Project Number (TURSP-2020/346), Taif University, Taif, Saudi Arabia; Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R303), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR17).
Abstract: Nowadays, Vehicular Ad Hoc Networks (VANET) have turned out to be a core portion of Intelligent Transportation Systems (ITSs), which mainly focus on achieving continual Internet connectivity amongst vehicles on the road. VANETs are utilized to enhance driving safety and build ITSs in modern cities. Since driving safety is a main concern of VANET, the privacy and security of these messages should be protected. In this aspect, this article presents a Blockchain with Sunflower Optimization-enabled Route Planning Scheme (BCSFO-RPS) for secure VANET. The presented BCSFO-RPS model focuses on identifying routes in such a way that vehicular communication is secure. In addition, the BCSFO-RPS model employs the SFO algorithm with a fitness function for the effectual identification of routes. Besides, the proposed BCSFO-RPS model derives an intrusion detection system (IDS) encompassing two processes, namely feature selection and classification. To detect intrusions, correlation-based feature selection (CFS) and a kernel extreme learning machine (KELM) classifier are applied. The performance of the BCSFO-RPS model was tested using a series of experiments, and the results reported the enhancements of the BCSFO-RPS model over other approaches, with a maximum accuracy of 98.70%.
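The core scoring idea behind correlation-driven feature selection can be sketched simply: rank features by the absolute Pearson correlation with the class label and keep the top k. Note that full CFS also penalises inter-feature correlation when scoring subsets; the sketch below uses the feature-class term only, and the synthetic data and sizes are made up.

```python
import numpy as np

def correlation_select(X, y, k):
    """Rank features by |Pearson correlation with the label| and keep
    the top-k (feature-class term of CFS only; no redundancy penalty)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    scores = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(scores)[::-1][:k], scores

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=300).astype(float)
X = rng.normal(size=(300, 6))
X[:, 2] += 2.0 * y                  # make feature 2 strongly label-dependent
top, scores = correlation_select(X, y, k=1)
print(top)  # feature 2 ranks first
```

Ranking by feature-class correlation alone is cheap and often a good first filter; the redundancy penalty in full CFS matters when several selected features carry the same signal.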