Owing to the rapid increase in the exchange of text information over Internet networks, the reliability and security of digital content have become a major research problem. Tampering detection, content authentication, and integrity verification of digital content exchanged over the Internet address a major concern in information and communication technologies. The difficulties considered by the authors are tampering detection, authentication, and integrity verification of digital content. This study develops an Automated Data Mining based Digital Text Document Watermarking for Tampering Attack Detection (ADMDTW-TAD) technique via the Internet. The data mining (DM) concept is exploited in the presented ADMDTW-TAD technique to identify the characteristics of the document that are best suited for embedding larger watermark information. The presented secure watermarking scheme is intended to transmit digital text documents over the Internet securely. Once the watermark is embedded without damage to the original document, the document is shared with the destination. The watermark extraction process is then performed to recover the original document securely. The experimental validation of the ADMDTW-TAD technique was carried out under varying attack volumes, and the outcomes were inspected in terms of different measures. The simulation values indicated that the ADMDTW-TAD technique offers improved performance over other models.
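As an illustration of the embedding and extraction steps described above, the following is a minimal sketch of an invisible text watermark that hides a bit string in zero-width characters between words. It is a hypothetical example for intuition only, not the authors' data-mining-based embedding scheme.

# Minimal sketch of an invisible text watermark using zero-width characters.
# Hypothetical illustration only -- not the ADMDTW-TAD embedding scheme.
ZW = {"0": "\u200b", "1": "\u200c"}          # zero-width space / zero-width non-joiner
REV = {v: k for k, v in ZW.items()}

def embed(text: str, bits: str) -> str:
    """Hide one bit after each of the first len(bits) words."""
    words = text.split(" ")
    assert len(bits) <= len(words), "document too short for this payload"
    return " ".join(w + ZW[bits[i]] if i < len(bits) else w
                    for i, w in enumerate(words))

def extract(marked: str) -> str:
    """Recover the hidden bit string; the visible text is unchanged."""
    return "".join(REV[c] for c in marked if c in REV)

stego = embed("the quick brown fox jumps over the lazy dog", "10110")
print(extract(stego))                                       # -> 10110
print(stego.replace("\u200b", "").replace("\u200c", ""))    # original text survives intact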
With their flexible deployment and high mobility in open environments, Unmanned Aerial Vehicles (UAVs) have attracted considerable attention in military and civil applications that aim to enable ubiquitous connectivity and foster agile communications. The difficulty stems from features not found in mobile ad-hoc networks (MANETs), namely aerial mobility in three-dimensional space and a frequently changing topology. In a UAV network, a single node serves as a forwarding, transmitting, and receiving node at the same time. Typically, the communication path is multi-hop, and routing significantly affects the network's performance, so considerable effort must be invested in performance analysis when selecting the optimal routing scheme. With this motivation, this study models a new Coati Optimization Algorithm-based Energy-Efficient Routing Process for Unmanned Aerial Vehicle Communication (COAER-UAVC) technique. The presented COAER-UAVC technique establishes effective routes for communication between the UAVs. It is primarily based on the behaviour of coatis in nature: attacking and hunting iguanas, and escaping from predators. Besides, the presented COAER-UAVC technique concentrates on the design of fitness functions to minimize energy utilization and communication delay. A varied group of simulations was performed to depict the optimum performance of the COAER-UAVC system. The experimental results verified that the COAER-UAVC technique assured improved performance over other approaches.
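The fitness design mentioned above can be illustrated with a short sketch that scores a candidate multi-hop route by its energy use and end-to-end delay. The weights and node fields are assumptions for illustration, not the paper's exact formulation.

# Sketch of a route fitness that penalizes energy use and delay.
# Weights and node fields are illustrative assumptions, not the COAER-UAVC formulation.
from dataclasses import dataclass

@dataclass
class Node:
    residual_energy: float   # Joules remaining
    tx_energy: float         # Joules per forwarded packet
    hop_delay: float         # seconds per hop

def route_fitness(route: list[Node], w_energy: float = 0.6, w_delay: float = 0.4) -> float:
    """Lower is better: weighted sum of transmission energy and cumulative delay,
    with a penalty when relays are low on residual energy."""
    energy = sum(n.tx_energy for n in route)
    delay = sum(n.hop_delay for n in route)
    penalty = sum(1.0 / max(n.residual_energy, 1e-6) for n in route)
    return w_energy * energy + w_delay * delay + 0.01 * penalty

route = [Node(5.0, 0.02, 0.004), Node(3.5, 0.03, 0.006), Node(4.2, 0.02, 0.005)]
print(route_fitness(route))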
With new developments in Internet of Things (IoT), wearable, and sensing technology, the value of healthcare services has been enhanced. This evolution has brought significant changes, from conventional medicine-based healthcare to real-time observation-based healthcare. Biomedical Electrocardiogram (ECG) signals are generally utilized in the examination and diagnosis of Cardiovascular Diseases (CVDs), since the measurement is quick and non-invasive. Owing to the increasing number of patients in recent years, classifier efficiency is reduced by the high variance observed in the ECG signal patterns obtained from patients. In such scenarios, computer-assisted automated diagnostic tools are important for the classification of ECG signals. The current study devises an Improved Bat Algorithm with Deep Learning Based Biomedical ECG Signal Classification (IBADL-BECGC) approach. To accomplish this, the proposed IBADL-BECGC model initially pre-processes the input signals. Besides, the IBADL-BECGC model applies the NASNet model to derive features from the test ECG signals. In addition, the Improved Bat Algorithm (IBA) is employed to optimally fine-tune the hyperparameters related to the NASNet approach. Finally, the Extreme Learning Machine (ELM) classification algorithm is executed to perform ECG classification. The presented IBADL-BECGC model was experimentally validated using a benchmark dataset. The comparison study outcomes established the improved performance of the IBADL-BECGC model over other existing methodologies, with the former achieving a maximum accuracy of 97.49%.
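To make the final classification stage concrete, below is a minimal single-hidden-layer Extreme Learning Machine sketch in NumPy: random input weights, a sigmoid hidden layer, and output weights solved by least squares. The dimensions and data are invented for illustration; this is not the paper's tuned configuration.

# Minimal Extreme Learning Machine (ELM) sketch: random hidden layer, least-squares readout.
# Sizes and data are illustrative; not the IBADL-BECGC configuration.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y_onehot, n_hidden=64):
    W = rng.normal(size=(X.shape[1], n_hidden))          # random input weights (kept fixed)
    b = rng.normal(size=n_hidden)                        # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))               # sigmoid hidden activations
    beta, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)  # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Toy run: 200 feature vectors (e.g., deep features), 3 classes.
X = rng.normal(size=(200, 32))
y = rng.integers(0, 3, size=200)
Y = np.eye(3)[y]
W, b, beta = elm_train(X, Y)
print((elm_predict(X, W, b, beta) == y).mean())          # training accuracy of the sketch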
Recently, Internet of Things (IoT) devices have developed at a rapid rate, and their utilization has increased considerably in daily life. Despite the benefits of IoT devices, security issues remain challenging owing to the fact that most devices do not include the memory and computing resources essential for satisfactory security operation. Consequently, IoT devices are vulnerable to different kinds of attacks, and a single attack on a networked system or device can cause considerable damage to data security and privacy. However, emerging artificial intelligence (AI) techniques can be exploited for attack detection and classification in the IoT environment. In this view, this paper presents a novel metaheuristic feature selection with fuzzy logic enabled intrusion detection system (MFSFL-IDS) for the IoT environment. The presented MFSFL-IDS approach aims to recognize the existence of intrusions and accomplish security in the IoT environment. To achieve this, the MFSFL-IDS model employs data pre-processing to transform the data into a useful format. Besides, the Henry gas solubility optimization (HGSO) algorithm is applied as a feature selection approach to derive useful feature vectors. Moreover, an adaptive neuro-fuzzy inference system (ANFIS) technique is utilized for the recognition and classification of intrusions in the network. Finally, the binary bat algorithm (BBA) is exploited for adjusting the parameters involved in the ANFIS model. A comprehensive experimental validation of the MFSFL-IDS model is carried out using a benchmark dataset, and the outcomes are assessed under distinct aspects. The experimentation outcomes highlighted the superior performance of the MFSFL-IDS model over recent approaches, with a maximum accuracy of 99.80%.
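Metaheuristic feature selection of the kind described here is usually wrapper-based: each candidate binary mask is scored by training a light classifier on the selected columns. The sketch below shows such a fitness function with scikit-learn; the kNN scorer, weights, and synthetic data are illustrative assumptions, not the HGSO/ANFIS pipeline itself.

# Sketch of a wrapper fitness for binary feature-selection masks.
# The kNN scorer and the weights are illustrative; not the HGSO-based selector itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=1)

def fs_fitness(mask: np.ndarray, alpha: float = 0.99) -> float:
    """Higher is better: accuracy on the selected columns, minus a small
    penalty proportional to the number of features kept."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

rng = np.random.default_rng(0)
candidate = rng.integers(0, 2, size=X.shape[1])   # one candidate mask produced by a metaheuristic
print(fs_fitness(candidate))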
Hyperspectral remote sensing, or imaging spectroscopy, is a novel approach that acquires a spectrum at every location of a large array of spatial positions, so that several spectral wavelengths are utilized to form coherent images. Hyperspectral remote sensing involves the acquisition of digital images in several narrow, contiguous spectral bands throughout the visible, Thermal Infrared (TIR), Near Infrared (NIR), and Mid-Infrared (MIR) regions of the electromagnetic spectrum. For application to agricultural regions, remote sensing approaches are studied and executed for their benefit of continuous and quantitative monitoring. In particular, hyperspectral images (HSI) are considered precise for agriculture, as they can offer chemical and physical data on vegetation. With this motivation, this article presents a novel Hurricane Optimization Algorithm with Deep Transfer Learning Driven Crop Classification (HOADTL-CC) model for hyperspectral remote sensing images. The presented HOADTL-CC model focuses on the identification and categorization of crops in hyperspectral remote sensing images. To accomplish this, the presented HOADTL-CC model involves the design of HOA with a capsule network (CapsNet) model to generate a set of useful feature vectors. Besides, an Elman neural network (ENN) model is applied to allot proper class labels to the input HSI. Finally, the glowworm swarm optimization (GSO) algorithm is exploited to fine-tune the ENN parameters. The experimental results of the HOADTL-CC method were tested with the help of a benchmark dataset, and the results were assessed under distinct aspects. Extensive comparative studies showed the enhanced performance of the HOADTL-CC model over recent approaches, with a maximum accuracy of 99.51%.
Cloud Computing (CC) is the most promising and advanced technology for storing data and offering online services in an effective manner. When such fast-evolving technologies are used to protect computer-based systems from cyberattacks, they bring several advantages compared to conventional data protection methods. Some of the computer-based systems that effectively protect data include Cyber-Physical Systems (CPS), Internet of Things (IoT) devices, mobile devices, desktop and laptop computers, and critical systems. Malicious software (malware) is software that targets computer-based systems in order to launch cyberattacks and threaten the integrity, secrecy, and accessibility of information. The current study focuses on the design of an Optimal Bottleneck-driven Deep Belief Network-enabled Cybersecurity Malware Classification (OBDDBN-CMC) model. The presented OBDDBN-CMC model intends to recognize and classify the malware that exists in IoT-based cloud platforms. To attain this, Z-score data normalization is utilized to scale the data into a uniform format. In addition, the BDDBN model is exploited for the recognition and categorization of malware. To effectually fine-tune the hyperparameters related to the BDDBN model, the Grasshopper Optimization Algorithm (GOA) is applied; this step enhances the classification results and also underlines the novelty of the current study. The experimental analysis conducted on the OBDDBN-CMC model for validation confirmed its enhanced performance over recent approaches.
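Z-score normalization, as used in the pre-processing step above, simply rescales each feature to zero mean and unit variance. A small sketch follows; scikit-learn is an assumed convenience here, and the underlying transform is x' = (x - mean) / std.

# Z-score normalization sketch: scale each column to zero mean and unit variance.
# scikit-learn is an assumed convenience here; the transform is x' = (x - mean) / std.
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[2.0, 300.0], [4.0, 500.0], [6.0, 700.0]])
scaler = StandardScaler().fit(X)          # learn per-column mean and std on training data
X_scaled = scaler.transform(X)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))   # ~[0, 0] and [1, 1]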
In recent times, cities are becoming smart and can be managed effectively through diverse architectures and services. Smart cities have the ability to support smart medical systems that permeate distinct settings (i.e., smart hospitals, smart homes, and community health centres) and scenarios (e.g., rehabilitation, abnormal behaviour monitoring, clinical decision-making, disease prevention and diagnosis, post-marketing surveillance, and prescription recommendation). The integration of Artificial Intelligence (AI) with recent technologies, for instance medical screening gadgets, is significant enough to deliver maximum performance and improved management services for handling chronic diseases. With the latest developments in digital data collection, AI techniques can be employed in the clinical decision-making process. On the other hand, Cardiovascular Disease (CVD) is one of the major illnesses that increases the mortality rate across the globe. Generally, wearables can be employed in healthcare systems, which instigates the development of CVD detection and classification. With this motivation, the current study develops an Artificial Intelligence Enabled Decision Support System for CVD Detection and Classification in an e-healthcare environment, abbreviated as the AIDSS-CDDC technique. The proposed AIDSS-CDDC model enables Internet of Things (IoT) devices for healthcare data collection; the collected data is then saved in a cloud server for examination. Next, training and testing processes are executed to determine the patient's health condition. To accomplish this, the presented AIDSS-CDDC model employs data preprocessing and an Improved Sine Cosine Optimization based Feature Selection (ISCO-FS) technique. In addition, the Adam optimizer with an Autoencoder Gated Recurrent Unit (AE-GRU) model is employed for the detection and classification of CVD. The experimental results highlight that the proposed AIDSS-CDDC model is a promising performer compared to other existing models.
Emerging technologies such as edge computing, the Internet of Things (IoT), 5G networks, big data, Artificial Intelligence (AI), and Unmanned Aerial Vehicles (UAVs) empower Industry 4.0 with a progressive production methodology that attends to the interaction between machines and human beings. In the literature, various authors have focused on resolving security problems in UAV communication to provide safety for vital applications. The current research article presents a Circle Search Optimization with Deep Learning Enabled Secure UAV Classification (CSODL-SUAVC) model for the Industry 4.0 environment. The suggested CSODL-SUAVC methodology is aimed at accomplishing two core objectives: secure communication via image steganography, and image classification. Primarily, the proposed CSODL-SUAVC method involves Multi-Level Discrete Wavelet Transformation (ML-DWT), CSO-related Optimal Pixel Selection (CSO-OPS), and signcryption-based encryption. The proposed model deploys the CSO-OPS technique to select the optimal pixel points in cover images, and the secret images, encrypted by the signcryption technique, are embedded into the cover images. Besides, the image classification process includes three components, namely Super-Resolution using Convolutional Neural Network (SRCNN), the Adam optimizer, and a softmax classifier. The integration of the CSO-OPS algorithm and the Adam optimizer helps in achieving the maximum performance in UAV communication. The proposed CSODL-SUAVC model was experimentally validated using benchmark datasets, and the outcomes were evaluated under distinct aspects. The simulation outcomes established the superior performance of the CSODL-SUAVC model over recent approaches.
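As an illustration of the multi-level DWT stage, the sketch below decomposes a cover image into two wavelet levels with PyWavelets and reconstructs it. The wavelet family and level count are assumptions, and the pixel-selection and signcryption steps of the paper are not shown.

# Two-level 2-D discrete wavelet transform of a cover image with PyWavelets.
# Wavelet family ('haar') and level count are illustrative assumptions;
# the CSO-based pixel selection and signcryption steps are not shown.
import numpy as np
import pywt

cover = np.random.randint(0, 256, size=(128, 128)).astype(float)   # stand-in cover image

coeffs = pywt.wavedec2(cover, wavelet="haar", level=2)   # [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
cA2 = coeffs[0]
print("approximation sub-band at level 2:", cA2.shape)   # (32, 32)

# A steganographic scheme would embed encrypted secret bits into selected
# detail coefficients here, then invert the transform to obtain the stego image.
restored = pywt.waverec2(coeffs, wavelet="haar")
print("reconstruction error:", np.abs(restored - cover).max())      # ~0 without embedding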
The recent developments in Multimedia Internet of Things (MIoT) devices, empowered with Natural Language Processing (NLP) models, seem to be a promising future for smart devices. NLP plays an important role in industrial applications such as speech understanding, emotion detection, home automation, and so on. If an image needs to be captioned, then the objects in that image, their actions and connections, and any salient feature that remains under-projected or missing from the image should be identified. The aim of the image captioning process is to generate a caption for an image, and in the next step the image should be provided with one of the most significant and detailed descriptions that is both syntactically and semantically correct. In this scenario, a computer vision model is used to identify the objects, and NLP approaches are followed to describe the image. The current study develops a Natural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System (NLPODL-IICS). The aim of the presented NLPODL-IICS model is to produce a proper description for an input image. To attain this, the proposed NLPODL-IICS follows two stages, namely encoding and decoding. Initially, at the encoding side, the proposed NLPODL-IICS model makes use of Hunger Games Search (HGS) with the Neural Architecture Search Network (NASNet) model, which represents the input data appropriately by embedding it into a vector of predefined length. Besides, during the decoding phase, the Chimp Optimization Algorithm (COA) with a deeper Long Short Term Memory (LSTM) approach is followed to concatenate the description sentences produced by the method. The application of the HGS and COA algorithms helps in accomplishing proper parameter tuning for the NASNet and LSTM models, respectively. The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets. A widespread comparative analysis confirmed the superior performance of the NLPODL-IICS model over other models.
Sentiment Analysis (SA), a Machine Learning (ML) technique, is often applied in the literature, specifically to data collected from social media sites. The research studies conducted earlier on the SA of tweets were mostly aimed at automating the feature extraction process. In this background, the current study introduces a novel method called Quantum Particle Swarm Optimization with Deep Learning-Based Sentiment Analysis on Arabic Tweets (QPSODL-SAAT). The presented QPSODL-SAAT model determines and classifies the sentiments of tweets written in Arabic. Initially, data pre-processing is performed to convert the raw tweets into a useful format. Then, the word2vec model is applied to generate the feature vectors. The Bidirectional Gated Recurrent Unit (BiGRU) classifier is utilized to identify and classify the sentiments. Finally, the QPSO algorithm is exploited for the optimal fine-tuning of the hyperparameters involved in the BiGRU model. The proposed QPSODL-SAAT model was experimentally validated using standard datasets. An extensive comparative analysis was conducted, and the proposed model achieved a maximum accuracy of 98.35%. The outcomes confirmed the supremacy of the proposed QPSODL-SAAT model over the rest of the approaches, such as Surface Features (SF), Generic Embeddings (GE), Arabic Sentiment Embeddings constructed using the Hybrid (ASEH) model, and the Bidirectional Encoder Representations from Transformers (BERT) model.
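A minimal sketch of the word2vec-plus-BiGRU pipeline is given below, using Gensim and Keras. The toy corpus, sequence length, and layer widths are invented for illustration, and the QPSO hyperparameter search is not included.

# word2vec features feeding a bidirectional GRU classifier (minimal sketch).
# Corpus, sequence length, and layer widths are invented; QPSO tuning is not shown.
import numpy as np
from gensim.models import Word2Vec
import tensorflow as tf

corpus = [["خدمة", "ممتازة", "جدا"], ["تجربة", "سيئة", "للغاية"]]    # toy tokenized tweets
labels = np.array([1, 0])                                            # 1 = positive, 0 = negative

w2v = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=20)

max_len = 5
def vectorize(tokens):
    vecs = [w2v.wv[t] for t in tokens][:max_len]
    vecs += [np.zeros(50)] * (max_len - len(vecs))                   # pad to a fixed length
    return np.stack(vecs)

X = np.stack([vectorize(t) for t in corpus]).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(max_len, 50)),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=5, verbose=0)
print(model.predict(X, verbose=0).ravel())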
Autism Spectrum Disorder (ASD) refers to a neuro-disorder in which an individual experiences long-lasting effects on communication and interaction with others. Advanced information technology that employs artificial intelligence (AI) models has assisted in the early identification of ASD through pattern detection. Recent advances in AI models assist in the automated identification and classification of ASD, which helps to reduce the severity of the disease. This study introduces an automated ASD classification using owl search algorithm with machine learning (ASDC-OSAML) model. The proposed ASDC-OSAML model majorly focuses on the identification and classification of ASD. To attain this, the presented ASDC-OSAML model follows a min-max normalization approach as a pre-processing stage. Next, the owl search algorithm (OSA)-based feature selection (OSA-FS) model is used to derive feature subsets. Then, the beetle swarm antenna search (BSAS) algorithm with the Iterative Dichotomiser 3 (ID3) classification method is applied for ASD detection and classification, where the design of the BSAS algorithm helps to determine the parameter values of the ID3 classifier. The performance analysis of the ASDC-OSAML model is performed using a benchmark dataset. An extensive comparison study highlighted the supremacy of the ASDC-OSAML model over recent state-of-the-art approaches.
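The pre-processing and classification steps named here can be pictured with a few lines of scikit-learn: min-max scaling followed by an entropy-criterion decision tree, which behaves in the spirit of ID3. This is an approximation (scikit-learn's tree is CART-based, and the BSAS parameter search is replaced by fixed values), and the dataset is a stand-in.

# Min-max normalization followed by an entropy-criterion decision tree.
# Approximation only: sklearn's tree is CART with an entropy criterion, not literal ID3,
# and the BSAS parameter search is replaced by fixed values.
from sklearn.datasets import load_breast_cancer          # stand-in tabular dataset
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(
    MinMaxScaler(),                                        # scale every feature to [0, 1]
    DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0),
)
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))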
Nowadays, security plays an important role in the Internet of Things (IoT) environment, especially in medical service domains like disease prediction and medical data storage. In the healthcare sector, huge volumes of data are generated on a daily basis, owing to the involvement of advanced healthcare devices. In general terms, healthcare images are highly sensitive to alterations, due to which any modification in their content can result in a faulty diagnosis. At the same time, it is also important to preserve the delicate contents of healthcare images during the reconstruction stage. Therefore, an encryption system is required in order to raise the privacy and security of healthcare data without leaking any sensitive data. The current study introduces an Improved Multileader Optimization with Shadow Image Encryption for Medical Image Security (IMLOSIE-MIS) technique for the IoT environment. The aim of the proposed IMLOSIE-MIS model is to accomplish security by generating shadows and encrypting them effectively. To do so, the presented IMLOSIE-MIS model initially generates a set of shadows for every input medical image. Besides, the shadow image encryption process takes place with the help of the Multileader Optimization (MLO) with Homomorphic Encryption (IMLO-HE) technique, where the optimal keys are generated with the help of the MLO algorithm. On the receiver side, the decryption process is carried out first, and the shadow image reconstruction process is then conducted. The experimental analysis was carried out on medical images, and the results infer that the proposed IMLOSIE-MIS model is an excellent performer compared to other models. The comparison study outcomes demonstrate that the IMLOSIE-MIS model is robust and offers high security in the IoT-enabled healthcare environment.
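As a simplified picture of the shadow idea, the sketch below splits an image into n XOR shares, any one of which is useless on its own, and reconstructs the image by XOR-ing all shares. This is a generic (n, n) secret-sharing illustration, not the MLO-driven homomorphic encryption scheme of the paper.

# (n, n) XOR secret sharing of an image into "shadows" and exact reconstruction.
# Generic illustration only -- not the MLO-optimized homomorphic encryption of IMLOSIE-MIS.
import numpy as np

rng = np.random.default_rng(42)

def make_shadows(img: np.ndarray, n: int = 3) -> list:
    """n-1 random shares plus one share chosen so that all shares XOR back to the image."""
    shares = [rng.integers(0, 256, size=img.shape, dtype=np.uint8) for _ in range(n - 1)]
    last = img.copy()
    for s in shares:
        last ^= s
    return shares + [last]

def reconstruct(shadows: list) -> np.ndarray:
    out = np.zeros_like(shadows[0])
    for s in shadows:
        out ^= s
    return out

image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in medical image
shadows = make_shadows(image, n=3)
print(np.array_equal(reconstruct(shadows), image))             # True: all shares recover it
print(np.array_equal(reconstruct(shadows[:2]), image))         # False: a missing share reveals nothing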
As the Internet of Things (IoT) continues to develop, a huge amount of data has been created. An IoT platform is rather sensitive to security challenges, as individual data can be leaked or sensor data could be used to cause accidents. Since typical intrusion detection system (IDS) studies are frequently designed to work well on fixed databases, it is unknown whether they will work well in changing network environments. Machine learning (ML) techniques have been shown to have a higher capacity for helping to mitigate attacks on IoT devices and other edge systems with reasonable accuracy. This article introduces a new Bird Swarm Algorithm with Wavelet Neural Network for Intrusion Detection (BSAWNN-ID) in the IoT platform. The main intention of the BSAWNN-ID algorithm lies in detecting and classifying intrusions in the IoT platform. To attain this, the BSAWNN-ID technique primarily designs a feature subset selection using the coyote optimization algorithm (FSS-COA). Next, to detect intrusions, the WNN model is utilized. At last, the WNN parameters are optimally modified by the use of the BSA. A widespread experiment was performed to depict the better performance of the BSAWNN-ID technique. The resultant values indicated the better performance of the BSAWNN-ID technique over other models, with an accuracy of 99.64% on the UNSW-NB15 dataset.
Malware is malicious software that performs multiple cyberattacks on the Internet, involving fraud, scams, nation-state cyberwar, and cybercrime. Such malicious software programs come under different classifications, namely Trojans, viruses, spyware, worms, ransomware, rootkits, botnet malware, etc. Ransomware is a kind of malware that holds the victim's data hostage by encrypting the information on the user's computer to make it inaccessible, and only decrypts it after the user pays a ransom of a sum of money. To evade detection by Machine Learning (ML)-based defences, various forms of ransomware utilize more than one mechanism in their attack flow. This study focuses on designing a Learning-Based Artificial Algae Algorithm with Optimal Machine Learning Enabled Malware Detection (LBAAA-OMLMD) approach for computer networks. The presented LBAAA-OMLMD model mainly aims to detect and classify the existence of ransomware and goodware in the network. To accomplish this, the LBAAA-OMLMD model initially derives a Learning-Based Artificial Algae Algorithm based Feature Selection (LBAAA-FS) model to reduce the curse-of-dimensionality problem. Besides, the Flower Pollination Algorithm (FPA) with an Echo State Network (ESN) classification model is applied, where the FPA helps to appropriately adjust the parameters related to the ESN model to accomplish enhanced classifier results. The experimental validation of the LBAAA-OMLMD model is tested using a benchmark dataset, and the outcomes are inspected with distinct measures. The comprehensive comparative examination demonstrated the betterment of the LBAAA-OMLMD model over recent algorithms.
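The Echo State Network classifier mentioned above keeps a fixed random recurrent reservoir and trains only a linear readout. A compact NumPy sketch is shown below with invented sizes and toy data, and without the FPA tuning stage.

# Minimal Echo State Network: fixed random reservoir, ridge-regression readout.
# Reservoir size, spectral radius, and data are invented; FPA tuning is not shown.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 8, 100

W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # rescale spectral radius below 1

def reservoir_state(sequence):
    """Run one feature sequence through the reservoir and keep the final state."""
    x = np.zeros(n_res)
    for u in sequence:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Toy task: classify 60 random sequences of length 20 into 2 classes.
seqs = rng.normal(size=(60, 20, n_in))
labels = rng.integers(0, 2, size=60)
states = np.stack([reservoir_state(s) for s in seqs])

ridge = 1e-2
Y = np.eye(2)[labels]
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ Y)  # readout
pred = np.argmax(states @ W_out, axis=1)
print("training accuracy of the sketch:", (pred == labels).mean())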
Recently, the computer aided diagnosis (CAD) model has become an effective tool for decision making in the healthcare sector. Advances in computer vision and artificial intelligence (AI) techniques have resulted in the effective design of CAD models, which enable the detection of diseases using various imaging modalities. Oral cancer (OC) occurs commonly in the head and neck globally. Earlier identification of OC enables improved survival rates and reduced mortality rates; therefore, the design of a CAD model for OC detection and classification becomes essential. Accordingly, this study introduces a novel Computer Aided Diagnosis for OC using Sailfish Optimization with Fusion based Classification (CADOC-SFOFC) model. The proposed CADOC-SFOFC model determines the existence of OC in medical images. To accomplish this, a fusion-based feature extraction process is carried out using the VGGNet-16 and Residual Network (ResNet) models. Besides, the feature vectors are fused and passed into the extreme learning machine (ELM) model for the classification process. Moreover, the SFO algorithm is utilized for effective parameter selection of the ELM model, consequently resulting in enhanced performance. The experimental analysis of the CADOC-SFOFC model was tested on a Kaggle dataset, and the results reported the betterment of the CADOC-SFOFC model over the compared methods, with a maximum accuracy of 98.11%. Therefore, the CADOC-SFOFC model has great potential as an inexpensive and non-invasive tool that supports the screening process and enhances detection efficiency.
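The fusion-based feature extraction step can be pictured as concatenating pooled deep features from two pretrained backbones. The Keras sketch below uses ImageNet-pretrained VGG16 and ResNet50 as stand-ins (ResNet50 specifically is an assumption; the abstract only names "Residual Network"), and a plain logistic-regression readout in place of the SFO-tuned ELM.

# Feature fusion sketch: concatenate pooled VGG16 and ResNet50 features per image.
# ResNet50 is an assumed variant ("Residual Network" in the abstract); the SFO-tuned
# ELM classifier is replaced here by plain logistic regression for brevity.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet", pooling="avg")
res = tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")

def fused_features(images: np.ndarray) -> np.ndarray:
    """images: float32 batch of shape (n, 224, 224, 3) with values in [0, 255]."""
    f_vgg = vgg.predict(tf.keras.applications.vgg16.preprocess_input(images.copy()), verbose=0)
    f_res = res.predict(tf.keras.applications.resnet50.preprocess_input(images.copy()), verbose=0)
    return np.concatenate([f_vgg, f_res], axis=1)        # (n, 512 + 2048)

# Toy run on random "images"; a real pipeline would load the oral-cancer dataset here.
X_img = np.random.rand(8, 224, 224, 3).astype("float32") * 255
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
clf = LogisticRegression(max_iter=1000).fit(fused_features(X_img), y)
print(clf.score(fused_features(X_img), y))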
With the increased advancement of smart industries, cybersecurity has become a vital growth factor in the success of industrial transformation. The Industrial Internet of Things (IIoT), or Industry 4.0, has revolutionized the concepts of manufacturing and production altogether. In Industry 4.0, powerful Intrusion Detection Systems (IDS) play a significant role in ensuring network security. Though various intrusion detection techniques have been developed so far, it remains challenging to protect the intricate data of networks, because conventional Machine Learning (ML) approaches are inadequate and insufficient to address the demands of dynamic IIoT networks; Deep Learning (DL), in contrast, can be employed to identify anonymous intrusions. Therefore, the current study proposes a Hunger Games Search Optimization with Deep Learning-Driven Intrusion Detection (HGSODL-ID) model for the IIoT environment. The presented HGSODL-ID model exploits the linear normalization approach to transform the input data into a useful format. The HGSO algorithm is employed for Feature Selection (HGSO-FS) to reduce the curse of dimensionality. Moreover, Sparrow Search Optimization (SSO) is utilized with a Graph Convolutional Network (GCN) to classify and identify intrusions in the network. Finally, the SSO technique is exploited to fine-tune the hyperparameters involved in the GCN model. The proposed HGSODL-ID model was experimentally validated using a benchmark dataset, and the results confirmed the superiority of the proposed HGSODL-ID method over recent approaches.
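A GCN of the kind referenced above follows the standard propagation rule H' = sigma(D^-1/2 (A + I) D^-1/2 H W) over a graph of network entities. The NumPy sketch below shows one such layer on a toy adjacency matrix; the graph and layer sizes are invented for illustration.

# One graph-convolution layer following Kipf & Welling's propagation rule:
# H' = ReLU(D^-1/2 (A + I) D^-1/2 H W). Graph and sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 0, 0],          # toy adjacency matrix over 4 network entities
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

H = rng.normal(size=(4, 8))            # node features (e.g., per-flow statistics)
W = rng.normal(size=(8, 16))           # learnable layer weights

H_next = np.maximum(A_norm @ H @ W, 0) # ReLU activation
print(H_next.shape)                    # (4, 16): new node embeddings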
Natural Language Processing (NLP) for the Arabic language has gained much significance in recent years. The most commonly utilized NLP task is text classification, whose main intention is to apply Machine Learning (ML) approaches to automatically classify textual files into one or more pre-defined categories. In ML approaches, the first and most crucial step is identifying an appropriately large dataset to train and test the method. One of the trending ML techniques, the Deep Learning (DL) technique, needs huge volumes of different types of datasets for training to yield the best outcomes. In this background, the current study designs a new Dice Optimization with a Deep Hybrid Boltzmann Machine-based Arabic Corpus Classification (DODHBM-ACC) model. The presented DODHBM-ACC model primarily relies upon different stages of pre-processing and the word2vec word embedding process. For Arabic text classification, the DHBM technique is utilized; this technique is a hybrid version of the Deep Boltzmann Machine (DBM) and Deep Belief Network (DBN), which has the advantage of learning the decisive intention of the classification process. To adjust the hyperparameters of the DHBM technique, the Dice Optimization Algorithm (DOA) is exploited in this study. The experimental analysis was conducted to establish the superior performance of the proposed DODHBM-ACC model. The outcomes inferred the better performance of the proposed DODHBM-ACC model over other recent approaches.
Computational linguistics is an engineering-based scientific discipline that deals with understanding written and spoken language from a computational viewpoint. Further, the domain also helps construct artefacts that are useful in processing and producing a language, either in bulk or in a dialogue setting. Named Entity Recognition (NER) is a fundamental task in the data extraction process. It concentrates on identifying and labelling the atomic components of texts, grouping them under different entities such as organizations, people, places, and times; the NER mechanism can also identify and extract additional types of entities as per the requirements. The significance of the NER mechanism has been well established in Natural Language Processing (NLP) tasks, and various research investigations have been conducted to develop novel NER methods. The conventional ways of managing the task range from rule-based and hand-crafted feature-based Machine Learning (ML) techniques to Deep Learning (DL) techniques. In this aspect, the current study introduces a novel Dart Games Optimizer with Hybrid Deep Learning-Driven Computational Linguistics (DGOHDL-CL) model for NER. The presented DGOHDL-CL technique aims to determine and label the atomic components of texts as a collection of named entities. In the presented DGOHDL-CL technique, the word embedding process is executed at the initial stage with the help of the word2vec model. For the NER mechanism, the Convolutional Gated Recurrent Unit (CGRU) model is employed in this work. At last, the DGO technique is used as a hyperparameter tuning strategy for the CGRU algorithm to boost the NER outcomes. No earlier studies have integrated the DGO mechanism with the CGRU model for NER. To exhibit the superiority of the proposed DGOHDL-CL technique, a widespread simulation analysis was executed on two datasets, CoNLL-2003 and OntoNotes 5.0. The experimental outcomes establish the promising performance of the DGOHDL-CL technique over other models.
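A convolutional GRU tagger of the kind used here can be sketched in a few Keras layers: word embeddings, a 1-D convolution for local context, a bidirectional GRU, and a per-token softmax over entity tags. The vocabulary size, layer widths, and tag count below are invented, and the DGO tuning stage is not shown.

# Convolutional GRU sequence tagger for NER (minimal sketch).
# Vocabulary size, layer widths, and the tag count are invented; DGO tuning is omitted.
import tensorflow as tf

vocab_size, max_len, n_tags = 10_000, 50, 9   # e.g., BIO tags for 4 entity types + O

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(max_len,), dtype="int32"),
    tf.keras.layers.Embedding(vocab_size, 100),                     # word2vec-sized embeddings
    tf.keras.layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_tags, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would call model.fit(token_id_sequences, tag_id_sequences) on CoNLL-style data.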
The Internet of Things (IoT) offers a new era of connectivity, which goes beyond laptops and smart connected devices to connected vehicles, smart homes, smart cities, and connected healthcare. The massive quantity of data gathered from numerous IoT devices poses security and privacy concerns for users. With the increasing use of multimedia in communications, the content security of remote-sensing images has attracted much attention in academia and industry, and image encryption is important for securing remote sensing images in the IoT environment. Recently, researchers have introduced plenty of algorithms for encrypting images. This study introduces an Improved Sine Cosine Algorithm with Chaotic Encryption based Remote Sensing Image Encryption (ISCACE-RSI) technique for the IoT environment. The proposed model follows a three-stage process, namely pre-processing, encryption, and optimal key generation. The remote sensing images are preprocessed at the initial stage to enhance image quality. Next, the ISCACE-RSI technique exploits the double-layer remote sensing image encryption (DLRSIE) algorithm for encrypting the images. The DLRSIE methodology incorporates the design of chaotic maps and a deoxyribonucleic acid (DNA) Strand Displacement (DNASD) approach. The chaotic map is employed for generating pseudorandom sequences and implementing routine scrambling and diffusion processes on the plaintext images. Then, the study presents three DNASD-related encryption rules based on the variety of DNASD, and those rules are applied for encrypting the images at the DNA sequence level. For optimal key generation in the DLRSIE technique, the ISCA is applied with an objective function that maximizes the peak signal-to-noise ratio (PSNR). To examine the performance of the ISCACE-RSI model, a detailed set of simulations was conducted. The comparative study reported the better performance of the ISCACE-RSI model over other existing approaches.
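The chaotic-map diffusion idea can be illustrated with a logistic map that generates a pseudorandom keystream XOR-ed into the pixels, plus a PSNR helper for the objective mentioned above. The parameters below are illustrative assumptions, and the DNA-level rules and ISCA key search are not reproduced.

# Logistic-map keystream diffusion of an image plus a PSNR helper (illustrative only;
# the DNA strand-displacement rules and ISCA key optimization are not reproduced).
import numpy as np

def logistic_keystream(n: int, x0: float = 0.4123, r: float = 3.99) -> np.ndarray:
    """Iterate x <- r*x*(1-x) and quantize each state to one keystream byte."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def diffuse(img: np.ndarray, x0: float, r: float) -> np.ndarray:
    ks = logistic_keystream(img.size, x0, r).reshape(img.shape)
    return img ^ ks                     # XOR diffusion; applying it again decrypts

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(7)
plain = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in remote sensing image
cipher = diffuse(plain, 0.4123, 3.99)
print("PSNR(plain, cipher):", round(psnr(plain, cipher), 2))   # low PSNR here means strong distortion
print("exact recovery:", np.array_equal(diffuse(cipher, 0.4123, 3.99), plain))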
Wireless Sensor Networks (WSN) have evolved into a key technology for ubiquitous living, and the domain has remained an active area of research owing to its extensive range of applications. In spite of this, it is challenging to design an energy-efficient WSN. Routing approaches are leveraged to reduce energy utilization and prolong the lifespan of the network. In order to solve the restricted-energy problem, it is essential to reduce the energy consumed in transmitting data through the routing protocol and to improve network development. In this background, the current study proposes a novel Differential Evolution with Arithmetic Optimization Algorithm Enabled Multi-hop Routing Protocol (DEAOA-MHRP) for WSN. The aim of the proposed DEAOA-MHRP model is to select the optimal routes to reach the destination in a WSN. To accomplish this, the DEAOA-MHRP model integrates the concepts of Differential Evolution (DE) and the Arithmetic Optimization Algorithm (AOA) to improve convergence rate and solution quality; the inclusion of DE in the traditional AOA helps in overcoming local-optima problems. In addition, the proposed DEAOA-MHRP technique derives a fitness function comprising two input variables, namely residual energy and distance. To ensure the energy-efficient performance of the DEAOA-MHRP model, a detailed comparative study was conducted, and the results established its superior performance over recent approaches.
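The two-variable fitness mentioned above can be pictured as a weighted combination of normalized residual energy (to be maximized) and normalized route distance (to be minimized). The sketch below is one plausible form with invented weights and bounds, not the paper's exact expression.

# One plausible fitness over residual energy and distance for multi-hop route selection.
# Weights and normalization bounds are invented for illustration; not the DEAOA-MHRP expression.
import math

def route_fitness(residual_energies, hops, e_max=5.0, d_max=250.0, w1=0.5, w2=0.5):
    """Higher is better: prefer relays with more residual energy and shorter total distance.
    residual_energies: Joules left at each relay; hops: list of (x1, y1, x2, y2) links."""
    energy_term = min(residual_energies) / e_max                        # normalized to [0, 1]
    distance = sum(math.dist((x1, y1), (x2, y2)) for x1, y1, x2, y2 in hops)
    distance_term = 1.0 - min(distance / d_max, 1.0)
    return w1 * energy_term + w2 * distance_term

route_energy = [4.1, 3.2, 4.8]
route_hops = [(0, 0, 40, 30), (40, 30, 90, 60), (90, 60, 140, 100)]
print(route_fitness(route_energy, route_hops))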
基金funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Groups Program Grant No.(RGP-1443-0051).
文摘Owing to the rapid increase in the interchange of text information through internet networks,the reliability and security of digital content are becoming a major research problem.Tampering detection,Content authentication,and integrity verification of digital content interchanged through the Internet were utilized to solve a major concern in information and communication technologies.The authors’difficulties were tampering detection,authentication,and integrity verification of the digital contents.This study develops an Automated Data Mining based Digital Text Document Watermarking for Tampering Attack Detection(ADMDTW-TAD)via the Internet.The DM concept is exploited in the presented ADMDTW-TAD technique to identify the document’s appropriate characteristics to embed larger watermark information.The presented secure watermarking scheme intends to transmit digital text documents over the Internet securely.Once the watermark is embedded with no damage to the original document,it is then shared with the destination.The watermark extraction process is performed to get the original document securely.The experimental validation of the ADMDTW-TAD technique is carried out under varying levels of attack volumes,and the outcomes were inspected in terms of different measures.The simulation values indicated that the ADMDTW-TAD technique improved performance over other models.
基金The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through Large Groups Project under grant number(235/44)Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2023R114)+1 种基金Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4310373DSR71)This study is supported via funding from Prince Sattam bin Abdulaziz University project number(PSAU/2023/R/1444).
文摘With the flexible deployment and high mobility of Unmanned Aerial Vehicles(UAVs)in an open environment,they have generated con-siderable attention in military and civil applications intending to enable ubiquitous connectivity and foster agile communications.The difficulty stems from features other than mobile ad-hoc network(MANET),namely aerial mobility in three-dimensional space and often changing topology.In the UAV network,a single node serves as a forwarding,transmitting,and receiving node at the same time.Typically,the communication path is multi-hop,and routing significantly affects the network’s performance.A lot of effort should be invested in performance analysis for selecting the optimum routing system.With this motivation,this study modelled a new Coati Optimization Algorithm-based Energy-Efficient Routing Process for Unmanned Aerial Vehicle Communication(COAER-UAVC)technique.The presented COAER-UAVC technique establishes effective routes for communication between the UAVs.It is primarily based on the coati characteristics in nature:if attacking and hunting iguanas and escaping from predators.Besides,the presented COAER-UAVC technique concentrates on the design of fitness functions to minimize energy utilization and communication delay.A varied group of simulations was performed to depict the optimum performance of the COAER-UAVC system.The experimental results verified that the COAER-UAVC technique had assured improved performance over other approaches.
基金the Deanship of Scientific Research at King Khalid University for funding this work through Large Groups Project under Grant Number(71/43)Princess Nourah Bint Abdulrahman University Researchers Supporting Project Number(PNURSP2022R203)Princess Nourah Bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4310373DSR29).
文摘With new developments experienced in Internet of Things(IoT),wearable,and sensing technology,the value of healthcare services has enhanced.This evolution has brought significant changes from conventional medicine-based healthcare to real-time observation-based healthcare.Biomedical Electrocardiogram(ECG)signals are generally utilized in examination and diagnosis of Cardiovascular Diseases(CVDs)since it is quick and non-invasive in nature.Due to increasing number of patients in recent years,the classifier efficiency gets reduced due to high variances observed in ECG signal patterns obtained from patients.In such scenario computer-assisted automated diagnostic tools are important for classification of ECG signals.The current study devises an Improved Bat Algorithm with Deep Learning Based Biomedical ECGSignal Classification(IBADL-BECGC)approach.To accomplish this,the proposed IBADL-BECGC model initially pre-processes the input signals.Besides,IBADL-BECGC model applies NasNet model to derive the features from test ECG signals.In addition,Improved Bat Algorithm(IBA)is employed to optimally fine-tune the hyperparameters related to NasNet approach.Finally,Extreme Learning Machine(ELM)classification algorithm is executed to perform ECG classification method.The presented IBADL-BECGC model was experimentally validated utilizing benchmark dataset.The comparison study outcomes established the improved performance of IBADL-BECGC model over other existing methodologies since the former achieved a maximum accuracy of 97.49%.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2022R319),Princess Nourah bint Abdulrahman University,Riyadh,Saudi ArabiaThe authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4310373DSR27).
文摘Recently,Internet of Things(IoT)devices have developed at a faster rate and utilization of devices gets considerably increased in day to day lives.Despite the benefits of IoT devices,security issues remain challenging owing to the fact that most devices do not include memory and computing resources essential for satisfactory security operation.Consequently,IoT devices are vulnerable to different kinds of attacks.A single attack on networking system/device could result in considerable data to data security and privacy.But the emergence of artificial intelligence(AI)techniques can be exploited for attack detection and classification in the IoT environment.In this view,this paper presents novel metaheuristics feature selection with fuzzy logic enabled intrusion detection system(MFSFL-IDS)in the IoT environment.The presented MFSFL-IDS approach purposes for recognizing the existence of intrusions and accomplish security in the IoT environment.To achieve this,the MFSFL-IDS model employs data pre-processing to transform the data into useful format.Besides,henry gas solubility optimization(HGSO)algorithm is applied as a feature selection approach to derive useful feature vectors.Moreover,adaptive neuro fuzzy inference system(ANFIS)technique was utilized for the recognition and classification of intrusions in the network.Finally,binary bat algorithm(BBA)is exploited for adjusting parameters involved in the ANFIS model.A comprehensive experimental validation of the MFSFL-IDS model is carried out using benchmark dataset and the outcomes are assessed under distinct aspects.The experimentation outcomes highlighted the superior performance of the MFSFL-IDS model over recentapproaches with maximum accuracy of 99.80%.
基金the Deanship of Scientific Research at King Khalid University for funding this work through Large Groups Project under Grant Number(25/43)Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2022R303)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:22UQU4340237DSR28.
文摘Hyperspectral remote sensing/imaging spectroscopy is a novel approach to reaching a spectrum from all the places of a huge array of spatial places so that several spectral wavelengths are utilized for making coherent images.Hyperspectral remote sensing contains acquisition of digital images from several narrow,contiguous spectral bands throughout the visible,Thermal Infrared(TIR),Near Infrared(NIR),and Mid-Infrared(MIR)regions of the electromagnetic spectrum.In order to the application of agricultural regions,remote sensing approaches are studied and executed to their benefit of continuous and quantitativemonitoring.Particularly,hyperspectral images(HSI)are considered the precise for agriculture as they can offer chemical and physical data on vegetation.With this motivation,this article presents a novel Hurricane Optimization Algorithm with Deep Transfer Learning Driven Crop Classification(HOADTL-CC)model onHyperspectralRemote Sensing Images.The presentedHOADTL-CC model focuses on the identification and categorization of crops on hyperspectral remote sensing images.To accomplish this,the presentedHOADTL-CC model involves the design ofHOAwith capsule network(CapsNet)model for generating a set of useful feature vectors.Besides,Elman neural network(ENN)model is applied to allot proper class labels into the input HSI.Finally,glowworm swarm optimization(GSO)algorithm is exploited to fine tune the ENNparameters involved in this article.The experimental result scrutiny of the HOADTL-CC method can be tested with the help of benchmark dataset and the results are assessed under distinct aspects.Extensive comparative studies stated the enhanced performance of the HOADTL-CC model over recent approaches with maximum accuracy of 99.51%.
基金the Deanship of Scientific Research at King Khalid University for funding this work through Large Groups Project under grant number(61/43).Princess Nourah Bint Abdulrahman University Researchers Supporting Project number(PNURSP2022R319)Princess Nourah Bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4210118DSR24).
文摘Cloud Computing(CC)is the most promising and advanced technology to store data and offer online services in an effective manner.When such fast evolving technologies are used in the protection of computerbased systems from cyberattacks,it brings several advantages compared to conventional data protection methods.Some of the computer-based systems that effectively protect the data include Cyber-Physical Systems(CPS),Internet of Things(IoT),mobile devices,desktop and laptop computer,and critical systems.Malicious software(malware)is nothing but a type of software that targets the computer-based systems so as to launch cyberattacks and threaten the integrity,secrecy,and accessibility of the information.The current study focuses on design of Optimal Bottleneck driven Deep Belief Network-enabled Cybersecurity Malware Classification(OBDDBNCMC)model.The presentedOBDDBN-CMCmodel intends to recognize and classify the malware that exists in IoT-based cloud platform.To attain this,Zscore data normalization is utilized to scale the data into a uniform format.In addition,BDDBN model is also exploited for recognition and categorization of malware.To effectually fine-tune the hyperparameters related to BDDBN model,GrasshopperOptimizationAlgorithm(GOA)is applied.This scenario enhances the classification results and also shows the novelty of current study.The experimental analysis was conducted upon OBDDBN-CMC model for validation and the results confirmed the enhanced performance ofOBDDBNCMC model over recent approaches.
基金the Deanship of Scientific Research at King Khalid University for funding this work through Large Groups Project under Grant Number(71/43)Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2022R114)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4210118DSR26).
文摘In recent times,cities are getting smart and can be managed effectively through diverse architectures and services.Smart cities have the ability to support smart medical systems that can infiltrate distinct events(i.e.,smart hospitals,smart homes,and community health centres)and scenarios(e.g.,rehabilitation,abnormal behavior monitoring,clinical decision-making,disease prevention and diagnosis postmarking surveillance and prescription recommendation).The integration of Artificial Intelligence(AI)with recent technologies,for instance medical screening gadgets,are significant enough to deliver maximum performance and improved management services to handle chronic diseases.With latest developments in digital data collection,AI techniques can be employed for clinical decision making process.On the other hand,Cardiovascular Disease(CVD)is one of the major illnesses that increase the mortality rate across the globe.Generally,wearables can be employed in healthcare systems that instigate the development of CVD detection and classification.With this motivation,the current study develops an Artificial Intelligence Enabled Decision Support System for CVD Disease Detection and Classification in e-healthcare environment,abbreviated as AIDSS-CDDC technique.The proposed AIDSS-CDDC model enables the Internet of Things(IoT)devices for healthcare data collection.Then,the collected data is saved in cloud server for examination.Followed by,training 4484 CMC,2023,vol.74,no.2 and testing processes are executed to determine the patient’s health condition.To accomplish this,the presented AIDSS-CDDC model employs data preprocessing and Improved Sine Cosine Optimization based Feature Selection(ISCO-FS)technique.In addition,Adam optimizer with Autoencoder Gated RecurrentUnit(AE-GRU)model is employed for detection and classification of CVD.The experimental results highlight that the proposed AIDSS-CDDC model is a promising performer compared to other existing models.
基金The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the small Groups Project under grant number(168/43)Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2022R151),Princess Nourah bint Abdulrahman University,Riyadh,Saudi ArabiaThe authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4310373DSR59).
文摘Emerging technologies such as edge computing,Internet of Things(IoT),5G networks,big data,Artificial Intelligence(AI),and Unmanned Aerial Vehicles(UAVs)empower,Industry 4.0,with a progressive production methodology that shows attention to the interaction between machine and human beings.In the literature,various authors have focused on resolving security problems in UAV communication to provide safety for vital applications.The current research article presents a Circle Search Optimization with Deep Learning Enabled Secure UAV Classification(CSODL-SUAVC)model for Industry 4.0 environment.The suggested CSODL-SUAVC methodology is aimed at accomplishing two core objectives such as secure communication via image steganography and image classification.Primarily,the proposed CSODL-SUAVC method involves the following methods such as Multi-Level Discrete Wavelet Transformation(ML-DWT),CSO-related Optimal Pixel Selection(CSO-OPS),and signcryption-based encryption.The proposed model deploys the CSO-OPS technique to select the optimal pixel points in cover images.The secret images,encrypted by signcryption technique,are embedded into cover images.Besides,the image classification process includes three components namely,Super-Resolution using Convolution Neural Network(SRCNN),Adam optimizer,and softmax classifier.The integration of the CSO-OPS algorithm and Adam optimizer helps in achieving the maximum performance upon UAV communication.The proposed CSODLSUAVC model was experimentally validated using benchmark datasets and the outcomes were evaluated under distinct aspects.The simulation outcomes established the supreme better performance of the CSODL-SUAVC model over recent approaches.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2022R161)PrincessNourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the|Deanship of Scientific Research at Umm Al-Qura University|for supporting this work by Grant Code:(22UQU4310373DSR33).
文摘The recent developments in Multimedia Internet of Things(MIoT)devices,empowered with Natural Language Processing(NLP)model,seem to be a promising future of smart devices.It plays an important role in industrial models such as speech understanding,emotion detection,home automation,and so on.If an image needs to be captioned,then the objects in that image,its actions and connections,and any silent feature that remains under-projected or missing from the images should be identified.The aim of the image captioning process is to generate a caption for image.In next step,the image should be provided with one of the most significant and detailed descriptions that is syntactically as well as semantically correct.In this scenario,computer vision model is used to identify the objects and NLP approaches are followed to describe the image.The current study develops aNatural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System(NLPODL-IICS).The aim of the presented NLPODL-IICS model is to produce a proper description for input image.To attain this,the proposed NLPODL-IICS follows two stages such as encoding and decoding processes.Initially,at the encoding side,the proposed NLPODL-IICS model makes use of Hunger Games Search(HGS)with Neural Search Architecture Network(NASNet)model.This model represents the input data appropriately by inserting it into a predefined length vector.Besides,during decoding phase,Chimp Optimization Algorithm(COA)with deeper Long Short Term Memory(LSTM)approach is followed to concatenate the description sentences 4436 CMC,2023,vol.74,no.2 produced by the method.The application of HGS and COA algorithms helps in accomplishing proper parameter tuning for NASNet and LSTM models respectively.The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets.Awidespread comparative analysis confirmed the superior performance of NLPODL-IICS model over other models.
基金The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through Small Groups Project under Grant Number(120/43)Princess Nourah Bint Abdulrahman University Researchers Supporting Project Number(PNURSP2022R263)+1 种基金Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura Universitysupporting this work by Grant Code:(22UQU4310373DSR36).
文摘Sentiment Analysis(SA),a Machine Learning(ML)technique,is often applied in the literature.The SA technique is specifically applied to the data collected from social media sites.The research studies conducted earlier upon the SA of the tweets were mostly aimed at automating the feature extraction process.In this background,the current study introduces a novel method called Quantum Particle Swarm Optimization with Deep Learning-Based Sentiment Analysis on Arabic Tweets(QPSODL-SAAT).The presented QPSODL-SAAT model determines and classifies the sentiments of the tweets written in Arabic.Initially,the data pre-processing is performed to convert the raw tweets into a useful format.Then,the word2vec model is applied to generate the feature vectors.The Bidirectional Gated Recurrent Unit(BiGRU)classifier is utilized to identify and classify the sentiments.Finally,the QPSO algorithm is exploited for the optimal finetuning of the hyperparameters involved in the BiGRU model.The proposed QPSODL-SAAT model was experimentally validated using the standard datasets.An extensive comparative analysis was conducted,and the proposed model achieved a maximum accuracy of 98.35%.The outcomes confirmed the supremacy of the proposed QPSODL-SAAT model over the rest of the approaches,such as the Surface Features(SF),Generic Embeddings(GE),Arabic Sentiment Embeddings constructed using the Hybrid(ASEH)model and the Bidirectional Encoder Representations from Transformers(BERT)model.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under Grant Number (61/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R114), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4310373DSR26).
Abstract: Autism Spectrum Disorder (ASD) refers to a neuro-disorder in which an individual experiences long-lasting effects on communication and interaction with others. Advanced information technology that employs Artificial Intelligence (AI) models has assisted in the early identification of ASD by using pattern detection. Recent advances in AI models assist in the automated identification and classification of ASD, which helps to reduce the severity of the disease. This study introduces an automated ASD classification using an owl search algorithm with machine learning (ASDC-OSAML) model. The proposed ASDC-OSAML model majorly focuses on the identification and classification of ASD. To attain this, the presented ASDC-OSAML model follows a min-max normalization approach as a pre-processing stage. Next, the owl search algorithm (OSA)-based feature selection (OSA-FS) model is used to derive feature subsets. Then, the beetle swarm antenna search (BSAS) algorithm with the Iterative Dichotomiser 3 (ID3) classification method is applied for ASD detection and classification. The design of the BSAS algorithm helps to determine the parameter values of the ID3 classifier. The performance analysis of the ASDC-OSAML model is performed using a benchmark dataset. An extensive comparison study highlighted the supremacy of the ASDC-OSAML model over recent state-of-the-art approaches.
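The normalization, feature-selection, and ID3 stages can be approximated with off-the-shelf components. The sketch below assumes scikit-learn, uses a built-in dataset as a stand-in for the ASD screening data, and replaces the owl-search and beetle-swarm metaheuristics with univariate feature selection and default tree settings, purely for illustration.

```python
from sklearn.datasets import load_breast_cancer             # stand-in tabular dataset
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)                         # min-max normalization stage
X = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)   # feature subset stage

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# ID3 grows its tree by information gain, i.e. the entropy splitting criterion.
clf = DecisionTreeClassifier(criterion="entropy").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```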
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Small Groups Project under Grant Number (241/43); Princess Nourah Bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R319), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4340237DSR30).
Abstract: Nowadays, security plays an important role in the Internet of Things (IoT) environment, especially in medical services domains like disease prediction and medical data storage. In the healthcare sector, huge volumes of data are generated on a daily basis, owing to the involvement of advanced healthcare devices. In general terms, healthcare images are highly sensitive to alterations, due to which any modification in their content can result in a faulty diagnosis. At the same time, it is also significant to maintain the delicate contents of healthcare images during the reconstruction stage. Therefore, an encryption system is required to raise the privacy and security of healthcare data without leaking any sensitive data. The current study introduces an Improved Multileader Optimization with Shadow Image Encryption for Medical Image Security (IMLOSIE-MIS) technique for the IoT environment. The aim of the proposed IMLOSIE-MIS model is to accomplish security by generating shadows and encrypting them effectively. To do so, the presented IMLOSIE-MIS model initially generates a set of shadows for every input medical image. Besides, the shadow image encryption process takes place with the help of the Multileader Optimization (MLO) with Homomorphic Encryption (IMLO-HE) technique, where the optimal keys are generated with the help of the MLO algorithm. On the receiver side, the decryption process is carried out first, followed by the shadow image reconstruction process. The experimental analysis was carried out on medical images, and the results inferred that the proposed IMLOSIE-MIS model is an excellent performer compared to other models. The comparison study outcomes demonstrate that the IMLOSIE-MIS model is robust and offers high security in the IoT-enabled healthcare environment.
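To make the shadow idea concrete, the sketch below shows a simplified XOR-based share splitting in NumPy: each shadow alone reveals nothing, while combining all of them restores the image. This is only an illustration of shadow generation and reconstruction; it does not reproduce the paper's multileader-optimized homomorphic encryption.

```python
import numpy as np

def make_shadows(image: np.ndarray, n: int = 3, seed: int = 0):
    """Split an image into n shadows; the last one is the image XOR all random ones."""
    rng = np.random.default_rng(seed)
    shadows = [rng.integers(0, 256, image.shape, dtype=np.uint8) for _ in range(n - 1)]
    last = image.copy()
    for s in shadows:
        last ^= s
    return shadows + [last]

def reconstruct(shadows):
    out = np.zeros_like(shadows[0])
    for s in shadows:
        out ^= s                        # XOR of all shares cancels the randomness
    return out

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in medical image
shares = make_shadows(img)
assert np.array_equal(reconstruct(shares), img)
```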
Funding: This work was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Groups Program, Grant No. (RGP-1443-0048).
Abstract: As the Internet of Things (IoT) continues to develop, a huge amount of data has been created. An IoT platform is rather sensitive to security challenges, as individual data can be leaked or sensor data could be used to cause accidents. Since typical intrusion detection system (IDS) studies are frequently designed to work well on particular databases, it is unknown whether they will work well in changing network environments. Machine Learning (ML) techniques have been shown to have a higher capacity to help mitigate attacks on IoT devices and other edge systems with reasonable accuracy. This article introduces a new Bird Swarm Algorithm with Wavelet Neural Network for Intrusion Detection (BSAWNN-ID) in the IoT platform. The main intention of the BSAWNN-ID algorithm lies in detecting and classifying intrusions in the IoT platform. To attain this, the BSAWNN-ID technique primarily designs a feature subset selection using the coyote optimization algorithm (FSS-COA). Next, to detect intrusions, the WNN model is utilized. At last, the WNN parameters are optimally modified by the use of the BSA. A widespread experiment is performed to depict the better performance of the BSAWNN-ID technique. The resultant values indicated the better performance of the BSAWNN-ID technique over other models, with an accuracy of 99.64% on the UNSW-NB15 dataset.
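The feature-subset selection step can be pictured as a fitness function that a swarm-style search (such as the coyote-based FSS-COA above) would evaluate repeatedly. The sketch below assumes scikit-learn, uses synthetic data, and lets an MLP stand in for the wavelet neural network; the coyote and bird-swarm update rules themselves are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier             # stand-in for the WNN

X, y = make_classification(n_samples=400, n_features=20, random_state=1)

def fitness(mask: np.ndarray, alpha: float = 0.9) -> float:
    """Higher is better: weighted accuracy minus a penalty for large subsets."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(MLPClassifier(max_iter=300),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

# A metaheuristic would evolve these masks; here we simply score a few random ones.
best = max((np.random.default_rng(i).integers(0, 2, 20) for i in range(10)), key=fitness)
print("selected features:", np.flatnonzero(best))
```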
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R319), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4310373DSR34. The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Groups Funding program, Grant Code (NU/RG/SERC/11/4).
Abstract: Malware is a malicious software program that performs multiple cyberattacks on the Internet, involving fraud, scams, nation-state cyberwar, and cybercrime. Such malicious software programs come under different classifications, namely Trojans, viruses, spyware, worms, ransomware, rootkits, botnet malware, etc. Ransomware is a kind of malware that holds the victim's data hostage by encrypting the information on the user's computer to make it inaccessible, and only decrypts it after the user pays a ransom, i.e., a sum of money. To prevent detection, various forms of ransomware utilize more than one mechanism in their attack flow in conjunction with Machine Learning (ML) algorithms. This study focuses on designing a Learning-Based Artificial Algae Algorithm with Optimal Machine Learning Enabled Malware Detection (LBAAA-OMLMD) approach for computer networks. The presented LBAAA-OMLMD model mainly aims to detect and classify the existence of ransomware and goodware in the network. To accomplish this, the LBAAA-OMLMD model initially derives a Learning-Based Artificial Algae Algorithm based Feature Selection (LBAAA-FS) model to reduce the curse-of-dimensionality problem. Besides, the Flower Pollination Algorithm (FPA) with an Echo State Network (ESN) classification model is applied. The FPA model helps to appropriately adjust the parameters related to the ESN model to accomplish enhanced classifier results. The experimental validation of the LBAAA-OMLMD model is tested using a benchmark dataset, and the outcomes are inspected in distinct measures. The comprehensive comparative examination demonstrated the betterment of the LBAAA-OMLMD model over recent algorithms.
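The Echo State Network classifier at the core of the pipeline can be sketched in a few lines of NumPy: a fixed random reservoir produces states, and only the linear readout is trained. The reservoir size, spectral radius, and toy data below are illustrative, and the flower-pollination tuning step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res=200, spectral_radius=0.9):
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))    # enforce echo-state scaling
    return W_in, W

def final_state(seq, W_in, W):
    """Run one feature sequence through the reservoir and keep the last state."""
    x = np.zeros(W.shape[0])
    for u in seq:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Toy data: 100 sequences of 10 steps with 5 features each, two classes.
X = rng.normal(size=(100, 10, 5))
y = (X.mean(axis=(1, 2)) > 0).astype(int)

W_in, W = make_reservoir(n_in=5)
states = np.stack([final_state(seq, W_in, W) for seq in X])
W_out, *_ = np.linalg.lstsq(states, y, rcond=None)           # train the linear readout
print("train accuracy:", ((states @ W_out > 0.5) == y).mean())
```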
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number (RGP 2/142/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R151), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4310373DSR13. This research project was supported by a grant from the Research Center of the Female Scientific and Medical Colleges, Deanship of Scientific Research, King Saud University.
Abstract: Recently, the computer-aided diagnosis (CAD) model has become an effective tool for decision making in the healthcare sector. The advances in computer vision and Artificial Intelligence (AI) techniques have resulted in the effective design of CAD models, which enable the detection of diseases using various imaging modalities. Oral cancer (OC) commonly occurs in the head and neck region globally. Early identification of OC improves the survival rate and reduces the mortality rate, so the design of a CAD model for OC detection and classification becomes essential. Therefore, this study introduces a novel Computer Aided Diagnosis for OC using Sailfish Optimization with Fusion based Classification (CADOC-SFOFC) model. The proposed CADOC-SFOFC model determines the existence of OC in medical images. To accomplish this, a fusion-based feature extraction process is carried out by the use of the VGGNet-16 and Residual Network (ResNet) models. The feature vectors are then fused and passed into the Extreme Learning Machine (ELM) model for the classification process. Moreover, the SFO algorithm is utilized for effective parameter selection of the ELM model, consequently resulting in enhanced performance. The experimental analysis of the CADOC-SFOFC model was carried out on a Kaggle dataset, and the results reported the betterment of the CADOC-SFOFC model over the compared methods with a maximum accuracy of 98.11%. Therefore, the CADOC-SFOFC model has maximum potential as an inexpensive and non-invasive tool that supports the screening process and enhances detection efficiency.
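A hedged sketch of the fusion-based feature extraction and ELM classification, assuming TensorFlow/Keras and NumPy. ResNet50 is used as the Residual Network variant (an assumption; the abstract does not name one), and the sailfish-optimized ELM parameters are replaced by an arbitrary hidden width.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16, ResNet50

vgg = VGG16(include_top=False, pooling="avg", weights="imagenet")
resnet = ResNet50(include_top=False, pooling="avg", weights="imagenet")

def fused_features(images):                  # images: (N, 224, 224, 3), pixels in [0, 255]
    f1 = vgg.predict(tf.keras.applications.vgg16.preprocess_input(images.copy()))
    f2 = resnet.predict(tf.keras.applications.resnet50.preprocess_input(images.copy()))
    return np.concatenate([f1, f2], axis=1)  # fusion by concatenating the two vectors

def elm_train(X, y, hidden=500, seed=0):
    """Extreme Learning Machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W, b = rng.normal(size=(X.shape[1], hidden)), rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ np.eye(2)[y]  # y in {0, 1} -> one-hot targets
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```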
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R319), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4340237DSR44. The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Groups Funding program, Grant Code (NU/RG/SERC/11/4).
Abstract: With the increasing advancement of smart industries, cybersecurity has become a vital growth factor in the success of industrial transformation. The Industrial Internet of Things (IIoT), or Industry 4.0, has revolutionized the concepts of manufacturing and production altogether. In Industry 4.0, powerful Intrusion Detection Systems (IDS) play a significant role in ensuring network security. Though various intrusion detection techniques have been developed so far, it is challenging to protect the intricate data of networks. This is because conventional Machine Learning (ML) approaches are inadequate and insufficient to address the demands of dynamic IIoT networks, whereas Deep Learning (DL) techniques can be employed to identify anonymous intrusions. Therefore, the current study proposes a Hunger Games Search Optimization with Deep Learning-Driven Intrusion Detection (HGSODL-ID) model for the IIoT environment. The presented HGSODL-ID model exploits the linear normalization approach to transform the input data into a useful format. The HGSO algorithm is employed for Feature Selection (HGSO-FS) to reduce the curse of dimensionality. Moreover, Sparrow Search Optimization (SSO) is utilized with a Graph Convolutional Network (GCN) to classify and identify intrusions in the network. Finally, the SSO technique is exploited to fine-tune the hyperparameters involved in the GCN model. The proposed HGSODL-ID model was experimentally validated using a benchmark dataset, and the results confirmed the superiority of the proposed HGSODL-ID method over recent approaches.
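The classification core named above, a Graph Convolutional Network, reduces to the propagation rule H' = sigma(D^{-1/2}(A + I)D^{-1/2} H W). The NumPy sketch below shows a two-layer forward pass on a toy graph; how IIoT traffic is turned into a graph, and the HGSO/SSO optimization steps, are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feats, n_classes = 6, 8, 2

A = np.triu(rng.integers(0, 2, (n_nodes, n_nodes)), 1)
A = A + A.T                                       # undirected toy adjacency matrix
A_hat = A + np.eye(n_nodes)                       # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt          # symmetric normalization

H = rng.normal(size=(n_nodes, n_feats))           # linearly normalized node features
W1 = rng.normal(size=(n_feats, 16))
W2 = rng.normal(size=(16, n_classes))

hidden = np.maximum(A_norm @ H @ W1, 0)           # ReLU graph convolution
logits = A_norm @ hidden @ W2                     # per-node intrusion scores
print("predicted class per node:", logits.argmax(axis=1))
```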
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4310373DSR53).
Abstract: Natural Language Processing (NLP) for the Arabic language has gained much significance in recent years. The most commonly utilized NLP task is the text classification process, whose main intention is to apply Machine Learning (ML) approaches to automatically classify textual files into one or more pre-defined categories. In ML approaches, the first and foremost crucial step is identifying an appropriate large dataset to train and test the method. One of the trending ML techniques, the Deep Learning (DL) technique, needs huge volumes of different types of datasets for training to yield the best outcomes. In this background, the current study designs a new Dice Optimization with a Deep Hybrid Boltzmann Machine-based Arabic Corpus Classification (DODHBM-ACC) model. The presented DODHBM-ACC model primarily relies upon different stages of pre-processing and the word2vec word embedding process. For Arabic text classification, the DHBM technique is utilized. This technique is a hybrid version of the Deep Boltzmann Machine (DBM) and the Deep Belief Network (DBN). It has the advantage of learning the decisive intention of the classification process. To adjust the hyperparameters of the DHBM technique, the Dice Optimization Algorithm (DOA) is exploited in this study. An experimental analysis was conducted to establish the superior performance of the proposed DODHBM-ACC model. The outcomes inferred the better performance of the proposed DODHBM-ACC model over other recent approaches.
Funding: Princess Nourah Bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R281), Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4331004DSR10).
Abstract: Computational linguistics is an engineering-based scientific discipline. It deals with understanding written and spoken language from a computational viewpoint. Further, the domain also helps construct artefacts that are useful in processing and producing a language, either in bulk or in a dialogue setting. Named Entity Recognition (NER) is a fundamental task in the data extraction process. It concentrates on identifying and labelling the atomic components in texts, grouped under different entities such as organizations, people, places, and times. Further, the NER mechanism can identify and extract additional entity types as per the requirements. The significance of the NER mechanism has been well established in Natural Language Processing (NLP) tasks, and various research investigations have been conducted to develop novel NER methods. The conventional ways of managing the task range from rule-based and hand-crafted feature-based Machine Learning (ML) techniques to Deep Learning (DL) techniques. In this aspect, the current study introduces a novel Dart Games Optimizer with Hybrid Deep Learning-Driven Computational Linguistics (DGOHDL-CL) model for NER. The presented DGOHDL-CL technique aims to determine and label the atomic components in texts as a collection of named entities. In the presented DGOHDL-CL technique, the word embedding process is executed at the initial stage with the help of the word2vec model. For the NER mechanism, the Convolutional Gated Recurrent Unit (CGRU) model is employed in this work. At last, the DGO technique is used as a hyperparameter tuning strategy for the CGRU algorithm to boost the NER outcomes. No earlier studies have integrated the DGO mechanism with the CGRU model for NER. To exhibit the superiority of the proposed DGOHDL-CL technique, a widespread simulation analysis was executed on two datasets, CoNLL-2003 and OntoNotes 5.0. The experimental outcomes establish the promising performance of the DGOHDL-CL technique over other models.
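A hedged sketch of a CGRU-style sequence labeller for NER, assuming TensorFlow/Keras: a convolutional front-end over word2vec-style embeddings followed by a bidirectional GRU that emits one tag per token. The layer sizes and the nine-tag assumption (the CoNLL-2003 BIO scheme) are illustrative; the Dart Games Optimizer tuning is not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

VOCAB, TAGS, MAX_LEN = 20000, 9, 50        # 9 BIO tags as in CoNLL-2003

model = Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB, 100),                               # word2vec-style embeddings
    layers.Conv1D(128, 3, padding="same", activation="relu"),   # convolutional front-end
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(TAGS, activation="softmax")),  # one tag per token
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```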
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R319), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR48).
Abstract: The Internet of Things (IoT) offers a new era of connectivity, which goes beyond laptops and smart connected devices to connected vehicles, smart homes, smart cities, and connected healthcare. The massive quantity of data gathered from numerous IoT devices poses security and privacy concerns for users. With the increasing use of multimedia in communications, the content security of remote-sensing images has attracted much attention in academia and industry. Image encryption is important for securing remote sensing images in the IoT environment, and researchers have recently introduced plenty of algorithms for encrypting images. This study introduces an Improved Sine Cosine Algorithm with Chaotic Encryption based Remote Sensing Image Encryption (ISCACE-RSI) technique for the IoT environment. The proposed model follows a three-stage process, namely pre-processing, encryption, and optimal key generation. The remote sensing images are pre-processed at the initial stage to enhance the image quality. Next, the ISCACE-RSI technique exploits the double-layer remote sensing image encryption (DLRSIE) algorithm for encrypting the images. The DLRSIE methodology incorporates the design of chaotic maps and the deoxyribonucleic acid (DNA) Strand Displacement (DNASD) approach. The chaotic map is employed for generating pseudorandom sequences and implementing routine scrambling and diffusion processes on the plaintext images. Then, the study presents three DNASD-related encryption rules based on the variety of DNASD, and those rules are applied for encrypting the images at the DNA sequence level. For optimal key generation in the DLRSIE technique, the ISCA is applied with an objective function of maximizing the peak signal-to-noise ratio (PSNR). To examine the performance of the ISCACE-RSI model, a detailed set of simulations was conducted. The comparative study reported the better performance of the ISCACE-RSI model over other existing approaches.
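The chaotic scrambling-and-diffusion stage can be illustrated with a logistic map in NumPy: the chaotic sequence first permutes the pixel order and then acts as an XOR keystream. The DNA strand-displacement rules and the ISCA key search (which would maximize the PSNR of the recovered image) are not reproduced here, and the map parameters are illustrative.

```python
import numpy as np

def logistic_sequence(n, x0=0.4, r=3.99):
    xs = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1 - x0)                     # chaotic logistic map iteration
        xs[i] = x0
    return xs

def encrypt(img, x0=0.4):
    flat = img.flatten()
    seq = logistic_sequence(flat.size, x0)
    perm = np.argsort(seq)                         # scrambling: permute pixel order
    key = (seq * 255).astype(np.uint8)             # diffusion: XOR keystream
    return (flat[perm] ^ key).reshape(img.shape), perm, key

def decrypt(cipher, perm, key):
    flat = cipher.flatten() ^ key
    out = np.empty_like(flat)
    out[perm] = flat                               # undo the permutation
    return out.reshape(cipher.shape)

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)    # stand-in remote sensing tile
enc, perm, key = encrypt(img)
assert np.array_equal(decrypt(enc, perm, key), img)
```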
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number (RGP 2/142/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R237), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4310373DSR14).
Abstract: Wireless Sensor Networks (WSN) have evolved into a key technology for ubiquitous living, and the domain has remained active in research owing to its extensive range of applications. In spite of this, it is challenging to design energy-efficient WSNs. Routing approaches are leveraged to reduce energy utilization and prolong the lifespan of the network. To solve the restricted energy problem, it is essential to reduce the energy consumed in transmitting data through the routing protocol and to improve network development. In this background, the current study proposes a novel Differential Evolution with Arithmetic Optimization Algorithm Enabled Multi-hop Routing Protocol (DEAOA-MHRP) for WSN. The aim of the proposed DEAOA-MHRP model is to select the optimal routes to reach the destination in a WSN. To accomplish this, the DEAOA-MHRP model initially integrates the concepts of Differential Evolution (DE) and the Arithmetic Optimization Algorithm (AOA) to improve the convergence rate and solution quality. Besides, the inclusion of DE in the traditional AOA helps in overcoming local optima problems. In addition, the proposed DEAOA-MHRP technique derives a fitness function comprising two input variables, namely residual energy and distance. In order to establish the energy-efficient performance of the DEAOA-MHRP model, a detailed comparative study was conducted, and the results confirmed its superior performance over recent approaches.
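The fitness function named above, combining residual energy and distance, can be pictured with a small sketch; the node coordinates, energies, and weights are illustrative assumptions, and the DE/AOA search that would minimize this score is not shown.

```python
import math

nodes = {                                   # node id -> (x, y, residual energy in J)
    "S": (0, 0, 0.9), "A": (30, 10, 0.6), "B": (55, 20, 0.8), "D": (80, 25, 1.0),
}

def route_fitness(route, w_energy=0.5, w_dist=0.5):
    """Lower is better: short paths through relays with plenty of residual energy."""
    dist = sum(math.dist(nodes[a][:2], nodes[b][:2]) for a, b in zip(route, route[1:]))
    min_energy = min(nodes[n][2] for n in route[1:-1]) if len(route) > 2 else 1.0
    return w_dist * dist / 100.0 + w_energy * (1.0 - min_energy)

print(route_fitness(["S", "A", "B", "D"]), route_fitness(["S", "B", "D"]))
```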