The Combined Economic and Emission Dispatch (CEED) task forms a multi-objective optimization problem that must be resolved to minimize both emission and fuel costs. The disadvantage of conventional methods is their inability to avoid falling into local optima, particularly when handling nonlinear and complex systems. Metaheuristics have recently received considerable attention due to their enhanced capacity to avoid local optima while addressing optimization problems as a black box. Therefore, this paper focuses on the design of an improved sand cat optimization algorithm based CEED (ISCOA-CEED) technique. The ISCOA-CEED technique mainly concentrates on reducing the fuel costs and emissions of generation units. Moreover, the presented ISCOA-CEED technique transforms the equality constraints of the CEED problem into inequality constraints. Besides, the improved sand cat optimization algorithm (ISCOA) is derived from the integration of the traditional SCOA with the Levy Flight (LF) concept. At last, the ISCOA-CEED technique is applied to solve the CEED problem on systems of 6 and 11 generators. The experimental validation of the ISCOA-CEED technique confirmed its enhanced performance over other recent approaches.
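The abstract above names the Levy Flight concept but gives no equations. A minimal sketch of a Levy-flight perturbation using Mantegna's algorithm (the usual way such steps are drawn in metaheuristics) is shown below; the function names and the `step_scale` value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from math import gamma, pi, sin

def levy_steps(n_dims, beta=1.5, rng=None):
    """Draw one Levy-flight step per dimension via Mantegna's algorithm.

    beta is the Levy index (1 < beta <= 2); sigma_u is chosen so that
    u / |v|^(1/beta) approximates a Levy-stable step distribution.
    """
    rng = rng or np.random.default_rng(0)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, n_dims)
    v = rng.normal(0.0, 1.0, n_dims)
    return u / np.abs(v) ** (1 / beta)

def levy_move(position, best, step_scale=0.01, rng=None):
    """Perturb a candidate solution around the current best with Levy steps."""
    steps = levy_steps(position.size, rng=rng)
    return position + step_scale * steps * (position - best)
```

Occasional long jumps from the heavy-tailed step distribution are what helps such hybrids escape local optima.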
With the advent of the Internet of Things (IoT), several devices like sensors can nowadays interact and easily share information. But the IoT model is prone to security concerns, as several attackers try to hit the network and make it vulnerable. In such scenarios, security is the most prominent concern. Different models were intended to address these security problems; still, several emergent variants of botnet attacks like Bashlite, Mirai, and Persirai exploit security breaches. Malware classification and detection in the IoT model remains a problem, as adversaries reliably generate new variants of IoT malware and actively search for compromises on victim devices. This article develops a Sine Cosine Algorithm with Deep Learning based Ransomware Detection and Classification (SCADL-RWDC) method in an IoT environment. In the presented SCADL-RWDC technique, the major intention lies in recognizing and classifying ransomware attacks in the IoT platform. The SCADL-RWDC technique uses the SCA feature selection (SCA-FS) model to improve the detection rate. Besides, the SCADL-RWDC technique exploits the hybrid grey wolf optimizer (HGWO) with a gated recurrent unit (GRU) model for ransomware classification. A widespread experimental analysis is performed to exhibit the enhanced ransomware detection outcomes of the SCADL-RWDC technique. The comparison study reported the improvement of the SCADL-RWDC technique over other models.
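The abstract does not spell out the SCA-FS update rule. A minimal sketch of the canonical Sine Cosine Algorithm position update (Mirjalili's original formulation), plus the sigmoid thresholding commonly used to turn continuous positions into a feature-selection mask, is given below; the binarization step is an assumption about how SCA-FS is typically realized, not a detail from the paper.

```python
import numpy as np

def sca_step(pop, best, t, max_iter, a=2.0, rng=None):
    """One iteration of the canonical Sine Cosine Algorithm position update."""
    rng = rng or np.random.default_rng(42)
    r1 = a - t * (a / max_iter)              # shrinks: exploration -> exploitation
    r2 = rng.uniform(0, 2 * np.pi, pop.shape)
    r3 = rng.uniform(0, 2, pop.shape)
    r4 = rng.uniform(0, 1, pop.shape)
    sin_move = pop + r1 * np.sin(r2) * np.abs(r3 * best - pop)
    cos_move = pop + r1 * np.cos(r2) * np.abs(r3 * best - pop)
    return np.where(r4 < 0.5, sin_move, cos_move)

def to_feature_mask(pop, threshold=0.5):
    """Map continuous positions to a boolean feature-selection mask."""
    return 1 / (1 + np.exp(-pop)) > threshold
```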
Handwritten character recognition has become one of the challenging research matters. Many studies have been presented for recognizing the letters of various languages, yet the availability of Arabic handwritten character databases remains limited. Almost a quarter of a billion people worldwide write and speak Arabic, and many historical books and files, a vital data set for many Arab nations, are written in Arabic. Recently, Arabic handwritten character recognition (AHCR) has grabbed attention and has become a difficult topic for pattern recognition and computer vision (CV). Therefore, this study develops a fireworks optimization with deep learning based AHCR (FWODL-AHCR) technique. The major intention of the FWODL-AHCR technique is to recognize the distinct handwritten characters of the Arabic language. It initially pre-processes the handwritten images to improve their quality. Then, the RetinaNet-based deep convolutional neural network is applied as a feature extractor to produce feature vectors. Next, the deep echo state network (DESN) model is utilized to classify the handwritten characters. Finally, the FWO algorithm is exploited as a hyperparameter tuning strategy to boost recognition performance. A series of simulations were performed to exhibit the enhanced performance of the FWODL-AHCR technique. The comparison study portrayed the supremacy of the FWODL-AHCR technique over other approaches, with accuracies of 99.91% and 98.94% on the Hijja and AHCD datasets, respectively.
The human motion data collected using wearables like smartwatches can be used for activity recognition and emergency event detection. This is especially applicable in the case of elderly or disabled people who live self-reliantly in their homes. These sensors produce a huge volume of physical activity data that necessitates real-time recognition, especially during emergencies. Falling is one of the most important problems confronted by older people and people with movement disabilities. Numerous previous techniques were introduced, and a few used webcams to monitor the activity of elderly or disabled people. But the costs incurred upon installation and operation are high, whereas the technology is relevant only for indoor environments. Currently, commercial wearables use a wireless emergency transmitter that produces a number of false alarms and restricts a user's movements. Against this background, the current study develops an Improved Whale Optimization with Deep Learning-Enabled Fall Detection for Disabled People (IWODL-FDDP) model. The presented IWODL-FDDP model aims to identify fall events to assist disabled people. The presented IWODL-FDDP model applies an image filtering approach to pre-process the image. Besides, the EfficientNet-B0 model is utilized to generate valuable feature vector sets. Next, the Bidirectional Long Short Term Memory (BiLSTM) model is used for the recognition and classification of fall events. Finally, the IWO method is leveraged to fine-tune the hyperparameters related to the BiLSTM method, which shows the novelty of the work. The experimental analysis outcomes established the superior performance of the proposed IWODL-FDDP method with a maximum accuracy of 97.02%.
Recently, Internet of Things (IoT) devices produce a massive quantity of data from distinct sources that get transmitted over public networks. Cybersecurity becomes a challenging issue in the IoT environment, where the existence of cyber threats needs to be resolved. The development of automated tools for cyber threat detection and classification using machine learning (ML) and artificial intelligence (AI) tools becomes essential to accomplish security in the IoT environment and to minimize security issues related to IoT gadgets effectively. Therefore, this article introduces a new Mayfly Optimization (MFO) with Regularized Extreme Learning Machine (RELM) model, named MFO-RELM, for cybersecurity threat detection and classification in the IoT environment. The presented MFO-RELM technique accomplishes the effectual identification of cybersecurity threats that exist in the IoT environment. For accomplishing this, the MFO-RELM model pre-processes the actual IoT data into a meaningful format. In addition, the RELM model receives the pre-processed data and carries out the classification process. In order to boost the performance of the RELM model, the MFO algorithm is employed to tune it. The performance validation of the MFO-RELM model is tested using standard datasets, and the results highlighted the better outcomes of the MFO-RELM model under distinct aspects.
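A regularized ELM can be written in a few lines: a fixed random hidden layer followed by a ridge-regularized least-squares solve for the output weights. The sketch below assumes tanh activations and one-hot targets, which the abstract does not specify; it illustrates the RELM idea, not the paper's exact model.

```python
import numpy as np

class RELM:
    """Regularized extreme learning machine: random hidden layer,
    ridge-solved output weights (no backpropagation)."""

    def __init__(self, n_hidden=50, C=1.0, seed=0):
        self.n_hidden, self.C, self.seed = n_hidden, C, seed

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        T = np.eye(int(y.max()) + 1)[y]          # one-hot class targets
        H = self._hidden(X)
        # ridge-regularized least squares: beta = (H'H + I/C)^-1 H'T
        self.beta = np.linalg.solve(H.T @ H + np.eye(self.n_hidden) / self.C, H.T @ T)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```

Because only the output layer is solved, training is a single linear solve, which is the main appeal of ELM-style models for IoT-scale data.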
Energy is an essential element for any civilized country's social and economic development, but the use of fossil fuels and nonrenewable energy forms has many negative impacts on the environment and the ecosystem. The Republic of Yemen has very good potential to use renewable energy, yet few studies exist on renewable wind energy in Yemen. Given the lack of a similar analysis for the coastal city, this research newly investigates wind energy's potential near the Almukalla area by analyzing wind characteristics. Thus, evaluation, model identification, determination of the available energy density, computation of the capacity factors for several wind turbines, and calculation of the wind energy were carried out at three heights of 15, 30, and 50 meters. Average wind speeds were obtained only for the currently available data of the five recent years 2005-2009. This study involves a preliminary assessment of Almukalla's wind energy potential to provide a primary base and useful insights for wind engineers and experts, and aims to provide a useful assessment of the potential of wind energy in Almukalla for developing wind energy and an efficient wind approach. The Weibull distribution shows a perfect approximation for estimating the intensity of Yemen's wind energy. Depending on both the Weibull model and the results of the annual wind speed data analysis for the study site in Mukalla, the capacity factor for many turbines was also calculated, and the best suitable turbine was selected. According to the International Wind Energy Rating criteria, Almukalla falls under Category 7, which is rated "Superb" most of the year.
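The capacity-factor and power-density calculations described above follow directly from the Weibull model. A sketch using the standard closed-form expressions is shown below; the Weibull parameters and turbine speeds in the test are hypothetical round numbers, not values from the Almukalla study.

```python
from math import exp, gamma

def weibull_capacity_factor(k, c, v_cut_in, v_rated, v_cut_out):
    """Capacity factor of a turbine whose site winds follow a Weibull
    distribution with shape k and scale c (standard closed form)."""
    a = (v_cut_in / c) ** k
    b = (v_rated / c) ** k
    f = (v_cut_out / c) ** k
    return (exp(-a) - exp(-b)) / (b - a) - exp(-f)

def mean_power_density(k, c, rho=1.225):
    """Mean wind power density in W/m^2: 0.5 * rho * c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c ** 3 * gamma(1 + 3 / k)
```

With these two quantities per candidate turbine and height, turbines can be ranked and the best-suited one selected, as the abstract describes.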
Sign language recognition can be considered an effective solution for disabled people to communicate with others. It helps them convey the intended information using sign languages without any challenges. Recent advancements in computer vision and image processing techniques can be leveraged to detect and classify the signs used by disabled people in an effective manner. Metaheuristic optimization algorithms can be designed to fine-tune the hyperparameters used in Deep Learning (DL) models, as the latter considerably impact the classification results. With this motivation, the current study designs an Optimal Deep Transfer Learning Driven Sign Language Recognition and Classification (ODTL-SLRC) model for disabled people. The aim of the proposed ODTL-SLRC technique is to recognize and classify the sign languages used by disabled people. The proposed ODTL-SLRC technique employs the EfficientNet model to generate a collection of useful feature vectors. In addition, the hyperparameters involved in the EfficientNet model are fine-tuned with the help of the HGSO algorithm. Moreover, the Bidirectional Long Short Term Memory (BiLSTM) technique is employed for sign language classification. The proposed ODTL-SLRC technique was experimentally validated using a benchmark dataset, and the results were inspected under several measures. The comparative analysis results established the superior performance of the proposed ODTL-SLRC technique over recent approaches in terms of efficiency.
Melanoma remains a serious illness and a common form of skin cancer. Since the earlier detection of melanoma reduces the mortality rate, it is essential to design a reliable and automated disease diagnosis model using dermoscopic images. Recent advances in deep learning (DL) models are found useful in examining medical images and making proper decisions. In this study, an automated deep learning based melanoma detection and classification (ADL-MDC) model is presented. The goal of the ADL-MDC technique is to examine dermoscopic images to determine the existence of melanoma. The ADL-MDC technique performs contrast enhancement and data augmentation at the initial stage. Besides, the k-means clustering technique is applied for the image segmentation process. In addition, an Adagrad optimizer based Capsule Network (CapsNet) model is derived for an effective feature extraction process. Lastly, the crow search optimization (CSO) algorithm with a sparse autoencoder (SAE) model is utilized for the melanoma classification process. The exploitation of the Adagrad and CSO algorithms helps to accomplish improved performance. A wide range of simulation analyses is carried out on benchmark datasets, and the results are inspected under several aspects. The simulation results reported the enhanced performance of the ADL-MDC technique over recent approaches.
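The k-means segmentation step mentioned above can be illustrated on pixel intensities alone. The toy version below clusters 1D intensity values with a deterministic initialization; real dermoscopic segmentation would run over color features, so this is only a sketch of the mechanism.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Tiny k-means on pixel intensities. Returns (labels, centers).

    Centers are initialized evenly across the intensity range so the
    result is deterministic (no random seed needed)."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```

With k=2 this splits an image into a dark (lesion-like) and a bright (skin-like) region, which is the usual role of k-means in such pipelines.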
Mobile communication and Internet of Things (IoT) technologies have recently been established to collect data from human beings and the environment. The data collected can be leveraged to provide intelligent services through different applications. It is an extreme challenge to monitor disabled people from remote locations, because day-to-day events like falls often result in accidents. For a person with disabilities, a fall event is an important cause of mortality and post-traumatic complications. Therefore, detecting the fall events of disabled persons in smart homes at early stages is essential to provide the necessary support and increase their survival rate. The current study introduces a Whale Optimization Algorithm Deep Transfer Learning-Driven Automated Fall Detection (WOADTL-AFD) technique to improve the quality of life for persons with disabilities. The primary aim of the presented WOADTL-AFD technique is to identify and classify fall events to help disabled individuals. To attain this, the proposed WOADTL-AFD model initially uses a modified SqueezeNet feature extractor, which proficiently extracts the feature vectors. In addition, the WOADTL-AFD technique classifies the fall events using an extreme Gradient Boosting (XGBoost) classifier. In the presented WOADTL-AFD technique, the WOA approach is used to fine-tune the hyperparameters involved in the modified SqueezeNet model. The proposed WOADTL-AFD technique was experimentally validated using benchmark datasets, and the results confirmed the superior performance of the proposed WOADTL-AFD method compared to other recent approaches.
Soil classification is one of the emanating topics and major concerns in many countries. As the population has been increasing at a rapid pace, the demand for food also increases dynamically. Common approaches used by agriculturalists are inadequate to satisfy the rising demand, and thus they have hindered soil cultivation. There comes a demand for computer-related soil classification methods to support agriculturalists. This study introduces a Gradient-Based Optimizer and Deep Learning (DL) for Automated Soil Classification (GBODL-ASC) technique. The presented GBODL-ASC technique identifies various kinds of soil using DL and computer vision approaches. In the presented GBODL-ASC technique, three major processes are involved. At the initial stage, the presented GBODL-ASC technique applies the GBO algorithm with the EfficientNet prototype to generate feature vectors. For soil categorization, the GBODL-ASC procedure uses an arithmetic optimization algorithm (AOA) with a Back Propagation Neural Network (BPNN) model. The design of the GBO and AOA algorithms assists in the proper selection of parameter values for the EfficientNet and BPNN models, respectively. To demonstrate the significant soil classification outcomes of the GBODL-ASC methodology, a wide-ranging simulation analysis is performed on a soil dataset comprising 156 images and five classes. The simulation values show the betterment of the GBODL-ASC model over other models, with a maximum precision of 95.64%.
Proper waste management models using recent technologies like computer vision, machine learning (ML), and deep learning (DL) are needed to effectively handle the massive quantity of increasing waste. Therefore, waste classification becomes a crucial topic which helps to categorize waste into hazardous or non-hazardous ones and thereby assists in the decision making of the waste management process. This study concentrates on the design of a hazardous waste detection and classification using ensemble learning (HWDC-EL) technique to reduce toxicity and improve human health. The goal of the HWDC-EL technique is to detect multiple classes of wastes, particularly hazardous and non-hazardous wastes. The HWDC-EL technique involves the ensemble of three feature extractors using a Model Averaging technique, namely discrete local binary patterns (DLBP), EfficientNet, and DenseNet121. In addition, flower pollination algorithm (FPA) based hyperparameter optimizers are used to optimally adjust the parameters involved in the EfficientNet and DenseNet121 models. Moreover, a weighted voting-based ensemble classifier is derived using three machine learning algorithms, namely support vector machine (SVM), extreme learning machine (ELM), and gradient boosting tree (GBT). The performance of the HWDC-EL technique is tested using a benchmark Garbage dataset, and it obtains a maximum accuracy of 98.85%.
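The weighted voting-based ensemble described above can be sketched independently of the three base learners: each model contributes its predicted label with a weight, and the class with the highest total weight wins. The weights in the test are arbitrary illustrative values, not those learned in the paper.

```python
import numpy as np

def weighted_vote(pred_lists, weights, n_classes):
    """Combine per-model label predictions by weighted voting.

    pred_lists: (n_models, n_samples) integer class labels.
    weights: one non-negative weight per model.
    Returns the winning class label per sample."""
    preds = np.asarray(pred_lists)
    weights = np.asarray(weights, dtype=float)
    n_samples = preds.shape[1]
    scores = np.zeros((n_samples, n_classes))
    for model_preds, w in zip(preds, weights):
        scores[np.arange(n_samples), model_preds] += w   # add weight to voted class
    return scores.argmax(axis=1)
```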
Cybersecurity-related solutions have become familiar since they ensure security and privacy against cyberattacks in this digital era. Malicious Uniform Resource Locators (URLs) can be embedded in email or Twitter and used to lure vulnerable internet users into implementing malicious data in their systems. This may result in compromised security of the systems, scams, and other such cyberattacks. These attacks hijack huge quantities of the available data, incurring heavy financial loss. At the same time, Machine Learning (ML) and Deep Learning (DL) models have paved the way for designing models that can detect malicious URLs accurately and classify them. With this motivation, the current article develops an Artificial Fish Swarm Algorithm (AFSA) with Deep Learning Enabled Malicious URL Detection and Classification (AFSADL-MURLC) model. The presented AFSADL-MURLC model intends to differentiate malicious URLs from genuine URLs. To attain this, the AFSADL-MURLC model initially carries out data preprocessing and makes use of a GloVe-based word embedding technique. In addition, the created vector model is then passed on to Gated Recurrent Unit (GRU) classification to recognize the malicious URLs. Finally, AFSA is applied to the proposed model to enhance the efficiency of the GRU model. The proposed AFSADL-MURLC technique was experimentally validated using a benchmark dataset sourced from the Kaggle repository. The simulation results confirmed the supremacy of the proposed AFSADL-MURLC model over recent approaches under distinct measures.
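Before any embedding or GRU step, URL detectors usually derive simple lexical features from the raw string. The sketch below shows a handful of such features using only the standard library; the exact feature set is an assumption for illustration, not the paper's preprocessing.

```python
import re
from urllib.parse import urlparse

def url_lexical_features(url):
    """A few lexical features often computed on URLs before classification."""
    parsed = urlparse(url)
    host = parsed.netloc.split(":")[0]   # drop any port
    return {
        "length": len(url),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_special": len(re.findall(r"[^A-Za-z0-9]", url)),
        # raw-IP hosts are a classic phishing indicator
        "has_ip_host": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
    }
```

Such hand-crafted features are often concatenated with learned embeddings so the classifier sees both surface statistics and token semantics.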
As the Internet of Things (IoT) continues to develop, a huge amount of data has been created. An IoT platform is rather sensitive to security challenges, as individual data can be leaked, or sensor data could be used to cause accidents. As typical intrusion detection system (IDS) studies are frequently designed to work well on particular databases, it is unknown whether they work well in changing network environments. Machine learning (ML) techniques have been shown to have a higher capacity to help mitigate attacks on IoT devices and other edge systems with reasonable accuracy. This article introduces a new Bird Swarm Algorithm with Wavelet Neural Network for Intrusion Detection (BSAWNN-ID) in the IoT platform. The main intention of the BSAWNN-ID algorithm lies in detecting and classifying intrusions in the IoT platform. To attain this, the BSAWNN-ID technique primarily designs a feature subset selection using the coyote optimization algorithm (FSS-COA). Next, to detect intrusions, the WNN model is utilized. At last, the WNN parameters are optimally modified by the use of BSA. A widespread experiment is performed to depict the better performance of the BSAWNN-ID technique. The resultant values indicated the better performance of the BSAWNN-ID technique over other models, with an accuracy of 99.64% on the UNSW-NB15 dataset.
Recently, renewable energy (RE) has become popular due to its benefits, such as being inexpensive, low-carbon, ecologically friendly, steady, and reliable. The RE sources are gradually combined with non-renewable energy (NRE) sources into electric grids to satisfy energy demands. Since energy utilization is highly related to national energy policy, energy prediction using artificial intelligence (AI) and deep learning (DL) based models can be employed for energy prediction on RE and NRE power resources, and predicting the energy consumption of RE and NRE sources using effective models becomes necessary. With this motivation, this study presents a new multimodal fusion-based predictive tool for energy consumption prediction (MDLFM-ECP) of RE and NRE power sources. Actual data may influence the prediction performance of the results in prediction approaches. The proposed MDLFM-ECP technique involves pre-processing, fusion-based prediction, and hyperparameter optimization. In addition, the MDLFM-ECP technique involves the fusion of four deep learning (DL) models, namely long short-term memory (LSTM), bidirectional LSTM (Bi-LSTM), deep belief network (DBN), and gated recurrent unit (GRU). Moreover, the chaotic cat swarm optimization (CCSO) algorithm is applied to tune the hyperparameters of the DL models. The design of the CCSO algorithm for optimal hyperparameter tuning of the DL models shows the novelty of the work. A series of simulations took place to validate the superior performance of the proposed method, and the simulation outcomes emphasized the improved results of the MDLFM-ECP technique over recent approaches, with a minimum overall mean absolute percentage error of 3.58%.
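"Chaotic" variants of swarm optimizers typically replace uniform random numbers with a chaotic sequence such as the logistic map. The abstract does not say which map CCSO uses, so the logistic map below is an assumption chosen because it is the most common choice in such hybrids.

```python
def logistic_map(x0=0.7, r=4.0, n=100):
    """Generate a chaotic sequence in [0, 1] via the logistic map x <- r*x*(1-x).

    With r = 4 the map is fully chaotic; the sequence can stand in for
    uniform random draws inside a swarm optimizer's update rule."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq
```

The appeal over plain pseudo-randomness is better ergodic coverage of the search range, which several chaotic-metaheuristic papers credit for improved exploration.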
With recent advancements in information and communication technology, a huge volume of corporate and sensitive user data is shared consistently across networks, making it vulnerable to attacks that may put several factors at risk: data availability, confidentiality, and integrity. Intrusion Detection Systems (IDS) have mostly been exploited in various networks to help promptly recognize intrusions. Nowadays, blockchain (BC) technology has received much interest as a means to share data without needing a trusted third party. Therefore, this study designs a new Blockchain Assisted Optimal Machine Learning based Cyberattack Detection and Classification (BAOML-CADC) technique. In the BAOML-CADC technique, the major focus lies in identifying cyberattacks. To do so, the presented BAOML-CADC technique applies a thermal equilibrium algorithm-based feature selection (TEA-FS) method for the optimal choice of features. The BAOML-CADC technique uses an extreme learning machine (ELM) model for cyberattack recognition. In addition, a BC-based integrity verification technique is developed to defend against the misrouting attack, showing the innovation of the work. The experimental validation of the BAOML-CADC algorithm is tested on a benchmark cyberattack dataset. The obtained values implied the improved performance of the BAOML-CADC algorithm over other techniques.
The recognition of Arabic characters is a crucial task in the computer vision and Natural Language Processing fields. Some major complications in recognizing handwritten texts include distortion and pattern variabilities. So, the feature extraction process is a significant task in NLP models. If the features are automatically selected, it might result in the unavailability of adequate data for accurately forecasting the character classes. But many features usually create difficulties due to high dimensionality issues. Against this background, the current study develops a Sailfish Optimizer with Deep Transfer Learning-Enabled Arabic Handwriting Character Recognition (SFODTL-AHCR) model. The projected SFODTL-AHCR model primarily focuses on identifying the handwritten Arabic characters in the input image. To attain this objective, the proposed SFODTL-AHCR model pre-processes the input image by following the Histogram Equalization approach. The Inception with ResNet-v2 model examines the pre-processed image to produce the feature vectors. The Deep Wavelet Neural Network (DWNN) model is utilized to recognize the handwritten Arabic characters. At last, the SFO algorithm is utilized for fine-tuning the parameters involved in the DWNN model to attain better performance. The performance of the proposed SFODTL-AHCR model was validated using a series of images. Extensive comparative analyses were conducted. The proposed method achieved a maximum accuracy of 99.73%. The outcomes inferred the supremacy of the proposed SFODTL-AHCR model over other approaches.
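The Histogram Equalization pre-processing step named above is a standard, fully specified transform; a compact numpy version for 8-bit grayscale images is sketched below (the classic CDF-stretching formulation, which may differ in detail from the paper's implementation).

```python
import numpy as np

def histogram_equalization(img):
    """Histogram equalization for an 8-bit grayscale image (2D uint8 array)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]        # first non-zero CDF value
    total = img.size
    # classic mapping: stretch the CDF so used intensities span 0..255
    lut = np.round((cdf - cdf_min) / max(total - cdf_min, 1) * 255)
    lut = lut.clip(0, 255).astype(np.uint8)
    return lut[img]
```

Spreading the intensity histogram this way increases the contrast between ink strokes and background, which helps the downstream feature extractor.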
Applied linguistics is one of the fields in the linguistics domain and deals with the practical applications of language studies, such as speech processing, language teaching, translation, and speech therapy. The ever-growing Online Social Networks (OSNs) face a vital issue, i.e., hate speech. Amongst the OSN-oriented security problems, the usage of offensive language is the most important threat prevalently found across the Internet. Based on the group targeted, offensive language varies in terms of adult content, hate speech, racism, cyberbullying, abuse, trolling, and profanity. Amongst these, hate speech is the most intimidating form of offensive language, in which the targeted groups or individuals are intimidated with the intent of creating harm, social chaos, or violence. Machine Learning (ML) techniques have recently been applied to recognize hate speech-related content. The current research article introduces a Grasshopper Optimization with an Attentive Recurrent Network for Offensive Speech Detection (GOARN-OSD) model for social media. The GOARN-OSD technique integrates the concepts of DL and metaheuristic algorithms for detecting hate speech. In the presented GOARN-OSD technique, the primary stage involves the data pre-processing and word embedding processes. Then, this study utilizes the Attentive Recurrent Network (ARN) model for hate speech recognition and classification. At last, the Grasshopper Optimization Algorithm (GOA) is exploited as a hyperparameter optimizer to boost the performance of the hate speech recognition process. To depict the promising performance of the proposed GOARN-OSD method, a widespread experimental analysis was conducted. The comparison study outcomes demonstrate the superior performance of the proposed GOARN-OSD model over other state-of-the-art approaches.
The developments of multi-core systems (MCS) have considerably improved the existing technologies in the field of computer architecture. The MCS comprises several processors that are heterogeneous in resource capacities, working environments, topologies, and so on. The existing multi-core technology unlocks additional research opportunities for energy minimization through the use of effective task scheduling. At the same time, the task scheduling process is yet to be fully explored in multi-core systems. This paper presents a new hybrid genetic algorithm (GA) with a krill herd (KH) based energy-efficient scheduling technique for multi-core systems (GAKH-SMCS). The goal of the GAKH-SMCS technique is to derive task schedules that achieve faster completion time and minimum energy dissipation. The GAKH-SMCS model involves a multi-objective fitness function using four parameters, namely makespan, processor utilization, speedup, and energy consumption, to schedule tasks proficiently. The performance of the GAKH-SMCS model has been validated against two datasets, namely a random dataset and a benchmark dataset. The experimental outcome ensured the effectiveness of the GAKH-SMCS model in terms of makespan, processor utilization, speedup, and energy consumption. The overall simulation results depicted that the presented GAKH-SMCS model achieves energy efficiency through an optimal task scheduling process in MCS.
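A multi-objective fitness of the kind described can be sketched as a weighted sum over the four parameters. The abstract does not give the actual aggregation, so the reciprocal treatment of makespan and energy (so that every term increases with schedule quality) and the equal weights below are illustrative assumptions only.

```python
def evaluate_schedule(task_times, assignment, n_cores):
    """Derive makespan and average core utilization from a task-to-core assignment."""
    loads = [0.0] * n_cores
    for t, core in zip(task_times, assignment):
        loads[core] += t
    makespan = max(loads)
    utilization = sum(loads) / (n_cores * makespan)   # 1.0 = perfectly balanced
    return makespan, utilization

def schedule_fitness(makespan, utilization, speedup, energy,
                     w=(0.25, 0.25, 0.25, 0.25)):
    """Weighted-sum fitness (higher is better): reward utilization and speedup,
    penalize makespan and energy via reciprocals. Weights are illustrative."""
    return (w[0] * (1.0 / makespan) + w[1] * utilization
            + w[2] * speedup + w[3] * (1.0 / energy))
```

A GA/KH hybrid would evolve the `assignment` vector and rank candidates by this fitness; in practice the four terms would also be normalized to comparable scales before weighting.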
Biomedical data classification has become a hot research topic in recent years, thanks to the latest technological advancements made in healthcare. Biomedical data is usually examined by physicians for the decision making process in patient treatment. Since manual diagnosis is a tedious and time-consuming task, numerous automated models using Artificial Intelligence (AI) techniques have been presented so far. With this motivation, the current research work presents a novel Biomedical Data Classification using Cat and Mouse Based Optimizer with AI (BDC-CMBOAI) technique. The aim of the proposed BDC-CMBOAI technique is to determine the occurrence of diseases using biomedical data. Besides, the proposed BDC-CMBOAI technique involves the design of a Cat and Mouse Optimizer-based Feature Selection (CMBO-FS) technique to derive a useful subset of features. In addition, a Ridge Regression (RR) model is also utilized as a classifier to identify the existence of disease. The novelty of the current work lies in designing the CMBO-FS model for data classification. Moreover, the CMBO-FS technique gets rid of unwanted features and boosts the classification accuracy. The results of the experimental analysis accomplished by the BDC-CMBOAI technique on a benchmark medical dataset established the supremacy of the proposed technique under different evaluation measures.
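The Ridge Regression component has a simple closed form. The sketch below shows the standard solution w = (XᵀX + αI)⁻¹Xᵀy on a toy regression target; using it as a classifier (as the abstract describes) would additionally threshold or argmax the predictions.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression weights: w = (X'X + alpha*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def ridge_predict(X, w):
    """Linear predictions for the fitted weights."""
    return X @ w
```

The regularization term `alpha` shrinks the weights, which is what keeps the model stable on the high-dimensional, small-sample data typical of biomedical studies.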
Presently, smart cities play a vital role in enhancing the quality of human life in several ways, such as online shopping, e-learning, e-healthcare, etc. Despite the benefits of advanced technologies, issues also arise from the transformation of the physical world into the digital world, particularly in online social networks (OSN). Cyberbullying (CB) is a major problem in OSN which needs to be addressed by the use of automated natural language processing (NLP) and machine learning (ML) approaches. This article devises a novel search and rescue optimization with machine learning enabled cybersecurity model for online social networks, named the SRO-MLCOSN model. The presented SRO-MLCOSN model focuses on the identification of CB that occurs on social networking sites. The SRO-MLCOSN model initially employs the GloVe technique for the word embedding process. Besides, a multiclass-weighted kernel extreme learning machine (M-WKELM) model is utilized for effectual identification and categorization of CB. Finally, the Search and Rescue Optimization (SRO) algorithm is exploited to fine-tune the parameters involved in the M-WKELM model. The experimental validation of the SRO-MLCOSN model on the benchmark dataset reported significant outcomes over the other approaches, with precision, recall, and F1-score of 96.24%, 98.71%, and 97.46%, respectively.
Funding: This work was supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2023/R/1444). The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4340237DSR65.
Abstract: The Combined Economic and Emission Dispatch (CEED) task forms a multi-objective optimization problem in which emission and fuel costs are minimized simultaneously. The disadvantage of conventional methods is their inability to avoid falling into local optima, particularly when handling nonlinear and complex systems. Metaheuristics have recently received considerable attention due to their enhanced capacity to escape local optima while treating optimization problems as a black box. Therefore, this paper focuses on the design of an improved sand cat optimization algorithm based CEED (ISCOA-CEED) technique. The ISCOA-CEED technique mainly concentrates on reducing the fuel costs and emissions of generation units. Moreover, the presented ISCOA-CEED technique transforms the equality constraints of the CEED problem into inequality constraints. Besides, the improved sand cat optimization algorithm (ISCOA) is derived from the integration of the traditional SCOA with the Levy Flight (LF) concept. Finally, the ISCOA-CEED technique is applied to solve CEED problems with 6 and 11 generators. The experimental validation ensured the enhanced performance of the presented ISCOA-CEED technique over other recent approaches.
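As a rough illustration of the Levy Flight (LF) component mentioned in the abstract above, the sketch below draws Levy-distributed steps via Mantegna's algorithm, a common way LF is grafted onto metaheuristics. The `levy_update` rule and the `step_scale` value are hypothetical placeholders for exposition, not the paper's actual ISCOA update.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw one Levy-flight step vector via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # numerator sample, heavier-tailed
    v = rng.normal(0.0, 1.0, dim)     # denominator sample
    return u / np.abs(v) ** (1 / beta)

def levy_update(position, best, step_scale=0.01, rng=None):
    """Hypothetical position update: a Levy jump biased toward the best solution."""
    step = levy_step(position.size, rng=rng)
    return position + step_scale * step * (position - best)
```

Occasional very large steps from the heavy-tailed distribution are what help a swarm escape local optima.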
Funding: This work was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University, through the Research Groups Program Grant No. (RGP-1443-0051).
Abstract: With the advent of the Internet of Things (IoT), several devices like sensors can nowadays interact and easily share information. But the IoT model is prone to security concerns, as several attackers try to hit the network and make it vulnerable. In such scenarios, security is the most prominent concern. Different models were intended to address these security problems; still, several emergent variants of botnet attacks, like Bashlite, Mirai, and Persirai, exploit security breaches. Malware classification and detection in the IoT model is still a problem, as the adversary reliably generates new variants of IoT malware and actively searches for ways to compromise victim devices. This article develops a Sine Cosine Algorithm with Deep Learning based Ransomware Detection and Classification (SCADL-RWDC) method in an IoT environment. In the presented SCADL-RWDC technique, the major intention lies in recognizing and classifying ransomware attacks on the IoT platform. The SCADL-RWDC technique uses the SCA feature selection (SCA-FS) model to improve the detection rate. Besides, the SCADL-RWDC technique exploits the hybrid grey wolf optimizer (HGWO) with a gated recurrent unit (GRU) model for ransomware classification. A widespread experimental analysis is performed to exhibit the enhanced ransomware detection outcomes of the SCADL-RWDC technique. The comparison study reported the enhancement of the SCADL-RWDC technique over other models.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4340237DSR39.
Abstract: Handwritten character recognition has become one of the most challenging research matters. Many studies have been presented for recognizing letters of various languages, yet the availability of Arabic handwritten character databases remains confined. Almost a quarter of a billion people worldwide write and speak Arabic, and many historical books and files of vital interest to Arab nations are written in Arabic. Recently, Arabic handwritten character recognition (AHCR) has grabbed attention and become a difficult topic for pattern recognition and computer vision (CV). Therefore, this study develops a fireworks optimization with deep learning-based AHCR (FWODL-AHCR) technique. The major intention of the FWODL-AHCR technique is to recognize the distinct handwritten characters of the Arabic language. It initially pre-processes the handwritten images to improve their quality. Then, the RetinaNet-based deep convolutional neural network is applied as a feature extractor to produce feature vectors. Next, the deep echo state network (DESN) model is utilized to classify handwritten characters. Finally, the FWO algorithm is exploited as a hyperparameter tuning strategy to boost recognition performance. A series of simulations were performed to exhibit the enhanced performance of the FWODL-AHCR technique. The comparison study portrayed the supremacy of the FWODL-AHCR technique over other approaches, with accuracies of 99.91% and 98.94% on the Hijja and AHCD datasets, respectively.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number (158/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R77), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4310373DSR52).
Abstract: The human motion data collected using wearables like smartwatches can be used for activity recognition and emergency event detection. This is especially applicable in the case of elderly or disabled people who live self-reliantly in their homes. These sensors produce a huge volume of physical activity data that necessitates real-time recognition, especially during emergencies. Falling is one of the most serious problems confronted by older people and people with movement disabilities. Numerous earlier techniques were introduced, and a few used webcams to monitor the activity of elderly or disabled people. But the costs incurred upon installation and operation are high, and the technology is relevant only for indoor environments. Currently, commercial wearables use a wireless emergency transmitter that produces a number of false alarms and restricts a user's movements. Against this background, the current study develops an Improved Whale Optimization with Deep Learning-Enabled Fall Detection for Disabled People (IWODL-FDDP) model. The presented IWODL-FDDP model aims to identify fall events to assist disabled people. The presented IWODL-FDDP model applies an image filtering approach to pre-process the image. Besides, the EfficientNet-B0 model is utilized to generate valuable feature vector sets. Next, the Bidirectional Long Short Term Memory (BiLSTM) model is used for the recognition and classification of fall events. Finally, the IWO method is leveraged to fine-tune the hyperparameters related to the BiLSTM method, which shows the novelty of the work. The experimental analysis outcomes established the superior performance of the proposed IWODL-FDDP method, with a maximum accuracy of 97.02%.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/142/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R161), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR06).
Abstract: Recently, Internet of Things (IoT) devices have produced massive quantities of data from distinct sources that get transmitted over public networks. Cybersecurity becomes a challenging issue in the IoT environment, where the existence of cyber threats needs to be resolved. The development of automated tools for cyber threat detection and classification using machine learning (ML) and artificial intelligence (AI) becomes essential to accomplish security in the IoT environment and to minimize security issues related to IoT gadgets effectively. Therefore, this article introduces a new Mayfly optimization (MFO) with regularized extreme learning machine (RELM) model, named MFO-RELM, for cybersecurity threat detection and classification in the IoT environment. The presented MFO-RELM technique accomplishes the effectual identification of cybersecurity threats that exist in the IoT environment. For accomplishing this, the MFO-RELM model pre-processes the actual IoT data into a meaningful format. In addition, the RELM model receives the pre-processed data and carries out the classification process. In order to boost the performance of the RELM model, the MFO algorithm is employed for it. The performance of the MFO-RELM model is validated using standard datasets, and the results highlighted the better outcomes of the MFO-RELM model under distinct aspects.
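The regularized extreme learning machine (RELM) named above has a compact closed-form core: hidden-layer weights are random and fixed, and only the output weights are solved by a ridge-regularized least squares. The sketch below shows that core under illustrative choices (sigmoid activation, `n_hidden`, `lam`); it is not the paper's exact implementation.

```python
import numpy as np

def relm_fit(X, Y, n_hidden=40, lam=1e-3, rng=None):
    """Train a regularized ELM: random hidden layer, ridge solve for output weights."""
    rng = rng or np.random.default_rng(0)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    # Ridge solution: beta = (H^T H + lam I)^(-1) H^T Y
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def relm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because training reduces to one linear solve, RELM is fast enough that a metaheuristic such as MFO can afford to retrain it many times while searching hyperparameters.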
Abstract: Energy is an essential element for any civilized country's social and economic development, but the use of fossil fuels and non-renewable energy forms has many negative impacts on the environment and the ecosystem. The Republic of Yemen has very good potential for renewable energy use; unfortunately, few studies exist on renewable wind energy in Yemen. Given the lack of a similar analysis for this coastal city, this research newly investigates wind energy potential near the Almukalla area by analyzing wind characteristics. Evaluation, model identification, determination of available energy density, computation of the capacity factors for several wind turbines, and calculation of wind energy were carried out at three heights of 15, 30, and 50 meters. Average wind speeds were obtained only for the currently available data of five recent years, 2005-2009. This study involves a preliminary assessment of Almukalla's wind energy potential to provide a primary base and useful insights for wind engineers and experts, with the aim of developing wind energy and an efficient wind approach. The Weibull distribution shows a very good approximation for estimating the intensity of Yemen's wind energy. Depending on both the Weibull model and the results of the annual wind speed data analysis for the study site in Mukalla, the capacity factor for many turbines was also calculated, and the most suitable turbine was selected. According to the International Wind Energy Rating criteria, Almukalla falls under Category 7, rated "Superb" most of the year.
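The Weibull-based quantities named above (energy density and capacity factor) can be sketched as follows. The air density, the idealized cubic power curve, and the cut-in/rated/cut-out speeds are illustrative assumptions, not the study's actual turbine data.

```python
import numpy as np
from math import gamma

def weibull_pdf(v, k, c):
    """Weibull wind-speed probability density with shape k and scale c (m/s)."""
    v = np.asarray(v, dtype=float)
    return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

def mean_power_density(k, c, rho=1.225):
    """Mean wind power density in W/m^2: 0.5 * rho * E[v^3] = 0.5 * rho * c^3 * Gamma(1+3/k)."""
    return 0.5 * rho * c ** 3 * gamma(1 + 3 / k)

def capacity_factor(k, c, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Capacity factor for an idealized cubic power curve, via trapezoidal integration."""
    v = np.linspace(1e-6, 40.0, 4001)
    p = np.where(v < v_in, 0.0,
        np.where(v < v_rated, (v ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3),
        np.where(v <= v_out, 1.0, 0.0)))
    y = p * weibull_pdf(v, k, c)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(v)))
```

Computing the capacity factor per candidate turbine against the fitted (k, c) pair is one way the "best suitable turbine" comparison in the abstract can be made concrete.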
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 1/322/42); Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R77), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR02).
Abstract: Sign language recognition can be considered an effective solution for disabled people to communicate with others. It helps them convey the intended information using sign language without any challenges. Recent advancements in computer vision and image processing techniques can be leveraged to detect and classify the signs used by disabled people in an effective manner. Metaheuristic optimization algorithms can be designed to fine-tune the hyperparameters used in Deep Learning (DL) models, as the latter considerably impact the classification results. With this motivation, the current study designs the Optimal Deep Transfer Learning Driven Sign Language Recognition and Classification (ODTL-SLRC) model for disabled people. The aim of the proposed ODTL-SLRC technique is to recognize and classify the sign language used by disabled people. The proposed ODTL-SLRC technique derives the EfficientNet model to generate a collection of useful feature vectors. In addition, the hyperparameters involved in the EfficientNet model are fine-tuned with the help of the HGSO algorithm. Moreover, the Bidirectional Long Short Term Memory (BiLSTM) technique is employed for sign language classification. The proposed ODTL-SLRC technique was experimentally validated using a benchmark dataset, and the results were inspected under several measures. The comparative analysis results established the superior performance of the proposed ODTL-SLRC technique over recent approaches in terms of efficiency.
Funding: The Deanship of Scientific Research at King Khalid University funded this work under Grant Number (RGP 1/80/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R191), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Melanoma remains a serious illness and a common form of skin cancer. Since earlier detection of melanoma reduces the mortality rate, it is essential to design a reliable and automated disease diagnosis model using dermoscopic images. Recent advances in deep learning (DL) models are found useful in examining medical images and making proper decisions. In this study, an automated deep learning based melanoma detection and classification (ADL-MDC) model is presented. The goal of the ADL-MDC technique is to examine dermoscopic images to determine the existence of melanoma. The ADL-MDC technique performs contrast enhancement and data augmentation at the initial stage. Besides, the k-means clustering technique is applied for the image segmentation process. In addition, an Adagrad optimizer based Capsule Network (CapsNet) model is derived for an effective feature extraction process. Lastly, the crow search optimization (CSO) algorithm with a sparse autoencoder (SAE) model is utilized for the melanoma classification process. The exploitation of the Adagrad and CSO algorithms helps to accomplish improved performance. A wide range of simulation analyses is carried out on benchmark datasets, and the results are inspected under several aspects. The simulation results reported the enhanced performance of the ADL-MDC technique over recent approaches.
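The k-means segmentation step mentioned above clusters pixel colors so that lesion and background regions separate. A minimal, dependency-free sketch is shown below; the cluster count, iteration budget, and the deterministic farthest-point initialization are illustrative choices, not the paper's configuration.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=10):
    """Cluster pixel colors with plain k-means; returns a per-pixel label map."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    # Deterministic farthest-point initialization (an illustrative choice)
    centers = [pixels[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = pixels[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels.reshape(image.shape[:-1])
```

The resulting label map can then mask the dermoscopic image before feature extraction.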
Funding: The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-030.
Abstract: Mobile communication and Internet of Things (IoT) technologies have recently been established to collect data from human beings and the environment. The data collected can be leveraged to provide intelligent services through different applications. Monitoring disabled people from remote locations is an extreme challenge, because day-to-day events like falls heavily result in accidents. For a person with disabilities, a fall event is an important cause of mortality and post-traumatic complications. Therefore, detecting the fall events of disabled persons in smart homes at early stages is essential to provide the necessary support and increase their survival rate. The current study introduces a Whale Optimization Algorithm Deep Transfer Learning-Driven Automated Fall Detection (WOADTL-AFD) technique to improve the quality of life for persons with disabilities. The primary aim of the presented WOADTL-AFD technique is to identify and classify fall events to help disabled individuals. To attain this, the proposed WOADTL-AFD model initially uses a modified SqueezeNet feature extractor, which proficiently extracts the feature vectors. In addition, the WOADTL-AFD technique classifies the fall events using an extreme Gradient Boosting (XGBoost) classifier. In the presented WOADTL-AFD technique, the WOA approach is used to fine-tune the hyperparameters involved in the modified SqueezeNet model. The proposed WOADTL-AFD technique was experimentally validated using benchmark datasets, and the results confirmed its superior performance compared to other recent approaches.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R303), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; Research Supporting Project number (RSPD2023R787), King Saud University, Riyadh, Saudi Arabia. This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2023/R/1444).
Abstract: Soil classification is one of the emerging topics and major concerns in many countries. As the population increases at a rapid pace, the demand for food also grows dynamically. Common approaches used by agriculturalists are inadequate to satisfy the rising demand and have thus hindered soil cultivation. This creates a demand for computer-aided soil classification methods to support agriculturalists. This study introduces a Gradient-Based Optimizer and Deep Learning (DL) for Automated Soil Classification (GBODL-ASC) technique. The presented GBODL-ASC technique identifies various kinds of soil using DL and computer vision approaches. In the presented GBODL-ASC technique, three major processes are involved. At the initial stage, the presented GBODL-ASC technique applies the GBO algorithm with the EfficientNet prototype to generate feature vectors. For soil categorization, the GBODL-ASC procedure uses an arithmetic optimization algorithm (AOA) with a Back Propagation Neural Network (BPNN) model. The design of the GBO and AOA algorithms assists in the proper selection of parameter values for the EfficientNet and BPNN models, respectively. To demonstrate the significant soil classification outcomes of the GBODL-ASC methodology, a wide-ranging simulation analysis is performed on a soil dataset comprising 156 images and five classes. The simulation values show the betterment of the GBODL-ASC model over other models, with a maximum precision of 95.64%.
Funding: The Deanship of Scientific Research at King Khalid University funded this work under Grant Number (RGP 2/209/42); Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R136), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR27).
Abstract: Proper waste management models using recent technologies like computer vision, machine learning (ML), and deep learning (DL) are needed to effectively handle the massive quantity of increasing waste. Therefore, waste classification becomes a crucial topic, helping to categorize waste as hazardous or non-hazardous and thereby assisting in the decision making of the waste management process. This study concentrates on the design of a hazardous waste detection and classification using ensemble learning (HWDC-EL) technique to reduce toxicity and improve human health. The goal of the HWDC-EL technique is to detect multiple classes of waste, particularly hazardous and non-hazardous waste. The HWDC-EL technique involves an ensemble of three feature extractors using a model averaging technique, namely discrete local binary patterns (DLBP), EfficientNet, and DenseNet121. In addition, flower pollination algorithm (FPA) based hyperparameter optimizers are used to optimally adjust the parameters involved in the EfficientNet and DenseNet121 models. Moreover, a weighted voting-based ensemble classifier is derived using three machine learning algorithms, namely support vector machine (SVM), extreme learning machine (ELM), and gradient boosting tree (GBT). The performance of the HWDC-EL technique is tested using a benchmark garbage dataset, and it obtains a maximum accuracy of 98.85%.
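The weighted voting-based ensemble named above can be sketched as a soft-voting combiner over the three base learners' class probabilities. How the per-model weights are obtained (e.g., from validation accuracy) is an assumption here, not stated in the abstract.

```python
import numpy as np

def weighted_vote(prob_list, weights):
    """Fuse per-model class-probability matrices with normalized weights,
    then pick the class with the highest combined score per sample."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                # normalize so weights sum to 1
    stacked = np.stack(prob_list)                  # (models, samples, classes)
    combined = np.tensordot(w, stacked, axes=1)    # (samples, classes)
    return combined.argmax(axis=1)
```

With hypothetical probability outputs from the SVM, ELM, and GBT models, `weighted_vote([p_svm, p_elm, p_gbt], [0.5, 0.25, 0.25])` returns one class index per sample.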
Funding: The Deanship of Scientific Research at King Khalid University funded this work through the Large Groups Project under grant number (45/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R140), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4310373DSR21.
Abstract: Cybersecurity-related solutions have become familiar, since they ensure security and privacy against cyberattacks in this digital era. Malicious Uniform Resource Locators (URLs) can be embedded in email or Twitter and used to lure vulnerable internet users into implementing malicious data on their systems. This may result in compromised security of the systems, scams, and other such cyberattacks. These attacks hijack huge quantities of the available data, incurring heavy financial loss. At the same time, Machine Learning (ML) and Deep Learning (DL) models have paved the way for designing models that can detect malicious URLs accurately and classify them. With this motivation, the current article develops an Artificial Fish Swarm Algorithm (AFSA) with Deep Learning Enabled Malicious URL Detection and Classification (AFSADL-MURLC) model. The presented AFSADL-MURLC model intends to differentiate malicious URLs from genuine URLs. To attain this, the AFSADL-MURLC model initially carries out data preprocessing and makes use of a GloVe-based word embedding technique. In addition, the created vector model is then passed on to a Gated Recurrent Unit (GRU) classifier to recognize the malicious URLs. Finally, AFSA is applied to the proposed model to enhance the efficiency of the GRU model. The proposed AFSADL-MURLC technique was experimentally validated using a benchmark dataset sourced from the Kaggle repository. The simulation results confirmed the supremacy of the proposed AFSADL-MURLC model over recent approaches under distinct measures.
Funding: This work was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University, through the Research Groups Program Grant No. (RGP-1443-0048).
Abstract: As the Internet of Things (IoT) continues to develop, a huge amount of data has been created. An IoT platform is rather sensitive to security challenges, as individual data can be leaked or sensor data could be used to cause accidents. As typical intrusion detection system (IDS) studies are frequently designed to work well on fixed databases, it is unknown whether they would work well in changing network environments. Machine learning (ML) techniques are shown to have a higher capacity for helping mitigate attacks on IoT devices and other edge systems with reasonable accuracy. This article introduces a new Bird Swarm Algorithm with Wavelet Neural Network for Intrusion Detection (BSAWNN-ID) in the IoT platform. The main intention of the BSAWNN-ID algorithm lies in detecting and classifying intrusions in the IoT platform. To attain this, the BSAWNN-ID technique primarily designs a feature subset selection using the coyote optimization algorithm (FSS-COA). Next, to detect intrusions, the WNN model is utilized. At last, the WNN parameters are optimally modified by the use of BSA. A widespread experiment is performed to depict the better performance of the BSAWNN-ID technique. The resultant values indicated the better performance of the BSAWNN-ID technique over other models, with an accuracy of 99.64% on the UNSW-NB15 dataset.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number (71/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R203), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4340237DSR61. This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2023/R/1444).
Abstract: Recently, renewable energy (RE) has become popular due to its benefits, such as being inexpensive, low-carbon, ecologically friendly, steady, and reliable. RE sources are gradually combined with non-renewable energy (NRE) sources in electric grids to satisfy energy demands. Since energy utilization is highly related to national energy policy, artificial intelligence (AI) and deep learning (DL) based models can be employed for energy prediction on RE and NRE power resources, and predicting the energy consumption of RE and NRE sources with effective models becomes necessary. With this motivation, this study presents a new multimodal fusion-based predictive tool for energy consumption prediction (MDLFM-ECP) of RE and NRE power sources. Actual data may influence the prediction performance of the results in prediction approaches. The proposed MDLFM-ECP technique involves pre-processing, fusion-based prediction, and hyperparameter optimization. In addition, the MDLFM-ECP technique involves the fusion of four deep learning (DL) models, namely long short-term memory (LSTM), bidirectional LSTM (Bi-LSTM), deep belief network (DBN), and gated recurrent unit (GRU). Moreover, the chaotic cat swarm optimization (CCSO) algorithm is applied to tune the hyperparameters of the DL models; the design of the CCSO algorithm for optimal hyperparameter tuning of the DL models shows the novelty of the work. A series of simulations took place to validate the superior performance of the proposed method, and the simulation outcome emphasized the improved results of the MDLFM-ECP technique over recent approaches, with a minimum overall mean absolute percentage error of 3.58%.
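The evaluation metric reported above, mean absolute percentage error (MAPE), is computed as follows; the sketch assumes strictly non-zero targets, which holds for energy-consumption series.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (assumes no zero targets)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * float(np.mean(np.abs((y_true - y_pred) / y_true)))
```

For example, `mape([100, 200], [110, 190])` gives 7.5 (percent); a reported MAPE of 3.58% means predictions deviate from actual consumption by about 3.6% on average.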
Funding: This work was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University, through the Research Groups Program Grant No. (RGP-1443-0051).
Abstract: With recent advancements in information and communication technology, a huge volume of corporate and sensitive user data is shared consistently across the network, making it vulnerable to attacks that may put several factors at risk: data availability, confidentiality, and integrity. Intrusion Detection Systems (IDS) are mostly exploited in various networks to help promptly recognize intrusions. Nowadays, blockchain (BC) technology has received much interest as a means to share data without needing a trusted third party. Therefore, this study designs a new Blockchain Assisted Optimal Machine Learning based Cyberattack Detection and Classification (BAOML-CADC) technique. In the BAOML-CADC technique, the major focus lies in identifying cyberattacks. To do so, the presented BAOML-CADC technique applies a thermal equilibrium algorithm-based feature selection (TEA-FS) method for the optimal choice of features. The BAOML-CADC technique uses an extreme learning machine (ELM) model for cyberattack recognition. In addition, a BC-based integrity verification technique is developed to defend against misrouting attacks, showing the innovation of the work. The experimental validation of the BAOML-CADC algorithm is tested on a benchmark cyberattack dataset. The obtained values implied the improved performance of the BAOML-CADC algorithm over other techniques.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number (168/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4340237DSR32); the Deanship of Scientific Research at Shaqra University for supporting this work.
Abstract: The recognition of Arabic characters is a crucial task in the computer vision and Natural Language Processing fields. Some major complications in recognizing handwritten texts include distortion and pattern variabilities, so the feature extraction process is a significant task in NLP models. If the features are automatically selected, it might result in the unavailability of adequate data for accurately forecasting the character classes, while many features usually create difficulties due to high dimensionality issues. Against this background, the current study develops a Sailfish Optimizer with Deep Transfer Learning-Enabled Arabic Handwriting Character Recognition (SFODTL-AHCR) model. The projected SFODTL-AHCR model primarily focuses on identifying handwritten Arabic characters in the input image. To attain this objective, the proposed SFODTL-AHCR model pre-processes the input image by following the histogram equalization approach. The Inception with ResNet-v2 model examines the pre-processed image to produce feature vectors. The Deep Wavelet Neural Network (DWNN) model is utilized to recognize the handwritten Arabic characters. At last, the SFO algorithm is utilized for fine-tuning the parameters involved in the DWNN model to attain better performance. The performance of the proposed SFODTL-AHCR model was validated using a series of images, and extensive comparative analyses were conducted. The proposed method achieved a maximum accuracy of 99.73%. The outcomes inferred the supremacy of the proposed SFODTL-AHCR model over other approaches.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R281), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4331004DSR031); supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2023/R/1444).
Abstract: Applied linguistics is one of the fields in the linguistics domain and deals with the practical applications of language studies, such as speech processing, language teaching, translation, and speech therapy. The ever-growing Online Social Networks (OSNs) face a vital issue, i.e., hate speech. Amongst the OSN-oriented security problems, the usage of offensive language is the most important threat prevalently found across the Internet. Based on the group targeted, offensive language varies in terms of adult content, hate speech, racism, cyberbullying, abuse, trolling, and profanity. Amongst these, hate speech is the most intimidating form of offensive language, in which the targeted groups or individuals are intimidated with the intent of creating harm, social chaos, or violence. Machine Learning (ML) techniques have recently been applied to recognize hate-speech-related content. The current research article introduces a Grasshopper Optimization with an Attentive Recurrent Network for Offensive Speech Detection (GOARN-OSD) model for social media. The GOARN-OSD technique integrates the concepts of DL and metaheuristic algorithms for detecting hate speech. In the presented GOARN-OSD technique, the primary stage involves the data pre-processing and word embedding processes. Then, this study utilizes the Attentive Recurrent Network (ARN) model for hate speech recognition and classification. At last, the Grasshopper Optimization Algorithm (GOA) is exploited as a hyperparameter optimizer to boost the performance of the hate speech recognition process. To depict the promising performance of the proposed GOARN-OSD method, a widespread experimental analysis was conducted. The comparison study outcomes demonstrate the superior performance of the proposed GOARN-OSD model over other state-of-the-art approaches.
Funding: Taif University Researchers Supporting Program (Project Number: TURSP-2020/195), Taif University, Saudi Arabia; Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R203), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The development of multi-core systems (MCS) has considerably improved existing technologies in the field of computer architecture. An MCS comprises several processors that are heterogeneous in resource capacities, working environments, topologies, and so on. Existing multi-core technology unlocks additional research opportunities for energy minimization through effective task scheduling. At the same time, the task scheduling process is yet to be fully explored in multi-core systems. This paper presents a new hybrid genetic algorithm (GA) with krill herd (KH) based energy-efficient scheduling technique for multi-core systems (GAKH-SMCS). The goal of the GAKH-SMCS technique is to derive task schedules that achieve faster completion time and minimum energy dissipation. The GAKH-SMCS model involves a multi-objective fitness function using four parameters, namely makespan, processor utilization, speedup, and energy consumption, to schedule tasks proficiently. The performance of the GAKH-SMCS model has been validated on two datasets, a random dataset and a benchmark dataset. The experimental outcome ensured the effectiveness of the GAKH-SMCS model in terms of makespan, processor utilization, speedup, and energy consumption. The overall simulation results showed that the presented GAKH-SMCS model achieves energy efficiency through an optimal task scheduling process in MCS.
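The four objectives named above can be combined into one scalar fitness for a GA/KH search. The sketch below is an assumption about how such a weighted-sum fitness might look (the weights, the simple active/idle power model, and the normalization are illustrative, not taken from the paper):

```python
def schedule_metrics(task_times, assignment, n_cores,
                     power_active=10.0, power_idle=1.0):
    # per-core busy time under the given task-to-core assignment
    busy = [0.0] * n_cores
    for t, c in zip(task_times, assignment):
        busy[c] += t
    makespan = max(busy)
    utilization = sum(busy) / (n_cores * makespan)
    speedup = sum(task_times) / makespan          # serial time / parallel time
    # each core draws active power while busy, idle power until the makespan
    energy = sum(power_active * b + power_idle * (makespan - b) for b in busy)
    return makespan, utilization, speedup, energy

def fitness(task_times, assignment, n_cores, w=(0.4, 0.2, 0.2, 0.2)):
    mk, util, sp, en = schedule_metrics(task_times, assignment, n_cores)
    # minimize makespan and energy; maximize utilization and speedup
    return w[0] * mk + w[3] * en / 100.0 - w[1] * util - w[2] * sp

# usage: a balanced assignment should score better (lower) than a skewed one
balanced = fitness([4.0, 4.0, 4.0, 4.0], [0, 1, 0, 1], 2)
skewed = fitness([4.0, 4.0, 4.0, 4.0], [0, 0, 0, 1], 2)
```

A GA or KH optimizer would then evolve the `assignment` vector to minimize this fitness.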
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R203), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; Deanship of Scientific Research at Umm Al-Qura University, Grant Code: 22UQU4340237DSR03.
Abstract: Biomedical data classification has become a hot research topic in recent years, thanks to the latest technological advancements in healthcare. Biomedical data is usually examined by physicians during the decision-making process for patient treatment. Since manual diagnosis is a tedious and time-consuming task, numerous automated models using Artificial Intelligence (AI) techniques have been presented so far. With this motivation, the current research work presents a novel Biomedical Data Classification using Cat and Mouse Based Optimizer with AI (BDC-CMBOAI) technique. The aim of the proposed BDC-CMBOAI technique is to determine the occurrence of diseases using biomedical data. Besides, the proposed BDC-CMBOAI technique involves the design of a Cat and Mouse Based Optimizer feature selection (CMBO-FS) technique to derive a useful subset of features. In addition, a Ridge Regression (RR) model is utilized as a classifier to identify the existence of disease. The novelty of the current work lies in the design of the CMBO-FS model for data classification; the CMBO-FS technique removes unwanted features and boosts classification accuracy. The results of the experimental analysis accomplished by the BDC-CMBOAI technique on a benchmark medical dataset established the supremacy of the proposed technique under different evaluation measures.
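Using ridge regression as a binary classifier, as the abstract describes, means fitting w = (XᵀX + λI)⁻¹Xᵀy on ±1 labels and classifying by the sign of the score. A minimal pure-Python sketch follows (the toy data, λ value, and sign-threshold decision rule are illustrative assumptions):

```python
def ridge_fit(X, y, lam=1.0):
    # w = (X^T X + lam I)^-1 X^T y, solved by Gaussian elimination
    d = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) + (lam if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(d)]
    for col in range(d):                      # forward elimination with pivoting
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * d                             # back substitution
    for r in range(d - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, d))) / A[r][r]
    return w

def ridge_predict(w, x):
    # binary decision: sign of the linear score
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# usage on a linearly separable toy set with +/-1 disease labels
X = [[1.0, 2.0], [2.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]]
y = [1, 1, -1, -1]
w = ridge_fit(X, y, lam=0.1)
preds = [ridge_predict(w, x) for x in X]
```

In BDC-CMBOAI, the columns of `X` would first be reduced to the feature subset selected by CMBO-FS.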
Funding: Deanship of Scientific Research at King Khalid University, Grant Number (RGP 2/158/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R114), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Presently, smart cities play a vital role in enhancing the quality of living among human beings in several ways, such as online shopping, e-learning, e-healthcare, etc. Despite the benefits of advanced technologies, issues also arise from the transformation of the physical world into the digital world, particularly in online social networks (OSN). Cyberbullying (CB) is a major problem in OSN which needs to be addressed by automated natural language processing (NLP) and machine learning (ML) approaches. This article devises a novel search and rescue optimization with machine learning enabled cybersecurity model for online social networks, named SRO-MLCOSN. The presented SRO-MLCOSN model focuses on the identification of CB occurring on social networking sites. The SRO-MLCOSN model initially employs the GloVe technique for the word embedding process. Besides, a multiclass weighted kernel extreme learning machine (M-WKELM) model is utilized for effectual identification and categorization of CB. Finally, the Search and Rescue Optimization (SRO) algorithm is exploited to fine-tune the parameters involved in the M-WKELM model. The experimental validation of the SRO-MLCOSN model on the benchmark dataset reported significant outcomes over the other approaches, with precision, recall, and F1-score of 96.24%, 98.71%, and 97.46%, respectively.
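The abstract names a multiclass weighted kernel ELM (M-WKELM) but gives no formulation. Below is a minimal binary sketch of a weighted KELM with an RBF kernel, solving (I/C + WK)β = Wy and scoring a point by K(x, X)·β; the kernel width γ, regularization C, class weights, and toy data are illustrative assumptions, and the multiclass extension is omitted.

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting for A x = b
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def rbf(a, b, gamma=0.5):
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def wkelm_fit(X, y, weights, C=10.0, gamma=0.5):
    # weighted KELM output weights: beta = (I/C + W K)^-1 W y
    n = len(X)
    K = [[rbf(X[i], X[j], gamma) for j in range(n)] for i in range(n)]
    A = [[(1.0 / C if i == j else 0.0) + weights[i] * K[i][j]
          for j in range(n)] for i in range(n)]
    return solve(A, [weights[i] * y[i] for i in range(n)])

def wkelm_predict(X, beta, x, gamma=0.5):
    # score a new point against the training kernel expansion
    score = sum(beta[i] * rbf(X[i], x, gamma) for i in range(len(X)))
    return 1 if score >= 0 else -1

# usage on two well-separated clusters labeled -1 (benign) and +1 (cyberbullying)
X = [[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]]
y = [-1, -1, 1, 1]
beta = wkelm_fit(X, y, [1.0] * 4)
preds = [wkelm_predict(X, beta, x) for x in X]
```

The per-sample `weights` let a minority class count more, which is the role of the weighting in M-WKELM; SRO would tune C and γ rather than leave them fixed as here.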