Colorectal cancer is one of the most commonly diagnosed cancers and develops in the colon region of the large intestine. The histopathologist generally investigates the colon biopsy at the time of colonoscopy or surgery. Early detection of colorectal cancer helps prevent the accumulation of cancerous cells. In medical practice, histopathological investigation of tissue specimens generally takes place in a conventional way, whereas automated tools that use Artificial Intelligence (AI) techniques can improve disease detection performance. Against this background, the current study presents an Automated AI-empowered Colorectal Cancer Detection and Classification (AAI-CCDC) technique. The proposed AAI-CCDC technique focuses on the examination of histopathological images to diagnose colorectal cancer. Initially, the AAI-CCDC technique performs preprocessing at three levels: grayscale transformation, Median Filtering (MF)-based noise removal, and contrast improvement. In addition, the Nadam optimizer with the EfficientNet model is utilized to produce meaningful feature vectors. Furthermore, Glowworm Swarm Optimization (GSO) with a Stacked Gated Recurrent Unit (SGRU) model is used for the detection and classification of colorectal cancer. The proposed AAI-CCDC technique was experimentally validated using a benchmark dataset, and the experimental results established the superiority of the proposed AAI-CCDC technique over conventional approaches.
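The three-level preprocessing pipeline described above (grayscale transformation, median filtering, contrast improvement) can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation; the 3x3 window size and min-max contrast stretching are assumptions.

```python
import numpy as np

def rgb_to_gray(img):
    """Grayscale transformation using ITU-R BT.601 luminosity weights."""
    return img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114

def median_filter(img, k=3):
    """Median Filtering (MF): each pixel becomes the median of its k x k window."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def stretch_contrast(img):
    """Contrast improvement by min-max stretching to the full [0, 255] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12) * 255.0

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (8, 8, 3)).astype(float)   # stand-in histopathology tile
gray = rgb_to_gray(rgb)
denoised = median_filter(gray, k=3)
enhanced = stretch_contrast(denoised)
print(enhanced.shape, enhanced.min())  # (8, 8) 0.0
```

The stretched output always spans [0, 255], which gives downstream feature extractors a consistent intensity range regardless of the original slide's illumination.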
Education 4.0 is increasingly enabled by the design of artificial intelligence (AI) techniques. Higher education institutions (HEIs) have started to utilize Internet technologies to improve the quality of their services and boost knowledge. Due to the unavailability of information technology (IT) infrastructures, HEIs are vulnerable to cyberattacks. Biometric authentication can be used to authenticate a person based on biological features such as the face, fingerprint, and iris. This study designs a novel search and rescue optimization with deep learning based learner authentication technique for cybersecurity in higher education institutions, named the SRODL-LAC technique. The proposed SRODL-LAC technique aims to authenticate the learner/student in HEIs using fingerprint biometrics. Besides, the SRODL-LAC technique designs a median filtering (MF) based preprocessing approach to improve the quality of the image. In addition, the Densely Connected Networks (DenseNet-77) model is applied for the extraction of features. Moreover, the search and rescue optimization (SRO) algorithm with a deep neural network (DNN) model is utilized for the classification process. Lastly, a template matching process is performed for fingerprint identification. A wide range of simulation analyses was carried out and the results were inspected under several aspects. The experimental results reported the effective performance of the SRODL-LAC technique over the other methodologies.
As higher education institutions (HEIs) go online, several benefits are attained, but they also become vulnerable to several kinds of attacks. To accomplish security, this paper presents an artificial intelligence based cybersecurity intrusion detection model. The incorporation of such strategies into business is a tendency among several distinct industries, including education, where it has been recognized as a game changer. Consequently, HEIs can align closely with the requirements and knowledge of the learner, making the education procedure highly effective. Thus, artificial intelligence (AI) and machine learning (ML) models have attracted significant interest in HEIs. This study designs a novel Artificial Intelligence based Cybersecurity Intrusion Detection model for Higher Education Institutions, named the AICID-HEI technique. The goal of the AICID-HEI technique is to determine the occurrence of distinct kinds of intrusions in higher education institutes. The AICID-HEI technique encompasses a min-max normalization approach to preprocess the data. Besides, the AICID-HEI technique involves an improved differential evolution algorithm based feature selection (IDEA-FS) technique to choose the feature subsets. Moreover, the bidirectional long short-term memory (BiLSTM) model is utilized for the detection and classification of intrusions in the network. Furthermore, the Adam optimizer is applied for hyperparameter tuning to properly adjust the hyperparameters. To validate the proposed AICID-HEI technique, simulations were carried out on a benchmark dataset. The experimental results reported the betterment of the AICID-HEI technique over the other methods in terms of different measures.
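The min-max normalization step used for preprocessing above has a simple concrete form: each feature column is scaled to [0, 1]. A minimal sketch (the handling of constant columns is an assumption, since the abstract does not specify it):

```python
import numpy as np

def min_max_normalize(X):
    """Min-max normalization: scale each feature column to [0, 1]
    via (x - min) / (max - min)."""
    mn = X.min(axis=0)
    rng = X.max(axis=0) - mn
    rng[rng == 0] = 1.0  # assumption: constant columns map to 0 instead of dividing by zero
    return (X - mn) / rng

X = np.array([[2.0, 10.0],
              [4.0, 30.0],
              [6.0, 50.0]])
print(min_max_normalize(X))  # rows scale to [[0, 0], [0.5, 0.5], [1, 1]]
```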
Education acts as an important part of economic growth and the improvement of human welfare. The educational sector has transformed a lot in recent days, and Information and Communication Technology (ICT) is an effective part of the education field. Almost every action in universities and colleges, right from counselling to admissions and fee deposits, has been automated. Attendance records, quizzes, evaluations, marks, and grade submissions involve the utilization of ICT. Therefore, security is essential to accomplish cybersecurity in higher education institutions (HEIs). In this view, this study develops an Automated Outlier Detection for CyberSecurity in Higher Education Institutions (AOD-CSHEI) technique. The AOD-CSHEI technique intends to determine the presence of intrusions or attacks in HEIs. The AOD-CSHEI technique initially performs data pre-processing in two stages, namely data conversion and class labelling. In addition, the Adaptive Synthetic (ADASYN) sampling technique is exploited to handle the imbalanced class distribution in the data. Besides, the sparrow search algorithm (SSA) with a deep neural network (DNN) model is used for the classification of data into the existence or absence of intrusions in the HEI network. Finally, the SSA is utilized to effectually adjust the hyperparameters of the DNN approach. To showcase the enhanced performance of the AOD-CSHEI technique, a set of simulations was conducted on three benchmark datasets, and the results reported the enhanced efficiency of the AOD-CSHEI technique over the compared methods, with a higher accuracy of 0.9997.
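ADASYN generates synthetic minority-class samples, concentrating them where minority points are surrounded by the majority class. A simplified, NumPy-only sketch of that idea follows (binary labels, `k` nearest neighbours, synthetic data); it illustrates the algorithm's logic, not the reference implementation:

```python
import numpy as np

def adasyn_oversample(X, y, k=5, seed=0):
    """ADASYN-style oversampling sketch (binary labels, minority class = 1):
    minority samples whose k-neighbourhoods are dominated by the majority
    class receive proportionally more synthetic offspring."""
    rng = np.random.default_rng(seed)
    Xmin = X[y == 1]
    G = int((y == 0).sum() - (y == 1).sum())          # samples needed for balance
    # r_i: majority share among the k nearest neighbours of each minority point
    d_all = np.linalg.norm(Xmin[:, None, :] - X[None, :, :], axis=2)
    nn_all = np.argsort(d_all, axis=1)[:, 1:k + 1]    # skip the point itself
    r = (y[nn_all] == 0).mean(axis=1)
    r = r / r.sum() if r.sum() > 0 else np.full(len(Xmin), 1.0 / len(Xmin))
    counts = np.round(r * G).astype(int)
    # synthesize by interpolating toward random minority-class neighbours
    d_min = np.linalg.norm(Xmin[:, None, :] - Xmin[None, :, :], axis=2)
    nn_min = np.argsort(d_min, axis=1)[:, 1:k + 1]
    synth = []
    for i, n_i in enumerate(counts):
        for _ in range(n_i):
            j = rng.choice(nn_min[i])
            synth.append(Xmin[i] + rng.random() * (Xmin[j] - Xmin[i]))
    return np.array(synth)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (10, 2))])  # 40 vs 10
y = np.array([0] * 40 + [1] * 10)
S = adasyn_oversample(X, y, k=5)
print(S.shape[1])  # synthetic samples keep the original 2 features
```

After appending `S` to the minority class, the two classes are roughly balanced, which is the point of using ADASYN before training the SSA-tuned DNN classifier.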
Signature verification is regarded as the most beneficial behavioral-characteristic-based biometric feature in security and fraud protection. It is also a popular biometric authentication technology in forensic and commercial transactions due to its various advantages, including noninvasiveness, user-friendliness, and social and legal acceptability. According to the literature, extensive research has been conducted on signature verification systems in a variety of languages, including English, Hindi, Bangla, and Chinese. However, the Arabic Offline Signature Verification (OSV) system is still a challenging issue that has not been investigated as much by researchers, because the Arabic script is distinguished by changing letter shapes, diacritics, ligatures, and overlapping, making verification more difficult. Recently, signature verification systems have shown promising results for recognizing signatures as genuine or forged; however, performance on skilled forgery detection is still unsatisfactory. Most existing methods require many learning samples to improve verification accuracy, which is a major drawback because the number of available signature samples is often limited in the practical application of signature verification systems. This study addresses these issues by presenting an OSV system based on multi-feature fusion and discriminant feature selection using a genetic algorithm (GA). In contrast to existing methods, which use multi-class learning approaches, this study uses a one-class learning strategy to address imbalanced signature data in the practical application of a signature verification system. The proposed approach is tested on three signature databases: Arabic handwritten signatures, CEDAR (Center of Excellence for Document Analysis and Recognition), and UTSig (University of Tehran Persian Signature). Experimental results show that the proposed system outperforms existing systems in terms of reducing the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER). The proposed system achieved a 5% improvement.
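GA-based feature selection of the kind used above can be illustrated with a toy genetic algorithm that evolves binary feature masks. Everything below (the least-squares fitness, the per-feature penalty, and the GA hyperparameters) is an assumption chosen for illustration, not the paper's configuration:

```python
import numpy as np

def ls_mse(X, y):
    """Training MSE of a least-squares fit on the given design matrix."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((X @ w - y) ** 2))

def fitness(mask, X, y, penalty=0.05):
    """Higher is better: negative fit error minus an assumed per-feature cost."""
    if not mask.any():
        return -np.inf
    return -ls_mse(X[:, mask], y) - penalty * mask.sum()

def ga_select(X, y, pop=30, gens=40, seed=0):
    """Toy GA over feature masks: tournament selection, uniform crossover,
    bit-flip mutation, with the best mask ever seen retained (elitism)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5
    best, best_f = None, -np.inf
    for _ in range(gens):
        f = np.array([fitness(m, X, y) for m in P])
        if f.max() > best_f:
            best_f, best = f.max(), P[f.argmax()].copy()
        idx = rng.integers(0, pop, (pop, 2))
        parents = P[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        cross = rng.random((pop, n)) < 0.5
        children = np.where(cross, parents, parents[rng.permutation(pop)])
        P = children ^ (rng.random((pop, n)) < 0.05)   # mutation
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                      # 2 informative + 4 noise features
y = 2 * X[:, 0] + 3 * X[:, 1] + rng.normal(0, 0.1, 200)
mask = ga_select(X, y)
print(mask[:2])  # the two informative features survive selection
```

The penalty term plays the role of the "discriminant" pressure: it rewards masks that explain the target with as few features as possible.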
The advancements made in the Internet of Things (IoT) are projected to alter the functioning of the healthcare industry, in addition to the increased penetration of different applications. However, data security and privacy are challenging tasks to accomplish in IoT, and necessary measures must be taken to ensure secure operation. With this background, the current paper proposes a novel lightweight cryptography method to enhance security in IoT. The proposed encryption algorithm is a blend of the Cross Correlation Coefficient (CCC) and the Black Widow Optimization (BWO) algorithm. In the presented technique, the CCC operation is utilized to optimize the encryption process of the cryptography method. The projected algorithm works in line with encryption and decryption processes. Optimal key selection is performed with the help of an Artificial Intelligence (AI) tool, namely the BWO algorithm. With the combination of the AI technique and the CCC operation, security in IoT is improved. Using different sets of images collected from databases, the projected technique was validated in MATLAB on the basis of a few performance metrics such as encryption time, decryption time, Peak Signal to Noise Ratio (PSNR), CC, and error. The results were compared with existing methods such as Elliptic Curve Cryptography (ECC) and Rivest-Shamir-Adleman (RSA), and the supremacy of the projected method was established.
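Among the metrics reported above, PSNR has a standard closed form, 10 * log10(peak^2 / MSE). A small sketch (the toy image is invented for illustration):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images: infinite PSNR
    return 10 * np.log10(peak ** 2 / mse)

img = np.full((4, 4), 100.0)
noisy = img + 5.0             # constant error of 5 -> MSE = 25
print(round(psnr(img, noisy), 2))  # 34.15
```

Higher PSNR between the original and decrypted image indicates less distortion introduced by the encryption/decryption round trip.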
This study presents a novel method for medical applications based on Quantum Computing (QC) and a few Machine Learning (ML) systems. QC has a primary advantage: it exploits quantum parallelism to deliver the result of the prime factorization problem in a matter of seconds. Hence, this model has only recently been suggested for medical applications by researchers. A novel strategy, the Quantum Kernel Method (QKM), is proposed in this paper for data prediction. In the QKM process, the Linear Tunicate Swarm Algorithm (LTSA) optimization technique is used to calculate the loss function initially and is aimed at medical data. The output of the optimization is either 0 or 1, i.e., odd or even in QC. From this output value, the data is identified according to its class. Meanwhile, the method also reduces time, saves cost, and improves efficiency through a feature selection process, namely the filter method. After the features are extracted, QKM is deployed as a classification model, while the loss function is minimized by LTSA. The motivation for the minimal objective is to remain fast, yet some computations can be performed more efficiently by the proposed model. In testing, the test data was evaluated by the minimal loss function. The outcomes were assessed in terms of accuracy, computational time, and so on. For this, databases such as Lymphography, Dermatology, and Arrhythmia were used.
Liver cancer is one of the major diseases with increased mortality in recent years across the globe. Manual detection of liver cancer is a tedious and laborious task, due to which Computer Aided Diagnosis (CAD) models have been developed to detect the presence of liver cancer accurately and classify its stages. Besides, the liver cancer segmentation outcome, derived from medical images, is employed in the assessment of tumor volume, further treatment plans, and response monitoring. Hence, there exists a need to develop automated tools for precise liver cancer detection. With this motivation, the current study introduces an Intelligent Artificial Intelligence with Equilibrium Optimizer based Liver Cancer Classification (IAIEO-LCC) model. The proposed IAIEO-LCC technique initially performs Median Filtering (MF)-based pre-processing and a data augmentation process. Besides, a Kapur's entropy-based segmentation technique is used to identify the affected regions in the liver. Moreover, a VGG-19 based feature extractor and Equilibrium Optimizer (EO)-based hyperparameter tuning processes are involved to derive the feature vectors. At last, a Stacked Gated Recurrent Unit (SGRU) classifier is exploited to detect and classify liver cancer effectively. To demonstrate the superiority of the proposed IAIEO-LCC technique, a wide range of simulations was conducted and the results were inspected under different measures. The comparison study results infer that the proposed IAIEO-LCC technique achieved an improved accuracy of 98.52%.
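Kapur's entropy-based segmentation, used above to localize affected regions, picks the threshold that maximizes the summed Shannon entropies of the two resulting intensity classes. A compact sketch on a synthetic bimodal histogram (the test "image" is invented for illustration):

```python
import numpy as np

def kapur_threshold(hist):
    """Kapur's method: pick t maximizing the sum of Shannon entropies of the
    normalized histograms of bins [0..t-1] (background) and [t..255] (object)."""
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# synthetic bimodal "scan": dark background near 40, bright region near 200
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(40, 5, 5000), rng.normal(200, 5, 2000)])
pixels = np.clip(pixels, 0, 255).astype(int)
t = kapur_threshold(np.bincount(pixels, minlength=256))
print(40 < t < 200)  # True: the threshold falls between the two modes
```

Pixels above the threshold form the candidate lesion mask that later stages (VGG-19 features, SGRU classification) operate on.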
Atrial fibrillation is the most common persistent form of arrhythmia. A method based on the wavelet transform combined with a deep convolutional neural network is applied for the automatic classification of electrocardiograms. Since the ECG signal is easily contaminated by interference, it is decomposed into nine sub-signals with different frequency scales by a wavelet function, and wavelet reconstruction is then carried out after segmented filtering to eliminate the influence of noise. A 24-layer convolutional neural network is used to extract hierarchical features through convolution kernels of different sizes, and finally a softmax classifier performs the classification. This paper applies the method to the ECG data set provided by the 2017 PhysioNet/CinC Challenge. After cross-validation, the method obtains 87.1% accuracy and an F1 score of 86.46%. Compared with existing classification methods, our proposed algorithm has higher accuracy and generalization ability for ECG signal data classification.
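The decompose-filter-reconstruct denoising idea can be demonstrated with a single-level Haar transform implemented by hand. The paper uses a nine-band decomposition with a different wavelet; this is a deliberately minimal stand-in on a synthetic "ECG-like" signal:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: approximation and detail bands."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# slow wave (the "signal") plus high-frequency noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + rng.normal(0, 0.3, 512)
a, d = haar_dwt(noisy)
denoised = haar_idwt(a, np.zeros_like(d))   # discard the detail (noise) band
err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_denoised < err_noisy)  # True
```

Zeroing the detail band before reconstruction is the one-level analogue of the paper's segmented filtering across nine frequency scales.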
Sentence semantic matching (SSM) is fundamental research for solving natural language processing tasks such as question answering and machine translation. The latest SSM research benefits from deep learning techniques by incorporating attention mechanisms to semantically match given sentences. However, how to fully capture the semantic context without losing significant features for sentence encoding is still a challenge. To address this challenge, we propose a deep feature fusion model and integrate it into the most popular deep learning architecture for the sentence matching task. The integrated architecture mainly consists of an embedding layer, a deep feature fusion layer, a matching layer, and a prediction layer. In addition, we compare the commonly used loss functions and propose a novel hybrid loss function integrating MSE and cross entropy together, considering confidence intervals and threshold settings to preserve the indistinguishable instances in the training process. To evaluate our model's performance, we experiment on two real-world public data sets: LCQMC and Quora. The experimental results demonstrate that our model outperforms most existing advanced deep learning models for sentence matching, benefiting from our enhanced loss function and deep feature fusion model for capturing semantic context.
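A hybrid loss blending MSE and cross entropy can be sketched as a convex combination. The mixing weight `alpha` below is an assumption; the paper's exact weighting (with its confidence-interval and threshold handling) is not reproduced here:

```python
import numpy as np

def hybrid_loss(p, y, alpha=0.5, eps=1e-12):
    """Convex blend of binary cross-entropy and MSE between predicted
    probabilities p and binary labels y; alpha is an assumed mixing weight."""
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    mse = np.mean((p - y) ** 2)
    return alpha * ce + (1 - alpha) * mse

y = np.array([1.0, 0.0, 1.0])
good = np.array([0.9, 0.1, 0.8])   # confident, mostly correct predictions
bad = np.array([0.4, 0.6, 0.3])    # near-boundary, mostly wrong predictions
print(hybrid_loss(good, y) < hybrid_loss(bad, y))  # True
```

The MSE term grows only quadratically near the decision boundary, so compared with pure cross entropy it penalizes borderline ("indistinguishable") instances more gently, which is the intuition the paper builds on.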
Rainfall prediction has become popular in real-time environments due to recent technological developments. Accurate and fast rainfall prediction models can be designed using machine learning (ML), statistical models, etc. Besides, feature selection approaches can be derived to eliminate the curse of dimensionality. In this aspect, this paper presents a novel chaotic spider monkey optimization with optimal kernel ridge regression (CSMO-OKRR) model for accurate rainfall prediction. The goal of the CSMO-OKRR technique is to properly predict rainfall using weather data. The proposed CSMO-OKRR technique encompasses three major processes, namely feature selection, prediction, and parameter tuning. Initially, the CSMO algorithm is employed to derive a useful subset of features and reduce the computational complexity. In addition, the KRR model is used for the prediction of rainfall based on weather data. Lastly, the symbiotic organism search (SOS) algorithm is employed to properly tune the parameters involved in it. A series of simulations was performed to demonstrate the better performance of the CSMO-OKRR technique with respect to different measures. The simulation results reported the enhanced outcomes of the CSMO-OKRR technique over existing techniques.
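Kernel ridge regression itself is compact enough to write out: fit dual coefficients alpha = (K + lambda*I)^-1 y and predict with f(x) = k(x)^T alpha. A NumPy sketch with an RBF kernel; the gamma and lambda values are illustrative defaults, and the toy data stands in for real weather features:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

class KernelRidge:
    """Kernel ridge regression: coef = (K + lam*I)^-1 y; f(x) = k(x, X) @ coef."""
    def __init__(self, gamma=0.5, lam=1e-3):
        self.gamma, self.lam = gamma, lam
    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.coef = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self
    def predict(self, X):
        return rbf_kernel(X, self.X, self.gamma) @ self.coef

# toy "rainfall" regression: smooth nonlinear function of one weather feature
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 80)
model = KernelRidge().fit(X, y)
pred = model.predict(X)
print(np.mean((pred - y) ** 2) < 0.01)  # True: the fit tracks the data closely
```

In the paper's pipeline, gamma and lam are exactly the kind of parameters the SOS tuner would search over instead of fixing by hand.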
Rapid advancements in the Industrial Internet of Things (IIoT) and artificial intelligence (AI) pose serious security issues by revealing secret data. Therefore, data security becomes a crucial issue in IIoT communication, where secrecy needs to be guaranteed in real time. Practically, AI techniques can be utilized to design image steganographic techniques in IIoT. In addition, encryption techniques play an important role in protecting the actual information generated by IIoT devices from unauthorized access. To accomplish secure data transmission in the IIoT environment, this study presents a novel encryption with image steganography based data hiding technique (EIS-DHT) for the IIoT environment. The proposed EIS-DHT technique involves a new quantum black widow optimization (QBWO) to competently choose the pixel values for hiding secret data in the cover image. In addition, a multi-level discrete wavelet transform (DWT) based transformation process takes place. Besides, the secret image is divided into its R, G, and B bands, which are then individually encrypted using Blowfish, Twofish, and the Lorenz Hyperchaotic System. At last, the stego image is generated by placing the encrypted images into the optimum pixel locations of the cover image. To validate the enhanced data hiding performance of the EIS-DHT technique, a set of simulation analyses was carried out and the results were inspected in terms of different measures. The experimental outcomes stated the supremacy of the EIS-DHT technique over the other existing techniques and ensure maximum security.
Routine immunization (RI) of children is the most effective and timely public health intervention for decreasing child mortality rates around the globe. Pakistan, being a low-and-middle-income country (LMIC), has one of the highest child mortality rates in the world, occurring mainly due to vaccine-preventable diseases (VPDs). For improving RI coverage, a critical need is to establish potential RI defaulters at an early stage, so that appropriate interventions can be targeted towards the population identified to be at risk of missing their scheduled vaccine uptakes. In this paper, a machine learning (ML) based predictive model has been proposed to predict defaulting and non-defaulting children on upcoming immunization visits and examine the effect of its underlying contributing factors. The predictive model uses data obtained from the Paigham-e-Sehat study, comprising immunization records of 3,113 children. The design of the predictive model is based on obtaining optimal results across accuracy, specificity, and sensitivity, to ensure the model outcomes remain practically relevant to the problem addressed. Further optimization of the predictive model is obtained through the selection of significant features and the removal of data bias. Nine machine learning algorithms were applied to predict defaulting children for the next immunization visit. The results showed that the random forest model achieves an optimal accuracy of 81.9% with 83.6% sensitivity and 80.3% specificity. The main determinants of vaccination coverage were found to be vaccine coverage at birth, parental education, and the socioeconomic conditions of the defaulting group. This information can assist relevant policy makers in taking proactive and effective measures for developing evidence-based, targeted, and timely interventions for defaulting children.
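The accuracy, sensitivity, and specificity figures reported above follow directly from the binary confusion matrix. A small sketch (the labels are invented; 1 denotes a defaulter):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on defaulters), and specificity
    from a binary confusion matrix, with 1 = defaulter."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
m = binary_metrics(y_true, y_pred)
print(m["sensitivity"], m["accuracy"])  # 0.75 0.8
```

Balancing sensitivity against specificity, as the study does, matters here because missing a true defaulter (a false negative) has a different cost from flagging a compliant family (a false positive).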
Due to the advanced development of multimedia-on-demand traffic in different forms of audio, video, and images, the vision of the Internet of Things (IoT) has moved from scalar data to the Internet of Multimedia Things (IoMT). Since Unmanned Aerial Vehicles (UAVs) generate a massive quantity of multimedia data, they become a part of IoMT and are commonly employed in diverse application areas, especially for capturing remote sensing (RS) images. At the same time, the interpretation of the captured RS image is also a crucial issue, which can be addressed by multi-label classification and Computational Linguistics based image captioning techniques. To achieve this, this paper presents an efficient low-complexity encoding technique with multi-label classification and image captioning for UAV based RS images. The presented model primarily involves a low-complexity encoder using the Neighborhood Correlation Sequence (NCS) with a Burrows-Wheeler Transform (BWT) technique, called LCE-BWT, for encoding the RS images captured by the UAV. The application of NCS greatly reduces the computational complexity and requires fewer resources for image transmission. Secondly, a deep learning (DL) based shallow convolutional neural network for RS image classification (SCNN-RSIC) technique is presented to determine the multiple class labels of the RS image, which shows the novelty of the work. Finally, the Computational Linguistics based Bidirectional Encoder Representations from Transformers (BERT) technique is applied for image captioning, to provide a proficient textual description of the RS image. The performance of the presented technique is tested using the UCM dataset. The simulation outcome implied that the presented model obtained effective compression performance, reconstructed image quality, classification results, and image captioning outcomes.
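The Burrows-Wheeler transform at the heart of the LCE-BWT encoder is easy to demonstrate on a short string. This is the textbook construction with a `$` sentinel, not the paper's encoder:

```python
def bwt(s, end="$"):
    """Burrows-Wheeler transform: last column of the sorted rotation matrix.
    Grouping similar characters together makes the output easier to compress."""
    s = s + end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(L, end="$"):
    """Invert the BWT by repeatedly prepending L and re-sorting the table."""
    table = [""] * len(L)
    for _ in range(len(L)):
        table = sorted(L[i] + table[i] for i in range(len(L)))
    row = next(r for r in table if r.endswith(end))
    return row[:-1]

t = bwt("banana")
print(t)               # annb$aa
print(inverse_bwt(t))  # banana
```

The transform is lossless, which is why the paper can report reconstructed image quality after decoding: the BWT only reorders symbols so that a downstream entropy coder compresses them better.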
In recent years, Smart City Infrastructures (SCI) have become familiar, and intelligent models have been designed to improve the quality of living in smart cities. Simultaneously, anomaly detection in SCI has become a hot research topic and is widely explored to enhance the safety of pedestrians. The increasing popularity of video surveillance systems and the drastic increase in the amount of collected video make the conventional physical investigation method of identifying abnormal actions a laborious process. In this background, Deep Learning (DL) models can be used in the detection of anomalies found through video surveillance systems. The current research paper develops an Internet of Things Assisted Deep Learning Enabled Anomaly Detection technique for Smart City Infrastructures, named the IoTAD-SCI technique. The aim of the proposed IoTAD-SCI technique is mainly to identify the existence of anomalies in the smart city environment. Besides, the IoTAD-SCI technique involves the design of a Deep Consensus Network (DCN) model to detect anomalies in input video frames. In addition, the Arithmetic Optimization Algorithm (AOA) is executed to tune the hyperparameters of the DCN model. Moreover, an ID3 classifier is also utilized to classify the identified objects into different classes. The experimental analysis of the proposed IoTAD-SCI technique was conducted on the benchmark UCSD anomaly detection dataset, and the results were inspected under different measures. The simulation results infer the superiority of the proposed IoTAD-SCI technique under different metrics.
Medical data classification has become a hot research topic in the healthcare sector to aid physicians in decision making. Besides, advances in machine learning (ML) techniques assist in performing the classification task effectively. With this motivation, this paper presents a Fuzzy Clustering Approach Based on Breadth-first Search (FCABFS) algorithm with an optimal support vector machine (OSVM) model, named FCABFS-OSVM, for medical data classification. The proposed FCABFS-OSVM technique intends to classify healthcare data by the use of clustering and classification models. Besides, the proposed FCABFS-OSVM technique involves the design of the FCABFS technique to cluster the medical data, which helps to boost the classification performance. Moreover, the OSVM model investigates the clustered medical data to perform the classification process. Furthermore, the Archimedes optimization algorithm (AOA) is utilized to tune the SVM parameters and boost the medical data classification results. A wide range of simulations was performed to highlight the promising performance of the FCABFS-OSVM technique. Extensive comparison studies reported the enhanced outcomes of the FCABFS-OSVM technique over recent state-of-the-art approaches.
The accuracy of a statistical learning model depends on the learning technique used, which in turn depends on the dataset's values. In most research studies, the existence of missing values (MVs) is a vital problem. In addition, any dataset with MVs cannot be used for further analysis or with any data-driven tool, especially when the percentage of MVs is high. In this paper, the authors propose a novel algorithm for dealing with MVs based on feature selection (FS) using a similarity classifier with a fuzzy entropy measure. The proposed algorithm imputes MVs in cumulative order. The candidate feature to be manipulated is selected using a similarity classifier with Parkash's fuzzy entropy measure. The predictive model used to predict MVs within the candidate feature is the Bayesian Ridge Regression (BRR) technique. Furthermore, any imputed features are incorporated within the BRR equation to impute the MVs in the next chosen incomplete feature. The proposed algorithm was compared against some practical state-of-the-art imputation methods in an experiment on four medical datasets gathered from several database repositories, with MVs generated from the three missingness mechanisms. The evaluation metrics of mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2 score) were used to measure the performance. The results exhibited that performance varies depending on the size of the dataset, the amount of MVs, and the missingness mechanism type. Moreover, compared to other methods, the results showed that the proposed method gives better accuracy and less error in most cases.
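The core imputation step, predicting one incomplete feature from fully observed ones with a ridge-type regression, can be sketched as follows. Plain ridge regression stands in here for the paper's Bayesian Ridge Regression, and the data is synthetic:

```python
import numpy as np

def ridge_impute(X, col, lam=1.0):
    """Impute missing values (NaN) in one column by ridge regression on the
    remaining, fully observed columns (a simplified stand-in for BRR)."""
    miss = np.isnan(X[:, col])
    other = np.delete(X, col, axis=1)
    # fit w on the complete rows (with an intercept column of ones)
    A = np.column_stack([other[~miss], np.ones((~miss).sum())])
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ X[~miss, col])
    # predict the missing entries
    B = np.column_stack([other[miss], np.ones(miss.sum())])
    X = X.copy()
    X[miss, col] = B @ w
    return X

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 2))
X = np.column_stack([Z, 2 * Z[:, 0] - Z[:, 1]])   # column 2 is a linear mix
X[::10, 2] = np.nan                               # knock out every 10th value
filled = ridge_impute(X, col=2)
truth = 2 * Z[::10, 0] - Z[::10, 1]
print(np.max(np.abs(filled[::10, 2] - truth)) < 0.5)  # True
```

In the paper's cumulative scheme, the column just filled would then join `other` when imputing the next incomplete feature chosen by the fuzzy-entropy selector.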
Industrial Control Systems (ICS) can be employed in industrial processes to reduce manual labor, handle complicated industrial system processes, and communicate effectively. The Internet of Things (IoT) integrates numerous sets of sensors and devices via a data network, enabling independent processes. The incorporation of the IoT into the industrial sector leads to the design of the Industrial Internet of Things (IIoT), which finds use in water distribution systems, power plants, etc. Since the IIoT is susceptible to different kinds of attacks due to the utilization of Internet connections, an effective forensic investigation process becomes essential. This study offers the design of an intelligent forensic investigation using an optimal stacked autoencoder for critical industrial infrastructures. The proposed strategy involves the design of manta ray foraging optimization (MRFO) based feature selection with an optimal stacked autoencoder (OSAE) model, named the MFROFS-OSAE approach. The primary objective of the MFROFS-OSAE technique is to determine the presence of abnormal events in critical industrial infrastructures. The MFROFS-OSAE approach involves several subprocesses, namely data gathering, data handling, feature selection, classification, and parameter tuning. Besides, the MRFO based feature selection approach is designed for the optimal selection of feature subsets. Moreover, the OSAE based classifier is derived to detect abnormal events, and the parameter tuning process is carried out via the coyote optimization algorithm (COA). The performance validation of the MFROFS-OSAE technique takes place using a benchmark dataset, and the experimental results reported the betterment of the MFROFS-OSAE technique over recent approaches in terms of different measures.
Funding: This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant No. (D-398-247-1443).
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the Project Number (IFPRC-154-611-2020) and to King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: Education 4.0 is increasingly enabled by the design of artificial intelligence (AI) techniques. Higher education institutions (HEIs) have started to utilize Internet technologies to improve the quality of their services and to expand knowledge. Owing to shortcomings in their information technology (IT) infrastructure, HEIs are vulnerable to cyberattacks. Biometric authentication can be used to authenticate a person based on biological features such as the face, fingerprint, and iris. This study designs a novel search and rescue optimization with deep learning-based learning authentication technique for cybersecurity in higher education institutions, named the SRODL-LAC technique. The proposed SRODL-LAC technique aims to authenticate the learner/student in an HEI using fingerprint biometrics. Besides, the SRODL-LAC technique designs a median filtering (MF) based preprocessing approach to improve image quality. In addition, the Densely Connected Networks (DenseNet-77) model is applied for feature extraction. Moreover, the search and rescue optimization (SRO) algorithm with a deep neural network (DNN) model is utilized for the classification process. Lastly, a template matching process is performed for fingerprint identification. A wide range of simulation analyses was carried out and the results were inspected under several aspects. The experimental results reported the effective performance of the SRODL-LAC technique over the other methodologies.
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the Project Number (IFPRC-154-611-2020) and to King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: As higher education institutions (HEIs) go online, several benefits are attained, but they also become vulnerable to several kinds of attacks. To accomplish security, this paper presents artificial intelligence-based cybersecurity intrusion detection models. Incorporating such strategies into operations is a trend across several distinct industries, including education, where they have been recognized as a game changer. Consequently, HEIs can align closely with the requirements and knowledge of learners, making the education process highly effective. Thus, artificial intelligence (AI) and machine learning (ML) models have attracted significant interest in HEIs. This study designs a novel Artificial Intelligence based Cybersecurity Intrusion Detection Model for Higher Education Institutions, named the AICID-HEI technique. The goal of the AICID-HEI technique is to determine the occurrence of distinct kinds of intrusions in higher education institutes. The AICID-HEI technique encompasses a min-max normalization approach to preprocess the data. Besides, the AICID-HEI technique involves an improved differential evolution algorithm based feature selection (IDEA-FS) technique to choose the feature subsets. Moreover, the bidirectional long short-term memory (BiLSTM) model is utilized for the detection and classification of intrusions in the network. Furthermore, the Adam optimizer is applied for hyperparameter tuning to properly adjust the hyperparameters. To validate the proposed AICID-HEI technique, simulations were carried out on a benchmark dataset. The experimental results reported the betterment of the AICID-HEI technique over the other methods in terms of different measures.
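The min-max normalization step mentioned above maps every feature to a common [0, 1] scale so that no single attribute dominates distance or gradient computations. A minimal sketch:

```python
def min_max_normalize(column):
    # Scale a numeric feature column to the [0, 1] range.
    lo, hi = min(column), max(column)
    if hi == lo:                       # constant feature: map everything to 0.0
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]
```

In practice the (lo, hi) pair is computed on the training split only and reused for test data, so that the normalization does not leak information.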
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the Project Number (IFPRC-154-611-2020) and to King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: Education plays an important part in economic growth and the improvement of human welfare. The educational sector has transformed considerably in recent years, and Information and Communication Technology (ICT) has become an integral part of the education field. Almost every action in universities and colleges, right from counselling to admissions and fee deposits, has been automated. Attendance records, quizzes, evaluations, marks, and grade submissions all involve the use of ICT. Therefore, security measures are essential to accomplish cybersecurity in higher education institutions (HEIs). In this view, this study develops an Automated Outlier Detection for CyberSecurity in Higher Education Institutions (AOD-CSHEI) technique. The AOD-CSHEI technique intends to determine the presence of intrusions or attacks in HEIs. The AOD-CSHEI technique initially performs data pre-processing in two stages, namely data conversion and class labelling. In addition, the Adaptive Synthetic (ADASYN) technique is exploited for the removal of outliers in the data. Besides, the sparrow search algorithm (SSA) with a deep neural network (DNN) model is used to classify the data into the existence or absence of intrusions in the HEI network. Finally, the SSA is utilized to effectually adjust the hyperparameters of the DNN approach. To showcase the enhanced performance of the AOD-CSHEI technique, a set of simulations was run on three benchmark datasets, and the results reported the enhanced efficiency of the AOD-CSHEI technique over the compared methods, with a higher accuracy of 0.9997.
Abstract: Signature verification is regarded as the most beneficial behavioral-characteristic-based biometric feature in security and fraud protection. It is also a popular biometric authentication technology in forensic and commercial transactions due to its various advantages, including noninvasiveness, user-friendliness, and social and legal acceptability. According to the literature, extensive research has been conducted on signature verification systems in a variety of languages, including English, Hindi, Bangla, and Chinese. However, the Arabic Offline Signature Verification (OSV) system remains a challenging issue that has not been investigated as much by researchers, because Arabic script is distinguished by changing letter shapes, diacritics, ligatures, and overlapping, making verification more difficult. Recently, signature verification systems have shown promising results for recognizing genuine and forged signatures; however, performance on skilled-forgery detection is still unsatisfactory. Most existing methods require many learning samples to improve verification accuracy, which is a major drawback because the number of available signature samples is often limited in the practical application of signature verification systems. This study addresses these issues by presenting an OSV system based on multifeature fusion and discriminant feature selection using a genetic algorithm (GA). In contrast to existing methods, which use multiclass learning approaches, this study uses a one-class learning strategy to address imbalanced signature data in the practical application of a signature verification system. The proposed approach is tested on three signature databases (SID): Arabic handwritten signatures, CEDAR (Center of Excellence for Document Analysis and Recognition), and UTSig (University of Tehran Persian Signature). Experimental results show that the proposed system outperforms existing systems in terms of reducing the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER), achieving a 5% improvement.
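Discriminant feature selection with a genetic algorithm, as used above, evolves binary masks over the feature set. The sketch below is a generic GA (tournament selection, one-point crossover, bit-flip mutation, elitism) with a toy fitness function; the population size, mutation rate, and fitness are illustrative assumptions, not the paper's settings.

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, generations=40, seed=0):
    # Evolve binary feature masks toward higher fitness.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = [best[:]]                        # elitism: keep the best mask
        while len(nxt) < pop_size:
            cut = rng.randrange(1, n_features)  # one-point crossover
            child = tournament()[:cut] + tournament()[cut:]
            if rng.random() < 0.2:              # bit-flip mutation
                child[rng.randrange(n_features)] ^= 1
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best

# Toy fitness: features 0, 2, 4 are discriminative; every kept feature costs 0.1.
def toy_fitness(mask):
    return sum(mask[i] for i in (0, 2, 4)) - 0.1 * sum(mask)
```

In a real OSV system the fitness would score verification accuracy of a one-class learner trained on the masked feature subset, with the size penalty discouraging redundant features.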
Funding: This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant No. (D-387-135-1443).
Abstract: The advancements made in the Internet of Things (IoT) are projected to alter the functioning of the healthcare industry, in addition to the increased penetration of its applications. However, data security and privacy are challenging to accomplish in the IoT, and necessary measures must be taken to ensure secure operation. Against this background, the current paper proposes a novel lightweight cryptography method to enhance security in the IoT. The proposed encryption algorithm is a blend of the Cross Correlation Coefficient (CCC) and the Black Widow Optimization (BWO) algorithm. In the presented technique, the CCC operation is utilized to optimize the encryption process of the cryptography method. The projected encryption algorithm works through encryption and decryption processes. Optimal key selection is performed with the help of an Artificial Intelligence (AI) tool, the BWO algorithm. With the combination of the AI technique and the CCC operation, security in the IoT is improved. Using different sets of images collected from databases, the projected technique was validated in MATLAB on the basis of performance metrics such as encryption time, decryption time, Peak Signal to Noise Ratio (PSNR), CC, and error. The results were compared with existing methods such as Elliptic Curve Cryptography (ECC) and Rivest-Shamir-Adleman (RSA), and the supremacy of the projected method was established.
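The cross-correlation coefficient named above can be illustrated as the Pearson correlation between two flattened pixel sequences; the paper does not spell out its exact CCC formula, so Pearson's definition is assumed here.

```python
from math import sqrt

def correlation_coefficient(x, y):
    # Pearson correlation between two equal-length pixel sequences:
    # covariance divided by the product of the standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

In image-encryption evaluation, a correlation near 0 between plain and cipher images indicates that the cipher leaks little structure, while a correlation near 1 between the original and decrypted images indicates faithful recovery.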
Funding: This research work was funded by Institutional Fund Projects under Grant No. (IFPHI-038-156-2020). The authors therefore gratefully acknowledge technical and financial support from the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: This study presents a novel method for medical applications based on Quantum Computing (QC) and a few Machine Learning (ML) systems. QC has a primary advantage: it exploits quantum parallelism to deliver the results of the prime factorization problem in a matter of seconds, so this model has only recently been suggested for medical applications by researchers. A novel strategy, the Quantum Kernel Method (QKM), is proposed in this paper for data prediction. In the QKM process, the Linear Tunicate Swarm Algorithm (LTSA) optimization technique is used to calculate the loss function initially and is aimed at medical data. The output of the optimization is either 0 or 1, i.e., odd or even in QC; from this output value, the data is identified according to its class. Meanwhile, the method also reduces time, saves cost, and improves efficiency through a feature selection process, i.e., the filter method. After the features are extracted, QKM is deployed as a classification model, while the loss function is minimized by the LTSA. The motivation behind the minimization objective is to keep computation fast, and some computations can be performed more efficiently by the proposed model. In testing, the test data were evaluated with the minimal loss function. The outcomes were assessed in terms of accuracy, computational time, and so on, using databases such as Lymphography, Dermatology, and Arrhythmia.
Funding: The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project under grant no. (FP-206-43).
Abstract: Liver cancer is one of the major diseases with increased mortality in recent years across the globe. Manual detection of liver cancer is a tedious and laborious task, due to which Computer Aided Diagnosis (CAD) models have been developed to detect the presence of liver cancer accurately and classify its stages. Besides, the liver cancer segmentation outcome, obtained from medical images, is employed in the assessment of tumor volume, further treatment planning, and response monitoring. Hence, there is a need to develop automated tools for precise liver cancer detection. With this motivation, the current study introduces an Intelligent Artificial Intelligence with Equilibrium Optimizer based Liver Cancer Classification (IAIEO-LCC) model. The proposed IAIEO-LCC technique initially performs Median Filtering (MF)-based pre-processing and a data augmentation process. Besides, Kapur's entropy-based segmentation technique is used to identify the affected regions in the liver. Moreover, a VGG-19 based feature extractor and Equilibrium Optimizer (EO)-based hyperparameter tuning processes are involved to derive the feature vectors. At last, a Stacked Gated Recurrent Unit (SGRU) classifier is exploited to detect and classify liver cancer effectively. To demonstrate the superiority of the proposed IAIEO-LCC technique, a wide range of simulations was conducted and the results were inspected under different measures. The comparison study results infer that the proposed IAIEO-LCC technique achieved an improved accuracy of 98.52%.
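Kapur's entropy-based segmentation, used above to localize affected liver regions, picks the grayscale threshold that maximizes the summed Shannon entropies of the background and foreground pixel classes. A single-threshold sketch over a pixel histogram (the multi-threshold case is analogous):

```python
from math import log

def kapur_threshold(pixels, levels=256):
    # Kapur's method: choose t maximizing H(background <= t) + H(foreground > t).
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    prob = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(levels - 1):
        w0 = sum(prob[: t + 1])           # background class mass
        w1 = 1.0 - w0                     # foreground class mass
        if w0 <= 0 or w1 <= 0:
            continue
        h0 = -sum(p / w0 * log(p / w0) for p in prob[: t + 1] if p > 0)
        h1 = -sum(p / w1 * log(p / w1) for p in prob[t + 1:] if p > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t
```

On a clearly bimodal intensity distribution the maximizer falls between the two modes, which is what makes the method useful for separating lesion from background tissue.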
Funding: This work is supported by the Key Research and Development Project of Shandong Province (2019JZZY020124), China, and the Natural Science Foundation of Shandong Province (23170807), China.
Abstract: Atrial fibrillation is the most common persistent form of arrhythmia. A method based on the wavelet transform combined with a deep convolutional neural network is applied for automatic classification of electrocardiograms. Since the ECG signal is easily contaminated by interference, it is decomposed by the wavelet function into nine sub-signals at different frequency scales, and wavelet reconstruction is then carried out after segmented filtering to eliminate the influence of noise. A 24-layer convolutional neural network is used to extract hierarchical features via convolution kernels of different sizes, and finally a softmax classifier performs the classification. This paper applies the method to the ECG data set provided by the 2017 PhysioNet/CinC Challenge. After cross-validation, the method obtains 87.1% accuracy and an F1 score of 86.46%. Compared with existing classification methods, the proposed algorithm has higher accuracy and better generalization ability for ECG signal classification.
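The decompose-filter-reconstruct scheme described above can be illustrated with a single level of the Haar wavelet, the simplest wavelet basis; the paper's actual wavelet function and nine-scale decomposition are not reproduced here.

```python
def haar_step(signal):
    # One level of the Haar wavelet transform: pairwise averages give the
    # approximation (low-pass) band, pairwise half-differences the detail
    # (high-pass) band. Signal length must be even.
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    # Perfect reconstruction from one decomposition level.
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out
```

Denoising amounts to shrinking or zeroing detail coefficients in the noisy bands before calling the inverse, so baseline wander and high-frequency artifacts are suppressed while the QRS morphology survives in the approximation band.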
基金supported by National Nature Science Foundation of China under Grant No.61502259National Key R&D Program of China under Grant No.2018YFC0831704Natural Science Foundation of Shandong Province under Grant No.ZR2017MF056.
Abstract: Sentence semantic matching (SSM) is fundamental research for solving natural language processing tasks such as question answering and machine translation. The latest SSM research benefits from deep learning techniques by incorporating attention mechanisms to semantically match given sentences. However, how to fully capture the semantic context without losing significant features for sentence encoding is still a challenge. To address this challenge, we propose a deep feature fusion model and integrate it into the most popular deep learning architecture for the sentence matching task. The integrated architecture mainly consists of an embedding layer, a deep feature fusion layer, a matching layer, and a prediction layer. In addition, we compare the commonly used loss functions and propose a novel hybrid loss function integrating MSE and cross entropy, considering confidence intervals and threshold settings to preserve the indistinguishable instances during training. To evaluate our model's performance, we experiment on two real-world public data sets: LCQMC and Quora. The experimental results demonstrate that our model outperforms most existing advanced deep learning models for sentence matching, benefiting from our enhanced loss function and the deep feature fusion model for capturing semantic context.
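A hybrid loss combining MSE and cross entropy can be sketched as a convex combination of the two terms; the mixing weight `alpha` is a hypothetical parameter, and the paper's confidence-interval and threshold gating of hard instances is deliberately omitted from this simplification.

```python
from math import log

def mse(y_true, y_pred):
    # Mean squared error over matched pairs.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Binary cross-entropy, with probabilities clipped for numerical safety.
    return -sum(t * log(max(p, eps)) + (1 - t) * log(max(1 - p, eps))
                for t, p in zip(y_true, y_pred)) / len(y_true)

def hybrid_loss(y_true, y_pred, alpha=0.5):
    # Convex combination: MSE keeps predictions calibrated toward the
    # target scores, cross entropy sharpens the match/no-match decision.
    return alpha * mse(y_true, y_pred) + (1 - alpha) * cross_entropy(y_true, y_pred)
```

A confident, correct prediction drives both terms toward zero, while an uncertain one near 0.5 keeps a nonzero gradient from both components.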
Funding: This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant No. (D-356-611-1443).
Abstract: Rainfall prediction has become popular in real-time environments due to recent technological developments. Accurate and fast rainfall prediction models can be designed using machine learning (ML), statistical models, etc. Besides, feature selection approaches can be derived to eliminate the curse of dimensionality. In this aspect, this paper presents a novel chaotic spider monkey optimization with optimal kernel ridge regression (CSMO-OKRR) model for accurate rainfall prediction. The goal of the CSMO-OKRR technique is to properly predict rainfall using weather data. The proposed CSMO-OKRR technique encompasses three major processes, namely feature selection, prediction, and parameter tuning. Initially, the CSMO algorithm is employed to derive a useful subset of features and reduce computational complexity. In addition, the KRR model is used for the prediction of rainfall based on weather data. Lastly, the symbiotic organism search (SOS) algorithm is employed to properly tune the parameters involved. A series of simulations was performed to demonstrate the better performance of the CSMO-OKRR technique with respect to different measures. The simulation results reported enhanced outcomes of the CSMO-OKRR technique over existing techniques.
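Kernel ridge regression, the predictor in CSMO-OKRR, fits dual coefficients alpha = (K + lam*I)^-1 y over a kernel matrix K and predicts by kernel evaluations against the training points. A small self-contained sketch; the RBF kernel choice and the lam/gamma values are illustrative assumptions, not the tuned parameters from the paper.

```python
from math import exp

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian (RBF) kernel on scalar inputs.
    return exp(-gamma * (a - b) ** 2)

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(xs, ys, lam=1e-6, gamma=1.0):
    # Dual coefficients: alpha = (K + lam*I)^-1 y.
    n = len(xs)
    K = [[rbf_kernel(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve_linear(K, ys)

def kernel_ridge_predict(xs, alpha, x, gamma=1.0):
    # Prediction is a kernel-weighted sum over the training points.
    return sum(a * rbf_kernel(xi, x, gamma) for a, xi in zip(alpha, xs))
```

The regularizer lam trades training fit against smoothness, which is exactly the kind of parameter the abstract says the SOS algorithm tunes.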
Funding: This research work was funded by Institutional Fund Projects under Grant No. (IFPRC-215-249-2020). The authors therefore gratefully acknowledge technical and financial support from the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: Rapid advancements in the Industrial Internet of Things (IIoT) and artificial intelligence (AI) pose serious security issues by revealing secret data. Therefore, data security becomes a crucial issue in IIoT communication, where secrecy needs to be guaranteed in real time. Practically, AI techniques can be utilized to design image steganographic techniques in the IIoT. In addition, encryption techniques play an important role in protecting the actual information generated by IIoT devices from unauthorized access. To accomplish secure data transmission in the IIoT environment, this study presents a novel encryption with image steganography based data hiding technique (EIS-DHT) for the IIoT environment. The proposed EIS-DHT technique involves a new quantum black widow optimization (QBWO) to competently choose the pixel values for hiding secret data in the cover image. In addition, a multi-level discrete wavelet transform (DWT) based transformation process takes place. Besides, the secret image is divided into its R, G, and B bands, which are then individually encrypted using the Blowfish, Twofish, and Lorenz hyperchaotic systems. At last, the stego image is generated by placing the encrypted images into the optimum pixel locations of the cover image. To validate the enhanced data hiding performance of the EIS-DHT technique, a set of simulation analyses was carried out and the results were inspected in terms of different measures. The experimental outcomes stated the supremacy of the EIS-DHT technique over the other existing techniques and ensured maximum security.
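Pixel-level data hiding of the kind described above can be illustrated with plain least-significant-bit (LSB) embedding; the QBWO-optimized pixel locations, the DWT stage, and the per-band encryption are not reproduced here, and sequential positions stand in for the optimized ones.

```python
def embed_lsb(pixels, bits):
    # Write one payload bit into the least-significant bit of each pixel,
    # changing each carrier pixel by at most 1 intensity level.
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n_bits):
    # Recover the payload by reading the LSB of each carrier pixel.
    return [p & 1 for p in pixels[:n_bits]]
```

Because each pixel changes by at most one level, the stego image stays visually indistinguishable from the cover, which is why such schemes score high PSNR in evaluations like the one above.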
Funding: This study was funded by GCRF UK and was carried out as part of the project CoNTINuE (Capacity building in technology-driven innovation in healthcare).
Abstract: Routine immunization (RI) of children is the most effective and timely public health intervention for decreasing child mortality rates around the globe. Pakistan, being a low-and-middle-income country (LMIC), has one of the highest child mortality rates in the world, occurring mainly due to vaccine-preventable diseases (VPDs). For improving RI coverage, a critical need is to identify potential RI defaulters at an early stage, so that appropriate interventions can be targeted towards the population identified as at risk of missing their scheduled vaccine uptakes. In this paper, a machine learning (ML) based predictive model is proposed to predict defaulting and non-defaulting children on upcoming immunization visits and to examine the effect of the underlying contributing factors. The predictive model uses data obtained from the Paigham-e-Sehat study, comprising immunization records of 3,113 children. The design of the predictive model is based on obtaining optimal results across accuracy, specificity, and sensitivity, to ensure the model outcomes remain practically relevant to the problem addressed. Further optimization of the predictive model is obtained through the selection of significant features and the removal of data bias. Nine machine learning algorithms were applied to predict defaulting children for the next immunization visit. The results showed that the random forest model achieves the optimal accuracy of 81.9% with 83.6% sensitivity and 80.3% specificity. The main determinants of vaccination coverage were found to be vaccine coverage at birth, parental education, and the socioeconomic conditions of the defaulting group. This information can assist relevant policy makers in taking proactive and effective measures to develop evidence-based, targeted, and timely interventions for defaulting children.
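The accuracy, sensitivity, and specificity figures reported above follow directly from confusion-matrix counts, with defaulters treated as the positive class. A minimal helper:

```python
def classification_metrics(tp, fp, tn, fn):
    # Accuracy, sensitivity (recall on the positive/defaulter class), and
    # specificity (recall on the negative class) from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

Balancing sensitivity against specificity matters here: missing a true defaulter (a false negative) costs an unvaccinated child, while a false positive only costs an unnecessary follow-up.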
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the Project Number (IFPIP-941-137-1442) and to King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: The advanced development of multimedia-on-demand traffic in different forms of audio, video, and images has extensively shifted the vision of the Internet of Things (IoT) from scalar data to the Internet of Multimedia Things (IoMT). Since Unmanned Aerial Vehicles (UAVs) generate a massive quantity of multimedia data, they become part of the IoMT and are commonly employed in diverse application areas, especially for capturing remote sensing (RS) images. At the same time, the interpretation of the captured RS images is also a crucial issue, which can be addressed by multi-label classification and computational-linguistics-based image captioning techniques. To achieve this, this paper presents an efficient low-complexity encoding technique with multi-label classification and image captioning for UAV-based RS images. The presented model primarily involves a low-complexity encoder using the Neighborhood Correlation Sequence (NCS) with a Burrows-Wheeler Transform (BWT) technique, called LCE-BWT, for encoding the RS images captured by the UAV. The application of NCS greatly reduces the computational complexity and requires fewer resources for image transmission. Secondly, a deep learning (DL) based shallow convolutional neural network for RS image classification (SCNN-RSIC) technique is presented to determine the multiple class labels of the RS image, showing the novelty of the work. Finally, the computational-linguistics-based Bidirectional Encoder Representations from Transformers (BERT) technique is applied for image captioning, to provide a proficient textual description of the RS image. The performance of the presented technique is tested using the UCM dataset. The simulation outcomes imply that the presented model obtains effective compression performance, reconstructed image quality, classification results, and image captioning outcomes.
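The Burrows-Wheeler transform (BWT) at the heart of the LCE-BWT encoder permutes its input into a form with long runs of repeated symbols, which downstream entropy coders compress well. A naive, self-contained sketch (production encoders build the transform from suffix arrays instead of materializing every rotation):

```python
def bwt(text, sentinel="$"):
    # Burrows-Wheeler transform: last column of the sorted rotation matrix
    # of text + sentinel. The sentinel marks the original string's end.
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last, sentinel="$"):
    # Rebuild the sorted rotation table one column at a time, then return
    # the row that ends with the sentinel (the original string).
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(c + row for c, row in zip(last, table))
    row = next(r for r in table if r.endswith(sentinel))
    return row[:-1]
```

The transform is lossless, so the decoder at the receiving end recovers the RS image bytes exactly before reconstruction.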
Funding: This project was supported financially by Institution Fund Projects under Grant No. (IFPIP-1308-612-1442).
Abstract: In recent years, Smart City Infrastructures (SCI) have become familiar, and intelligent models have been designed to improve the quality of living in smart cities. Simultaneously, anomaly detection in SCI has become a hot research topic and is widely explored to enhance the safety of pedestrians. The increasing popularity of video surveillance systems and the drastic increase in the amount of collected video make the conventional physical investigation method for identifying abnormal actions a laborious process. In this background, Deep Learning (DL) models can be used to detect anomalies found through video surveillance systems. The current research paper develops an Internet of Things Assisted Deep Learning Enabled Anomaly Detection Technique for Smart City Infrastructures, named the IoTAD-SCI technique. The aim of the proposed IoTAD-SCI technique is mainly to identify the existence of anomalies in the smart city environment. Besides, the IoTAD-SCI technique involves the design of a Deep Consensus Network (DCN) model to detect anomalies in input video frames. In addition, the Arithmetic Optimization Algorithm (AOA) is executed to tune the hyperparameters of the DCN model. Moreover, an ID3 classifier is also utilized to classify the identified objects into different classes. The experimental analysis was conducted for the proposed IoTAD-SCI technique on the benchmark UCSD anomaly detection dataset, and the results were inspected under different measures. The simulation results infer the superiority of the proposed IoTAD-SCI technique under different metrics.
Funding: This project was supported financially by Institution Fund Projects under Grant No. (IFPIP-249-145-1442).
Abstract: Medical data classification has become a hot research topic in the healthcare sector, aiding physicians in decision making. Besides, advances in machine learning (ML) techniques help to perform the classification task effectively. With this motivation, this paper presents a Fuzzy Clustering Approach Based on Breadth-first Search Algorithm (FCA-BFS) with an optimal support vector machine (OSVM) model, named FCABFS-OSVM, for medical data classification. The proposed FCABFS-OSVM technique intends to classify healthcare data through clustering and classification models. Besides, the proposed FCABFS-OSVM technique involves the design of the FCABFS technique to cluster the medical data, which helps to boost the classification performance. Moreover, the OSVM model investigates the clustered medical data to perform the classification process. Furthermore, the Archimedes optimization algorithm (AOA) is utilized to tune the SVM parameters and boost the medical data classification results. A wide range of simulations takes place to highlight the promising performance of the FCABFS-OSVM technique. Extensive comparison studies reported the enhanced outcomes of the FCABFS-OSVM technique over recent state-of-the-art approaches.
基金funded by the Deanship of Scientific Research(DSR)at King Abdulaziz University(KAU)Jeddah,Saudi Arabia,under grant No.(PH:13-130-1442).
Abstract: The accuracy of a statistical learning model depends on the learning technique used, which in turn depends on the dataset's values. In most research studies, the existence of missing values (MVs) is a vital problem. In addition, any dataset with MVs cannot be used for further analysis or with any data-driven tool, especially when the percentage of MVs is high. In this paper, the authors propose a novel algorithm for dealing with MVs based on feature selection (FS) by a similarity classifier with a fuzzy entropy measure. The proposed algorithm imputes MVs in cumulative order. The candidate feature to be manipulated is selected using the similarity classifier with Parkash's fuzzy entropy measure. The predictive model used to predict MVs within the candidate feature is the Bayesian Ridge Regression (BRR) technique. Furthermore, any imputed features are incorporated within the BRR equation to impute the MVs in the next chosen incomplete feature. The proposed algorithm was compared against some practical state-of-the-art imputation methods in an experiment on four medical datasets gathered from several database repositories, with MVs generated from the three missingness mechanisms. The evaluation metrics of mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2 score) were used to measure the performance. The results exhibited that performance varies depending on the size of the dataset, the amount of MVs, and the missingness mechanism type. Moreover, compared to other methods, the results showed that the proposed method gives better accuracy and less error in most cases.
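The three evaluation metrics named above (MAE, RMSE, and the R2 score) can be computed directly from paired true and imputed values:

```python
from math import sqrt

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the imputation errors.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean square error: penalizes large errors more than MAE.
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 minus residual over total variance.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

RMSE is never smaller than MAE on the same data, so reporting both (as the paper does) reveals whether the imputation error is dominated by a few large misses or spread evenly.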
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the Project Number (IFPIP-153-611-1442) and to King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: Industrial Control Systems (ICS) can be employed in industrial processes to reduce manual labor, handle complicated industrial system processes, and communicate effectively. The Internet of Things (IoT) integrates numerous sets of sensors and devices via a data network, enabling independent processes. The incorporation of the IoT in the industrial sector leads to the design of the Industrial Internet of Things (IIoT), which finds use in water distribution systems, power plants, etc. Since the IIoT is susceptible to different kinds of attacks due to its use of Internet connections, an effective forensic investigation process becomes essential. This study offers the design of an intelligent forensic investigation using an optimal stacked autoencoder for critical industrial infrastructures. The proposed strategy involves the design of manta ray foraging optimization (MRFO) based feature selection with an optimal stacked autoencoder (OSAE) model, named the MFROFS-OSAE approach. The primary objective of the MFROFS-OSAE technique is to determine the presence of abnormal events in critical industrial infrastructures. The MFROFS-OSAE approach involves several subprocesses, namely data gathering, data handling, feature selection, classification, and parameter tuning. Besides, the MRFO based feature selection approach is designed for the optimal selection of feature subsets. Moreover, the OSAE based classifier is derived to detect abnormal events, and the parameter tuning process is carried out via the coyote optimization algorithm (COA). The performance validation of the MFROFS-OSAE technique takes place using a benchmark dataset, and the experimental results reported the betterment of the MFROFS-OSAE technique over recent approaches in terms of different measures.