In the domain of medical imaging, the accurate detection and classification of brain tumors is critically important. This study introduces an advanced method for identifying camouflaged brain tumors within images. The proposed model consists of three steps: feature extraction, feature fusion, and classification. The core of the model is a feature extraction framework that combines color-transformed images with deep learning techniques, using the ResNet50 Convolutional Neural Network (CNN) architecture. The focus is on extracting robust features from MRI images, particularly weighted average features taken from the first convolutional layer, which are valued for their discriminative power. To enhance model robustness, a novel feature fusion technique is introduced based on the Marine Predator Algorithm (MPA), which is inspired by the hunting behavior of marine predators and has shown promise in optimizing complex problems. By combining color transformations, deep learning, and feature fusion via MPA, the proposed methodology accurately detects and classifies brain tumors in camouflaged images, achieving an accuracy of 98.72% on a more complex dataset and surpassing existing state-of-the-art methods. The importance of this research lies in its potential to advance medical image analysis, particularly brain tumor diagnosis, where early and accurate classification is critical for improved patient outcomes.
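To make the weighted-average feature extraction step more concrete, the sketch below shows one way to capture activations from the first convolutional layer of a pretrained ResNet50 with a forward hook and collapse them into a per-channel weighted average. It is a minimal illustration in PyTorch, not the authors' implementation; the spatial-softmax weighting and the placeholder input are assumptions.

```python
import torch
import torchvision

# Pretrained ResNet50; capture the first convolutional layer's output with a hook.
model = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.DEFAULT)
model.eval()

captured = {}
def save_conv1(module, inputs, output):
    captured["conv1"] = output.detach()
model.conv1.register_forward_hook(save_conv1)

# Placeholder for a color-transformed, preprocessed MRI slice.
image = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    model(image)

feats = captured["conv1"]                            # (1, 64, 112, 112)
weights = torch.softmax(feats.flatten(2), dim=-1)    # assumed spatial weighting scheme
weighted_avg = (feats.flatten(2) * weights).sum(-1)  # (1, 64) weighted-average descriptor
print(weighted_avg.shape)
```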
Cloud computing has become one of the most prominent terms in the IT world because it delivers computing as a utility. The typical use of cloud computing as a resource has changed the landscape of computing. Its increased flexibility, better reliability, greater scalability, and reduced costs, together with the pay-per-use model of the cloud environment, have captivated businesses and individuals alike. Cloud computing is a completely Internet-dependent technology in which client data are stored and maintained in the data center of a cloud provider such as Google, Amazon, Apple Inc., or Microsoft. The Anomaly Detection System is one of the Intrusion Detection techniques; it is an area of the cloud environment being developed for detecting unusual activities in cloud networks. Although a variety of Intrusion Detection techniques are available in the cloud environment, this review paper surveys and focuses on different IDSs in cloud networks through different categorizations and conducts a comparative study of the security measures of Dropbox, Google Drive, and iCloud to illuminate their strengths and weaknesses in terms of security.
In order to forecast promising technologies in the field of next-generation mobile communication, various patent indicators were analyzed, such as citations per patent, patent family information, patent share, increase rate, and patent activity. These indicators were quantified into several indexes and then integrated into an evaluation score used to identify promising technologies. As a result of the suggested patent analysis, four technologies out of twenty-two in the detailed classification were selected, showing outstanding technology competitiveness, high patent share and increase rates, as well as high recent-patent ratios and triad-patent-family ratios. Each of the selected technologies scored more than 10 points in total, and the following four were suggested as promising technologies in the field of next-generation mobile communication: 1) 3GPP-based mobile communication, 2) beyond-4G mobile communication, 3) IEEE 802.16-based mobile communication, which fall under the medium classification of broadband mobile communication systems, and 4) testing/certification systems for mobile communication, which fall under the medium classification of mobile communication testing/certification systems.
There are numerous information technology solutions, including hardware and software. A company that provides such a solution should understand customer needs for the purposes of its sales strategy and upgrade policy. These needs are also directly connected to user satisfaction. However, users have their own points of view on those needs and may not be able to identify the requirements for improving the solution. SERVQUAL can be an appropriate method for defining and measuring customer satisfaction with information technology solutions. As a case study of customer satisfaction, the modified SERVQUAL items and scoring method are applied to a cyber-infrastructure system named CyberLab in Korea. The measurement results of user satisfaction for CyberLab are provided to confirm that the proposed method performs as intended. From the results, we can score the satisfaction level of users and identify their needs from various aspects. The total user satisfaction level for CyberLab is scored at 88.3.
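The scoring idea, aggregating dimension-level ratings into a single satisfaction figure such as the 88.3 reported above, can be sketched as a simple weighted average. The dimensions, weights, and ratings below are placeholders, not the modified SERVQUAL items actually used for CyberLab.

```python
# Hypothetical SERVQUAL-style dimensions: (mean user rating on a 0-100 scale, weight).
dimensions = {
    "tangibles":      (85.0, 0.15),
    "reliability":    (90.0, 0.25),
    "responsiveness": (88.0, 0.20),
    "assurance":      (89.0, 0.20),
    "empathy":        (87.0, 0.20),
}

# Weighted total on the same 0-100 scale.
total = sum(rating * weight for rating, weight in dimensions.values())
print(f"Total user satisfaction score: {total:.1f}")
```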
This study examines the key factors that impact the successful adoption of a Human Resource Information System (HRIS) within the Aqaba Special Economic Zone Authority (ASEZA), Jordan. To accomplish the purpose of the study, four critical factors are examined: first, the TAM model (Perceived Ease of Use (PEOU) and Perceived Usefulness (PU)); second, Information Technology Infrastructure (ITI); third, Top Management Support (TMS); and finally, Individual Experience with Computers (IEC). The research model was applied to collect data from questionnaires answered by 45 users of HRIS as a source of primary data; based on a convenience sample, the response rate was about 91%. The results were analyzed using the Statistical Package for the Social Sciences (SPSS). Multiple regression analysis indicated that the research variables have a significant relationship with the successful adoption of HRIS. The findings indicated that IT infrastructure has a positive and significant effect on the successful adoption of HRIS, but no significant effect was found for PU, PEOU, TMS, or IEC. Finally, the results indicated no statistically significant differences in HRIS adoption across demographic characteristics. Based on these findings, the researchers propose a set of recommendations for better adoption of HRIS in ASEZA.
Information and Communication Technologies (ICT) have made m-government applications available as an intermediate technology for providing effective and efficient government services to the public. Due to the high rate of corruption in developing states, government policies have diversified governmental services from offline to virtualized delivery in order to provide accessibility, transparency, and accountability through mobile government. Deploying such ICT tools also offers a unique opportunity to restore public confidence in government, which has been damaged by corruption. Virtualization of government services became compulsory because the high rate of corruption in the economic context had become a serious obstacle to the economic development of developing states. The virtualized services aim to bring governmental services onto a mobile platform in order to make them more transparent to the public. This research paper comparatively investigates the mobile government services of Malta and Singapore, which are classified as developing countries. The comparison criteria are based on the demographic structure of each country, its m-government policies, and its ICT infrastructure. The findings of this study expose the impact of e-government practices and the differences between the two countries in terms of applicability, and provide a specific point of view on m-government adoption policy.
The relation between Human Resource Management (HRM) and firm performance has been analyzed statistically by many researchers in the literature. However, there are very few nonlinear approaches in the literature for finding the relation between HRM and firm performance. This paper examines the relationship between human resource management and organizational performance through the use of a nonlinear modeling technique. The modeling is based on Radial Basis Functions (RBF), a nonlinear modeling technique from the literature. The relation between 12 input and 9 output parameters, collected from 54 companies in Turkey, is investigated in this research, indicating that the relationship between human resource management and organizational performance can be modelled nonlinearly.
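As a rough illustration of the modeling approach, the snippet below fits a Gaussian RBF regression from 12 inputs to 9 outputs on synthetic data. The random data, kernel width, and ridge term are assumptions standing in for the 54-company survey; it is a sketch of the technique, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(54, 12))   # 54 firms, 12 HRM input measures (synthetic)
Y = rng.normal(size=(54, 9))    # 9 performance outputs (synthetic)

centers = X                      # one RBF center per training sample
gamma = 0.1                      # assumed kernel width

def rbf_design(A, C, gamma):
    # Gaussian kernel matrix from pairwise squared distances.
    d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

Phi = rbf_design(X, centers, gamma)
# Ridge-regularized least squares for the output-layer weights.
W = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(len(centers)), Phi.T @ Y)
Y_hat = Phi @ W
print("training MSE:", float(((Y_hat - Y) ** 2).mean()))
```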
For a 5G wireless communication system, a convolutional deep neural network (CNN) is employed to synthesize a robust channel state estimator (CSE). The proposed CSE extracts channel information from transmit-and-receive pairs through offline training to estimate the channel state information. It also utilizes pilots to offer more helpful information about the communication channel. The proposed CNN-CSE performance is compared with previously published results for bidirectional long short-term memory (BiLSTM) and LSTM NN-based CSEs. The CNN-CSE achieves outstanding performance when sufficient pilots are available but loses its functionality with limited pilots compared with the BiLSTM- and LSTM-based estimators. Using three different loss-function-based classification layers and the Adam optimization algorithm, a comparative study was conducted to assess the performance of the presented DNN-based CSEs. The BiLSTM-CSE outperforms the LSTM, CNN, conventional least squares (LS), and minimum mean square error (MMSE) CSEs. In addition, the computational and learning time complexities of the DNN-CSEs are provided. These estimators are promising for 5G and future communication systems because they can analyze large amounts of data, discover statistical dependencies, learn correlations between features, and generalize the acquired knowledge.
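The following sketch gives a flavor of a CNN-based channel state estimator: a small convolutional network that refines a coarse, pilot-interpolated channel grid (real and imaginary parts as two channels). The layer sizes and resource-grid shape are assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class CNNChannelEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, kernel_size=3, padding=1),
        )

    def forward(self, coarse_estimate):
        # Input/output: (batch, 2, subcarriers, OFDM symbols).
        return self.net(coarse_estimate)

# A batch of coarse estimates over an assumed 72-subcarrier x 14-symbol grid.
coarse = torch.randn(8, 2, 72, 14)
refined = CNNChannelEstimator()(coarse)
print(refined.shape)  # torch.Size([8, 2, 72, 14])
# Training would minimize the MSE against the true channel with Adam, as described above.
```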
Identifying fruit disease manually is time-consuming, expert-dependent, and expensive; thus, a computer-based automated system is widely required. Fruit diseases affect not only the quality but also the quantity. As a result, it is possible to detect the disease early on and cure the fruits using computer-based techniques. However, computer-based methods face several challenges, including low contrast, a lack of datasets for training a model, and inappropriate feature extraction for final classification. In this paper, we propose an automated framework for detecting apple fruit leaf diseases using a CNN and a hybrid optimization algorithm. Data augmentation is performed initially to balance the selected apple dataset. After that, two pre-trained deep models are fine-tuned and trained using transfer learning. Then, a fusion technique named Parallel Correlation Threshold (PCT) is proposed. The fused feature vector is optimized in the next step using a hybrid optimization algorithm. The selected features are finally classified using machine learning algorithms. Four different experiments have been carried out on the augmented Plant Village dataset, yielding a best accuracy of 99.8%. The accuracy of the proposed framework is also compared to that of several neural networks, and it outperforms them all.
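For the transfer-learning step, the sketch below fine-tunes one pre-trained backbone by freezing its weights and replacing the classifier head; ResNet18, the class count, and the learning rate are placeholders for whichever models and settings the authors used.

```python
import torch
import torch.nn as nn
import torchvision

num_classes = 4  # placeholder for the apple-leaf disease classes

model = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                               # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative step on a dummy augmented batch.
images = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, num_classes, (16,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```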
Diagnosing gastrointestinal cancer by classical means is a hazardous procedure. Recent years have witnessed several computerized solutions for stomach disease detection and classification. However, existing techniques face challenges such as irrelevant feature extraction, high similarity among different disease symptoms, and least-important features drawn from a single source. This paper designs a new deep learning-based architecture based on the fusion of two models, residual blocks and an autoencoder. First, the Hyper-Kvasir dataset was employed to evaluate the proposed work. The research selected a pre-trained convolutional neural network (CNN) model and improved it with several residual blocks; this process aims to improve the learning capability of deep models and reduce the number of parameters. In addition, this article designs an autoencoder-based network consisting of five convolutional layers in the encoder stage and five in the decoder phase. The global average pooling and convolutional layers are selected for feature extraction, optimized by a hybrid Marine Predator optimization and Slime Mould optimization algorithm. The features of both models are fused using a novel fusion technique and later classified using an Artificial Neural Network classifier. The experiments were performed on the Hyper-Kvasir dataset, which consists of 23 stomach-infected classes. The proposed method obtained an improved accuracy of 93.90% on this dataset. A comparison with several recent techniques shows that the proposed method improves on their accuracy.
Aspect-based sentiment analysis aims to detect and classify sentiment polarities as negative, positive, or neutral while associating them with their identified aspects from the corresponding context. In this regard, prior methodologies widely utilize either word embeddings or tree-based representations. Meanwhile, the separate use of these deep features, such as word embeddings and tree-based dependencies, has become a significant cause of information loss. Generally, word embeddings preserve the syntactic and semantic relations between a pair of terms in a sentence, while the tree-based structure conserves the grammatical and logical dependencies of the context. In addition, sentence-oriented word position is a critical factor that influences the contextual information of a targeted sentence; therefore, knowledge of the position-oriented information of words in a sentence is considered significant. In this study, we propose to use word embeddings, tree-based representations, and contextual position information in combination to evaluate whether their combination improves the effectiveness of the results. Their joint utilization enhances the accurate identification and extraction of targeted aspect terms, which also influences the classification process. In this research paper, we propose a method named Attention-Based Multi-Channel Convolutional Neural Network (Att-MC-CNN) that jointly utilizes these three deep features: word embeddings, tree-based structure, and contextual position information. These three inputs are delivered to a Multi-Channel Convolutional Neural Network (MC-CNN) that identifies and extracts the potential terms and classifies their polarities. These terms are further filtered with an attention mechanism, which determines the most significant words. The empirical analysis proves the proposed approach's effectiveness compared to existing techniques when evaluated on standard datasets. The experimental results show that our approach outperforms others in the F1 measure, with an overall achievement of 94% in identifying aspects and 92% in the task of sentiment classification.
Segmenting brain tumors in Magnetic Resonance Imaging (MRI) volumes is challenging due to their diffuse and irregular shapes. Recently, 2D and 3D deep neural networks have become popular for medical image segmentation because of the availability of labelled datasets. However, 3D networks can be computationally expensive and require significant training resources. This research proposes a 3D deep learning model for brain tumor segmentation that uses lightweight feature extraction modules to improve performance without compromising contextual information or accuracy. The proposed model, called Hybrid Attention-Based Residual Unet (HA-RUnet), is based on the Unet architecture and utilizes residual blocks to extract low- and high-level features from MRI volumes. Attention and Squeeze-Excitation (SE) modules are also integrated at different levels to learn attention-aware features adaptively within local and global receptive fields. The proposed model was trained on the BraTS-2020 dataset and achieved Dice scores of 0.867, 0.813, and 0.787, as well as sensitivities of 0.93, 0.88, and 0.83, for Whole Tumor, Tumor Core, and Enhancing Tumor on the test dataset, respectively. Experimental results show that the proposed HA-RUnet model outperforms the ResUnet and AResUnet base models while having fewer parameters than other state-of-the-art models. Overall, the proposed HA-RUnet model can improve brain tumor segmentation accuracy and facilitate appropriate diagnosis and treatment planning for medical practitioners.
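The reported Dice scores and sensitivities can be computed from binary masks with a few lines; the sketch below evaluates one region (e.g., Whole Tumor) on toy volumes standing in for predicted and ground-truth segmentations.

```python
import numpy as np

def dice_and_sensitivity(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)
    sensitivity = intersection / (truth.sum() + 1e-8)  # true-positive rate
    return dice, sensitivity

# Toy 3D masks in place of real BraTS volumes.
pred = np.zeros((64, 64, 64), dtype=np.uint8);  pred[20:40, 20:40, 20:40] = 1
truth = np.zeros((64, 64, 64), dtype=np.uint8); truth[22:42, 22:42, 22:42] = 1
print(dice_and_sensitivity(pred, truth))
```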
This study proposes a new flexible family of distributions called the Lambert-G family. The Lambert family is very flexible and exhibits desirable properties. Its three-parameter special sub-models provide all significant monotonic and non-monotonic failure rates. A special sub-model of the Lambert family, called the Lambert-Lomax (LL) distribution, is investigated. General expressions for the LL statistical properties are established. Characterizations of the LL distribution are addressed mathematically based on its hazard function. The estimation of the LL parameters is discussed using six estimation methods, whose performance is explored through simulation experiments. The usefulness and flexibility of the LL distribution are demonstrated empirically using two real-life data sets. The LL model provides a better fit than the exponentiated Lomax, inverse power Lomax, Lomax-Rayleigh, power Lomax, and Lomax distributions.
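Because the Lambert-Lomax distribution is not available in standard libraries, the sketch below uses the plain Lomax and another heavy-tailed competitor from SciPy only to illustrate the fit-and-compare workflow (maximum-likelihood fitting followed by an information-criterion comparison); the simulated data are placeholders for the two real-life data sets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.lomax.rvs(c=2.5, size=200, random_state=rng)  # placeholder data

def aic(dist, data):
    params = dist.fit(data)                       # maximum-likelihood estimates
    loglik = dist.logpdf(data, *params).sum()
    return 2 * len(params) - 2 * loglik           # lower AIC = better fit

for name, dist in [("Lomax", stats.lomax), ("Exponentiated Weibull", stats.exponweib)]:
    print(f"{name}: AIC = {aic(dist, data):.2f}")
```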
Lung cancer is among the most frequent cancers in the world, with over one million deaths per year. Classification is required for lung cancer diagnosis and therapy to be effective, accurate, and reliable. Gene expression microarrays have made it possible to find genetic biomarkers for cancer diagnosis and prediction in a high-throughput manner. Machine Learning (ML) has been widely used to diagnose and classify lung cancer, with the performance of ML methods evaluated to identify the most appropriate technique. Identifying and selecting gene expression patterns can help in lung cancer diagnosis and classification. Normally, microarrays include many genes and may cause confusion or false predictions. Therefore, the Arithmetic Optimization Algorithm (AOA) is used to identify the optimal gene subset and reduce the number of selected genes, which allows the classifiers to yield the best performance for lung cancer classification. In addition, we propose a modified version of AOA that can work effectively on high-dimensional datasets. In the modified AOA, the features are ranked by their weights, which are used to initialize the AOA population. The exploitation process of AOA is then enhanced by developing a local search algorithm based on two neighborhood strategies. Finally, the efficiency of the proposed methods was evaluated on gene expression datasets related to lung cancer using stratified 4-fold cross-validation. The method's efficacy in selecting the optimal gene subset is underscored by its ability to keep the proportion of selected features between 10% and 25%. Moreover, the approach significantly enhances lung cancer prediction accuracy: Lung_Harvard1 achieved an accuracy of 97.5%, Lung_Harvard2 and Lung_Michigan both achieved 100%, Lung_Adenocarcinoma obtained 88.2%, and Lung_Ontario achieved 87.5%. In conclusion, the results indicate the promise of the proposed modified AOA approach for classifying microarray cancer data.
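The evaluation protocol, stratified 4-fold cross-validation of a classifier restricted to the selected gene subset, can be sketched as follows. The synthetic expression matrix, the linear SVM, and the randomly chosen index set are placeholders for the real datasets and the AOA-selected genes.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(96, 5000))                  # samples x gene-expression features
y = rng.integers(0, 2, size=96)                  # binary labels (e.g., tumor vs. normal)

selected_genes = rng.choice(5000, size=500, replace=False)  # stand-in for AOA output
X_sel = X[:, selected_genes]

cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=cv, scoring="accuracy")
print("fold accuracies:", scores, "mean:", scores.mean())
```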
Phishing, an Internet fraud in which individuals are deceived into revealing critical personal and account information, poses a significant risk to both consumers and web-based institutions. Data indicate a persistent rise in phishing attacks. Moreover, these fraudulent schemes are becoming progressively more intricate, making them more challenging to identify. Hence, it is imperative to utilize sophisticated algorithms to address this issue. Machine learning is a highly effective approach for identifying and uncovering these harmful behaviors, and machine learning (ML) approaches can identify common characteristics in most phishing attacks. In this paper, we propose an ensemble approach and compare it with six machine learning techniques to determine the type of a website and whether it is normal or not, based on two phishing datasets. We then apply a normalization technique to the datasets to transform all features into the same range. The findings for all algorithms on the first dataset, in terms of accuracy, precision, recall, and F1-score respectively, are: Decision Tree (DT) (0.964, 0.961, 0.976, 0.968), Random Forest (RF) (0.970, 0.964, 0.984, 0.974), Gradient Boosting (GB) (0.960, 0.959, 0.971, 0.965), XGBoost (XGB) (0.973, 0.976, 0.976, 0.976), AdaBoost (0.934, 0.934, 0.950, 0.942), Multi-Layer Perceptron (MLP) (0.970, 0.971, 0.976, 0.974), and Voting (0.978, 0.975, 0.987, 0.981). The Voting classifier therefore gave the best results. On the second dataset, all the algorithms gave the same results on the four evaluation metrics, which indicates that each of them can effectively accomplish the prediction task. This approach also outperformed previous work in detecting phishing websites, with high accuracy, a lower false negative rate, a shorter prediction time, and a lower false positive rate.
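A minimal scikit-learn version of the ensemble pipeline, min-max normalization followed by a soft-voting combination of several base learners, is sketched below. XGBoost and AdaBoost are omitted to keep the snippet dependency-light, and the synthetic dataset stands in for the phishing feature tables.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

voting = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
    ],
    voting="soft",
)
model = make_pipeline(MinMaxScaler(), voting)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```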
Centralized storage and identity identification methods pose many risks, including hacker attacks, data misuse, and single points of failure. Additionally, existing centralized identity management methods face interoperability issues and rely on a single identity provider, leaving users without control over their identities. Therefore, this paper proposes a mechanism for identity identification and data sharing based on decentralized identifiers. The scheme utilizes blockchain technology to store the identifiers and data hashes on the chain to ensure permanent identity recognition and data integrity. Data are stored on the InterPlanetary File System (IPFS) to avoid the risk of single points of failure and to enhance data persistence and availability. At the same time, compliance with World Wide Web Consortium (W3C) standards for decentralized identifiers and verifiable credentials increases the mechanism's scalability and interoperability.
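The on-chain/off-chain split can be illustrated with standard-library code: the raw record would live on IPFS, while only its hash and a W3C-style DID document are anchored on the chain. The DID value, key type, and record fields below are illustrative placeholders rather than the paper's concrete scheme.

```python
import hashlib
import json

# Record to be stored off-chain (e.g., on IPFS); only its hash goes on-chain.
record = {"holder": "example user", "credential": "degree certificate"}
record_bytes = json.dumps(record, sort_keys=True).encode("utf-8")
record_hash = hashlib.sha256(record_bytes).hexdigest()

# Minimal W3C-style DID document (public key material omitted for brevity).
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
    }],
}

print("hash anchored on-chain:", record_hash)
print(json.dumps(did_document, indent=2))
```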
Myelin damage and a wide range of symptoms are caused by the immune system targeting the central nervous system in Multiple Sclerosis (MS), a chronic autoimmune neurological condition. It disrupts signals between the brain and body, causing symptoms including tiredness, muscle weakness, and difficulty with memory and balance. Traditional methods for detecting MS are less precise and time-consuming, which is a major gap in addressing this problem. This gap has motivated the investigation of new methods to improve the consistency and accuracy of MS detection. This paper proposes a novel approach named FAD, consisting of a Deep Neural Network (DNN) fused with an Artificial Neural Network (ANN), to detect MS more efficiently and accurately, utilizing regularization to combat over-fitting. We use gene expression data for MS research from the GEO GSE17048 dataset. The dataset is preprocessed by performing encoding, standardization using a min-max scaler, and feature selection using Recursive Feature Elimination with Cross-Validation (RFECV) to optimize and refine the dataset. For comparison on the same dataset, another deep-learning hybrid model is integrated with different ML models, including Random Forest (RF), Gradient Boosting (GB), XGBoost (XGB), K-Nearest Neighbors (KNN), and Decision Tree (DT). Results reveal that FAD performed exceptionally well on the dataset, evidenced by an accuracy of 96.55% and an F1-score of 96.71%. The proposed FAD approach achieves remarkable results, with better accuracy than previous studies.
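The preprocessing pipeline (min-max scaling followed by RFECV) maps directly onto scikit-learn, as sketched below. The synthetic matrix stands in for the GSE17048 expression data, and logistic regression is an assumed base estimator for the recursive elimination.

```python
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(144, 300))      # samples x expression features (placeholder sizes)
y = rng.integers(0, 2, size=144)     # MS vs. control labels

X_scaled = MinMaxScaler().fit_transform(X)
selector = RFECV(
    estimator=LogisticRegression(max_iter=1000),
    step=10,                          # drop 10 features per elimination round
    cv=StratifiedKFold(n_splits=5),
    scoring="accuracy",
)
X_selected = selector.fit_transform(X_scaled, y)
print("features kept:", selector.n_features_)
```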
As the Internet of Things (IoT) continues to expand, incorporating a vast array of devices into a digital ecosystem also increases the risk of cyber threats, necessitating robust defense mechanisms. This paper presents an innovative hybrid deep learning architecture that excels at detecting IoT threats in real-world settings. Our proposed model combines Convolutional Neural Networks (CNN), Bidirectional Long Short-Term Memory (BLSTM), Gated Recurrent Units (GRU), and attention mechanisms into a cohesive framework. This integrated structure aims to enhance the detection and classification of complex cyber threats while accommodating the operational constraints of diverse IoT systems. We evaluated our model using the RT-IoT2022 dataset, which includes various devices, standard operations, and simulated attacks. Our research's significance lies in the comprehensive evaluation metrics, including Cohen's Kappa and the Matthews Correlation Coefficient (MCC), which underscore the model's reliability and predictive quality. Our model surpassed traditional machine learning algorithms and the state of the art, achieving over 99.6% precision, recall, F1-score, and accuracy, together with favorable False Positive Rate (FPR) and detection time, effectively identifying specific threats such as Message Queuing Telemetry Transport (MQTT) Publish, Denial of Service Synchronize attacks generated with a network packet-crafting tool (DOS SYN Hping), and Network Mapper Operating System Detection (NMAP OS DETECTION). The experimental analysis reveals a significant improvement over existing detection systems, substantially strengthening IoT security paradigms. Our model effectively addresses the need for advanced, dependable, and adaptable security solutions, demonstrating the power of deep learning in strengthening IoT ecosystems amidst the constantly evolving cyber threat landscape. This achievement marks a significant stride towards protecting the integrity of IoT infrastructure, ensuring operational resilience, and building privacy into this groundbreaking technology.
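A compact PyTorch sketch of the CNN + BiLSTM + GRU + attention stack is shown below; layer widths, the flow-feature count, and the number of attack classes are assumptions chosen only to make the snippet runnable, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HybridIoTDetector(nn.Module):
    def __init__(self, num_features=83, num_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU())
        self.bilstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.gru = nn.GRU(128, 64, batch_first=True)
        self.attn = nn.Linear(64, 1)            # scores each step of the sequence
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                        # x: (batch, num_features)
        h = self.cnn(x.unsqueeze(1))             # (batch, 32, num_features)
        h, _ = self.bilstm(h.transpose(1, 2))    # (batch, num_features, 128)
        h, _ = self.gru(h)                       # (batch, num_features, 64)
        weights = torch.softmax(self.attn(h), dim=1)   # attention over the sequence
        context = (weights * h).sum(dim=1)       # attention-weighted summary
        return self.classifier(context)

logits = HybridIoTDetector()(torch.randn(4, 83))
print(logits.shape)  # torch.Size([4, 10])
```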
Machine-to-machine (M2M) communication plays a fundamental role in autonomous IoT (Internet of Things)-based infrastructure, a vital part of the fourth industrial revolution. Machine-type communication devices (MTCDs) regularly share extensive data without human intervention while making all types of decisions. These decisions may involve controlling sensitive ventilation systems that maintain a uniform temperature, live heartbeat monitoring, and several different alert systems. Many of these devices simultaneously share data to form an automated system. The data shared between machine-type communication devices (MTCDs) are prone to risk due to limited computational power, internal memory, and energy capacity. Therefore, securing the data and devices becomes challenging due to factors such as dynamic operational environments, remoteness, harsh conditions, and areas where human physical access is difficult. One of the crucial parts of securing MTCDs and their data is authentication, where each device must be verified before data transmission. Several M2M authentication schemes have been proposed in the literature; however, the literature lacks a comprehensive overview of current M2M authentication techniques and the challenges associated with them. To select a suitable authentication scheme for a specific scenario, it is important to understand the challenges associated with it. Therefore, this article fills this gap by reviewing the state-of-the-art research on authentication schemes for MTCDs, specifically concerning application categories, security provisions, and performance efficiency.
Neuroimaging has emerged over the last few decades as a crucial tool in diagnosing Alzheimer's disease (AD). Mild cognitive impairment (MCI) is a condition that falls between normal cognitive function and AD on the cognitive spectrum. However, previous studies have mainly used handcrafted features to classify MCI, AD, and normal control (NC) individuals. This paper focuses on using gray matter (GM) scans obtained through magnetic resonance imaging (MRI) for the diagnosis of individuals with MCI, AD, and NC. To improve classification performance, we developed two transfer learning strategies with data augmentation (i.e., shear range, rotation, zoom range, and channel shift). The first approach is a deep Siamese network (DSN), and the second involves a cross-domain strategy with a customized VGG-16. We performed experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to evaluate the performance of our proposed models. Our experimental results demonstrate superior performance on the three binary classification tasks: NC vs. AD, NC vs. MCI, and MCI vs. AD, with classification accuracies of 97.68%, 94.25%, and 92.18%, respectively. Our study proposes two transfer learning strategies with data augmentation to accurately diagnose MCI, AD, and normal control individuals using GM scans, providing promising results for future research and clinical applications in the early detection and diagnosis of AD.
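The augmentation settings listed above (shear, rotation, zoom, channel shift) map naturally onto Keras utilities, and the customized VGG-16 strategy can be sketched as a frozen convolutional base with a new classification head. The specific ranges, head layers, and binary output below are illustrative assumptions, not the authors' exact configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    shear_range=0.1,
    rotation_range=10,
    zoom_range=0.1,
    channel_shift_range=10.0,
    rescale=1.0 / 255,
)

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: freeze the convolutional base

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),  # one binary task, e.g., NC vs. AD
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would iterate over augmenter.flow_from_directory(...) batches of GM scans.
```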
基金funding from Prince Sattam bin Abdulaziz University through the Project Number(PSAU/2023/01/24607).
文摘In the domain ofmedical imaging,the accurate detection and classification of brain tumors is very important.This study introduces an advanced method for identifying camouflaged brain tumors within images.Our proposed model consists of three steps:Feature extraction,feature fusion,and then classification.The core of this model revolves around a feature extraction framework that combines color-transformed images with deep learning techniques,using the ResNet50 Convolutional Neural Network(CNN)architecture.So the focus is to extract robust feature fromMRI images,particularly emphasizingweighted average features extracted fromthe first convolutional layer renowned for their discriminative power.To enhance model robustness,we introduced a novel feature fusion technique based on the Marine Predator Algorithm(MPA),inspired by the hunting behavior of marine predators and has shown promise in optimizing complex problems.The proposed methodology can accurately classify and detect brain tumors in camouflage images by combining the power of color transformations,deep learning,and feature fusion via MPA,and achieved an accuracy of 98.72%on a more complex dataset surpassing the existing state-of-the-art methods,highlighting the effectiveness of the proposed model.The importance of this research is in its potential to advance the field ofmedical image analysis,particularly in brain tumor diagnosis,where diagnoses early,and accurate classification are critical for improved patient results.
文摘Cloud computing has become one of the most projecting words in the IT world due to its design for providing computing service as a utility. The typical use of cloud computing as a resource has changed the scenery of computing. Due to the increased flexibility, better reliability, great scalability, and decreased costs have captivated businesses and individuals alike because of the pay-per-use form of the cloud environment. Cloud computing is a completely internet dependent technology where client data are stored and maintained in the data center of a cloud provider like Google, Amazon, Apple Inc., Microsoft etc. The Anomaly Detection System is one of the Intrusion Detection techniques. It’s an area in the cloud environment that is been developed in the detection of unusual activities in the cloud networks. Although, there are a variety of Intrusion Detection techniques available in the cloud environment, this review paper exposes and focuses on different IDS in cloud networks through different categorizations and conducts comparative study on the security measures of Dropbox, Google Drive and iCloud, to illuminate their strength and weakness in terms of security.
文摘In order to forecast promising technologies in the field of next generation mobile communication, various patent indicators were analyzed such as citation per patent, patent family information, patent share, increase rate, and patent activity. These indicators were quantified into several indexes and then integrated into an evaluation score to provide promising technologies. As a result of the suggested patent analysis, four technologies out of twenty two in details classification were selected, which showed outstanding technology competitiveness, high patent share and increasing rates as well as high recent-patent-ratios and triad-patent-family-ratios. Each of the selected technologies scored more than 10 points in total, and the following four technologies were suggested as promising ones in the field of next generation mobile communication: 1) 3GPP based mobile communication, 2) beyond 4G mobile communication, 3) IEEE 802.16 based mobile communication, which are in medium classification of broadband mobile communication system, and 4) testing/certification system of mobile communication, which is in medium classification of mobile communication testing/certification system.
基金supported by the National Research Foundation in Korea (NRF) through contract N-12-NM-IR19
文摘There are numerous information technology solutions including hardware and software. A company that provides the solution should have knowledge of the customer needs in the purpose of sailing strategy or upgrade policy. The needs are also directly connected to the user satisfaction. However, the users have respective points of view in the needs as well as they may not identify the requirements to improve the solution. SERVQUAL can be an appropriate method to define and measure the customer satisfaction for the information technology solutions. As a case study of the customer satisfaction, the modified SERVQUAL items and scoring method are applied to a cyber-infrastructure system named CyberL ab in Korea. The measurement results of user satisfaction for CyberL ab are provided to confirm that our proposed method performs as we intended. From the results, we can score the satisfaction level of users and identify their needs in the various aspects. The total user satisfaction level for CyberL ab is scored by 88.3.
文摘This study examines the key factors that have impact on the successful adoption of Human Resource Information System (HRIS) within the Aqaba Special Economic Zone Authority (ASEZA)/Jordan. In order to accomplish the purpose of the study four critical factors are inquired. So, four critical factors are inquired: First, TAM Model (Perceived Ease of Use (PEOU) and Perceived Usefulness (PU)). Second, Information Technology Infrastructure (ITI). Third, Top Management Support (TMS). Finally, Individual Experience with Computer (IEC). The research model was applied to collect data from the questionnaires answered by 45 users of HRIS as a source of primary data, based on a convenience sample the response rate was about 91%. In addition, the results were analyzed by utilizing the Statistical Package for Social Software (SPSS). Furthermore, the findings were analyzed;multiple Regression analysis indicated that all research variables have significant relationship on successful adoption of HRIS. The findings indicated IT infrastructures have a positive and significant effect on the successful adoption of HRIS. But there is no significant of PU, PEOU, TMS, and IEC on the successful adoption of HRIS. Finally, the results indicated that no significant statistical differences of demographic characteristics on HRIS adoption. Depending on the research’s findings;the researchers proposed a set of recommendations for better adoption of HRIS in SEZA.
文摘Information Communication Technologies (ICT) has offered m-government applications as an intermediate technology to provide effective and efficient government services to the public. Due to high rate of corruptions in developing states, government policies diversified governmental services from offline to virtualized perspective to expose accessibility, transparency, accountability and accessibility through mobile government. Deployment of such ICT tool also exposed a unique opportunity for the recovery of the public confidence against government which has damaged due to corruption activities in country. Virtualization of the government services became compulsory due to high rate of corruption that occurred in the economic context and it became a serious obstacle for economic development of developing states. The virtualized services aimed to harmonize governmental services into mobile platform in order to become more transparent to the public. This research paper comparatively investigates the mobile government services that are located in Malta and Singapore which are classified as developing countries. The criteria of the comparison have done based on demographic structure of the country, M-government policies and ICT infrastructure of the country. The findings of this study exposed the impact of e-government practices and differences between them in terms of applicability and provide a specific point of view for m-government adoption policy.
文摘The relation between the HRM and the firm performance is analyzed statistically by many researchers in the literature. However, there are very few nonlinear approaches in literature for finding the relation between Human Resource Management (FIRM) and firm performance. This paper exposes the relationship between human resource management and organizational performance through the use of nonlinear modeling technique. The modeling is proposed based on Radial Basis Function (RBF) which is nonlinear modeling technique in literature. The relation between 12 input and 9 output parameters is investigated in this research that is collected between 54 companies in Turkey which indicated that the relationship between organizational management performance and relationship management can be modelled through nonlinearly.
基金funded by Taif University Researchers Supporting Project No.(TURSP-2020/214),Taif University,Taif,Saudi Arabia。
文摘For a 5G wireless communication system,a convolutional deep neural network(CNN)is employed to synthesize a robust channel state estimator(CSE).The proposed CSE extracts channel information from transmit-and-receive pairs through offline training to estimate the channel state information.Also,it utilizes pilots to offer more helpful information about the communication channel.The proposedCNN-CSE performance is compared with previously published results for Bidirectional/long short-term memory(BiLSTM/LSTM)NNs-based CSEs.The CNN-CSE achieves outstanding performance using sufficient pilots only and loses its functionality at limited pilots compared with BiLSTM and LSTM-based estimators.Using three different loss function-based classification layers and the Adam optimization algorithm,a comparative study was conducted to assess the performance of the presented DNNs-based CSEs.The BiLSTM-CSE outperforms LSTM,CNN,conventional least squares(LS),and minimum mean square error(MMSE)CSEs.In addition,the computational and learning time complexities for DNN-CSEs are provided.These estimators are promising for 5G and future communication systems because they can analyze large amounts of data,discover statistical dependencies,learn correlations between features,and generalize the gotten knowledge.
基金supported by“Human Resources Program in Energy Technology”of the Korea Institute of Energy Technology Evaluation and Planning (KETEP)granted financial resources from the Ministry of Trade,Industry&Energy,Republic of Korea. (No.20204010600090).
文摘Identifying fruit disease manually is time-consuming, expertrequired,and expensive;thus, a computer-based automated system is widelyrequired. Fruit diseases affect not only the quality but also the quantity.As a result, it is possible to detect the disease early on and cure the fruitsusing computer-based techniques. However, computer-based methods faceseveral challenges, including low contrast, a lack of dataset for training amodel, and inappropriate feature extraction for final classification. In thispaper, we proposed an automated framework for detecting apple fruit leafdiseases usingCNNand a hybrid optimization algorithm. Data augmentationis performed initially to balance the selected apple dataset. After that, twopre-trained deep models are fine-tuning and trained using transfer learning.Then, a fusion technique is proposed named Parallel Correlation Threshold(PCT). The fused feature vector is optimized in the next step using a hybridoptimization algorithm. The selected features are finally classified usingmachine learning algorithms. Four different experiments have been carriedout on the augmented Plant Village dataset and yielded the best accuracy of99.8%. The accuracy of the proposed framework is also compared to that ofseveral neural nets, and it outperforms them all.
基金supported by“Human Resources Program in Energy Technology”of the Korea Institute of Energy Technology Evaluation and Planning(KETEP),granted financial resources from the Ministry of Trade,Industry&Energy,Republic of Korea(No.20204010600090)Supporting Project Number(PNURSP2023R387),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Diagnosing gastrointestinal cancer by classical means is a hazardous procedure.Years have witnessed several computerized solutions for stomach disease detection and classification.However,the existing techniques faced challenges,such as irrelevant feature extraction,high similarity among different disease symptoms,and the least-important features from a single source.This paper designed a new deep learning-based architecture based on the fusion of two models,Residual blocks and Auto Encoder.First,the Hyper-Kvasir dataset was employed to evaluate the proposed work.The research selected a pre-trained convolutional neural network(CNN)model and improved it with several residual blocks.This process aims to improve the learning capability of deep models and lessen the number of parameters.Besides,this article designed an Auto-Encoder-based network consisting of five convolutional layers in the encoder stage and five in the decoder phase.The research selected the global average pooling and convolutional layers for the feature extraction optimized by a hybrid Marine Predator optimization and Slime Mould optimization algorithm.These features of both models are fused using a novel fusion technique that is later classified using the Artificial Neural Network classifier.The experiment worked on the HyperKvasir dataset,which consists of 23 stomach-infected classes.At last,the proposed method obtained an improved accuracy of 93.90%on this dataset.Comparison is also conducted with some recent techniques and shows that the proposed method’s accuracy is improved.
基金supported by the Deanship of Scientific Research,Vice Presidency for Graduate Studies and Scientific Research,King Faisal University,Saudi Arabia[Grant No.3418].
文摘Aspect-based sentiment analysis aims to detect and classify the sentiment polarities as negative,positive,or neutral while associating them with their identified aspects from the corresponding context.In this regard,prior methodologies widely utilize either word embedding or tree-based rep-resentations.Meanwhile,the separate use of those deep features such as word embedding and tree-based dependencies has become a significant cause of information loss.Generally,word embedding preserves the syntactic and semantic relations between a couple of terms lying in a sentence.Besides,the tree-based structure conserves the grammatical and logical dependencies of context.In addition,the sentence-oriented word position describes a critical factor that influences the contextual information of a targeted sentence.Therefore,knowledge of the position-oriented information of words in a sentence has been considered significant.In this study,we propose to use word embedding,tree-based representation,and contextual position information in combination to evaluate whether their combination will improve the result’s effectiveness or not.In the meantime,their joint utilization enhances the accurate identification and extraction of targeted aspect terms,which also influences their classification process.In this research paper,we propose a method named Attention Based Multi-Channel Convolutional Neural Net-work(Att-MC-CNN)that jointly utilizes these three deep features such as word embedding with tree-based structure and contextual position informa-tion.These three parameters deliver to Multi-Channel Convolutional Neural Network(MC-CNN)that identifies and extracts the potential terms and classifies their polarities.In addition,these terms have been further filtered with the attention mechanism,which determines the most significant words.The empirical analysis proves the proposed approach’s effectiveness compared to existing techniques when evaluated on standard datasets.The experimental results represent our approach outperforms in the F1 measure with an overall achievement of 94%in identifying aspects and 92%in the task of sentiment classification.
基金supported by“Human Resources Program in Energy Technology”of the Korea Institute of Energy Technology Evaluation and Planning(KETEP),granted financial resources from the Ministry of Trade,Industry&Energy,Republic of Korea.(No.20204010600090).
文摘Segmenting brain tumors in Magnetic Resonance Imaging(MRI)volumes is challenging due to their diffuse and irregular shapes.Recently,2D and 3D deep neural networks have become famous for medical image segmentation because of the availability of labelled datasets.However,3D networks can be computationally expensive and require significant training resources.This research proposes a 3D deep learning model for brain tumor segmentation that uses lightweight feature extraction modules to improve performance without compromising contextual information or accuracy.The proposed model,called Hybrid Attention-Based Residual Unet(HA-RUnet),is based on the Unet architecture and utilizes residual blocks to extract low-and high-level features from MRI volumes.Attention and Squeeze-Excitation(SE)modules are also integrated at different levels to learn attention-aware features adaptively within local and global receptive fields.The proposed model was trained on the BraTS-2020 dataset and achieved a dice score of 0.867,0.813,and 0.787,as well as a sensitivity of 0.93,0.88,and 0.83 for Whole Tumor,Tumor Core,and Enhancing Tumor,on test dataset respectively.Experimental results show that the proposed HA-RUnet model outperforms the ResUnet and AResUnet base models while having a smaller number of parameters than other state-of-the-art models.Overall,the proposed HA-RUnet model can improve brain tumor segmentation accuracy and facilitate appropriate diagnosis and treatment planning for medical practitioners.
文摘This study proposes a new flexible family of distributions called the Lambert-G family.The Lambert family is very flexible and exhibits desirable properties.Its three-parameter special sub-models provide all significantmonotonic and non-monotonic failure rates.A special sub-model of the Lambert family called the Lambert-Lomax(LL)distribution is investigated.General expressions for the LL statistical properties are established.Characterizations of the LL distribution are addressed mathematically based on its hazard function.The estimation of the LL parameters is discussed using six estimation methods.The performance of this estimation method is explored through simulation experiments.The usefulness and flexibility of the LL distribution are demonstrated empirically using two real-life data sets.The LL model better fits the exponentiated Lomax,inverse power Lomax,Lomax-Rayleigh,power Lomax,and Lomax distributions.
基金supported by the Deanship of Scientific Research,at Imam Abdulrahman Bin Faisal University.Grant Number:2019-416-ASCS.
文摘Lung cancer is among the most frequent cancers in the world,with over one million deaths per year.Classification is required for lung cancer diagnosis and therapy to be effective,accurate,and reliable.Gene expression microarrays have made it possible to find genetic biomarkers for cancer diagnosis and prediction in a high-throughput manner.Machine Learning(ML)has been widely used to diagnose and classify lung cancer where the performance of ML methods is evaluated to identify the appropriate technique.Identifying and selecting the gene expression patterns can help in lung cancer diagnoses and classification.Normally,microarrays include several genes and may cause confusion or false prediction.Therefore,the Arithmetic Optimization Algorithm(AOA)is used to identify the optimal gene subset to reduce the number of selected genes.Which can allow the classifiers to yield the best performance for lung cancer classification.In addition,we proposed a modified version of AOA which can work effectively on the high dimensional dataset.In the modified AOA,the features are ranked by their weights and are used to initialize the AOA population.The exploitation process of AOA is then enhanced by developing a local search algorithm based on two neighborhood strategies.Finally,the efficiency of the proposed methods was evaluated on gene expression datasets related to Lung cancer using stratified 4-fold cross-validation.The method’s efficacy in selecting the optimal gene subset is underscored by its ability to maintain feature proportions between 10%to 25%.Moreover,the approach significantly enhances lung cancer prediction accuracy.For instance,Lung_Harvard1 achieved an accuracy of 97.5%,Lung_Harvard2 and Lung_Michigan datasets both achieved 100%,Lung_Adenocarcinoma obtained an accuracy of 88.2%,and Lung_Ontario achieved an accuracy of 87.5%.In conclusion,the results indicate the potential promise of the proposed modified AOA approach in classifying microarray cancer data.
基金funding from Deanship of Scientific Research in King Faisal University with Grant Number KFU 241085.
文摘Phishing,an Internet fraudwhere individuals are deceived into revealing critical personal and account information,poses a significant risk to both consumers and web-based institutions.Data indicates a persistent rise in phishing attacks.Moreover,these fraudulent schemes are progressively becoming more intricate,thereby rendering them more challenging to identify.Hence,it is imperative to utilize sophisticated algorithms to address this issue.Machine learning is a highly effective approach for identifying and uncovering these harmful behaviors.Machine learning(ML)approaches can identify common characteristics in most phishing assaults.In this paper,we propose an ensemble approach and compare it with six machine learning techniques to determine the type of website and whether it is normal or not based on two phishing datasets.After that,we used the normalization technique on the dataset to transform the range of all the features into the same range.The findings of this paper for all algorithms are as follows in the first dataset based on accuracy,precision,recall,and F1-score,respectively:Decision Tree(DT)(0.964,0.961,0.976,0.968),Random Forest(RF)(0.970,0.964,0.984,0.974),Gradient Boosting(GB)(0.960,0.959,0.971,0.965),XGBoost(XGB)(0.973,0.976,0.976,0.976),AdaBoost(0.934,0.934,0.950,0.942),Multi Layer Perceptron(MLP)(0.970,0.971,0.976,0.974)and Voting(0.978,0.975,0.987,0.981).So,the Voting classifier gave the best results.While in the second dataset,all the algorithms gave the same results in four evaluation metrics,which indicates that each of them can effectively accomplish the prediction process.Also,this approach outperformed the previous work in detecting phishing websites with high accuracy,a lower false negative rate,a shorter prediction time,and a lower false positive rate.
文摘Centralized storage and identity identification methods pose many risks,including hacker attacks,data misuse,and single points of failure.Additionally,existing centralized identity management methods face interoperability issues and rely on a single identity provider,leaving users without control over their identities.Therefore,this paper proposes a mechanism for identity identification and data sharing based on decentralized identifiers.The scheme utilizes blockchain technology to store the identifiers and data hashed on the chain to ensure permanent identity recognition and data integrity.Data is stored on InterPlanetary File System(IPFS)to avoid the risk of single points of failure and to enhance data persistence and availability.At the same time,compliance with World Wide Web Consortium(W3C)standards for decentralized identifiers and verifiable credentials increases the mechanism’s scalability and interoperability.
Funding: supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R503), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Multiple Sclerosis (MS) is a chronic autoimmune neurological condition in which the immune system attacks the central nervous system, causing myelin damage and a wide range of symptoms. It disrupts signals between the brain and the body, causing symptoms including tiredness, muscle weakness, and difficulty with memory and balance. Traditional methods for detecting MS are imprecise and time-consuming, a major gap in addressing this problem that has motivated the investigation of new methods to improve the consistency and accuracy of MS detection. This paper proposes a novel approach named FAD, a Deep Neural Network (DNN) fused with an Artificial Neural Network (ANN), to detect MS more efficiently and accurately, using regularization to combat over-fitting. We use gene expression data for MS research from the GEO GSE17048 dataset. The dataset is preprocessed by encoding, standardization using min-max scaling, and feature selection using Recursive Feature Elimination with Cross-Validation (RFECV) to optimize and refine the data. For comparison, another deep-learning hybrid model is integrated with different ML models, including Random Forest (RF), Gradient Boosting (GB), XGBoost (XGB), K-Nearest Neighbors (KNN), and Decision Tree (DT). Results reveal that FAD performed exceptionally well on the dataset, with an accuracy of 96.55% and an F1-score of 96.71%. The proposed FAD approach thus achieves remarkable results, with better accuracy than previous studies.
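The fusion of a deeper and a shallower branch with regularization can be illustrated roughly as below; the layer sizes, dropout rates, and optimizer settings are assumptions for illustration, not the reported FAD architecture.

```python
# Rough sketch of a two-branch "fused" network in the spirit of FAD
# (all dimensions and hyperparameters are assumed, not the paper's values).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_fad_like_model(n_features: int) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(n_features,))
    # Deeper branch ("DNN"): stacked layers with L2 regularization and dropout.
    deep = layers.Dense(256, activation="relu",
                        kernel_regularizer=regularizers.l2(1e-4))(inputs)
    deep = layers.Dropout(0.3)(deep)
    deep = layers.Dense(128, activation="relu")(deep)
    # Shallower branch ("ANN"): a single hidden layer.
    shallow = layers.Dense(64, activation="relu")(inputs)
    # Fuse the two branches and classify MS vs. control.
    fused = layers.concatenate([deep, shallow])
    outputs = layers.Dense(1, activation="sigmoid")(fused)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Usage: fit on features retained by RFECV after min-max scaling, e.g.
# model = build_fad_like_model(n_features=X_selected.shape[1])
# model.fit(X_selected, y, epochs=50, validation_split=0.2)
```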
Funding: Deanship of Scientific Research, King Faisal University, Grant Number KFU241648.
Abstract: As the Internet of Things (IoT) continues to expand, incorporating a vast array of devices into a digital ecosystem, the risk of cyber threats also increases, necessitating robust defense mechanisms. This paper presents an innovative hybrid deep learning architecture that excels at detecting IoT threats in real-world settings. Our proposed model combines Convolutional Neural Networks (CNN), Bidirectional Long Short-Term Memory (BLSTM), Gated Recurrent Units (GRU), and attention mechanisms into a cohesive framework. This integrated structure aims to enhance the detection and classification of complex cyber threats while accommodating the operational constraints of diverse IoT systems. We evaluated our model on the RT-IoT2022 dataset, which includes various devices, standard operations, and simulated attacks. The significance of our research lies in its comprehensive evaluation metrics, including Cohen's Kappa and the Matthews Correlation Coefficient (MCC), which underscore the model's reliability and predictive quality. Our model surpassed traditional machine learning algorithms and the state of the art, achieving over 99.6% in precision, recall, F1-score, and accuracy, together with favorable false positive rate (FPR) and detection time, and effectively identified specific threats such as Message Queuing Telemetry Transport (MQTT) Publish, Denial of Service via the Hping SYN packet-crafting tool (DOS SYN Hping), and Network Mapper Operating System Detection (NMAP OS DETECTION). The experimental analysis reveals a significant improvement over existing detection systems, strengthening IoT security standards. Our model addresses the need for advanced, dependable, and adaptable security solutions, demonstrating the power of deep learning in securing IoT ecosystems amid a constantly evolving cyber threat landscape. This achievement marks a significant stride towards protecting the integrity of IoT infrastructure, ensuring operational resilience, and preserving privacy in this groundbreaking technology.
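A rough sketch of a CNN + BLSTM + GRU + attention stack of the kind described is given below; all layer dimensions, the attention placement, and the pooling choice are assumptions made for illustration, not the paper's exact architecture.

```python
# Rough sketch of a hybrid intrusion-detection network combining CNN, BLSTM,
# GRU, and attention layers (all sizes and placements are assumed).
import tensorflow as tf
from tensorflow.keras import layers

def build_hybrid_ids(timesteps: int, n_features: int, n_classes: int) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(timesteps, n_features))
    x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.GRU(64, return_sequences=True)(x)
    # Self-attention over the time dimension before pooling into a vector.
    x = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: windows of RT-IoT2022 traffic features shaped (samples, timesteps, features)
# model = build_hybrid_ids(timesteps=10, n_features=32, n_classes=12)
```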
Funding: the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant No. GRANT5,208).
Abstract: Machine-to-machine (M2M) communication plays a fundamental role in autonomous IoT (Internet of Things)-based infrastructure, a vital part of the fourth industrial revolution. Machine-type communication devices (MTCDs) regularly share extensive data without human intervention while making all types of decisions. These decisions may involve controlling sensitive ventilation systems that maintain uniform temperature, live heartbeat monitoring, and several different alert systems, with many of these devices sharing data simultaneously to form an automated system. The data shared between MTCDs are prone to risk due to limited computational power, internal memory, and energy capacity. Securing the data and devices is therefore challenging, owing to factors such as dynamic operational environments, remoteness, harsh conditions, and areas where human physical access is difficult. One of the crucial parts of securing MTCDs and their data is authentication, in which each device must be verified before data transmission. Several M2M authentication schemes have been proposed in the literature; however, the literature lacks a comprehensive overview of current M2M authentication techniques and the challenges associated with them. To choose a suitable authentication scheme for a specific scenario, it is important to understand its associated challenges. This article therefore fills the gap by reviewing state-of-the-art research on authentication schemes in MTCDs, specifically concerning application categories, security provisions, and performance efficiency.
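As a generic illustration of the kind of mechanism surveyed in this space, and not any specific scheme from the review, an HMAC-based challenge-response lets an MTCD prove possession of a pre-shared key before it is allowed to transmit data:

```python
# Generic illustration of lightweight device authentication (not a specific
# scheme from this review): an HMAC-based challenge-response in which an MTCD
# proves possession of a pre-shared key before data transmission is allowed.
import hashlib
import hmac
import secrets

PRE_SHARED_KEY = secrets.token_bytes(32)      # provisioned to device and server

def server_issue_challenge() -> bytes:
    return secrets.token_bytes(16)            # fresh nonce prevents replay

def device_respond(challenge: bytes, device_id: bytes, key: bytes) -> bytes:
    return hmac.new(key, device_id + challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, device_id: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, device_id + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time comparison

challenge = server_issue_challenge()
tag = device_respond(challenge, b"mtcd-42", PRE_SHARED_KEY)
assert server_verify(challenge, b"mtcd-42", tag, PRE_SHARED_KEY)
```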
Funding: research work funded by Zhejiang Normal University Research Fund YS304023947 and YS304023948.
Abstract: Neuroimaging has emerged over the last few decades as a crucial tool in diagnosing Alzheimer's disease (AD). Mild cognitive impairment (MCI) is a condition that falls between normal cognitive function and AD. However, previous studies have mainly used handcrafted features to classify MCI, AD, and normal control (NC) individuals. This paper focuses on using gray matter (GM) scans obtained through magnetic resonance imaging (MRI) for the diagnosis of individuals with MCI, AD, and NC. To improve classification performance, we developed two transfer learning strategies with data augmentation (i.e., shear range, rotation, zoom range, channel shift). The first approach is a deep Siamese network (DSN), and the second uses a cross-domain strategy with a customized VGG-16. We performed experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to evaluate the performance of our proposed models. Our experimental results demonstrate superior performance on the three binary classification tasks, NC vs. AD, NC vs. MCI, and MCI vs. AD, with classification accuracies of 97.68%, 94.25%, and 92.18%, respectively. In summary, this study proposes two transfer learning strategies with data augmentation to accurately diagnose MCI, AD, and NC individuals using GM scans, providing promising results for future research and clinical applications in the early detection and diagnosis of AD.
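The cross-domain VGG-16 branch with the listed augmentations can be sketched as follows; the frozen backbone, the classification head, and the augmentation magnitudes are assumptions for illustration rather than the paper's exact setup.

```python
# Sketch of a VGG-16 transfer-learning classifier with shear, rotation, zoom,
# and channel-shift augmentation (head layers and magnitudes are assumed).
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation mirroring the abstract: shear, rotation, zoom, channel shift.
augmenter = ImageDataGenerator(shear_range=0.2, rotation_range=15,
                               zoom_range=0.2, channel_shift_range=10.0,
                               rescale=1.0 / 255)

def build_vgg16_classifier(input_shape=(224, 224, 3)) -> tf.keras.Model:
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False                       # transfer learning: freeze backbone
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # one binary task, e.g. NC vs. AD
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Usage: feed augmented GM slices via augmenter.flow(...) or a hypothetical
# directory layout with augmenter.flow_from_directory(...), then model.fit(...).
```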