Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation, and to explore the future perspectives these techniques can provide in combating this disabling disease.
With the rapid growth of internet usage, a new situation has been created that enables the practice of bullying. Cyberbullying has increased over the past decade, and it has the same adverse effects as face-to-face bullying, like anger, sadness, anxiety, and fear. With the anonymity people get on the internet, they tend to be more aggressive and express their emotions freely without considering the effects, which may be a reason for the increase in cyberbullying and is the main motive behind the current study. This study presents a thorough background of cyberbullying and the techniques used to collect, preprocess, and analyze the datasets. Moreover, a comprehensive review of the literature has been conducted to identify research gaps and effective techniques and practices in cyberbullying detection in various languages, and it was deduced that there is significant room for improvement for the Arabic language. As a result, the current study focuses on the investigation of shortlisted machine learning algorithms in natural language processing (NLP) for the classification of Arabic datasets duly collected from Twitter (also known as X). In this regard, support vector machine (SVM), Naive Bayes (NB), Random Forest (RF), Logistic Regression (LR), Bootstrap aggregating (Bagging), Gradient Boosting (GBoost), Light Gradient Boosting Machine (LightGBM), Adaptive Boosting (AdaBoost), and eXtreme Gradient Boosting (XGBoost) were shortlisted and investigated due to their effectiveness in similar problems. Finally, the scheme was evaluated with well-known performance measures such as accuracy, precision, recall, and F1-score. Consequently, XGBoost exhibited the best performance, with 89.95% accuracy, which is promising compared to the state-of-the-art.
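As an illustration of the pipeline this abstract describes, here is a minimal sketch: TF-IDF features feed a few of the shortlisted classifiers, which are scored with the same four measures. The file name, column names, and all hyperparameters are assumptions, not the authors' exact setup.

```python
# Hedged sketch: a few shortlisted classifiers on a labeled tweet dataset.
# The CSV name, columns (binary label assumed), and settings are illustrative.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

df = pd.read_csv("arabic_tweets.csv")          # assumed columns: text, label (0/1)
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

vec = TfidfVectorizer(max_features=20000)      # word-level TF-IDF features
Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

models = {
    "SVM": LinearSVC(),
    "NB": MultinomialNB(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=42),
    "XGBoost": XGBClassifier(n_estimators=300, eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(Xtr, y_train)
    pred = model.predict(Xte)
    print(f"{name}: acc={accuracy_score(y_test, pred):.4f} "
          f"P={precision_score(y_test, pred):.4f} "
          f"R={recall_score(y_test, pred):.4f} "
          f"F1={f1_score(y_test, pred):.4f}")
```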
In many fields, particularly that of health, the diagnosis of diseases is a very difficult task to carry out. Therefore, early detection of diseases using artificial intelligence tools can be of paramount importance in the medical field. In this study, we proposed an intelligent system capable of performing diagnoses in support of radiologists. The support system is designed to evaluate mammographic images, thereby classifying patients as normal or abnormal. The proposed method (DiagBC, for Breast Cancer Diagnosis) combines two intelligent unsupervised learning algorithms (the C-Means clustering algorithm and the Gaussian Mixture Model) for the segmentation of medical images and a supervised learning algorithm (a modified DenseNet) for the diagnosis of breast images. Ultimately, a prototype of the proposed system was implemented for the Magori Polyclinic in Niamey (Niger), making it possible to diagnose (or classify) breast cancer into two classes: normal and abnormal.
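The unsupervised segmentation stage can be sketched with scikit-learn's Gaussian Mixture Model, as below; the C-means step and the modified DenseNet classifier are omitted, and the input file name is an assumption.

```python
# Hedged sketch of the GMM segmentation stage: cluster mammogram pixel
# intensities into tissue regions. Only this step of DiagBC is illustrated.
import numpy as np
from PIL import Image
from sklearn.mixture import GaussianMixture

img = np.asarray(Image.open("mammogram.png").convert("L"), dtype=np.float64)
pixels = img.reshape(-1, 1) / 255.0              # one intensity feature per pixel

gmm = GaussianMixture(n_components=3, random_state=0).fit(pixels)
labels = gmm.predict(pixels).reshape(img.shape)  # per-pixel segmentation mask

# Regions sorted by mean intensity; the brightest cluster is often the dense
# tissue / candidate-mass region passed on to the supervised classifier.
order = np.argsort(gmm.means_.ravel())
mask = labels == order[-1]
print("fraction of pixels in brightest region:", mask.mean())
```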
The COVID-19 pandemic has had a widespread negative impact globally. It shares symptoms with other respiratory illnesses such as pneumonia and influenza, making rapid and accurate diagnosis essential to treat individuals and halt further transmission. X-ray imaging of the lungs is one of the most reliable diagnostic tools. Utilizing deep learning, we can train models to recognize the signs of infection, thus aiding in the identification of COVID-19 cases. For our project, we developed a deep learning model utilizing the ResNet50 architecture, pre-trained with ImageNet and CheXNet datasets. We tackled the challenge of an imbalanced dataset, the CoronaHack Chest X-Ray dataset provided by Kaggle, through both binary and multi-class classification approaches. Additionally, we evaluated the performance impact of using Focal loss versus Cross-entropy loss in our model.
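The loss comparison mentioned above can be illustrated with a minimal multi-class focal loss in PyTorch; the gamma value and class count are illustrative assumptions, not the project's stated settings.

```python
# Hedged sketch: focal loss down-weights well-classified examples so training
# focuses on hard, minority-class samples of an imbalanced dataset.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss; alpha optionally holds per-class weights."""
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, targets, weight=alpha, reduction="none")
    pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()  # p of true class
    return ((1.0 - pt) ** gamma * ce).mean()

logits = torch.randn(8, 3)                     # e.g. normal / pneumonia / COVID-19
targets = torch.randint(0, 3, (8,))
print("focal:", focal_loss(logits, targets).item())
print("cross-entropy:", F.cross_entropy(logits, targets).item())
```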
In the era of an energy revolution, grid decentralization has emerged as a viable solution to meet the increasing global energy demand by incorporating renewables at the distributed level. Microgrids are considered a driving component for accelerating grid decentralization. To optimally utilize the available resources and address potential challenges, there is a need for an intelligent and reliable energy management system (EMS) for the microgrid. The artificial intelligence field has the potential to address the problems in EMS and can provide resilient, efficient, reliable, and scalable solutions. This paper presents an overview of existing conventional and AI-based techniques for energy management systems in microgrids. We analyze EMS methods for centralized, decentralized, and distributed microgrids separately. Then, we summarize machine learning techniques such as ANNs, federated learning, LSTMs, RNNs, and reinforcement learning for EMS objectives such as economic dispatch, optimal power flow, and scheduling. With the incorporation of AI, microgrids can achieve greater performance efficiency and more reliability in managing a large number of energy resources. However, challenges such as data privacy, security, scalability, and explainability need to be addressed. To conclude, the authors state possible future research directions to explore the potential of AI-based EMS in real-world applications.
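One of the EMS objectives named above, economic dispatch, reduces in its simplest form to a linear program; the sketch below uses invented unit costs, limits, and a 10 MW demand purely for illustration.

```python
# Hedged sketch of toy economic dispatch: minimize generation cost subject to
# a power-balance constraint and per-unit limits. All numbers are invented.
import numpy as np
from scipy.optimize import linprog

cost = np.array([30.0, 45.0, 80.0])   # $/MWh for three units (assumed)
p_min = np.array([0.0, 1.0, 0.0])     # MW lower limits
p_max = np.array([4.0, 6.0, 5.0])     # MW upper limits
demand = 10.0                         # MW load to be served

res = linprog(c=cost,
              A_eq=np.ones((1, 3)), b_eq=[demand],   # generation must meet load
              bounds=list(zip(p_min, p_max)))
print("dispatch (MW):", res.x, "cost ($/h):", res.fun)
```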
Android devices are popularly available in the commercial market at different price levels for various levels of customers. The Android stack is more vulnerable compared to other platforms because of its open-source nature. Many Android malware detection techniques are available to exploit the source code and find associated components during execution time. To obtain a better result, we create a hybrid technique merging static and dynamic processes. In the first part of this paper, we propose a technique that checks for correlation between features and classifies them using a supervised learning approach, thereby avoiding the multicollinearity problem, one of the drawbacks of existing systems. In the proposed work, a novel PCA (Principal Component Analysis)-based feature reduction technique is implemented with conditional dependency features by gathering the functionalities of the application, which adds novelty to the given approach. Android sensitive permissions are one major key point to be considered while detecting malware. We select vulnerable columns based on features like sensitive permissions, application program interface calls, and services requested through the kernel, as well as the relationships between the variables, and henceforth build the model using machine learning classifiers to identify whether a given application is malicious or benign. The final goal of this paper is to check benchmarking datasets collected from various repositories like VirusShare, GitHub, and the Canadian Institute for Cybersecurity, and compare models, ensuring zero-day exploits can be monitored and detected with a better accuracy rate.
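A hedged sketch of the described static pipeline follows: a correlation filter removes collinear features (the multicollinearity concern), PCA reduces the remainder, and a classifier labels applications. The CSV layout and thresholds are assumptions.

```python
# Hedged sketch: correlation filter + PCA feature reduction + classifier.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("android_features.csv")   # assumed: permission/API columns + "label"
X, y = df.drop(columns=["label"]), df["label"]

# Drop one feature from every highly correlated pair.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X = X.drop(columns=[c for c in upper.columns if (upper[c] > 0.9).any()])

X_red = PCA(n_components=0.95).fit_transform(X)   # keep 95% of the variance
Xtr, Xte, ytr, yte = train_test_split(X_red, y, test_size=0.3,
                                      random_state=1, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(Xtr, ytr)
print("accuracy:", accuracy_score(yte, clf.predict(Xte)))   # malicious vs. benign
```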
In recent years, spiking neural networks (SNNs) have received increasing research attention in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and abundant spatio-temporal information. However, the non-differentiable spike activity makes SNNs harder to train in a supervised setting. Most existing methods focus on introducing an approximate derivative to replace it, but they are often based on static surrogate functions. In this paper, we propose progressive surrogate gradient learning for the backpropagation of SNNs, which approximates the step function gradually and reduces information loss. Furthermore, memristor cross arrays are used to speed up calculation and reduce system energy consumption, owing to their hardware advantages. The proposed algorithm is evaluated on both static and neuromorphic datasets using fully connected and convolutional network architectures, and the experimental results indicate that our approach achieves high performance compared with previous research.
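The progressive surrogate idea can be sketched in PyTorch: the forward pass is the non-differentiable step (spike) function, while the backward pass uses a sigmoid-derivative surrogate whose sharpness grows over training, so it approaches the true step gradient gradually. The sharpness schedule below is an assumption.

```python
# Hedged sketch of a progressive surrogate gradient for spiking neurons.
import torch

class ProgressiveSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, k):
        ctx.save_for_backward(v)
        ctx.k = k
        return (v >= 0).float()                 # spike if membrane potential >= threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(ctx.k * v)
        surrogate = ctx.k * sig * (1.0 - sig)   # d/dv sigmoid(k*v): sharpens as k grows
        return grad_out * surrogate, None       # no gradient w.r.t. k

v = torch.randn(5, requires_grad=True)
for k in (2.0, 5.0, 10.0):                      # k increased epoch by epoch (assumed)
    spikes = ProgressiveSpike.apply(v, k)
    spikes.sum().backward()
    print(f"k={k}: grad={v.grad}")
    v.grad = None
```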
Rare labeled data are difficult to recognize using conventional methods in the process of radar emitter recognition. To solve this problem, an optimized cooperative semi-supervised learning method for radar emitter recognition based on a small amount of labeled data is developed. First, a small amount of labeled data is randomly sampled using the bootstrap method, the loss functions of three common deep learning networks are improved, and the uniform distribution and cross-entropy function are combined to reduce the overconfidence of softmax classification. Subsequently, the dataset obtained after sampling is adopted to train the three improved networks so as to build the initial model. In addition, the unlabeled data are preliminarily screened through dynamic time warping (DTW) and then input into the previously trained initial model for judgment. If the judgment results of two or more networks are consistent, the unlabeled data are labeled and put into the labeled dataset. Lastly, the three network models are trained on the labeled dataset, and the final model is built. As revealed by the simulation results, the semi-supervised learning method adopted in this paper is capable of exploiting a small amount of labeled data and essentially achieving the accuracy of labeled-data recognition.
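The agreement rule at the core of the method can be sketched as below, with three scikit-learn classifiers standing in for the paper's three improved deep networks; the DTW pre-screening step is omitted and the toy data are random stand-ins.

```python
# Hedged sketch: bootstrap-train three models on the small labeled set, then
# pseudo-label unlabeled samples on which at least two of three models agree.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 8)); y_lab = rng.integers(0, 3, 40)   # toy labeled set
X_unl = rng.normal(size=(200, 8))                                  # toy unlabeled set

models = [LogisticRegression(max_iter=500),
          RandomForestClassifier(n_estimators=100, random_state=0),
          GradientBoostingClassifier(random_state=0)]

for _ in range(3):                              # a few co-training rounds
    for m in models:                            # bootstrap sample per model
        idx = rng.integers(0, len(X_lab), len(X_lab))
        m.fit(X_lab[idx], y_lab[idx])
    if len(X_unl) == 0:
        break
    preds = np.stack([m.predict(X_unl) for m in models])        # (3, n_unlabeled)
    majority = np.apply_along_axis(
        lambda c: np.bincount(c, minlength=3).argmax(), 0, preds)
    agree = (preds == majority).sum(axis=0) >= 2                # 2-of-3 consistency
    X_lab = np.vstack([X_lab, X_unl[agree]])
    y_lab = np.concatenate([y_lab, majority[agree]])
    X_unl = X_unl[~agree]
    print(f"labeled set grew to {len(y_lab)}; {len(X_unl)} unlabeled remain")
```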
Machine learning (ML) models provide great opportunities to accelerate novel material development, offering a virtual alternative to laborious and resource-intensive empirical methods. In this work, the second of a two-part study, an ML approach is presented that offers accelerated digital design of Mg alloys. A systematic evaluation of four ML regression algorithms was explored to rationalise the complex relationships in Mg-alloy data and to capture the composition-processing-property patterns. Cross-validation and hold-out set validation techniques were utilised for unbiased estimation of model performance. Using atomic and thermodynamic properties of the alloys, feature augmentation was examined to define the most descriptive representation spaces for the alloy data. Additionally, a graphical user interface (GUI) webtool was developed to facilitate the use of the proposed models in predicting the mechanical properties of new Mg alloys. The results demonstrate that the random forest regression model and the neural network are robust models for predicting the ultimate tensile strength and ductility of Mg alloys, with accuracies of ~80% and ~70%, respectively. The models developed in this work are a step towards high-throughput screening of novel candidates for target mechanical properties and provide ML-guided alloy design.
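The evaluation protocol described above, k-fold cross-validation plus a hold-out set, can be sketched with scikit-learn as follows; the CSV and target column are assumptions standing in for the alloy dataset.

```python
# Hedged sketch: random forest regression on composition/processing features,
# scored by cross-validation and a final hold-out set.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("mg_alloys.csv")              # assumed: descriptor columns + "UTS_MPa"
X, y = df.drop(columns=["UTS_MPa"]), df["UTS_MPa"]
X_dev, X_hold, y_dev, y_hold = train_test_split(X, y, test_size=0.15, random_state=7)

rf = RandomForestRegressor(n_estimators=500, random_state=7)
cv_r2 = cross_val_score(rf, X_dev, y_dev, cv=5, scoring="r2")
print("5-fold CV R^2: %.3f +/- %.3f" % (cv_r2.mean(), cv_r2.std()))

rf.fit(X_dev, y_dev)                           # final hold-out validation
print("hold-out R^2:", r2_score(y_hold, rf.predict(X_hold)))
```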
Contrastive self-supervised representation learning on attributed graph networks with Graph Neural Networks has attracted considerable research interest recently. However, there are still two challenges. First, most real-world systems comprise multiple relations, where entities are linked by different types of relations, and each relation is a view of the graph network. Second, the rich multi-scale information (structure-level and feature-level) of the graph network can be seen as self-supervised signals, which are not fully exploited. A novel contrastive self-supervised representation learning framework on attributed multiplex graph networks with multi-scale information (named CoLM^(2)S) is presented in this study. It mainly contains two components: intra-relation contrastive learning and inter-relation contrastive learning. Specifically, a contrastive self-supervised representation learning framework on attributed single-layer graph networks with multi-scale information (CoLMS), with a graph convolutional network as the encoder to capture intra-relation information through multi-scale structure-level and feature-level self-supervised signals, is introduced first. The structure-level information includes the edge structure and sub-graph structure, and the feature-level information represents the outputs of different graph convolutional layers. Second, according to the consensus assumption among inter-relations, the CoLM^(2)S framework is proposed to jointly learn various graph relations in the attributed multiplex graph network and achieve global consensus node embeddings. The proposed method can fully distil the graph information. Extensive experiments on unsupervised node clustering and graph visualisation tasks demonstrate the effectiveness of our methods, which outperform existing competitive baselines.
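The abstract does not state CoLM^(2)S's exact loss; as background, an InfoNCE-style objective of the kind such contrastive frameworks typically use can be sketched as follows, with random embeddings standing in for GCN outputs.

```python
# Hedged sketch of a generic contrastive objective: pull a node's embeddings
# from two views (e.g. two relations, or structure- vs. feature-level signals)
# together, push other nodes away.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """z1, z2: (n_nodes, dim) embeddings of the same nodes from two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature             # (n, n) cosine similarities
    targets = torch.arange(z1.size(0))          # positive pairs sit on the diagonal
    return F.cross_entropy(sim, targets)

z_view1 = torch.randn(16, 64)                   # e.g. relation-A encoder output
z_view2 = torch.randn(16, 64)                   # e.g. relation-B encoder output
print("contrastive loss:", info_nce(z_view1, z_view2).item())
```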
To meet the high-performance requirements of fifth-generation (5G) and sixth-generation (6G) wireless networks, ultra-reliable and low-latency communication (URLLC) is considered one of the most important communication scenarios in a wireless network. In this paper, we consider the effects of the Rician fading channel on the performance of cooperative device-to-device (D2D) communication with URLLC. For better performance, we maximize and examine the system's minimum D2D communication rate. Due to the interference in D2D communication, the problem of maximizing the minimum rate becomes non-convex and difficult to solve. To solve this problem, a learning-to-optimize-based algorithm is proposed to find the optimal power allocation. The conventional branch-and-bound (BB) algorithm is used to learn the optimal pruning policy with supervised learning, and ensemble learning is used to train the multiple classifiers. To address the class-imbalance problem, we use a supervised undersampling technique. Comparisons are made with the conventional BB algorithm and a heuristic algorithm. The simulation results demonstrate a notable improvement in power consumption, and the proposed algorithm has significantly lower computational complexity and runs faster than the conventional BB algorithm and the heuristic algorithm.
Coronavirus has infected more than 753 million people, ranging in severity from one person to another, and more than six million infected people have died worldwide. Computer-aided diagnosis (CAD) with artificial intelligence (AI) has shown outstanding performance in effectively diagnosing this virus in real time. Computed tomography is a complementary diagnostic tool to clarify the damage of COVID-19 in the lungs even before symptoms appear in patients. This paper conducts a systematic literature review of deep learning methods for classifying and segmenting COVID-19 infection in the lungs. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow method. This research aims to systematically analyze the supervised deep learning methods, open-resource datasets, data augmentation methods, and loss functions used for segmenting the various shapes of COVID-19 infection in computed tomography (CT) chest images. We selected 56 primary studies relevant to the topic of the paper and compared different aspects of the algorithms used to segment infected areas in the CT images. Deep learning methods for segmenting infected areas still need further development to predict smaller regions of infection at the beginning of their appearance.
N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). The optimization of molecular structures was performed using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate key structural features required for biological activities and likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods allowed the separation of the investigated compounds into two classes, cha and cla, with the properties ε(LUMO+1) (the energy one level above the lowest unoccupied molecular orbital), d(C6–C5) (the distance between the C6 and C5 atoms in the ligands), and TSA (total surface area) responsible for the classification. The insights extracted from the investigation and chemical intuition enabled the design of sixteen new N-11-azaartemisinins (prediction set); moreover, the models built with supervised machine learning methods were applied to this prediction set. The result of this application showed twelve new promising N-11-azaartemisinins for synthesis and biological evaluation.
Nowadays, in data science, supervised learning algorithms are frequently used to perform text classification. However, African textual data, in general, have been studied very little using these methods. This article notes the particularity of the data and measures the precision of predictions of the naive Bayes, decision tree, and SVM (Support Vector Machine) algorithms on a corpus of computer job offers taken from the internet. The differences observed stem from the data imbalance problem in machine learning; however, this problem is usually framed in terms of the distribution of the number of documents in each class or subclass. Here, we delve deeper into the problem, down to the word-count distribution in a set of documents. The results are compared with those obtained on a set of French IT offers. It appears that the precision of the classification varies between 88% and 90% for French offers against 67%, at most, for Cameroonian offers. The contribution of this study is twofold. First, it clearly shows that, within a similar job category, job offers on the internet in Cameroon are more unstructured compared to those available in France, for example. Second, it makes it possible to state a strong hypothesis according to which sets of texts having a symmetrical distribution of the number of words obtain better results with supervised learning algorithms.
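The word-count-symmetry hypothesis can be checked with a skewness statistic, as in this sketch; the corpora below are stand-in token lists, since the job-offer data are not reproduced here.

```python
# Hedged sketch: skewness of per-document word counts. A value near 0 means a
# roughly symmetrical distribution, which the study associates with better
# supervised-learning precision.
from scipy.stats import skew

def wordcount_skewness(corpus):
    counts = [len(doc.split()) for doc in corpus]
    return skew(counts)

corpus_fr = ["développeur web " * n for n in (40, 42, 38, 41, 39)]      # similar lengths
corpus_cm = ["recherche informaticien " * n for n in (2, 3, 2, 60, 4)]  # one large outlier
print("FR skewness:", wordcount_skewness(corpus_fr))   # near 0: symmetric
print("CM skewness:", wordcount_skewness(corpus_cm))   # strongly positive: skewed
```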
Satellite image classification is crucial in various applications such as urban planning, environmental monitoring, and land use analysis. In this study, the authors present a comparative analysis of different supervised and unsupervised learning methods for satellite image classification, focusing on a case study of Casablanca using Landsat 8 imagery. This research aims to identify the most effective machine-learning approach for accurately classifying land cover in an urban environment. The methodology consists of pre-processing the Landsat imagery of Casablanca, extracting relevant features and partitioning them into training and test sets, and then applying the random forest (RF), support vector machine (SVM), classification and regression tree (CART), gradient tree boost (GTB), decision tree (DT), and minimum distance (MD) algorithms. Through a series of experiments, the authors evaluate the performance of each machine learning method in terms of accuracy and the Kappa coefficient. This work shows that random forest is the best-performing algorithm, with an accuracy of 95.42% and a Kappa coefficient of 0.94. The authors discuss the factors behind its performance, including the data characteristics, feature selection, and model configuration.
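The reported measures can be reproduced for any classifier with scikit-learn, as in this sketch; the sample table exported from the Landsat 8 imagery is an assumption.

```python
# Hedged sketch: fit a random forest on labeled samples from the imagery and
# report overall accuracy and Cohen's kappa.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

df = pd.read_csv("casablanca_landsat8_samples.csv")   # assumed: band values + "class"
X, y = df.drop(columns=["class"]), df["class"]
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
pred = rf.predict(Xte)
print("accuracy:", accuracy_score(yte, pred))
print("kappa   :", cohen_kappa_score(yte, pred))
```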
Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research endeavors utilize weakly supervised methods. These approaches aim to reduce the expenses associated with annotation by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network is a three-branch architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the historical flaw of pseudo-labels that are not updated as the network trains, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution demonstrates efficient operation by simultaneously considering both edge-mask aid and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
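The dynamic pseudo-label update can be sketched as a mixup-style blend of the previous pseudo-labels with the new predictions; how the mixing weight is drawn is not stated in the abstract, so the Beta-distributed weight below is an assumption.

```python
# Hedged sketch: blend new predictions with old pseudo-labels instead of
# replacing them, so the pseudo-labels evolve with training.
import numpy as np

rng = np.random.default_rng(0)

def update_pseudo_labels(old_pseudo, new_pred, alpha=0.4):
    """Blend per-pixel road probabilities with a mixup weight lam ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * new_pred + (1.0 - lam) * old_pseudo

old = rng.random((256, 256))       # previous pseudo-label map (road probability)
new = rng.random((256, 256))       # current network prediction
blended = update_pseudo_labels(old, new)
print("blend stays in [0,1]:", blended.min() >= 0 and blended.max() <= 1)
```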
Hydrological models are developed to simulate river flows over a watershed for many practical applications in the field of water resource management. The present paper compares the performance of two recurrent neural networks for rainfall-runoff modeling in the Zou River basin at the Atchérigbé outlet. To this end, we used daily precipitation data over the period 1988-2010 as input to the models, namely the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, to simulate river discharge in the study area. The investigated models give good results in calibration (R2 = 0.888, NSE = 0.886, and RMSE = 0.42 for LSTM; R2 = 0.9, NSE = 0.9, and RMSE = 0.397 for GRU) and in validation (R2 = 0.865, NSE = 0.851, and RMSE = 0.329 for LSTM; R2 = 0.9, NSE = 0.865, and RMSE = 0.301 for GRU). This good performance of the LSTM and GRU models confirms the importance of machine-learning-based models in modeling hydrological phenomena for better decision-making.
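The reported metrics are straightforward to compute; a numpy sketch with a toy discharge series follows.

```python
# Hedged sketch: Nash-Sutcliffe efficiency (NSE), RMSE, and R^2 between
# observed and simulated discharge. The series below is an invented toy.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])      # toy daily discharge (m^3/s)
sim = np.array([1.0, 3.1, 3.0, 4.8, 4.3])
r2 = np.corrcoef(obs, sim)[0, 1] ** 2
print(f"NSE={nse(obs, sim):.3f} RMSE={rmse(obs, sim):.3f} R2={r2:.3f}")
```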
Human action recognition in complex environments is a challenging task. Recently, sparse representation has achieved excellent results in dealing with the human action recognition problem under different conditions. The main idea of sparse representation classification is to construct a general classification scheme in which the training samples of each class are considered a dictionary to express the query sample, and the minimal reconstruction error indicates its corresponding class. However, how to learn a discriminative dictionary is still difficult. In this work, we make two contributions. First, we build a new and robust human action recognition framework by combining a modified sparse classification model and deep convolutional neural network (CNN) features. Second, we construct a novel classification model that consists of a representation-constrained term and a coefficient-incoherence term. Experimental results on benchmark datasets show that our modified model obtains competitive results in comparison to other state-of-the-art models.
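The minimal-reconstruction-error rule described above can be sketched in a few lines of numpy, with plain least squares standing in for a sparse solver and random features standing in for the CNN features.

```python
# Hedged sketch of the sparse-representation classification rule: code the
# query over each class's training dictionary, assign the class with the
# smallest reconstruction residual.
import numpy as np

rng = np.random.default_rng(1)
feat_dim, n_per_class, n_classes = 64, 10, 3
dicts = [rng.normal(size=(feat_dim, n_per_class)) for _ in range(n_classes)]

def classify(query):
    residuals = []
    for D in dicts:                            # per-class dictionary of training features
        coef, *_ = np.linalg.lstsq(D, query, rcond=None)
        residuals.append(np.linalg.norm(query - D @ coef))
    return int(np.argmin(residuals)), residuals

query = dicts[1] @ rng.normal(size=n_per_class)   # synthesize a class-1 sample
pred, res = classify(query)
print("predicted class:", pred)                   # expected: 1 (near-zero residual)
```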
Artificial intelligence and machine learning in orthopaedic surgery have gained mass interest over the last decade or so. In prior studies, researchers have demonstrated that machine learning in orthopaedics can be used for different applications, such as fracture detection, bone tumor diagnosis, detecting hip implant mechanical loosening, and grading osteoarthritis. As time goes on, the utility of artificial intelligence and machine learning algorithms, such as deep learning, continues to grow and expand in orthopaedic surgery. The purpose of this review is to provide an understanding of the concepts of machine learning and a background of current and future orthopaedic applications of machine learning in risk assessment, outcomes assessment, imaging, and basic science fields. In most cases, machine learning has proven to be just as effective as, if not more effective than, prior methods such as logistic regression in assessment and prediction. With the help of deep learning algorithms, such as artificial neural networks and convolutional neural networks, artificial intelligence in orthopaedics has been able to improve diagnostic accuracy and speed, flag the most critical and urgent patients for immediate attention, reduce the amount of human error, reduce the strain on medical professionals, and improve care. Because machine learning has shown diagnostic and prognostic uses in orthopaedic surgery, physicians should continue to research these techniques and be trained to use them effectively in order to improve orthopaedic treatment.
BACKGROUND It is important to diagnose depression in Parkinson's disease (DPD) as soon as possible and to identify the predictors of depression in order to improve quality of life in Parkinson's disease (PD) patients. AIM To develop a model for predicting DPD based on the support vector machine, considering sociodemographic factors, health habits, Parkinson's symptoms, sleep behavior disorders, and neuropsychiatric indicators as predictors, and to provide baseline data for identifying DPD. METHODS This study analyzed 223 of 335 patients who were 60 years or older with PD. Depression was measured using the 30 items of the Geriatric Depression Scale, and the explanatory variables included PD-related motor signs, rapid eye movement sleep behavior disorders, and neuropsychological tests. The support vector machine was used to develop a DPD prediction model. RESULTS When the effects of PD motor symptoms were compared using "functional weight", late motor complications (occurrence of levodopa-induced dyskinesia) were the most influential risk factors for Parkinson's symptoms. CONCLUSION It is necessary to develop customized screening tests that can detect DPD in the early stage and to continuously monitor high-risk groups based on the factors related to DPD derived from this predictive model, in order to maintain the emotional health of PD patients.