Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning technique...Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning techniques have emerged as promising tools in stroke medicine,enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches.This abstract provides a comprehensive overview of machine learning’s applications,challenges,and future directions in stroke medicine.Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine.Machine learning models have demonstrated remarkable accuracy in imaging analysis,diagnosing stroke subtypes,risk stratifications,guiding medical treatment,and predicting patient prognosis.Despite the tremendous potential of machine learning in stroke medicine,several challenges must be addressed.These include the need for standardized and interoperable data collection,robust model validation and generalization,and the ethical considerations surrounding privacy and bias.In addition,integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care.Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis,tailored treatment selection,and improved prognostication.Continued research and collaboration among clinicians,researchers,and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care,ultimately leading to enhanced patient outcomes and quality of life.This review aims to summarize all the current implications of machine learning in stroke diagnosis,treatment,and prognostic evaluation.At the same time,another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.展开更多
Rare labeled data are difficult to recognize by using conventional methods in the process of radar emitter recogni-tion.To solve this problem,an optimized cooperative semi-supervised learning radar emitter recognition...Rare labeled data are difficult to recognize by using conventional methods in the process of radar emitter recogni-tion.To solve this problem,an optimized cooperative semi-supervised learning radar emitter recognition method based on a small amount of labeled data is developed.First,a small amount of labeled data are randomly sampled by using the bootstrap method,loss functions for three common deep learning net-works are improved,the uniform distribution and cross-entropy function are combined to reduce the overconfidence of softmax classification.Subsequently,the dataset obtained after sam-pling is adopted to train three improved networks so as to build the initial model.In addition,the unlabeled data are preliminarily screened through dynamic time warping(DTW)and then input into the initial model trained previously for judgment.If the judg-ment results of two or more networks are consistent,the unla-beled data are labeled and put into the labeled data set.Lastly,the three network models are input into the labeled dataset for training,and the final model is built.As revealed by the simula-tion results,the semi-supervised learning method adopted in this paper is capable of exploiting a small amount of labeled data and basically achieving the accuracy of labeled data recognition.展开更多
Contrastive self‐supervised representation learning on attributed graph networks with Graph Neural Networks has attracted considerable research interest recently.However,there are still two challenges.First,most of t...Contrastive self‐supervised representation learning on attributed graph networks with Graph Neural Networks has attracted considerable research interest recently.However,there are still two challenges.First,most of the real‐word system are multiple relations,where entities are linked by different types of relations,and each relation is a view of the graph network.Second,the rich multi‐scale information(structure‐level and feature‐level)of the graph network can be seen as self‐supervised signals,which are not fully exploited.A novel contrastive self‐supervised representation learning framework on attributed multiplex graph networks with multi‐scale(named CoLM^(2)S)information is presented in this study.It mainly contains two components:intra‐relation contrast learning and interrelation contrastive learning.Specifically,the contrastive self‐supervised representation learning framework on attributed single‐layer graph networks with multi‐scale information(CoLMS)framework with the graph convolutional network as encoder to capture the intra‐relation information with multi‐scale structure‐level and feature‐level selfsupervised signals is introduced first.The structure‐level information includes the edge structure and sub‐graph structure,and the feature‐level information represents the output of different graph convolutional layer.Second,according to the consensus assumption among inter‐relations,the CoLM^(2)S framework is proposed to jointly learn various graph relations in attributed multiplex graph network to achieve global consensus node embedding.The proposed method can fully distil the graph information.Extensive experiments on unsupervised node clustering and graph visualisation tasks demonstrate the effectiveness of our methods,and it outperforms existing competitive baselines.展开更多
Nowadays, in data science, supervised learning algorithms are frequently used to perform text classification. However, African textual data, in general, have been studied very little using these methods. This article ...Nowadays, in data science, supervised learning algorithms are frequently used to perform text classification. However, African textual data, in general, have been studied very little using these methods. This article notes the particularity of the data and measures the level of precision of predictions of naive Bayes algorithms, decision tree, and SVM (Support Vector Machine) on a corpus of computer jobs taken on the internet. This is due to the data imbalance problem in machine learning. However, this problem essentially focuses on the distribution of the number of documents in each class or subclass. Here, we delve deeper into the problem to the word count distribution in a set of documents. The results are compared with those obtained on a set of French IT offers. It appears that the precision of the classification varies between 88% and 90% for French offers against 67%, at most, for Cameroonian offers. The contribution of this study is twofold. Indeed, it clearly shows that, in a similar job category, job offers on the internet in Cameroon are more unstructured compared to those available in France, for example. Moreover, it makes it possible to emit a strong hypothesis according to which sets of texts having a symmetrical distribution of the number of words obtain better results with supervised learning algorithms.展开更多
N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning m...N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). The optimization of molecular structures was performed using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate key structural features required for biological activities and likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods allowed the separation of the investigated compounds into two classes: cha and cla, with the properties ε<sub>LUMO+1</sub> (one level above lowest unoccupied molecular orbital energy), d(C<sub>6</sub>-C<sub>5</sub>) (distance between C<sub>6</sub> and C<sub>5</sub> atoms in ligands), and TSA (total surface area) responsible for the classification. The insights extracted from the investigation developed and the chemical intuition enabled the design of sixteen new N-11-azaartemisinins (prediction set), moreover, models built with supervised machine learning methods were applied to this prediction set. The result of this application showed twelve new promising N-11-azaartemisinins for synthesis and biological evaluation.展开更多
Significant advancements have been achieved in road surface extraction based on high-resolution remote sensingimage processing. Most current methods rely on fully supervised learning, which necessitates enormous human...Significant advancements have been achieved in road surface extraction based on high-resolution remote sensingimage processing. Most current methods rely on fully supervised learning, which necessitates enormous humaneffort to label the image. Within this field, other research endeavors utilize weakly supervised methods. Theseapproaches aim to reduce the expenses associated with annotation by leveraging sparsely annotated data, such asscribbles. This paper presents a novel technique called a weakly supervised network using scribble-supervised andedge-mask (WSSE-net). This network is a three-branch network architecture, whereby each branch is equippedwith a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generatingedge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise themodel’s training by employing scribble labels and spreading scribble information throughout the image. To addressthe historical flaw that created pseudo-labels that are not updated with network training, we use mixup to blendprediction results dynamically and continually update new pseudo-labels to steer network training. Our solutiondemonstrates efficient operation by simultaneously considering both edge-mask aid and dynamic pseudo-labelsupport. The studies are conducted on three separate road datasets, which consist primarily of high-resolutionremote-sensing satellite photos and drone images. The experimental findings suggest that our methodologyperforms better than advanced scribble-supervised approaches and specific traditional fully supervised methods.展开更多
This study proposes a supervised learning method that does not rely on labels.We use variables associated with the label as indirect labels,and construct an indirect physics-constrained loss based on the physical mech...This study proposes a supervised learning method that does not rely on labels.We use variables associated with the label as indirect labels,and construct an indirect physics-constrained loss based on the physical mechanism to train the model.In the training process,the model prediction is mapped to the space of value that conforms to the physical mechanism through the projection matrix,and then the model is trained based on the indirect labels.The final prediction result of the model conforms to the physical mechanism between indirect label and label,and also meets the constraints of the indirect label.The present study also develops projection matrix normalization and prediction covariance analysis to ensure that the model can be fully trained.Finally,the effect of the physics-constrained indirect supervised learning is verified based on a well log generation problem.展开更多
In the era of an energy revolution,grid decentralization has emerged as a viable solution to meet the increasing global energy demand by incorporating renewables at the distributed level.Microgrids are considered a dr...In the era of an energy revolution,grid decentralization has emerged as a viable solution to meet the increasing global energy demand by incorporating renewables at the distributed level.Microgrids are considered a driving component for accelerating grid decentralization.To optimally utilize the available resources and address potential challenges,there is a need to have an intelligent and reliable energy management system(EMS)for the microgrid.The artificial intelligence field has the potential to address the problems in EMS and can provide resilient,efficient,reliable,and scalable solutions.This paper presents an overview of existing conventional and AI-based techniques for energy management systems in microgrids.We analyze EMS methods for centralized,decentralized,and distributed microgrids separately.Then,we summarize machine learning techniques such as ANNs,federated learning,LSTMs,RNNs,and reinforcement learning for EMS objectives such as economic dispatch,optimal power flow,and scheduling.With the incorporation of AI,microgrids can achieve greater performance efficiency and more reliability for managing a large number of energy resources.However,challenges such as data privacy,security,scalability,explainability,etc.,need to be addressed.To conclude,the authors state the possible future research directions to explore AI-based EMS's potential in real-world applications.展开更多
A method that applies clustering technique to reduce the number of samples of large data sets using input-output clustering is proposed.The proposed method clusters the output data into groups and clusters the input d...A method that applies clustering technique to reduce the number of samples of large data sets using input-output clustering is proposed.The proposed method clusters the output data into groups and clusters the input data in accordance with the groups of output data.Then,a set of prototypes are selected from the clustered input data.The inessential data can be ultimately discarded from the data set.The proposed method can reduce the effect from outliers because only the prototypes are used.This method is applied to reduce the data set in regression problems.Two standard synthetic data sets and three standard real-world data sets are used for evaluation.The root-mean-square errors are compared from support vector regression models trained with the original data sets and the corresponding instance-reduced data sets.From the experiments,the proposed method provides good results on the reduction and the reconstruction of the standard synthetic and real-world data sets.The numbers of instances of the synthetic data sets are decreased by 25%-69%.The reduction rates for the real-world data sets of the automobile miles per gallon and the 1990 census in CA are 46% and 57%,respectively.The reduction rate of 96% is very good for the electrocardiogram(ECG) data set because of the redundant and periodic nature of ECG signals.For all of the data sets,the regression results are similar to those from the corresponding original data sets.Therefore,the regression performance of the proposed method is good while only a fraction of the data is needed in the training process.展开更多
The motivation for this article is to propose new damage classifiers based on a supervised learning problem for locating and quantifying damage.A new feature extraction approach using time series analysis is introduce...The motivation for this article is to propose new damage classifiers based on a supervised learning problem for locating and quantifying damage.A new feature extraction approach using time series analysis is introduced to extract damage-sensitive features from auto-regressive models.This approach sets out to improve current feature extraction techniques in the context of time series modeling.The coefficients and residuals of the AR model obtained from the proposed approach are selected as the main features and are applied to the proposed supervised learning classifiers that are categorized as coefficient-based and residual-based classifiers.These classifiers compute the relative errors in the extracted features between the undamaged and damaged states.Eventually,the abilities of the proposed methods to localize and quantify single and multiple damage scenarios are verified by applying experimental data for a laboratory frame and a four-story steel structure.Comparative analyses are performed to validate the superiority of the proposed methods over some existing techniques.Results show that the proposed classifiers,with the aid of extracted features from the proposed feature extraction approach,are able to locate and quantify damage;however,the residual-based classifiers yield better results than the coefficient-based classifiers.Moreover,these methods are superior to some classical techniques.展开更多
This study proposes an architecture for the prediction of extremist human behaviour from projected suicide bombings.By linking‘dots’of police data comprising scattered information of people,groups,logistics,location...This study proposes an architecture for the prediction of extremist human behaviour from projected suicide bombings.By linking‘dots’of police data comprising scattered information of people,groups,logistics,locations,communication,and spatiotemporal characters on different social media groups,the proposed architecture will spawn beneficial information.This useful information will,in turn,help the police both in predicting potential terrorist events and in investigating previous events.Furthermore,this architecture will aid in the identification of criminals and their associates and handlers.Terrorism is psychological warfare,which,in the broadest sense,can be defined as the utilisation of deliberate violence for economic,political or religious purposes.In this study,a supervised learning-based approach was adopted to develop the proposed architecture.The dataset was prepared from the suicide bomb blast data of Pakistan obtained from the South Asia Terrorism Portal(SATP).As the proposed architecture was simulated,the supervised learning-based classifiers na飗e Bayes and Hoeffding Tree reached 72.17%accuracy.One of the additional benefits this study offers is the ability to predict the target audience of potential suicide bomb blasts,which may be used to eliminate future threats or,at least,minimise the number of casualties and other property losses.展开更多
Human action recognition under complex environment is a challenging work.Recently,sparse representation has achieved excellent results of dealing with human action recognition problem under different conditions.The ma...Human action recognition under complex environment is a challenging work.Recently,sparse representation has achieved excellent results of dealing with human action recognition problem under different conditions.The main idea of sparse representation classification is to construct a general classification scheme where the training samples of each class can be considered as the dictionary to express the query class,and the minimal reconstruction error indicates its corresponding class.However,how to learn a discriminative dictionary is still a difficult work.In this work,we make two contributions.First,we build a new and robust human action recognition framework by combining one modified sparse classification model and deep convolutional neural network(CNN)features.Secondly,we construct a novel classification model which consists of the representation-constrained term and the coefficients incoherence term.Experimental results on benchmark datasets show that our modified model can obtain competitive results in comparison to other state-of-the-art models.展开更多
Log-linear models and more recently neural network models used forsupervised relation extraction requires substantial amounts of training data andtime, limiting the portability to new relations and domains. To this en...Log-linear models and more recently neural network models used forsupervised relation extraction requires substantial amounts of training data andtime, limiting the portability to new relations and domains. To this end, we propose a training representation based on the dependency paths between entities in adependency tree which we call lexicalized dependency paths (LDPs). We showthat this representation is fast, efficient and transparent. We further propose representations utilizing entity types and its subtypes to refine our model and alleviatethe data sparsity problem. We apply lexicalized dependency paths to supervisedlearning using the ACE corpus and show that it can achieve similar performancelevel to other state-of-the-art methods and even surpass them on severalcategories.展开更多
In order to solve the problem of automatic defect detection and process control in the welding and arc additive process,the paper monitors the current,voltage,audio,and other data during the welding process and extrac...In order to solve the problem of automatic defect detection and process control in the welding and arc additive process,the paper monitors the current,voltage,audio,and other data during the welding process and extracts the minimum value,standard deviation,deviation from the voltage and current data.It extracts spectral features such as root mean square,spectral centroid,and zero-crossing rate from audio data,fuses the features extracted from multiple sensor signals,and establishes multiple machine learning supervised and unsupervised models.They are used to detect abnormalities in the welding process.The experimental results show that the established multiple machine learning models have high accuracy,among which the supervised learning model,the balanced accuracy of Ada boost is 0.957,and the unsupervised learning model Isolation Forest has a balanced accuracy of 0.909.展开更多
As the fundamental infrastructure of the Internet,the optical network carries a great amount of Internet traffic.There would be great financial losses if some faults happen.Therefore,fault location is very important f...As the fundamental infrastructure of the Internet,the optical network carries a great amount of Internet traffic.There would be great financial losses if some faults happen.Therefore,fault location is very important for the operation and maintenance in optical networks.Due to complex relationships among each network element in topology level,each board in network element level,and each component in board level,the con-crete fault location is hard for traditional method.In recent years,machine learning,es-pecially deep learning,has been applied to many complex problems,because machine learning can find potential non-linear mapping from some inputs to the output.In this paper,we introduce supervised machine learning to propose a complete process for fault location.Firstly,we use data preprocessing,data annotation,and data augmenta-tion in order to process original collected data to build a high-quality dataset.Then,two machine learning algorithms(convolutional neural networks and deep neural networks)are applied on the dataset.The evaluation on commercial optical networks shows that this process helps improve the quality of dataset,and two algorithms perform well on fault location.展开更多
The DNA sequences of an organism play an important influence on its transcription and translation process, thus affecting its protein production and growth rate. Due to the com-plexity of DNA, it was extremely difficu...The DNA sequences of an organism play an important influence on its transcription and translation process, thus affecting its protein production and growth rate. Due to the com-plexity of DNA, it was extremely difficult to predict the macroscopic characteristics of or-ganisms. However, with the rapid development of machine learning in recent years, it be-comes possible to use powerful machine learning algorithms to process and analyze biolog-ical data. Based on the synthetic DNA sequences of a specific microbe, <em>E. coli</em>, I designed a process to predict its protein production and growth rate. By observing the properties of a data set constructed by previous work, I chose to use supervised learning regressors with encoded DNA sequences as input features to perform the predictions. After comparing different encoders and algorithms, I selected three encoders to encode the DNA sequences as inputs and trained seven different regressors to predict the outputs. The hy-per-parameters are optimized for three regressors which have the best potential prediction performance. Finally, I successfully predicted the protein production and growth rates, with the best <em>R</em><sup><em>2</em></sup> score 0.55 and 0.77, respectively, by using encoders to catch the potential fea-tures from the DNA sequences.展开更多
In recent years, spiking neural networks(SNNs) have received increasing attention of research in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and abundant spa...In recent years, spiking neural networks(SNNs) have received increasing attention of research in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and abundant spatio-temporal information.However, the non-differential spike activity makes SNNs more difficult to train in supervised training. Most existing methods focusing on introducing an approximated derivative to replace it, while they are often based on static surrogate functions. In this paper, we propose a progressive surrogate gradient learning for backpropagation of SNNs, which is able to approximate the step function gradually and to reduce information loss. Furthermore, memristor cross arrays are used for speeding up calculation and reducing system energy consumption for their hardware advantage. The proposed algorithm is evaluated on both static and neuromorphic datasets using fully connected and convolutional network architecture, and the experimental results indicate that our approach has a high performance compared with previous research.展开更多
Machine learning(ML) models provide great opportunities to accelerate novel material development, offering a virtual alternative to laborious and resource-intensive empirical methods. In this work, the second of a two...Machine learning(ML) models provide great opportunities to accelerate novel material development, offering a virtual alternative to laborious and resource-intensive empirical methods. In this work, the second of a two-part study, an ML approach is presented that offers accelerated digital design of Mg alloys. A systematic evaluation of four ML regression algorithms was explored to rationalise the complex relationships in Mg-alloy data and to capture the composition-processing-property patterns. Cross-validation and hold-out set validation techniques were utilised for unbiased estimation of model performance. Using atomic and thermodynamic properties of the alloys, feature augmentation was examined to define the most descriptive representation spaces for the alloy data. Additionally, a graphical user interface(GUI) webtool was developed to facilitate the use of the proposed models in predicting the mechanical properties of new Mg alloys. The results demonstrate that random forest regression model and neural network are robust models for predicting the ultimate tensile strength and ductility of Mg alloys, with accuracies of ~80% and 70% respectively. The developed models in this work are a step towards high-throughput screening of novel candidates for target mechanical properties and provide ML-guided alloy design.展开更多
Android devices are popularly available in the commercial market at different price levels for various levels of customers.The Android stack is more vulnerable compared to other platforms because of its open-source na...Android devices are popularly available in the commercial market at different price levels for various levels of customers.The Android stack is more vulnerable compared to other platforms because of its open-source nature.There are many android malware detection techniques available to exploit the source code andfind associated components during execution time.To obtain a better result we create a hybrid technique merging static and dynamic processes.In this paper,in thefirst part,we have proposed a technique to check for correlation between features and classify using a supervised learning approach to avoid Mul-ticollinearity problem is one of the drawbacks in the existing system.In the proposed work,a novel PCA(Principal Component Analysis)based feature reduction technique is implemented with conditional dependency features by gathering the functionalities of the application which adds novelty for the given approach.The Android Sensitive Permission is one major key point to be considered while detecting malware.We select vulnerable columns based on features like sensitive permissions,application program interface calls,services requested through the kernel,and the relationship between the variables henceforth build the model using machine learning classifiers and identify whether the given application is malicious or benign.Thefinal goal of this paper is to check benchmarking datasets collected from various repositories like virus share,Github,and the Canadian Institute of cyber security,compare with models ensuring zero-day exploits can be monitored and detected with better accuracy rate.展开更多
To meet the high-performance requirements of fifth-generation(5G)and sixth-generation(6G)wireless networks,in particular,ultra-reliable and low-latency communication(URLLC)is considered to be one of the most important...To meet the high-performance requirements of fifth-generation(5G)and sixth-generation(6G)wireless networks,in particular,ultra-reliable and low-latency communication(URLLC)is considered to be one of the most important communication scenarios in a wireless network.In this paper,we consider the effects of the Rician fading channel on the performance of cooperative device-to-device(D2D)communication with URLLC.For better performance,we maximize and examine the system’s minimal rate of D2D communication.Due to the interference in D2D communication,the problem of maximizing the minimum rate becomes non-convex and difficult to solve.To solve this problem,a learning-to-optimize-based algorithm is proposed to find the optimal power allocation.The conventional branch and bound(BB)algorithm are used to learn the optimal pruning policy with supervised learning.Ensemble learning is used to train the multiple classifiers.To address the imbalanced problem,we used the supervised undersampling technique.Comparisons are made with the conventional BB algorithm and the heuristic algorithm.The outcome of the simulation demonstrates a notable performance improvement in power consumption.The proposed algorithm has significantly low computational complexity and runs faster as compared to the conventional BB algorithm and a heuristic algorithm.展开更多
文摘Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning techniques have emerged as promising tools in stroke medicine,enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches.This abstract provides a comprehensive overview of machine learning’s applications,challenges,and future directions in stroke medicine.Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine.Machine learning models have demonstrated remarkable accuracy in imaging analysis,diagnosing stroke subtypes,risk stratifications,guiding medical treatment,and predicting patient prognosis.Despite the tremendous potential of machine learning in stroke medicine,several challenges must be addressed.These include the need for standardized and interoperable data collection,robust model validation and generalization,and the ethical considerations surrounding privacy and bias.In addition,integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care.Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis,tailored treatment selection,and improved prognostication.Continued research and collaboration among clinicians,researchers,and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care,ultimately leading to enhanced patient outcomes and quality of life.This review aims to summarize all the current implications of machine learning in stroke diagnosis,treatment,and prognostic evaluation.At the same time,another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.
文摘Rare labeled data are difficult to recognize by using conventional methods in the process of radar emitter recogni-tion.To solve this problem,an optimized cooperative semi-supervised learning radar emitter recognition method based on a small amount of labeled data is developed.First,a small amount of labeled data are randomly sampled by using the bootstrap method,loss functions for three common deep learning net-works are improved,the uniform distribution and cross-entropy function are combined to reduce the overconfidence of softmax classification.Subsequently,the dataset obtained after sam-pling is adopted to train three improved networks so as to build the initial model.In addition,the unlabeled data are preliminarily screened through dynamic time warping(DTW)and then input into the initial model trained previously for judgment.If the judg-ment results of two or more networks are consistent,the unla-beled data are labeled and put into the labeled data set.Lastly,the three network models are input into the labeled dataset for training,and the final model is built.As revealed by the simula-tion results,the semi-supervised learning method adopted in this paper is capable of exploiting a small amount of labeled data and basically achieving the accuracy of labeled data recognition.
基金support by the National Natural Science Foundation of China(NSFC)under grant number 61873274.
文摘Contrastive self‐supervised representation learning on attributed graph networks with Graph Neural Networks has attracted considerable research interest recently.However,there are still two challenges.First,most of the real‐word system are multiple relations,where entities are linked by different types of relations,and each relation is a view of the graph network.Second,the rich multi‐scale information(structure‐level and feature‐level)of the graph network can be seen as self‐supervised signals,which are not fully exploited.A novel contrastive self‐supervised representation learning framework on attributed multiplex graph networks with multi‐scale(named CoLM^(2)S)information is presented in this study.It mainly contains two components:intra‐relation contrast learning and interrelation contrastive learning.Specifically,the contrastive self‐supervised representation learning framework on attributed single‐layer graph networks with multi‐scale information(CoLMS)framework with the graph convolutional network as encoder to capture the intra‐relation information with multi‐scale structure‐level and feature‐level selfsupervised signals is introduced first.The structure‐level information includes the edge structure and sub‐graph structure,and the feature‐level information represents the output of different graph convolutional layer.Second,according to the consensus assumption among inter‐relations,the CoLM^(2)S framework is proposed to jointly learn various graph relations in attributed multiplex graph network to achieve global consensus node embedding.The proposed method can fully distil the graph information.Extensive experiments on unsupervised node clustering and graph visualisation tasks demonstrate the effectiveness of our methods,and it outperforms existing competitive baselines.
文摘Nowadays, in data science, supervised learning algorithms are frequently used to perform text classification. However, African textual data, in general, have been studied very little using these methods. This article notes the particularity of the data and measures the level of precision of predictions of naive Bayes algorithms, decision tree, and SVM (Support Vector Machine) on a corpus of computer jobs taken on the internet. This is due to the data imbalance problem in machine learning. However, this problem essentially focuses on the distribution of the number of documents in each class or subclass. Here, we delve deeper into the problem to the word count distribution in a set of documents. The results are compared with those obtained on a set of French IT offers. It appears that the precision of the classification varies between 88% and 90% for French offers against 67%, at most, for Cameroonian offers. The contribution of this study is twofold. Indeed, it clearly shows that, in a similar job category, job offers on the internet in Cameroon are more unstructured compared to those available in France, for example. Moreover, it makes it possible to emit a strong hypothesis according to which sets of texts having a symmetrical distribution of the number of words obtain better results with supervised learning algorithms.
文摘N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). The optimization of molecular structures was performed using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate key structural features required for biological activities and likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods allowed the separation of the investigated compounds into two classes: cha and cla, with the properties ε<sub>LUMO+1</sub> (one level above lowest unoccupied molecular orbital energy), d(C<sub>6</sub>-C<sub>5</sub>) (distance between C<sub>6</sub> and C<sub>5</sub> atoms in ligands), and TSA (total surface area) responsible for the classification. The insights extracted from the investigation developed and the chemical intuition enabled the design of sixteen new N-11-azaartemisinins (prediction set), moreover, models built with supervised machine learning methods were applied to this prediction set. The result of this application showed twelve new promising N-11-azaartemisinins for synthesis and biological evaluation.
基金the National Natural Science Foundation of China(42001408,61806097).
文摘Significant advancements have been achieved in road surface extraction based on high-resolution remote sensingimage processing. Most current methods rely on fully supervised learning, which necessitates enormous humaneffort to label the image. Within this field, other research endeavors utilize weakly supervised methods. Theseapproaches aim to reduce the expenses associated with annotation by leveraging sparsely annotated data, such asscribbles. This paper presents a novel technique called a weakly supervised network using scribble-supervised andedge-mask (WSSE-net). This network is a three-branch network architecture, whereby each branch is equippedwith a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generatingedge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise themodel’s training by employing scribble labels and spreading scribble information throughout the image. To addressthe historical flaw that created pseudo-labels that are not updated with network training, we use mixup to blendprediction results dynamically and continually update new pseudo-labels to steer network training. Our solutiondemonstrates efficient operation by simultaneously considering both edge-mask aid and dynamic pseudo-labelsupport. The studies are conducted on three separate road datasets, which consist primarily of high-resolutionremote-sensing satellite photos and drone images. The experimental findings suggest that our methodologyperforms better than advanced scribble-supervised approaches and specific traditional fully supervised methods.
基金partially funded by the National Natural Science Foundation of China (Grants 51520105005 and U1663208)
文摘This study proposes a supervised learning method that does not rely on labels.We use variables associated with the label as indirect labels,and construct an indirect physics-constrained loss based on the physical mechanism to train the model.In the training process,the model prediction is mapped to the space of value that conforms to the physical mechanism through the projection matrix,and then the model is trained based on the indirect labels.The final prediction result of the model conforms to the physical mechanism between indirect label and label,and also meets the constraints of the indirect label.The present study also develops projection matrix normalization and prediction covariance analysis to ensure that the model can be fully trained.Finally,the effect of the physics-constrained indirect supervised learning is verified based on a well log generation problem.
文摘In the era of an energy revolution,grid decentralization has emerged as a viable solution to meet the increasing global energy demand by incorporating renewables at the distributed level.Microgrids are considered a driving component for accelerating grid decentralization.To optimally utilize the available resources and address potential challenges,there is a need to have an intelligent and reliable energy management system(EMS)for the microgrid.The artificial intelligence field has the potential to address the problems in EMS and can provide resilient,efficient,reliable,and scalable solutions.This paper presents an overview of existing conventional and AI-based techniques for energy management systems in microgrids.We analyze EMS methods for centralized,decentralized,and distributed microgrids separately.Then,we summarize machine learning techniques such as ANNs,federated learning,LSTMs,RNNs,and reinforcement learning for EMS objectives such as economic dispatch,optimal power flow,and scheduling.With the incorporation of AI,microgrids can achieve greater performance efficiency and more reliability for managing a large number of energy resources.However,challenges such as data privacy,security,scalability,explainability,etc.,need to be addressed.To conclude,the authors state the possible future research directions to explore AI-based EMS's potential in real-world applications.
基金supported by Chiang Mai University Research Fund under the contract number T-M5744
文摘A method that applies clustering technique to reduce the number of samples of large data sets using input-output clustering is proposed.The proposed method clusters the output data into groups and clusters the input data in accordance with the groups of output data.Then,a set of prototypes are selected from the clustered input data.The inessential data can be ultimately discarded from the data set.The proposed method can reduce the effect from outliers because only the prototypes are used.This method is applied to reduce the data set in regression problems.Two standard synthetic data sets and three standard real-world data sets are used for evaluation.The root-mean-square errors are compared from support vector regression models trained with the original data sets and the corresponding instance-reduced data sets.From the experiments,the proposed method provides good results on the reduction and the reconstruction of the standard synthetic and real-world data sets.The numbers of instances of the synthetic data sets are decreased by 25%-69%.The reduction rates for the real-world data sets of the automobile miles per gallon and the 1990 census in CA are 46% and 57%,respectively.The reduction rate of 96% is very good for the electrocardiogram(ECG) data set because of the redundant and periodic nature of ECG signals.For all of the data sets,the regression results are similar to those from the corresponding original data sets.Therefore,the regression performance of the proposed method is good while only a fraction of the data is needed in the training process.
文摘The motivation for this article is to propose new damage classifiers based on a supervised learning problem for locating and quantifying damage.A new feature extraction approach using time series analysis is introduced to extract damage-sensitive features from auto-regressive models.This approach sets out to improve current feature extraction techniques in the context of time series modeling.The coefficients and residuals of the AR model obtained from the proposed approach are selected as the main features and are applied to the proposed supervised learning classifiers that are categorized as coefficient-based and residual-based classifiers.These classifiers compute the relative errors in the extracted features between the undamaged and damaged states.Eventually,the abilities of the proposed methods to localize and quantify single and multiple damage scenarios are verified by applying experimental data for a laboratory frame and a four-story steel structure.Comparative analyses are performed to validate the superiority of the proposed methods over some existing techniques.Results show that the proposed classifiers,with the aid of extracted features from the proposed feature extraction approach,are able to locate and quantify damage;however,the residual-based classifiers yield better results than the coefficient-based classifiers.Moreover,these methods are superior to some classical techniques.
文摘This study proposes an architecture for the prediction of extremist human behaviour from projected suicide bombings.By linking‘dots’of police data comprising scattered information of people,groups,logistics,locations,communication,and spatiotemporal characters on different social media groups,the proposed architecture will spawn beneficial information.This useful information will,in turn,help the police both in predicting potential terrorist events and in investigating previous events.Furthermore,this architecture will aid in the identification of criminals and their associates and handlers.Terrorism is psychological warfare,which,in the broadest sense,can be defined as the utilisation of deliberate violence for economic,political or religious purposes.In this study,a supervised learning-based approach was adopted to develop the proposed architecture.The dataset was prepared from the suicide bomb blast data of Pakistan obtained from the South Asia Terrorism Portal(SATP).As the proposed architecture was simulated,the supervised learning-based classifiers na飗e Bayes and Hoeffding Tree reached 72.17%accuracy.One of the additional benefits this study offers is the ability to predict the target audience of potential suicide bomb blasts,which may be used to eliminate future threats or,at least,minimise the number of casualties and other property losses.
基金This research was funded by the National Natural Science Foundation of China(21878124,31771680 and 61773182).
文摘Human action recognition under complex environment is a challenging work.Recently,sparse representation has achieved excellent results of dealing with human action recognition problem under different conditions.The main idea of sparse representation classification is to construct a general classification scheme where the training samples of each class can be considered as the dictionary to express the query class,and the minimal reconstruction error indicates its corresponding class.However,how to learn a discriminative dictionary is still a difficult work.In this work,we make two contributions.First,we build a new and robust human action recognition framework by combining one modified sparse classification model and deep convolutional neural network(CNN)features.Secondly,we construct a novel classification model which consists of the representation-constrained term and the coefficients incoherence term.Experimental results on benchmark datasets show that our modified model can obtain competitive results in comparison to other state-of-the-art models.
文摘Log-linear models and more recently neural network models used forsupervised relation extraction requires substantial amounts of training data andtime, limiting the portability to new relations and domains. To this end, we propose a training representation based on the dependency paths between entities in adependency tree which we call lexicalized dependency paths (LDPs). We showthat this representation is fast, efficient and transparent. We further propose representations utilizing entity types and its subtypes to refine our model and alleviatethe data sparsity problem. We apply lexicalized dependency paths to supervisedlearning using the ACE corpus and show that it can achieve similar performancelevel to other state-of-the-art methods and even surpass them on severalcategories.
文摘In order to solve the problem of automatic defect detection and process control in the welding and arc additive process,the paper monitors the current,voltage,audio,and other data during the welding process and extracts the minimum value,standard deviation,deviation from the voltage and current data.It extracts spectral features such as root mean square,spectral centroid,and zero-crossing rate from audio data,fuses the features extracted from multiple sensor signals,and establishes multiple machine learning supervised and unsupervised models.They are used to detect abnormalities in the welding process.The experimental results show that the established multiple machine learning models have high accuracy,among which the supervised learning model,the balanced accuracy of Ada boost is 0.957,and the unsupervised learning model Isolation Forest has a balanced accuracy of 0.909.
文摘As the fundamental infrastructure of the Internet,the optical network carries a great amount of Internet traffic.There would be great financial losses if some faults happen.Therefore,fault location is very important for the operation and maintenance in optical networks.Due to complex relationships among each network element in topology level,each board in network element level,and each component in board level,the con-crete fault location is hard for traditional method.In recent years,machine learning,es-pecially deep learning,has been applied to many complex problems,because machine learning can find potential non-linear mapping from some inputs to the output.In this paper,we introduce supervised machine learning to propose a complete process for fault location.Firstly,we use data preprocessing,data annotation,and data augmenta-tion in order to process original collected data to build a high-quality dataset.Then,two machine learning algorithms(convolutional neural networks and deep neural networks)are applied on the dataset.The evaluation on commercial optical networks shows that this process helps improve the quality of dataset,and two algorithms perform well on fault location.
文摘The DNA sequences of an organism play an important influence on its transcription and translation process, thus affecting its protein production and growth rate. Due to the com-plexity of DNA, it was extremely difficult to predict the macroscopic characteristics of or-ganisms. However, with the rapid development of machine learning in recent years, it be-comes possible to use powerful machine learning algorithms to process and analyze biolog-ical data. Based on the synthetic DNA sequences of a specific microbe, <em>E. coli</em>, I designed a process to predict its protein production and growth rate. By observing the properties of a data set constructed by previous work, I chose to use supervised learning regressors with encoded DNA sequences as input features to perform the predictions. After comparing different encoders and algorithms, I selected three encoders to encode the DNA sequences as inputs and trained seven different regressors to predict the outputs. The hy-per-parameters are optimized for three regressors which have the best potential prediction performance. Finally, I successfully predicted the protein production and growth rates, with the best <em>R</em><sup><em>2</em></sup> score 0.55 and 0.77, respectively, by using encoders to catch the potential fea-tures from the DNA sequences.
基金Project supported by the Natural Science Foundation of Chongqing(Grant No.cstc2021jcyj-msxmX0565)the Fundamental Research Funds for the Central Universities(Grant No.SWU021002)the Graduate Research Innovation Project of Chongqing(Grant No.CYS22242)。
文摘In recent years, spiking neural networks(SNNs) have received increasing attention of research in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and abundant spatio-temporal information.However, the non-differential spike activity makes SNNs more difficult to train in supervised training. Most existing methods focusing on introducing an approximated derivative to replace it, while they are often based on static surrogate functions. In this paper, we propose a progressive surrogate gradient learning for backpropagation of SNNs, which is able to approximate the step function gradually and to reduce information loss. Furthermore, memristor cross arrays are used for speeding up calculation and reducing system energy consumption for their hardware advantage. The proposed algorithm is evaluated on both static and neuromorphic datasets using fully connected and convolutional network architecture, and the experimental results indicate that our approach has a high performance compared with previous research.
基金the support of the Monash-IITB Academy Scholarshipthe Australian Research Council for funding the present research (DP190103592)。
文摘Machine learning(ML) models provide great opportunities to accelerate novel material development, offering a virtual alternative to laborious and resource-intensive empirical methods. In this work, the second of a two-part study, an ML approach is presented that offers accelerated digital design of Mg alloys. A systematic evaluation of four ML regression algorithms was explored to rationalise the complex relationships in Mg-alloy data and to capture the composition-processing-property patterns. Cross-validation and hold-out set validation techniques were utilised for unbiased estimation of model performance. Using atomic and thermodynamic properties of the alloys, feature augmentation was examined to define the most descriptive representation spaces for the alloy data. Additionally, a graphical user interface(GUI) webtool was developed to facilitate the use of the proposed models in predicting the mechanical properties of new Mg alloys. The results demonstrate that random forest regression model and neural network are robust models for predicting the ultimate tensile strength and ductility of Mg alloys, with accuracies of ~80% and 70% respectively. The developed models in this work are a step towards high-throughput screening of novel candidates for target mechanical properties and provide ML-guided alloy design.
文摘Android devices are popularly available in the commercial market at different price levels for various levels of customers.The Android stack is more vulnerable compared to other platforms because of its open-source nature.There are many android malware detection techniques available to exploit the source code andfind associated components during execution time.To obtain a better result we create a hybrid technique merging static and dynamic processes.In this paper,in thefirst part,we have proposed a technique to check for correlation between features and classify using a supervised learning approach to avoid Mul-ticollinearity problem is one of the drawbacks in the existing system.In the proposed work,a novel PCA(Principal Component Analysis)based feature reduction technique is implemented with conditional dependency features by gathering the functionalities of the application which adds novelty for the given approach.The Android Sensitive Permission is one major key point to be considered while detecting malware.We select vulnerable columns based on features like sensitive permissions,application program interface calls,services requested through the kernel,and the relationship between the variables henceforth build the model using machine learning classifiers and identify whether the given application is malicious or benign.Thefinal goal of this paper is to check benchmarking datasets collected from various repositories like virus share,Github,and the Canadian Institute of cyber security,compare with models ensuring zero-day exploits can be monitored and detected with better accuracy rate.
基金supported in part by the National Natural Science Foundation of China under Grant 61771410in part by the Sichuan Science and Technology Program 2023NSFSC1373in part by Postgraduate Innovation Fund Project of SWUST 23zx7101.
文摘To meet the high-performance requirements of fifth-generation(5G)and sixth-generation(6G)wireless networks,in particular,ultra-reliable and low-latency communication(URLLC)is considered to be one of the most important communication scenarios in a wireless network.In this paper,we consider the effects of the Rician fading channel on the performance of cooperative device-to-device(D2D)communication with URLLC.For better performance,we maximize and examine the system’s minimal rate of D2D communication.Due to the interference in D2D communication,the problem of maximizing the minimum rate becomes non-convex and difficult to solve.To solve this problem,a learning-to-optimize-based algorithm is proposed to find the optimal power allocation.The conventional branch and bound(BB)algorithm are used to learn the optimal pruning policy with supervised learning.Ensemble learning is used to train the multiple classifiers.To address the imbalanced problem,we used the supervised undersampling technique.Comparisons are made with the conventional BB algorithm and the heuristic algorithm.The outcome of the simulation demonstrates a notable performance improvement in power consumption.The proposed algorithm has significantly low computational complexity and runs faster as compared to the conventional BB algorithm and a heuristic algorithm.