Journal Articles — 96 articles found
1. Machine learning applications in stroke medicine: advancements, challenges, and future prospectives (Cited: 3)
Authors: Mario Daidone, Sergio Ferrantelli, Antonino Tuttolomondo. Neural Regeneration Research (SCIE, CAS, CSCD), 2024, Issue 4, pp. 769-773.
Abstract: Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation, and to explore the future perspectives these techniques can provide in combating this disabling disease.
Keywords: cerebrovascular disease; deep learning; machine learning; reinforcement learning; stroke; stroke therapy; supervised learning; unsupervised learning
2. A Machine Learning Approach to Cyberbullying Detection in Arabic Tweets
Authors: Dhiaa Musleh, Atta Rahman, Mohammed Abbas Alkherallah, Menhal Kamel Al-Bohassan, Mustafa Mohammed Alawami, Hayder Ali Alsebaa, Jawad Ali Alnemer, Ghazi Fayez Al-Mutairi, May Issa Aldossary, Dalal A. Aldowaihi, Fahd Alhaidari. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1033-1054.
Abstract: With the rapid growth of internet usage, a new situation has been created that enables the practice of bullying. Cyberbullying has increased over the past decade, and it has the same adverse effects as face-to-face bullying, like anger, sadness, anxiety, and fear. With the anonymity people get on the internet, they tend to be more aggressive and express their emotions freely without considering the effects, which can be a reason for the increase in cyberbullying and is the main motive behind the current study. This study presents a thorough background of cyberbullying and the techniques used to collect, preprocess, and analyze the datasets. Moreover, a comprehensive review of the literature has been conducted to identify research gaps and effective techniques and practices in cyberbullying detection in various languages, and it was deduced that there is significant room for improvement in the Arabic language. As a result, the current study focuses on the investigation of shortlisted machine learning algorithms in natural language processing (NLP) for the classification of Arabic datasets duly collected from Twitter (also known as X). In this regard, support vector machine (SVM), naive Bayes (NB), random forest (RF), logistic regression (LR), bootstrap aggregating (Bagging), gradient boosting (GBoost), light gradient boosting machine (LightGBM), adaptive boosting (AdaBoost), and extreme gradient boosting (XGBoost) were shortlisted and investigated due to their effectiveness in similar problems. Finally, the scheme was evaluated by well-known performance measures such as accuracy, precision, recall, and F1-score. Consequently, XGBoost exhibited the best performance with 89.95% accuracy, which is promising compared to the state of the art.
Keywords: supervised machine learning; ensemble learning; cyberbullying; Arabic tweets; NLP
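The evaluation measures named in the abstract above (accuracy, precision, recall, F1-score) can be computed directly from predicted and reference labels. This is an illustrative from-scratch sketch, not code from the paper:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall and F1 for a binary labelling task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

In practice a library such as scikit-learn provides the same metrics, but the hand computation makes the trade-off between precision (penalizing false alarms) and recall (penalizing misses) explicit.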
3. A Hybrid Learning Algorithm for Breast Cancer Diagnosis
Authors: Alio Boubacar Goga, Harouna Naroua, Chaibou Kadri. Journal of Intelligent Learning Systems and Applications, 2024, Issue 3, pp. 262-273.
Abstract: In many fields, particularly that of health, the diagnosis of diseases is a very difficult task to carry out. Therefore, early detection of diseases using artificial intelligence tools can be of paramount importance in the medical field. In this study, we propose an intelligent system capable of performing diagnoses for radiologists. The support system is designed to evaluate mammographic images, thereby classifying normal and abnormal patients. The proposed method (DiagBC, for Breast Cancer Diagnosis) combines two intelligent unsupervised learning algorithms (the C-means clustering algorithm and the Gaussian mixture model) for the segmentation of medical images with a supervised learning algorithm (a modified DenseNet) for the diagnosis of breast images. Ultimately, a prototype of the proposed system was implemented for the Magori Polyclinic in Niamey (Niger), making it possible to classify breast cancer into two classes: normal and abnormal.
Keywords: image diagnosis; segmentation; DenseNet; unsupervised learning; supervised learning; breast cancer
4. Transfer Learning Approach to Classify the X-Ray Image that Corresponds to Corona Disease Using ResNet50 Pre-Trained by CheXNet
Authors: Mahyar Bolhassani. Journal of Intelligent Learning Systems and Applications, 2024, Issue 2, pp. 80-90.
Abstract: The COVID-19 pandemic has had a widespread negative impact globally. It shares symptoms with other respiratory illnesses such as pneumonia and influenza, making rapid and accurate diagnosis essential to treat individuals and halt further transmission. X-ray imaging of the lungs is one of the most reliable diagnostic tools. Utilizing deep learning, we can train models to recognize the signs of infection, thus aiding in the identification of COVID-19 cases. For our project, we developed a deep learning model utilizing the ResNet50 architecture, pre-trained with the ImageNet and CheXNet datasets. We tackled the challenge of an imbalanced dataset, the CoronaHack Chest X-Ray dataset provided by Kaggle, through both binary and multi-class classification approaches. Additionally, we evaluated the performance impact of using focal loss versus cross-entropy loss in our model.
Keywords: X-ray classification; convolutional neural network; ResNet; transfer learning; supervised learning; COVID-19; chest X-ray
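The abstract above compares focal loss with cross-entropy loss for an imbalanced dataset. A minimal scalar sketch of both (per-example, binary case; `gamma` and `alpha` are the standard focal-loss hyperparameters, not values from this paper):

```python
import math

def cross_entropy(p, y):
    """Binary cross-entropy for predicted probability p of the positive class."""
    p = min(max(p, 1e-12), 1 - 1e-12)  # clamp to avoid log(0)
    return -math.log(p) if y == 1 else -math.log(1 - p)

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy, well-classified examples so that
    training focuses on hard examples from the minority class."""
    p = min(max(p, 1e-12), 1 - 1e-12)
    pt = p if y == 1 else 1 - p          # probability assigned to the true class
    a = alpha if y == 1 else 1 - alpha   # class-balancing weight
    return -a * (1 - pt) ** gamma * math.log(pt)
```

The `(1 - pt) ** gamma` factor is what distinguishes focal loss: for a confidently correct prediction it shrinks the loss far more than for an uncertain one.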
5. Survey on AI and Machine Learning Techniques for Microgrid Energy Management Systems (Cited: 2)
Authors: Aditya Joshi, Skieler Capezza, Ahmad Alhaji, Mo-Yuen Chow. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, Issue 7, pp. 1513-1529.
Abstract: In the era of an energy revolution, grid decentralization has emerged as a viable solution to meet the increasing global energy demand by incorporating renewables at the distributed level. Microgrids are considered a driving component for accelerating grid decentralization. To optimally utilize the available resources and address potential challenges, there is a need for an intelligent and reliable energy management system (EMS) for the microgrid. The artificial intelligence field has the potential to address the problems in EMS and can provide resilient, efficient, reliable, and scalable solutions. This paper presents an overview of existing conventional and AI-based techniques for energy management systems in microgrids. We analyze EMS methods for centralized, decentralized, and distributed microgrids separately. Then, we summarize machine learning techniques such as ANNs, federated learning, LSTMs, RNNs, and reinforcement learning for EMS objectives such as economic dispatch, optimal power flow, and scheduling. With the incorporation of AI, microgrids can achieve greater performance efficiency and more reliability in managing a large number of energy resources. However, challenges such as data privacy, security, scalability, and explainability need to be addressed. To conclude, the authors state possible future research directions to explore the potential of AI-based EMS in real-world applications.
Keywords: consensus; energy management system (EMS); reinforcement learning; supervised learning
6. Investigation of Android Malware with Machine Learning Classifiers using Enhanced PCA Algorithm (Cited: 1)
Authors: V. Joseph Raymond, R. Jeberson Retna Raj. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 3, pp. 2147-2163.
Abstract: Android devices are widely available in the commercial market at different price levels for various levels of customers. The Android stack is more vulnerable compared to other platforms because of its open-source nature. There are many Android malware detection techniques available to exploit the source code and find associated components during execution time. To obtain better results, we create a hybrid technique merging static and dynamic analysis. In the first part of this paper, we propose a technique that checks for correlation between features and classifies using a supervised learning approach, thereby avoiding the multicollinearity problem, which is one of the drawbacks of existing systems. In the proposed work, a novel PCA (principal component analysis) based feature reduction technique is implemented with conditional dependency features by gathering the functionalities of the application, which adds novelty to the given approach. Android sensitive permissions are one major key point to be considered while detecting malware. We select vulnerable columns based on features like sensitive permissions, application program interface calls, services requested through the kernel, and the relationships between the variables; we then build the model using machine learning classifiers and identify whether a given application is malicious or benign. The final goal of this paper is to evaluate benchmark datasets collected from various repositories like VirusShare, GitHub, and the Canadian Institute for Cybersecurity, and to compare models, ensuring zero-day exploits can be monitored and detected with a better accuracy rate.
Keywords: zero-day exploit; hybrid analysis; principal component analysis; supervised learning; smart cities
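The abstract above screens for correlated features to avoid multicollinearity before classification. A minimal sketch of that screening step using the Pearson correlation coefficient (the threshold value is an assumption for illustration, not taken from the paper):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length feature columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def highly_correlated(columns, threshold=0.9):
    """Return index pairs of feature columns whose |r| exceeds the threshold;
    one member of each pair is a candidate for removal or for PCA reduction."""
    pairs = []
    for i in range(len(columns)):
        for j in range(i + 1, len(columns)):
            if abs(pearson_r(columns[i], columns[j])) > threshold:
                pairs.append((i, j))
    return pairs
```

Flagged pairs carry nearly redundant information, which is exactly the condition PCA-based feature reduction is meant to resolve.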
7. A progressive surrogate gradient learning for memristive spiking neural network
Authors: 王姝, 陈涛, 龚钰, 孙帆, 申思远, 段书凯, 王丽丹. Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, Issue 6, pp. 689-697.
Abstract: In recent years, spiking neural networks (SNNs) have received increasing research attention in the field of artificial intelligence due to their high biological plausibility, low energy consumption, and abundant spatio-temporal information. However, the non-differentiable spike activity makes SNNs more difficult to train in a supervised manner. Most existing methods focus on introducing an approximate derivative to replace it, but they are often based on static surrogate functions. In this paper, we propose progressive surrogate gradient learning for backpropagation in SNNs, which approximates the step function gradually and reduces information loss. Furthermore, memristor crossbar arrays are used to speed up calculation and reduce system energy consumption, owing to their hardware advantages. The proposed algorithm is evaluated on both static and neuromorphic datasets using fully connected and convolutional network architectures, and the experimental results indicate that our approach performs well compared with previous research.
Keywords: spiking neural network; surrogate gradient; supervised learning; memristor crossbar array
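The core difficulty the abstract describes is that the spike (step) activation has no useful derivative. A minimal sketch of the surrogate-gradient idea, assuming a sigmoid-shaped surrogate with a sharpness parameter `k`; the paper's actual surrogate family and its progressive schedule are not reproduced here:

```python
import math

def spike(v, v_th=1.0):
    """Forward pass: non-differentiable Heaviside spike activation."""
    return 1.0 if v >= v_th else 0.0

def surrogate_grad(v, v_th=1.0, k=2.0):
    """Backward pass: smooth sigmoid-derivative surrogate for d(spike)/dv.
    Increasing k over training sharpens the surrogate toward the true step,
    which is the 'progressive' idea described in the abstract."""
    s = 1.0 / (1.0 + math.exp(-k * (v - v_th)))
    return k * s * (1.0 - s)
```

During backpropagation the forward pass still emits binary spikes, while gradients flow through `surrogate_grad`, which peaks at the threshold and vanishes far from it.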
8. Radar emitter signal recognition method based on improved collaborative semi-supervised learning
Authors: JIN Tao, ZHANG Xindong. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, Issue 5, pp. 1182-1190.
Abstract: Rare labeled data are difficult to recognize using conventional methods in the process of radar emitter recognition. To solve this problem, an optimized cooperative semi-supervised learning radar emitter recognition method based on a small amount of labeled data is developed. First, a small amount of labeled data is randomly sampled using the bootstrap method, and the loss functions of three common deep learning networks are improved: the uniform distribution and the cross-entropy function are combined to reduce the overconfidence of softmax classification. Subsequently, the dataset obtained after sampling is used to train the three improved networks and build the initial model. In addition, the unlabeled data are preliminarily screened through dynamic time warping (DTW) and then input into the previously trained initial model for judgment. If the judgment results of two or more networks are consistent, the unlabeled data are labeled and added to the labeled dataset. Lastly, the three network models are trained on the labeled dataset, and the final model is built. As revealed by the simulation results, the semi-supervised learning method adopted in this paper is capable of exploiting a small amount of labeled data and basically achieves the accuracy of fully labeled recognition.
Keywords: emitter signal identification; time series; bootstrap; semi-supervised learning; cross-entropy function; homogenization; dynamic time warping (DTW)
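The abstract above screens unlabeled sequences with dynamic time warping (DTW) before passing them to the networks. A minimal from-scratch DTW distance for 1-D sequences (illustrative; the paper's screening criteria are not reproduced here):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences: the minimum
    cumulative |difference| over all monotone alignments of the two series."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: insertion, deletion, or match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Unlike Euclidean distance, DTW tolerates local time shifts, which is why a stretched copy of a signal still scores as identical.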
9. A machine learning approach for accelerated design of magnesium alloys. Part B: Regression and property prediction
Authors: M. Ghorbani, M. Boley, P.N.H. Nakashima, N. Birbilis. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2023, Issue 11, pp. 4197-4205.
Abstract: Machine learning (ML) models provide great opportunities to accelerate novel material development, offering a virtual alternative to laborious and resource-intensive empirical methods. In this work, the second of a two-part study, an ML approach is presented that offers accelerated digital design of Mg alloys. A systematic evaluation of four ML regression algorithms was explored to rationalise the complex relationships in Mg-alloy data and to capture the composition-processing-property patterns. Cross-validation and hold-out set validation techniques were utilised for unbiased estimation of model performance. Using atomic and thermodynamic properties of the alloys, feature augmentation was examined to define the most descriptive representation spaces for the alloy data. Additionally, a graphical user interface (GUI) webtool was developed to facilitate the use of the proposed models in predicting the mechanical properties of new Mg alloys. The results demonstrate that the random forest regression model and the neural network are robust models for predicting the ultimate tensile strength and ductility of Mg alloys, with accuracies of ~80% and ~70%, respectively. The models developed in this work are a step towards high-throughput screening of novel candidates for target mechanical properties and provide ML-guided alloy design.
Keywords: magnesium alloys; digital alloy design; supervised machine learning; regression models; prediction performance
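The abstract above relies on cross-validation for unbiased performance estimation. A minimal sketch of shuffled k-fold index generation (generic technique, not the authors' code; `k` and `seed` are illustrative):

```python
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffled k-fold split: yields (train_idx, test_idx) pairs so that every
    sample appears in exactly one test fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)        # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]   # round-robin assignment to folds
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Averaging a model's score over the k test folds gives a less optimistic estimate than a single train/test split, which is the point of the validation scheme the abstract describes.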
10. CoLM^(2)S: Contrastive self-supervised learning on attributed multiplex graph network with multi-scale information
Authors: Beibei Han, Yingmei Wei, Qingyong Wang, Shanshan Wan. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, Issue 4, pp. 1464-1479.
Abstract: Contrastive self-supervised representation learning on attributed graph networks with graph neural networks has attracted considerable research interest recently. However, two challenges remain. First, most real-world systems comprise multiple relations, where entities are linked by different types of relations and each relation is a view of the graph network. Second, the rich multi-scale information (structure-level and feature-level) of the graph network can serve as self-supervised signals, which are not fully exploited. A novel contrastive self-supervised representation learning framework on attributed multiplex graph networks with multi-scale information (named CoLM^(2)S) is presented in this study. It mainly contains two components: intra-relation contrastive learning and inter-relation contrastive learning. Specifically, a contrastive self-supervised representation learning framework on attributed single-layer graph networks with multi-scale information (CoLMS), using a graph convolutional network as the encoder to capture intra-relation information with multi-scale structure-level and feature-level self-supervised signals, is introduced first. The structure-level information includes the edge structure and sub-graph structure, and the feature-level information represents the outputs of different graph convolutional layers. Second, according to the consensus assumption among inter-relations, the CoLM^(2)S framework is proposed to jointly learn the various graph relations in the attributed multiplex graph network and achieve global consensus node embeddings. The proposed method can fully distil the graph information. Extensive experiments on unsupervised node clustering and graph visualisation tasks demonstrate the effectiveness of our methods, and it outperforms existing competitive baselines.
Keywords: attributed multiplex graph network; contrastive self-supervised learning; graph representation learning; multi-scale information
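Contrastive objectives of the kind the abstract describes typically score one positive view against several negatives. A minimal InfoNCE-style loss for a single anchor, given precomputed similarity scores (generic sketch under that assumption; `tau` is the usual temperature hyperparameter, not a value from the paper):

```python
import math

def info_nce(sim_pos, sim_negs, tau=0.5):
    """InfoNCE-style contrastive loss for one anchor: low when the positive
    view is scored well above all negative views, high otherwise."""
    num = math.exp(sim_pos / tau)
    den = num + sum(math.exp(s / tau) for s in sim_negs)
    return -math.log(num / den)
```

In a framework like the one described, the positive view would come from the same node under another relation or scale, and the negatives from other nodes.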
11. Optimizing Power Allocation for D2D Communication with URLLC under Rician Fading Channel: A Learning-to-Optimize Approach
Authors: Owais Muhammad, Hong Jiang, Mushtaq Muhammad Umer, Bilal Muhammad, Naeem Muhammad Ahtsam. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 9, pp. 3193-3212.
Abstract: To meet the high-performance requirements of fifth-generation (5G) and sixth-generation (6G) wireless networks, ultra-reliable and low-latency communication (URLLC) is considered one of the most important communication scenarios in a wireless network. In this paper, we consider the effects of the Rician fading channel on the performance of cooperative device-to-device (D2D) communication with URLLC. For better performance, we maximize and examine the system's minimal rate of D2D communication. Due to the interference in D2D communication, the problem of maximizing the minimum rate becomes non-convex and difficult to solve. To solve this problem, a learning-to-optimize-based algorithm is proposed to find the optimal power allocation. The conventional branch and bound (BB) algorithm is used to learn the optimal pruning policy with supervised learning. Ensemble learning is used to train the multiple classifiers. To address the class imbalance, we used a supervised undersampling technique. Comparisons are made with the conventional BB algorithm and a heuristic algorithm. The outcome of the simulation demonstrates a notable improvement in power consumption. The proposed algorithm has significantly lower computational complexity and runs faster than the conventional BB algorithm and the heuristic algorithm.
Keywords: D2D; URLLC; Rician fading; supervised learning
12. A Systematic Literature Review of Deep Learning Algorithms for Segmentation of the COVID-19 Infection
Authors: Shroog Alshomrani, Muhammad Arif, Mohammed A. Al Ghamdi. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5717-5742.
Abstract: Coronavirus has infected more than 753 million people, with severity varying from one person to another, and more than six million infected people have died worldwide. Computer-aided diagnosis (CAD) with artificial intelligence (AI) has shown outstanding performance in effectively diagnosing this virus in real time. Computed tomography is a complementary diagnostic tool for clarifying the damage of COVID-19 in the lungs even before symptoms appear in patients. This paper conducts a systematic literature review of deep learning methods for segmenting COVID-19 infection in the lungs, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow method. This research aims to systematically analyze the supervised deep learning methods, open-resource datasets, data augmentation methods, and loss functions used for the various segment shapes of COVID-19 infection in computed tomography (CT) chest images. We selected 56 primary studies relevant to the topic of the paper and compared different aspects of the algorithms used to segment infected areas in the CT images. Deep learning approaches to segmenting infected areas still need further development to predict smaller regions of infection at the beginning of their appearance.
Keywords: COVID-19; segmentation; chest CT images; deep learning; systematic review; 2D and 3D supervised deep learning
13. Design of N-11-Azaartemisinins Potentially Active against Plasmodium falciparum by Combined Molecular Electrostatic Potential, Ligand-Receptor Interaction and Models Built with Supervised Machine Learning Methods
Authors: Jeferson Stiver Oliveira de Castro, José Ciríaco Pinheiro, Sílvia Simone dos Santos de Morais, Heriberto Rodrigues Bitencourt, Antonio Florêncio de Figueiredo, Marcos Antonio Barros dos Santos, Fábio dos Santos Gil, Ana Cecília Barbosa Pinheiro. Journal of Biophysical Chemistry (CAS), 2023, Issue 1, pp. 1-29.
Abstract: N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). The optimization of molecular structures was performed using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate the key structural features required for biological activity and the likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods allowed the separation of the investigated compounds into two classes, cha and cla, with the properties ε(LUMO+1) (the energy one level above the lowest unoccupied molecular orbital), d(C6-C5) (the distance between the C6 and C5 atoms in the ligands), and TSA (total surface area) responsible for the classification. The insights extracted from the investigation, together with chemical intuition, enabled the design of sixteen new N-11-azaartemisinins (the prediction set), to which the models built with supervised machine learning methods were then applied. The result of this application revealed twelve new promising N-11-azaartemisinins for synthesis and biological evaluation.
Keywords: antimalarial design; MEP; ligand-receptor interaction; models built with supervised machine learning methods
14. Supervised Learning Algorithm on Unstructured Documents for the Classification of Job Offers: Case of Cameroun
Authors: Fritz Sosso Makembe, Roger Atsa Etoundi, Hippolyte Tapamo. Journal of Computer and Communications, 2023, Issue 2, pp. 75-88.
Abstract: Nowadays, in data science, supervised learning algorithms are frequently used to perform text classification. However, African textual data, in general, have been studied very little using these methods. This article notes the particularity of the data and measures the precision of the predictions of naive Bayes, decision tree, and SVM (support vector machine) algorithms on a corpus of computing job offers taken from the internet. This relates to the data imbalance problem in machine learning, which usually concerns the distribution of the number of documents in each class or subclass. Here, we delve deeper, to the word count distribution within a set of documents. The results are compared with those obtained on a set of French IT offers. It appears that the precision of the classification varies between 88% and 90% for French offers, against 67% at most for Cameroonian offers. The contribution of this study is twofold. First, it clearly shows that, for a similar job category, job offers on the internet in Cameroon are more unstructured than those available in France, for example. Second, it makes it possible to advance a strong hypothesis that sets of texts with a symmetrical distribution of word counts obtain better results with supervised learning algorithms.
Keywords: job offer; underemployment; text classification; imbalanced data; symmetric word distribution; supervised learning
15. Comparison of Machine Learning Methods for Satellite Image Classification: A Case Study of Casablanca Using Landsat Imagery and Google Earth Engine
Authors: Hafsa Ouchra, Abdessamad Belangour, Allae Erraissi. Journal of Environmental & Earth Sciences, 2023, Issue 2, pp. 118-134.
Abstract: Satellite image classification is crucial in various applications such as urban planning, environmental monitoring, and land use analysis. In this study, the authors present a comparative analysis of different supervised and unsupervised learning methods for satellite image classification, focusing on a case study of Casablanca using Landsat 8 imagery. This research aims to identify the most effective machine learning approach for accurately classifying land cover in an urban environment. The methodology consists of pre-processing the Landsat imagery of the city of Casablanca; the authors extract relevant features, partition them into training and test sets, and then apply the random forest (RF), support vector machine (SVM), classification and regression tree (CART), gradient tree boost (GTB), decision tree (DT), and minimum distance (MD) algorithms. Through a series of experiments, the authors evaluate the performance of each machine learning method in terms of accuracy and Kappa coefficient. This work shows that random forest is the best-performing algorithm, with an accuracy of 95.42% and a Kappa coefficient of 0.94. The authors discuss the factors affecting performance, including data characteristics, feature selection, and model choice.
Keywords: supervised learning; unsupervised learning; satellite image classification; machine learning; Google Earth Engine
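The abstract above reports both accuracy and the Kappa coefficient. Cohen's kappa corrects raw agreement for the agreement expected by chance; a minimal sketch (illustrative, not the authors' Earth Engine code):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa between predicted and reference labels:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    po = sum(1 for t, p in zip(y_true, y_pred) if t == p) / n
    # chance agreement: product of marginal label frequencies, summed over labels
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (po - pe) / (1 - pe)
```

This is why a classifier can report high accuracy on imbalanced land-cover maps yet a modest kappa: the chance-agreement term `pe` grows when one class dominates.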
16. Weakly Supervised Network with Scribble-Supervised and Edge-Mask for Road Extraction from High-Resolution Remote Sensing Images
Authors: Supeng Yu, Fen Huang, Chengcheng Fan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 549-562.
Abstract: Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research endeavors utilize weakly supervised methods. These approaches aim to reduce the expense of annotation by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and an edge mask (WSSE-net). This network is a three-branch architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the historical flaw of pseudo-labels that are not updated as the network trains, we use mixup to blend prediction results dynamically and continually update the pseudo-labels to steer network training. Our solution operates efficiently by simultaneously considering both edge-mask aid and dynamic pseudo-label support. The studies are conducted on three separate road datasets, consisting primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
Keywords: semantic segmentation; road extraction; weakly supervised learning; scribble supervision; remote sensing image
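The abstract above updates pseudo-labels by blending new predictions with the previous labels via mixup. A minimal sketch of that blending step as a per-pixel convex combination; `lam` is an assumed mixing coefficient and the paper's actual update rule may differ:

```python
def mixup_pseudo_labels(old_labels, new_preds, lam=0.7):
    """Blend the current prediction map into the previous pseudo-labels
    (per-pixel convex combination) so pseudo-labels track the network
    instead of staying frozen at their initial values."""
    return [lam * p + (1.0 - lam) * q for p, q in zip(new_preds, old_labels)]
```

Repeating this each epoch lets confident new predictions gradually overwrite stale pseudo-labels while damping single-epoch noise.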
Comparison of Two Recurrent Neural Networks for Rainfall-Runoff Modeling in the Zou River Basin at Atchérigbé (Bénin)
17
Authors: Iboukoun Eliézer Biao, Oscar Houessou, Pierre Jérôme Zohou, Adéchina Eric Alamou. Journal of Geoscience and Environment Protection, 2024, No. 9, pp. 167-181 (15 pages)
Hydrological models are developed to simulate river flows over a watershed for many practical applications in water resource management. The present paper compares the performance of two recurrent neural networks for rainfall-runoff modeling in the Zou River basin at the Atchérigbé outlet. To this end, we used daily precipitation data over the period 1988-2010 as input to the models, namely the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, to simulate river discharge in the study area. The investigated models give good results in calibration (R² = 0.888, NSE = 0.886, and RMSE = 0.42 for LSTM; R² = 0.9, NSE = 0.9, and RMSE = 0.397 for GRU) and in validation (R² = 0.865, NSE = 0.851, and RMSE = 0.329 for LSTM; R² = 0.9, NSE = 0.865, and RMSE = 0.301 for GRU). This good performance of the LSTM and GRU models confirms the value of machine-learning-based models for simulating hydrological phenomena and supporting decision-making.
Keywords: supervised learning; modeling; Zou Basin; Long Short-Term Memory; Gated Recurrent Unit; hyperparameter optimization
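The calibration and validation scores quoted above (NSE, RMSE) follow standard definitions and can be computed as below (a small sketch, not the authors' code):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; values near 0 mean the model is no better
    than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated discharge."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))
```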
Human Action Recognition Based on Supervised Class-Specific Dictionary Learning with Deep Convolutional Neural Network Features (cited by 6)
18
Author: Binjie Gu. Computers, Materials & Continua (SCIE, EI), 2020, No. 4, pp. 243-262 (20 pages)
Human action recognition in complex environments is a challenging task. Recently, sparse representation has achieved excellent results in dealing with human action recognition under varying conditions. The main idea of sparse representation classification is to construct a general classification scheme in which the training samples of each class serve as a dictionary to represent a query sample, and the minimal reconstruction error indicates its corresponding class. However, learning a discriminative dictionary is still difficult. In this work, we make two contributions. First, we build a new and robust human action recognition framework by combining a modified sparse classification model with deep convolutional neural network (CNN) features. Second, we construct a novel classification model consisting of a representation-constrained term and a coefficient-incoherence term. Experimental results on benchmark datasets show that our modified model obtains competitive results in comparison with other state-of-the-art models.
Keywords: action recognition; deep CNN features; sparse model; supervised dictionary learning
Machine learning in orthopaedic surgery (cited by 5)
19
Authors: Simon P Lalehzarian, Anirudh K Gowd, Joseph N Liu. World Journal of Orthopedics, 2021, No. 9, pp. 685-699 (15 pages)
Artificial intelligence and machine learning in orthopaedic surgery have gained broad interest over the last decade. In prior studies, researchers have demonstrated that machine learning in orthopaedics can be used for applications such as fracture detection, bone tumor diagnosis, detecting mechanical loosening of hip implants, and grading osteoarthritis. Over time, the utility of artificial intelligence and machine learning algorithms, such as deep learning, continues to grow and expand in orthopaedic surgery. The purpose of this review is to provide an understanding of the concepts of machine learning and a background on current and future orthopaedic applications of machine learning in risk assessment, outcomes assessment, imaging, and basic science. In most cases, machine learning has proven to be just as effective, if not more effective, than prior methods such as logistic regression in assessment and prediction. With the help of deep learning algorithms, such as artificial neural networks and convolutional neural networks, artificial intelligence in orthopaedics has been able to improve diagnostic accuracy and speed, flag the most critical and urgent patients for immediate attention, reduce human error, reduce the strain on medical professionals, and improve care. Because machine learning has shown diagnostic and prognostic utility in orthopaedic surgery, physicians should continue to research these techniques and be trained to use them effectively in order to improve orthopaedic treatment.
Keywords: artificial intelligence; machine learning; supervised learning; unsupervised learning; deep learning; orthopaedic surgery
Development of a depression in Parkinson's disease prediction model using machine learning (cited by 9)
20
Author: Haewon Byeon. World Journal of Psychiatry (SCIE), 2020, No. 10, pp. 234-244 (11 pages)
BACKGROUND: It is important to diagnose depression in Parkinson's disease (DPD) as soon as possible and to identify the predictors of depression in order to improve quality of life in Parkinson's disease (PD) patients. AIM: To develop a model for predicting DPD based on the support vector machine, considering sociodemographic factors, health habits, Parkinson's symptoms, sleep behavior disorders, and neuropsychiatric indicators as predictors, and to provide baseline data for identifying DPD. METHODS: This study analyzed 223 of 335 patients aged 60 years or older with PD. Depression was measured using the 30-item Geriatric Depression Scale, and the explanatory variables included PD-related motor signs, rapid eye movement sleep behavior disorders, and neuropsychological tests. The support vector machine was used to develop a DPD prediction model. RESULTS: When the effects of PD motor symptoms were compared using "functional weight", late motor complications (occurrence of levodopa-induced dyskinesia) were the most influential risk factors for Parkinson's symptoms. CONCLUSION: It is necessary to develop customized screening tests that can detect DPD at an early stage and to continuously monitor high-risk groups, based on the DPD-related factors derived from this predictive model, in order to maintain the emotional health of PD patients.
Keywords: depression in Parkinson's disease; supervised machine learning; neuropsychological test; risk factor; support vector machine; rapid eye movement sleep behavior disorders
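The support vector machine at the core of such a prediction model works by finding a maximum-margin separating boundary between the two label groups. A minimal linear-SVM trainer via sub-gradient descent on the hinge loss is sketched below (toy data and names are ours; the study used a standard SVM on clinical predictors, not this code):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Toy linear SVM: sub-gradient descent on the regularized hinge loss.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) < 1:   # inside margin or misclassified
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:
                w -= lr * lam * w              # regularization shrink only
    return w, b
```

Prediction is then `np.sign(X @ w + b)`; in the DPD setting the two signs would correspond to depressed vs. non-depressed patients.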