Journal Articles
15,808 articles found
1. Model Change Active Learning in Graph-Based Semi-supervised Learning
Authors: Kevin S. Miller, Andrea L. Bertozzi. Communications on Applied Mathematics and Computation, EI, 2024, No. 2, pp. 1270-1298 (29 pages)
Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier. A challenge is to identify which points to label to best improve performance while limiting the number of new labels. "Model Change" active learning quantifies the resulting change incurred in the classifier by introducing the additional label(s). We pair this idea with graph-based semi-supervised learning (SSL) methods that use the spectrum of the graph Laplacian matrix, which can be truncated to avoid prohibitively large computational and storage costs. We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution. We show a variety of multiclass examples that illustrate improved performance over the prior state of the art.
Keywords: Active learning; Graph-based methods; semi-supervised learning (SSL); Graph Laplacian
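For orientation, the spectral truncation mentioned in the abstract can be sketched in a few lines of NumPy/SciPy: build a similarity-graph Laplacian and keep only its M smallest eigenpairs. This is a generic illustration on synthetic blobs, not the authors' implementation; the kernel width and M are arbitrary placeholders.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Toy data: two Gaussian blobs standing in for an unlabeled point cloud.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

# Gaussian similarity graph and its unnormalized Laplacian L = D - W.
W = np.exp(-cdist(X, X) ** 2 / 2.0)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Truncated spectrum: keep the M smallest eigenpairs instead of all N,
# which is what keeps storage and downstream updates tractable.
M = 10
eigvals, eigvecs = np.linalg.eigh(L)
eigvals, eigvecs = eigvals[:M], eigvecs[:, :M]
print(eigvals.round(3))
```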
2. A Novel Graph Structure Learning Based Semi-Supervised Framework for Anomaly Identification in Fluctuating IoT Environment
Authors: Weijian Song, Xi Li, Peng Chen, Juan Chen, Jianhua Ren, Yunni Xia. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, No. 9, pp. 3001-3016 (16 pages)
With the rapid development of Internet of Things (IoT) technology, IoT systems have been widely applied in healthcare, transportation, home, and other fields. However, as IoT systems continue to expand in scale and grow in complexity, their stability and security issues have become increasingly prominent. Thus, it is crucial to detect anomalies in the IoT time series collected from various sensors. Recently, deep learning models have been leveraged for IoT anomaly detection. However, owing to the challenges associated with data labeling, most IoT anomaly detection methods resort to unsupervised learning techniques. Nevertheless, the absence of accurate abnormal information in unsupervised learning methods limits their performance. To address these problems, we propose AS-GCN-MTM, an adaptive-structure Graph Convolutional Network (GCN)-based framework using a mean-teacher mechanism for anomaly identification. It performs better than unsupervised methods using only a small amount of labeled data. Mean Teacher is an effective semi-supervised learning method that utilizes unlabeled data during training to improve the generalization ability and performance of the model. However, the dependencies between data are often unknown in time series data. To solve this problem, we designed a graph structure adaptive learning layer based on neural networks, which can automatically learn the graph structure from time series data. It not only better captures the relationships between nodes but also enhances the model's performance by augmenting key data. Experiments have demonstrated that our method improves the best F1 value of the baseline model by 10.4%, 36.1%, and 5.6% on three real datasets with a 10% data labeling rate.
Keywords: IoT; multivariate time series; anomaly detection; graph learning; semi-supervised; mean teachers
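A graph-structure learning layer of the kind the abstract describes is often realized as trainable node embeddings whose pairwise similarities become edge weights. The PyTorch sketch below shows that generic pattern under assumed dimensions; the actual AS-GCN-MTM layer may differ.

```python
import torch
import torch.nn as nn

class AdaptiveGraphLayer(nn.Module):
    """Learns a dense adjacency over sensors from trainable node embeddings."""
    def __init__(self, num_nodes: int, emb_dim: int = 16):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self) -> torch.Tensor:
        # Pairwise similarity of embeddings, row-normalized into edge weights.
        scores = self.emb @ self.emb.T
        return torch.softmax(torch.relu(scores), dim=-1)

adj = AdaptiveGraphLayer(num_nodes=8)()
print(adj.shape, float(adj[0].sum()))  # (8, 8); each row sums to 1
```

The learned adjacency would then feed a GCN over the multivariate time series; because the embeddings are ordinary parameters, the graph structure is trained end-to-end with the rest of the model.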
3. Decentralized Semi-Supervised Learning for Stochastic Configuration Networks Based on the Mean Teacher Method
Authors: Kaijing Li, Wu Ai. Journal of Computer and Communications, 2024, No. 4, pp. 247-261 (15 pages)
The aim of this paper is to broaden the application of Stochastic Configuration Networks (SCN) in the semi-supervised domain by utilizing the unlabeled data that is common in daily life. This can enhance the classification accuracy of decentralized SCN algorithms while effectively protecting user privacy. To this end, we propose a decentralized semi-supervised learning algorithm for SCN, called DMT-SCN, which introduces teacher and student models and combines the idea of consistency regularization to improve the response speed of model iterations. To reduce the possible negative impact of unsupervised data on the model, we deliberately change the way noise is added to the unlabeled data. Simulation results show that the algorithm can effectively utilize unlabeled data to improve the classification accuracy of SCN training and is robust under different simulation environments.
Keywords: Stochastic Neural Network; Consistency Regularization; semi-supervised learning; Decentralized learning
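The mean-teacher/consistency-regularization idea underlying DMT-SCN can be illustrated generically: the teacher's weights are an exponential moving average (EMA) of the student's, and a consistency loss aligns their predictions on perturbed unlabeled data. The sketch below uses a plain linear layer as a stand-in for the SCN and an assumed EMA rate of 0.99.

```python
import copy
import torch
import torch.nn.functional as F

student = torch.nn.Linear(10, 3)          # stand-in for the SCN classifier
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)               # teacher is never trained directly

def ema_update(teacher, student, alpha=0.99):
    # Teacher weights track an exponential moving average of student weights.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s.detach(), alpha=1 - alpha)

x_unlabeled = torch.randn(32, 10)
noisy = x_unlabeled + 0.1 * torch.randn_like(x_unlabeled)  # perturbed view

# Consistency loss: student predictions on the noisy view should match
# teacher predictions on the clean view (no labels needed).
loss = F.mse_loss(student(noisy).softmax(-1), teacher(x_unlabeled).softmax(-1))
loss.backward()                           # an optimizer step would follow here
with torch.no_grad():
    ema_update(teacher, student)
```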
4. Enhancing Deep Learning Soil Moisture Forecasting Models by Integrating Physics-based Models (Cited: 1)
Authors: Lu LI, Yongjiu DAI, Zhongwang WEI, Wei SHANGGUAN, Nan WEI, Yonggen ZHANG, Qingliang LI, Xian-Xiang LI. Advances in Atmospheric Sciences, SCIE/CAS/CSCD, 2024, No. 7, pp. 1326-1341 (16 pages)
Accurate soil moisture (SM) prediction is critical for understanding hydrological processes. Physics-based (PB) models exhibit large uncertainties in SM predictions arising from uncertain parameterizations and insufficient representation of land-surface processes. In addition to PB models, deep learning (DL) models have been widely used in SM predictions recently. However, few pure DL models have notably high success rates due to lacking physical information. Thus, we developed hybrid models to effectively integrate the outputs of PB models into DL models to improve SM predictions. To this end, we first developed a hybrid model based on the attention mechanism to take advantage of PB models at each forecast time scale (attention model). We further built an ensemble model that combined the advantages of different hybrid schemes (ensemble model). We utilized SM forecasts from the Global Forecast System to enhance the convolutional long short-term memory (ConvLSTM) model for 1-16 days of SM predictions. The performances of the proposed hybrid models were investigated and compared with two existing hybrid models. The results showed that the attention model could leverage the benefits of PB models and achieved the best predictability of drought events among the different hybrid models. Moreover, the ensemble model performed best among all hybrid models at all forecast time scales and under different soil conditions. Notably, the ensemble model outperformed the pure DL model over 79.5% of in situ stations for 16-day predictions. These findings suggest that our proposed hybrid models can adequately exploit the benefits of PB model outputs to aid DL models in making SM predictions.
Keywords: soil moisture forecasting; hybrid model; deep learning; ConvLSTM; attention mechanism
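One generic way to realize the attention-based fusion described here is scaled dot-product attention in which the DL model's hidden state queries a set of physics-based (PB) forecast features. The PyTorch sketch below is a schematic under assumed feature dimensions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PBAttention(nn.Module):
    """Weights PB forecast features by their relevance to the DL state."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, dl_state, pb_feats):
        # dl_state: (B, dim); pb_feats: (B, n_pb_models, dim)
        q = self.q(dl_state).unsqueeze(1)                       # (B, 1, dim)
        attn = torch.softmax(q @ self.k(pb_feats).transpose(1, 2)
                             / pb_feats.size(-1) ** 0.5, dim=-1)
        return (attn @ self.v(pb_feats)).squeeze(1)             # (B, dim)

fused = PBAttention()(torch.randn(4, 16), torch.randn(4, 3, 16))
print(fused.shape)  # (4, 16): PB information fused into the DL state
```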
5. Federated Learning Model for Auto Insurance Rate Setting Based on Tweedie Distribution (Cited: 1)
Authors: Tao Yin, Changgen Peng, Weijie Tan, Dequan Xu, Hanlin Tang. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, No. 1, pp. 827-843 (17 pages)
In the assessment of car insurance claims, the claim rate for car insurance presents a highly skewed probability distribution, which is typically modeled using the Tweedie distribution. The traditional approach to obtaining a Tweedie regression model involves training on a centralized dataset; when the data is provided by multiple parties, training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge. To address this issue, this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting in data silos. The algorithm can keep sensitive data local and uses privacy-preserving techniques to achieve intersection operations between the two parties holding the data. After determining which entities are shared, the participants train the model locally using the shared entity data to obtain the intermediate parameters of the local generalized linear model. Homomorphic encryption algorithms are introduced to exchange and update the intermediate model parameters to collaboratively complete the joint training of the car insurance rate-setting model. Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of data from both parties without exchanging data. The assessment results of the scheme approach those of a Tweedie regression model learned from centralized data, and outperform those of a Tweedie regression model learned independently by a single party.
Keywords: Rate setting; Tweedie distribution; generalized linear models; federated learning; homomorphic encryption
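As a centralized (non-federated) reference point for the GLM being federated here, scikit-learn's TweedieRegressor fits the same family: a power parameter strictly between 1 and 2 gives the compound Poisson-gamma case typical of insurance claim data. The data below are synthetic, so the fitted coefficients are illustrative only.

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                 # toy rating factors
mu = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1])
# Zero-inflated, right-skewed target mimicking per-policy claim cost:
# most policies claim nothing; the rest draw a gamma-distributed amount.
y = np.where(rng.random(500) < 0.7, 0.0, rng.gamma(2.0, mu))

# power in (1, 2) selects the compound Poisson-gamma Tweedie family;
# the log link keeps predicted claim rates positive.
glm = TweedieRegressor(power=1.5, link="log", alpha=0.0)
glm.fit(X, y)
print(glm.coef_.round(3))
```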
6. Use of machine learning models for the prognostication of liver transplantation: A systematic review (Cited: 2)
Authors: Gidion Chongo, Jonathan Soldera. World Journal of Transplantation, 2024, No. 1, pp. 164-188 (25 pages)
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Keywords: Liver transplantation; Machine learning models; Prognostication; Allograft allocation; Artificial intelligence
7. Effectiveness of hybrid ensemble machine learning models for landslide susceptibility analysis: Evidence from Shimla district of North-west Indian Himalayan region
Authors: SHARMA Aastha, SAJJAD Haroon, RAHAMAN Md Hibjur, SAHA Tamal Kanti, BHUYAN Nirsobha. Journal of Mountain Science, SCIE/CSCD, 2024, No. 7, pp. 2368-2393 (26 pages)
The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper attempts to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area was found under the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. The factors, namely average annual rainfall, slope, lithology, soil texture and earthquake magnitude, have been identified as the influencing factors for very high landslide susceptibility. Soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
Keywords: Landslide susceptibility; Site-specific factors; Machine learning models; Hybrid ensemble learning; Geospatial techniques; Himalayan region
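The bagging-ensemble construction evaluated here (e.g., B-RF) follows the standard recipe of training base learners on bootstrap resamples and aggregating their votes. A minimal scikit-learn sketch with arbitrary hyperparameters and synthetic data, not the paper's configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the landslide inventory with site-specific factors.
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagging around a base classifier: one base model per bootstrap resample
# of the training data, predictions aggregated by probability averaging.
brf = BaggingClassifier(RandomForestClassifier(n_estimators=50),
                        n_estimators=10, random_state=0)
brf.fit(Xtr, ytr)
print("ROC-AUC:", round(roc_auc_score(yte, brf.predict_proba(Xte)[:, 1]), 3))
```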
8. Model Agnostic Meta-Learning (MAML)-Based Ensemble Model for Accurate Detection of Wheat Diseases Using Vision Transformer and Graph Neural Networks
Authors: Yasir Maqsood, Syed Muhammad Usman, Musaed Alhussein, Khursheed Aurangzeb, Shehzad Khalid, Muhammad Zubair. Computers, Materials & Continua, SCIE/EI, 2024, No. 5, pp. 2795-2811 (17 pages)
Wheat is a critical crop, extensively consumed worldwide, and its production enhancement is essential to meet escalating demand. The presence of diseases like stem rust, leaf rust, yellow rust, and tan spot significantly diminishes wheat yield, making the early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises due to the scarcity of RGB images for multiple diseases, class imbalance in existing public datasets, and the difficulty in extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized architecture of Vision Transformers (ViT), where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks. This paper also proposes a Model Agnostic Meta Learning (MAML)-based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model performs better in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included for detection, along with other modalities such as pests and weeds.
Keywords: Wheat disease detection; deep learning; vision transformer; graph neural network; model agnostic meta learning
9. Cybernet Model: A New Deep Learning Model for Cyber DDoS Attacks Detection and Recognition
Authors: Azar Abid Salih, Maiwan Bahjat Abdulrazaq. Computers, Materials & Continua, SCIE/EI, 2024, No. 1, pp. 1275-1295 (21 pages)
Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep Learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and analyzing extensive data volumes lead to the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., fewer than 225,000, hence the name "lightweight." This not only helps reduce the number of computations required but also results in faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), which makes them unique compared to earlier existing architectures and results in better performance measures. To validate their robustness and effectiveness, they were tested on the CIC-DDoS2019 dataset, which is an imbalanced and large dataset that contains different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score. Furthermore, they outperformed the existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cybersecurity research domains to successfully identify different types of attacks with a high detection and recognition rate.
Keywords: Deep learning; CNN; LSTM; Cybernet model; DDoS recognition
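The parallel 1D-CNN/LSTM feature extraction described in the abstract can be sketched generically in PyTorch: both branches read the same traffic-feature sequence and their outputs are concatenated before the classification head. Layer sizes below are placeholders, not the Cybernet configuration.

```python
import torch
import torch.nn as nn

class ParallelCNNLSTM(nn.Module):
    """Two branches over the same packet-feature sequence, fused by concat."""
    def __init__(self, n_features: int = 20, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # pool the time axis to one vector
        )
        self.lstm = nn.LSTM(n_features, 32, batch_first=True)
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, x):                               # x: (batch, time, features)
        c = self.cnn(x.transpose(1, 2)).squeeze(-1)     # Conv1d wants (B, C, T)
        _, (h, _) = self.lstm(x)                        # last hidden state
        return self.head(torch.cat([c, h[-1]], dim=-1)) # parallel fusion

logits = ParallelCNNLSTM()(torch.randn(4, 50, 20))
print(logits.shape)  # (4, 2)
```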
10. Advancing automated pupillometry: a practical deep learning model utilizing infrared pupil images
Authors: Dai Guangzheng, Yu Sile, Liu Ziming, Yan Hairu, He Xingru. 国际眼科杂志 (International Eye Science), CAS, 2024, No. 10, pp. 1522-1528 (7 pages)
AIM: To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings. METHODS: A total of 188 patients from the outpatient clinic at He Eye Specialist Shenyang Hospital from September to December 2022 were included, and 13470 infrared pupil images were collected for the study. All infrared images for pupil segmentation were labeled using the Labelme software. The computation of pupil diameter is divided into four steps: image pre-processing, pupil identification and localization, pupil segmentation, and diameter calculation. Two major models are used in the computation process: the modified YoloV3 and DeeplabV3+ models, which must be trained beforehand. RESULTS: The test dataset included 1348 infrared pupil images. On the test dataset, the modified YoloV3 model had a detection rate of 99.98% and an average precision (AP) of 0.80 for pupils. The DeeplabV3+ model achieved a background intersection over union (IOU) of 99.23%, a pupil IOU of 93.81%, and a mean IOU of 96.52%. The pupil diameters in the test dataset ranged from 20 to 56 pixels, with a mean of 36.06±6.85 pixels. The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels, with a mean absolute error (MAE) of 1.06±0.96 pixels. CONCLUSION: This study successfully demonstrates a robust infrared image-based pupil diameter measurement algorithm, proven to be highly accurate and reliable for clinical application.
Keywords: Pupil; infrared image; algorithm; deep learning model
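The final diameter-calculation step of the described pipeline reduces to simple geometry once a binary pupil mask is available. In the sketch below a synthetic circular mask stands in for DeeplabV3+ output, and an IoU helper mirrors the reported segmentation metric.

```python
import numpy as np

# Synthetic binary pupil mask standing in for a DeeplabV3+ prediction.
h, w, r = 128, 128, 18
yy, xx = np.mgrid[:h, :w]
mask = ((yy - 64) ** 2 + (xx - 64) ** 2 <= r ** 2).astype(np.uint8)

# Equivalent-circle diameter from the segmented area: area = pi * (d/2)^2.
area = mask.sum()
diameter_px = 2.0 * np.sqrt(area / np.pi)
print(round(diameter_px, 2))  # close to the true diameter 2*r = 36 pixels

def iou(a, b):
    """Intersection over union between two binary masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

print(iou(mask, mask))  # 1.0 for identical masks
```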
11. Construction and evaluation of a liver cancer risk prediction model based on machine learning
Authors: Ying-Ying Wang, Wan-Xia Yang, Qia-Jun Du, Zhen-Hua Liu, Ming-Hua Lu, Chong-Ge You. World Journal of Gastrointestinal Oncology, SCIE, 2024, No. 9, pp. 3839-3850 (12 pages)
BACKGROUND: Liver cancer is one of the most prevalent malignant tumors worldwide, and its early detection and treatment are crucial for enhancing patient survival rates and quality of life. However, the early symptoms of liver cancer are often not obvious, resulting in a late-stage diagnosis in many patients, which significantly reduces the effectiveness of treatment. Developing a highly targeted, widely applicable, and practical risk prediction model for liver cancer is crucial for enhancing the early diagnosis and long-term survival rates among affected individuals. AIM: To develop a liver cancer risk prediction model by employing machine learning techniques, and subsequently assess its performance. METHODS: In this study, a total of 550 patients were enrolled, with 190 hepatocellular carcinoma (HCC) and 195 cirrhosis patients serving as the training cohort, and 83 HCC and 82 cirrhosis patients forming the validation cohort. Logistic regression (LR), support vector machine (SVM), random forest (RF), and least absolute shrinkage and selection operator (LASSO) regression models were developed in the training cohort. Model performance was assessed in the validation cohort. Additionally, this study conducted a comparative evaluation of the diagnostic efficacy between the ASAP model and the model developed in this study using receiver operating characteristic curve, calibration curve, and decision curve analysis (DCA) to determine the optimal predictive model for assessing liver cancer risk. RESULTS: Six variables, including age, white blood cell, red blood cell, and platelet counts, alpha-fetoprotein, and protein induced by vitamin K absence or antagonist II levels, were used to develop the LR, SVM, RF, and LASSO regression models. The RF model exhibited superior discrimination, and the area under the curve of the training and validation sets was 0.969 and 0.858, respectively. These values significantly surpassed those of the LR (0.850 and 0.827), SVM (0.860 and 0.803), LASSO regression (0.845 and 0.831), and ASAP (0.866 and 0.813) models. Furthermore, calibration and DCA indicated that the RF model exhibited robust calibration and clinical validity. CONCLUSION: The RF model demonstrated excellent prediction capabilities for HCC and can facilitate early diagnosis of HCC in clinical practice.
Keywords: Hepatocellular carcinoma; Cirrhosis; Prediction model; Machine learning; Random forest
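The modeling-and-evaluation loop described (fit several classifiers, compare validation AUC) looks roughly as follows in scikit-learn; the six features here are synthetic stand-ins for the clinical variables, so the numbers are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Six synthetic predictors standing in for age, blood counts, AFP, PIVKA-II.
X, y = make_classification(n_samples=550, n_features=6, n_informative=4,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y,
                                      random_state=0)

# Fit each candidate model and compare held-out discrimination by AUC.
for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("RF", RandomForestClassifier(n_estimators=200,
                                                  random_state=0))]:
    model.fit(Xtr, ytr)
    auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```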
12. A deep learning fusion model for accurate classification of brain tumours in Magnetic Resonance images
Authors: Nechirvan Asaad Zebari, Chira Nadheef Mohammed, Dilovan Asaad Zebari, Mazin Abed Mohammed, Diyar Qader Zeebaree, Haydar Abdulameer Marhoon, Karrar Hameed Abdulkareem, Seifedine Kadry, Wattana Viriyasitavat, Jan Nedoma, Radek Martinek. CAAI Transactions on Intelligence Technology, SCIE/EI, 2024, No. 4, pp. 790-804 (15 pages)
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods have yet to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. Magnetic Resonance Imaging is a vital component of medical diagnosis, and it requires precise, careful, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
Keywords: brain tumour; deep learning; feature fusion model; MRI images; multi-classification
13. Development and validation of a machine learning-based early prediction model for massive intraoperative bleeding in patients with primary hepatic malignancies
Authors: Jin Li, Yu-Ming Jia, Zhi-Lei Zhang, Cheng-Yu Liu, Zhan-Wu Jiang, Zhi-Wei Hao, Li Peng. World Journal of Gastrointestinal Oncology, SCIE, 2024, No. 1, pp. 90-101 (12 pages)
BACKGROUND: Surgical resection remains the primary treatment for hepatic malignancies, and intraoperative bleeding is associated with a significantly increased risk of death. Therefore, accurate prediction of intraoperative bleeding risk in patients with hepatic malignancies is essential to preventing bleeding in advance and providing safer and more effective treatment. AIM: To develop a predictive model for intraoperative bleeding in primary hepatic malignancy patients for improving surgical planning and outcomes. METHODS: The retrospective analysis enrolled patients diagnosed with primary hepatic malignancies who underwent surgery at the Hepatobiliary Surgery Department of the Fourth Hospital of Hebei Medical University between 2010 and 2020. Logistic regression analysis was performed to identify potential risk factors for intraoperative bleeding. A prediction model was developed using the Python programming language, and its accuracy was evaluated using receiver operating characteristic (ROC) curve analysis. RESULTS: Among 406 primary liver cancer patients, 16.0% (65/406) suffered massive intraoperative bleeding. Logistic regression analysis identified four variables as associated with intraoperative bleeding in these patients: ascites [odds ratio (OR): 22.839; P < 0.05], history of alcohol consumption (OR: 2.950; P < 0.015), TNM staging (OR: 2.441; P < 0.001), and albumin-bilirubin score (OR: 2.361; P < 0.001). These variables were used to construct the prediction model. The 406 patients were randomly assigned to a training set (70%) and a prediction set (30%). The area under the ROC curve values for the model's ability to predict intraoperative bleeding were 0.844 in the training set and 0.80 in the prediction set. CONCLUSION: The developed and validated model predicts significant intraoperative blood loss in primary hepatic malignancies by considering four preoperative clinical factors: ascites, history of alcohol consumption, TNM staging, and albumin-bilirubin score. Consequently, this model holds promise for enhancing individualised surgical planning.
Keywords: Primary liver cancer; Intraoperative bleeding; Machine learning model
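The reported odds ratios come from exponentiating logistic-regression coefficients. Below is a sketch with synthetic stand-ins for the four named predictors (ascites, alcohol history, TNM stage, ALBI score); the generating coefficients are arbitrary, so the recovered odds ratios are illustrative, not the study's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
# Synthetic stand-ins for the four predictors named in the abstract.
X = np.column_stack([rng.integers(0, 2, n),        # ascites (0/1)
                     rng.integers(0, 2, n),        # alcohol history (0/1)
                     rng.integers(1, 5, n),        # TNM stage (1-4)
                     rng.normal(0, 1, n)])         # ALBI score (standardized)
logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2] + 0.6 * X[:, 3] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
# exp(coefficient) converts log-odds effects into the odds ratios reported.
print(np.exp(model.coef_).round(2))
```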
14. Autonomous Vehicle Platoons in Urban Road Networks: A Joint Distributed Reinforcement Learning and Model Predictive Control Approach
Authors: Luigi D'Alfonso, Francesco Giannini, Giuseppe Franzè, Giuseppe Fedele, Francesco Pupo, Giancarlo Fortino. IEEE/CAA Journal of Automatica Sinica, SCIE/EI/CSCD, 2024, No. 1, pp. 141-156 (16 pages)
In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. Such tasks are here combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit for improving its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand, it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure, involving the SUMO and MATLAB platforms, is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally, by considering as operating scenario a real entire city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed with the aim of highlighting the main features of the proposed approach. Moreover, it is important to underline that in different operating scenarios the proposed reinforcement learning scheme is capable of significantly reducing traffic congestion phenomena when compared with well-reputed competitors.
Keywords: Distributed model predictive control; distributed reinforcement learning; routing decisions; urban road networks
15. Establishing and clinically validating a machine learning model for predicting unplanned reoperation risk in colorectal cancer
Authors: Li-Qun Cai, Da-Qing Yang, Rong-Jian Wang, He Huang, Yi-Xiong Shi. World Journal of Gastroenterology, SCIE/CAS, 2024, No. 23, pp. 2991-3004 (14 pages)
BACKGROUND: Colorectal cancer significantly impacts global health, with unplanned reoperations post-surgery being key determinants of patient outcomes. Existing predictive models for these reoperations lack precision in integrating complex clinical data. AIM: To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients. METHODS: Data of patients treated for colorectal cancer (n = 2044) at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected. Patients were divided into an experimental group (n = 60) and a control group (n = 1984) according to unplanned reoperation occurrence. Patients were also divided into a training group and a validation group (7:3 ratio). We used three different machine learning methods to screen characteristic variables. A nomogram was created based on multifactor logistic regression, and the model performance was assessed using receiver operating characteristic curve, calibration curve, Hosmer-Lemeshow test, and decision curve analysis. The risk scores of the two groups were calculated and compared to validate the model. RESULTS: More patients in the experimental group were ≥ 60 years old, male, and had a history of hypertension, laparotomy, and hypoproteinemia, compared to the control group. Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation (P < 0.05): Prognostic Nutritional Index value, history of laparotomy, hypertension, or stroke, hypoproteinemia, age, tumor-node-metastasis staging, surgical time, gender, and American Society of Anesthesiologists classification. Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility. CONCLUSION: This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer, which can improve treatment decisions and prognosis.
Keywords: Colorectal cancer; Postoperative unplanned reoperation; Unplanned reoperation; Clinical validation; Nomogram; Machine learning models
16. Prognostic prediction models for postoperative patients with stage I to III colorectal cancer based on machine learning
Authors: Xiao-Lin Ji, Shuo Xu, Xiao-Yu Li, Jin-Huan Xu, Rong-Shuang Han, Ying-Jie Guo, Li-Ping Duan, Zi-Bin Tian. World Journal of Gastrointestinal Oncology, SCIE, 2024, No. 12, pp. 4597-4613 (17 pages)
BACKGROUND: Colorectal cancer (CRC) is characterized by high heterogeneity, aggressiveness, and high morbidity and mortality rates. With machine learning (ML) algorithms, patient, tumor, and treatment features can be used to develop and validate models for predicting survival. In addition, important variables can be screened and different applications can be provided that could serve as vital references when making clinical decisions and potentially improving patient outcomes in clinical settings. AIM: To construct prognostic prediction models and screen important variables for patients with stage I to III CRC. METHODS: More than 1000 postoperative CRC patients were grouped according to survival time (with cutoff values of 3 years and 5 years) and assigned to training and testing cohorts (7:3). For each 3-category survival time, predictions were made by 4 ML algorithms (all-variable and important-variable-only datasets), each of which was validated via 5-fold cross-validation and bootstrap validation. Important variables were screened with multivariable regression methods. Model performance was evaluated and compared before and after variable screening with the area under the curve (AUC). SHapley Additive exPlanations (SHAP) further demonstrated the impact of important variables on model decision-making. Nomograms were constructed for practical model application. RESULTS: Our ML models performed well; the model performance before and after important parameter identification was consistent, and variable screening was effective. The highest pre- and post-screening model AUCs (95% confidence intervals) in the testing set were 0.87 (0.81-0.92) and 0.89 (0.84-0.93) for overall survival, 0.75 (0.69-0.82) and 0.73 (0.64-0.81) for disease-free survival, 0.95 (0.88-1.00) and 0.88 (0.75-0.97) for recurrence-free survival, and 0.76 (0.47-0.95) and 0.80 (0.53-0.94) for distant metastasis-free survival. Repeated cross-validation and bootstrap validation were performed in both the training and testing datasets. The SHAP values of the important variables were consistent with the clinicopathological characteristics of patients with tumors. The nomograms were created. CONCLUSION: We constructed a comprehensive, high-accuracy, important-variable-based ML architecture for predicting the 3-category survival times. This architecture could serve as a vital reference for managing CRC patients.
Keywords: Colorectal cancer; Machine learning; Prognostic prediction model; Survival times; Important variables
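The SHAP analysis mentioned is typically read off a fitted tree ensemble with the shap package's TreeExplainer; the mean absolute SHAP value per column gives a variable-importance ranking. Toy regression data below; the shap package is assumed to be installed, and none of the variables correspond to the study's features.

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=8, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features;
# averaging |SHAP value| over samples ranks the variables by importance.
sv = shap.TreeExplainer(model).shap_values(X)   # (n_samples, n_features)
print(np.abs(sv).mean(axis=0).round(2))
```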
17. Production Capacity Prediction Method of Shale Oil Based on Machine Learning Combination Model
Authors: Qin Qian, Mingjing Lu, Anhai Zhong, Feng Yang, Wenjun He, Min Li. Energy Engineering, EI, 2024, No. 8, pp. 2167-2190 (24 pages)
The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay involving geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, pose challenges for accurate description through physical models. While field data provides insights into real-world effects, its limited volume and quality restrict its utility. Complementing this, numerical simulation models offer effective support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of swiftly responding to shale oil development performance under varying geological, fluid, and well conditions. The results of numerical experiments show that the proposed method demonstrates a notable enhancement in R2 of 22.5% and 5.8% compared to singular machine learning models like SVM and BPNN, showcasing its superior precision in predicting shale oil production capacity across diverse datasets.
Keywords: Shale oil production capacity; data-driven model; model-driven method; machine learning
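A simple instance of an SVM/BPNN combination model is to fit both regressors and blend their predictions; the equal weighting below is an assumption for illustration, not necessarily the paper's integration scheme, and the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

svm = SVR(kernel="rbf").fit(Xtr, ytr)
bpnn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)

# Combination: average the two models' predictions; the weights could
# instead be tuned on a validation split.
y_combo = 0.5 * svm.predict(Xte) + 0.5 * bpnn.predict(Xte)
for name, pred in [("SVM", svm.predict(Xte)), ("BPNN", bpnn.predict(Xte)),
                   ("Combo", y_combo)]:
    print(f"{name}: R2 = {r2_score(yte, pred):.3f}")
```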
18. The Extreme Machine Learning Actuarial Intelligent Agricultural Insurance Based Automated Underwriting Model
Authors: Brighton Mahohoho. Open Journal of Statistics, 2024, No. 5, pp. 598-633 (36 pages)
The paper presents an innovative approach to agricultural insurance underwriting and risk pricing through the development of an Extreme Machine Learning (ELM) Actuarial Intelligent Model. This model integrates diverse datasets, including climate change scenarios, crop types, farm sizes, and various risk factors, to automate underwriting decisions and estimate loss reserves in agricultural insurance. The study conducts extensive exploratory data analysis, model building, feature engineering, and validation to demonstrate the effectiveness of the proposed approach. Additionally, the paper discusses the application of robustness tests, stress tests, and scenario tests to assess the model's resilience and adaptability to changing market conditions. Overall, the research contributes to advancing actuarial science in agricultural insurance by leveraging advanced machine learning techniques for enhanced risk management and decision-making.
Keywords: Extreme Machine Learning; Actuarial Underwriting; Machine Learning Intelligent Model; Agricultural Insurance
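The "extreme" learner invoked in the title is conventionally the extreme learning machine: a random, untrained hidden layer followed by a single closed-form least-squares solve for the output weights. The sketch below uses toy features standing in for the actuarial inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # toy risk features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # toy underwriting label

# ELM: the hidden layer weights are drawn at random and never trained;
# only the output weights are fitted, via one least-squares solve.
n_hidden = 50
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                           # fixed hidden activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form output weights

pred = (np.tanh(X @ W + b) @ beta > 0.5).astype(float)
print("train accuracy:", (pred == y).mean())
```

The appeal of this recipe is speed: there is no iterative backpropagation, so training cost is dominated by a single linear solve.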
19. Unified deep learning model for predicting fundus fluorescein angiography image from fundus structure image
Authors: Yiwei Chen, Yi He, Hong Ye, Lina Xing, Xin Zhang, Guohua Shi. Journal of Innovative Optical Health Sciences, SCIE/EI/CSCD, 2024, No. 3, pp. 105-113 (9 pages)
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks using a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
Keywords: Fundus fluorescein angiography image; fundus structure image; image translation; unified deep learning model; generative adversarial networks
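A supervised (paired) GAN for image-to-image translation, as in the pix2pix baseline cited here, trains the generator on an adversarial term plus a pixel-wise reconstruction term. The stub networks and the weight 100.0 below are placeholders for illustration; real models would use U-Net/PatchGAN-style architectures.

```python
import torch
import torch.nn as nn

# Stub networks; shapes only, not realistic architectures.
G = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1))   # structure image -> FFA
D = nn.Sequential(nn.Conv2d(2, 1, 3, padding=1))   # judges (input, FFA) pairs

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
x = torch.randn(2, 1, 64, 64)       # fundus structure image (toy tensor)
y = torch.randn(2, 1, 64, 64)       # paired ground-truth FFA (toy tensor)

fake = G(x)
d_fake = D(torch.cat([x, fake], dim=1))
# Supervised generator loss: fool the discriminator while staying close to
# the real FFA pixel-wise; the scalar weight balances the two terms.
g_loss = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, y)
g_loss.backward()
print(float(g_loss))
```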
20. Predictive value of machine learning models for lymph node metastasis in gastric cancer: A two-center study
Authors: Tong Lu, Miao Lu, Dong Wu, Yuan-Yuan Ding, Hao-Nan Liu, Tao-Tao Li, Da-Qing Song. World Journal of Gastrointestinal Surgery, SCIE, 2024, No. 1, pp. 85-94 (10 pages)
BACKGROUND: Gastric cancer is one of the most common malignant tumors of the digestive system, ranking sixth in incidence and fourth in mortality worldwide. Since 42.5% of metastatic lymph nodes in gastric cancer belong to the nodule type and peripheral type, the application of imaging diagnosis is restricted. AIM: To establish models for predicting the risk of lymph node metastasis in gastric cancer patients using machine learning (ML) algorithms and to evaluate their predictive performance in clinical practice. METHODS: Data of a total of 369 patients who underwent radical gastrectomy at the Department of General Surgery of the Affiliated Hospital of Xuzhou Medical University (Xuzhou, China) from March 2016 to November 2019 were collected and retrospectively analyzed as the training group. In addition, data of 123 patients who underwent radical gastrectomy at the Department of General Surgery of Jining First People's Hospital (Jining, China) were collected and analyzed as the verification group. Seven ML models, including decision tree, random forest, support vector machine (SVM), gradient boosting machine, naive Bayes, neural network, and logistic regression, were developed to evaluate the occurrence of lymph node metastasis in patients with gastric cancer. The ML models were established following ten cross-validation iterations using the training dataset, and subsequently, each model was assessed using the test dataset. The models' performance was evaluated by comparing the area under the receiver operating characteristic curve of each model. RESULTS: Among the seven ML models, all except SVM exhibited higher accuracy and reliability, and the influences of various risk factors on the models are intuitive. CONCLUSION: The ML models developed exhibit strong predictive capabilities for lymph node metastasis in gastric cancer, which can aid in personalized clinical diagnosis and treatment.
Keywords: Machine learning; Prediction model; Gastric cancer; Lymph node metastasis