Objective: To explore the application effect of the flipped classroom combined with case-based learning teaching method in pharmacoeconomics teaching. Methods: Students majoring in clinical pharmacy in 2019 were selected as the study subjects, and the cost-effectiveness analysis of different dosage forms of Yinzhihuang in the treatment of neonatal jaundice was selected as the teaching case. The flipped classroom combined with case-based learning teaching method was used to carry out theoretical teaching. After the course, questionnaires were distributed through the Sojump platform to evaluate the teaching effect. Results: The questionnaire showed that 85.71% of the students believed the flipped classroom combined with case-based learning teaching method was helpful in mobilizing learning enthusiasm and initiative and in improving the comprehensive application of pharmacoeconomics knowledge, and 92.86% of the students thought it was conducive to the understanding and memorization of the learning content as well as the cultivation of teamwork and communication skills. Conclusion: The flipped classroom combined with case-based learning teaching method can improve students' knowledge mastery, thinking skills, and practical application skills, as well as optimize and improve teachers' teaching level.
Accurate soil moisture (SM) prediction is critical for understanding hydrological processes. Physics-based (PB) models exhibit large uncertainties in SM predictions arising from uncertain parameterizations and insufficient representation of land-surface processes. In addition to PB models, deep learning (DL) models have recently been widely used in SM predictions. However, few pure DL models achieve notably high success rates because they lack physical information. Thus, we developed hybrid models that effectively integrate the outputs of PB models into DL models to improve SM predictions. To this end, we first developed a hybrid model based on the attention mechanism to take advantage of PB models at each forecast time scale (attention model). We further built an ensemble model that combined the advantages of different hybrid schemes (ensemble model). We utilized SM forecasts from the Global Forecast System to enhance the convolutional long short-term memory (ConvLSTM) model for 1–16 day SM predictions. The performance of the proposed hybrid models was investigated and compared with two existing hybrid models. The results showed that the attention model could leverage the benefits of PB models and achieved the best predictability of drought events among the different hybrid models. Moreover, the ensemble model performed best among all hybrid models at all forecast time scales and under different soil conditions. Notably, the ensemble model outperformed the pure DL model at over 79.5% of in situ stations for 16-day predictions. These findings suggest that our proposed hybrid models can adequately exploit the benefits of PB model outputs to aid DL models in making SM predictions.
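The attention scheme described above can be illustrated with a minimal sketch: a score per forecast source (one DL output plus several PB-model outputs) is normalized by a softmax and used to blend the forecasts. The soil-moisture values and scores below are hypothetical; in the actual model the scores would be produced by a trained network for each forecast lead time.

```python
import math

def softmax(scores):
    """Normalize raw attention scores into weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_blend(dl_forecast, pb_forecasts, scores):
    """Blend a DL forecast with PB-model forecasts using softmax
    attention weights (one score per source)."""
    sources = [dl_forecast] + pb_forecasts
    weights = softmax(scores)
    return sum(w * f for w, f in zip(weights, sources))

# Hypothetical soil-moisture values (m^3/m^3) for one grid cell
blended = attention_blend(0.24, [0.30, 0.27], scores=[2.0, 1.0, 0.5])
```

The blended forecast always lies within the range spanned by the individual sources, which is one reason attention weighting is a convenient way to let the PB outputs correct the DL forecast without leaving physically plausible bounds.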
In the assessment of car insurance claims, the claim rate presents a highly skewed probability distribution, which is typically modeled using the Tweedie distribution. The traditional approach to obtaining a Tweedie regression model involves training on a centralized dataset; when the data are provided by multiple parties, training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge. To address this issue, this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting across data silos. The algorithm keeps sensitive data local and uses privacy-preserving techniques to compute the intersection of the entities held by the two parties. After determining which entities are shared, the participants train the model locally on the shared entity data to obtain the intermediate parameters of the local generalized linear model. Homomorphic encryption is introduced to exchange and update these intermediate parameters so as to collaboratively complete joint training of the car insurance rate-setting model. Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of data from both parties without exchanging data. The assessment results of the scheme approach those of a Tweedie regression model learned from centralized data and outperform a Tweedie regression model learned independently by a single party.
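Tweedie regression with power parameter 1 < p < 2 (the compound Poisson–gamma case, which places positive probability mass at zero claims) minimizes the Tweedie deviance. A minimal sketch of the unit deviance, independent of any federated machinery:

```python
def tweedie_deviance(y, mu, p=1.5):
    """Unit deviance of the Tweedie family for power 1 < p < 2.
    y is the observed claim amount (may be exactly zero),
    mu the predicted mean (must be positive)."""
    assert 1 < p < 2 and mu > 0 and y >= 0
    term1 = y ** (2 - p) / ((1 - p) * (2 - p)) if y > 0 else 0.0
    term2 = y * mu ** (1 - p) / (1 - p)
    term3 = mu ** (2 - p) / (2 - p)
    return 2 * (term1 - term2 + term3)
```

The deviance vanishes when the prediction matches the observation and grows as they diverge; in the federated setting of the paper, each party would evaluate such a loss only on its locally held features.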
BACKGROUND Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
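The area under the receiver operating characteristic curve reported across these studies can be computed directly from its rank interpretation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A generic sketch, not taken from any reviewed study:

```python
def roc_auc(scores, labels):
    """AUC as the probability that a randomly chosen positive is
    scored above a randomly chosen negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1 means every positive outranks every negative, while 0.5 corresponds to chance-level discrimination, the floor against which the 0.6–1 range above should be read.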
The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper attempts to assess landslide susceptibility in the Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg), and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall, and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917, and F1-score: 0.865) outperformed the single classifiers and the other bagging ensemble models for landslide susceptibility. The results show that the largest area falls in the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%), and moderate (18.16%) susceptibility zones. Average annual rainfall, slope, lithology, soil texture, and earthquake magnitude were identified as the influencing factors for very high landslide susceptibility, while soil texture, lineament density, and elevation were attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation, and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
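The frequency ratio method mentioned above assigns each class of an influencing factor the ratio of its share of landslide occurrences to its share of the study area. A minimal sketch with made-up counts:

```python
def frequency_ratio(landslides_in_class, total_landslides,
                    pixels_in_class, total_pixels):
    """Frequency ratio of one factor class: the share of landslide
    occurrences falling in the class divided by the share of study
    area the class covers. FR > 1 marks a class positively
    associated with landsliding."""
    landslide_share = landslides_in_class / total_landslides
    area_share = pixels_in_class / total_pixels
    return landslide_share / area_share
```

For example, a slope class covering 10% of the area but containing 50% of the inventoried landslides gets FR = 5, flagging it as strongly landslide-prone.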
Wheat is a critical crop, extensively consumed worldwide, and its production enhancement is essential to meet escalating demand. Diseases like stem rust, leaf rust, yellow rust, and tan spot significantly diminish wheat yield, making the early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises from the scarcity of RGB images covering multiple diseases, class imbalance in existing public datasets, and the difficulty of extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized architecture of Vision Transformers (ViT), where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks. This paper also proposes a Model-Agnostic Meta-Learning (MAML) based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model is superior in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included for detection, along with other modalities such as pests and weeds.
Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and in analyzing extensive data volumes enable the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., fewer than 225,000, hence the name "lightweight." This not only reduces the number of computations required but also yields faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) layers, which makes them unique compared to earlier architectures and results in better performance measures. To validate their robustness and effectiveness, the models were tested on the CIC-DDoS2019 dataset, a large and imbalanced dataset containing different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score, and that they outperformed existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cybersecurity research to identify different types of attacks with a high detection and recognition rate.
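The sub-225,000 parameter budget can be checked with the standard parameter-count formulas for the two layer types the models combine. The layer sizes below are hypothetical, not the actual Cybernet configuration, and the LSTM count assumes one bias term per gate:

```python
def conv1d_params(in_ch, out_ch, kernel):
    """Learnable parameters of a 1D convolution:
    weights plus one bias per output channel."""
    return out_ch * (kernel * in_ch + 1)

def lstm_params(input_size, hidden_size):
    """Learnable parameters of an LSTM layer: four gates, each with
    input weights, recurrent weights, and one bias vector."""
    return 4 * (hidden_size * (input_size + hidden_size) + hidden_size)

# Hypothetical sizes for one branch of each type
total = conv1d_params(64, 64, 3) + lstm_params(64, 64)
```

Even this pair of moderately sized parallel branches stays an order of magnitude below the stated budget, which illustrates how a parallel CNN/LSTM design can remain lightweight.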
AIM: To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings. METHODS: A total of 188 patients from the outpatient clinic at He Eye Specialist Shenyang Hospital from September to December 2022 were included, and 13470 infrared pupil images were collected for the study. All infrared images for pupil segmentation were labeled using the Labelme software. The computation of pupil diameter is divided into four steps: image pre-processing, pupil identification and localization, pupil segmentation, and diameter calculation. Two major models are used in the computation process, the modified YoloV3 and DeeplabV3+ models, which must be trained beforehand. RESULTS: The test dataset included 1348 infrared pupil images. On the test dataset, the modified YoloV3 model had a detection rate of 99.98% and an average precision (AP) of 0.80 for pupils. The DeeplabV3+ model achieved a background intersection over union (IoU) of 99.23%, a pupil IoU of 93.81%, and a mean IoU of 96.52%. The pupil diameters in the test dataset ranged from 20 to 56 pixels, with a mean of 36.06±6.85 pixels. The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels, with a mean absolute error (MAE) of 1.06±0.96 pixels. CONCLUSION: This study demonstrates a robust infrared image-based pupil diameter measurement algorithm, proven to be highly accurate and reliable for clinical application.
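A common way to turn a segmentation mask into a diameter, consistent with the pixel-based figures reported, is the equivalent-circle diameter. The abstract does not state the exact formula used, so this is an illustrative sketch, together with the mean absolute error used for evaluation:

```python
import math

def equivalent_diameter(mask):
    """Diameter (in pixels) of a circle with the same area as the
    segmented pupil region; mask is a 2-D list of 0/1 labels."""
    area = sum(sum(row) for row in mask)
    return 2.0 * math.sqrt(area / math.pi)

def mean_absolute_error(pred, actual):
    """MAE between predicted and ground-truth diameters."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)
```

On real masks the region is found by the segmentation network; here the diameter follows purely from the pixel count, which is why the reported diameters and errors are expressed in pixels.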
BACKGROUND Surgical resection remains the primary treatment for hepatic malignancies, and intraoperative bleeding is associated with a significantly increased risk of death. Therefore, accurate prediction of intraoperative bleeding risk in patients with hepatic malignancies is essential to preventing bleeding in advance and providing safer and more effective treatment. AIM To develop a predictive model for intraoperative bleeding in patients with primary hepatic malignancy to improve surgical planning and outcomes. METHODS This retrospective analysis enrolled patients diagnosed with primary hepatic malignancies who underwent surgery at the Hepatobiliary Surgery Department of the Fourth Hospital of Hebei Medical University between 2010 and 2020. Logistic regression analysis was performed to identify potential risk factors for intraoperative bleeding. A prediction model was developed using the Python programming language, and its accuracy was evaluated using receiver operating characteristic (ROC) curve analysis. RESULTS Among 406 primary liver cancer patients, 16.0% (65/406) suffered massive intraoperative bleeding. Logistic regression analysis identified four variables associated with intraoperative bleeding in these patients: ascites [odds ratio (OR): 22.839; P < 0.05], history of alcohol consumption (OR: 2.950; P < 0.015), TNM staging (OR: 2.441; P < 0.001), and albumin-bilirubin score (OR: 2.361; P < 0.001). These variables were used to construct the prediction model. The 406 patients were randomly assigned to a training set (70%) and a prediction set (30%). The area under the ROC curve was 0.844 in the training set and 0.80 in the prediction set. CONCLUSION The developed and validated model predicts massive intraoperative blood loss in primary hepatic malignancies from four preoperative clinical factors: ascites, history of alcohol consumption, TNM staging, and albumin-bilirubin score. Consequently, this model holds promise for enhancing individualised surgical planning.
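The reported odds ratios can be turned into an illustrative logistic risk score, since a logistic coefficient is the natural logarithm of its odds ratio. The intercept below is a hypothetical placeholder (the abstract does not report one), so only the ordering of risks is meaningful, not the absolute probabilities:

```python
import math

def bleeding_risk(ascites, alcohol_history, tnm_stage, albi_grade,
                  coef=(3.128, 1.082, 0.892, 0.859), intercept=-5.0):
    """Logistic risk score over the four reported factors.
    Coefficients are ln(OR) of the reported odds ratios
    (22.839, 2.950, 2.441, 2.361); the intercept is a hypothetical
    placeholder, not a value from the study."""
    z = intercept + sum(c * x for c, x in
                        zip(coef, (ascites, alcohol_history,
                                   tnm_stage, albi_grade)))
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link
```

Because ascites carries the largest coefficient, flipping it moves the predicted risk far more than any other single factor, matching its dominant odds ratio in the abstract.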
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods have yet to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics, and it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. DL models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined to generate the proposed fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, which the proposed model outperformed significantly.
In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. These tasks are combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit for improving its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure involving the SUMO and MATLAB platforms is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally, by considering as operating scenario an entire real city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed with the aim of highlighting the main features of the proposed approach. Moreover, in different operating scenarios the proposed reinforcement learning scheme is capable of significantly reducing traffic congestion phenomena when compared with well-reputed competitors.
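The interplay between the two layers can be sketched as a loop in which the decision layer emits a set-point and a low-level controller drives a double-integrator vehicle toward it. For brevity, a simple PD law stands in here for the model predictive controller of the paper; the gains, step size, and horizon are illustrative:

```python
def track_setpoint(setpoint, x0=0.0, v0=0.0, dt=0.1, steps=200,
                   kp=1.0, kd=2.0):
    """Drive a double-integrator vehicle (position x, velocity v)
    toward a set-point chosen by the decision layer. A PD law
    stands in for the model predictive controller."""
    x, v = x0, v0
    for _ in range(steps):
        u = kp * (setpoint - x) - kd * v  # acceleration command
        x += v * dt                       # Euler step: position
        v += u * dt                       # Euler step: velocity
    return x, v

final_x, final_v = track_setpoint(5.0)
```

With these gains the closed loop is critically damped, so the vehicle settles at the set-point with zero residual velocity; an MPC layer would additionally enforce the time-varying hard constraints the paper emphasizes.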
BACKGROUND Colorectal cancer significantly impacts global health, with unplanned reoperations post-surgery being key determinants of patient outcomes. Existing predictive models for these reoperations lack precision in integrating complex clinical data. AIM To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients. METHODS Data of patients treated for colorectal cancer (n = 2044) at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected. Patients were divided into an experimental group (n = 60) and a control group (n = 1984) according to unplanned reoperation occurrence. Patients were also divided into a training group and a validation group (7:3 ratio). We used three different machine learning methods to screen characteristic variables. A nomogram was created based on multifactor logistic regression, and model performance was assessed using the receiver operating characteristic curve, calibration curve, Hosmer-Lemeshow test, and decision curve analysis. The risk scores of the two groups were calculated and compared to validate the model. RESULTS Compared with the control group, more patients in the experimental group were ≥60 years old, male, and had a history of hypertension, laparotomy, and hypoproteinemia. Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation (P < 0.05): Prognostic Nutritional Index value, history of laparotomy, hypertension, or stroke, hypoproteinemia, age, tumor-node-metastasis staging, surgical time, gender, and American Society of Anesthesiologists classification. Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility. CONCLUSION This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer, which can improve treatment decisions and prognosis.
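Decision curve analysis, used above to assess clinical utility, plots net benefit against the risk threshold at which one would intervene. A minimal sketch of the net-benefit formula with made-up counts:

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit of a model at a given risk threshold, the quantity
    plotted in decision curve analysis: true positives credited per
    patient, minus false positives discounted by the odds of the
    threshold."""
    return tp / n - fp / n * threshold / (1 - threshold)
```

A model is clinically useful over the range of thresholds where its net benefit exceeds both the treat-all and treat-none strategies; raising the threshold penalizes false positives more heavily.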
The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay of geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, pose challenges for accurate description through physical models. While field data provide insights into real-world effects, their limited volume and quality restrict their utility. Complementing this, numerical simulation models offer effective support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of swiftly responding to shale oil development performance under varying geological, fluid, and well conditions. The results of numerical experiments show that the proposed method improves R2 by 22.5% and 5.8% compared to the singular machine learning models SVM and BPNN, respectively, showcasing its superior precision in predicting shale oil production capacity across diverse datasets.
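The R2 improvements cited are instances of the coefficient of determination, which can be sketched directly (the values below are illustrative, not the study's data):

```python
def r_squared(actual, predicted):
    """Coefficient of determination: the share of variance in the
    observed production explained by the model's predictions."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot
```

R2 = 1 for perfect predictions and 0 for a model no better than predicting the mean, so percentage gains in R2 directly reflect how much extra production variance the combination model captures.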
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
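The peak signal-to-noise ratio used for validation compares a predicted image against its ground truth through the mean squared error. A minimal sketch, with images flattened to intensity lists for brevity:

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between two same-sized
    images given as flat lists of pixel intensities; higher means
    the prediction is closer to the reference."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Because PSNR is a log of the inverse error, small reductions in pixel-wise error translate into visible gains in dB, which is why it pairs well with structural similarity for comparing generated FFA images.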
BACKGROUND Gastric cancer is one of the most common malignant tumors of the digestive system, ranking sixth in incidence and fourth in mortality worldwide. Since 42.5% of metastatic lymph nodes in gastric cancer belong to the nodule type and peripheral type, the application of imaging diagnosis is restricted. AIM To establish models for predicting the risk of lymph node metastasis in gastric cancer patients using machine learning (ML) algorithms and to evaluate their predictive performance in clinical practice. METHODS Data of a total of 369 patients who underwent radical gastrectomy at the Department of General Surgery of the Affiliated Hospital of Xuzhou Medical University (Xuzhou, China) from March 2016 to November 2019 were collected and retrospectively analyzed as the training group. In addition, data of 123 patients who underwent radical gastrectomy at the Department of General Surgery of Jining First People's Hospital (Jining, China) were collected and analyzed as the verification group. Seven ML models, including decision tree, random forest, support vector machine (SVM), gradient boosting machine, naive Bayes, neural network, and logistic regression, were developed to evaluate the occurrence of lymph node metastasis in patients with gastric cancer. The ML models were established following ten cross-validation iterations using the training dataset, and subsequently, each model was assessed using the test dataset. The models' performance was evaluated by comparing the area under the receiver operating characteristic curve of each model. RESULTS Among the seven ML models, all except SVM exhibited high accuracy and reliability, and the influences of the various risk factors on the models are intuitive. CONCLUSION The ML models developed exhibit strong predictive capabilities for lymph node metastasis in gastric cancer, which can aid in personalized clinical diagnosis and treatment.
Machine learning is currently one of the research hotspots in the field of landslide prediction. To clarify and evaluate the differences in characteristics and prediction effects of different machine learning models, Conghua District, the district most prone to landslide disasters in Guangzhou, was selected for landslide susceptibility evaluation. The evaluation factors were selected by using correlation analysis and the variance inflation factor method. Applying four machine learning methods, namely Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), and Extreme Gradient Boosting (XGB), landslide models were constructed. Comparative analysis and evaluation of the models were conducted through statistical indices and receiver operating characteristic (ROC) curves. The results showed that the LR, RF, SVM, and XGB models have good predictive performance for landslide susceptibility, with area under the curve (AUC) values of 0.752, 0.965, 0.996, and 0.998, respectively. The XGB model had the highest predictive ability, followed by the RF, SVM, and LR models. The frequency ratio (FR) accuracy of the LR, RF, SVM, and XGB models was 0.775, 0.842, 0.759, and 0.822, respectively. The RF and XGB models were superior to the LR and SVM models, indicating that integrated algorithms have better predictive ability than single classification algorithms in regional landslide classification problems.
BACKGROUND Synchronous liver metastasis (SLM) is a significant contributor to morbidity in colorectal cancer (CRC). There are no effective integrated predictive algorithms to predict adverse SLM events at the time of CRC diagnosis. AIM To explore the risk factors for SLM in CRC and construct a visual prediction model based on gray-level co-occurrence matrix (GLCM) features collected from magnetic resonance imaging (MRI). METHODS Our study retrospectively enrolled 392 patients with CRC from Yichang Central People's Hospital from January 2015 to May 2023. Patients were randomly divided into a training group and a validation group (3:7). Clinical parameters and GLCM features extracted from MRI were included as candidate variables. The prediction model was constructed using a generalized linear regression model, a random forest model (RFM), and an artificial neural network model. Receiver operating characteristic curves and decision curves were used to evaluate the prediction model. RESULTS Among the 392 patients, 48 had SLM (12.24%). We obtained fourteen GLCM imaging features for variable screening of the SLM prediction models. Inverse difference, mean sum, sum entropy, sum variance, sum of squares, energy, and difference variance were listed as candidate variables, and the prediction efficiency (area under the curve) of the subsequent RFM was 0.917 [95% confidence interval (95%CI): 0.866-0.968] in the training set and 0.909 (95%CI: 0.858-0.960) in the internal validation set. CONCLUSION A predictive model combining GLCM image features with machine learning can predict SLM in CRC. This model can assist clinicians in making timely and personalized clinical decisions.
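GLCM features such as energy (the angular second moment) derive from the normalized co-occurrence matrix of gray levels. A minimal sketch for a single horizontal offset and a toy two-level image; the study's features would come from standard radiomics tooling over full MRI slices:

```python
def glcm(image, levels):
    """Gray-level co-occurrence matrix for the horizontal (0, 1)
    offset, normalized to probabilities. image is a 2-D list of
    integer gray levels in [0, levels)."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):  # horizontally adjacent pairs
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def energy(p):
    """Angular second moment: sum of squared co-occurrence
    probabilities; 1.0 for a perfectly uniform texture."""
    return sum(v * v for row in p for v in row)
```

Energy peaks for homogeneous textures and falls as gray-level transitions become more varied, which is the kind of texture contrast the candidate variables above capture.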
The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage, impacting the aerodynamic performance of the blades. To address the challenge of detecting and quantifying surface defects on wind turbine blades, a blade surface defect detection and quantification method based on an improved Deeplabv3+ deep learning model is proposed. Firstly, an improved method for wind turbine blade surface defect detection, utilizing Mobilenetv2 as the backbone feature extraction network, is proposed based on the original Deeplabv3+ deep learning model to address the issue of limited robustness. Secondly, by integrating pre-trained weights from transfer learning and implementing a freeze training strategy, significant improvements have been made to both the training speed and training accuracy of this deep learning model. Finally, based on segmented blade surface defect images, a method for quantifying blade defects is proposed. This method combines image stitching algorithms to achieve overall quantification and risk assessment of the entire blade. Test results show that the improved Deeplabv3+ model reduces training time by approximately 43.03% compared to the original model, while achieving mAP and MIoU values of 96.87% and 96.93%, respectively. Moreover, it demonstrates robustness in detecting different surface defects on blades across different backgrounds. The blade surface defect quantification method enables the precise quantification of different defects and facilitates the assessment of risk levels associated with defect measurements across the entire blade. This method enables non-contact, long-distance, high-precision detection and quantification of surface defects on the blades, providing a reference for assessing surface defects on wind turbine blades.
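The quantification step above reduces to measuring defect pixels in the stitched segmentation mask. A minimal sketch follows; the area-fraction thresholds for the risk levels are hypothetical placeholders, not the paper's calibrated values.

```python
# Quantify a binary defect mask (1 = defect pixel): absolute area, area
# fraction, and a coarse risk level. Thresholds here are illustrative only.
import numpy as np

def quantify_defect(mask, pixel_area_cm2=1.0):
    defect_area = mask.sum() * pixel_area_cm2   # physical defect area
    fraction = mask.mean()                      # defect share of the blade image
    if fraction < 0.01:
        risk = "low"
    elif fraction < 0.05:
        risk = "medium"
    else:
        risk = "high"
    return defect_area, fraction, risk

mask = np.zeros((100, 100), dtype=int)
mask[10:20, 10:20] = 1                          # a 10x10 defect patch
area, frac, risk = quantify_defect(mask)
```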
Cardiovascular Diseases (CVDs) pose a significant global health challenge, necessitating accurate risk prediction for effective preventive measures. This comprehensive comparative study explores the performance of traditional Machine Learning (ML) and Deep Learning (DL) models in predicting CVD risk, utilizing a meticulously curated dataset derived from health records. Rigorous preprocessing, including normalization and outlier removal, enhances model robustness. Diverse ML models (Logistic Regression, Random Forest, Support Vector Machine, K-Nearest Neighbor, Decision Tree, and Gradient Boosting) are compared with a Long Short-Term Memory (LSTM) neural network for DL. Evaluation metrics include accuracy, ROC AUC, computation time, and memory usage. Results identify the Gradient Boosting Classifier and LSTM as top performers, demonstrating high accuracy and ROC AUC scores. Comparative analyses highlight model strengths and limitations, contributing valuable insights for optimizing predictive strategies. This study advances predictive analytics for cardiovascular health, with implications for personalized medicine. The findings underscore the versatility of intelligent systems in addressing health challenges, emphasizing the broader applications of ML and DL in disease identification beyond cardiovascular health.
Spatial heterogeneity refers to the variation or differences in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a particular geographic region or location, also known as geo-spatial data or geographic information. Focusing on spatial heterogeneity, we present a hybrid machine learning model combining two competitive algorithms: the Random Forest Regressor and CNN. The model is fine-tuned using cross validation for hyper-parameter adjustment and performance evaluation, ensuring robustness and generalization. Our approach integrates global Moran's I for examining global autocorrelation, and local Moran's I for assessing local spatial autocorrelation in the residuals. To validate our approach, we implemented the hybrid model on a real-world dataset and compared its performance with that of traditional machine learning models. Results indicate superior performance, with an R-squared of 0.90, outperforming the RF (0.84) and CNN (0.74) models. This study contributed to a detailed understanding of spatial variations in data considering the geographical information (longitude and latitude) present in the dataset. Our results, also assessed using the Root Mean Squared Error (RMSE), indicated that the hybrid model yielded lower errors, showing a deviation of 53.65% from the RF model and 63.24% from the CNN model. Additionally, the global Moran's I index was observed to be 0.10. This study underscores that the hybrid model correctly predicted house prices in both clustered and dispersed areas.
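The global Moran's I statistic used above to test residuals for spatial autocorrelation can be computed in a few lines. This sketch uses inverse-distance weights on 1-D toy coordinates; real studies often use contiguity or k-nearest-neighbor weight matrices instead.

```python
# Global Moran's I: positive when similar values cluster in space,
# near zero when values are spatially random.
import numpy as np

def morans_i(values, coords):
    n = len(values)
    d = np.abs(coords[:, None] - coords[None, :])
    W = np.where(d > 0, 1.0 / d, 0.0)          # inverse-distance weights, w_ii = 0
    z = values - values.mean()
    num = n * (W * np.outer(z, z)).sum()
    den = W.sum() * (z ** 2).sum()
    return num / den

coords = np.arange(10, dtype=float)
clustered = np.array([1, 1, 1, 1, 1, 9, 9, 9, 9, 9], dtype=float)
I = morans_i(clustered, coords)                # positive: like values sit together
```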
Funding: 2022 Medical Innovation and Development Project of Lanzhou University (lzuyxcx-2022-40); 2022 Education and Teaching Reform Research Project of Lanzhou University, General Project (202201); the Foundation of the First Hospital of Lanzhou University (ldyyyn 2021-92).
Abstract: Objective: To explore the application effect of the flipped classroom combined with case-based learning teaching method in pharmacoeconomics teaching. Methods: Students majoring in clinical pharmacy in 2019 were selected as the study subjects, and the cost-effectiveness analysis of different dosage forms of Yinzhihuang in the treatment of neonatal jaundice was selected as the teaching case. The flipped classroom combined with case-based learning teaching method was used to deliver theoretical teaching to the students. After the course, questionnaires were distributed through the Sojump platform to evaluate the teaching effect. Results: The questionnaire results showed that 85.71% of the students believed that the flipped classroom combined with case-based learning teaching method was helpful in mobilizing learning enthusiasm and initiative and in improving the comprehensive application of pharmacoeconomics knowledge. 92.86% of the students thought that it was conducive to the understanding and memorization of learning content, as well as to the cultivation of teamwork and communication skills. Conclusion: The flipped classroom combined with case-based learning teaching method can improve students' knowledge mastery, thinking skills, and practical application skills, as well as optimize and improve teachers' teaching levels.
Funding: Supported by the Natural Science Foundation of China (Grant Nos. 42088101 and 42205149). Zhongwang WEI was supported by the Natural Science Foundation of China (Grant No. 42075158), Wei SHANGGUAN by the Natural Science Foundation of China (Grant No. 41975122), and Yonggen ZHANG by the Natural Science Foundation of Tianjin (Grant No. 20JCQNJC01660).
Abstract: Accurate soil moisture (SM) prediction is critical for understanding hydrological processes. Physics-based (PB) models exhibit large uncertainties in SM predictions arising from uncertain parameterizations and insufficient representation of land-surface processes. In addition to PB models, deep learning (DL) models have been widely used in SM predictions recently. However, few pure DL models have notably high success rates due to lacking physical information. Thus, we developed hybrid models to effectively integrate the outputs of PB models into DL models to improve SM predictions. To this end, we first developed a hybrid model based on the attention mechanism to take advantage of PB models at each forecast time scale (attention model). We further built an ensemble model that combined the advantages of different hybrid schemes (ensemble model). We utilized SM forecasts from the Global Forecast System to enhance the convolutional long short-term memory (ConvLSTM) model for 1–16 days of SM predictions. The performances of the proposed hybrid models were investigated and compared with two existing hybrid models. The results showed that the attention model could leverage the benefits of PB models and achieved the best predictability of drought events among the different hybrid models. Moreover, the ensemble model performed best among all hybrid models at all forecast time scales and different soil conditions. Notably, the ensemble model outperformed the pure DL model over 79.5% of in situ stations for 16-day predictions. These findings suggest that our proposed hybrid models can adequately exploit the benefits of PB model outputs to aid DL models in making SM predictions.
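The attention-style blending described above can be illustrated conceptually: per forecast lead time, the PB forecast and the DL forecast are combined with softmax weights. In the paper the scores are learned from the inputs; here they are fixed toy numbers, and all forecast values are synthetic.

```python
# Conceptual sketch: blend PB and DL soil-moisture forecasts per lead time
# with softmax attention weights. Scores and forecasts are toy values.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lead_times = 16
pb = np.linspace(0.30, 0.20, lead_times)       # toy PB-model SM forecasts
dl = np.linspace(0.28, 0.26, lead_times)       # toy DL-model SM forecasts
# Toy attention scores that favor the PB forecast at longer leads:
scores = np.stack([np.linspace(0.0, 1.0, lead_times),
                   np.linspace(1.0, 0.0, lead_times)])
weights = np.apply_along_axis(softmax, 0, scores)   # (2, 16); columns sum to 1
blended = weights[0] * pb + weights[1] * dl         # convex combination per lead
```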
Funding: This research was funded by the National Natural Science Foundation of China (No. 62272124), the National Key Research and Development Program of China (No. 2022YFB2701401), the Guizhou Province Science and Technology Plan Project (Grant Nos. Qiankehe Platform Talent [2020]5017), the Research Project of Guizhou University for Talent Introduction (No. [2020]61), the Cultivation Project of Guizhou University (No. [2019]56), and the Open Fund of the Key Laboratory of Advanced Manufacturing Technology, Ministry of Education (GZUAMT2021KF[01]).
Abstract: In the assessment of car insurance claims, the claim rate presents a highly skewed probability distribution, which is typically modeled using the Tweedie distribution. The traditional approach to obtaining a Tweedie regression model involves training on a centralized dataset; when the data are provided by multiple parties, training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge. To address this issue, this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting in data silos. The algorithm keeps sensitive data local and uses privacy-preserving techniques to achieve intersection operations between the two parties holding the data. After determining which entities are shared, the participants train the model locally using the shared entity data to obtain the intermediate parameters of the local generalized linear model. Homomorphic encryption algorithms are introduced to exchange and update the intermediate model parameters to collaboratively complete the joint training of the car insurance rate-setting model. Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of data from both parties without exchanging data. The assessment results of the scheme approach those of a Tweedie regression model learned from centralized data, and outperform a Tweedie regression model learned independently by a single party.
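As a non-federated baseline for the scheme above, a Tweedie GLM can be fitted directly with scikit-learn. The federated algorithm splits this fit across parties and exchanges encrypted intermediate parameters; here everything is local and the claim data are synthetic.

```python
# Centralized Tweedie GLM baseline: power in (1, 2) gives the compound
# Poisson-gamma family suited to zero-inflated, right-skewed claim costs.
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Synthetic claim costs: mostly zero, occasionally positive (Tweedie-like).
mu = np.exp(0.3 * X[:, 0])
y = np.where(rng.random(500) < 0.7, 0.0, rng.gamma(2.0, mu))

model = TweedieRegressor(power=1.5, alpha=0.0, max_iter=1000)
model.fit(X, y)
pred = model.predict(X)        # log link: predictions are strictly positive
```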
Abstract: BACKGROUND Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Abstract: The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper attempts to assess landslide susceptibility in the Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area was found under the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. The factors, namely average annual rainfall, slope, lithology, soil texture and earthquake magnitude, have been identified as the influencing factors for very high landslide susceptibility. Soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
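The frequency ratio method used above relates past landslides to factor classes: FR = (share of landslide cells in a class) / (share of all cells in that class), with FR > 1 marking a class favorable to landslides. A minimal sketch on synthetic grids:

```python
# Frequency ratio per factor class: ratio of the landslide share to the
# area share of each class. The class and inventory arrays are synthetic.
import numpy as np

def frequency_ratio(factor_class, landslide):
    fr = {}
    for c in np.unique(factor_class):
        in_class = factor_class == c
        pct_slides = landslide[in_class].sum() / landslide.sum()
        pct_area = in_class.sum() / factor_class.size
        fr[int(c)] = pct_slides / pct_area
    return fr

factor = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # e.g. a slope class per cell
slides = np.array([0, 0, 0, 1, 1, 1, 1, 0])    # 1 = past landslide cell
fr = frequency_ratio(factor, slides)           # class 1 is landslide-prone
```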
Funding: Researchers Supporting Project Number (RSPD2024R 553), King Saud University, Riyadh, Saudi Arabia.
Abstract: Wheat is a critical crop, extensively consumed worldwide, and its production enhancement is essential to meet escalating demand. The presence of diseases like stem rust, leaf rust, yellow rust, and tan spot significantly diminishes wheat yield, making the early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises due to the scarcity of RGB images for multiple diseases, class imbalance in existing public datasets, and the difficulty in extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized architecture of Vision Transformers (ViT), where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks. This paper also proposes a Model Agnostic Meta Learning (MAML) based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model outperforms in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included for detection, along with other modalities such as pests and weeds.
Abstract: Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep Learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and analyzing extensive data volumes lead to the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., less than 225,000, hence the name "lightweight." This not only helps reduce the number of computations required but also results in faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), which makes them unique compared to earlier existing architectures and results in better performance measures. To validate their robustness and effectiveness, they were tested on the CIC-DDoS2019 dataset, which is an imbalanced and large dataset that contains different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score. Furthermore, they outperformed existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cybersecurity research domains to successfully identify different types of attacks with a high detection and recognition rate.
Abstract: AIM: To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings. METHODS: A total of 188 patients from the outpatient clinic at He Eye Specialist Shenyang Hospital from September to December 2022 were included, and 13470 infrared pupil images were collected for the study. All infrared images for pupil segmentation were labeled using the Labelme software. The computation of pupil diameter is divided into four steps: image pre-processing, pupil identification and localization, pupil segmentation, and diameter calculation. Two major models are used in the computation process: the modified YoloV3 and DeeplabV3+ models, which must be trained beforehand. RESULTS: The test dataset included 1348 infrared pupil images. On the test dataset, the modified YoloV3 model had a detection rate of 99.98% and an average precision (AP) of 0.80 for pupils. The DeeplabV3+ model achieved a background intersection over union (IOU) of 99.23%, a pupil IOU of 93.81%, and a mean IOU of 96.52%. The pupil diameters in the test dataset ranged from 20 to 56 pixels, with a mean of 36.06±6.85 pixels. The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels, with a mean absolute error (MAE) of 1.06±0.96 pixels. CONCLUSION: This study successfully demonstrates a robust infrared image-based pupil diameter measurement algorithm, proven to be highly accurate and reliable for clinical application.
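The final diameter-calculation step can be sketched simply: given the binary pupil mask produced by the segmentation stage, take the diameter of a circle with the same pixel area. This equivalent-diameter formula is an assumption for illustration; the paper's exact calculation may differ.

```python
# Estimate pupil diameter (in pixels) from a binary segmentation mask as the
# equivalent-circle diameter: d = 2 * sqrt(area / pi).
import numpy as np

def pupil_diameter_px(mask):
    area = mask.sum()                      # number of pupil pixels
    return 2.0 * np.sqrt(area / np.pi)

# Synthetic mask: a filled circle of radius 18 px in a 128x128 image.
yy, xx = np.mgrid[:128, :128]
mask = ((yy - 64) ** 2 + (xx - 64) ** 2 <= 18 ** 2).astype(int)
d = pupil_diameter_px(mask)                # close to 36 px
```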
Abstract: BACKGROUND Surgical resection remains the primary treatment for hepatic malignancies, and intraoperative bleeding is associated with a significantly increased risk of death. Therefore, accurate prediction of intraoperative bleeding risk in patients with hepatic malignancies is essential to preventing bleeding in advance and providing safer and more effective treatment. AIM To develop a predictive model for intraoperative bleeding in patients with primary hepatic malignancies to improve surgical planning and outcomes. METHODS This retrospective analysis enrolled patients diagnosed with primary hepatic malignancies who underwent surgery at the Hepatobiliary Surgery Department of the Fourth Hospital of Hebei Medical University between 2010 and 2020. Logistic regression analysis was performed to identify potential risk factors for intraoperative bleeding. A prediction model was developed using the Python programming language, and its accuracy was evaluated using receiver operating characteristic (ROC) curve analysis. RESULTS Among 406 primary liver cancer patients, 16.0% (65/406) suffered massive intraoperative bleeding. Logistic regression analysis identified four variables as associated with intraoperative bleeding in these patients: ascites [odds ratio (OR): 22.839; P<0.05], history of alcohol consumption (OR: 2.950; P<0.015), TNM staging (OR: 2.441; P<0.001), and albumin-bilirubin score (OR: 2.361; P<0.001). These variables were used to construct the prediction model. The 406 patients were randomly assigned to a training set (70%) and a prediction set (30%). The area under the ROC curve values for the model's ability to predict intraoperative bleeding were 0.844 in the training set and 0.80 in the prediction set. CONCLUSION The developed and validated model predicts significant intraoperative blood loss in primary hepatic malignancies by considering four preoperative clinical factors: ascites, history of alcohol consumption, TNM staging, and albumin-bilirubin score. Consequently, this model holds promise for enhancing individualised surgical planning.
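The modeling recipe above (logistic regression on four preoperative factors, odds ratios from exponentiated coefficients, ROC AUC for accuracy) can be sketched as follows. All data, coefficients, and codings here are synthetic stand-ins, not the study's values.

```python
# Fit a logistic model on synthetic versions of the four predictors, read
# odds ratios from exp(coefficients), and score with ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([
    rng.integers(0, 2, n),     # ascites (0/1)
    rng.integers(0, 2, n),     # alcohol history (0/1)
    rng.integers(1, 5, n),     # TNM stage (1-4)
    rng.integers(1, 4, n),     # ALBI grade (1-3), illustrative coding
])
# Synthetic outcome generated from a known logistic relationship:
logit = -4.0 + 2.0 * X[:, 0] + 0.8 * X[:, 1] + 0.7 * X[:, 2] + 0.6 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(clf.coef_[0])              # OR > 1 means higher bleeding risk
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
```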
Funding: Ministry of Education, Youth and Sports of the Czech Republic, Grant/Award Numbers: SP2023/039, SP2023/042; the European Union under the REFRESH project, Grant/Award Number: CZ.10.03.01/00/22_003/0000048.
Abstract: Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. Accurate detection and segmentation of brain tumours would be highly beneficial, but despite the numerous available approaches, current methods have yet to solve this problem. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. Magnetic Resonance Imaging is a vital component of medical diagnosis, and it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
Abstract: In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. These tasks are here combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit for improving its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand, it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure, involving the SUMO and MATLAB platforms, is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally, by considering as an operating scenario a real entire city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed with the aim of highlighting the main features of the proposed approach. Moreover, it is important to underline that in different operating scenarios the proposed reinforcement learning scheme is capable of significantly reducing traffic congestion phenomena when compared with well-reputed competitors.
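The set-point-tracking layer for a double-integrator vehicle model can be illustrated with a toy simulation. The paper uses model predictive control with time-varying hard constraints; this unconstrained PD law is only a stand-in to show the tracking idea, with arbitrary toy gains.

```python
# Discrete-time double integrator (u -> velocity -> position) driven toward a
# set-point by a PD law standing in for the MPC tracking layer.
dt, kp, kd = 0.1, 1.0, 1.5                 # step size and toy gains
pos, vel, setpoint = 0.0, 0.0, 10.0        # initial state and RL-issued set-point
for _ in range(600):                       # 60 s of simulation
    u = kp * (setpoint - pos) - kd * vel   # control input (acceleration)
    vel += u * dt                          # semi-implicit Euler integration
    pos += vel * dt
```

In the paper's architecture, `setpoint` would come from the reinforcement-learning routing unit at each decision step, and the MPC would additionally enforce state and input constraints.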
Ethics approval: This study has been reviewed and approved by the Clinical Research Ethics Committees of Wenzhou Central Hospital and the First Affiliated Hospital of Wenzhou Medical University, No. KY2024-R016.
Abstract: BACKGROUND Colorectal cancer significantly impacts global health, with unplanned reoperations post-surgery being key determinants of patient outcomes. Existing predictive models for these reoperations lack precision in integrating complex clinical data. AIM To develop and validate a machine learning model for predicting the risk of unplanned reoperation in colorectal cancer patients. METHODS Data of patients treated for colorectal cancer (n=2044) at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected. Patients were divided into an experimental group (n=60) and a control group (n=1984) according to unplanned reoperation occurrence. Patients were also divided into a training group and a validation group (7:3 ratio). We used three different machine learning methods to screen characteristic variables. A nomogram was created based on multifactor logistic regression, and the model performance was assessed using the receiver operating characteristic curve, calibration curve, Hosmer-Lemeshow test, and decision curve analysis. The risk scores of the two groups were calculated and compared to validate the model. RESULTS More patients in the experimental group were ≥60 years old, male, and had a history of hypertension, laparotomy, and hypoproteinemia, compared to the control group. Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation (P<0.05): Prognostic Nutritional Index value; history of laparotomy, hypertension, or stroke; hypoproteinemia; age; tumor-node-metastasis staging; surgical time; gender; and American Society of Anesthesiologists classification. Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility. CONCLUSION This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer, which can improve treatment decisions and prognosis.
Funding: Supported by the China Postdoctoral Science Foundation (2021M702304) and the Natural Science Foundation of Shandong Province (ZR20210E260).
Abstract: The production capacity of shale oil reservoirs after hydraulic fracturing is influenced by a complex interplay of geological characteristics, engineering quality, and well conditions. These relationships, nonlinear in nature, pose challenges for accurate description through physical models. While field data provide insights into real-world effects, their limited volume and quality restrict their utility. Complementing this, numerical simulation models offer effective support. To harness the strengths of both data-driven and model-driven approaches, this study established a shale oil production capacity prediction model based on a machine learning combination model. Leveraging fracturing development data from 236 wells in the field, a data-driven method employing the random forest algorithm is implemented to identify the main controlling factors for different types of shale oil reservoirs. Through a combination model integrating the support vector machine (SVM) algorithm and a back propagation neural network (BPNN), a model-driven shale oil production capacity prediction model is developed, capable of swiftly responding to shale oil development performance under varying geological, fluid, and well conditions. The results of numerical experiments show that the proposed method demonstrates a notable enhancement in R2 of 22.5% and 5.8% compared to singular machine learning models like SVM and BPNN, showcasing its superior precision in predicting shale oil production capacity across diverse datasets.
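The SVM-plus-BPNN combination idea can be sketched with scikit-learn, using a simple prediction average as the combination rule (the paper's exact combination scheme is not specified here, so averaging is an assumption). Data are synthetic stand-ins for the geological and engineering features; MLPRegressor stands in for the BPNN.

```python
# Average the predictions of an SVM regressor and a small neural network and
# compare R^2 against each single model on held-out synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=6, noise=10.0, random_state=0)
y = (y - y.mean()) / y.std()               # standardize target for SVR/MLP
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVR(C=10.0).fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)
combo_pred = 0.5 * (svm.predict(X_te) + mlp.predict(X_te))   # simple average
r2 = {"SVM": r2_score(y_te, svm.predict(X_te)),
      "BPNN": r2_score(y_te, mlp.predict(X_te)),
      "combo": r2_score(y_te, combo_pred)}
```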
Funding: Supported in part by the Gusu Innovation and Entrepreneurship Leading Talents in Suzhou City, grant numbers ZXL2021425 and ZXL2022476; the Doctor of Innovation and Entrepreneurship Program in Jiangsu Province, grant number JSSCBS20211440; the Jiangsu Province Key R&D Program, grant number BE2019682; the Natural Science Foundation of Jiangsu Province, grant number BK20200214; the National Key R&D Program of China, grant number 2017YFB0403701; the National Natural Science Foundation of China, grant numbers 61605210, 61675226, and 62075235; the Youth Innovation Promotion Association of the Chinese Academy of Sciences, grant number 2019320; the Frontier Science Research Project of the Chinese Academy of Sciences, grant number QYZDB-SSW-JSC03; and the Strategic Priority Research Program of the Chinese Academy of Sciences, grant number XDB02060000.
Abstract: The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks using a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
Abstract: BACKGROUND: Gastric cancer is one of the most common malignant tumors of the digestive system, ranking sixth in incidence and fourth in mortality worldwide. Since 42.5% of metastatic lymph nodes in gastric cancer belong to the nodule type and peripheral type, the application of imaging diagnosis is restricted. AIM: To establish models for predicting the risk of lymph node metastasis in gastric cancer patients using machine learning (ML) algorithms and to evaluate their predictive performance in clinical practice. METHODS: Data of 369 patients who underwent radical gastrectomy at the Department of General Surgery of the Affiliated Hospital of Xuzhou Medical University (Xuzhou, China) from March 2016 to November 2019 were collected and retrospectively analyzed as the training group. In addition, data of 123 patients who underwent radical gastrectomy at the Department of General Surgery of Jining First People's Hospital (Jining, China) were collected and analyzed as the verification group. Seven ML models, including decision tree, random forest, support vector machine (SVM), gradient boosting machine, naive Bayes, neural network, and logistic regression, were developed to evaluate the occurrence of lymph node metastasis in patients with gastric cancer. The ML models were established following ten cross-validation iterations using the training dataset, and each model was then assessed using the test dataset. Model performance was evaluated by comparing the area under the receiver operating characteristic curve of each model. RESULTS: Among the seven ML models, all except SVM exhibited high accuracy and reliability, and the influences of the various risk factors on the models are intuitive. CONCLUSION: The ML models developed exhibit strong predictive capabilities for lymph node metastasis in gastric cancer, which can aid in personalized clinical diagnosis and treatment.
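A minimal sketch of the kind of multi-model comparison this study performs (several classifiers, ten-fold cross-validation, AUC as the yardstick). The synthetic dataset and this particular subset of the seven models are illustrative assumptions, not the clinical data or the full model suite:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the 369-patient training data.
X, y = make_classification(n_samples=369, n_features=12, n_informative=6, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# Ten-fold cross-validated AUC for each model, mirroring the study design.
aucs = {name: cross_val_score(m, X, y, cv=10, scoring="roc_auc").mean()
        for name, m in models.items()}
for name, auc in sorted(aucs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: AUC = {auc:.3f}")
```

An external verification cohort, as used in the study, would be scored with the final fitted models rather than by cross-validation.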
Funding: Supported by projects of the China Geological Survey (DD20221729, DD20190291) and the Zhuhai Urban Geological Survey (including informatization) (MZCD–2201–008).
Abstract: Machine learning is currently one of the research hotspots in the field of landslide prediction. To clarify and evaluate the differences in characteristics and prediction performance of different machine learning models, Conghua District, the area of Guangzhou most prone to landslide disasters, was selected for landslide susceptibility evaluation. The evaluation factors were selected using correlation analysis and the variance expansion factor method. Landslide models were constructed by applying four machine learning methods, namely Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), and Extreme Gradient Boosting (XGB). Comparative analysis and evaluation of the models were conducted through statistical indices and receiver operating characteristic (ROC) curves. The results showed that the LR, RF, SVM, and XGB models all have good predictive performance for landslide susceptibility, with area under the curve (AUC) values of 0.752, 0.965, 0.996, and 0.998, respectively. The XGB model had the highest predictive ability, followed by the RF, SVM, and LR models. The frequency ratio (FR) accuracy of the LR, RF, SVM, and XGB models was 0.775, 0.842, 0.759, and 0.822, respectively. The RF and XGB models were superior to the LR and SVM models, indicating that ensemble algorithms have better predictive ability than single classification algorithms in regional landslide classification problems.
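Factor screening by the variance expansion (inflation) factor mentioned above can be sketched as below; the random evaluation-factor matrix and the common cutoff of 10 are illustrative assumptions, not the study's actual factors or threshold:

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column: 1 / (1 - R^2), where R^2 comes
    from regressing that column (least squares) on all the other columns."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        yj = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + remaining factors
        beta, *_ = np.linalg.lstsq(A, yj, rcond=None)
        resid = yj - A @ beta
        r2 = 1.0 - resid @ resid / np.sum((yj - yj.mean()) ** 2)
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Append a factor that is nearly collinear with the first one.
X = np.column_stack([X, X[:, 0] + 0.05 * rng.normal(size=500)])

v = vif(X)
keep = v < 10.0  # retain only factors below the usual VIF threshold
print(np.round(v, 1), keep)
```

The two collinear columns receive very large VIFs and would be flagged for removal, while the independent factors stay near 1.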
Abstract: BACKGROUND: Synchronous liver metastasis (SLM) is a significant contributor to morbidity in colorectal cancer (CRC). There are no effective integrated predictive algorithms for anticipating adverse SLM events at the time of CRC diagnosis. AIM: To explore the risk factors for SLM in CRC and construct a visual prediction model based on gray-level co-occurrence matrix (GLCM) features extracted from magnetic resonance imaging (MRI). METHODS: Our study retrospectively enrolled 392 patients with CRC from Yichang Central People's Hospital from January 2015 to May 2023. Patients were randomly divided into training and validation groups (3:7). Clinical parameters and GLCM features extracted from MRI were included as candidate variables. The prediction model was constructed using a generalized linear regression model, a random forest model (RFM), and an artificial neural network model. Receiver operating characteristic curves and decision curves were used to evaluate the prediction model. RESULTS: Among the 392 patients, 48 had SLM (12.24%). We obtained fourteen GLCM imaging features for variable screening of the SLM prediction models. Inverse difference, mean sum, sum entropy, sum variance, sum of squares, energy, and difference variance were listed as candidate variables, and the prediction efficiency (area under the curve) of the subsequent RFM in the training set and internal validation set was 0.917 [95% confidence interval (95%CI): 0.866-0.968] and 0.909 (95%CI: 0.858-0.960), respectively. CONCLUSION: A predictive model combining GLCM image features with machine learning can predict SLM in CRC. This model can assist clinicians in making timely and personalized clinical decisions.
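Two of the GLCM texture features named above, energy and inverse difference, can be computed from a co-occurrence matrix as follows; the toy 4-level image and the single horizontal pixel offset are assumptions for the sketch (real pipelines average over several offsets and angles):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    h, w = img.shape
    M = np.zeros((levels, levels))
    for i in range(h - dy):
        for j in range(w - dx):
            M[img[i, j], img[i + dy, j + dx]] += 1
    return M / M.sum()

def glcm_features(P):
    """Energy and inverse difference of a normalized co-occurrence matrix P."""
    i, j = np.indices(P.shape)
    energy = float(np.sum(P ** 2))
    inv_diff = float(np.sum(P / (1.0 + np.abs(i - j))))
    return energy, inv_diff

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
energy, inv_diff = glcm_features(P)
print(round(energy, 3), round(inv_diff, 3))
```

In radiomics practice these features would be extracted from a library implementation over MRI regions of interest; the hand-rolled version above only illustrates the definitions.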
Funding: Supported by the National Science Foundation of China (Grant Nos. 52068049 and 51908266), the Science Fund for Distinguished Young Scholars of Gansu Province (No. 21JR7RA267), and the Hongliu Outstanding Young Talents Program of Lanzhou University of Technology.
Abstract: The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage, impacting the aerodynamic performance of the blades. To address the challenge of detecting and quantifying surface defects on wind turbine blades, a blade surface defect detection and quantification method based on an improved Deeplabv3+ deep learning model is proposed. First, to address the limited robustness of the original Deeplabv3+ model, an improved detection method is proposed that uses Mobilenetv2 as the backbone feature-extraction network. Second, by integrating pre-trained weights from transfer learning and implementing a freeze-training strategy, both the training speed and the training accuracy of the model are significantly improved. Finally, based on the segmented blade surface defect images, a method for quantifying blade defects is proposed; it combines image-stitching algorithms to achieve overall quantification and risk assessment of the entire blade. Test results show that the improved Deeplabv3+ model reduces training time by approximately 43.03% compared with the original model, while achieving mAP and MIoU values of 96.87% and 96.93%, respectively. Moreover, it demonstrates robustness in detecting different surface defects on blades across different backgrounds. The blade surface defect quantification method enables precise quantification of different defects and assessment of the risk levels associated with defect measurements across the entire blade. This approach enables non-contact, long-distance, high-precision detection and quantification of surface defects on the blades, providing a reference for assessing surface defects on wind turbine blades.
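The final quantification step (measuring defect extent from a segmentation mask and mapping it to a risk level) could look roughly like the following. The class labels, the pixel-to-area scale, and the risk thresholds are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Hypothetical segmentation labels: 0 = background/blade, 1 = crack, 2 = erosion.
CLASS_NAMES = {1: "crack", 2: "erosion"}
MM2_PER_PIXEL = 0.25  # assumed ground-sample area of one pixel, in mm^2

def quantify_defects(mask):
    """Per-class defect area (mm^2) and area ratio from a label mask."""
    total_pixels = mask.size
    report = {}
    for label, name in CLASS_NAMES.items():
        pixels = int(np.count_nonzero(mask == label))
        report[name] = {
            "area_mm2": pixels * MM2_PER_PIXEL,
            "ratio": pixels / total_pixels,
        }
    return report

def risk_level(report):
    """Toy thresholds on total defect ratio; real limits would be calibrated."""
    total = sum(d["ratio"] for d in report.values())
    return "high" if total > 0.05 else "medium" if total > 0.01 else "low"

mask = np.zeros((100, 100), dtype=int)
mask[10:12, 20:40] = 1   # a small crack
mask[60:70, 60:70] = 2   # an erosion patch
rep = quantify_defects(mask)
print(rep, risk_level(rep))
```

In the paper's pipeline, the mask would come from the improved Deeplabv3+ model and the per-image reports would be merged across stitched images to cover the whole blade.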
Abstract: Cardiovascular Diseases (CVDs) pose a significant global health challenge, necessitating accurate risk prediction for effective preventive measures. This comprehensive comparative study explores the performance of traditional Machine Learning (ML) and Deep Learning (DL) models in predicting CVD risk, utilizing a meticulously curated dataset derived from health records. Rigorous preprocessing, including normalization and outlier removal, enhances model robustness. Diverse ML models (Logistic Regression, Random Forest, Support Vector Machine, K-Nearest Neighbor, Decision Tree, and Gradient Boosting) are compared with a Long Short-Term Memory (LSTM) neural network for DL. Evaluation metrics include accuracy, ROC AUC, computation time, and memory usage. Results identify the Gradient Boosting Classifier and LSTM as top performers, demonstrating high accuracy and ROC AUC scores. Comparative analyses highlight model strengths and limitations, contributing valuable insights for optimizing predictive strategies. This study advances predictive analytics for cardiovascular health, with implications for personalized medicine. The findings underscore the versatility of intelligent systems in addressing health challenges, emphasizing the broader applications of ML and DL in disease identification beyond cardiovascular health.
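Evaluating a classifier along several of the axes listed above (accuracy, ROC AUC, and computation time; memory profiling is omitted) can be sketched as below; the synthetic dataset and this pair of models are assumptions standing in for the study's health-record data and full model set:

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in for preprocessed (normalized, outlier-free) health records.
X, y = make_classification(n_samples=2000, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, model in [("gradient boosting", GradientBoostingClassifier(random_state=0)),
                    ("logistic regression", LogisticRegression(max_iter=1000))]:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)          # time the training phase
    fit_s = time.perf_counter() - t0
    proba = model.predict_proba(X_te)[:, 1]
    results[name] = {
        "accuracy": accuracy_score(y_te, proba > 0.5),
        "roc_auc": roc_auc_score(y_te, proba),
        "fit_seconds": fit_s,
    }
print(results)
```

The trade-off the study highlights shows up even here: boosted ensembles typically buy their accuracy with longer training times than linear baselines.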
Abstract: Spatial heterogeneity refers to the variation or differences in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a particular geographic region or location, also known as geo-spatial data or geographic information. Focusing on spatial heterogeneity, we present a hybrid machine learning model combining two competitive algorithms: the Random Forest Regressor and CNN. The model is fine-tuned using cross-validation for hyper-parameter adjustment and performance evaluation, ensuring robustness and generalization. Our approach integrates global Moran's I for examining global autocorrelation and local Moran's I for assessing local spatial autocorrelation in the residuals. To validate our approach, we implemented the hybrid model on a real-world dataset and compared its performance with that of traditional machine learning models. Results indicate superior performance, with an R-squared of 0.90, outperforming RF (0.84) and CNN (0.74). This study contributed to a detailed understanding of spatial variations in data, considering the geographical information (longitude and latitude) present in the dataset. Our results, also assessed using the Root Mean Squared Error (RMSE), indicated that the hybrid model yielded lower errors, showing a deviation of 53.65% from the RF model and 63.24% from the CNN model. Additionally, the global Moran's I index was observed to be 0.10. This study underscores that the hybrid model was able to correctly predict house prices both in clusters and in dispersed areas.
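Global Moran's I, used above to check spatial autocorrelation of the residuals, can be computed as follows; the toy coordinates, the inverse-distance weight matrix, and the test values are assumptions for the sketch:

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I for values x under spatial weight matrix W (zero diagonal):
    I = n * sum_ij W_ij (x_i - xbar)(x_j - xbar) / (sum_ij W_ij * sum_i (x_i - xbar)^2)."""
    x = values - values.mean()
    n = len(values)
    num = n * np.sum(W * np.outer(x, x))
    den = W.sum() * np.sum(x ** 2)
    return num / den

rng = np.random.default_rng(0)
coords = rng.uniform(size=(50, 2))                      # toy longitude/latitude pairs
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
W = np.where(d > 0, 1.0 / np.maximum(d, 1e-9), 0.0)     # inverse-distance weights, zero diagonal

# A spatially smooth variable should show positive autocorrelation;
# pure noise should sit near the null expectation of -1/(n-1).
smooth = coords[:, 0] + coords[:, 1]
noise = rng.normal(size=50)
print(round(morans_i(smooth, W), 3), round(morans_i(noise, W), 3))
```

A value near zero on the hybrid model's residuals, like the 0.10 reported above, suggests the model has absorbed most of the spatial structure that the non-spatial baselines leave behind.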