BACKGROUND Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-d mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
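As a rough illustration of the kind of comparison summarized above, the sketch below fits a random forest and a gradient boosting classifier on a synthetic binary outcome and reports the area under the ROC curve for each; the data and features are placeholders, not drawn from any of the reviewed studies.

```python
# Minimal sketch (not from the reviewed studies): comparing random forest and
# gradient boosting by area under the ROC curve for a binary post-LT outcome.
# The synthetic data and class balance are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("random forest", RandomForestClassifier(n_estimators=500, random_state=0)),
                    ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```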
The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper makes an attempt to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area was found under the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. The factors, namely average annual rainfall, slope, lithology, soil texture and earthquake magnitude, have been identified as the influencing factors for very high landslide susceptibility. Soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
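The frequency ratio step mentioned above can be sketched as follows: for each class of a conditioning factor, the ratio is the share of landslide cells in that class divided by the share of total area in that class. The classes and counts below are illustrative placeholders, not the study's data.

```python
# Hedged sketch of the frequency ratio (FR) method for one conditioning factor.
# FR(class) = (landslides in class / total landslides) / (cells in class / total cells).
import pandas as pd

df = pd.DataFrame({
    "slope_class": ["0-15", "15-25", "25-35", ">35"],
    "landslide_cells": [40, 180, 520, 312],         # landslide pixels per class (hypothetical)
    "total_cells": [120000, 90000, 60000, 30000],   # class area in pixels (hypothetical)
})
df["frequency_ratio"] = (df["landslide_cells"] / df["landslide_cells"].sum()) / (
    df["total_cells"] / df["total_cells"].sum()
)
print(df)  # FR > 1 indicates a class more prone to landslides than average
```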
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish three prediction tasks using a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
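The three validation metrics named above can be computed with standard image-quality routines; the sketch below uses synthetic arrays as placeholders for a predicted FFA frame and its ground truth, and is not the authors' evaluation code.

```python
# Hedged sketch of the validation metrics (MSE, PSNR, SSIM) for a predicted
# image against its ground truth, on placeholder data.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity, mean_squared_error

rng = np.random.default_rng(0)
truth = rng.random((256, 256)).astype(np.float64)                        # ground-truth FFA frame
pred = np.clip(truth + 0.05 * rng.standard_normal(truth.shape), 0, 1)    # model prediction

print("MSE :", mean_squared_error(truth, pred))
print("PSNR:", peak_signal_noise_ratio(truth, pred, data_range=1.0))
print("SSIM:", structural_similarity(truth, pred, data_range=1.0))
```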
BACKGROUND Colorectal cancer significantly impacts global health, with unplanned reoperations post-surgery being key determinants of patient outcomes. Existing predictive models for these reoperations lack precision in integrating complex clinical data. AIM To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients. METHODS Data of patients treated for colorectal cancer (n=2044) at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected. Patients were divided into an experimental group (n=60) and a control group (n=1984) according to unplanned reoperation occurrence. Patients were also divided into a training group and a validation group (7:3 ratio). We used three different machine learning methods to screen characteristic variables. A nomogram was created based on multifactor logistic regression, and the model performance was assessed using the receiver operating characteristic curve, calibration curve, Hosmer-Lemeshow test, and decision curve analysis. The risk scores of the two groups were calculated and compared to validate the model. RESULTS More patients in the experimental group were ≥60 years old, male, and had a history of hypertension, laparotomy, and hypoproteinemia, compared to the control group. Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation (P<0.05): Prognostic Nutritional Index value, history of laparotomy, hypertension, or stroke, hypoproteinemia, age, tumor-node-metastasis staging, surgical time, gender, and American Society of Anesthesiologists classification. Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility. CONCLUSION This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer, which can improve treatment decisions and prognosis.
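One of the checks named above, the Hosmer-Lemeshow test, can be sketched as a decile-based goodness-of-fit statistic on predicted probabilities; the implementation and simulated data below are a generic illustration, not the authors' code or their nomogram.

```python
# Hedged sketch of a decile-based Hosmer-Lemeshow calibration test.
import numpy as np
import pandas as pd
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, groups=10):
    df = pd.DataFrame({"y": y_true, "p": y_prob})
    df["bin"] = pd.qcut(df["p"], q=groups, duplicates="drop")
    stat = 0.0
    for _, g in df.groupby("bin", observed=True):
        obs1, exp1 = g["y"].sum(), g["p"].sum()
        obs0, exp0 = len(g) - obs1, len(g) - exp1
        stat += (obs1 - exp1) ** 2 / exp1 + (obs0 - exp0) ** 2 / exp0
    dof = df["bin"].nunique() - 2
    return stat, chi2.sf(stat, dof)  # a large p-value suggests acceptable calibration

rng = np.random.default_rng(1)
p = rng.uniform(0.01, 0.4, size=2000)   # predicted reoperation risks (placeholder)
y = rng.binomial(1, p)                  # outcomes simulated to match the risks
print(hosmer_lemeshow(y, p))
```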
AIM: To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings. METHODS: A total of 188 patients from the outpatient clinic at He Eye Specialist Shenyang Hospital from September to December 2022 were included, and 13470 infrared pupil images were collected for the study. All infrared images for pupil segmentation were labeled using the Labelme software. The computation of pupil diameter is divided into four steps: image pre-processing, pupil identification and localization, pupil segmentation, and diameter calculation. Two major models are used in the computation process: the modified YoloV3 and DeeplabV3+ models, which must be trained beforehand. RESULTS: The test dataset included 1348 infrared pupil images. On the test dataset, the modified YoloV3 model had a detection rate of 99.98% and an average precision (AP) of 0.80 for pupils. The DeeplabV3+ model achieved a background intersection over union (IOU) of 99.23%, a pupil IOU of 93.81%, and a mean IOU of 96.52%. The pupil diameters in the test dataset ranged from 20 to 56 pixels, with a mean of 36.06±6.85 pixels. The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels, with a mean absolute error (MAE) of 1.06±0.96 pixels. CONCLUSION: This study successfully demonstrates a robust infrared image-based pupil diameter measurement algorithm, proven to be highly accurate and reliable for clinical application.
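For the final diameter-calculation step, one common approach is to take the diameter of the circle whose area equals the segmented pupil region; the sketch below assumes this equivalent-circle convention (the paper does not state its exact formula) and uses a toy mask.

```python
# Hedged sketch: pupil diameter in pixels from a binary segmentation mask,
# assuming an equivalent-circle definition (an assumption, not the paper's stated method).
import numpy as np

def pupil_diameter_px(mask: np.ndarray) -> float:
    """mask: 2-D boolean/0-1 array where True marks pupil pixels."""
    area = float(np.count_nonzero(mask))
    return 2.0 * np.sqrt(area / np.pi)  # diameter of a circle with the same area

# Toy mask: a filled circle of radius 18 px on a 128x128 image.
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 <= 18 ** 2
print(round(pupil_diameter_px(mask), 2))  # close to 36 px
```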
The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage, impacting the aerodynamic performance of the blades. To address the challenge of detecting and quantifying surface defects on wind turbine blades, a blade surface defect detection and quantification method based on an improved Deeplabv3+ deep learning model is proposed. Firstly, an improved method for wind turbine blade surface defect detection, utilizing Mobilenetv2 as the backbone feature extraction network, is proposed based on an original Deeplabv3+ deep learning model to address the issue of limited robustness. Secondly, through integrating the concept of pre-trained weights from transfer learning and implementing a freeze training strategy, significant improvements have been made to enhance both the training speed and model training accuracy of this deep learning model. Finally, based on segmented blade surface defect images, a method for quantifying blade defects is proposed. This method combines image stitching algorithms to achieve overall quantification and risk assessment of the entire blade. Test results show that the improved Deeplabv3+ deep learning model reduces training time by approximately 43.03% compared to the original model, while achieving mAP and MIoU values of 96.87% and 96.93%, respectively. Moreover, it demonstrates robustness in detecting different surface defects on blades across different backgrounds. The application of the blade surface defect quantification method enables the precise quantification of different defects and facilitates the assessment of risk levels associated with defect measurements across the entire blade. This method enables non-contact, long-distance, high-precision detection and quantification of surface defects on the blades, providing a reference for assessing surface defects on wind turbine blades.
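The freeze-training idea described above can be sketched as a two-stage schedule on a pre-trained MobileNetV2 backbone; this is a generic PyTorch illustration with a toy head, not the authors' Deeplabv3+ implementation, and the class count and learning rates are assumptions.

```python
# Hedged sketch of transfer learning with a freeze-then-unfreeze training strategy.
import torch
import torchvision

backbone = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1").features
head = torch.nn.Conv2d(1280, 4, kernel_size=1)  # toy prediction head, 4 defect classes (placeholder)

def set_backbone_trainable(trainable: bool) -> None:
    for p in backbone.parameters():
        p.requires_grad = trainable

# Stage 1: freeze the pre-trained backbone and train only the head (faster, stabilizes early training).
set_backbone_trainable(False)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
# ... train for a few epochs ...

# Stage 2: unfreeze everything and fine-tune the whole model at a lower learning rate.
set_backbone_trainable(True)
optimizer = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-4)
```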
Objective: To analyze the effect of using a problem-based learning (PBL) independent learning model in teaching cerebral ischemic stroke (CIS) first aid in emergency medicine. Methods: 90 interns in the emergency department of our hospital from May 2022 to May 2023 were selected for the study. They were divided into Group A (45 cases, conventional teaching method) and Group B (45 cases, PBL independent learning model) by the randomized numerical table method to compare the effects of the two groups. Results: The teaching effect indicators and student satisfaction scores in Group B were higher than those in Group A (P<0.05). Conclusion: The use of the PBL independent learning model in the teaching of CIS first aid can significantly improve the teaching effect and student satisfaction.
To perform landslide susceptibility prediction (LSP), it is important to select appropriate mapping units and landslide-related conditioning factors. The efficient and automatic multi-scale segmentation (MSS) method proposed by the authors promotes the application of slope units. However, LSP modeling based on these slope units has not been performed. Moreover, the heterogeneity of conditioning factors in slope units is neglected, leading to incomplete input variables of LSP modeling. In this study, the slope units extracted by the MSS method are used to construct LSP modeling, and the heterogeneity of conditioning factors is represented by the internal variations of conditioning factors within each slope unit using the descriptive statistics of mean, standard deviation and range. Thus, slope units-based machine learning models considering internal variations of conditioning factors (variant slope-machine learning) are proposed. Chongyi County is selected as the case study and is divided into 53,055 slope units. Fifteen original slope unit-based conditioning factors are expanded to 38 slope unit-based conditioning factors through considering their internal variations. Random forest (RF) and multi-layer perceptron (MLP) machine learning models are used to construct variant Slope-RF and Slope-MLP models. Meanwhile, the Slope-RF and Slope-MLP models without considering the internal variations of conditioning factors, and conventional grid units-based machine learning (Grid-RF and Grid-MLP) models, are built for comparison through the LSP performance assessments. Results show that the variant Slope-machine learning models have higher LSP performances than the Slope-machine learning models; LSP results of variant Slope-machine learning models have stronger directivity and practical application than Grid-machine learning models. It is concluded that slope units extracted by the MSS method can be appropriate for LSP modeling, and the heterogeneity of conditioning factors within slope units can more comprehensively reflect the relationships between conditioning factors and landslides. The research results have important reference significance for land use and landslide prevention.
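The factor-expansion step described above amounts to aggregating each grid-cell conditioning factor to its mean, standard deviation and range within the enclosing slope unit; the sketch below shows that aggregation on placeholder data, not the study's 53,055 slope units.

```python
# Hedged sketch: expand per-cell conditioning factors into per-slope-unit
# mean / standard deviation / range features. Column names and values are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cells = pd.DataFrame({
    "slope_unit_id": rng.integers(0, 5, size=1000),    # which slope unit each grid cell belongs to
    "slope_deg": rng.uniform(5, 60, size=1000),        # example conditioning factor
    "elevation_m": rng.uniform(200, 2200, size=1000),  # example conditioning factor
})

def value_range(x):
    return x.max() - x.min()

# One row per slope unit, three statistics per original factor.
unit_features = cells.groupby("slope_unit_id").agg(["mean", "std", value_range])
unit_features.columns = ["_".join(c) for c in unit_features.columns]
print(unit_features.head())
```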
Machine learning models were used to improve the accuracy of the China Meteorological Administration Multisource Precipitation Analysis System (CMPAS) in complex terrain areas by combining rain gauge precipitation with topographic factors like altitude, slope, slope direction, slope variability, surface roughness, and meteorological factors like temperature and wind speed. The results of the correction demonstrated that the ensemble learning method has a considerable corrective effect and the three methods (Random Forest, AdaBoost, and Bagging) adopted in the study had similar results. The mean bias between CMPAS and 85% of automatic weather stations has dropped by more than 30%. The plateau region displays the largest accuracy increase, the winter season shows the greatest error reduction, and decreasing precipitation improves the correction outcome. Additionally, the precision for heavy precipitation processes has improved to some degree. For individual stations, the revised CMPAS error fluctuation range is significantly reduced.
Every day, websites and personal archives create more and more photos. The size of these archives is immeasurable. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these collections deliver relevant indexing information. As a result, it is difficult to discover the data that the user may be interested in. Therefore, in order to determine the significance of the data, it is important to identify the contents in an informative manner. Image annotation is one of the most problematic domains in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from the open-source system, which consists of some labelled images (for the training phase) and some unlabeled images (Corel 5K, MSRC v2). After that, the images are sent to the pre-processing step, such as colour space quantization and texture color class map. The pre-processed images are sent to the segmentation approach for an efficient labelling technique using J-image segmentation (JSEG). The final step is automatic annotation using ACDLM, which is a combination of a Convolutional Neural Network (CNN) and the Honey Badger Algorithm (HBA). Based on the proposed classifier, the unlabeled images are labelled. The proposed methodology is implemented in MATLAB and performance is evaluated by metrics such as accuracy, precision, recall and F1-measure.
This study employs nine distinct deep learning models to categorize 12,444 blood cell images and automatically extract from them relevant information with an accuracy that is beyond that achievable with traditional techniques. The work is intended to improve current methods for the assessment of human health through measurement of the distribution of four types of blood cells, namely, eosinophils, neutrophils, monocytes, and lymphocytes, known for their relationship with human body damage, inflammatory regions, and organ illnesses, in particular, and with the health of the immune system and other hazards, such as cardiovascular disease or infections, more in general. The results of the experiments show that the deep learning models can automatically extract features from the blood cell images and properly classify them with an accuracy of 98%, 97%, and 89%, respectively, with regard to the training, verification, and testing of the corresponding datasets.
BACKGROUND Bleeding is one of the major complications after endoscopic submucosal dissection (ESD) in early gastric cancer (EGC) patients. There are limited studies on estimating the bleeding risk after ESD using an artificial intelligence system. AIM To derive and verify the performance of a deep learning model and a clinical model for predicting bleeding risk after ESD in EGC patients. METHODS Patients with EGC who underwent ESD between January 2010 and June 2020 at the Samsung Medical Center were enrolled, and post-ESD bleeding (PEB) was investigated retrospectively. We split the entire cohort into a development set (80%) and a validation set (20%). The deep learning and clinical models were built on the development set and tested in the validation set. The performance of the deep learning model and the clinical model were compared using the area under the curve and the stratification of bleeding risk after ESD. RESULTS A total of 5629 patients were included, and PEB occurred in 325 patients. The area under the curve for predicting PEB was 0.71 (95% confidence interval: 0.63-0.78) in the deep learning model and 0.70 (95% confidence interval: 0.62-0.77) in the clinical model, without significant difference (P=0.730). The patients assigned to the low- (<5%), intermediate- (≥5%, <9%), and high-risk (≥9%) categories were observed with actual bleeding rates of 2.2%, 3.9%, and 11.6%, respectively, in the deep learning model, and 4.0%, 8.8%, and 18.2%, respectively, in the clinical model. CONCLUSION A deep learning model can predict and stratify the bleeding risk after ESD in patients with EGC.
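The risk-stratification step above can be sketched by bucketing predicted probabilities into the <5%, 5-9% and ≥9% bands and tabulating the observed event rate per band; the probabilities and outcomes below are simulated placeholders, not the study cohort.

```python
# Hedged sketch: stratify predicted post-ESD bleeding risk into bands and
# report the observed bleeding rate in each band.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
pred = rng.beta(1.5, 20, size=5000)      # predicted bleeding probabilities (placeholder)
observed = rng.binomial(1, pred)          # simulated actual bleeding outcomes

bands = pd.cut(pred, bins=[0, 0.05, 0.09, 1.0],
               labels=["low (<5%)", "intermediate (5-9%)", "high (>=9%)"], right=False)
summary = (pd.DataFrame({"band": bands, "bled": observed})
           .groupby("band", observed=True)["bled"].agg(["count", "mean"])
           .rename(columns={"mean": "observed_bleeding_rate"}))
print(summary)
```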
In recent years evidence has emerged suggesting that a Mini-basketball training program (MBTP) can be an effective intervention method to improve social communication (SC) impairments and restricted and repetitive behaviors (RRBs) in preschool children suffering from autism spectrum disorder (ASD). However, there is a considerable degree of interindividual variability concerning these social outcomes, and thus not all preschool children with ASD profit from an MBTP intervention to the same extent. In order to make more accurate predictions of which preschool children with ASD can benefit from an MBTP intervention, or which preschool children with ASD need additional interventions to achieve behavioral improvements, further research is required. This study aimed to investigate which individual factors of preschool children with ASD can predict MBTP intervention outcomes concerning SC impairments and RRBs, and then to test the performance of machine learning models in predicting intervention outcomes based on these factors. Participants were 26 preschool children with ASD who enrolled in a quasi-experiment and received the MBTP intervention. Baseline demographic variables (e.g., age, body mass index [BMI]), indicators of physical fitness (e.g., handgrip strength, balance performance), performance in executive function, severity of ASD symptoms, level of SC impairments, and severity of RRBs were obtained to predict treatment outcomes after the MBTP intervention. Machine learning models based on the support vector machine algorithm were implemented. For comparison, we also employed multiple linear regression models in statistics. Our findings suggest that in preschool children with ASD, symptomatic severity (r=0.712, p<0.001) and baseline SC impairments (r=0.713, p<0.001) are predictors for intervention outcomes of SC impairments. Furthermore, BMI (r=-0.430, p=0.028), symptomatic severity (r=0.656, p<0.001), baseline SC impairments (r=0.504, p=0.009) and baseline RRBs (r=0.647, p<0.001) can predict intervention outcomes of RRBs. Statistical models predicted 59.6% of variance in post-treatment SC impairments (MSE=0.455, RMSE=0.675, R2=0.596) and 58.9% of variance in post-treatment RRBs (MSE=0.464, RMSE=0.681, R2=0.589). Machine learning models predicted 83% of variance in post-treatment SC impairments (MSE=0.188, RMSE=0.434, R2=0.83) and 85.9% of variance in post-treatment RRBs (MSE=0.051, RMSE=0.226, R2=0.859), which was better than the statistical models. Our findings suggest that baseline characteristics such as symptomatic severity of ASD symptoms and SC impairments are important predictors determining MBTP intervention-induced improvements concerning SC impairments and RRBs. Furthermore, the current study revealed that machine learning models can successfully be applied to predict MBTP intervention-related outcomes in preschool children with ASD, and performed better than statistical models. Our findings can help to inform which preschool children with ASD are most likely to benefit from an MBTP intervention, and they might provide a reference for the development of personalized intervention programs for preschool children with ASD.
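The model comparison above (support vector machine versus linear regression for a continuous outcome, scored with MSE, RMSE and R2) can be sketched as follows; the data are synthetic placeholders rather than the 26-child sample, and the cross-validation setup is an assumption.

```python
# Hedged sketch: SVR versus ordinary linear regression for a continuous
# post-treatment outcome, evaluated with cross-validated MSE, RMSE and R2.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(26, 6))                              # baseline predictors (placeholder)
y = X @ rng.normal(size=6) + 0.3 * rng.normal(size=26)    # post-treatment outcome (placeholder)

for name, model in [("SVR", make_pipeline(StandardScaler(), SVR(kernel="rbf"))),
                    ("linear regression", LinearRegression())]:
    pred = cross_val_predict(model, X, y, cv=5)
    mse = mean_squared_error(y, pred)
    print(f"{name}: MSE={mse:.3f}, RMSE={np.sqrt(mse):.3f}, R2={r2_score(y, pred):.3f}")
```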
Forecasting the movement of the stock market is a long-time attractive topic. This paper implements different statistical learning models to predict the movement of the S&P 500 index. The S&P 500 index is influenced by other important financial indexes across the world, such as commodity prices and financial technical indicators. This paper systematically investigated four supervised learning models, including Logistic Regression, Gaussian Discriminant Analysis (GDA), Naive Bayes and Support Vector Machine (SVM), in the forecast of the S&P 500 index. After several optimization experiments on features and models, especially SVM kernel selection and feature selection for the different models, this paper concludes that an SVM model with a Radial Basis Function (RBF) kernel can achieve an accuracy rate of 62.51% for the future market trend of the S&P 500 index.
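The best-performing setup named above, an RBF-kernel SVM classifying index direction, can be sketched as below; the features and labels are synthetic placeholders, not the paper's dataset or its reported accuracy.

```python
# Hedged sketch: RBF-kernel SVM classifying next-day index movement (up/down)
# from a handful of placeholder features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
X = rng.normal(size=(1500, 8))   # e.g. returns of other indexes, momentum, volatility (placeholders)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.5, size=1500) > 0).astype(int)  # up=1 / down=0

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)  # keep time order
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print("directional accuracy:", accuracy_score(y_test, model.predict(X_test)))
```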
Seasonal location and intensity changes in the western Pacific subtropical high (WPSH) are important factors dominating the synoptic weather and the distribution and magnitude of precipitation in the rain belt over East Asia. Therefore, this article delves into the forecast of the western Pacific subtropical high index during typhoon activity by adopting a hybrid deep learning model. Firstly, the predictors, which are the inputs of the model, are analysed based on three characteristics: the first is the statistical discipline of the WPSH index anomalies corresponding to the three types of typhoon paths; the second is the correspondence of distributions between sea surface temperature, 850 hPa zonal wind (u), meridional wind (v), and the 500 hPa potential height field; and the third is the numerical sensitivity experiment, which reflects the evident impact of variations in the physical field around the typhoon on the WPSH index. Secondly, the model is repeatedly trained through the backward propagation algorithm to predict the WPSH index, using 2011-2018 atmospheric variables as the input of the training set. The model predicts the WPSH index after 6 h, 24 h, 48 h, and 72 h. A validation set using independent data from 2019 is utilized to illustrate the performance. Finally, the model is improved by changing the CNN2D module to the DeCNN module to enhance its ability to predict images. Taking the 2019 typhoon "Lekima" as an example, it shows the promising performance of this model in predicting the 500 hPa potential height field.
Production performance prediction of tight gas reservoirs is crucial to the estimation of ultimate recovery, which has an important impact on gas field development planning and economic evaluation. Owing to the model's simplicity, the decline curve analysis method has been widely used to predict production performance. The advancement of deep-learning methods provides an intelligent way of analyzing production performance in tight gas reservoirs. In this paper, a sequence learning method to improve the accuracy and efficiency of tight gas production forecasting is proposed. The sequence learning methods used in production performance analysis herein include the recurrent neural network (RNN), long short-term memory (LSTM) neural network, and gated recurrent unit (GRU) neural network, and their performance in tight gas reservoir production prediction is investigated and compared. To further improve the performance of the sequence learning method, the hyperparameters in the sequence learning methods are optimized through a particle swarm optimization algorithm, which can greatly simplify the optimization process of the neural network model in an automated manner. Results show that the optimized GRU and RNN models have more compact neural network structures than the LSTM model and that the GRU is more efficiently trained. The predictive performance of LSTM and GRU is similar, and both are better than the RNN and the decline curve analysis model and thus can be used to predict tight gas production.
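A GRU sequence model of the kind compared above can be sketched as a sliding-window forecaster; the architecture, window length and synthetic decline-curve data below are illustrative choices, not the paper's configuration or its PSO-tuned hyperparameters.

```python
# Hedged sketch: GRU sequence model forecasting the next production rate from
# a window of past rates, on a synthetic decline curve.
import numpy as np
import tensorflow as tf

t = np.arange(0, 600, dtype=np.float32)
rate = 100.0 / (1.0 + 0.01 * t) ** 2 + np.random.default_rng(0).normal(0, 0.5, t.size).astype(np.float32)

window = 30  # use the last 30 time steps to predict the next rate (assumption)
X = np.stack([rate[i:i + window] for i in range(len(rate) - window)])[..., None]
y = rate[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-step forecast:", float(model.predict(rate[-window:][None, :, None], verbose=0)[0, 0]))
```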
Stock market trends forecast is one of the most current topics and a significant research challenge due to its dynamic and unstable nature. The stock data is usually non-stationary, and attributes are non-correlative to each other. Several traditional Stock Technical Indicators (STIs) may incorrectly predict the stock market trends. To study the stock market characteristics using STIs and make efficient trading decisions, a robust model is built. This paper aims to build an Evolutionary Deep Learning Model (EDLM) to identify stock trends' prices by using STIs. The proposed model has implemented the Deep Learning (DL) model to establish the concept of Correlation-Tensor. For the analysis of the dataset of the three most popular banking organizations obtained from the live stock market based on the National Stock Exchange (NSE)-India, a Long Short Term Memory (LSTM) is used. The datasets encompassed the trading days from the 17th of Nov 2008 to the 15th of Nov 2018. This work also conducted exhaustive experiments to study the correlation of various STIs with stock price trends. The model built with an EDLM has shown significant improvements over two benchmark ML models and a deep learning one. The proposed model aids investors in making profitable investment decisions as it presents trend-based forecasting and has achieved a prediction accuracy of 63.59%, 56.25%, and 57.95% on the datasets of HDFC, Yes Bank, and SBI, respectively. Results indicate that the proposed EDLM with a combination of STIs can often provide better results than the other state-of-the-art algorithms.
Data is always a crucial issue of concern, especially during its prediction and computation in the digital revolution. This paper helps in providing an efficient learning mechanism for accurate predictability and reducing redundant data communication. It also discusses the Bayesian analysis that finds the conditional probability of at least two parametric-based predictions for the data. The paper presents a method for improving the performance of Bayesian classification using the combination of Kalman Filter and K-means. The method is applied on a small dataset just for establishing the fact that the proposed algorithm can reduce the time for computing the clusters from data. The proposed Bayesian learning probabilistic model is used to check the statistical noise and other inaccuracies using unknown variables. This scenario is implemented using an efficient machine learning algorithm to perpetuate the Bayesian probabilistic approach. It also demonstrates the generative function for the Kalman-filter based prediction model and its observations. This paper implements the algorithm using the open-source platform of Python and efficiently integrates all the different modules into a piece of code via Common Platform Enumeration (CPE) for Python.
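One way to read the K-means plus Bayesian classification combination described above is to feed cluster assignments into a Gaussian naive Bayes classifier; the sketch below illustrates that interpretation on synthetic data, omits the Kalman filter stage, and is not the paper's exact pipeline.

```python
# Hedged sketch: use K-means cluster labels as an extra feature before
# (Gaussian) naive Bayes classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_blobs(n_samples=600, centers=3, cluster_std=1.5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)
X_train_aug = np.column_stack([X_train, km.predict(X_train)])
X_test_aug = np.column_stack([X_test, km.predict(X_test)])

clf = GaussianNB().fit(X_train_aug, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test_aug)))
```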
In the Internet of Things (IoT), large amounts of data are processed and communicated through different network technologies. Wireless Body Area Networks (WBAN) play a pivotal role in the health care domain with an integration of IoT and Artificial Intelligence (AI). The amalgamation of the above-mentioned tools has reached a new peak in terms of the diagnosis and treatment process, especially in the pandemic period. But real challenges such as low latency, energy consumption, and high throughput still remain on the dark side of the research. This paper proposes a novel optimized cognitive learning based BAN model based on Fog-IoT technology as a real-time health monitoring system with increased network lifetime. Energy and latency aware features of the BAN have been extracted and used to train the proposed fog based learning algorithm to achieve a low-energy-consumption, low-latency scheduling algorithm. To test the proposed network, a Fog-IoT-BAN test bed has been developed with battery driven MICOTT boards interfaced with the health care sensors using MicroPython programming. Extensive experimentation is carried out using the above test beds, and various parameters such as accuracy, precision, recall, F1-score and specificity have been calculated along with QoS (quality of service) parameters such as latency, energy and throughput. To prove the superiority of the proposed framework, the performance of the proposed learning based framework has been compared with other state-of-the-art classical learning frameworks and other existing Fog-BAN networks such as WORN, DARE, and L-No-DEAF networks. Results prove that the proposed framework has outperformed the other classical learning models in terms of accuracy, False Alarm Rate (FAR), energy efficiency and latency.
Classifying the visual features in images to retrieve a specific image is a significant problem within the computer vision field, especially when dealing with historical faded colored images. Thus, there have been many efforts to automate the classification operation and retrieve similar images accurately. To reach this goal, we developed a VGG19 deep convolutional neural network to extract the visual features from the images automatically. Then, the distances among the extracted feature vectors are measured and a similarity score is generated using a Siamese deep neural network. The Siamese model was first built and trained from scratch, but it did not generate high evaluation metrics. Thus, we re-built it from the VGG19 pre-trained deep learning model to generate higher evaluation metrics. Afterward, three different distance metrics combined with the Sigmoid activation function were tested, looking for the most accurate method for measuring the similarities among the retrieved images. The highest evaluation parameters were generated using the Cosine distance metric. Moreover, the Graphics Processing Unit (GPU) was utilized to run the code instead of running it on the Central Processing Unit (CPU). This step optimized the execution further since it expedited both the training and the retrieval time. After extensive experimentation, we reached a satisfactory solution, recording 0.98 and 0.99 F-scores for the classification and the retrieval, respectively.
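The retrieval idea above can be sketched as extracting feature vectors with a pre-trained VGG19 and ranking gallery images by cosine similarity to the query; this is a generic Keras illustration on random placeholder images, not the authors' Siamese network.

```python
# Hedged sketch: VGG19 feature extraction + cosine-similarity ranking for retrieval.
import numpy as np
import tensorflow as tf
from sklearn.metrics.pairwise import cosine_similarity

# VGG19 without its classification head, used as a fixed feature extractor.
extractor = tf.keras.applications.VGG19(weights="imagenet", include_top=False, pooling="avg")

def features(images: np.ndarray) -> np.ndarray:
    x = tf.keras.applications.vgg19.preprocess_input(images.astype(np.float32))
    return extractor.predict(x, verbose=0)  # one 512-dim vector per image

rng = np.random.default_rng(0)
gallery = rng.integers(0, 255, size=(5, 224, 224, 3))  # placeholder image gallery
query = rng.integers(0, 255, size=(1, 224, 224, 3))    # placeholder query image

scores = cosine_similarity(features(query), features(gallery))[0]
print("most similar gallery index:", int(np.argmax(scores)))
```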
文摘BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT,marking a significant evolution in the field of prognostication.
文摘The Indian Himalayan region is frequently experiencing climate change-induced landslides.Thus,landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard.This paper makes an attempt to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region.It examined the effectiveness of random forest(RF),multilayer perceptron(MLP),sequential minimal optimization regression(SMOreg)and bagging ensemble(B-RF,BSMOreg,B-MLP)models.A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training(70%)and testing(30%)datasets.The site-specific influencing factors were selected by employing a multicollinearity test.The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method.The effectiveness of machine learning models was verified through performance assessors.The landslide susceptibility maps were validated by the area under the receiver operating characteristic curves(ROC-AUC),accuracy,precision,recall and F1-score.The key performance metrics and map validation demonstrated that the BRF model(correlation coefficient:0.988,mean absolute error:0.010,root mean square error:0.058,relative absolute error:2.964,ROC-AUC:0.947,accuracy:0.778,precision:0.819,recall:0.917 and F-1 score:0.865)outperformed the single classifiers and other bagging ensemble models for landslide susceptibility.The results show that the largest area was found under the very high susceptibility zone(33.87%),followed by the low(27.30%),high(20.68%)and moderate(18.16%)susceptibility zones.The factors,namely average annual rainfall,slope,lithology,soil texture and earthquake magnitude have been identified as the influencing factors for very high landslide susceptibility.Soil texture,lineament density and elevation have been attributed to high and moderate susceptibility.Thus,the study calls for devising suitable landslide mitigation measures in the study area.Structural measures,an immediate response system,community participation and coordination among stakeholders may help lessen the detrimental impact of landslides.The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
基金supported in part by the Gusu Innovation and Entrepreneurship Leading Talents in Suzhou City,grant numbers ZXL2021425 and ZXL2022476Doctor of Innovation and Entrepreneurship Program in Jiangsu Province,grant number JSSCBS20211440+6 种基金Jiangsu Province Key R&D Program,grant number BE2019682Natural Science Foundation of Jiangsu Province,grant number BK20200214National Key R&D Program of China,grant number 2017YFB0403701National Natural Science Foundation of China,grant numbers 61605210,61675226,and 62075235Youth Innovation Promotion Association of Chinese Academy of Sciences,grant number 2019320Frontier Science Research Project of the Chinese Academy of Sciences,grant number QYZDB-SSW-JSC03Strategic Priority Research Program of the Chinese Academy of Sciences,grant number XDB02060000.
文摘The prediction of fundus fluorescein angiography(FFA)images from fundus structural images is a cutting-edge research topic in ophthalmological image processing.Prediction comprises estimating FFA from fundus camera imaging,single-phase FFA from scanning laser ophthalmoscopy(SLO),and three-phase FFA also from SLO.Although many deep learning models are available,a single model can only perform one or two of these prediction tasks.To accomplish three prediction tasks using a unified method,we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network.The three prediction tasks are processed as follows:data preparation,network training under FFA supervision,and FFA image prediction from fundus structure images on a test set.By comparing the FFA images predicted by our model,pix2pix,and CycleGAN,we demonstrate the remarkable progress achieved by our proposal.The high performance of our model is validated in terms of the peak signal-to-noise ratio,structural similarity index,and mean squared error.
基金This study has been reviewed and approved by the Clinical Research Ethics Committee of Wenzhou Central Hospital and the First Hospital Affiliated to Wenzhou Medical University,No.KY2024-R016.
文摘BACKGROUND Colorectal cancer significantly impacts global health,with unplanned reoperations post-surgery being key determinants of patient outcomes.Existing predictive models for these reoperations lack precision in integrating complex clinical data.AIM To develop and validate a machine learning model for predicting unplanned reoperation risk in colorectal cancer patients.METHODS Data of patients treated for colorectal cancer(n=2044)at the First Affiliated Hospital of Wenzhou Medical University and Wenzhou Central Hospital from March 2020 to March 2022 were retrospectively collected.Patients were divided into an experimental group(n=60)and a control group(n=1984)according to unplanned reoperation occurrence.Patients were also divided into a training group and a validation group(7:3 ratio).We used three different machine learning methods to screen characteristic variables.A nomogram was created based on multifactor logistic regression,and the model performance was assessed using receiver operating characteristic curve,calibration curve,Hosmer-Lemeshow test,and decision curve analysis.The risk scores of the two groups were calculated and compared to validate the model.RESULTS More patients in the experimental group were≥60 years old,male,and had a history of hypertension,laparotomy,and hypoproteinemia,compared to the control group.Multiple logistic regression analysis confirmed the following as independent risk factors for unplanned reoperation(P<0.05):Prognostic Nutritional Index value,history of laparotomy,hypertension,or stroke,hypoproteinemia,age,tumor-node-metastasis staging,surgical time,gender,and American Society of Anesthesiologists classification.Receiver operating characteristic curve analysis showed that the model had good discrimination and clinical utility.CONCLUSION This study used a machine learning approach to build a model that accurately predicts the risk of postoperative unplanned reoperation in patients with colorectal cancer,which can improve treatment decisions and prognosis.
文摘AIM:To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings.METHODS:A total of 188 patients from outpatient clinic at He Eye Specialist Shenyang Hospital from Spetember to December 2022 were included,and 13470 infrared pupil images were collected for the study.All infrared images for pupil segmentation were labeled using the Labelme software.The computation of pupil diameter is divided into four steps:image pre-processing,pupil identification and localization,pupil segmentation,and diameter calculation.Two major models are used in the computation process:the modified YoloV3 and Deeplabv 3+models,which must be trained beforehand.RESULTS:The test dataset included 1348 infrared pupil images.On the test dataset,the modified YoloV3 model had a detection rate of 99.98% and an average precision(AP)of 0.80 for pupils.The DeeplabV3+model achieved a background intersection over union(IOU)of 99.23%,a pupil IOU of 93.81%,and a mean IOU of 96.52%.The pupil diameters in the test dataset ranged from 20 to 56 pixels,with a mean of 36.06±6.85 pixels.The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels,with a mean absolute error(MAE)of 1.06±0.96 pixels.CONCLUSION:This study successfully demonstrates a robust infrared image-based pupil diameter measurement algorithm,proven to be highly accurate and reliable for clinical application.
基金supported by the National Science Foundation of China(Grant Nos.52068049 and 51908266)the Science Fund for Distinguished Young Scholars of Gansu Province(No.21JR7RA267)Hongliu Outstanding Young Talents Program of Lanzhou University of Technology.
文摘The accumulation of defects on wind turbine blade surfaces can lead to irreversible damage,impacting the aero-dynamic performance of the blades.To address the challenge of detecting and quantifying surface defects on wind turbine blades,a blade surface defect detection and quantification method based on an improved Deeplabv3+deep learning model is proposed.Firstly,an improved method for wind turbine blade surface defect detection,utilizing Mobilenetv2 as the backbone feature extraction network,is proposed based on an original Deeplabv3+deep learning model to address the issue of limited robustness.Secondly,through integrating the concept of pre-trained weights from transfer learning and implementing a freeze training strategy,significant improvements have been made to enhance both the training speed and model training accuracy of this deep learning model.Finally,based on segmented blade surface defect images,a method for quantifying blade defects is proposed.This method combines image stitching algorithms to achieve overall quantification and risk assessment of the entire blade.Test results show that the improved Deeplabv3+deep learning model reduces training time by approximately 43.03%compared to the original model,while achieving mAP and MIoU values of 96.87%and 96.93%,respectively.Moreover,it demonstrates robustness in detecting different surface defects on blades across different back-grounds.The application of a blade surface defect quantification method enables the precise quantification of dif-ferent defects and facilitates the assessment of risk levels associated with defect measurements across the entire blade.This method enables non-contact,long-distance,high-precision detection and quantification of surface defects on the blades,providing a reference for assessing surface defects on wind turbine blades.
文摘Objective:To analyze the effect of using a problem-based(PBL)independent learning model in teaching cerebral ischemic stroke(CIS)first aid in emergency medicine.Methods:90 interns in the emergency department of our hospital from May 2022 to May 2023 were selected for the study.They were divided into Group A(45,conventional teaching method)and Group B(45 cases,PBL independent learning model)by randomized numerical table method to compare the effects of the two groups.Results:The teaching effect indicators and student satisfaction scores in Group B were higher than those in Group A(P<0.05).Conclusion:The use of the PBL independent learning model in the teaching of CIS first aid can significantly improve the teaching effect and student satisfaction.
基金funded by the Natural Science Foundation of China(Grant Nos.41807285,41972280 and 52179103).
文摘To perform landslide susceptibility prediction(LSP),it is important to select appropriate mapping unit and landslide-related conditioning factors.The efficient and automatic multi-scale segmentation(MSS)method proposed by the authors promotes the application of slope units.However,LSP modeling based on these slope units has not been performed.Moreover,the heterogeneity of conditioning factors in slope units is neglected,leading to incomplete input variables of LSP modeling.In this study,the slope units extracted by the MSS method are used to construct LSP modeling,and the heterogeneity of conditioning factors is represented by the internal variations of conditioning factors within slope unit using the descriptive statistics features of mean,standard deviation and range.Thus,slope units-based machine learning models considering internal variations of conditioning factors(variant slope-machine learning)are proposed.The Chongyi County is selected as the case study and is divided into 53,055 slope units.Fifteen original slope unit-based conditioning factors are expanded to 38 slope unit-based conditioning factors through considering their internal variations.Random forest(RF)and multi-layer perceptron(MLP)machine learning models are used to construct variant Slope-RF and Slope-MLP models.Meanwhile,the Slope-RF and Slope-MLP models without considering the internal variations of conditioning factors,and conventional grid units-based machine learning(Grid-RF and MLP)models are built for comparisons through the LSP performance assessments.Results show that the variant Slopemachine learning models have higher LSP performances than Slope-machine learning models;LSP results of variant Slope-machine learning models have stronger directivity and practical application than Grid-machine learning models.It is concluded that slope units extracted by MSS method can be appropriate for LSP modeling,and the heterogeneity of conditioning factors within slope units can more comprehensively reflect the relationships between conditioning factors and landslides.The research results have important reference significance for land use and landslide prevention.
基金Program of Science and Technology Department of Sichuan Province(2022YFS0541-02)Program of Heavy Rain and Drought-flood Disasters in Plateau and Basin Key Laboratory of Sichuan Province(SCQXKJQN202121)Innovative Development Program of the China Meteorological Administration(CXFZ2021Z007)。
文摘Machine learning models were used to improve the accuracy of China Meteorological Administration Multisource Precipitation Analysis System(CMPAS)in complex terrain areas by combining rain gauge precipitation with topographic factors like altitude,slope,slope direction,slope variability,surface roughness,and meteorological factors like temperature and wind speed.The results of the correction demonstrated that the ensemble learning method has a considerably corrective effect and the three methods(Random Forest,AdaBoost,and Bagging)adopted in the study had similar results.The mean bias between CMPAS and 85%of automatic weather stations has dropped by more than 30%.The plateau region displays the largest accuracy increase,the winter season shows the greatest error reduction,and decreasing precipitation improves the correction outcome.Additionally,the heavy precipitation process’precision has improved to some degree.For individual stations,the revised CMPAS error fluctuation range is significantly reduced.
文摘Every day,websites and personal archives create more and more photos.The size of these archives is immeasurable.The comfort of use of these huge digital image gatherings donates to their admiration.However,not all of these folders deliver relevant indexing information.From the outcomes,it is dif-ficult to discover data that the user can be absorbed in.Therefore,in order to determine the significance of the data,it is important to identify the contents in an informative manner.Image annotation can be one of the greatest problematic domains in multimedia research and computer vision.Hence,in this paper,Adap-tive Convolutional Deep Learning Model(ACDLM)is developed for automatic image annotation.Initially,the databases are collected from the open-source system which consists of some labelled images(for training phase)and some unlabeled images{Corel 5 K,MSRC v2}.After that,the images are sent to the pre-processing step such as colour space quantization and texture color class map.The pre-processed images are sent to the segmentation approach for efficient labelling technique using J-image segmentation(JSEG).Thefinal step is an auto-matic annotation using ACDLM which is a combination of Convolutional Neural Network(CNN)and Honey Badger Algorithm(HBA).Based on the proposed classifier,the unlabeled images are labelled.The proposed methodology is imple-mented in MATLAB and performance is evaluated by performance metrics such as accuracy,precision,recall and F1_Measure.With the assistance of the pro-posed methodology,the unlabeled images are labelled.
基金supported by National Natural Science Foundation of China(NSFC)(Nos.61806087,61902158).
文摘This study employs nine distinct deep learning models to categorize 12,444 blood cell images and automatically extract from them relevant information with an accuracy that is beyond that achievable with traditional techniques.The work is intended to improve current methods for the assessment of human health through measurement of the distribution of four types of blood cells,namely,eosinophils,neutrophils,monocytes,and lymphocytes,known for their relationship with human body damage,inflammatory regions,and organ illnesses,in particular,and with the health of the immune system and other hazards,such as cardiovascular disease or infections,more in general.The results of the experiments show that the deep learning models can automatically extract features from the blood cell images and properly classify them with an accuracy of 98%,97%,and 89%,respectively,with regard to the training,verification,and testing of the corresponding datasets.
文摘BACKGROUND Bleeding is one of the major complications after endoscopic submucosal dissection(ESD)in early gastric cancer(EGC)patients.There are limited studies on estimating the bleeding risk after ESD using an artificial intelligence system.AIM To derivate and verify the performance of the deep learning model and the clinical model for predicting bleeding risk after ESD in EGC patients.METHODS Patients with EGC who underwent ESD between January 2010 and June 2020 at the Samsung Medical Center were enrolled,and post-ESD bleeding(PEB)was investigated retrospectively.We split the entire cohort into a development set(80%)and a validation set(20%).The deep learning and clinical model were built on the development set and tested in the validation set.The performance of the deep learning model and the clinical model were compared using the area under the curve and the stratification of bleeding risk after ESD.RESULTS A total of 5629 patients were included,and PEB occurred in 325 patients.The area under the curve for predicting PEB was 0.71(95%confidence interval:0.63-0.78)in the deep learning model and 0.70(95%confidence interval:0.62-0.77)in the clinical model,without significant difference(P=0.730).The patients expected to the low-(<5%),intermediate-(≥5%,<9%),and high-risk(≥9%)categories were observed with actual bleeding rate of 2.2%,3.9%,and 11.6%,respectively,in the deep learning model;4.0%,8.8%,and 18.2%,respectively,in the clinical model.CONCLUSION A deep learning model can predict and stratify the bleeding risk after ESD in patients with EGC.
Funding: Supported by grants from the National Natural Science Foundation of China (31771243) and the Fok Ying Tong Education Foundation (141113) to Aiguo Chen.
Abstract: In recent years, evidence has emerged suggesting that a mini-basketball training program (MBTP) can be an effective intervention for improving social communication (SC) impairments and restricted and repetitive behaviors (RRBs) in preschool children with autism spectrum disorder (ASD). However, there is considerable interindividual variability in these social outcomes, and not all preschool children with ASD benefit from an MBTP intervention to the same extent. Further research is required to predict more accurately which preschool children with ASD can benefit from an MBTP intervention and which need additional interventions to achieve behavioral improvements. This study aimed to investigate which individual factors of preschool children with ASD predict MBTP intervention outcomes for SC impairments and RRBs, and to test the performance of machine learning models in predicting intervention outcomes from these factors. Participants were 26 preschool children with ASD who enrolled in a quasi-experiment and received the MBTP intervention. Baseline demographic variables (e.g., age, body mass index [BMI]), indicators of physical fitness (e.g., handgrip strength, balance performance), executive function performance, severity of ASD symptoms, level of SC impairments, and severity of RRBs were used to predict treatment outcomes after the MBTP intervention. Machine learning models based on the support vector machine algorithm were implemented; for comparison, multiple linear regression models were also employed. Our findings suggest that in preschool children with ASD, symptomatic severity (r=0.712, p<0.001) and baseline SC impairments (r=0.713, p<0.001) predict intervention outcomes for SC impairments. Furthermore, BMI (r=-0.430, p=0.028), symptomatic severity (r=0.656, p<0.001), baseline SC impairments (r=0.504, p=0.009), and baseline RRBs (r=0.647, p<0.001) predict intervention outcomes for RRBs. Statistical models explained 59.6% of the variance in post-treatment SC impairments (MSE=0.455, RMSE=0.675, R2=0.596) and 58.9% of the variance in post-treatment RRBs (MSE=0.464, RMSE=0.681, R2=0.589). Machine learning models explained 83% of the variance in post-treatment SC impairments (MSE=0.188, RMSE=0.434, R2=0.83) and 85.9% of the variance in post-treatment RRBs (MSE=0.051, RMSE=0.226, R2=0.859), outperforming the statistical models. Our findings suggest that baseline characteristics such as symptomatic severity of ASD and SC impairments are important predictors of MBTP intervention-induced improvements in SC impairments and RRBs. Furthermore, the study shows that machine learning models can successfully predict MBTP intervention outcomes in preschool children with ASD and perform better than statistical models. These findings can help identify which preschool children with ASD are most likely to benefit from an MBTP intervention and may provide a reference for the development of personalized intervention programs for preschool children with ASD.
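A minimal sketch of the modelling step, assuming a support vector regression on baseline features; the paper's exact feature coding, tuning, and validation scheme are not reproduced, and the data below are placeholders.

```python
# A minimal sketch: SVR predicting post-intervention SC impairment scores from
# baseline characteristics. Feature set and kernel settings are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Rows: children; columns: e.g. age, BMI, symptom severity, baseline SC impairment.
X = np.random.rand(26, 4)
y = np.random.rand(26)  # post-treatment SC impairment score (placeholder)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
# With only 26 participants, cross-validation keeps the performance estimate honest.
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2_scores.mean())
```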
Abstract: Forecasting stock market movement has long been an attractive research topic. This paper implements different statistical learning models to predict the movement of the S&P 500 index, which is influenced by other important financial indexes across the world, such as commodity prices and financial technical indicators. The paper systematically investigates four supervised learning models, namely Logistic Regression, Gaussian Discriminant Analysis (GDA), Naive Bayes, and Support Vector Machine (SVM), for forecasting the S&P 500 index. After several rounds of optimization over features and models, especially SVM kernel selection and per-model feature selection, the paper concludes that an SVM with a Radial Basis Function (RBF) kernel can achieve an accuracy of 62.51% for the future market trend of the S&P 500 index.
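The best-performing configuration reported above, an RBF-kernel SVM classifying market direction, can be sketched as follows; the lagged-return features here are placeholders for the paper's set of world indexes, commodity prices, and technical indicators.

```python
# A minimal sketch: SVM with an RBF kernel classifying next-day index direction.
# Synthetic returns stand in for the real S&P 500 feature set.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 1000)

# Features: the previous 5 daily returns; label: whether the following return is positive.
X = np.array([returns[i:i + 5] for i in range(len(returns) - 5)])
y = (returns[5:] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```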
Abstract: Seasonal changes in the location and intensity of the western Pacific subtropical high (WPSH) are important factors dominating synoptic weather and the distribution and magnitude of precipitation in the rain belt over East Asia. This article therefore addresses forecasting the WPSH index during typhoon activity with a hybrid deep learning model. Firstly, the predictors, which are the inputs of the model, are analysed based on three characteristics: the statistical pattern of WPSH index anomalies corresponding to the three types of typhoon paths; the correspondence of distributions between sea surface temperature, 850 hPa zonal wind (u), meridional wind (v), and the 500 hPa geopotential height field; and a numerical sensitivity experiment, which shows the evident impact of variations in the physical fields around the typhoon on the WPSH index. Secondly, the model is trained with the backpropagation algorithm to predict the WPSH index, using 2011–2018 atmospheric variables as the training set; it predicts the WPSH index 6 h, 24 h, 48 h, and 72 h ahead. A validation set of independent data from 2019 is used to illustrate the performance. Finally, the model is improved by replacing the CNN2D module with a DeCNN module to enhance its ability to predict image-like fields. Taking the 2019 typhoon Lekima as an example, the model shows promising performance in predicting the 500 hPa geopotential height field.
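A minimal sketch of the general idea, not the paper's hybrid architecture: a small convolutional regressor mapping gridded predictor fields (SST, 850 hPa winds, and 500 hPa geopotential height stacked as channels) to a scalar WPSH index at a fixed lead time. The grid size, layer widths, and single-output setup are assumptions.

```python
# A minimal sketch: CNN regression from stacked atmospheric fields to a WPSH index.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

GRID = (32, 64)     # assumed lat x lon resolution of the predictor fields
N_CHANNELS = 4      # SST, u850, v850, z500

model = models.Sequential([
    layers.Input(shape=GRID + (N_CHANNELS,)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # predicted WPSH index at the chosen lead time (e.g. +24 h)
])
model.compile(optimizer="adam", loss="mse")

# Placeholder tensors standing in for the 2011-2018 training fields and index values.
x = np.random.rand(200, *GRID, N_CHANNELS).astype("float32")
y = np.random.rand(200, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
```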
Funding: Funded by the Joint Funds of the National Natural Science Foundation of China (U19B6003) and the PetroChina Innovation Foundation (Grant No. 2020D5007-0203); further supported by the Science Foundation of China University of Petroleum, Beijing (Nos. 2462021YXZZ010, 2462018QZDX13, and 2462020YXZZ028).
Abstract: Production performance prediction of tight gas reservoirs is crucial to estimating ultimate recovery, which has an important impact on gas field development planning and economic evaluation. Owing to its simplicity, the decline curve analysis method has been widely used to predict production performance. The advancement of deep learning methods provides an intelligent way of analyzing production performance in tight gas reservoirs. In this paper, a sequence learning method is proposed to improve the accuracy and efficiency of tight gas production forecasting. The sequence learning methods considered include the recurrent neural network (RNN), long short-term memory (LSTM) neural network, and gated recurrent unit (GRU) neural network, and their performance in tight gas reservoir production prediction is investigated and compared. To further improve performance, the hyperparameters of the sequence learning methods are optimized with a particle swarm optimization algorithm, which greatly simplifies the optimization of the neural network models in an automated manner. Results show that the optimized GRU and RNN models have more compact network structures than the LSTM model and that the GRU is trained more efficiently. The predictive performance of the LSTM and GRU is similar, and both outperform the RNN and the decline curve analysis model, so either can be used to predict tight gas production.
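A minimal sketch of the GRU forecaster discussed above, without the particle swarm hyperparameter search used in the paper; the window length, layer size, and synthetic decline-style series are illustrative assumptions.

```python
# A minimal sketch: one-step-ahead production forecasting with a GRU on a
# synthetic hyperbolic-decline-like rate series standing in for well data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 12  # months of production history used to predict the next month

t = np.arange(120, dtype="float32")
rate = 1000.0 / (1.0 + 0.05 * t)  # placeholder gas rate history

X = np.array([rate[i:i + WINDOW] for i in range(len(rate) - WINDOW)])[..., None]
y = rate[WINDOW:]

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.GRU(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=8, verbose=0)

next_rate = model.predict(rate[-WINDOW:][None, :, None])  # one-step-ahead forecast
```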
Funding: Provided by Taif University Researchers Supporting Project Number (TURSP-2020/10), Taif University, Taif, Saudi Arabia.
Abstract: Forecasting stock market trends is one of the most current topics and a significant research challenge due to the market's dynamic and unstable nature. Stock data are usually non-stationary, and attributes are not correlated with each other. Several traditional Stock Technical Indicators (STIs) may incorrectly predict stock market trends. To study stock market characteristics using STIs and make efficient trading decisions, a robust model is built. This paper builds an Evolutionary Deep Learning Model (EDLM) to identify stock price trends by using STIs. The proposed model uses a deep learning (DL) model to establish the concept of a correlation tensor. For the analysis of the datasets of the three most popular banking organizations, obtained from the live stock market of the National Stock Exchange (NSE) of India, a Long Short-Term Memory (LSTM) network is used. The datasets cover the trading days from the 17th of November 2008 to the 15th of November 2018. This work also conducted exhaustive experiments to study the correlation of various STIs with stock price trends. The model built with the EDLM shows significant improvements over two benchmark ML models and one deep learning model. The proposed model aids investors in making profitable investment decisions, as it provides trend-based forecasting and achieved prediction accuracies of 63.59%, 56.25%, and 57.95% on the HDFC, Yes Bank, and SBI datasets, respectively. Results indicate that the proposed EDLM combined with STIs can often provide better results than other state-of-the-art algorithms.
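The feature-engineering step, computing stock technical indicators and checking their relation to price movement, can be sketched as follows; the two indicators, window lengths, and synthetic prices are assumptions, and the paper's full STI set and correlation-tensor construction are not reproduced.

```python
# A minimal sketch: compute two common STIs (SMA and RSI) from closing prices
# and check their correlation with the next-day price change.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
close = pd.Series(100 + rng.normal(0, 1, 500).cumsum(), name="close")  # placeholder prices

sma_14 = close.rolling(14).mean()  # 14-day simple moving average

delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi_14 = 100 - 100 / (1 + gain / loss)  # 14-day relative strength index

next_change = close.shift(-1) - close  # next-day price movement
features = pd.DataFrame({"sma_14": sma_14, "rsi_14": rsi_14, "target": next_change}).dropna()
print(features[["sma_14", "rsi_14"]].corrwith(features["target"]))
```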
Abstract: Data is always a crucial concern, especially for its prediction and computation in the digital revolution. This paper provides an efficient learning mechanism for accurate prediction and for reducing redundant data communication. It also discusses Bayesian analysis, which finds the conditional probability of at least two parameter-based predictions for the data. The paper presents a method for improving the performance of Bayesian classification using a combination of the Kalman filter and K-means. The method is applied to a small dataset simply to establish that the proposed algorithm can reduce the time needed to compute clusters from the data. The proposed Bayesian learning probabilistic model is used to check for statistical noise and other inaccuracies using unknown variables. This scenario is implemented with an efficient machine learning algorithm to support the Bayesian probabilistic approach. The paper also demonstrates the generative function of the Kalman-filter-based prediction model and its observations. The algorithm is implemented on the open-source Python platform, and the different modules are efficiently integrated into a single piece of code via Common Platform Enumeration (CPE) for Python.
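One plausible reading of the K-means-plus-Bayes combination is sketched below: cluster assignments are appended as an extra feature before Gaussian Naive Bayes classification. The Kalman-filter smoothing stage described above is omitted, and the dataset and cluster count are assumptions.

```python
# A minimal sketch: K-means cluster labels used as an extra feature for
# Gaussian Naive Bayes classification; not the paper's full pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)
X_train_aug = np.column_stack([X_train, kmeans.predict(X_train)])
X_test_aug = np.column_stack([X_test, kmeans.predict(X_test)])

clf = GaussianNB().fit(X_train_aug, y_train)
print("test accuracy:", clf.score(X_test_aug, y_test))
```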
Abstract: In the Internet of Things (IoT), large amounts of data are processed and communicated through different network technologies. Wireless Body Area Networks (WBAN) play a pivotal role in the health care domain through the integration of IoT and Artificial Intelligence (AI). The amalgamation of these tools has reached a new peak in diagnosis and treatment, especially during the pandemic period. However, real challenges such as low latency, energy consumption, and high throughput remain under-explored. This paper proposes a novel optimized cognitive-learning-based BAN model built on Fog-IoT technology as a real-time health monitoring system with increased network lifetime. Energy- and latency-aware features of the BAN are extracted and used to train the proposed fog-based learning algorithm to achieve a low-energy, low-latency scheduling algorithm. To test the proposed network, a Fog-IoT-BAN test bed was developed with battery-driven MICOTT boards interfaced with health care sensors using MicroPython programming. Extensive experimentation was carried out on this test bed, and parameters such as accuracy, precision, recall, F1-score, and specificity were calculated, along with quality-of-service (QoS) parameters such as latency, energy, and throughput. To demonstrate the superiority of the proposed framework, its performance was compared with other state-of-the-art classical learning frameworks and existing Fog-BAN networks such as WORN, DARE, and L-No-DEAF. The results show that the proposed framework outperforms the other classical learning models in terms of accuracy, False Alarm Rate (FAR), energy efficiency, and latency.
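The classification-side evaluation mentioned above can be computed from a confusion matrix as sketched below; the labels are placeholders, and the QoS measurements (latency, energy, throughput) come from the hardware test bed rather than from code.

```python
# A minimal sketch: accuracy, precision, recall, F1, specificity and false alarm
# rate computed from a binary confusion matrix on placeholder labels.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy        :", accuracy_score(y_true, y_pred))
print("precision       :", precision_score(y_true, y_pred))
print("recall          :", recall_score(y_true, y_pred))
print("F1 score        :", f1_score(y_true, y_pred))
print("specificity     :", tn / (tn + fp))
print("false alarm rate:", fp / (fp + tn))
```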
Funding: The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4400271DSR01).
Abstract: Classifying the visual features in images to retrieve a specific image is a significant problem in computer vision, especially when dealing with historical faded colour images. Many efforts have therefore been made to automate the classification operation and retrieve similar images accurately. To reach this goal, we developed a VGG19 deep convolutional neural network to extract the visual features from the images automatically. The distances among the extracted feature vectors are then measured, and a similarity score is generated using a Siamese deep neural network. The Siamese model was initially built and trained from scratch but did not produce high evaluation metrics, so it was rebuilt on top of the VGG19 pre-trained deep learning model, which yielded better results. Afterward, three different distance metrics combined with the sigmoid activation function were tested to find the most accurate way of measuring the similarities among the retrieved images; the highest evaluation scores were obtained with the cosine distance metric. Moreover, the code was run on a Graphics Processing Unit (GPU) instead of a Central Processing Unit (CPU), which further optimized execution by speeding up both training and retrieval. After extensive experimentation, we reached a satisfactory solution, recording F-scores of 0.98 for classification and 0.99 for retrieval.
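A minimal sketch of the retrieval idea, assuming VGG19 pooled features compared by plain cosine similarity; the trained Siamese scoring network from the paper is not reproduced here, and the placeholder arrays stand in for the historical archive and a query image.

```python
# A minimal sketch: VGG19 (without its classification head) embeds images, and
# cosine similarity between embeddings ranks the archive against a query.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input

extractor = VGG19(weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images.copy()), verbose=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Placeholder images standing in for the archive and a query.
archive = np.random.rand(5, 224, 224, 3).astype("float32") * 255
query = np.random.rand(1, 224, 224, 3).astype("float32") * 255

archive_vecs = embed(archive)
query_vec = embed(query)[0]
scores = [cosine_similarity(query_vec, v) for v in archive_vecs]
ranking = np.argsort(scores)[::-1]  # most similar archive images first
```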