Predicting landslide displacement is of great practical importance, as landslides pose serious threats to both human life and property. However, traditional methods select the sliding window largely at random, seldom incorporate weather forecast data for displacement prediction, and a single-structure model cannot handle input sequences of different lengths at the same time. To overcome these limitations, this study proposes a new approach that utilizes weather forecast data and combines the maximum information coefficient (MIC), long short-term memory network (LSTM), and an attention mechanism to establish a parallel teacher-student coupling model for short-term landslide displacement prediction. MIC is used to select a suitable input sequence length for the LSTM model. To investigate the influence of rainfall on landslides across seasons, a parallel teacher-student coupling model is developed that can learn sequential information from time series of different lengths. The teacher model learns sequence information from the rainfall intensity time series while incorporating reliable short-term weather forecast data from platforms such as the China Meteorological Administration (CMA) and Reliable Prognosis (https://rp5.ru) to improve the model's expressive capability, and the student model learns sequence information from the other time series. An attention module then integrates the different sequence information into a context vector representing a seasonal temporal attention mode. Finally, the predicted displacement is obtained through a linear layer. The proposed method achieves higher prediction accuracy than the support vector machine (SVM), LSTM, recurrent neural network (RNN), temporal convolutional network (TCN), and LSTM-Attention models, with a mean absolute error (MAE) of 0.072 mm, a root mean square error (RMSE) of 0.096 mm, and a Pearson correlation coefficient (PCCS) of 0.85. It also exhibits better prediction stability and interpretability, making it a valuable tool for landslide disaster prevention and mitigation.
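The abstract describes the attention module only at a high level. As an illustrative sketch (not the paper's implementation), the following pure-Python snippet shows dot-product attention deriving a context vector from two hidden states; the names teacher_h and student_h are hypothetical stand-ins for the two branches' outputs.

```python
import math

def softmax(scores):
    """Numerically stable softmax over raw attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_context(query, keys, values):
    """Dot-product attention: weight each value by how well its key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    context = [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
    return context, weights

# Toy example: fuse two hidden states (hypothetical teacher and student outputs).
teacher_h = [1.0, 0.0]
student_h = [0.0, 1.0]
context, weights = attention_context([1.0, 0.0], [teacher_h, student_h],
                                     [teacher_h, student_h])
```

The context vector is a convex combination of the hidden states, so information from both branches survives into the final linear layer.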
Natural events have a significant impact on overall flight activity, and the aviation industry plays a vital role in helping society cope with such events. When a typhoon season, one of the most impactful weather phenomena, arrives and persists, airlines operating in threatened areas and passengers with travel plans during that period pay close attention to the development of tropical storms. This paper proposes a deep multimodal fusion and multitask trajectory prediction model that improves the reliability of typhoon trajectory prediction and reduces the number of flight cancellations. The deep multimodal fusion module is formed by deeply fusing the features output by multiple submodal fusion modules, and the multitask generation module predicts longitude and latitude simultaneously as two related tasks. With more dependable data accuracy, problems can be analysed rapidly and efficiently, enabling proactive rather than reactive decision-making. When multiple modalities coexist, features can be extracted from them simultaneously so that they supplement each other's information. An actual case study of Typhoon Lekima, which swept China in 2019, demonstrates that the algorithm can effectively reduce the number of unnecessary flight cancellations compared with existing flight scheduling and can assist the new generation of flight scheduling systems under extreme weather.
Among steganalysis techniques, detection of MV (motion vector) domain-based video steganography in the HEVC (High Efficiency Video Coding) standard remains a challenging issue. To improve detection performance, this paper proposes a steganalysis method that can reliably detect MV-based steganography in HEVC. Firstly, we define the local optimality of MVP (Motion Vector Prediction) based on the technology of AMVP (Advanced Motion Vector Prediction). Secondly, we show that in HEVC video, message embedding using either the MVP index or the MVD (Motion Vector Difference) may destroy this optimality of the MVP. We then define the optimal rate of the MVP as a steganalysis feature. Finally, we conduct steganalysis detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results demonstrate the effectiveness of the proposed feature set. Furthermore, our method stands out for its practical applicability: it requires no model training and has low computational complexity, making it a viable solution for real-world scenarios.
Advance knowledge of the carbon emission factors of a power grid can provide users with effective carbon reduction advice, which is of immense importance in mobilizing the entire society to reduce carbon emissions. The method of calculating node carbon emission factors based on carbon emission flow theory requires real-time parameters of the power grid and therefore cannot provide carbon factor information in advance. To address this issue, a prediction model based on the graph attention network is proposed. The model uses a graph structure suited to the topology of the power grid and designs a supervised network using the loads of the grid nodes and the corresponding carbon factor data. The network extracts features and transmits information in a way better suited to the power system and can flexibly adjust the equivalent topology, thereby increasing the diversity of the structure. Its input and output data are simple and do not require the power grid parameters. We demonstrate its effectiveness on the IEEE 39-bus and IEEE 118-bus systems, with average error rates of 2.46% and 2.51%, respectively.
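As an illustrative sketch of the aggregation step behind a graph attention network (not the paper's model), the snippet below updates one node's features by attending over the node and its neighbors; the toy 3-bus features and dot-product scoring function are assumptions for the example.

```python
import math

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def gat_node_update(node, neighbors, features, score):
    """One graph-attention aggregation: attend over a node's neighbors (plus itself)
    and return the attention-weighted average of their feature vectors."""
    ids = [node] + neighbors
    weights = softmax([score(features[node], features[j]) for j in ids])
    dim = len(features[node])
    return [sum(w * features[j][d] for w, j in zip(weights, ids)) for d in range(dim)]

# Hypothetical 3-bus grid: each node carries a normalized load feature vector.
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [0.5, 0.5]}
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
updated = gat_node_update(0, [1, 2], feats, dot)
```

A learned GAT additionally applies linear projections and a trainable scoring function, but the weighted-neighbor aggregation above is the core operation.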
This paper addresses the micro wind-hydrogen coupled system, aiming to improve the power tracking capability of micro wind farms and the regulation capability of hydrogen storage systems, and to mitigate the volatility of wind power generation. A predictive control strategy for the micro wind-hydrogen coupled system is proposed based on ultra-short-term wind power prediction, the hydrogen storage state division interval, and the daily scheduled output of wind power generation. The control strategy takes as its objective functions maximizing the power tracking capability and the regulation capability of the hydrogen storage system and minimizing the fluctuation of the joint output of the wind-hydrogen coupled system, and it adaptively optimizes the control coefficients of the hydrogen storage interval and the output parameters of the system using a combined sigmoid function and particle swarm algorithm (sigmoid-PSO). Compared with a real-time control strategy, the proposed predictive control strategy significantly improves the output tracking capability of the wind-hydrogen coupled system, minimizes the gap between the actual and predicted output, significantly enhances the regulation capability of the hydrogen storage system, and mitigates the power output fluctuation of the integrated system, giving it broad prospects for practical application.
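The sigmoid-PSO optimizer is described only by name. As a hedged sketch of the particle swarm component, here is a minimal PSO loop minimizing a toy sphere function; the inertia and acceleration coefficients are conventional defaults, not values from the paper, and the sigmoid weighting is omitted.

```python
import random

def pso(objective, dim, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimization: each particle is pulled toward its
    personal best and the global best found so far."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (typical defaults)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for the control-parameter cost function.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

In the paper's setting, the objective would score candidate hydrogen-storage control coefficients rather than this toy sphere function.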
This work constructed a machine learning (ML) model to predict the atmospheric corrosion rate of low-alloy steels (LAS). The material properties of LAS, environmental factors, and exposure time were used as the inputs, with the corrosion rate as the output. Six different ML algorithms were used to construct the proposed model. Through optimization and filtering, the eXtreme gradient boosting (XGBoost) model exhibited good corrosion rate prediction accuracy. The material property features were then transformed into atomic and physical features using the proposed property transformation approach, and the dominant descriptors affecting the corrosion rate were filtered using recursive feature elimination (RFE) together with XGBoost. The established ML models exhibited better prediction performance and generalization ability with the property transformation descriptors. In addition, the SHapley Additive exPlanations (SHAP) method was applied to analyze the relationship between the descriptors and the corrosion rate. The results showed that the property transformation model could effectively help analyze corrosion behavior, thereby significantly improving the generalization ability of corrosion rate prediction models.
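Recursive feature elimination as referenced here can be sketched generically: repeatedly drop the least important descriptor until the desired number remains. The descriptor names and fixed importance scores below are hypothetical stand-ins for a fitted XGBoost model's importances, not values from the study.

```python
def recursive_feature_elimination(features, rank_fn, n_keep):
    """Generic RFE: rank the current features, drop the weakest, repeat."""
    kept = list(features)
    while len(kept) > n_keep:
        importances = rank_fn(kept)
        worst = min(range(len(kept)), key=lambda i: importances[i])
        kept.pop(worst)
    return kept

# Hypothetical importances standing in for a refitted model's feature scores.
scores = {"Cr": 0.9, "exposure_time": 0.8, "Cu": 0.4, "humidity": 0.7, "Ni": 0.2}
rank = lambda names: [scores[n] for n in names]
selected = recursive_feature_elimination(list(scores), rank, n_keep=3)
```

In a real pipeline, rank_fn would refit the model on the surviving descriptors at each step so importances are re-estimated, which is what makes the elimination "recursive".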
Floods are among the most serious natural disasters and can cause huge societal and economic losses. Extensive research has been conducted on topics such as flood monitoring, prediction, and loss estimation. In these research fields, flood velocity plays a crucial role and is an important factor influencing the reliability of the outcomes. Traditional methods rely on physical models for flood simulation and prediction; they can generate accurate results but often take a long time. Deep learning technology has recently shown significant potential in the same field, especially in terms of efficiency, helping to overcome the long computation times associated with traditional methods. This study explores the potential of deep learning models for predicting flood velocity. Specifically, we use a Multi-Layer Perceptron (MLP) model, a particular type of Artificial Neural Network (ANN), to predict the velocity in the test area of the Lundesokna River in Norway, which has diverse terrain conditions. Geographic data and flood velocities simulated with a physical hydraulic model are used for the pre-training, optimization, and testing of the MLP model. Our experiment indicates that the MLP model can predict flood velocity across the river's diverse terrain conditions with acceptable accuracy relative to the simulated velocities, while requiring far less training and testing time. We also discuss the limitations and directions for improvement in future work.
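A minimal illustration of the forward pass such a velocity regressor computes (one hidden ReLU layer, linear output). The terrain feature names and weight values are invented for the example; the study's trained network is larger and its weights are learned.

```python
def relu(x):
    """Rectified linear unit activation."""
    return max(0.0, x)

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer MLP for regression: hidden ReLU layer, linear output."""
    hidden = [relu(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, hidden)) + b2

# Hypothetical normalized terrain features: [slope, elevation, roughness].
x = [0.5, 0.2, 0.1]
w1 = [[1.0, -0.5, 0.2], [0.3, 0.8, -0.1]]  # invented hidden-layer weights
b1 = [0.0, 0.1]
w2 = [0.6, 0.4]                            # invented output-layer weights
b2 = 0.05
velocity = mlp_forward(x, w1, b1, w2, b2)
```

Training adjusts w1, b1, w2, b2 by gradient descent against the hydraulic-model velocities; only the inference step is sketched here.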
Background: Choosing the appropriate antipsychotic drug (APD) treatment for patients with schizophrenia (SCZ) can be challenging, as the treatment response to APDs is highly variable and difficult to predict owing to the lack of effective biomarkers. Previous studies have indicated associations between treatment response and genetic and epigenetic factors, but no effective biomarkers have been identified. Hence, further research is imperative to enhance precision medicine in SCZ treatment. Methods: Participants with SCZ were recruited from two randomized trials. The discovery cohort was recruited from the CAPOC trial (n = 2307), which involved 6 weeks of treatment and equally randomized the participants to the Olanzapine, Risperidone, Quetiapine, Aripiprazole, Ziprasidone, and Haloperidol/Perphenazine (subsequently equally assigned to one or the other) groups. The external validation cohort was recruited from the CAPEC trial (n = 1379), which involved 8 weeks of treatment and equally randomized the participants to the Olanzapine, Risperidone, and Aripiprazole groups. Additionally, healthy controls (n = 275) from the local community were used as a genetic/epigenetic reference. The genetic and epigenetic (DNA methylation) risks of SCZ were assessed using the polygenic risk score (PRS) and a polymethylation score, respectively. The study also examined genetic-epigenetic interactions with treatment response through differential methylation analysis, methylation quantitative trait loci, colocalization, and promoter-anchored chromatin interaction. Machine learning was used to develop a prediction model for treatment response, which was evaluated for accuracy and clinical benefit using the area under the curve (AUC) for classification, R² for regression, and decision curve analysis. Results: Six risk genes for SCZ (LINC01795, DDHD2, SBNO1, KCNG2, SEMA7A, and RUFY1) involved in cortical morphology were identified as having a genetic-epigenetic interaction associated with treatment response. The developed and externally validated prediction model, which incorporated clinical information, PRS, genetic risk score (GRS), and proxy methylation level (proxyDNAm), demonstrated positive benefits for a wide range of patients receiving different APDs, regardless of sex [discovery cohort: AUC = 0.874 (95% CI 0.867-0.881), R² = 0.478; external validation cohort: AUC = 0.851 (95% CI 0.841-0.861), R² = 0.507]. Conclusions: This study presents a promising precision medicine approach to evaluating treatment response, which has the potential to aid clinicians in making informed decisions about APD treatment for patients with SCZ. Trial registration: Chinese Clinical Trial Registry (https://www.chictr.org.cn/), retrospectively registered 18 Aug 2009: CAPOC-ChiCTR-RNC-09000521 (https://www.chictr.org.cn/showproj.aspx?proj=9014), CAPEC-ChiCTR-RNC-09000522 (https://www.chictr.org.cn/showproj.aspx?proj=9013).
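A polygenic risk score of the kind used above is simply an effect-size-weighted sum of risk-allele dosages. The sketch below uses hypothetical effect sizes and dosages, not values from this study.

```python
def polygenic_risk_score(effect_sizes, dosages):
    """PRS: sum of risk-allele dosages (0, 1, or 2 copies per variant)
    weighted by GWAS effect sizes (log odds ratios)."""
    assert len(effect_sizes) == len(dosages)
    return sum(beta * d for beta, d in zip(effect_sizes, dosages))

# Hypothetical effect sizes for three variants and one individual's dosages.
betas = [0.12, -0.05, 0.30]
dosages = [2, 1, 0]
prs = polygenic_risk_score(betas, dosages)
```

In practice the sum runs over thousands to millions of variants selected at a p-value threshold, and scores are standardized against a reference population before use in a model.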
Background: There exist few non-exercise-based prediction equations for maximal oxygen uptake (VO2max), fewer using machine learning (ML), and none specifically for older adults. Since direct measurement of VO2max is infeasible in large epidemiologic cohort studies, we sought to develop, validate, compare, and assess the transportability of several ML VO2max prediction algorithms. Methods: Baltimore Longitudinal Study of Aging (BLSA) participants with valid VO2max tests were included (n = 1080). Least absolute shrinkage and selection operator (LASSO), linear- and tree-boosted extreme gradient boosting, random forest, and support vector machine (SVM) algorithms were trained to predict VO2max values. We developed these algorithms for: (a) the overall BLSA, (b) each sex, (c) all BLSA variables, and (d) variables common in aging cohorts. Finally, we quantified the associations between measured and predicted VO2max and mortality. Results: Mean age was 69.0 ± 10.4 years and measured VO2max was 21.6 ± 5.9 mL/kg/min (mean ± SD). LASSO, linear- and tree-boosted extreme gradient boosting, random forest, and SVM yielded root mean squared errors of 3.4 mL/kg/min, 3.6 mL/kg/min, 3.4 mL/kg/min, 3.6 mL/kg/min, and 3.5 mL/kg/min, respectively. Incremental quartiles of measured VO2max showed an inverse gradient in mortality risk. Predicted VO2max variables yielded similar effect estimates but were not robust to adjustment. Conclusion: Measured VO2max is a strong predictor of mortality. ML can improve prediction accuracy compared with simpler approaches, but estimates of the association with mortality remain sensitive to adjustment. Future studies should seek to reproduce these results so that VO2max, an important vital sign, can be more broadly studied as a modifiable target for promoting functional resiliency and healthy aging.
We read with interest the recent systematic review "Artificial intelligence and machine learning for hemorrhagic trauma care" by Peng et al. [1], which evaluated the literature on machine learning (ML) in the management of traumatic haemorrhage. We thank the authors for their contribution on the role of ML in trauma.
The scientific community recognizes the seriousness of rockbursts and the need for effective mitigation measures. The literature reports various successful applications of machine learning (ML) models for rockburst assessment; however, a significant question remains unanswered: how reliable are these models, and at what confidence level are classifications made? Typically, ML models output a single rockburst grade, even for intricate and out-of-distribution samples, without any associated confidence value. Given the susceptibility of ML models to errors, it becomes imperative to quantify their uncertainty to prevent consequential failures. To address this issue, we propose a conformal prediction (CP) framework built on traditional ML models (extreme gradient boosting and random forest) to generate valid classifications of rockburst while producing a measure of confidence for its output. The proposed framework guarantees marginal coverage and, in most cases, conditional coverage on the test dataset. The CP framework was evaluated on a rockburst case from the Sanshandao Gold Mine in China, where it achieved high coverage and efficiency at applicable confidence levels. Significantly, the CP framework flagged several "confident" classifications from the traditional ML model as unreliable, necessitating expert verification for informed decision-making. The proposed framework improves the reliability and accuracy of rockburst assessments, with the potential to bolster user confidence.
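The split conformal prediction step can be sketched as follows: calibrate a nonconformity quantile on held-out data, then return every rockburst grade whose nonconformity falls within it. The calibration scores and class probabilities below are invented for illustration and this is a generic sketch, not the paper's exact procedure.

```python
import math

def conformal_prediction_set(cal_scores, test_probs, alpha=0.1):
    """Split conformal classification: return every label whose nonconformity
    (1 - predicted probability) is within the calibrated (1 - alpha) quantile."""
    n = len(cal_scores)
    scores = sorted(cal_scores)
    # Conservative finite-sample quantile rank: ceil((n + 1) * (1 - alpha)).
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    qhat = scores[k - 1]
    return [label for label, p in enumerate(test_probs) if 1 - p <= qhat]

# Invented calibration nonconformity scores and probabilities for three grades.
cal = [0.1, 0.2, 0.15, 0.3, 0.05, 0.25, 0.4, 0.12, 0.18, 0.22]
pred_set = conformal_prediction_set(cal, [0.8, 0.15, 0.05], alpha=0.2)
```

A singleton set is a confident classification; a multi-label or empty set is exactly the "unreliable, refer to an expert" signal the framework exploits.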
Accurate prediction of tropical cyclone (TC) intensity is challenging due to the complex physical processes involved. Here, we introduce a new TC intensity prediction scheme for the western North Pacific (WNP) based on a time-dependent theory of TC intensification, termed the energetically based dynamical system (EBDS) model, together with a long short-term memory (LSTM) neural network. In the time-dependent theory, TC intensity change is controlled by both the internal dynamics of the TC system and various environmental factors, expressed as an environmental dynamical efficiency. The LSTM neural network is used to predict the environmental dynamical efficiency in the EBDS model, trained using best-track TC data and global reanalysis data during 1982-2017. Transfer learning and ensemble methods are used to retrain the scheme using the environmental factors predicted by the Global Forecast System (GFS) of the National Centers for Environmental Prediction during 2017-2021. The predicted environmental dynamical efficiency is finally iterated into the EBDS equations to predict TC intensity. The new scheme is evaluated for TC intensity prediction using both reanalysis data and the GFS prediction data. The intensity prediction by the new scheme shows better skill than the official prediction from the China Meteorological Administration (CMA) and those of other state-of-the-art statistical and dynamical forecast systems, except for the 72-h forecast. In particular, at the longer lead times of 96 h and 120 h, the new scheme has smaller forecast errors, with a more than 30% improvement over the official forecasts.
The rapidly changing Antarctic sea ice has garnered significant interest. To enhance the prediction skill for sea ice and respond to the Sea Ice Prediction Network-South's latest call, this study presents reforecast results for Antarctic sea-ice area and extent from December to June of the coming year using a Convolutional Long Short-Term Memory (ConvLSTM) network. The reforecast experiments demonstrate that ConvLSTM successfully captures the interannual and interseasonal variability of Antarctic sea ice and performs better than the European Centre for Medium-Range Weather Forecasts. On this basis, we present the prediction from December 2023 to June 2024, indicating that the Antarctic sea ice will remain at low levels but may not set a new record low. This research highlights the promising application of deep learning to Antarctic sea-ice prediction.
Background: According to clinical practice guidelines, transarterial chemoembolization (TACE) is the standard treatment modality for patients with intermediate-stage hepatocellular carcinoma (HCC). Early prediction of treatment response can help patients choose a reasonable treatment plan. This study aimed to investigate the value of a radiomic-clinical model in predicting the efficacy of the first TACE treatment for HCC in order to prolong patient survival. Methods: A total of 164 patients with HCC who underwent their first TACE from January 2017 to September 2021 were analyzed. Tumor response was assessed by the modified Response Evaluation Criteria in Solid Tumors (mRECIST), and the response of the first TACE in each session and its correlation with overall survival were evaluated. Radiomic signatures associated with treatment response were identified by the least absolute shrinkage and selection operator (LASSO), four machine learning models were built with different types of regions of interest (ROIs) (tumor and corresponding tissues), and the model with the best performance was selected. Predictive performance was assessed with receiver operating characteristic (ROC) curves and calibration curves. Results: Of all the models, the random forest (RF) model with peritumoral (+10 mm) radiomic signatures had the best performance [area under the ROC curve (AUC) = 0.964 in the training cohort, AUC = 0.949 in the validation cohort]. The RF model was used to calculate a radiomic score (Rad-score), and the optimal cutoff value (0.34) was determined according to Youden's index. Patients were then divided into a high-risk group (Rad-score > 0.34) and a low-risk group (Rad-score ≤ 0.34), and a nomogram model was successfully established to predict treatment response. The predicted treatment response also allowed for significant discrimination of the Kaplan-Meier curves. Multivariate Cox regression identified six independent prognostic factors for overall survival: male sex [hazard ratio (HR) = 0.500, 95% confidence interval (CI): 0.260-0.962, P = 0.038], alpha-fetoprotein (HR = 1.003, 95% CI: 1.002-1.004, P < 0.001), alanine aminotransferase (HR = 1.003, 95% CI: 1.001-1.005, P = 0.025), performance status (HR = 2.400, 95% CI: 1.200-4.800, P = 0.013), the number of TACE sessions (HR = 0.870, 95% CI: 0.780-0.970, P = 0.012), and Rad-score (HR = 3.480, 95% CI: 1.416-8.552, P = 0.007). Conclusions: Radiomic signatures and clinical factors can be used to predict the response of HCC patients to their first TACE and may help identify the patients most likely to benefit from TACE.
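Youden's-index cutoff selection, used above to dichotomize the Rad-score, can be sketched directly: scan candidate thresholds and keep the one maximizing sensitivity + specificity - 1. The scores below are invented for illustration, not the study's data.

```python
def youden_optimal_cutoff(scores_pos, scores_neg, thresholds):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in thresholds:
        sens = sum(s > t for s in scores_pos) / len(scores_pos)   # true positive rate
        spec = sum(s <= t for s in scores_neg) / len(scores_neg)  # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical Rad-scores for responders (positive) and non-responders (negative).
pos = [0.6, 0.8, 0.45, 0.9, 0.55]
neg = [0.1, 0.3, 0.2, 0.4, 0.5]
cutoff, j = youden_optimal_cutoff(pos, neg, [i / 20 for i in range(21)])
```

Each threshold corresponds to one point on the ROC curve, so this scan is equivalent to picking the ROC point farthest above the diagonal.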
In the coal-to-ethylene glycol (CTEG) process, precisely estimating quality variables is crucial for process monitoring, optimization, and control. A significant challenge is the reliance on offline laboratory analysis to obtain these variables, which often incurs substantial monetary cost and significant time delays. The resulting few-shot learning scenarios hinder the efficient development of predictive models. To address this issue, our study introduces the transferable adversarial slow feature extraction network (TASF-Net), an approach designed specifically for few-shot quality prediction in the CTEG process. TASF-Net uniquely integrates the slowness principle with a deep Bayesian framework, effectively capturing the nonlinear and inertial characteristics of the CTEG process. Additionally, the model employs a variable attention mechanism to adaptively identify quality-related input variables at each time step. A key strength of TASF-Net lies in its ability to handle the complex measurement noise, outliers, and system interference typical of CTEG data. An adversarial learning strategy based on a min-max game is adopted to significantly improve its robustness and its ability to model irregular industrial data accurately. Furthermore, an incremental refining transfer learning framework is designed to further improve few-shot prediction performance by transferring knowledge from a model pretrained on the source domain to the target domain. The effectiveness and superiority of TASF-Net have been empirically validated on a real-world CTEG dataset. Compared with several state-of-the-art methods, TASF-Net demonstrates exceptional capability in addressing the intricate challenges of few-shot quality prediction in the CTEG process.
Water-based aerosol is widely used as an effective strategy for electro-optical countermeasures on the battlefield owing to its high efficiency, low cost, and eco-friendliness. Unfortunately, the stability of water-based aerosols is often unsatisfactory due to the rapid evaporation and sedimentation of the aerosol droplets. Great efforts have been devoted to improving the stability of water-based aerosols by using additives of different compositions and proportions. However, the lack of criteria and principles for screening effective additives results in excessive experimental time and cost, and the stabilization time of the aerosol is still only about 30 min, which cannot meet the requirements of long-lasting interference. Herein, to improve the stability of water-based aerosol and optimize the complex formulation efficiently, a theoretical calculation method based on thermodynamic entropy theory is proposed. All the factors that influence the shielding effect, including polyol, stabilizer, propellant, water, and cosolvent, are considered in the calculation. An ultra-stable water-based aerosol with a long duration of over 120 min is obtained with the optimal fogging agent composition, providing enough time to counter electro-optic weapons. A theoretical design guideline is also proposed for choosing additives with high phase transition temperatures and low phase transition enthalpies, which greatly increases the total entropy change and reduces the absolute entropy change of the aerosol cooling process, giving rise to the enhanced stability of the water-based aerosol. The theoretical calculation methodology saves time and resources in screening water-based aerosols with the desired performance and stability, and provides a powerful guarantee for homeland security.
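The screening guideline can be illustrated with the equilibrium relation ΔS = ΔH/T: an additive with a high transition temperature and a low transition enthalpy has the smallest entropy-change magnitude during cooling. The candidate names and values below are hypothetical, not formulations from the paper.

```python
def phase_transition_entropy(delta_h_j_per_mol, t_kelvin):
    """Entropy change of a phase transition at equilibrium: dS = dH / T (J/(mol*K))."""
    return delta_h_j_per_mol / t_kelvin

# Hypothetical candidates: (name, transition enthalpy in J/mol, transition temperature in K).
candidates = [
    ("additive_A", 6000.0, 250.0),
    ("additive_B", 4000.0, 320.0),
    ("additive_C", 9000.0, 280.0),
]
# Favor the smallest |dS|, i.e., high temperature and low enthalpy, per the guideline.
ranked = sorted(candidates, key=lambda c: abs(phase_transition_entropy(c[1], c[2])))
best = ranked[0][0]
```

Ranking candidates this way replaces trial-and-error formulation with a quick pre-screen before any chamber experiments.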
A recently published modeling approach for penetration into adobe, and the previous approaches it implicitly criticizes, are reviewed and discussed. This article is a note on the paper titled "Ballistic model for the prediction of penetration depth and residual velocity in adobe: A new interpretation of the ballistic resistance of earthen masonry" (DOI: https://doi.org/10.1016/j.dt.2018.07.017). The reply to this note from Li Piani et al. is linked to this article.
Highway safety researchers focus on crash injury severity, utilizing deep learning, specifically deep neural networks (DNN), deep convolutional neural networks (D-CNN), and deep recurrent neural networks (D-RNN), as the preferred method for modeling accident severity. Deep learning's strength lies in handling intricate relationships within extensive datasets, making it popular for accident severity level (ASL) prediction and classification. Despite prior success, there is a need for an efficient system that recognizes ASL under diverse road conditions. To address this, we present an innovative Accident Severity Level Prediction Deep Learning (ASLP-DL) framework incorporating DNN, D-CNN, and D-RNN models fine-tuned through iterative hyperparameter selection with stochastic gradient descent. The framework optimizes the hidden layers and integrates data augmentation, Gaussian noise, and dropout regularization for improved generalization. Sensitivity and factor contribution analyses identify influential predictors. Evaluated on three diverse crash record databases, NCDB 2018-2019, UK 2015-2020, and US 2016-2021, the D-RNN model excels with an accuracy of 89.0281%, an ROC area of 0.751, an F-estimate of 0.941, and a Kappa score of 0.0629 on the NCDB dataset. The proposed framework consistently outperforms traditional methods and existing machine learning and deep learning techniques.
Cardiovascular computed tomography angiography (CTA) is a widely used imaging modality in the diagnosis of cardiovascular disease. Advancements in CT imaging technology have further extended its applications, from high diagnostic value to minimising radiation exposure to patients. In addition to the standard application of assessing vascular lumen changes, CTA-derived applications, including 3D-printed personalised models, 3D visualisations such as virtual endoscopy, virtual reality, augmented reality, and mixed reality, as well as CT-derived hemodynamic flow analysis and fractional flow reserve (FFRCT), greatly enhance the diagnostic performance of CTA in cardiovascular disease. The widespread application of artificial intelligence in medicine also contributes significantly to the clinical value of CTA in cardiovascular disease. The clinical value of CTA has extended from initial diagnosis to the identification of vulnerable lesions and prediction of disease extent, hence improving patient care and management. In this review article, as an active researcher in cardiovascular imaging for more than 20 years, I provide an overview of cardiovascular CTA in cardiovascular disease. It is expected that this review will provide readers with an update on CTA applications, from initial lumen assessment to recent developments utilising the latest imaging and visualisation technologies, and will serve as a useful resource for researchers and clinicians in using cardiovascular CT judiciously in clinical practice.
Due to the complexity and variability of carbonate formation leakage zones, lost circulation prediction and control is one of the major challenges of carbonate drilling. It raises well-control risks and production exp...Due to the complexity and variability of carbonate formation leakage zones, lost circulation prediction and control is one of the major challenges of carbonate drilling. It raises well-control risks and production expenses. This research utilizes the H oilfield as an example, employs seismic features to analyze mud loss prediction, and produces a complete set of pre-drilling mud loss prediction solutions. Firstly, 16seismic attributes are calculated based on the post-stack seismic data, and the mud loss rate per unit footage is specified. The sample set is constructed by extracting each attribute from the seismic trace surrounding 15 typical wells, with a ratio of 8:2 between the training set and the test set. With the calibration results for mud loss rate per unit footage, the nonlinear mapping relationship between seismic attributes and mud loss rate per unit size is established using the mixed density network model.Then, the influence of the number of sub-Gausses and the uncertainty coefficient on the model's prediction is evaluated. Finally, the model is used in conjunction with downhole drilling conditions to assess the risk of mud loss in various layers and along the wellbore trajectory. The study demonstrates that the mean relative errors of the model for training data and test data are 6.9% and 7.5%, respectively, and that R2is 90% and 88%, respectively, for training data and test data. The accuracy and efficacy of mud loss prediction may be greatly enhanced by combining 16 seismic attributes with the mud loss rate per unit footage and applying machine learning methods. 
The mud loss prediction model based on the MDN model can not only predict the mud loss rate but also objectively evaluate the prediction based on the quality of the data and the model.展开更多
Funding: This research work is supported by the Sichuan Science and Technology Program (Grant No. 2022YFS0586), the National Key R&D Program of China (Grant No. 2019YFC1509301), and the National Natural Science Foundation of China (Grant No. 61976046).
Abstract: Predicting landslide displacement is of utmost practical importance, as landslides can pose serious threats to both human life and property. However, traditional methods select the sliding window somewhat arbitrarily and seldom incorporate weather forecast data into displacement prediction, while a single-structure model cannot handle input sequences of different lengths at the same time. To overcome these limitations, this study proposes a new approach that utilizes weather forecast data and combines the maximum information coefficient (MIC), a long short-term memory network (LSTM), and an attention mechanism to establish a teacher-student coupling model with a parallel structure for short-term landslide displacement prediction. MIC is used to select a suitable input sequence length for the LSTM model. To investigate the influence of rainfall on landslides during different seasons, a parallel teacher-student coupling model is developed that can learn sequential information from time series of different lengths. The teacher model learns sequence information from the rainfall intensity time series while incorporating reliable short-term weather forecast data from platforms such as the China Meteorological Administration (CMA) and Reliable Prognosis (https://rp5.ru) to improve the model's expressive capability, and the student model learns sequence information from the other time series. An attention module is then designed to integrate the different sequence information into a context vector representing a seasonal temporal attention mode. Finally, the predicted displacement is obtained through a linear layer. The proposed method demonstrates superior prediction accuracy, surpassing the support vector machine (SVM), LSTM, recurrent neural network (RNN), temporal convolutional network (TCN), and LSTM-Attention models: it achieves a mean absolute error (MAE) of 0.072 mm, a root mean square error (RMSE) of 0.096 mm, and a Pearson correlation coefficient (PCC) of 0.85. It also exhibits enhanced prediction stability and interpretability, making it a valuable tool for landslide disaster prevention and mitigation.
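The core attention step described in this abstract, scoring each time step's hidden state and collapsing the sequence into a single context vector, can be sketched generically. The snippet below is a minimal dot-product attention illustration in NumPy, not the paper's exact module; the hidden states and query are random stand-ins for LSTM outputs and a learned query vector.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(hidden_states, query):
    """Dot-product attention: score each hidden state against a query,
    normalise with softmax, and return the weighted sum (context vector)."""
    scores = hidden_states @ query        # one score per time step, shape (T,)
    weights = softmax(scores)             # attention weights, sum to 1
    context = weights @ hidden_states     # context vector, shape (d,)
    return context, weights

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))   # e.g. 6 time steps of 4-dim LSTM outputs
q = rng.normal(size=4)        # stand-in for a learned query
ctx, w = attention_context(H, q)
assert np.isclose(w.sum(), 1.0) and ctx.shape == (4,)
```

In the paper's setting, the teacher and student sequences of different lengths can both be reduced this way, since the context vector's size is independent of the sequence length.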
Funding: Supported by the National Natural Science Foundation of China (62073330).
Abstract: Natural events have a significant impact on overall flight activity, and the aviation industry plays a vital role in helping society cope with their effects. As one of the most impactful weather events, when a typhoon season arrives and persists, airlines operating in threatened areas and passengers with travel plans during this period pay close attention to the development of tropical storms. This paper proposes a deep multimodal-fusion, multitask trajectory prediction model that can improve the reliability of typhoon trajectory prediction and reduce the number of flight cancellations. The deep multimodal fusion module is formed by deeply fusing the features output by multiple submodal fusion modules, and the multitask generation module predicts longitude and latitude simultaneously as two related tasks. With more dependable prediction accuracy, problems can be analysed more rapidly and efficiently, enabling better, proactive rather than reactive decision-making. When multiple modalities coexist, features can be extracted from them simultaneously so that they complement each other's information. An actual case study, Typhoon Lekima, which swept across China in 2019, demonstrates that the algorithm can effectively reduce the number of unnecessary flight cancellations compared with existing flight scheduling and can assist a new generation of flight scheduling systems under extreme weather.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62272478, 62202496, 61872384).
Abstract: Among steganalysis techniques, detection of MV (motion vector) domain-based video steganography in the HEVC (High Efficiency Video Coding) standard remains a challenging issue. To improve detection performance, this paper proposes a steganalysis method that can reliably detect MV-based steganography in HEVC. First, we define the local optimality of the MVP (motion vector prediction) based on the AMVP (advanced motion vector prediction) technology. Second, we show that in HEVC video, message embedding using either the MVP index or the MVD (motion vector difference) may destroy this optimality of the MVP. We then define the optimal rate of the MVP as a steganalysis feature. Finally, we conduct detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results demonstrate the effectiveness of the proposed feature set. Furthermore, the method is practical: it requires no model training and has low computational complexity, making it a viable solution for real-world scenarios.
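The "optimal rate" feature family described above can be illustrated with a toy computation: the fraction of coded blocks whose chosen MV candidate actually minimises a matching cost. This is a simplified sketch of the idea, not the paper's exact feature; the cost values, candidate counts, and indices below are invented for illustration.

```python
import numpy as np

def mvp_optimal_rate(chosen_idx, candidate_costs):
    """Fraction of blocks whose chosen MVP index coincides with the
    candidate of minimum matching cost (e.g. SAD). Steganographic
    embedding that flips MVP indices or MVDs tends to lower this rate."""
    chosen = np.asarray(chosen_idx)
    costs = np.asarray(candidate_costs)   # shape (n_blocks, n_candidates)
    optimal = costs.argmin(axis=1)
    return float((chosen == optimal).mean())

# toy example: 4 blocks, 2 AMVP candidates each (costs are made up)
costs = np.array([[10., 12.], [8., 7.], [5., 9.], [6., 6.5]])
clean = [0, 1, 0, 0]   # every choice is locally optimal -> rate 1.0
stego = [0, 0, 1, 0]   # two flipped indices -> rate 0.5
assert mvp_optimal_rate(clean, costs) == 1.0
assert mvp_optimal_rate(stego, costs) == 0.5
```

A steganalyser can threshold or classify on such rates, which is consistent with the paper's claim of needing no trained model.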
Funding: This work is supported by the Science and Technology Projects of China Southern Power Grid (YNKJXM20222402).
Abstract: Knowing the carbon emission factors of a power grid in advance can provide users with effective carbon-reduction advice, which is of immense importance in mobilizing the entire society to reduce carbon emissions. The method of calculating node carbon emission factors based on carbon emission flow theory requires real-time parameters of the power grid and therefore cannot provide carbon factor information in advance. To address this issue, a prediction model based on a graph attention network is proposed. The model uses a graph structure suited to the topology of the power grid and designs a supervised network using the grid node loads and the corresponding carbon factor data. The network extracts features and passes information in a way well matched to the power system, and it can flexibly adjust the equivalent topology, thereby increasing the diversity of the structure. Its input and output data are simple and do not require power grid parameters. We demonstrated its effectiveness on the IEEE 39-bus and IEEE 118-bus systems, with average error rates of 2.46% and 2.51%, respectively.
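A single graph-attention layer of the kind this model builds on can be sketched directly in NumPy: each node attends only over its neighbours in the grid topology. This is a generic single-head GAT-style layer under illustrative weights, not the paper's architecture; the 3-bus adjacency and random features are stand-ins for node loads.

```python
import numpy as np

def gat_layer(X, A, W, a):
    """Single-head graph attention layer: project node features, score
    neighbour pairs with a shared attention vector, softmax over each
    node's neighbourhood, and aggregate."""
    H = X @ W                                  # projected features (N, d')
    N = H.shape[0]
    logits = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            z = a @ np.concatenate([H[i], H[j]])
            logits[i, j] = z if z > 0 else 0.2 * z   # LeakyReLU
    logits = np.where(A > 0, logits, -1e9)     # mask out non-edges
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # row-wise softmax
    return np.tanh(w @ H)                      # aggregated node embeddings

rng = np.random.default_rng(1)
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])  # toy 3-bus topology
X = rng.normal(size=(3, 2))                      # node loads as features
out = gat_layer(X, A, rng.normal(size=(2, 4)), rng.normal(size=8))
assert out.shape == (3, 4)
```

Because the attention mask comes from the adjacency matrix, changing the equivalent topology only changes `A`, which matches the flexibility the abstract describes.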
Funding: Supported by the Key Research & Development Program of Xinjiang (Grant No. 2022B01003).
Abstract: This paper addresses a micro wind-hydrogen coupled system, aiming to improve the power-tracking capability of micro wind farms and the regulation capability of hydrogen storage systems, and to mitigate the volatility of wind power generation. A predictive control strategy for the system is proposed based on ultra-short-term wind power prediction, hydrogen-storage state-division intervals, and the daily scheduled output of wind power generation. The control strategy takes the power-tracking capability, the regulation capability of the hydrogen storage system, and the smoothness of the joint output of the wind-hydrogen coupled system as objective functions, and adaptively optimizes the control coefficients of the hydrogen storage intervals and the output parameters of the system using a combined sigmoid function and particle swarm optimization algorithm (sigmoid-PSO). Compared with a real-time control strategy, the proposed predictive control strategy significantly improves the output-tracking capability of the wind-hydrogen coupled system, minimizes the gap between actual and predicted output, significantly enhances the regulation capability of the hydrogen storage system, and mitigates the power output fluctuation of the integrated system, giving it broad prospects for practical application.
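The sigmoid-PSO combination can be illustrated with a PSO whose inertia weight follows a sigmoid schedule (high early for exploration, low late for exploitation). The abstract does not specify the exact coupling, so the schedule and coefficients below are illustrative assumptions, demonstrated on a toy quadratic objective rather than the actual dispatch problem.

```python
import numpy as np

def sigmoid_pso(f, dim, n=20, iters=100, seed=0):
    """Particle swarm optimisation with a sigmoid-scheduled inertia weight.
    The schedule and constants are illustrative, not the paper's exact ones."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    for t in range(iters):
        # sigmoid decay of inertia from ~0.9 down to ~0.4
        w = 0.4 + 0.5 / (1 + np.exp(10 * (t / iters - 0.5)))
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = np.clip(w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x), -2, 2)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

# toy objective with minimum at (1, 1), standing in for the control cost
g, best = sigmoid_pso(lambda p: ((p - 1.0) ** 2).sum(), dim=2)
assert best < 0.1
```

In the paper's setting, `f` would score a candidate set of hydrogen-storage control coefficients and output parameters against the tracking, regulation, and smoothness objectives.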
Funding: Supported by the National Key R&D Program of China (No. 2021YFB3701705).
Abstract: This work constructed a machine learning (ML) model to predict the atmospheric corrosion rate of low-alloy steels (LAS), using the material properties of the LAS, environmental factors, and exposure time as inputs and the corrosion rate as output. Six different ML algorithms were used to construct the model. After optimization and filtering, the eXtreme Gradient Boosting (XGBoost) model exhibited good corrosion rate prediction accuracy. The material property features were then transformed into atomic and physical features using the proposed property-transformation approach, and the dominant descriptors affecting the corrosion rate were filtered using recursive feature elimination (RFE) together with XGBoost. The resulting ML models exhibited better prediction performance and generalization ability with the property-transformation descriptors. In addition, the SHapley Additive exPlanations (SHAP) method was applied to analyze the relationship between the descriptors and the corrosion rate. The results showed that the property-transformation model can effectively help analyze corrosion behavior, thereby significantly improving the generalization ability of corrosion rate prediction models.
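The RFE step used above can be sketched with a simple base model: repeatedly fit, rank features by importance, and drop the weakest until the desired number of descriptors remains. This sketch uses standardised least-squares coefficients as the importance measure instead of XGBoost gains, and the synthetic data are illustrative, not the corrosion dataset.

```python
import numpy as np

def rfe_linear(X, y, n_keep):
    """Recursive feature elimination with a least-squares base model:
    drop the feature with the smallest standardised |coefficient|
    until n_keep features remain."""
    idx = list(range(X.shape[1]))
    Xs = (X - X.mean(0)) / X.std(0)   # standardise so |coef| is comparable
    while len(idx) > n_keep:
        coef, *_ = np.linalg.lstsq(Xs[:, idx], y - y.mean(), rcond=None)
        idx.pop(int(np.abs(coef).argmin()))   # eliminate weakest feature
    return idx

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=200)  # only 0, 3 matter
assert sorted(rfe_linear(X, y, 2)) == [0, 3]
```

Swapping the least-squares fit for an XGBoost model ranked by feature importance recovers the paper's RFE-plus-XGBoost pipeline in spirit.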
Abstract: Floods are among the most serious natural disasters and can cause huge societal and economic losses. Extensive research has been conducted on topics such as flood monitoring, prediction, and loss estimation, and in these fields flood velocity plays a crucial role, strongly influencing the reliability of the outcomes. Traditional methods rely on physical models for flood simulation and prediction and can generate accurate results, but they often take a long time. Deep learning has recently shown significant potential in this field, especially in terms of efficiency, helping to overcome the time consumption associated with traditional methods. This study explores the potential of deep learning models for predicting flood velocity. Specifically, we use a multi-layer perceptron (MLP) model, a type of artificial neural network (ANN), to predict the velocity in a test area of the Lundesokna River in Norway with diverse terrain conditions. Geographic data and flood velocities simulated with a physical hydraulic model are used for pre-training, optimization, and testing of the MLP model. Our experiments indicate that the MLP model can predict flood velocity across the river's diverse terrain with acceptable accuracy against the simulated velocities, while substantially reducing training and testing time. We also discuss limitations and directions for future improvement.
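An MLP surrogate of the kind described, trained to emulate simulator output from geographic inputs, can be sketched end to end in NumPy. This is a one-hidden-layer regressor on synthetic data; the target function below is an arbitrary smooth stand-in for the terrain-to-velocity mapping, not the hydraulic model.

```python
import numpy as np

def train_mlp(X, y, hidden=16, lr=0.05, epochs=5000, seed=0):
    """One-hidden-layer tanh MLP regressor trained by full-batch
    gradient descent on mean squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        pred = h @ W2 + b2
        g = 2 * (pred - y) / len(y)             # dMSE/dpred
        W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
        gh = (g @ W2.T) * (1 - h ** 2)          # backprop through tanh
        W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (300, 2))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1]   # stand-in for terrain -> velocity
model = train_mlp(X, y)
rmse = np.sqrt(np.mean((model(X) - y) ** 2))
assert rmse < 0.3
```

Once trained, evaluating the surrogate is a couple of matrix multiplies, which is the source of the large speed-up over running the physical model.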
Funding: Supported by the National Natural Science Foundation of China (81825009, 82071505, 81901358), the Chinese Academy of Medical Sciences Innovation Fund for Medical Sciences (2021-I2MC&T-B-099, 2019-I2M-5-006), the Program of Chinese Institute for Brain Research, Beijing (2020-NKX-XM-12), the King's College London-Peking University Health Science Center Joint Institute for Medical Research (BMU2020KCL001, BMU2019LCKXJ012), and the National Key R&D Program of China (2021YFF1201103, 2016YFC1307000).
Abstract: Background: Choosing the appropriate antipsychotic drug (APD) treatment for patients with schizophrenia (SCZ) can be challenging, as the treatment response to APDs is highly variable and difficult to predict owing to the lack of effective biomarkers. Previous studies have indicated associations between treatment response and genetic and epigenetic factors, but no effective biomarkers have been identified; hence, further research is imperative to enhance precision medicine in SCZ treatment. Methods: Participants with SCZ were recruited from two randomized trials. The discovery cohort was recruited from the CAPOC trial (n = 2307), which involved 6 weeks of treatment and equally randomized the participants to the Olanzapine, Risperidone, Quetiapine, Aripiprazole, Ziprasidone, and Haloperidol/Perphenazine (subsequently equally assigned to one or the other) groups. The external validation cohort was recruited from the CAPEC trial (n = 1379), which involved 8 weeks of treatment and equally randomized the participants to the Olanzapine, Risperidone, and Aripiprazole groups. Additionally, healthy controls (n = 275) from the local community were used as a genetic/epigenetic reference. The genetic and epigenetic (DNA methylation) risks of SCZ were assessed using the polygenic risk score (PRS) and polymethylation score, respectively. The study also examined genetic-epigenetic interactions with treatment response through differential methylation analysis, methylation quantitative trait loci, colocalization, and promoter-anchored chromatin interaction. Machine learning was used to develop a prediction model for treatment response, which was evaluated for accuracy and clinical benefit using the area under the curve (AUC) for classification, R² for regression, and decision curve analysis. Results: Six risk genes for SCZ (LINC01795, DDHD2, SBNO1, KCNG2, SEMA7A, and RUFY1) involved in cortical morphology were identified as having a genetic-epigenetic interaction associated with treatment response. The developed and externally validated prediction model, which incorporated clinical information, PRS, genetic risk score (GRS), and proxy methylation level (proxyDNAm), demonstrated positive benefits for a wide range of patients receiving different APDs, regardless of sex [discovery cohort: AUC = 0.874 (95% CI 0.867-0.881), R² = 0.478; external validation cohort: AUC = 0.851 (95% CI 0.841-0.861), R² = 0.507]. Conclusions: This study presents a promising precision medicine approach to evaluating treatment response, which has the potential to aid clinicians in making informed decisions about APD treatment for patients with SCZ. Trial registration: Chinese Clinical Trial Registry (https://www.chictr.org.cn/), retrospectively registered 18 Aug 2009: CAPOC-ChiCTR-RNC-09000521 (https://www.chictr.org.cn/showproj.aspx?proj=9014), CAPEC-ChiCTR-RNC-09000522 (https://www.chictr.org.cn/showproj.aspx?proj=9013).
Funding: Supported in part by the Intramural Research Program of the National Institute on Aging; by the National Cancer Institute (K01 CA234317); by the San Diego State University/UC San Diego Comprehensive Cancer Center Partnership (U54 CA132384 and U54 CA132379); and by the Alzheimer's Disease Resource Center for Minority Aging Research at the University of California San Diego (P30 AG059299).
Abstract: Background: Few non-exercise prediction equations exist for maximal oxygen uptake (VO2max), fewer use machine learning (ML), and none are specific to older adults. Since direct measurement of VO2max is infeasible in large epidemiologic cohort studies, we sought to develop, validate, compare, and assess the transportability of several ML VO2max prediction algorithms. Methods: Baltimore Longitudinal Study of Aging (BLSA) participants with valid VO2max tests were included (n = 1080). Least absolute shrinkage and selection operator (LASSO), linear- and tree-boosted extreme gradient boosting, random forest, and support vector machine (SVM) algorithms were trained to predict VO2max values. We developed these algorithms for: (a) the overall BLSA, (b) each sex, (c) all BLSA variables, and (d) variables common in aging cohorts. Finally, we quantified the associations between measured and predicted VO2max and mortality. Results: Age was 69.0 ± 10.4 years (mean ± SD) and measured VO2max was 21.6 ± 5.9 mL/kg/min. LASSO, linear- and tree-boosted extreme gradient boosting, random forest, and SVM yielded root mean squared errors of 3.4 mL/kg/min, 3.6 mL/kg/min, 3.4 mL/kg/min, 3.6 mL/kg/min, and 3.5 mL/kg/min, respectively. Incremental quartiles of measured VO2max showed an inverse gradient in mortality risk. Predicted VO2max variables yielded similar effect estimates but were not robust to adjustment. Conclusion: Measured VO2max is a strong predictor of mortality. ML can improve prediction accuracy compared with simpler approaches, but estimates of the association with mortality remain sensitive to adjustment. Future studies should seek to reproduce these results so that VO2max, an important vital sign, can be more broadly studied as a modifiable target for promoting functional resiliency and healthy aging.
Funding: JMW, RSS, EP, EK, WM, ZBP, and NRMT have received research funding from a precision trauma care research award from the Combat Casualty Care Research Program of the US Army Medical Research and Materiel Command (DM180044).
Abstract: We read with interest the recent systematic review "Artificial intelligence and machine learning for hemorrhagic trauma care" by Peng et al. [1], which evaluated the literature on machine learning (ML) in the management of traumatic haemorrhage. We thank the authors for their contribution on the role of ML in trauma.
Abstract: The scientific community recognizes the seriousness of rockbursts and the need for effective mitigation measures. The literature reports various successful applications of machine learning (ML) models for rockburst assessment; however, a significant question remains unanswered: how reliable are these models, and at what confidence level are classifications made? Typically, ML models output a single rockburst grade, even for intricate and out-of-distribution samples, without any associated confidence value. Given the susceptibility of ML models to errors, it becomes imperative to quantify their uncertainty to prevent consequential failures. To address this issue, we propose a conformal prediction (CP) framework built on traditional ML models (extreme gradient boosting and random forest) that generates valid classifications of rockburst while producing a measure of confidence in its output. The proposed framework guarantees marginal coverage and, in most cases, conditional coverage on the test dataset. The CP was evaluated on a rockburst case at the Sanshandao Gold Mine in China, where it achieved high coverage and efficiency at applicable confidence levels. Significantly, the CP identified several "confident" classifications from the traditional ML model as unreliable, necessitating expert verification for informed decision-making. The proposed framework improves the reliability and accuracy of rockburst assessments, with the potential to bolster user confidence.
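The split-conformal mechanism that sits on top of a base classifier can be shown in a few lines: calibrate a threshold on nonconformity scores, then emit a label *set* per sample instead of a single grade. The probabilities and labels below are toy values, not rockburst data, and the base model is assumed to already exist.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.2):
    """Split conformal prediction on any probabilistic classifier:
    score calibration samples by 1 - p(true class), take the conformal
    quantile, and return every label passing the threshold per test
    sample. This yields ~(1 - alpha) marginal coverage by construction."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    k = int(np.ceil((n + 1) * (1 - alpha)))          # conformal quantile index
    q = np.sort(scores)[min(k, n) - 1]
    return [np.flatnonzero(1.0 - p <= q).tolist() for p in test_probs]

# toy calibration set (4 samples, 2 classes) and two test samples
cal_probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.45, 0.55]])
cal_labels = np.array([0, 1, 0, 0])
test_probs = np.array([[0.95, 0.05], [0.5, 0.5]])
sets = conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.2)
# confident sample -> singleton set; ambiguous sample -> both labels flagged
assert sets[0] == [0] and sets[1] == [0, 1]
```

A multi-label set is exactly the "unreliable, needs expert verification" signal the abstract describes, whereas a singleton set is a classification the framework stands behind at the chosen confidence level.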
Funding: Supported by the National Key R&D Program of China (Grant No. 2017YFC1501604) and the National Natural Science Foundation of China (Grant Nos. 41875114 and 41875057).
Abstract: Accurate prediction of tropical cyclone (TC) intensity is challenging due to the complex physical processes involved. Here, we introduce a new TC intensity prediction scheme for the western North Pacific (WNP) based on a time-dependent theory of TC intensification, termed the energetically based dynamical system (EBDS) model, together with a long short-term memory (LSTM) neural network. In the time-dependent theory, TC intensity change is controlled by both the internal dynamics of the TC system and various environmental factors, expressed as an environmental dynamical efficiency. The LSTM neural network is used to predict the environmental dynamical efficiency in the EBDS model, trained on best-track TC data and global reanalysis data for 1982-2017. Transfer learning and ensemble methods are used to retrain the scheme on the environmental factors predicted by the Global Forecast System (GFS) of the National Centers for Environmental Prediction during 2017-21. The predicted environmental dynamical efficiency is then iterated through the EBDS equations to predict TC intensity. The new scheme is evaluated using both reanalysis data and GFS prediction data. Its intensity predictions show better skill than the official forecasts from the China Meteorological Administration (CMA) and those of other state-of-the-art statistical and dynamical forecast systems, except at the 72-h lead time. In particular, at the longer lead times of 96 h and 120 h, the new scheme has smaller forecast errors, with more than a 30% improvement over the official forecasts.
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFE0106300), the National Natural Science Foundation of China (Grant Nos. 41941009 and 42006191), the China Postdoctoral Science Foundation (Grant No. 2023M741526), the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (Grant Nos. SML2022SP401 and SML2023SP207), and the Program of Marine Economy Development Special Fund under the Department of Natural Resources of Guangdong Province (Grant No. GDNRC [2022]18).
Abstract: The rapidly changing Antarctic sea ice has garnered significant interest. To enhance prediction skill for sea ice and respond to the Sea Ice Prediction Network-South's latest call, this study presents reforecast results for Antarctic sea-ice area and extent from December to June of the following year using a convolutional long short-term memory (ConvLSTM) network. The reforecast experiments demonstrate that ConvLSTM successfully captures the interannual and interseasonal variability of Antarctic sea ice and performs better than the European Centre for Medium-Range Weather Forecasts. On this basis, we present a prediction for December 2023 to June 2024, indicating that Antarctic sea ice will remain at low levels but may not set a new record low. This research highlights the promising application of deep learning to Antarctic sea-ice prediction.
Abstract: Background: According to clinical practice guidelines, transarterial chemoembolization (TACE) is the standard treatment modality for patients with intermediate-stage hepatocellular carcinoma (HCC). Early prediction of treatment response can help patients choose a reasonable treatment plan. This study aimed to investigate the value of a radiomic-clinical model for predicting the efficacy of the first TACE treatment for HCC, with the goal of prolonging patient survival. Methods: A total of 164 patients with HCC who underwent their first TACE between January 2017 and September 2021 were analyzed. Tumor response was assessed by the modified Response Evaluation Criteria in Solid Tumors (mRECIST), and the response of the first TACE for each session and its correlation with overall survival were evaluated. Radiomic signatures associated with treatment response were identified by the least absolute shrinkage and selection operator (LASSO), four machine learning models were built with different types of regions of interest (ROIs) (tumor and corresponding tissues), and the best-performing model was selected. Predictive performance was assessed with receiver operating characteristic (ROC) curves and calibration curves. Results: Of all the models, the random forest (RF) model with peritumoral (+10 mm) radiomic signatures performed best [area under the ROC curve (AUC) = 0.964 in the training cohort, AUC = 0.949 in the validation cohort]. The RF model was used to calculate a radiomic score (Rad-score), and the optimal cutoff value (0.34) was determined by Youden's index. Patients were then divided into a high-risk group (Rad-score > 0.34) and a low-risk group (Rad-score ≤ 0.34), and a nomogram model was successfully established to predict treatment response. The predicted treatment response also allowed significant discrimination of the Kaplan-Meier curves. Multivariate Cox regression identified six independent prognostic factors for overall survival: male sex [hazard ratio (HR) = 0.500, 95% confidence interval (CI): 0.260-0.962, P = 0.038], alpha-fetoprotein (HR = 1.003, 95% CI: 1.002-1.004, P < 0.001), alanine aminotransferase (HR = 1.003, 95% CI: 1.001-1.005, P = 0.025), performance status (HR = 2.400, 95% CI: 1.200-4.800, P = 0.013), the number of TACE sessions (HR = 0.870, 95% CI: 0.780-0.970, P = 0.012), and Rad-score (HR = 3.480, 95% CI: 1.416-8.552, P = 0.007). Conclusions: Radiomic signatures and clinical factors can be used to predict the response of HCC patients to their first TACE and may help identify the patients most likely to benefit from TACE.
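The Youden's-index step used above to turn the continuous Rad-score into a high-/low-risk split is a standard ROC computation: pick the threshold maximising sensitivity + specificity - 1. The sketch below applies it to invented toy scores, not the study's Rad-scores.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the threshold maximising Youden's J statistic
    (sensitivity + specificity - 1) over all observed score values."""
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# toy "Rad-scores": responders cluster high, non-responders low
scores = np.array([0.1, 0.2, 0.3, 0.35, 0.6, 0.7, 0.8, 0.9])
labels = np.array([0,   0,   0,   1,    1,   1,   1,   1])
t, j = youden_cutoff(scores, labels)
assert t == 0.35 and np.isclose(j, 1.0)
```

With the study's data this procedure is what yields the 0.34 cutoff separating the high- and low-risk groups.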
Funding: Supported by the National Natural Science Foundation of China (62333010, 61673205).
Abstract: In the coal-to-ethylene glycol (CTEG) process, precisely estimating quality variables is crucial for process monitoring, optimization, and control. A significant challenge is the reliance on offline laboratory analysis to obtain these variables, which often incurs substantial monetary cost and significant time delays. The resulting few-shot learning scenarios hinder the efficient development of predictive models. To address this issue, our study introduces the transferable adversarial slow feature extraction network (TASF-Net), an approach designed specifically for few-shot quality prediction in the CTEG process. TASF-Net integrates the slowness principle with a deep Bayesian framework, effectively capturing the nonlinear and inertial characteristics of the CTEG process. Additionally, the model employs a variable attention mechanism to adaptively identify quality-related input variables at each time step. A key strength of TASF-Net is its ability to handle the complex measurement noise, outliers, and system interference typical of CTEG data: an adversarial learning strategy based on a min-max game is adopted to improve its robustness and its ability to model irregular industrial data accurately. Furthermore, an incremental refining transfer learning framework is designed to further improve few-shot prediction performance by transferring knowledge from a model pretrained on the source domain to the target domain. The effectiveness and superiority of TASF-Net were empirically validated on a real-world CTEG dataset; compared with several state-of-the-art methods, it demonstrates exceptional capability in addressing the challenges of few-shot quality prediction in the CTEG process.
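The slowness principle that TASF-Net builds on can be demonstrated in its classical linear form: whiten the observed signals, then find the direction whose temporal derivative has the least variance. This is textbook linear slow feature analysis on synthetic mixed signals, not the network's deep Bayesian extractor; the sine sources and random mixing are invented for illustration.

```python
import numpy as np

def slow_features(X, n_components=1):
    """Linear slow feature analysis: whiten the signal, then take the
    whitened directions whose discrete-time derivative has minimum
    variance (the 'slow' latent drivers)."""
    Xc = X - X.mean(0)
    cov = Xc.T @ Xc / len(Xc)
    d, E = np.linalg.eigh(cov)
    white = Xc @ E / np.sqrt(d)               # whitened signals
    dW = np.diff(white, axis=0)               # temporal derivative
    d2, E2 = np.linalg.eigh(dW.T @ dW / len(dW))
    return white @ E2[:, :n_components]       # slowest directions first

t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)        # slowly varying driver (e.g. process inertia)
fast = np.sin(40 * t)   # fast nuisance component (e.g. measurement noise)
rng = np.random.default_rng(4)
X = np.column_stack([slow, fast]) @ rng.normal(size=(2, 2))  # random mixing
s = slow_features(X)[:, 0]
# the recovered feature correlates strongly with the slow source
assert abs(np.corrcoef(s, slow)[0, 1]) > 0.9
```

Separating slow process dynamics from fast disturbances in this way is the intuition behind using slowness to capture the inertial character of the CTEG process.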
Funding: Supported by the Preparation and Characterization of Fogging Agents, Cooperative Project of China (Grant No. 1900030040), and the Preparation and Test of Fogging Agents, Cooperative Project of China (Grant No. 2200030085).
Abstract: Water-based aerosols are widely used as an effective electro-optical countermeasure on the battlefield owing to their high efficiency, low cost, and eco-friendliness. Unfortunately, the stability of water-based aerosols is often unsatisfactory because of the rapid evaporation and sedimentation of the aerosol droplets. Great efforts have been devoted to improving aerosol stability by using additives of different compositions and proportions. However, the lack of criteria and principles for screening effective additives results in excessive experimental time and cost, and the stabilization time of such aerosols has remained only about 30 min, which cannot meet the requirements of long-lasting interference. Here, to improve the stability of water-based aerosols and optimize their complex formulation efficiently, a theoretical calculation method based on thermodynamic entropy theory is proposed. All the factors that influence the shielding effect, including the polyol, stabilizer, propellant, water, and cosolvent, are considered in the calculation. An ultra-stable water-based aerosol with a duration of over 120 min is obtained with the optimal fogging-agent composition, providing enough time to counter electro-optic weapons. A theoretical design guideline is also proposed: additives with a high phase-transition temperature and low phase-transition enthalpy greatly increase the total entropy change and reduce the absolute entropy change of the aerosol cooling process, giving rise to enhanced stability of the water-based aerosol. The theoretical calculation methodology saves time and resources in screening water-based aerosols with the desired performance and stability, and provides a powerful guarantee for homeland security.
Abstract: A recently published modeling approach for penetration into adobe, and the previous approaches it implicitly criticizes, are reviewed and discussed. This article is a note on the paper "Ballistic model for the prediction of penetration depth and residual velocity in adobe: A new interpretation of the ballistic resistance of earthen masonry" (DOI: https://doi.org/10.1016/j.dt.2018.07.017). The reply from Li Piani et al. is linked to this article.
Abstract: Highway safety researchers focus on crash injury severity, utilizing deep learning, specifically deep neural networks (DNN), deep convolutional neural networks (D-CNN), and deep recurrent neural networks (D-RNN), as the preferred method for modeling accident severity. Deep learning's strength lies in handling intricate relationships within extensive datasets, making it popular for accident severity level (ASL) prediction and classification. Despite prior successes, there is a need for an efficient system for recognizing ASL in diverse road conditions. To address this, we present an innovative Accident Severity Level Prediction Deep Learning (ASLP-DL) framework incorporating DNN, D-CNN, and D-RNN models fine-tuned through iterative hyperparameter selection with stochastic gradient descent. The framework optimizes the hidden layers and integrates data augmentation, Gaussian noise, and dropout regularization for improved generalization. Sensitivity and factor contribution analyses identify the influential predictors. Evaluated on three diverse crash record databases, NCDB 2018-2019, UK 2015-2020, and US 2016-2021, the D-RNN model excels with an accuracy of 89.0281%, a ROC area of 0.751, an F-estimate of 0.941, and a Kappa score of 0.0629 on the NCDB dataset. The proposed framework consistently outperforms traditional methods and existing machine learning and deep learning techniques.
Abstract: Cardiovascular computed tomography angiography (CTA) is a widely used imaging modality in the diagnosis of cardiovascular disease. Advances in CT imaging technology have further extended its applications, from high diagnostic value to minimising radiation exposure to patients. In addition to the standard application of assessing vascular lumen changes, CTA-derived applications, including 3D-printed personalised models; 3D visualisations such as virtual endoscopy, virtual reality, augmented reality, and mixed reality; and CT-derived hemodynamic flow analysis and fractional flow reserve (FFRCT), greatly enhance the diagnostic performance of CTA in cardiovascular disease. The widespread application of artificial intelligence in medicine also contributes significantly to the clinical value of CTA. The clinical value of CTA has extended from initial diagnosis to the identification of vulnerable lesions and the prediction of disease extent, thereby improving patient care and management. In this review article, as an active researcher in cardiovascular imaging for more than 20 years, I provide an overview of cardiovascular CTA in cardiovascular disease. This review is expected to provide readers with an update on CTA applications, from initial lumen assessment to recent developments utilising the latest imaging and visualisation technologies, and to serve as a useful resource for researchers and clinicians in the judicious clinical use of cardiovascular CT.
Funding: Supported financially by the National Natural Science Foundation of China (Grant No. 52104013) and the China Postdoctoral Science Foundation (Grant No. 2022T150724).
Abstract: Due to the complexity and variability of carbonate formation leakage zones, lost circulation prediction and control is one of the major challenges of carbonate drilling; it raises well-control risks and production expenses. This research takes the H oilfield as an example, employs seismic features for mud loss prediction, and produces a complete set of pre-drilling mud loss prediction solutions. First, 16 seismic attributes are calculated from the post-stack seismic data, and the mud loss rate per unit footage is specified. The sample set is constructed by extracting each attribute from the seismic traces surrounding 15 typical wells, with an 8:2 ratio between the training set and the test set. With the calibration results for mud loss rate per unit footage, the nonlinear mapping between the seismic attributes and the mud loss rate per unit footage is established using a mixture density network (MDN) model. Then, the influence of the number of sub-Gaussians and the uncertainty coefficient on the model's predictions is evaluated. Finally, the model is used in conjunction with downhole drilling conditions to assess the risk of mud loss in various layers and along the wellbore trajectory. The study demonstrates that the mean relative errors of the model are 6.9% for training data and 7.5% for test data, with R² of 90% and 88%, respectively. The accuracy and efficacy of mud loss prediction can be greatly enhanced by combining the 16 seismic attributes with the mud loss rate per unit footage and applying machine learning methods. The MDN-based mud loss prediction model can not only predict the mud loss rate but also objectively evaluate the prediction based on the quality of the data and the model.
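The "predict and self-evaluate" behaviour of a mixture density network comes from its output being a full Gaussian mixture rather than a point estimate: the mixture mean gives the prediction and the mixture spread flags unreliable cases. The snippet below shows only that read-out step for one hypothetical head output; the weights, means, and standard deviations are invented, and the upstream network is assumed to exist.

```python
import numpy as np

def mdn_predict(pi, mu, sigma):
    """Given mixture weights, means, and std-devs from an MDN head,
    return the mixture mean (point prediction) and the mixture std-dev
    (an uncertainty estimate for that prediction)."""
    mean = np.sum(pi * mu)
    # law of total variance for a Gaussian mixture
    var = np.sum(pi * (sigma ** 2 + mu ** 2)) - mean ** 2
    return mean, np.sqrt(var)

# toy head output for one seismic-attribute input: two sub-Gaussians
pi    = np.array([0.7, 0.3])    # mixture weights (sum to 1)
mu    = np.array([2.0, 6.0])    # candidate mud-loss rates
sigma = np.array([0.5, 1.0])
mean, std = mdn_predict(pi, mu, sigma)
assert np.isclose(mean, 3.2)    # 0.7*2 + 0.3*6
assert std > 1.5                # bimodal output -> large uncertainty
```

A strongly bimodal or wide mixture signals that the data do not pin down the mud-loss rate, which is how the model "objectively evaluates the prediction based on the quality of the data".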