Journal Articles
13,756 articles found
1. Better use of experience from other reservoirs for accurate production forecasting by learn-to-learn method
Authors: Hao-Chen Wang, Kai Zhang, Nancy Chen, Wen-Sheng Zhou, Chen Liu, Ji-Fu Wang, Li-Ming Zhang, Zhi-Gang Yu, Shi-Ti Cui, Mei-Chun Yang. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 716-728 (13 pages).
To assess whether a development strategy will be profitable enough, production forecasting is a crucial and difficult step in the process. The development history of other reservoirs in the same class tends to be studied to make predictions accurate. However, the permeability field, well patterns, and development regime must all be similar for two reservoirs to be considered in the same class. This results in very few available experiences from other reservoirs, even though there is a lot of historical information on numerous reservoirs, because it is difficult to find such similar reservoirs. This paper proposes a learn-to-learn method, which can better utilize a vast amount of historical data from various reservoirs. Intuitively, the proposed method first learns how to learn samples before directly learning rules in samples. Technically, by utilizing gradients from networks with independent parameters and copied structure in each class of reservoirs, the proposed network obtains the optimal shared initial parameters, which are regarded as transferable information across different classes. Based on that, the network is able to predict future production indices for the target reservoir by training with only very limited samples collected from reservoirs in the same class. Two cases further demonstrate its superiority in accuracy to other widely used network methods.
Keywords: Production forecasting; Multiple patterns; Few-shot learning; Transfer learning
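The "shared initial parameters meta-trained across task classes, then few-shot adaptation" idea this abstract describes can be illustrated with a Reptile-style meta-learning loop (a simpler relative of gradient-based meta-learning). The toy tasks, linear model, and hyperparameters below are invented for illustration and are not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # each "reservoir class" is a toy regression task y = a*x + b
    a, b = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
    x = rng.uniform(-1, 1, size=(20, 1))
    return x, a * x + b

def loss_and_grad(w, x, y):
    X = np.hstack([x, np.ones_like(x)])          # bias column
    err = X @ w - y
    return float(np.mean(err ** 2)), 2 * X.T @ err / len(x)

def adapt(w, x, y, steps=5, lr=0.1):
    # inner loop: few-shot fine-tuning from a given initialization
    for _ in range(steps):
        _, g = loss_and_grad(w, x, y)
        w = w - lr * g
    return w

# outer loop: meta-train a shared initialization across many tasks
meta_w = np.zeros((2, 1))
for _ in range(300):
    x, y = make_task()
    adapted = adapt(meta_w, x, y)
    meta_w = meta_w + 0.1 * (adapted - meta_w)   # move init toward adapted weights

# few-shot adaptation on a new task drawn from the same distribution
x, y = make_task()
before, _ = loss_and_grad(meta_w, x, y)
after, _ = loss_and_grad(adapt(meta_w, x, y), x, y)
print(before, after)
```

The meta-learned initialization sits near the center of the task family, so a handful of gradient steps on a new task's few samples already reduces the loss.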
2. A Tutorial on Federated Learning from Theory to Practice: Foundations, Software Frameworks, Exemplary Use Cases, and Selected Trends
Authors: M. Victoria Luzón, Nuria Rodríguez-Barroso, Alberto Argente-Garrido, Daniel Jiménez-López, Jose M. Moyano, Javier Del Ser, Weiping Ding, Francisco Herrera. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 4, pp. 824-850 (27 pages).
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, our tutorial provides exemplary cases of study from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary cases of study, by systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary cases of study with source code for different ML approaches; and iii) trends, shortly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a referential work for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
Keywords: Data privacy; distributed machine learning; federated learning; software frameworks
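The core FL mechanic the abstract describes, local training on private data with only model weights traveling to a server, can be sketched with a minimal FedAvg-style round. The linear model, client data, and hyperparameters are illustrative assumptions, not material from the tutorial:

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's training: a few gradient steps on its private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# three clients, each holding a private shard of a common linear problem
w_true = np.array([[1.5], [-0.5]])
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ w_true + 0.01 * rng.normal(size=(40, 1))
    clients.append((X, y))

w_global = np.zeros((2, 1))
for _ in range(20):                               # communication rounds
    locals_ = [local_update(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # server aggregates a weighted average of client models (FedAvg);
    # raw data never leaves a client, only model weights travel
    w_global = sum(s * w for s, w in zip(sizes / sizes.sum(), locals_))

print(w_global.ravel())
```

After a few rounds the averaged model approaches the solution a centralized fit would find, without any client ever sharing its samples.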
3. Machine-learning-assisted efficient reconstruction of the quantum states generated from the Sagnac polarization-entangled photon source
Authors: 毛梦辉, 周唯, 李新慧, 杨然, 龚彦晓, 祝世宁. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 8, pp. 50-54 (5 pages).
Neural networks are becoming ubiquitous in various areas of physics as a successful machine learning (ML) technique for addressing different tasks. Based on ML techniques, we propose and experimentally demonstrate an efficient method for state reconstruction of the widely used Sagnac polarization-entangled photon source. By properly modeling the target states, a multi-output fully connected neural network is well trained using only six of the sixteen measurement bases in the standard tomography technique, and hence our method reduces the resource consumption without loss of accuracy. We demonstrate the ability of the neural network to predict state parameters with high precision by using both simulated and experimental data. Specifically, the mean absolute error for all the parameters is below 0.05 for the simulated data, and a mean fidelity of 0.99 is achieved for experimentally generated states. Our method could be generalized to estimate other kinds of states, as well as to other quantum information tasks.
Keywords: machine learning; state estimation; quantum state tomography; polarization-entangled photon source
4. Deep learning approaches to recover the plasma current density profile from the safety factor based on Grad-Shafranov solutions across multiple tokamaks
Authors: 张瀚予, 周利娜, 刘钺强, 郝广周, 王硕, 杨旭, 苗雨田, 段萍, 陈龙. Plasma Science and Technology (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 17-28 (12 pages).
Many magnetohydrodynamic stability analyses require generation of a set of equilibria with a fixed safety factor q-profile while varying other plasma parameters. A neural network (NN)-based approach is investigated that facilitates such a process. Both multilayer perceptron (MLP)-based NN and convolutional neural network (CNN) models are trained to map the q-profile to the plasma current density J-profile, and vice versa, while satisfying the Grad-Shafranov radial force balance constraint. When the initial target models are trained using a database of semi-analytically constructed numerical equilibria, an initial CNN with one convolutional layer is found to perform better than an initial MLP model. In particular, a trained initial CNN model can also predict the q- or J-profile for experimental tokamak equilibria. The performance of both initial target models is further improved by fine-tuning the training database, i.e., by adding realistic experimental equilibria with Gaussian noise. The fine-tuned target models, referred to as fine-tuned MLP and fine-tuned CNN, reproduce the target q- or J-profile well across multiple tokamak devices. As an important application, these NN-based equilibrium profile converters can provide a good initial guess for iterative equilibrium solvers, where the desired input quantity is the safety factor instead of the plasma current density.
Keywords: plasma equilibrium; deep learning; safety factor profile; current density profile; tokamak
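The profile-to-profile regression at the heart of this abstract (an MLP mapping one discretized radial profile to another) can be sketched with a tiny one-hidden-layer network trained by backpropagation. The smooth linear operator standing in for the physical q-to-J relation, the profile length, and all hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy stand-in for the q-profile -> J-profile mapping: 16-point input
# profiles related to 16-point output profiles by a fixed smooth operator
n_pts = 16
idx = np.arange(n_pts)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)
X = rng.normal(size=(256, n_pts))
Y = X @ A.T

# one-hidden-layer MLP trained by full-batch gradient descent
W1 = 0.1 * rng.normal(size=(n_pts, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.normal(size=(32, n_pts)); b2 = np.zeros(n_pts)
lr = 0.01
losses = []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    err = P - Y
    losses.append(float((err ** 2).sum() / len(X)))   # per-sample squared error
    gP = 2 * err / len(X)                             # d(loss)/dP
    gW2 = H.T @ gP; gb2 = gP.sum(0)
    gH = (gP @ W2.T) * (1 - H ** 2)                   # backprop through tanh
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(losses[0], losses[-1])
```

The training loss falls steadily, showing the network absorbing the fixed operator linking the two profiles; the paper's models additionally enforce the Grad-Shafranov constraint and use real equilibria, which this sketch does not attempt.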
5. Terrorism Attack Classification Using Machine Learning: The Effectiveness of Using Textual Features Extracted from GTD Dataset
Authors: Mohammed Abdalsalam, Chunlin Li, Abdelghani Dahou, Natalia Kryvinska. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 2, pp. 1427-1467 (41 pages).
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
Keywords: Artificial intelligence; machine learning; natural language processing; data analytics; DistilBERT; feature extraction; terrorism classification; GTD dataset
6. Application Strategies of Virtual Reality Technology in the Teaching Design of Vocational Courses from the Perspective of Learning Transfer Theory
Authors: Shuyu Gong. Journal of Contemporary Educational Research, 2024, No. 7, pp. 1-6 (6 pages).
With the rapid development of virtual reality technology, it has been widely used in the field of education. It can promote the development of learning transfer, which is an effective method for learners to learn effectively. Therefore, this paper describes how to use virtual reality technology to achieve learning transfer in order to achieve teaching goals and improve learning efficiency.
Keywords: Learning transfer; virtual reality technology; application strategy
7. Astrocytic endothelin-1 overexpression impairs learning and memory ability in ischemic stroke via altered hippocampal neurogenesis and lipid metabolism (Cited by 3)
Authors: Jie Li, Wen Jiang, Yuefang Cai, Zhenqiu Ning, Yingying Zhou, Chengyi Wang, Sookja Ki Chung, Yan Huang, Jingbo Sun, Minzhen Deng, Lihua Zhou, Xiao Cheng. Neural Regeneration Research (SCIE, CAS, CSCD), 2024, No. 3, pp. 650-656 (7 pages).
Vascular etiology is the second most prevalent cause of cognitive impairment globally. Endothelin-1, which is produced and secreted by endothelial cells and astrocytes, is implicated in the pathogenesis of stroke. However, the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood. Here, using mice in which astrocytic endothelin-1 was overexpressed, we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia (1 hour of ischemia; 7 days, 28 days, or 3 months of reperfusion). We also revealed that astrocytic endothelin-1 overexpression contributed to neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion. Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6, which were differentially expressed in the brain, were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke. Moreover, the levels of the enriched differentially expressed proteins were closely related to lipid metabolism, as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis. Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine, sphingomyelin, and phosphatidic acid. Overall, this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
Keywords: astrocytic endothelin-1; dentate gyrus; differentially expressed proteins; hippocampus; ischemic stroke; learning and memory deficits; lipid metabolism; neural stem cells; neurogenesis; proliferation
8. Toward a Learnable Climate Model in the Artificial Intelligence Era (Cited by 2)
Authors: Gang HUANG, Ya WANG, Yoo-Geun HAM, Bin MU, Weichen TAO, Chaoyang XIE. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, No. 7, pp. 1281-1288 (8 pages).
Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an "either/or" scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments and constraints for AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of achieving global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model, balancing AI and physics, is an achievable goal.
Keywords: artificial intelligence; deep learning; learnable climate model
9. A game-theoretic approach for federated learning: A trade-off among privacy, accuracy and energy (Cited by 2)
Authors: Lihua Yin, Sixin Lin, Zhe Sun, Ran Li, Yuanyuan He, Zhiqiang Hao. Digital Communications and Networks (SCIE, CSCD), 2024, No. 2, pp. 389-403 (15 pages).
Benefiting from the development of federated learning (FL) and distributed communication systems, large-scale intelligent applications have become possible. Distributed devices not only provide adequate training data, but also cause privacy leakage and energy consumption. How to optimize the energy consumption in distributed communication systems, while ensuring user privacy and model accuracy, has become an urgent challenge. In this paper, we define FL as a three-layer architecture including users, agents, and a server. In order to find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we design the training process of FL as a game model. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in the single game, and then find the incentive mechanism that meets social norms through the repeated game. The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality, and the proposed incentive mechanism can also promote users to submit high-quality data in FL. Following multiple rounds of play, the incentive mechanism can help all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.
Keywords: Federated learning; privacy preservation; energy optimization; game theory; distributed communication systems
10. Low-Cost Federated Broad Learning for Privacy-Preserved Knowledge Sharing in the RIS-Aided Internet of Vehicles (Cited by 1)
Authors: Xiaoming Yuan, Jiahui Chen, Ning Zhang, Qiang (John) Ye, Changle Li, Chunsheng Zhu, Xuemin Sherman Shen. Engineering (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 178-189 (12 pages).
High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high-mobility environment. In order to protect data privacy and improve data learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as a local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. Aiming to improve the resource scheduling efficiency in FBL, a double Davidon-Fletcher-Powell (DDFP) algorithm is presented to solve the time slot allocation and RIS configuration problem. Based on the results of resource scheduling, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
Keywords: Knowledge sharing; Internet of Vehicles; federated learning; broad learning; reconfigurable intelligent surfaces; resource allocation
11. How do the landslide and non-landslide sampling strategies impact landslide susceptibility assessment? A catchment-scale case study from China (Cited by 1)
Authors: Zizheng Guo, Bixia Tian, Yuhang Zhu, Jun He, Taili Zhang. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, No. 3, pp. 877-894 (18 pages).
The aim of this study is to investigate the impacts of the landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
Keywords: Landslide susceptibility; sampling strategy; machine learning; random forest; China
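The scenario comparison in this abstract hinges on the ROC curve's summary statistic, the AUC, computed per sampling strategy. A minimal from-scratch AUC (the rank formulation: the probability that a positive sample outscores a negative one, ties counted half) can be sketched as follows; the scores and labels are toy values, not the study's data:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank statistic: P(score_pos > score_neg), ties count half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# toy susceptibility scores for landslide (1) and non-landslide (0) samples
labels = np.array([1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1])
print(roc_auc(scores, labels))
```

Recomputing this statistic for each of the 14 positive/negative sampling scenarios is exactly the kind of comparison the study reports; a strategy whose non-landslide samples are easier to separate (e.g. drawn from a very-low-susceptibility zone) will show a higher AUC.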
12. A Deep Learning Approach for Forecasting Thunderstorm Gusts in the Beijing-Tianjin-Hebei Region (Cited by 1)
Authors: Yunqing LIU, Lu YANG, Mingxuan CHEN, Linye SONG, Lei HAN, Jingfeng XU. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, No. 7, pp. 1342-1363 (22 pages).
Thunderstorm gusts are a common form of severe convective weather in the warm season in North China, and it is of great importance to forecast them correctly. At present, the forecasting of thunderstorm gusts is mainly based on traditional subjective methods, which fail to achieve high-resolution and high-frequency gridded forecasts based on multiple observation sources. In this paper, we propose a deep learning method called Thunderstorm Gusts TransU-net (TG-TransUnet) to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology (IUM) with a lead time of 1 to 6 h. To determine the specific range of thunderstorm gusts, we combine three meteorological variables: radar reflectivity factor, lightning location, and 1-h maximum instantaneous wind speed from automatic weather stations (AWSs), and obtain a reasonable ground truth of thunderstorm gusts. Then, we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture, which is based on convolutional neural networks and a transformer. The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021-23 are then used as training, validation, and testing datasets. Finally, the performance of TG-TransUnet is compared with other methods. The results show that TG-TransUnet has the best prediction results at 1-6 h. The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
Keywords: thunderstorm gusts; deep learning; weather forecasting; convolutional neural network; transformer
13. High-throughput calculations combining machine learning to investigate the corrosion properties of binary Mg alloys (Cited by 1)
Authors: Yaowei Wang, Tian Xie, Qingli Tang, Mingxu Wang, Tao Ying, Hong Zhu, Xiaoqin Zeng. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 1406-1418 (13 pages).
Magnesium (Mg) alloys have shown great prospects as both structural and biomedical materials, while poor corrosion resistance limits their further application. In this work, to avoid time-consuming and laborious experimental trials, a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloys with intermetallics, from both the thermodynamic and kinetic perspectives. The stable binary Mg intermetallics with a low equilibrium potential difference with respect to the Mg matrix are first identified. Then, the hydrogen adsorption energies on the surfaces of these Mg intermetallics are calculated, and the corrosion exchange current density is further calculated by a hydrogen evolution reaction (HER) kinetic model. Several intermetallics, e.g. Y3Mg, Y2Mg, and La5Mg, are identified to be promising intermetallics which might effectively hinder the cathodic HER. Furthermore, machine learning (ML) models are developed to predict Mg intermetallics with proper hydrogen adsorption energy, employing the work function (Wf) and the weighted first ionization energy (WFIE). The generalization of the ML models is tested on five new binary Mg intermetallics with an average root mean square error (RMSE) of 0.11 eV. This study not only predicts some promising binary Mg intermetallics which may suppress galvanic corrosion, but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloys, which can be extended to ternary Mg alloys or other alloy systems.
Keywords: Mg intermetallics; corrosion property; high-throughput; density functional theory; machine learning
14. Use of machine learning models for the prognostication of liver transplantation: A systematic review (Cited by 2)
Authors: Gidion Chongo, Jonathan Soldera. World Journal of Transplantation, 2024, No. 1, pp. 164-188 (25 pages).
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-d mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Keywords: Liver transplantation; machine learning models; prognostication; allograft allocation; artificial intelligence
15. T2-weighted imaging-based radiomic-clinical machine learning model for predicting the differentiation of colorectal adenocarcinoma (Cited by 1)
Authors: Hui-Da Zheng, Qiao-Yi Huang, Qi-Ming Huang, Xiao-Ting Ke, Kai Ye, Shu Lin, Jian-Hua Xu. World Journal of Gastrointestinal Oncology (SCIE), 2024, No. 3, pp. 819-832 (14 pages).
BACKGROUND: A study on predicting the differentiation grade of colorectal cancer (CRC) based on magnetic resonance imaging (MRI) has not been reported yet. Developing a non-invasive model to predict the differentiation grade of CRC is of great value. AIM: To develop and validate machine learning-based models for predicting the differentiation grade of CRC based on T2-weighted images (T2WI). METHODS: We retrospectively collected the preoperative imaging and clinical data of 315 patients with CRC who underwent surgery from March 2018 to July 2023. Patients were randomly assigned to a training cohort (n=220) or a validation cohort (n=95) at a 7:3 ratio. Lesions were delineated layer by layer on high-resolution T2WI. Least absolute shrinkage and selection operator (LASSO) regression was applied to screen for radiomic features. Radiomics and clinical models were constructed using the multilayer perceptron (MLP) algorithm. These radiomic features and clinically relevant variables (selected based on a significance level of P<0.05 in the training set) were used to construct the radiomic-clinical model. The performance of the three models (clinical, radiomic, and radiomic-clinical) was evaluated using the area under the curve (AUC), calibration curves, and decision curve analysis (DCA). RESULTS: After feature selection, eight radiomic features were retained from the initial 1781 features to construct the radiomic model. Eight different classifiers, including logistic regression, support vector machine, k-nearest neighbours, random forest, extreme trees, extreme gradient boosting, light gradient boosting machine, and MLP, were used to construct the model, with MLP demonstrating the best diagnostic performance. The AUC of the radiomic-clinical model was 0.862 (95% CI: 0.796-0.927) in the training cohort and 0.761 (95% CI: 0.635-0.887) in the validation cohort. The AUC for the radiomic model was 0.796 (95% CI: 0.723-0.869) in the training cohort and 0.735 (95% CI: 0.604-0.866) in the validation cohort. The clinical model achieved an AUC of 0.751 (95% CI: 0.661-0.842) in the training cohort and 0.676 (95% CI: 0.525-0.827) in the validation cohort. All three models demonstrated good accuracy. In the training cohort, the AUC of the radiomic-clinical model was significantly greater than that of the clinical model (P=0.005) and the radiomic model (P=0.016). DCA confirmed the clinical practicality of incorporating radiomic features into the diagnostic process. CONCLUSION: In this study, we successfully developed and validated a T2WI-based machine learning model as an auxiliary tool for the preoperative differentiation between well/moderately and poorly differentiated CRC. This novel approach may assist clinicians in personalizing treatment strategies for patients and improving treatment efficacy.
Keywords: Radiomics; colorectal cancer; differentiation grade; machine learning; T2-weighted imaging
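The feature-screening step this abstract relies on, LASSO regression reducing 1781 radiomic features to a handful, can be sketched from scratch with proximal gradient descent (ISTA): a least-squares gradient step followed by soft-thresholding, which drives uninformative coefficients exactly to zero. The synthetic "radiomic" data and penalty strength below are illustrative assumptions:

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, iters=500):
    """LASSO by proximal gradient (ISTA): least squares + soft-thresholding."""
    n, p = X.shape
    lr = 1.0 / (2 * np.linalg.norm(X, 2) ** 2 / n)   # 1/L for the smooth part
    w = np.zeros(p)
    for _ in range(iters):
        grad = 2 * X.T @ (X @ w - y) / n
        w = w - lr * grad
        # proximal step for lam * |w|_1: shrink toward zero, clip at zero
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

rng = np.random.default_rng(2)
n, p = 200, 20
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[[0, 3, 7]] = [2.0, -1.5, 1.0]     # only 3 informative "radiomic features"
y = X @ true_w + 0.1 * rng.normal(size=n)

w = lasso_ista(X, y, lam=0.2)
selected = np.flatnonzero(np.abs(w) > 1e-6)
print(selected)
```

The nonzero coefficients identify the informative features; these survivors are what would then feed a downstream classifier such as the study's MLP.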
Monitoring seismicity in the southern Sichuan Basin using a machine learning workflow 被引量:1
16
作者 Kang Wang Jie Zhang +2 位作者 Ji Zhang Zhangyu Wang Huiyu Zhu 《Earthquake Research Advances》 CSCD 2024年第1期59-66,共8页
Monitoring seismicity in real time provides significant benefits for timely earthquake warning and analyses.In this study,we propose an automatic workflow based on machine learning(ML)to monitor seismicity in the sout... Monitoring seismicity in real time provides significant benefits for timely earthquake warning and analyses.In this study,we propose an automatic workflow based on machine learning(ML)to monitor seismicity in the southern Sichuan Basin of China.This workflow includes coherent event detection,phase picking,and earthquake location using three-component data from a seismic network.By combining Phase Net,we develop an ML-based earthquake location model called Phase Loc,to conduct real-time monitoring of the local seismicity.The approach allows us to use synthetic samples covering the entire study area to train Phase Loc,addressing the problems of insufficient data samples,imbalanced data distribution,and unreliable labels when training with observed data.We apply the trained model to observed data recorded in the southern Sichuan Basin,China,between September 2018 and March 2019.The results show that the average differences in latitude,longitude,and depth are 5.7 km,6.1 km,and 2 km,respectively,compared to the reference catalog.Phase Loc combines all available phase information to make fast and reliable predictions,even if only a few phases are detected and picked.The proposed workflow may help real-time seismic monitoring in other regions as well. 展开更多
Keywords: Earthquake monitoring; Machine learning; Local seismicity; Gaussian waveform; Sparse stations
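The key idea in the abstract above, training a locator on synthetic samples that cover the whole study area, can be illustrated with a toy nearest-neighbour locator. The station layout, constant P velocity, and k-NN regressor here are illustrative assumptions, not the PhaseLoc architecture:

```python
import numpy as np

# Hypothetical 5-station network around a 40 km x 40 km study area (km)
rng = np.random.default_rng(0)
stations = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0], [20.0, 60.0]])
v_p = 6.0  # km/s, assumed constant P-wave velocity

def features(src):
    """Relative P arrival times at all stations (the unknown origin time cancels)."""
    t = np.linalg.norm(stations - src, axis=1) / v_p
    return t - t.min()

# "Training": synthetic sources drawn uniformly over the entire study area
train_src = rng.uniform(0, 40, size=(5000, 2))
train_feat = np.array([features(s) for s in train_src])

def locate(obs_feat, k=5):
    """k-NN locator: average the positions of the closest synthetic samples."""
    d = np.linalg.norm(train_feat - obs_feat, axis=1)
    return train_src[np.argsort(d)[:k]].mean(axis=0)

true_src = np.array([12.3, 27.9])
est = locate(features(true_src))
print("estimated epicentre:", est, "error (km):", np.linalg.norm(est - true_src))
```

Because the synthetic samples blanket the area uniformly, the training set is balanced by construction and its labels are exact, which is precisely the advantage over training with a sparse observed catalog.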
Modulated-ISRJ rejection using online dictionary learning for synthetic aperture radar imagery (cited by 1)
17
Authors: WEI Shaopeng, ZHANG Lei, LU Jingyue, LIU Hongwei. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, No. 2, pp. 316-329 (14 pages)
In electromagnetic countermeasure circumstances, synthetic aperture radar (SAR) imagery usually suffers severe quality degradation from modulated interrupt-sampling repeater jamming (MISRJ), which typically exhibits considerable coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery based on online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning is then applied to separate real signals from jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Both simulated and real-measured SAR data confirm the advantages over traditional methods in suppressing time-varying MISRJs.
Keywords: synthetic aperture radar (SAR); modulated interrupt sampling jamming (MISRJ); online dictionary learning
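The dictionary-learning step described above can be sketched with scikit-learn's online (mini-batch) dictionary learner. The chirp-like "signal slices" below are synthetic stand-ins, not SAR data, and the atom count and sparsity penalty are arbitrary choices:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n_slices, slice_len = 300, 64
t = np.arange(slice_len) / slice_len

# Toy "real signal" slices: random two-tone mixtures standing in for echo slices
freqs = rng.uniform(2, 8, size=(n_slices, 2))
X = np.array([np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)
              for f1, f2 in freqs])

# Online (mini-batch) dictionary learning over the signal slices
dico = MiniBatchDictionaryLearning(n_components=30, alpha=0.5,
                                   batch_size=32, random_state=0)
code = dico.fit(X).transform(X)          # sparse codes under the learned atoms
recon = code @ dico.components_          # reconstruction from the learned dictionary

rel_err = np.linalg.norm(recon - X) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.3f}")
```

In an MISRJ setting, a slice containing jamming would be sparse-coded against a dictionary learned on signal-like slices; the part the dictionary reconstructs well approximates the real signal, and the residual carries the jamming energy to be suppressed.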
A Comprehensive Survey on Federated Learning in the Healthcare Area: Concept and Applications
18
Authors: Deepak Upreti, Eunmok Yang, Hyunil Kim, Changho Seo. Computer Modeling in Engineering &amp; Sciences (SCIE, EI), 2024, No. 9, pp. 2239-2274 (36 pages)
Federated learning is an innovative machine learning technique that addresses centralized data storage issues while maintaining privacy and security. It involves constructing machine learning models using datasets spread across several data centers, including medical facilities, clinical research facilities, Internet of Things devices, and even mobile devices. The main goal of federated learning is to build robust models that benefit from the collective knowledge of these disparate datasets without centralizing sensitive information, reducing the risk of data loss, privacy breaches, or data exposure. The application of federated learning in the healthcare industry holds significant promise due to the wealth of data generated from sources such as patient records, medical imaging, wearable devices, and clinical research surveys. This research conducts a systematic evaluation of federated learning approaches in healthcare, highlighting essential issues for their selection and implementation and the evaluation metrics employed. In addition, this study notes the increasing interest in federated learning applications in healthcare among scholars and provides foundations for further studies.
Keywords: Federated learning; Artificial intelligence; Machine learning; Privacy; Healthcare
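The core mechanism the survey above covers, training across data owners without pooling their records, can be sketched as a minimal FedAvg loop. The three "sites", their data, and the logistic-regression model are hypothetical stand-ins for hospital datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0, 0.5])

def make_site(n):
    """Synthetic per-site dataset; raw records never leave this function's owner."""
    X = rng.normal(size=(n, 3))
    y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)
    return X, y

sites = [make_site(n) for n in (120, 200, 80)]  # three independent data owners

def local_update(w, X, y, lr=0.5, epochs=5):
    """Full-batch logistic-regression gradient steps on local data only."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))            # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)     # gradient of the log-loss
    return w

w = np.zeros(3)
for _ in range(20):                             # communication rounds
    local_ws = [local_update(w.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    w = np.average(local_ws, axis=0, weights=sizes)  # FedAvg: size-weighted mean

acc = np.mean([((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean() for X, y in sites])
print(f"average accuracy across sites: {acc:.3f}")
```

Only the weight vectors cross site boundaries; production systems layer secure aggregation and differential privacy on top of this exchange, which is where most of the issues surveyed above arise.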
Machine learning for predicting the outcome of terminal ballistics events
19
Authors: Shannon Ryan, Neeraj Mohan Sushma, Arun Kumar AV, Julian Berk, Tahrima Hashem, Santu Rana, Svetha Venkatesh. Defence Technology (防务技术) (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 14-26 (13 pages)
Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small- and medium-calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public-release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, it is required for applications such as lethality/survivability analysis. To circumvent this, we incorporate expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing the ability to fit experimental data accurately when it is available and to revert to the physics-based model when it is not. The resulting models demonstrate high predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions, significantly more diverse than that achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
Keywords: Machine learning; Artificial intelligence; Physics-informed machine learning; Terminal ballistics; Armour
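One of the physics-informed strategies named above, using a physics-based model as the Gaussian prior mean, can be sketched by fitting a zero-mean GP to the residuals (data minus physics model): near the data the GP correction tracks the measurements, and far from it the correction decays to zero, so predictions revert to the physics model. The quadratic "penetration law", the data, and the kernel length-scale below are illustrative assumptions:

```python
import numpy as np

def physics(v):
    """Assumed penetration-depth law: depth scales with kinetic energy (~v^2)."""
    return 2e-6 * v**2

def rbf(a, b, ell=150.0):
    """Squared-exponential kernel between 1-D velocity arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(0)
v_train = rng.uniform(500, 1500, 25)                           # impact velocities, m/s
d_train = physics(v_train) * 1.2 + 0.05 * rng.normal(size=25)  # data deviates from physics

res = d_train - physics(v_train)          # residuals the zero-mean GP learns
K = rbf(v_train, v_train) + 1e-3 * np.eye(25)
alpha = np.linalg.solve(K, res)           # GP weights for the posterior mean

def predict(v):
    """Physics prior mean plus GP correction from the training residuals."""
    v = np.atleast_1d(np.asarray(v, dtype=float))
    return physics(v) + rbf(v, v_train) @ alpha

print("in-range prediction:", predict(1000.0))
print("far extrapolation:", predict(5000.0), "vs physics:", physics(5000.0))
```

At 5000 m/s, far outside the 500-1500 m/s training range, the kernel terms vanish and the prediction collapses exactly onto the physics model, which is the reverting behaviour the abstract describes.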
Effectiveness of hybrid ensemble machine learning models for landslide susceptibility analysis: Evidence from Shimla district of North-west Indian Himalayan region
20
Authors: SHARMA Aastha, SAJJAD Haroon, RAHAMAN Md Hibjur, SAHA Tamal Kanti, BHUYAN Nirsobha. Journal of Mountain Science (SCIE, CSCD), 2024, No. 7, pp. 2368-2393 (26 pages)
The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of landslide hazards. This paper attempts to assess landslide susceptibility in Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was divided into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected using a multicollinearity test. The relationship between past landslide occurrences and the influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and the other bagging ensemble models for landslide susceptibility. The results show that the largest area falls under the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. Average annual rainfall, slope, lithology, soil texture and earthquake magnitude were identified as the influencing factors for very high landslide susceptibility; soil texture, lineament density and elevation were attributed to high and moderate susceptibility. The study therefore calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
Keywords: Landslide susceptibility; Site-specific factors; Machine learning models; Hybrid ensemble learning; Geospatial techniques; Himalayan region
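The frequency ratio method mentioned in the abstract above relates past landslides to factor classes: for each class, FR is the share of landslide cells in that class divided by the class's share of all cells, with FR greater than 1 marking landslide-prone classes. The slope-class counts below are illustrative numbers, not values from the study:

```python
# Hypothetical raster counts for one factor (slope), per class:
# class name -> (landslide cells in class, total cells in class)
classes = {
    "slope 0-15":  (50, 4000),
    "slope 15-30": (300, 3000),
    "slope >30":   (650, 3000),
}

total_ls = sum(ls for ls, _ in classes.values())     # all landslide cells
total_cells = sum(n for _, n in classes.values())    # all cells in the study area

# FR = (landslide share of class) / (area share of class)
fr = {c: (ls / total_ls) / (n / total_cells) for c, (ls, n) in classes.items()}
for c, v in fr.items():
    print(f"{c}: FR = {v:.3f}")
```

Summing each cell's FR values across all factors gives a landslide susceptibility index, which is then binned into the low-to-very-high zones reported in the abstract.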