Production forecasting is a crucial and difficult step in assessing whether a development strategy will be profitable enough. The development history of other reservoirs in the same class is commonly studied to make predictions more accurate. However, the permeability field, well patterns, and development regime must all be similar for two reservoirs to be considered in the same class. Because such similar reservoirs are difficult to find, very little experience from other reservoirs is actually usable, even though a large amount of historical information on numerous reservoirs exists. This paper proposes a learn-to-learn method, which can better utilize a vast amount of historical data from various reservoirs. Intuitively, the proposed method first learns how to learn from samples before directly learning rules in samples. Technically, by utilizing gradients from networks with independent parameters and copied structure in each class of reservoirs, the proposed network obtains optimal shared initial parameters, which are regarded as transferable information across different classes. On this basis, the network is able to predict future production indices for the target reservoir by training with only very limited samples collected from reservoirs in the same class. Two cases further demonstrate its superior accuracy over other widely used network methods.
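A minimal sketch of the shared-initialization idea described above, using first-order meta-learning on synthetic linear-regression "reservoir classes"; the task generator, model size, and step sizes are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 8
w_base = rng.normal(size=n_features)          # shared structure across reservoir classes

def make_task():
    """A synthetic 'reservoir class': a linear map from features to a production index."""
    w_true = w_base + 0.3 * rng.normal(size=n_features)
    X = rng.normal(size=(40, n_features))
    y = X @ w_true + 0.1 * rng.normal(size=40)
    return (X[:20], y[:20]), (X[20:], y[20:])   # support / query split

def grad(w, X, y):
    """Gradient of mean-squared error for a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

theta = np.zeros(n_features)                  # shared initial parameters (meta-learned)
inner_lr, outer_lr, inner_steps = 0.05, 0.01, 5

for _ in range(3000):                         # outer loop over sampled classes
    (Xs, ys), (Xq, yq) = make_task()
    w = theta.copy()                          # copied structure, independent parameters
    for _ in range(inner_steps):              # inner adaptation on the support set
        w -= inner_lr * grad(w, Xs, ys)
    theta -= outer_lr * grad(w, Xq, yq)       # first-order meta-update at adapted weights

# Few-shot adaptation from the learned initialization to a new task in the same family
(Xs, ys), (Xq, yq) = make_task()
w = theta.copy()
for _ in range(inner_steps):
    w -= inner_lr * grad(w, Xs, ys)
print("query MSE after few-shot adaptation:", float(np.mean((Xq @ w - yq) ** 2)))
```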
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, the tutorial provides exemplary case studies from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to establish itself as a reference for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
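As a concrete illustration of the FL training loop discussed above, here is a minimal federated averaging (FedAvg) round in plain NumPy; the client data, the logistic-regression local model, and the hyperparameters are illustrative assumptions and are not tied to any specific framework from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5                                   # number of features

def local_update(w, X, y, lr=0.1, epochs=5):
    """Client-side training: a few epochs of gradient descent on local data only."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # logistic regression
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three clients, each holding private data that never leaves the device
w_true = rng.normal(size=d)
clients = []
for n in (80, 120, 200):
    X = rng.normal(size=(n, d))
    y = (X @ w_true + 0.3 * rng.normal(size=n) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(d)
for _ in range(20):                     # communication rounds
    locals_ = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # FedAvg: only model weights are aggregated, weighted by local dataset size
    w_global = np.average(locals_, axis=0, weights=sizes)

print("global model after federated rounds:", np.round(w_global, 2))
```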
Neural networks are becoming ubiquitous in various areas of physics as a successful machine learning (ML) technique for addressing different tasks. Based on ML techniques, we propose and experimentally demonstrate an efficient method for state reconstruction of the widely used Sagnac polarization-entangled photon source. By properly modeling the target states, a multi-output fully connected neural network is well trained using only six of the sixteen measurement bases of the standard tomography technique, and hence our method reduces the resource consumption without loss of accuracy. We demonstrate the ability of the neural network to predict state parameters with high precision by using both simulated and experimental data. Explicitly, the mean absolute error for all the parameters is below 0.05 for the simulated data, and a mean fidelity of 0.99 is achieved for experimentally generated states. Our method could be generalized to estimate other kinds of states, as well as to other quantum information tasks.
Many magnetohydrodynamic stability analyses require the generation of a set of equilibria with a fixed safety factor q-profile while varying other plasma parameters. A neural network (NN)-based approach that facilitates such a process is investigated. Both multilayer perceptron (MLP)-based NN and convolutional neural network (CNN) models are trained to map the q-profile to the plasma current density J-profile, and vice versa, while satisfying the Grad–Shafranov radial force balance constraint. When the initial target models are trained using a database of semi-analytically constructed numerical equilibria, an initial CNN with one convolutional layer is found to perform better than an initial MLP model. In particular, a trained initial CNN model can also predict the q- or J-profile for experimental tokamak equilibria. The performance of both initial target models is further improved by fine-tuning the training database, i.e. by adding realistic experimental equilibria with Gaussian noise. The fine-tuned target models, referred to as the fine-tuned MLP and fine-tuned CNN, reproduce the target q- or J-profile well across multiple tokamak devices. As an important application, these NN-based equilibrium profile convertors can be used to provide a good initial guess for iterative equilibrium solvers, where the desired input quantity is the safety factor instead of the plasma current density.
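A toy sketch of the profile-to-profile mapping idea above (q-profile to J-profile), using a multi-output MLP regressor and Gaussian-noise augmentation as a stand-in for the fine-tuning step; the analytic profile family used to generate data is an assumption for illustration and is not Grad–Shafranov consistent.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
rho = np.linspace(0.0, 1.0, 32)          # normalized radial grid

def profile_pair(q0, qa, nu):
    """Toy parametric q- and J-profiles (stand-in only, not force-balance consistent)."""
    q = q0 + (qa - q0) * rho**2
    J = (1.0 - rho**2) ** nu / q0        # crude coupling through the on-axis value
    return q, J

params = rng.uniform([0.8, 2.5, 0.5], [1.5, 6.0, 3.0], size=(3000, 3))
Q, J = zip(*(profile_pair(*p) for p in params))
Q, J = np.array(Q), np.array(J)
Q_tr, Q_te, J_tr, J_te = train_test_split(Q, J, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=400,
                   warm_start=True, random_state=0)
mlp.fit(Q_tr, J_tr)                      # initial training on clean synthetic equilibria

# "Fine-tuning": continue training with Gaussian-noise-augmented samples added
Q_aug = np.vstack([Q_tr, Q_tr + rng.normal(scale=0.02, size=Q_tr.shape)])
J_aug = np.vstack([J_tr, J_tr])
mlp.fit(Q_aug, J_aug)                    # warm_start=True resumes from current weights

rmse = np.sqrt(np.mean((mlp.predict(Q_te) - J_te) ** 2))
print("test RMSE of the q -> J convertor:", round(float(rmse), 4))
```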
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
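A hedged sketch of the feature pipeline described above: contextual text embeddings from DistilBERT concatenated with tabular "key features" and fed to an extreme gradient boosting classifier. The tiny in-line records and column meanings are hypothetical stand-ins for GTD attributes; only the Hugging Face transformers and xgboost calls are standard.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from xgboost import XGBClassifier

# Hypothetical stand-in records; real work would read text/tabular attributes from the GTD.
texts = ["armed assault on a checkpoint", "improvised device detonated near a market"]
key_features = np.array([[1, 0, 3], [0, 1, 5]], dtype=float)  # e.g. encoded region/weapon/...
labels = np.array([0, 1])

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
bert = AutoModel.from_pretrained("distilbert-base-uncased")

with torch.no_grad():
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    # First-token embedding of the last hidden layer as a contextual text feature
    text_emb = bert(**enc).last_hidden_state[:, 0, :].numpy()

X = np.hstack([text_emb, key_features])       # contextual features + key features
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X, labels)
print(clf.predict(X))
```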
With the rapid development of virtual reality technology, it has been widely used in the field of education. Virtual reality can promote the development of learning transfer, which is an effective way for learners to learn. Therefore, this paper describes how to use virtual reality technology to achieve learning transfer, so as to meet teaching goals and improve learning efficiency.
Vascular etiology is the second most prevalent cause of cognitive impairment globally. Endothelin-1, which is produced and secreted by endothelial cells and astrocytes, is implicated in the pathogenesis of stroke. However, the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood. Here, using mice in which astrocytic endothelin-1 was overexpressed, we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia (1 hour of ischemia; 7 days, 28 days, or 3 months of reperfusion). We also revealed that astrocytic endothelin-1 overexpression contributed to neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion. Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6, which were differentially expressed in the brain, were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke. Moreover, the levels of the enriched differentially expressed proteins were closely related to lipid metabolism, as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis. Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine, sphingomyelin, and phosphatidic acid. Overall, this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an "either/or" scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments and constraints for AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of achieving global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model, balancing AI and physics, is an achievable goal.
Benefiting from the development of federated learning (FL) and distributed communication systems, large-scale intelligent applications have become possible. Distributed devices not only provide adequate training data, but also introduce risks of privacy leakage and energy consumption. How to optimize energy consumption in distributed communication systems, while ensuring user privacy and model accuracy, has become an urgent challenge. In this paper, we define FL as a three-layer architecture comprising users, agents, and a server. To find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we model the FL training process as a game. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in the single game, and then find the incentive mechanism that meets social norms through the repeated game. The experimental results show that the Nash equilibrium we obtain accords with real-world behavior, and the proposed incentive mechanism can also encourage users to submit high-quality data in FL. Over multiple rounds of play, the incentive mechanism helps all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.
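To make the game-theoretic setup above concrete, here is a minimal sketch that checks pure-strategy Nash equilibria for a toy two-player stage game between a user (submit low- or high-quality data) and a server (withhold or pay a reward); the payoff numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np
from itertools import product

# Rows: user strategy (0 = low-quality data, 1 = high-quality data)
# Cols: server strategy (0 = no reward,      1 = pay reward)
# Payoffs loosely reflect energy/privacy cost for the user and accuracy gain for the server.
user_payoff   = np.array([[1.0, 2.0],
                          [0.0, 3.0]])
server_payoff = np.array([[0.5, -0.5],
                          [1.0,  2.5]])

def pure_nash(A, B):
    """Return all (row, col) pairs where each player best-responds to the other."""
    eqs = []
    for i, j in product(range(A.shape[0]), range(A.shape[1])):
        if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
            eqs.append((i, j))
    return eqs

print("pure-strategy Nash equilibria:", pure_nash(user_payoff, server_payoff))
# With these illustrative payoffs both (low-quality, no reward) and (high-quality, reward)
# are stage-game equilibria; a repeated-game incentive mechanism is what steers the
# players toward the high-quality outcome.
```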
High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high-mobility environment. To protect data privacy and improve data learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as the local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. To improve resource scheduling efficiency in FBL, a double Davidon–Fletcher–Powell (DDFP) algorithm is presented to solve the time slot allocation and RIS configuration problem. Based on the results of resource scheduling, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
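A rough sketch of a broad-learning-style local model in the spirit of the BFCM above: random feature and enhancement nodes followed by a ridge-regression readout, so local client training reduces to one closed-form solve instead of deep backpropagation. The node counts, activations, and regularization are assumptions; this is not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(3)

def broad_fit(X, Y, n_feature_nodes=40, n_enhance_nodes=60, lam=1e-3):
    """Broad-learning-style fit: random mappings plus a closed-form ridge readout."""
    Wf = rng.normal(size=(X.shape[1], n_feature_nodes))
    Z = np.tanh(X @ Wf)                              # feature nodes
    We = rng.normal(size=(n_feature_nodes, n_enhance_nodes))
    H = np.tanh(Z @ We)                              # enhancement nodes
    A = np.hstack([Z, H])
    # Ridge solution: W = (A^T A + lam I)^-1 A^T Y
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def broad_predict(model, X):
    Wf, We, W = model
    Z = np.tanh(X @ Wf)
    H = np.tanh(Z @ We)
    return np.hstack([Z, H]) @ W

# Toy local client data
X = rng.normal(size=(200, 10))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(200, 1))
model = broad_fit(X, Y)
print("local training MSE:", float(np.mean((broad_predict(model, X) - Y) ** 2)))
```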
The aim of this study is to investigate the impacts of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low susceptibility zone or the buffer zone. These results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, providing a reference for subsequent researchers aiming to obtain a more reasonable LSM.
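A compact sketch of the kind of comparison described above: the same random forest is trained under two hypothetical non-landslide sampling strategies and scored by ROC-AUC. The synthetic "conditioning factor" data and the 10%/30% quantile cut-offs are stand-in assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic conditioning factors; a hidden score decides where landslides occur.
X_all = rng.normal(size=(5000, 6))
score = X_all @ rng.normal(size=6)
landslide = score > np.quantile(score, 0.9)            # top 10% of terrain
pos = X_all[landslide]

def negatives(strategy):
    """Two stand-in strategies: random non-landslide cells vs. very-low-score cells."""
    if strategy == "random":
        pool = X_all[~landslide]
    else:
        pool = X_all[score < np.quantile(score, 0.3)]   # 'very low' susceptibility zone
    return pool[rng.choice(len(pool), size=len(pos), replace=False)]

for strategy in ("random", "very_low_zone"):
    X = np.vstack([pos, negatives(strategy)])
    y = np.r_[np.ones(len(pos)), np.zeros(len(pos))]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"{strategy:>14s}  ROC-AUC = {auc:.3f}")
```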
Thunderstorm gusts are a common form of severe convective weather in the warm season in North China, and it is of great importance to forecast them correctly. At present, the forecasting of thunderstorm gusts is mainly based on traditional subjective methods, which cannot provide high-resolution, high-frequency gridded forecasts based on multiple observation sources. In this paper, we propose a deep learning method called Thunderstorm Gusts TransU-net (TG-TransUnet) to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology (IUM) with a lead time of 1 to 6 h. To determine the specific range of thunderstorm gusts, we combine three meteorological variables: radar reflectivity factor, lightning location, and 1-h maximum instantaneous wind speed from automatic weather stations (AWSs), and obtain a reasonable ground truth of thunderstorm gusts. We then transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture, which is based on convolutional neural networks and a transformer. The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021–23 are used as the training, validation, and testing datasets. Finally, the performance of TG-TransUnet is compared with that of other methods. The results show that TG-TransUnet gives the best prediction results at 1–6 h. The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
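A tiny sketch of the ground-truth construction step described above: a gridded thunderstorm-gust mask is kept only where radar reflectivity, lightning occurrence, and AWS maximum wind all exceed thresholds. The grid size and the threshold values are illustrative assumptions, not the IUM's operational settings.

```python
import numpy as np

rng = np.random.default_rng(5)
ny, nx = 128, 128                                   # toy analysis grid

reflectivity = rng.uniform(0, 65, size=(ny, nx))    # dBZ
lightning    = rng.poisson(0.2, size=(ny, nx))      # flash counts within the hour
max_wind     = rng.uniform(0, 30, size=(ny, nx))    # 1-h max instantaneous wind, m/s

# Assumed thresholds: convective echo, at least one flash, and gale-level gusts
gust_truth = (reflectivity >= 35.0) & (lightning >= 1) & (max_wind >= 17.2)

print("grid cells labelled as thunderstorm gusts:", int(gust_truth.sum()))
# In the paper's setting, a binary field like this (per lead time) becomes the target
# image for the image-to-image TG-TransUnet forecast.
```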
Magnesium (Mg) alloys have shown great promise as both structural and biomedical materials, but poor corrosion resistance limits their further application. In this work, to avoid time-consuming and laborious experimental trials, a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloys containing intermetallics, from both thermodynamic and kinetic perspectives. The stable binary Mg intermetallics with a low equilibrium potential difference with respect to the Mg matrix are first identified. Then, the hydrogen adsorption energies on the surfaces of these Mg intermetallics are calculated, and the corrosion exchange current density is further obtained from a hydrogen evolution reaction (HER) kinetic model. Several intermetallics, e.g. Y3Mg, Y2Mg and La5Mg, are identified as promising intermetallics that might effectively hinder the cathodic HER. Furthermore, machine learning (ML) models are developed to predict Mg intermetallics with proper hydrogen adsorption energy, employing the work function (W_f) and the weighted first ionization energy (WFIE). The generalization of the ML models is tested on five new binary Mg intermetallics, with an average root mean square error (RMSE) of 0.11 eV. This study not only predicts some promising binary Mg intermetallics that may suppress galvanic corrosion, but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloys, which can be extended to ternary Mg alloys or other alloy systems.
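A hedged sketch of the final ML step above: regress hydrogen adsorption energy on the two descriptors, work function (W_f) and weighted first ionization energy (WFIE), and report RMSE on held-out intermetallics. The descriptor table and target values here are synthetic placeholders; in the paper they come from the first-principles workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(6)

# Placeholder descriptor table: [work function (eV), weighted first ionization energy (eV)]
X = np.column_stack([rng.uniform(2.5, 4.5, 80),     # W_f
                     rng.uniform(5.0, 8.0, 80)])    # WFIE
# Placeholder target: hydrogen adsorption energy (eV) with an assumed smooth dependence
E_ads = 0.6 * X[:, 0] - 0.4 * X[:, 1] + 0.05 * rng.normal(size=80)

X_tr, X_te, y_tr, y_te = train_test_split(X, E_ads, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"held-out RMSE: {rmse:.3f} eV")   # the paper reports ~0.11 eV on new intermetallics
```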
BACKGROUND Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-d mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
BACKGROUND Prediction of the differentiation grade of colorectal cancer (CRC) based on magnetic resonance imaging (MRI) has not yet been reported. Developing a non-invasive model to predict the differentiation grade of CRC is of great value. AIM To develop and validate machine learning-based models for predicting the differentiation grade of CRC based on T2-weighted images (T2WI). METHODS We retrospectively collected the preoperative imaging and clinical data of 315 patients with CRC who underwent surgery from March 2018 to July 2023. Patients were randomly assigned to a training cohort (n = 220) or a validation cohort (n = 95) at a 7:3 ratio. Lesions were delineated layer by layer on high-resolution T2WI. Least absolute shrinkage and selection operator regression was applied to screen for radiomic features. Radiomics and clinical models were constructed using the multilayer perceptron (MLP) algorithm. These radiomic features and clinically relevant variables (selected based on a significance level of P < 0.05 in the training set) were used to construct radiomics-clinical models. The performance of the three models (clinical, radiomic, and radiomic-clinical) was evaluated using the area under the curve (AUC), calibration curves, and decision curve analysis (DCA). RESULTS After feature selection, eight radiomic features were retained from the initial 1781 features to construct the radiomic model. Eight different classifiers, including logistic regression, support vector machine, k-nearest neighbours, random forest, extreme trees, extreme gradient boosting, light gradient boosting machine, and MLP, were used to construct the model, with MLP demonstrating the best diagnostic performance. The AUC of the radiomic-clinical model was 0.862 (95%CI: 0.796-0.927) in the training cohort and 0.761 (95%CI: 0.635-0.887) in the validation cohort. The AUC for the radiomic model was 0.796 (95%CI: 0.723-0.869) in the training cohort and 0.735 (95%CI: 0.604-0.866) in the validation cohort. The clinical model achieved an AUC of 0.751 (95%CI: 0.661-0.842) in the training cohort and 0.676 (95%CI: 0.525-0.827) in the validation cohort. All three models demonstrated good accuracy. In the training cohort, the AUC of the radiomic-clinical model was significantly greater than that of the clinical model (P = 0.005) and the radiomic model (P = 0.016). DCA confirmed the clinical practicality of incorporating radiomic features into the diagnostic process. CONCLUSION In this study, we successfully developed and validated a T2WI-based machine learning model as an auxiliary tool for the preoperative differentiation between well/moderately and poorly differentiated CRC. This novel approach may assist clinicians in personalizing treatment strategies for patients and improving treatment efficacy.
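A simplified sketch of the modelling chain above: LASSO-based feature screening followed by an MLP classifier evaluated with AUC. The feature counts and the data are synthetic placeholders for the 1781 radiomic features and the clinical variables.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n, p = 315, 200                          # placeholder: 315 patients, 200 candidate features
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + rng.normal(size=n) > 0).astype(int)   # toy grade label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

# LASSO screening: keep only features with non-zero coefficients
lasso = LassoCV(cv=5, random_state=0).fit(X_tr_s, y_tr)
keep = np.flatnonzero(lasso.coef_)
print("radiomic features retained:", len(keep))

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
mlp.fit(X_tr_s[:, keep], y_tr)
auc = roc_auc_score(y_te, mlp.predict_proba(X_te_s[:, keep])[:, 1])
print(f"validation AUC: {auc:.3f}")
```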
Monitoring seismicity in real time provides significant benefits for timely earthquake warning and analyses. In this study, we propose an automatic workflow based on machine learning (ML) to monitor seismicity in the southern Sichuan Basin of China. This workflow includes coherent event detection, phase picking, and earthquake location using three-component data from a seismic network. Combined with PhaseNet, we develop an ML-based earthquake location model, called PhaseLoc, to conduct real-time monitoring of the local seismicity. The approach allows us to use synthetic samples covering the entire study area to train PhaseLoc, addressing the problems of insufficient data samples, imbalanced data distribution, and unreliable labels when training with observed data. We apply the trained model to observed data recorded in the southern Sichuan Basin, China, between September 2018 and March 2019. The results show that the average differences in latitude, longitude, and depth are 5.7 km, 6.1 km, and 2 km, respectively, compared to the reference catalog. PhaseLoc combines all available phase information to make fast and reliable predictions, even if only a few phases are detected and picked. The proposed workflow may also help real-time seismic monitoring in other regions.
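A toy sketch of the synthetic-training idea above: source locations are sampled over the study volume, straight-ray P arrival times to a small station network are computed, and a regressor learns the mapping from picks back to location. The constant velocity, station layout, and network size are simplifying assumptions, not the paper's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
vp = 6.0                                                   # assumed constant P velocity, km/s
stations = rng.uniform([0, 0], [100, 100], size=(8, 2))    # station x, y in km

def travel_times(src):
    """Straight-ray P travel times from a source (x, y, depth) to all stations."""
    dx = stations - src[:2]
    dist = np.sqrt((dx ** 2).sum(axis=1) + src[2] ** 2)
    return dist / vp

# Synthetic catalog covering the whole study area (no reliance on observed labels)
sources = rng.uniform([0, 0, 0], [100, 100, 10], size=(5000, 3))
picks = np.array([travel_times(s) for s in sources])
picks += 0.05 * rng.normal(size=picks.shape)               # picking noise

X_tr, X_te, y_tr, y_te = train_test_split(picks, sources, random_state=0)
loc_net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
loc_net.fit(X_tr, y_tr)

err = np.abs(loc_net.predict(X_te) - y_te).mean(axis=0)
print("mean |error| in x, y, depth (km):", np.round(err, 2))
```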
In electromagnetic countermeasure circumstances, synthetic aperture radar (SAR) imagery usually suffers from severe quality degradation caused by modulated interrupt sampling repeater jamming (MISRJ), which usually exhibits considerable coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery based on online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning is then applied to separate real signals from the jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Both simulated and real-measured SAR data are used to confirm its advantages over traditional methods in suppressing time-varying MISRJs.
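A small sketch of the slice-extraction step above: fast-time autocorrelation of a periodically gated interference reveals the repeater period, which is then used to cut the signal into jamming slices for the dictionary-learning stage. The simulated waveform, duty cycle, and search window are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 4096
t = np.arange(n)

# Simulated fast-time signal: weak target return + periodically gated repeater jamming
target = 0.2 * np.cos(2 * np.pi * 0.01 * t)
period, duty = 256, 0.25                         # assumed repeater period and duty cycle
gate = (t % period) < duty * period
jamming = 2.0 * gate * np.cos(2 * np.pi * 0.0625 * t)
x = target + jamming + 0.1 * rng.normal(size=n)

# Fast-time autocorrelation; the periodic gating produces peaks at multiples of the period
ac = np.correlate(x, x, mode="full")[n - 1:]
ac /= ac[0]
lag_min = 96                                     # skip the zero-lag mainlobe region
est_period = lag_min + int(np.argmax(ac[lag_min:lag_min + 2 * period]))
print("estimated repeater period (samples):", est_period)

# Slices cut at the estimated period would then feed the online dictionary learning stage
slices = x[: (n // est_period) * est_period].reshape(-1, est_period)
print("number of jamming slices:", slices.shape[0])
```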
Federated learning is an innovative machine learning technique that deals with centralized data storage issues while maintaining privacy and security. It involves constructing machine learning models using datasets spread across several data centers, including medical facilities, clinical research facilities, Internet of Things devices, and even mobile devices. The main goal of federated learning is to develop robust models that benefit from the collective knowledge of these disparate datasets without centralizing sensitive information, reducing the risk of data loss, privacy breaches, or data exposure. The application of federated learning in the healthcare industry holds significant promise due to the wealth of data generated from various sources, such as patient records, medical imaging, wearable devices, and clinical research surveys. This research conducts a systematic evaluation of federated learning strategies in healthcare and highlights essential issues for the selection and implementation of federated learning approaches in this field. It offers a systematic analysis of federated learning in the healthcare domain, encompassing the evaluation metrics employed. In addition, this study highlights the increasing interest in federated learning applications in healthcare among scholars and provides foundations for further studies.
Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small and medium calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis, such capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and then revert to the physics-based model when it is not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than that achievable from any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide some general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
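A short sketch of the "enforced monotonicity" idea above using XGBoost's monotone_constraints option: penetration depth is constrained to increase with impact velocity and decrease with obliquity and target hardness. The synthetic data, feature choice, and constraint signs are illustrative assumptions, not the paper's calibrated models.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(10)
n = 2000

velocity  = rng.uniform(200, 1500, n)      # impact velocity, m/s
obliquity = rng.uniform(0, 60, n)          # impact obliquity, degrees
hardness  = rng.uniform(200, 600, n)       # target hardness (illustrative units)

# Synthetic penetration depth: grows with velocity, shrinks with obliquity and hardness
depth = (1e-4 * velocity**1.4 * np.cos(np.radians(obliquity)) / (hardness / 300)
         + rng.normal(scale=0.5, size=n))

X = np.column_stack([velocity, obliquity, hardness])
X_tr, X_te, y_tr, y_te = train_test_split(X, depth, random_state=0)

# Physics-informed constraint: +1 monotonically increasing, -1 decreasing, per feature
model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05,
                     monotone_constraints="(1,-1,-1)")
model.fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"test RMSE: {rmse:.3f}")
# Monotonicity keeps extrapolated predictions physically plausible outside the training range.
```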
The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper attempts to assess landslide susceptibility in the Shimla district of the northwest Indian Himalayan region. It examines the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area was found under the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. The factors, namely average annual rainfall, slope, lithology, soil texture and earthquake magnitude, have been identified as the influencing factors for very high landslide susceptibility. Soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
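A small sketch of the frequency ratio step mentioned above: for one categorical conditioning factor, the FR of each class is the share of landslide cells in that class divided by the share of total area in that class. The class labels and cell counts are hypothetical.

```python
import pandas as pd

# Hypothetical cell counts per lithology class across the study area
area = pd.DataFrame({
    "lithology":       ["granite", "schist", "phyllite", "alluvium"],
    "total_cells":     [40000, 25000, 20000, 15000],
    "landslide_cells": [   300,   450,    520,     80],
})

area["pct_landslides"]  = area["landslide_cells"] / area["landslide_cells"].sum()
area["pct_area"]        = area["total_cells"] / area["total_cells"].sum()
area["frequency_ratio"] = area["pct_landslides"] / area["pct_area"]
print(area[["lithology", "frequency_ratio"]])
# FR > 1 marks classes where landslides are over-represented; these ratings feed the
# machine learning models as factor weights.
```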
基金This work is supported by the National Natural Science Foundation of China under Grant 52274057,52074340 and 51874335the Major Scientific and Technological Projects of CNPC under Grant ZD2019-183-008+2 种基金the Major Scientific and Technological Projects of CNOOC under Grant CCL2022RCPS0397RSNthe Science and Technology Support Plan for Youth Innovation of University in Shandong Province under Grant 2019KJH002111 Project under Grant B08028.
文摘To assess whether a development strategy will be profitable enough,production forecasting is a crucial and difficult step in the process.The development history of other reservoirs in the same class tends to be studied to make predictions accurate.However,the permeability field,well patterns,and development regime must all be similar for two reservoirs to be considered in the same class.This results in very few available experiences from other reservoirs even though there is a lot of historical information on numerous reservoirs because it is difficult to find such similar reservoirs.This paper proposes a learn-to-learn method,which can better utilize a vast amount of historical data from various reservoirs.Intuitively,the proposed method first learns how to learn samples before directly learning rules in samples.Technically,by utilizing gradients from networks with independent parameters and copied structure in each class of reservoirs,the proposed network obtains the optimal shared initial parameters which are regarded as transferable information across different classes.Based on that,the network is able to predict future production indices for the target reservoir by only training with very limited samples collected from reservoirs in the same class.Two cases further demonstrate its superiority in accuracy to other widely-used network methods.
基金the R&D&I,Spain grants PID2020-119478GB-I00 and,PID2020-115832GB-I00 funded by MCIN/AEI/10.13039/501100011033.N.Rodríguez-Barroso was supported by the grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by“ESF Investing in your future”Spain.J.Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR.J.Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial(CDTI)through the AI4ES projectthe Department of Education of the Basque Government(consolidated research group MATHMODE,IT1456-22)。
文摘When data privacy is imposed as a necessity,Federated learning(FL)emerges as a relevant artificial intelligence field for developing machine learning(ML)models in a distributed and decentralized environment.FL allows ML models to be trained on local devices without any need for centralized data transfer,thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties.This paradigm has gained momentum in the last few years,spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources.By virtue of FL,models can be learned from all such distributed data sources while preserving data privacy.The aim of this paper is to provide a practical tutorial on FL,including a short methodology and a systematic analysis of existing software frameworks.Furthermore,our tutorial provides exemplary cases of study from three complementary perspectives:i)Foundations of FL,describing the main components of FL,from key elements to FL categories;ii)Implementation guidelines and exemplary cases of study,by systematically examining the functionalities provided by existing software frameworks for FL deployment,devising a methodology to design a FL scenario,and providing exemplary cases of study with source code for different ML approaches;and iii)Trends,shortly reviewing a non-exhaustive list of research directions that are under active investigation in the current FL landscape.The ultimate purpose of this work is to establish itself as a referential work for researchers,developers,and data scientists willing to explore the capabilities of FL in practical applications.
基金Project supported by the National Key Research and Development Program of China (Grant No.2019YFA0705000)Leading-edge technology Program of Jiangsu Natural Science Foundation (Grant No.BK20192001)the National Natural Science Foundation of China (Grant No.11974178)。
文摘Neural networks are becoming ubiquitous in various areas of physics as a successful machine learning(ML)technique for addressing different tasks.Based on ML technique,we propose and experimentally demonstrate an efficient method for state reconstruction of the widely used Sagnac polarization-entangled photon source.By properly modeling the target states,a multi-output fully connected neural network is well trained using only six of the sixteen measurement bases in standard tomography technique,and hence our method reduces the resource consumption without loss of accuracy.We demonstrate the ability of the neural network to predict state parameters with a high precision by using both simulated and experimental data.Explicitly,the mean absolute error for all the parameters is below 0.05 for the simulated data and a mean fidelity of 0.99 is achieved for experimentally generated states.Our method could be generalized to estimate other kinds of states,as well as other quantum information tasks.
基金supported by National Natural Science Foundation of China (Nos. 12205033, 12105317, 11905022 and 11975062)Dalian Youth Science and Technology Project (No. 2022RQ039)+1 种基金the Fundamental Research Funds for the Central Universities (No. 3132023192)the Young Scientists Fund of the Natural Science Foundation of Sichuan Province (No. 2023NSFSC1291)
文摘Many magnetohydrodynamic stability analyses require generation of a set of equilibria with a fixed safety factor q-profile while varying other plasma parameters.A neural network(NN)-based approach is investigated that facilitates such a process.Both multilayer perceptron(MLP)-based NN and convolutional neural network(CNN)models are trained to map the q-profile to the plasma current density J-profile,and vice versa,while satisfying the Grad–Shafranov radial force balance constraint.When the initial target models are trained,using a database of semianalytically constructed numerical equilibria,an initial CNN with one convolutional layer is found to perform better than an initial MLP model.In particular,a trained initial CNN model can also predict the q-or J-profile for experimental tokamak equilibria.The performance of both initial target models is further improved by fine-tuning the training database,i.e.by adding realistic experimental equilibria with Gaussian noise.The fine-tuned target models,referred to as fine-tuned MLP and fine-tuned CNN,well reproduce the target q-or J-profile across multiple tokamak devices.As an important application,these NN-based equilibrium profile convertors can be utilized to provide a good initial guess for iterative equilibrium solvers,where the desired input quantity is the safety factor instead of the plasma current density.
文摘One of the biggest dangers to society today is terrorism, where attacks have become one of the most significantrisks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) havebecome the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management,medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related,initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terroristattacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database(GTD) can influence the accuracy of the model’s classification of terrorist attacks, where each part of the datacan provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomyhas one or more tags attached to it, referred as “related tags.” We applied machine learning classifiers to classifyterrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts andlearns contextual features from text attributes to acquiremore information from text data. The extracted contextualfeatures are combined with the “key features” of the dataset and used to perform the final classification. Thestudy explored different experimental setups with various classifiers to evaluate the model’s performance. Theexperimental results show that the proposed framework outperforms the latest techniques for classifying terroristattacks with an accuracy of 98.7% using a combined feature set and extreme gradient boosting classifier.
文摘With the rapid development of virtual reality technology,it has been widely used in the field of education.It can promote the development of learning transfer,which is an effective method for learners to learn effectively.Therefore,this paper describes how to use virtual reality technology to achieve learning transfer in order to achieve teaching goals and improve learning efficiency.
基金financially supported by the National Natural Science Foundation of China,No.81303115,81774042 (both to XC)the Pearl River S&T Nova Program of Guangzhou,No.201806010025 (to XC)+3 种基金the Specialty Program of Guangdong Province Hospital of Chinese Medicine of China,No.YN2018ZD07 (to XC)the Natural Science Foundatior of Guangdong Province of China,No.2023A1515012174 (to JL)the Science and Technology Program of Guangzhou of China,No.20210201 0268 (to XC),20210201 0339 (to JS)Guangdong Provincial Key Laboratory of Research on Emergency in TCM,Nos.2018-75,2019-140 (to JS)
文摘Vascular etiology is the second most prevalent cause of cognitive impairment globally.Endothelin-1,which is produced and secreted by endothelial cells and astrocytes,is implicated in the pathogenesis of stroke.However,the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood.Here,using mice in which astrocytic endothelin-1 was overexpressed,we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia(1 hour of ischemia;7 days,28 days,or 3 months of reperfusion).We also revealed that astrocytic endothelin-1 overexpression contributed to the role of neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion.Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6,which were differentially expressed in the brain,were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke.Moreover,the levels of the enriched differentially expressed proteins were closely related to lipid metabolism,as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis.Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine,sphingomyelin,and phosphatidic acid.Overall,this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
基金supported by the National Natural Science Foundation of China(Grant Nos.42141019 and 42261144687)and STEP(Grant No.2019QZKK0102)supported by the Korea Environmental Industry&Technology Institute(KEITI)through the“Project for developing an observation-based GHG emissions geospatial information map”,funded by the Korea Ministry of Environment(MOE)(Grant No.RS-2023-00232066).
文摘Artificial intelligence(AI)models have significantly impacted various areas of the atmospheric sciences,reshaping our approach to climate-related challenges.Amid this AI-driven transformation,the foundational role of physics in climate science has occasionally been overlooked.Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics,rather than an“either/or”scenario.Scrutinizing controversies around current physical inconsistencies in large AI models,we stress the critical need for detailed dynamic diagnostics and physical constraints.Furthermore,we provide illustrative examples to guide future assessments and constraints for AI models.Regarding AI integration with numerical models,we argue that offline AI parameterization schemes may fall short of achieving global optimality,emphasizing the importance of constructing online schemes.Additionally,we highlight the significance of fostering a community culture and propose the OCR(Open,Comparable,Reproducible)principles.Through a better community culture and a deep integration of physics and AI,we contend that developing a learnable climate model,balancing AI and physics,is an achievable goal.
基金sponsored by the National Key R&D Program of China(No.2018YFB2100400)the National Natural Science Foundation of China(No.62002077,61872100)+4 种基金the Major Research Plan of the National Natural Science Foundation of China(92167203)the Guangdong Basic and Applied Basic Research Foundation(No.2020A1515110385)the China Postdoctoral Science Foundation(No.2022M710860)the Zhejiang Lab(No.2020NF0AB01)Guangzhou Science and Technology Plan Project(202102010440).
文摘Benefiting from the development of Federated Learning(FL)and distributed communication systems,large-scale intelligent applications become possible.Distributed devices not only provide adequate training data,but also cause privacy leakage and energy consumption.How to optimize the energy consumption in distributed communication systems,while ensuring the privacy of users and model accuracy,has become an urgent challenge.In this paper,we define the FL as a 3-layer architecture including users,agents and server.In order to find a balance among model training accuracy,privacy-preserving effect,and energy consumption,we design the training process of FL as game models.We use an extensive game tree to analyze the key elements that influence the players’decisions in the single game,and then find the incentive mechanism that meet the social norms through the repeated game.The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality,and the proposed incentive mechanism can also promote users to submit high-quality data in FL.Following the multiple rounds of play,the incentive mechanism can help all players find the optimal strategies for energy,privacy,and accuracy of FL in distributed communication systems.
基金supported in part by the National Natural Science Foundation of China(62371116 and 62231020)in part by the Science and Technology Project of Hebei Province Education Department(ZD2022164)+2 种基金in part by the Fundamental Research Funds for the Central Universities(N2223031)in part by the Open Research Project of Xidian University(ISN24-08)Key Laboratory of Cognitive Radio and Information Processing,Ministry of Education(Guilin University of Electronic Technology,China,CRKL210203)。
文摘High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles(IoVs).However,it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high mobility environment.In order to protect data privacy and improve data learning efficiency in knowledge sharing,we propose an asynchronous federated broad learning(FBL)framework that integrates broad learning(BL)into federated learning(FL).In FBL,we design a broad fully connected model(BFCM)as a local model for training client data.To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients,we construct a joint resource allocation and reconfigurable intelligent surface(RIS)configuration optimization framework for FBL.The problem is decoupled into two convex subproblems.Aiming to improve the resource scheduling efficiency in FBL,a double Davidon–Fletcher–Powell(DDFP)algorithm is presented to solve the time slot allocation and RIS configuration problem.Based on the results of resource scheduling,we design a reward-allocation algorithm based on federated incentive learning(FIL)in FBL to compensate clients for their costs.The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency,accuracy,and cost for knowledge sharing in the IoV.
文摘The aim of this study is to investigate the impacts of the sampling strategy of landslide and non-landslide on the performance of landslide susceptibility assessment(LSA).The study area is the Feiyun catchment in Wenzhou City,Southeast China.Two types of landslides samples,combined with seven non-landslide sampling strategies,resulted in a total of 14 scenarios.The corresponding landslide susceptibility map(LSM)for each scenario was generated using the random forest model.The receiver operating characteristic(ROC)curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy.The results showed that higher accuracies were achieved when using the landslide core as positive samples,combined with non-landslide sampling from the very low zone or buffer zone.The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA,which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
基金supported in part by the Beijing Natural Science Foundation(Grant No.8222051)the National Key R&D Program of China(Grant No.2022YFC3004103)+2 种基金the National Natural Foundation of China(Grant Nos.42275003 and 42275012)the China Meteorological Administration Key Innovation Team(Grant Nos.CMA2022ZD04 and CMA2022ZD07)the Beijing Science and Technology Program(Grant No.Z221100005222012).
文摘Thunderstorm gusts are a common form of severe convective weather in the warm season in North China,and it is of great importance to correctly forecast them.At present,the forecasting of thunderstorm gusts is mainly based on traditional subjective methods,which fails to achieve high-resolution and high-frequency gridded forecasts based on multiple observation sources.In this paper,we propose a deep learning method called Thunderstorm Gusts TransU-net(TGTransUnet)to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology(IUM)with a lead time of 1 to 6 h.To determine the specific range of thunderstorm gusts,we combine three meteorological variables:radar reflectivity factor,lightning location,and 1-h maximum instantaneous wind speed from automatic weather stations(AWSs),and obtain a reasonable ground truth of thunderstorm gusts.Then,we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture,which is based on convolutional neural networks and a transformer.The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021–23 are then used as training,validation,and testing datasets.Finally,the performance of TG-TransUnet is compared with other methods.The results show that TG-TransUnet has the best prediction results at 1–6 h.The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
基金financially supported by the National Key Research and Development Program of China(No.2016YFB0701202,No.2017YFB0701500 and No.2020YFB1505901)National Natural Science Foundation of China(General Program No.51474149,52072240)+3 种基金Shanghai Science and Technology Committee(No.18511109300)Science and Technology Commission of the CMC(2019JCJQZD27300)financial support from the University of Michigan and Shanghai Jiao Tong University joint funding,China(AE604401)Science and Technology Commission of Shanghai Municipality(No.18511109302).
文摘Magnesium(Mg)alloys have shown great prospects as both structural and biomedical materials,while poor corrosion resistance limits their further application.In this work,to avoid the time-consuming and laborious experiment trial,a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloy with intermetallics,from both the thermodynamic and kinetic perspectives.The stable binary Mg intermetallics with low equilibrium potential difference with respect to the Mg matrix are firstly identified.Then,the hydrogen adsorption energies on the surfaces of these Mg intermetallics are calculated,and the corrosion exchange current density is further calculated by a hydrogen evolution reaction(HER)kinetic model.Several intermetallics,e.g.Y_(3)Mg,Y_(2)Mg and La_(5)Mg,are identified to be promising intermetallics which might effectively hinder the cathodic HER.Furthermore,machine learning(ML)models are developed to predict Mg intermetallics with proper hydrogen adsorption energy employing work function(W_(f))and weighted first ionization energy(WFIE).The generalization of the ML models is tested on five new binary Mg intermetallics with the average root mean square error(RMSE)of 0.11 eV.This study not only predicts some promising binary Mg intermetallics which may suppress the galvanic corrosion,but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloy,which can be extended to ternary Mg alloys or other alloy systems.
文摘BACKGROUND Liver transplantation(LT)is a life-saving intervention for patients with end-stage liver disease.However,the equitable allocation of scarce donor organs remains a formidable challenge.Prognostic tools are pivotal in identifying the most suitable transplant candidates.Traditionally,scoring systems like the model for end-stage liver disease have been instrumental in this process.Nevertheless,the landscape of prognostication is undergoing a transformation with the integration of machine learning(ML)and artificial intelligence models.AIM To assess the utility of ML models in prognostication for LT,comparing their performance and reliability to established traditional scoring systems.METHODS Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines,we conducted a thorough and standardized literature search using the PubMed/MEDLINE database.Our search imposed no restrictions on publication year,age,or gender.Exclusion criteria encompassed non-English studies,review articles,case reports,conference papers,studies with missing data,or those exhibiting evident methodological flaws.RESULTS Our search yielded a total of 64 articles,with 23 meeting the inclusion criteria.Among the selected studies,60.8%originated from the United States and China combined.Only one pediatric study met the criteria.Notably,91%of the studies were published within the past five years.ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values(ranging from 0.6 to 1)across all studies,surpassing the performance of traditional scoring systems.Random forest exhibited superior predictive capabilities for 90-d mortality following LT,sepsis,and acute kidney injury(AKI).In contrast,gradient boosting excelled in predicting the risk of graft-versus-host disease,pneumonia,and AKI.CONCLUSION This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT,marking a significant evolution in the field of prognostication.
Funding: Supported by the Fujian Province Clinical Key Specialty Construction Project, No. 2022884; the Quanzhou Science and Technology Plan Project, No. 2021N034S; the Youth Research Project of Fujian Provincial Health Commission, No. 2022QNA067; and the Malignant Tumor Clinical Medicine Research Center, No. 2020N090s.
Abstract: BACKGROUND: Prediction of the differentiation grade of colorectal cancer (CRC) based on magnetic resonance imaging (MRI) has not yet been reported. Developing a non-invasive model to predict the differentiation grade of CRC is of great value. AIM: To develop and validate machine learning-based models for predicting the differentiation grade of CRC based on T2-weighted images (T2WI). METHODS: We retrospectively collected the preoperative imaging and clinical data of 315 patients with CRC who underwent surgery from March 2018 to July 2023. Patients were randomly assigned to a training cohort (n=220) or a validation cohort (n=95) at a 7:3 ratio. Lesions were delineated layer by layer on high-resolution T2WI. Least absolute shrinkage and selection operator (LASSO) regression was applied to screen for radiomic features. Radiomic and clinical models were constructed using the multilayer perceptron (MLP) algorithm. These radiomic features and clinically relevant variables (selected based on a significance level of P<0.05 in the training set) were used to construct radiomic-clinical models. The performance of the three models (clinical, radiomic, and radiomic-clinical) was evaluated using the area under the curve (AUC), calibration curves and decision curve analysis (DCA). RESULTS: After feature selection, eight radiomic features were retained from the initial 1781 features to construct the radiomic model. Eight different classifiers, including logistic regression, support vector machine, k-nearest neighbours, random forest, extreme trees, extreme gradient boosting, light gradient boosting machine, and MLP, were used to construct the model, with MLP demonstrating the best diagnostic performance. The AUC of the radiomic-clinical model was 0.862 (95%CI: 0.796-0.927) in the training cohort and 0.761 (95%CI: 0.635-0.887) in the validation cohort. The AUC of the radiomic model was 0.796 (95%CI: 0.723-0.869) in the training cohort and 0.735 (95%CI: 0.604-0.866) in the validation cohort. The clinical model achieved an AUC of 0.751 (95%CI: 0.661-0.842) in the training cohort and 0.676 (95%CI: 0.525-0.827) in the validation cohort. All three models demonstrated good accuracy. In the training cohort, the AUC of the radiomic-clinical model was significantly greater than that of the clinical model (P=0.005) and the radiomic model (P=0.016). DCA confirmed the clinical practicality of incorporating radiomic features into the diagnostic process. CONCLUSION: In this study, we successfully developed and validated a T2WI-based machine learning model as an auxiliary tool for the preoperative differentiation between well/moderately and poorly differentiated CRC. This novel approach may assist clinicians in personalizing treatment strategies for patients and improving treatment efficacy.
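A compact sketch of the modelling pipeline outlined above, assuming scikit-learn: LASSO-based screening of radiomic features followed by an MLP classifier evaluated with AUC. The feature matrix and labels are random placeholders standing in for the 1781 extracted radiomic features.

```python
# Hedged sketch: LASSO feature screening + MLP classifier, evaluated with ROC AUC.
# Placeholder radiomic matrix; not the study's T2WI-derived features.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(315, 200))                                   # placeholder "radiomic" features
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=315) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, random_state=0)),               # LASSO-based screening
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("validation AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```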
Funding: This work received financial support from the National Key R&D Program of China (2021YFC3000701) and the China Seismic Experimental Site in Sichuan-Yunnan (CSES-SY).
Abstract: Monitoring seismicity in real time provides significant benefits for timely earthquake warning and analysis. In this study, we propose an automatic workflow based on machine learning (ML) to monitor seismicity in the southern Sichuan Basin of China. This workflow includes coherent event detection, phase picking, and earthquake location using three-component data from a seismic network. By combining it with PhaseNet, we develop an ML-based earthquake location model called PhaseLoc to conduct real-time monitoring of local seismicity. The approach allows us to use synthetic samples covering the entire study area to train PhaseLoc, addressing the problems of insufficient data samples, imbalanced data distribution, and unreliable labels that arise when training with observed data. We apply the trained model to observed data recorded in the southern Sichuan Basin between September 2018 and March 2019. The results show that the average differences in latitude, longitude, and depth are 5.7 km, 6.1 km, and 2 km, respectively, compared to the reference catalog. PhaseLoc combines all available phase information to make fast and reliable predictions, even if only a few phases are detected and picked. The proposed workflow may also help real-time seismic monitoring in other regions.
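The sketch below illustrates the "train on synthetics, locate observed events" idea: synthetic hypocentres are scattered over a study area, P travel times to a small station network are computed with a constant-velocity model, and a neural network learns to map relative arrival times back to source coordinates. The station geometry, velocity model and network size are assumptions for illustration, not the configuration used for PhaseNet/PhaseLoc.

```python
# Hedged sketch: learn an event-location mapping from synthetic travel times.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
vp = 6.0                                                # km/s, assumed constant P velocity
stations = rng.uniform(0, 100, size=(8, 2))             # 8 stations in a 100 x 100 km area

# Synthetic sources: x, y in km, depth 0-20 km, covering the whole study area.
n = 5000
sources = np.column_stack([rng.uniform(0, 100, n), rng.uniform(0, 100, n), rng.uniform(0, 20, n)])

def p_travel_times(src):
    d = np.sqrt(((stations - src[:2]) ** 2).sum(axis=1) + src[2] ** 2)
    return d / vp

X = np.array([p_travel_times(s) for s in sources])
X = X - X.min(axis=1, keepdims=True)                    # relative arrival times (origin time unknown)
X += rng.normal(scale=0.1, size=X.shape)                # simulated picking noise

X_tr, X_te, y_tr, y_te = train_test_split(X, sources, test_size=0.1, random_state=0)
loc_net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
loc_net.fit(X_tr, y_tr)

err = np.abs(loc_net.predict(X_te) - y_te).mean(axis=0)
print("mean |error| (x, y, depth) in km:", np.round(err, 2))
```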
Funding: Supported by the National Natural Science Foundation of China (61771372, 61771367, 62101494), the National Outstanding Youth Science Fund Project (61525105), the Shenzhen Science and Technology Program (KQTD20190929172704911), and the Aeronautical Science Foundation of China (2019200M1001).
Abstract: In electromagnetic countermeasure circumstances, synthetic aperture radar (SAR) imagery usually suffers from severe quality degradation caused by modulated interrupted-sampling repeater jamming (MISRJ), which typically exhibits considerable coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery based on online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning is then applied to separate real signals from the jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Both simulated and real-measured SAR data are used to confirm the advantages of the method over traditional approaches in suppressing time-varying MISRJs.
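As a hedged illustration of the online dictionary learning step, the sketch below learns a mini-batch dictionary from synthetic jamming-contaminated range slices and treats the sparse reconstruction as the dominant (jamming-like) structure; the signal model and separation rule are stand-ins, not the paper's MISRJ-specific processing chain.

```python
# Hedged sketch: online (mini-batch) dictionary learning on synthetic 1-D "slices".
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n_slices, slice_len = 200, 64

t = np.linspace(0, 1, slice_len)
target = np.sin(2 * np.pi * 6 * t)                       # assumed "real" echo component
jam = np.sign(np.sin(2 * np.pi * 20 * t))                # assumed periodic repeater-like jamming
slices = (0.5 * target[None, :]
          + rng.uniform(0.5, 1.5, (n_slices, 1)) * jam[None, :]
          + 0.05 * rng.normal(size=(n_slices, slice_len)))

# Learn a sparse dictionary online from the contaminated slices.
dico = MiniBatchDictionaryLearning(n_components=16, batch_size=16,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=2,
                                   random_state=0)
codes = dico.fit_transform(slices)
recon = codes @ dico.components_                         # sparse reconstruction ~ dominant (jamming-like) structure
residual = slices - recon                                # residual retains the weaker target component (heuristic)

removed = 1 - np.linalg.norm(residual) ** 2 / np.linalg.norm(slices) ** 2
print(f"energy captured by the learned dictionary: {100 * removed:.1f}%")
```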
Funding: This work was supported by a research fund from Chosun University, 2023.
Abstract: Federated learning is an innovative machine learning technique that addresses centralized data storage issues while maintaining privacy and security. It involves constructing machine learning models using datasets spread across several data centers, including medical facilities, clinical research facilities, Internet of Things devices, and even mobile devices. The main goal of federated learning is to build robust models that benefit from the collective knowledge of these disparate datasets without centralizing sensitive information, reducing the risk of data loss, privacy breaches, or data exposure. The application of federated learning in the healthcare industry holds significant promise due to the wealth of data generated from sources such as patient records, medical imaging, wearable devices, and clinical research surveys. This research conducts a systematic evaluation and highlights essential issues for the selection and implementation of federated learning approaches in healthcare. It evaluates the effectiveness of federated learning strategies in the healthcare field and offers a systematic analysis of federated learning in the healthcare domain, including the evaluation metrics employed. In addition, this study highlights the increasing interest in federated learning applications in healthcare among scholars and provides foundations for further studies.
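To make the core idea concrete, the following minimal FedAvg-style loop trains a logistic model locally at three hypothetical sites and aggregates only the model weights; it is a bare-bones sketch on synthetic data, not any particular FL framework or study protocol.

```python
# Hedged sketch: federated averaging of a logistic model across three synthetic "sites".
import numpy as np

rng = np.random.default_rng(0)

def make_site(n):  # synthetic local dataset standing in for one institution's records
    X = rng.normal(size=(n, 5))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

sites = [make_site(n) for n in (200, 350, 150)]           # three hypothetical institutions
w = np.zeros(5)                                           # shared global weights

def local_update(w, X, y, lr=0.1, epochs=5):
    w = w.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)                  # gradient step on local data only
    return w

for _ in range(20):                                       # communication rounds
    local_ws = [local_update(w, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    w = np.average(local_ws, axis=0, weights=sizes)       # FedAvg: size-weighted aggregation of weights

acc = np.mean([((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean() for X, y in sites])
print(f"mean local accuracy after federation: {acc:.3f}")
```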
Abstract: Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small- and medium-calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, such capability is required for applications such as lethality/survivability analysis. To circumvent this, we incorporate expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and then revert to the physics-based model when it is not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than that achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
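The sketch below illustrates the enforced-monotonicity idea on the depth-of-penetration problem: predictions are constrained to be non-decreasing in impact velocity. Scikit-learn's histogram gradient boosting is used here as a stand-in for the paper's XGBoost models, and the data are a toy relation, not the collated experimental datasets.

```python
# Hedged sketch: monotonic constraint on one feature (requires scikit-learn >= 1.0).
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
velocity = rng.uniform(300, 1800, 800)                   # impact velocity, m/s
thickness = rng.uniform(5, 60, 800)                      # target thickness, mm
depth = 0.02 * velocity - 0.1 * thickness + rng.normal(scale=3.0, size=800)   # toy DoP relation

X = np.column_stack([velocity, thickness])
# monotonic_cst: +1 forces predictions to be non-decreasing in velocity, 0 leaves thickness free.
model = HistGradientBoostingRegressor(monotonic_cst=[1, 0], random_state=0)
model.fit(X, depth)

# Extrapolation check: predictions remain monotone in velocity, even beyond the training range.
probe = np.column_stack([np.linspace(300, 2500, 6), np.full(6, 20.0)])
print(np.round(model.predict(probe), 1))
```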
Abstract: The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of landslide hazards. This paper attempts to assess landslide susceptibility in the Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and the other bagging ensemble models for landslide susceptibility. The results show that the largest area falls in the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. Average annual rainfall, slope, lithology, soil texture and earthquake magnitude have been identified as the influencing factors for very high landslide susceptibility, while soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
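A brief sketch of two steps mentioned above, under illustrative assumptions: the frequency-ratio score relating past landslides to classes of a conditioning factor, and a bagging ensemble with a random-forest base learner (the B-RF idea) evaluated by ROC-AUC on synthetic factors.

```python
# Hedged sketch: (i) frequency ratio for one factor, (ii) bagged random forest classifier.
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# (i) Frequency ratio per class of a factor (e.g. slope class):
#     FR = (landslide pixels in class / all landslide pixels) / (class pixels / all pixels)
landslide_px = np.array([30, 120, 260, 180])             # hypothetical counts per slope class
class_px = np.array([5000, 9000, 7000, 2000])
fr = (landslide_px / landslide_px.sum()) / (class_px / class_px.sum())
print("frequency ratios per class:", np.round(fr, 2))

# (ii) Bagging ensemble with a random-forest base learner on synthetic conditioning factors.
rng = np.random.default_rng(0)
X = rng.normal(size=(1052, 8))                           # 8 illustrative conditioning factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=1052) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

brf = BaggingClassifier(RandomForestClassifier(n_estimators=50, random_state=0),
                        n_estimators=10, random_state=0)
brf.fit(X_tr, y_tr)
print("ROC-AUC:", round(roc_auc_score(y_te, brf.predict_proba(X_te)[:, 1]), 3))
```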