Journal Articles
221,710 articles found
Machine learning for predicting the outcome of terminal ballistics events (Cited by 1)
Authors: Shannon Ryan, Neeraj Mohan Sushma, Arun Kumar AV, Julian Berk, Tahrima Hashem, Santu Rana, Svetha Venkatesh. Defence Technology (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 14-26.
Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small- and medium-calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public-release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis such capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and to revert to the physics-based model when it is not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than is achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. Throughout, we provide general guidelines for the development, application, and reporting of ML models in terminal ballistics problems.
Keywords: Machine learning; Artificial intelligence; Physics-informed machine learning; Terminal ballistics; Armour
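The entry above embeds a physics-based model as the GP prior mean. A minimal sketch of that idea, fitting the GP to residuals about a physics baseline so predictions revert to the baseline far from the data; the `physics_v50` formula and all numbers are hypothetical stand-ins, not the paper's models:

```python
# Physics-informed GP sketch: learn residuals about a physics-based prior
# mean so the prediction falls back to the physics model away from the data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def physics_v50(X):
    # Hypothetical baseline: V50 grows with target thickness (col 0) and
    # shrinks with projectile mass (col 1). Stand-in only, not a real model.
    return 400.0 * np.sqrt(X[:, 0]) / np.cbrt(X[:, 1])

rng = np.random.default_rng(0)
X = rng.uniform([5.0, 2.0], [50.0, 50.0], size=(200, 2))  # thickness, mass
y = physics_v50(X) * rng.normal(1.0, 0.05, 200)           # synthetic "tests"

gp = GaussianProcessRegressor(kernel=RBF([10.0, 10.0]) + WhiteKernel(1.0),
                              normalize_y=True)
gp.fit(X, y - physics_v50(X))         # model residuals about the prior mean

X_new = np.array([[30.0, 10.0]])
print(physics_v50(X_new) + gp.predict(X_new))  # prior mean + GP correction
```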
Low-Cost Federated Broad Learning for Privacy-Preserved Knowledge Sharing in the RIS-Aided Internet of Vehicles (Cited by 1)
Authors: Xiaoming Yuan, Jiahui Chen, Ning Zhang, Qiang (John) Ye, Changle Li, Chunsheng Zhu, Xuemin (Sherman) Shen. Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 178-189.
High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high-mobility environment. In order to protect data privacy and improve data learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as a local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. Aiming to improve the resource scheduling efficiency in FBL, a double Davidon-Fletcher-Powell (DDFP) algorithm is presented to solve the time slot allocation and RIS configuration problem. Based on the results of resource scheduling, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. Simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
Keywords: Knowledge sharing; Internet of Vehicles; Federated learning; Broad learning; Reconfigurable intelligent surfaces; Resource allocation
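The abstract names a double Davidon-Fletcher-Powell (DDFP) algorithm without detailing it; as background, here is a sketch of the classical DFP quasi-Newton update it presumably builds on, applied to an arbitrary quadratic stand-in rather than the paper's time-slot/RIS subproblems:

```python
# Classical DFP quasi-Newton method with Armijo backtracking line search.
import numpy as np

def dfp_minimize(f, grad, x0, iters=100):
    H = np.eye(x0.size)                 # inverse-Hessian approximation
    x, g = x0.copy(), grad(x0)
    for _ in range(iters):
        d = -H @ g                      # quasi-Newton search direction
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5                    # Armijo backtracking line search
        s = t * d
        x_new = x + s
        y = grad(x_new) - g             # gradient change along the step
        if s @ y > 1e-12:               # curvature condition keeps H positive
            Hy = H @ y
            H += np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, grad(x_new)
    return x

A = np.diag([1.0, 4.0, 9.0])            # stand-in quadratic objective
b = np.array([1.0, 2.0, 3.0])
x_opt = dfp_minimize(lambda x: 0.5 * x @ A @ x - b @ x,
                     lambda x: A @ x - b, np.zeros(3))
print(x_opt)                            # approaches A^-1 b = [1, 0.5, 1/3]
```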
A game-theoretic approach for federated learning: A trade-off among privacy, accuracy and energy (Cited by 2)
Authors: Lihua Yin, Sixin Lin, Zhe Sun, Ran Li, Yuanyuan He, Zhiqiang Hao. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 2, pp. 389-403.
Benefiting from the development of federated learning (FL) and distributed communication systems, large-scale intelligent applications become possible. Distributed devices not only provide adequate training data but also cause privacy leakage and energy consumption. How to optimize energy consumption in distributed communication systems, while ensuring user privacy and model accuracy, has become an urgent challenge. In this paper, we define FL as a three-layer architecture comprising users, agents, and a server. In order to find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we model the training process of FL as a game. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in a single game, and then find the incentive mechanism that meets social norms through the repeated game. The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality, and the proposed incentive mechanism can also encourage users to submit high-quality data in FL. Over multiple rounds of play, the incentive mechanism helps all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.
Keywords: Federated learning; Privacy preservation; Energy optimization; Game theory; Distributed communication systems
Use of machine learning models for the prognostication of liver transplantation: A systematic review (Cited by 2)
Authors: Gidion Chongo, Jonathan Soldera. World Journal of Transplantation, 2024, Issue 1, pp. 164-188.
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capability for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Keywords: Liver transplantation; Machine learning models; Prognostication; Allograft allocation; Artificial intelligence
T2-weighted imaging-based radiomic-clinical machine learning model for predicting the differentiation of colorectal adenocarcinoma (Cited by 1)
Authors: Hui-Da Zheng, Qiao-Yi Huang, Qi-Ming Huang, Xiao-Ting Ke, Kai Ye, Shu Lin, Jian-Hua Xu. World Journal of Gastrointestinal Oncology (SCIE), 2024, Issue 3, pp. 819-832.
BACKGROUND: Predicting the differentiation grade of colorectal cancer (CRC) from magnetic resonance imaging (MRI) has not yet been reported, and developing a non-invasive model for this purpose is of great value. AIM: To develop and validate machine learning-based models for predicting the differentiation grade of CRC based on T2-weighted images (T2WI). METHODS: We retrospectively collected the preoperative imaging and clinical data of 315 patients with CRC who underwent surgery from March 2018 to July 2023. Patients were randomly assigned to a training cohort (n = 220) or a validation cohort (n = 95) at a 7:3 ratio. Lesions were delineated layer by layer on high-resolution T2WI. Least absolute shrinkage and selection operator (LASSO) regression was applied to screen for radiomic features. Radiomic and clinical models were constructed using the multilayer perceptron (MLP) algorithm. These radiomic features and clinically relevant variables (selected based on a significance level of P < 0.05 in the training set) were used to construct the radiomic-clinical model. The performance of the three models (clinical, radiomic, and radiomic-clinical) was evaluated using the area under the curve (AUC), calibration curves, and decision curve analysis (DCA). RESULTS: After feature selection, eight radiomic features were retained from the initial 1781 features to construct the radiomic model. Eight different classifiers, including logistic regression, support vector machine, k-nearest neighbours, random forest, extra trees, extreme gradient boosting, light gradient boosting machine, and MLP, were used to construct the model, with MLP demonstrating the best diagnostic performance. The AUC of the radiomic-clinical model was 0.862 (95% CI: 0.796-0.927) in the training cohort and 0.761 (95% CI: 0.635-0.887) in the validation cohort. The AUC of the radiomic model was 0.796 (95% CI: 0.723-0.869) in the training cohort and 0.735 (95% CI: 0.604-0.866) in the validation cohort. The clinical model achieved an AUC of 0.751 (95% CI: 0.661-0.842) in the training cohort and 0.676 (95% CI: 0.525-0.827) in the validation cohort. All three models demonstrated good accuracy. In the training cohort, the AUC of the radiomic-clinical model was significantly greater than that of the clinical model (P = 0.005) and the radiomic model (P = 0.016). DCA confirmed the clinical practicality of incorporating radiomic features into the diagnostic process. CONCLUSION: We developed and validated a T2WI-based machine learning model as an auxiliary tool for the preoperative differentiation between well/moderately and poorly differentiated CRC. This approach may assist clinicians in personalizing treatment strategies and improving treatment efficacy.
Keywords: Radiomics; Colorectal cancer; Differentiation grade; Machine learning; T2-weighted imaging
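A minimal sketch of the two-stage pipeline this abstract describes (LASSO screening of radiomic features, then an MLP classifier); the synthetic data merely mimics the 315-patient, 1781-feature setting:

```python
# LASSO feature screening followed by an MLP classifier, as in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=315, n_features=1781, n_informative=8,
                           random_state=0)   # stand-in for radiomic features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, max_iter=5000)),  # LASSO screening step
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```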
Enhancing Solar Energy Production Forecasting Using Advanced Machine Learning and Deep Learning Techniques: A Comprehensive Study on the Impact of Meteorological Data
Authors: Nataliya Shakhovska, Mykola Medykovskyi, Oleksandr Gurbych, Mykhailo Mamchur, Mykhailo Melnyk. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 3147-3163.
The increasing adoption of solar photovoltaic systems necessitates accurate forecasting of solar energy production to enhance grid stability, reliability, and economic benefits. This study explores advanced machine learning (ML) and deep learning (DL) techniques for predicting solar energy generation, emphasizing the significant impact of meteorological data. A comprehensive dataset, encompassing detailed weather conditions and solar energy metrics, was collected and preprocessed to improve model accuracy. Various models were developed and trained with different preprocessing stages, and three datasets were prepared. A novel hour-based prediction wrapper was introduced, utilizing external sunrise and sunset data to restrict predictions to daylight hours, thereby enhancing model performance. A cascaded stacking model incorporating association rules, weak predictors, and a modified stacking aggregation procedure was proposed, demonstrating enhanced generalization and reduced prediction errors. Results indicated that models trained on raw data generally performed better than those trained on stripped data. The Long Short-Term Memory (LSTM) model with Inception layers was the most effective, achieving significant performance improvements through feature selection, data preprocessing, and innovative modeling techniques. The study underscores the potential of combining detailed meteorological data with advanced ML and DL methods to improve the accuracy of solar energy forecasting, thereby optimizing energy management and planning.
Keywords: Solar energy prediction; machine learning; deep learning
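A minimal sketch of the hour-based prediction wrapper idea: clamp any forecaster's output to zero outside externally supplied sunrise/sunset times. The stand-in model and times are illustrative assumptions:

```python
# Daylight-hours prediction wrapper: zero output outside sunrise-sunset.
from datetime import datetime, time
from typing import Callable, Sequence

def daylight_wrapper(base_model: Callable[[Sequence[float]], float],
                     sunrise: time, sunset: time):
    """Wrap a forecaster so night-time output is forced to zero."""
    def predict(features: Sequence[float], when: datetime) -> float:
        if not (sunrise <= when.time() <= sunset):
            return 0.0                     # no PV generation at night
        return max(0.0, base_model(features))
    return predict

# Usage with a trivial stand-in model and assumed sun times:
model = daylight_wrapper(lambda f: 3.2, time(6, 30), time(20, 15))
print(model([0.8], datetime(2024, 6, 1, 13, 0)))   # 3.2 (daytime)
print(model([0.8], datetime(2024, 6, 1, 23, 0)))   # 0.0 (night)
```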
Advancements in machine learning for material design and process optimization in the field of additive manufacturing
Authors: Hao-ran Zhou, Hao Yang, Huai-qian Li, Ying-chun Ma, Sen Yu, Jian Shi, Jing-chang Cheng, Peng Gao, Bo Yu, Zhi-quan Miao, Yan-peng Wei. China Foundry (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 101-115.
Additive manufacturing technology is highly regarded due to its advantages, such as high precision and the ability to address complex geometric challenges. However, the development of additive manufacturing processes is constrained by issues like unclear fundamental principles, complex experimental cycles, and high costs. Machine learning, as a novel artificial intelligence technology, has the potential to engage deeply in the development of additive manufacturing processes, assisting engineers in learning and developing new techniques. This paper provides a comprehensive overview of the research on and applications of machine learning in the field of additive manufacturing, particularly in model design and process development. It first introduces the background and significance of machine learning-assisted design in additive manufacturing, then delves into the application of machine learning in additive manufacturing with a focus on model design and process guidance, and concludes by summarizing and forecasting the development trends of machine learning technology in the field.
Keywords: additive manufacturing; machine learning; material design; process optimization; intersection of disciplines; embedded machine learning
A Review of Deep Learning-Based Vulnerability Detection Tools for Ethereum Smart Contracts
Authors: Huaiguang Wu, Yibo Peng, Yaqiong He, Jinlin Fan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 7, pp. 77-108.
In recent years, the number of smart contracts deployed on blockchains has exploded, and vulnerabilities have caused incalculable losses. Because smart contracts are irreversible and immutable, vulnerability detection has become particularly important. With the widespread use of neural network models, deep learning-based methods and tools are increasingly used to identify vulnerabilities in smart contracts. This paper begins with a succinct overview of the prevalent categories of vulnerabilities found in smart contracts. It then categorizes and surveys contemporary deep learning-based tools for smart contract vulnerability detection, grouping them by open-source status, data format, and the type of feature extraction they employ. We then conduct a comprehensive comparative analysis of these tools, selecting representative tools for experimental validation and comparing them with traditional tools in terms of detection coverage and accuracy. Finally, based on the experimental results and the current state of research on smart contract vulnerability detection, we propose a reference standard for developers of such tools and outline forward-looking research directions for deep learning-based smart contract vulnerability detection.
Keywords: Smart contract; vulnerability detection; deep learning
Combining reinforcement learning with mathematical programming: An approach for optimal design of heat exchanger networks
Authors: Hui Tan, Xiaodong Hong, Zuwei Liao, Jingyuan Sun, Yao Yang, Jingdai Wang, Yongrong Yang. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 63-71.
Heat integration is important for energy saving in the process industry and is linked to the persistently challenging task of optimal design of heat exchanger networks (HEN). Due to the inherently nonconvex, nonlinear, and combinatorial nature of the HEN problem, it is not easy to find high-quality solutions for large-scale problems. Reinforcement learning (RL), which learns strategies through ongoing exploration and exploitation, shows advantages in this area. However, due to the complexity of the HEN design problem, an RL method for HEN must be purpose-built. A hybrid strategy combining RL with mathematical programming is proposed to take better advantage of both methods. An insightful state representation of the HEN structure and a customized reward function are introduced. A Q-learning algorithm is applied to update the HEN structure using the ε-greedy strategy. Better results are obtained on three literature cases of different scales.
Keywords: Heat exchanger network; Reinforcement learning; Mathematical programming; Process design
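A minimal sketch of tabular Q-learning with an ε-greedy policy, the update scheme the abstract applies to HEN structures; the toy chain environment below stands in for the paper's HEN state encoding and reward:

```python
# Tabular Q-learning with epsilon-greedy exploration on a toy chain world.
import numpy as np

n_states, n_actions = 6, 2            # toy chain: actions move left/right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.2
rng = np.random.default_rng(1)

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    r = 1.0 if s2 == n_states - 1 else 0.0   # reward at the right end
    return s2, r

for episode in range(500):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))          # learned policy: move right everywhere
```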
Machine learning models for the density and heat capacity of ionic liquid-water binary mixtures
Authors: Yingxue Fu, Xinyan Liu, Jingzi Gao, Yang Lei, Yuqiu Chen, Xiangping Zhang. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 9, pp. 244-255.
Ionic liquids (ILs), thanks to their low volatility, good thermal stability, high gas solubility, and easy recovery, can be regarded as green substitutes for traditional solvents. However, high viscosity and synthesis cost limit their application; hybrid solvents combining ILs with others, especially water, can solve this problem. Compared with pure IL systems, studies of the IL-H2O binary system are rare, and experimental data for the corresponding thermodynamic properties (such as density and heat capacity) are scarce; it is also difficult to obtain all such data through experiments. Therefore, this work establishes a predictive model for IL-water binary systems based on the group contribution (GC) method. Three machine learning algorithms (ANN, XGBoost, LightGBM) are applied to fit the density and heat capacity of IL-water binary systems, and the three models are compared using two indices, MAE and R². The results show that the ANN-GC model gives the best predictions of the density and heat capacity of the IL-water mixed system. Furthermore, the Shapley additive explanations (SHAP) method is used to scrutinize the significance of each structure and parameter within the ANN-GC model for the prediction outcomes. The results reveal that the system composition (x_IL) of the IL-H2O binary system exerts the greatest influence on density, while for heat capacity the substituents on the cation have the greatest impact. This study not only introduces a robust prediction model for the density and heat capacity of IL-H2O binary mixtures but also provides insight into how mixture features influence these properties.
Keywords: Ionic liquids; Density; Heat capacity; Group contribution method; Machine learning
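A minimal sketch of a group-contribution feature vector feeding an ANN, in the spirit of the ANN-GC model; the group counts, state variables, and data values are invented for illustration, not the paper's actual descriptors:

```python
# Group-contribution features (group counts + T + x_IL) fed to an ANN.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Invented GC descriptors: [n_CH3, n_CH2, n_imidazolium, n_BF4, T (K), x_IL]
X = np.array([
    [1, 3, 1, 1, 298.15, 0.10],
    [1, 3, 1, 1, 318.15, 0.10],
    [1, 5, 1, 1, 298.15, 0.25],
    [1, 5, 1, 1, 338.15, 0.50],
    [2, 2, 1, 1, 308.15, 0.05],
])
rho = np.array([1035.0, 1022.0, 1080.0, 1104.0, 1010.0])  # made-up kg/m3

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=5000, random_state=0))
ann.fit(X, rho)                                    # density vs. GC features
print(ann.predict([[1, 4, 1, 1, 313.15, 0.20]]))   # density estimate, kg/m3
```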
Reinforcement Learning-Based Energy Management for Hybrid Power Systems: State-of-the-Art Survey, Review, and Perspectives
Authors: Xiaolin Tang, Jiaxin Chen, Yechen Qin, Teng Liu, Kai Yang, Amir Khajepour, Shen Li. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 1-25.
New energy vehicles play a crucial role in green transportation, and the energy management strategy of hybrid power systems is essential for ensuring energy-efficient driving. This paper presents a state-of-the-art survey and review of reinforcement learning-based energy management strategies for hybrid power systems, and envisions the outlook for autonomous intelligent hybrid electric vehicles with reinforcement learning as the foundational technology. First, to provide a macro view of historical development, a brief history of deep learning, reinforcement learning, and deep reinforcement learning is presented in the form of a timeline. Then, a comprehensive survey and review is conducted on papers collected from mainstream academic databases. Enumerating most of the contributions along three main directions (algorithm innovation, powertrain innovation, and environment innovation) provides an objective review of the research status. Finally, to advance the application of reinforcement learning in autonomous intelligent hybrid electric vehicles, future research plans positioned as "Alpha HEV" are envisioned, integrating Autopilot and energy-saving control.
Keywords: New energy vehicle; Hybrid power system; Reinforcement learning; Energy management strategy
Unleashing the Power of Multi-Agent Reinforcement Learning for Algorithmic Trading in the Digital Financial Frontier and Enterprise Information Systems
Authors: Saket Sarin, Sunil K. Singh, Sudhakar Kumar, Shivam Goyal, Brij Bhooshan Gupta, Wadee Alhalabi, Varsha Arya. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 3123-3138.
In the rapidly evolving landscape of today's digital economy, financial technology (Fintech) emerges as a transformative force, propelled by the dynamic synergy between artificial intelligence (AI) and algorithmic trading. Our in-depth investigation delves into the intricacies of merging multi-agent reinforcement learning (MARL) and explainable AI (XAI) within Fintech, aiming to refine algorithmic trading strategies. Through meticulous examination, we uncover the nuanced interactions of AI-driven agents as they collaborate and compete within the financial realm, employing sophisticated deep learning techniques to enhance the clarity and adaptability of trading decisions. These AI-infused Fintech platforms harness collective intelligence to unearth trends, mitigate risks, and provide tailored financial guidance, benefiting individuals and enterprises navigating the digital landscape. Our research holds the potential to open fresh avenues for investment and asset management in the digital age. Additionally, our statistical evaluation yields encouraging results, with metrics such as Accuracy = 0.85, Precision = 0.88, and F1 score = 0.86, reaffirming the efficacy and reliability of our approach within Fintech.
Keywords: Neurodynamic; Fintech; multi-agent reinforcement learning; algorithmic trading; digital financial frontier
Investigation on the Current Status and Effects of Online Course Learning for Nursing Students
Authors: Wei Liu, Yuan Zhao, Ye Wang, Dan Hou, Lirong Jia, Weiguang Yue. Medicinal Plant, 2024, Issue 5, pp. 76-78.
[Objectives] To explore the current situation, effects, and dynamics of online learning from the students' perspective, to examine online teaching methods, to put forward suggestions for the problems online teaching presents, and to identify effective and appropriate online teaching methods. [Methods] Nursing students of the 2020, 2021, and 2022 cohorts of Chengde Nursing Vocational College were selected as research subjects. A self-made questionnaire was administered, comprising 5 sections and 44 questions: 3 on personal information, 9 on the teacher level, 12 on the student level, 7 on the technical level, and 13 on online learning satisfaction. [Results] In online teaching, cooperation between family and school can give full play to the important role of family supervision in online learning, which helps maintain the discipline of online courses, improves students' learning outcomes, and minimizes the distractions of the Internet. Students' self-control in online learning should be strengthened and realistic goals set to improve learning outcomes. At the technical level, information-based teaching should be expanded, teaching content enriched, and virtual simulation software introduced to simulate clinical operations, so as to increase students' interest and enthusiasm for learning. [Conclusions] This study is expected to provide a reference for the smooth and efficient development of online teaching and online learning skills.
Keywords: Online teaching; learning status; satisfaction
Test Case Generation Evaluator for the Implementation of Test Case Generation Algorithms Based on Learning to Rank
Authors: Zhonghao Guo, Xinyue Xu, Xiangxian Chen. Computer Systems Science & Engineering, 2024, Issue 2, pp. 479-509.
In software testing, the quality of test cases is crucial, but manual generation is time-consuming. Various automatic test case generation methods exist and must be selected carefully based on program features. Current evaluation methods compare only a limited set of metrics; they neither support larger numbers of metrics nor consider the relative importance of each metric to the final assessment. To address this, we propose an evaluation tool, the Test Case Generation Evaluator (TCGE), based on the learning-to-rank (L2R) algorithm. Unlike previous approaches, our method evaluates algorithms comprehensively by considering multiple metrics, resulting in a more reasoned assessment. The main principle of the TCGE is the formation of feature vectors of interest to the tester. Through training, the feature vectors are sorted to generate a list, with the order of the methods on the list determined by their effectiveness on the tested assembly. We implement the TCGE using three L2R algorithms: ListNet, LambdaMART, and RFLambdaMART. Evaluation employs a dataset with features of classical test case generation algorithms and three metrics: normalized discounted cumulative gain (NDCG), mean average precision (MAP), and mean reciprocal rank (MRR). Results demonstrate the TCGE's superior effectiveness in evaluating test case generation algorithms compared to other methods. Among the three L2R algorithms, RFLambdaMART proves the most effective, achieving an accuracy above 96.5%, surpassing LambdaMART by 2% and ListNet by 1.5%. Consequently, the TCGE framework has significant application value in the evaluation of test case generation algorithms.
Keywords: Test case generation evaluator; learning to rank; RFLambdaMART
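For reference, minimal implementations of the three ranking metrics used above (NDCG, MAP, MRR), following their standard definitions rather than any TCGE-specific variant; the relevance labels are illustrative:

```python
# Standard ranking metrics: NDCG, average precision (per query), and
# reciprocal rank (per query). MAP/MRR are means of the latter two.
import numpy as np

def ndcg(rels):
    """rels: graded relevance labels in ranked order."""
    discounts = np.log2(np.arange(2, len(rels) + 2))
    gains = (2.0 ** np.asarray(rels) - 1) / discounts
    ideal = (2.0 ** np.sort(rels)[::-1] - 1) / discounts
    return gains.sum() / ideal.sum()

def average_precision(rels):
    """rels: binary relevance labels in ranked order."""
    rels = np.asarray(rels)
    precisions = np.cumsum(rels) / np.arange(1, len(rels) + 1)
    return (precisions * rels).sum() / max(rels.sum(), 1)

def reciprocal_rank(rels):
    idx = np.flatnonzero(rels)          # positions of relevant items
    return 1.0 / (idx[0] + 1) if idx.size else 0.0

ranked = [3, 2, 0, 1]                      # graded labels for one query
print(ndcg(ranked))                        # NDCG for this ranking
print(average_precision([1, 1, 0, 1]))     # one query's MAP term
print(reciprocal_rank([0, 1, 0, 1]))       # one query's MRR term: 0.5
```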
A Modified Iterative Learning Control Approach for the Active Suppression of Rotor Vibration Induced by Coupled Unbalance and Misalignment
Authors: Yifan Bao, Jianfei Yao, Fabrizio Scarpa, Yan Li. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 242-253.
This paper proposes a modified iterative learning control (MILC) periodical feedback-feedforward algorithm to reduce the vibration of a rotor caused by coupled unbalance and parallel misalignment. Control of the rotor vibration is provided by an active magnetic actuator (AMA). The iterative gain of the MILC algorithm presented here self-adjusts based on the magnitude of the vibration. Notch filters are adopted to extract the synchronous (1×Ω) and twice-rotational-frequency (2×Ω) components of the rotor vibration. Both the notch frequency of the filter and the size of the feedforward storage used during the experiment adapt in real time to the rotational speed. The proposed method can effectively suppress rotor vibration under sudden changes or fluctuations of the rotor speed. Simulations and experiments using the proposed MILC algorithm demonstrate the feasibility and robustness of the technique.
Keywords: Rotor vibration suppression; Modified iterative learning control; Unbalance; Parallel misalignment; Active magnetic actuator
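A minimal sketch of extracting the 1×Ω and 2×Ω components with narrow peak filters (the band-pass complement of a notch), in the spirit of the abstract's notch-filter stage; rotor speed, sample rate, and the synthetic signal are assumptions:

```python
# Extract synchronous (1x) and twice-rotational (2x) vibration components
# with narrow IIR peak filters applied zero-phase.
import numpy as np
from scipy.signal import iirpeak, filtfilt

fs = 5000.0                         # sample rate, Hz (assumed)
omega = 40.0                        # rotor speed, Hz (assumed, = 1x)
t = np.arange(0, 1.0, 1 / fs)
vib = (0.8 * np.sin(2 * np.pi * omega * t)        # unbalance: 1x component
       + 0.3 * np.sin(2 * np.pi * 2 * omega * t)  # misalignment: 2x component
       + 0.05 * np.random.default_rng(0).normal(size=t.size))

def extract(signal, f0, q=30.0):
    b, a = iirpeak(f0, Q=q, fs=fs)  # narrow band-pass centred at f0
    return filtfilt(b, a, signal)   # zero-phase filtering

sync = extract(vib, omega)          # 1x component (unbalance)
twice = extract(vib, 2 * omega)     # 2x component (misalignment)
print(np.ptp(sync) / 2, np.ptp(twice) / 2)   # amplitudes near 0.8 and 0.3
```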
A Hybrid Approach for Predicting the Remaining Useful Life of Bearings Based on the RReliefF Algorithm and Extreme Learning Machine
Authors: Sen-Hui Wang, Xi Kang, Cheng Wang, Tian-Bing Ma, Xiang He, Ke Yang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 8, pp. 1405-1427.
Accurately predicting the remaining useful life (RUL) of bearings in mining rotating equipment is vital for mining enterprises. This research aims to identify the features associated with the RUL of bearings and propose a hybrid prediction model based on the selected features. The proposed model begins by pre-processing bearing vibration signals to reconstruct sixty time-domain features, from which it selects the relevant ones using the RReliefF feature selection algorithm. Subsequently, the extreme learning machine (ELM) approach is applied to develop an RUL prediction model based on the optimal features. The model is trained by optimizing its parameters via grid search, and the training datasets are adjusted to best suit the regression model using cross-validation. The proposed hybrid model is analyzed and validated using vibration data from the public XJTU-SY rolling element bearing database and compared with other traditional models. The experimental results demonstrate that the proposed approach can predict the RUL of bearings with a reliable degree of accuracy.
Keywords: Bearing degradation; remaining useful life estimation; RReliefF feature selection; extreme learning machine
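A minimal sketch of an extreme learning machine regressor (random hidden-layer weights, closed-form output weights); the toy degradation curve stands in for the RReliefF-selected bearing features:

```python
# ELM regressor: random hidden layer, output weights solved in closed form.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)       # random feature map

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y          # least-squares solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy degradation data: remaining life decays as a health index grows.
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = 100 * (1 - X.ravel()) ** 2                     # synthetic RUL, hours
elm = ELMRegressor().fit(X, y)
print(elm.predict(np.array([[0.5]])))              # approx. 25
```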
Improving the Short-Range Precipitation Forecast of Numerical Weather Prediction through a Deep Learning-Based Mask Approach
Authors: Jiaqi Zheng, Qing Ling, Jia Li, Yerong Feng. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 8, pp. 1601-1613.
Due to various technical issues, existing numerical weather prediction (NWP) models often perform poorly at forecasting rainfall in the first several hours. To correct the bias of an NWP model and improve the accuracy of short-range precipitation forecasting, we propose a deep learning-based approach called UNet Mask, which combines NWP forecasts with the output of a convolutional neural network called UNet. The UNet is trained on historical data from the NWP model and gridded rainfall observations for 6-hour precipitation forecasting. The overlap of the UNet output and the NWP forecasts at the same rainfall threshold yields a mask. UNet Mask then blends the UNet output and the NWP forecasts by taking the maximum between them and passing it through the mask, which yields the corrected 6-hour rainfall forecasts. We evaluated UNet Mask on a test set and in real-time verification. The results show that UNet Mask outperforms the NWP model in 6-hour precipitation prediction, reducing the false alarm ratio (FAR) and improving critical success index (CSI) scores. Sensitivity tests also show that different small rainfall thresholds applied to the UNet and the NWP model affect UNet Mask's forecast performance differently. This study shows that UNet Mask is a promising approach for improving the rainfall forecasts of NWP models.
Keywords: deep learning; numerical weather prediction (NWP); 6-hour quantitative precipitation forecast
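A minimal sketch of the blending rule as the abstract states it: grid cells where both fields exceed a rainfall threshold form a mask, and the corrected forecast is the element-wise maximum passed through that mask. Field shapes and the threshold are illustrative:

```python
# UNet Mask blending sketch: agreement mask, then element-wise maximum.
import numpy as np

def unet_mask_blend(unet_rain, nwp_rain, threshold=0.1):
    """Return a corrected 6-h rainfall field (mm)."""
    mask = (unet_rain >= threshold) & (nwp_rain >= threshold)  # agreement
    return np.where(mask, np.maximum(unet_rain, nwp_rain), 0.0)

rng = np.random.default_rng(0)
unet = rng.gamma(0.5, 2.0, size=(4, 4))    # stand-in UNet rainfall field
nwp = rng.gamma(0.5, 2.0, size=(4, 4))     # stand-in NWP rainfall field
print(unet_mask_blend(unet, nwp))
```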
Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models
Authors: Weisi Chen, Walayat Hussain, Francesco Cauteruccio, Xu Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 187-224.
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have achieved mediocre results, deep learning has largely contributed to elevating prediction performance. An up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components, and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models like convolutional neural networks (CNN), which extract spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, the attention mechanism (AM), and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprising input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models like CNN-LSTM and CNN-LSTM-AM have generally been reported to outperform standalone models like the CNN-only model. Remaining challenges are discussed, including non-friendliness for finance domain experts, delayed prediction, neglect of domain knowledge, lack of standards, and inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare, and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Keywords: Financial time series prediction; convolutional neural network; long short-term memory; deep learning; attention mechanism; finance
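A minimal sketch of the hybrid CNN-LSTM family the review highlights: a 1-D convolution extracts local patterns over the input window and an LSTM models temporal dependencies. Window length, channels, and feature counts are illustrative:

```python
# Hybrid CNN-LSTM for windowed time-series forecasting (PyTorch).
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # next-step price/return

    def forward(self, x):                     # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))      # convolve along the time axis
        out, _ = self.lstm(z.transpose(1, 2)) # back to (batch, time, channels)
        return self.head(out[:, -1])          # last hidden state -> forecast

model = CNNLSTM()
window = torch.randn(8, 30, 5)                # 8 samples, 30 days, 5 features
print(model(window).shape)                    # torch.Size([8, 1])
```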
A machine learning-based strategy for predicting the mechanical strength of coral reef limestone using X-ray computed tomography
Authors: Kai Wu, Qingshan Meng, Ruoxin Li, Le Luo, Qin Ke, Chi Wang, Chenghao Ma. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 7, pp. 2790-2800.
Different sedimentary zones in coral reefs lead to significant anisotropy in the pore structure of coral reef limestone (CRL), making its mechanical behaviour difficult to study. Using X-ray computed tomography (CT), 112 CRL samples were utilized to train support vector machine (SVM)-, random forest (RF)-, and back-propagation neural network (BPNN)-based models. The machine learning models were embedded into a genetic algorithm (GA) for parameter optimization to effectively predict the uniaxial compressive strength (UCS) of CRL. Results indicate that the BPNN model with five hidden layers presents the best training effect on the CRL dataset. The SVM-based model tends to overfit the training set and generalizes poorly on the testing set, while the RF-based model is suitable for training CRL samples with large amounts of data. Analysis of the Pearson correlation coefficient matrix and the percentage-increment method for performance metrics shows that the dry density, pore structure, and porosity of CRL are strongly correlated with UCS, whereas the P-wave velocity is almost uncorrelated with UCS, in marked contrast to the law for homogeneous geomaterials. In addition, the pore tensor proposed in this paper can effectively reflect the pore structure of coral framework limestone (CFL) and coral boulder limestone (CBL), enabling quantitative characterization of pore heterogeneity and anisotropy. The pore tensor provides a feasible way to establish the relationship between the pore structure and mechanical behaviour of CRL.
Keywords: Coral reef limestone (CRL); Machine learning; Pore tensor; X-ray computed tomography (CT)
Application of Machine Learning for Flood Prediction and Evaluation in Southern Nigeria
Authors: Emeka Bright Ogbuene, Chukwumeuche Ambrose Eze, Obianuju Getrude Aloh, Andrew Monday Oroke, Damian Onuora Udegbunam, Josiah Chukwuemeka Ogbuka, Fred Emeka Achoru, Vivian Amarachi Ozorme, Obianuju Anwara, Ikechukwu Chukwunonyelum, Anthonia Nneka Nebo, Obiageli Jacinta Okolo. Atmospheric and Climate Sciences, 2024, Issue 3, pp. 299-316.
This study explored the application of machine learning techniques for flood prediction and analysis in southern Nigeria. Machine learning is an artificial intelligence technique that uses computer-based instructions to analyze and transform data into useful information, enabling systems to make predictions. Traditional methods of flood prediction and analysis often fall short of providing accurate and timely information for effective disaster management, and classical numerical forecasting of flood disasters is limited because complex atmospheric dynamics cannot easily be reduced to simple equations. Here, we used machine learning (ML) techniques, including random forest (RF), logistic regression (LR), naïve Bayes (NB), support vector machine (SVM), and neural networks (NN), to model the complex physical processes that cause floods. The dataset contains 59 cases with the target feature "Event Type", comprising 39 cases of floods and 20 cases of flood/rainstorms. Comparison of assessment metrics from models built on historical records shows that NB performed better than all other techniques, followed by RF. The developed model can be used to predict the frequency of flood incidents. The majority of flood scenarios demonstrate that such events pose a significant risk to people's lives; each element of emergency response therefore requires adequate knowledge of flood incidence, a continuous early-warning service, and an accurate prediction model. This study can expand knowledge and research on flood predictive modeling in vulnerable areas to inform effective and sustainable contingency planning, policy, and management actions on flood disasters, especially in other technologically underdeveloped settings.
Keywords: Machine learning; Flood; Prediction; Evaluation; Southern Nigeria
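A minimal sketch of the comparison the study runs, cross-validating a Gaussian naïve Bayes classifier (its best performer) against a random forest; the 59-case dataset below is synthetic, standing in for real hydro-meteorological predictors:

```python
# Cross-validated comparison of naive Bayes vs. random forest on a small
# two-class flood-event dataset, mirroring the study's 59-case setting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(59, 4))                 # stand-in predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 59) > 0).astype(int)
# y: 1 = flood, 0 = flood/rainstorm (labels mirror the paper's two classes)

for name, clf in [("NB", GaussianNB()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```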