Journal Articles
13,891 articles found
1. Physics-informed machine learning model for prediction of ground reflected wave peak overpressure
Authors: Haoyu Zhang, Yuxin Xu, Lihan Xiao, Canjie Zhen. 《Defence Technology(防务技术)》 SCIE EI CAS CSCD, 2024, Issue 11, pp. 119-133 (15 pages)
The accurate prediction of peak overpressure of explosion shockwaves is significant in fields such as explosion hazard assessment and structural protection, where explosion shockwaves serve as typical destructive elements. Aiming at the insufficient accuracy of existing physical models for predicting the peak overpressure of ground reflected waves, two physics-informed machine learning models are constructed. The results demonstrate that the machine learning models, which incorporate physical information by predicting the deviation between the physical model and actual values and by adding a physical loss term to the loss function, can accurately predict both the training and out-of-training datasets. Compared to existing physical models, the average relative error within the training domain is reduced from 17.459%-48.588% to 2%, and the proportion of samples with average relative error below 20% increases from 0%-59.4% to more than 99%. In addition, the average relative error outside the training set range is reduced from 14.496%-29.389% to 5%, and the proportion with average relative error below 20% increases from 0%-71.39% to more than 99%. The inclusion of a physical loss term enforcing monotonicity effectively improves the extrapolation performance of machine learning. The findings provide a valuable reference for explosion hazard assessment and anti-explosion structural design.
Keywords: Blast shock wave; Peak overpressure; Machine learning; Physics-informed machine learning
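The abstract above describes combining an ordinary data-fit loss with a physical loss term that enforces monotonicity. A minimal, dependency-free sketch of such a composite loss (not the authors' implementation; the function name and the assumption that overpressure should decrease monotonically with scaled distance are ours for illustration):

```python
def physics_informed_loss(pred, target, distance, mono_weight=1.0):
    """Data-fit MSE plus a penalty whenever predicted peak overpressure
    fails to decrease monotonically with scaled distance."""
    # order predictions by increasing scaled distance
    p = [y for _, y in sorted(zip(distance, pred))]
    # data-fit term: mean squared error against measurements
    data = sum((a - b) ** 2 for a, b in zip(pred, target)) / len(pred)
    # physics term: penalize any increase of overpressure with distance
    viol = [max(0.0, b - a) for a, b in zip(p, p[1:])]
    phys = sum(v * v for v in viol) / max(1, len(viol))
    return data + mono_weight * phys
```

A perfectly fitted, monotonically decreasing prediction incurs zero loss; any non-physical increase with distance adds a penalty even when the data-fit error is small.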
2. SNR and RSSI Based an Optimized Machine Learning Based Indoor Localization Approach: Multistory Round Building Scenario over LoRa Network
Authors: Muhammad Ayoub Kamal, Muhammad Mansoor Alam, Aznida Abu Bakar Sajak, Mazliham Mohd Su’ud. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 8, pp. 1927-1945 (19 pages)
In situations where the precise position of a machine is unknown, localization becomes crucial. This research focuses on improving position prediction accuracy over a long-range (LoRa) network using an optimized machine learning (ML) based technique. To increase the prediction accuracy of the reference point position on data collected using the fingerprinting method over LoRa technology, this study proposes an optimized ML-based algorithm. Received signal strength indicator (RSSI) data from sensors at different positions was first gathered experimentally through the LoRa network in a multistory round-layout building. The noise factor is also taken into account, and the signal-to-noise ratio (SNR) value is recorded for every RSSI measurement. The study examines reference point accuracy with the modified KNN method (MKNN), created to predict the position of the reference point more precisely. The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity.
Keywords: Indoor localization; MKNN; LoRa; machine learning; classification; RSSI; SNR; localization
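The abstract does not detail the exact MKNN modification, but the family of methods it improves on can be sketched as weighted KNN over RSSI fingerprints: find the k reference points closest in signal space and average their positions with inverse-distance weights. A toy illustration (function and data names are ours, not the paper's):

```python
import math

def wknn_locate(fingerprints, rssi, k=3):
    """fingerprints: list of (rssi_vector, (x, y)) reference points.
    Returns the inverse-distance-weighted average of the k reference
    positions nearest to `rssi` in signal space."""
    scored = sorted(
        (math.dist(vec, rssi), pos) for vec, pos in fingerprints
    )[:k]
    # inverse-distance weights; epsilon avoids division by zero on exact hits
    w = [1.0 / (d + 1e-9) for d, _ in scored]
    total = sum(w)
    x = sum(wi * p[0] for wi, (_, p) in zip(w, scored)) / total
    y = sum(wi * p[1] for wi, (_, p) in zip(w, scored)) / total
    return x, y
```

With an exact fingerprint match, the matching reference point dominates the weighted average and the estimate collapses onto its position.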
3. Application of deep learning methods combined with physical background in wide field of view imaging atmospheric Cherenkov telescopes
Authors: Ao-Yan Cheng, Hao Cai, Shi Chen, Tian-Lu Chen, Xiang Dong, You-Liang Feng, Qi Gao, Quan-Bu Gou, Yi-Qing Guo, Hong-Bo Hu, Ming-Ming Kang, Hai-Jin Li, Chen Liu, Mao-Yuan Liu, Wei Liu, Fang-Sheng Min, Chu-Cheng Pan, Bing-Qiang Qiao, Xiang-Li Qian, Hui-Ying Sun, Yu-Chang Sun, Ao-Bo Wang, Xu Wang, Zhen Wang, Guang-Guang Xin, Yu-Hua Yao, Qiang Yuan, Yi Zhang. 《Nuclear Science and Techniques》 SCIE EI CAS CSCD, 2024, Issue 4, pp. 208-220 (13 pages)
The High Altitude Detection of Astronomical Radiation (HADAR) experiment, constructed in Tibet, China, combines the wide-angle advantages of traditional EAS array detectors with the high-sensitivity advantages of focused Cherenkov detectors. Its objective is to observe transient sources such as gamma-ray bursts and the counterparts of gravitational waves. This study aims to utilize the latest AI technology to enhance the sensitivity of HADAR experiments. Training datasets and models were constructed by incorporating the relevant physical theories for various applications. These models can determine the type, energy, and direction of the incident particles after careful design. We obtained a background identification accuracy of 98.6%, a relative energy reconstruction error of 10.0%, and an angular resolution of 0.22° on a test dataset at 10 TeV. These findings demonstrate significant potential for enhancing the precision and dependability of detector data analysis in astrophysical research. By using deep learning techniques, the HADAR experiment's observational sensitivity to the Crab Nebula has surpassed that of MAGIC and H.E.S.S. at energies below 0.5 TeV and remains competitive with conventional narrow-field Cherenkov telescopes at higher energies. In addition, our experiment offers a new approach for dealing with strongly connected, scattered data.
Keywords: VHE gamma-ray astronomy; HADAR; Deep learning; Convolutional neural networks
4. PCA-LSTM: An Impulsive Ground-Shaking Identification Method Based on Combined Deep Learning
Authors: Yizhao Wang. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, Issue 6, pp. 3029-3045 (17 pages)
Near-fault impulsive ground-shaking is highly destructive to engineering structures, so its accurate identification is a top priority in the engineering field. However, because traditional methods lack a comprehensive consideration of ground-shaking characteristics, the generalization and accuracy of the identification process are low. To address these problems, an impulsive ground-shaking identification method combined with deep learning, named PCA-LSTM, is proposed. Firstly, ground-shaking characteristics were analyzed and the data was annotated using Baker's method. Secondly, Principal Component Analysis (PCA) was used to extract the features most relevant to impulsive ground-shaking. Thirdly, a Long Short-Term Memory network (LSTM) was constructed, and the extracted features were used as the input for training. Finally, the identification results for the Artificial Neural Network (ANN), Convolutional Neural Network (CNN), LSTM, and PCA-LSTM models were compared and analyzed. The experimental results showed that the proposed method improved the accuracy of impulsive ground-shaking identification by more than 8.358% and identification speed by more than 26.168%, compared to the other benchmark models.
Keywords: Impulsive ground-shaking; principal component analysis; artificial intelligence; deep learning; impulse recognition
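The pipeline above feeds PCA-extracted features into an LSTM. The PCA step alone can be sketched without any numerical library by running power iteration on the sample covariance matrix to recover the direction of maximum variance (a toy illustration of the dimensionality-reduction idea, not the paper's code):

```python
def first_principal_component(X, iters=200):
    """Power iteration on the covariance matrix of X (list of rows):
    returns the unit direction of maximum variance, onto which
    samples can be projected as their first PCA feature."""
    n, d = len(X), len(X[0])
    # center the data
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mu[j] for j in range(d)] for row in X]
    # sample covariance matrix
    C = [[sum(r[i] * r[j] for r in Xc) / (n - 1) for j in range(d)]
         for i in range(d)]
    # power iteration converges to the dominant eigenvector
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

For data lying along a line, the recovered direction aligns (up to sign) with that line, so one projected coordinate carries essentially all the variance.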
5. Evaluation and Prediction of Groundwater Quality in the Municipality of Za-Kpota (South Benin) Using Machine Learning and Remote Sensing
Authors: Jennifer A. Ahlonsou, Firmin M. Adandedji, Abdoukarim Alassane, Consolas Adihou, Mama Daouda. 《Journal of Water Resource and Protection》 CAS, 2024, Issue 7, pp. 502-522 (21 pages)
Accessing drinking water is a global issue. This study aims to contribute to the assessment of groundwater quality in the municipality of Za-Kpota (southern Benin) using remote sensing and machine learning. The methodological approach consisted in linking groundwater physico-chemical parameter data, collected in the field and in the laboratory using AFNOR 1994 standardized methods, to satellite data (Landsat) in order to sketch out a groundwater quality prediction model. The data was processed using QGIS (Semi-Automatic Classification Plugin: SCP) and Python (Jupyter Notebook: prediction) software. The results of water analysis from the sampled wells and boreholes indicated that most of the water is acidic (pH varying between 5.59 and 7.83). The water was moderately mineralized, with conductivity values generally below 1500 µS/cm (59 µS/cm to 1344 µS/cm) and high concentrations of nitrates and phosphates in places. The dynamics of groundwater quality in the municipality of Za-Kpota between 2008 and 2022 are also marked by a regression in land use units (a regression in vegetation and marshland in favor of built-up areas, bare soil, crops and fallow land), revealed by the diachronic analysis of satellite images from 2008, 2013, 2018 and 2022. Field surveys of local residents revealed the use of herbicides and pesticides in agricultural fields, which are the main drivers of the groundwater quality deterioration observed in the study area. The groundwater quality prediction models developed (ANN, RF and LR) led to the conclusion that the model based on Artificial Neural Networks (ANN: R² = 0.97 and RMSE = 0) is the best for modelling groundwater quality changes in the Za-Kpota municipality.
Keywords: Groundwater; Land use; Electrical conductivity; Machine learning; Za-Kpota
6. Machine Learning and Statistical Analysis in Groundwater Monitoring for Total Dissolved Solids Assessment in Winkler County, Texas
Authors: Azuka I. Udeh, Osayamen J. Imarhiagbe, Erepamo J. Omietimi, Abdulqudus O. Mohammed, Oluwatomilola Andre-Obayanju. 《Journal of Geoscience and Environment Protection》 2024, Issue 6, pp. 1-29 (29 pages)
This research aims to develop reliable models using machine learning algorithms to precisely predict Total Dissolved Solids (TDS) in wells of the Permian Basin, Winkler County, Texas. The data for this contribution was obtained from the Texas Water Development Board (TWDB) website. Five hundred and ninety-three samples were obtained from two hundred and ninety-eight wells in the study area. The wells were drilled at different county locations into five aquifers: the Pecos Valley, Dockum, Capitan Reef, Edwards-Trinity, and Rustler aquifers. A total of fourteen different water quality parameters were used: potential of hydrogen (pH), sodium, chloride, magnesium, fluoride, TDS, specific conductance, nitrate, total hardness, calcium, temperature, well depth, sulphate, and bicarbonates. Four machine learning regression algorithms were developed to obtain a good model for predicting TDS in this area: Decision Tree regression, Linear regression, Support Vector Regression, and K-nearest neighbor. The study showed that the Decision Tree produced the best model, with a coefficient of determination of R² = 1.00 and 0.96 for training and testing, respectively. It also produced the lowest mean absolute error, MAE = 0.00 and 0.04 for training and testing, respectively. This study will reduce the cost of obtaining different water quality parameters in TDS determination by leveraging machine learning to use only the parameters contributing to TDS, thereby helping researchers obtain only the parameters necessary for TDS prediction. It will also help the authorities enact policies that improve water quality in areas where drinking water availability is a challenge, by providing important information for monitoring and assessing groundwater quality.
Keywords: Machine learning; Regression; Aquifers; Winkler County; Sinkholes
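Among the four regressors compared in the abstract, linear regression is the simplest to write out. A self-contained sketch of ordinary least squares with an R² score, fitting TDS against a single predictor such as specific conductance (the numbers and the 0.65 slope in the test are illustrative assumptions, not the study's data):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b, e.g. TDS regressed on
    specific conductance, one of the strongest single predictors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # closed-form slope: covariance over variance of x
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

def r2(x, y, a, b):
    """Coefficient of determination of the fitted line on (x, y)."""
    my = sum(y) / len(y)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

On perfectly linear data the fit recovers the slope exactly and R² reaches 1, matching how the study ranks models by R² and error on held-out samples.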
7. Task assignment in ground-to-air confrontation based on multiagent deep reinforcement learning (Cited: 3)
Authors: Jia-yi Liu, Gang Wang, Qiang Fu, Shao-hua Yue, Si-yuan Wang. 《Defence Technology(防务技术)》 SCIE EI CAS CSCD, 2023, Issue 1, pp. 210-219 (10 pages)
The scale of ground-to-air confrontation task assignment is large, and many concurrent task assignments and random events must be handled. When existing task assignment methods are applied to ground-to-air confrontation, efficiency in dealing with complex tasks is low, and interactive conflicts arise in multiagent systems. This study proposes a multiagent architecture based on one general agent with multiple narrow agents (OGMN) to reduce task assignment conflicts. Considering the slow speed of traditional dynamic task assignment algorithms, this paper proposes the proximal policy optimization for task assignment of general and narrow agents (PPO-TAGNA) algorithm. Based on the idea of the optimal assignment strategy algorithm and combined with the training framework of deep reinforcement learning (DRL), the algorithm adds a multihead attention mechanism and a stage reward mechanism to the bilateral band clipping PPO algorithm to solve the problem of low training efficiency. Finally, simulation experiments are carried out on the digital battlefield. The multiagent architecture based on OGMN combined with the PPO-TAGNA algorithm obtains higher rewards faster and has a higher win ratio. By analyzing agent behavior, the efficiency, superiority and rational resource utilization of this method are verified.
Keywords: Ground-to-air confrontation; Task assignment; General and narrow agents; Deep reinforcement learning; Proximal policy optimization (PPO)
8. Rock mass structural recognition from drill monitoring technology in underground mining using discontinuity index and machine learning techniques (Cited: 1)
Authors: Alberto Fernández, José A. Sanchidrián, Pablo Segarra, Santiago Gómez, Enming Li, Rafael Navarro. 《International Journal of Mining Science and Technology》 SCIE EI CAS CSCD, 2023, Issue 5, pp. 555-571 (17 pages)
A procedure to recognize individual discontinuities in rock mass from measurement-while-drilling (MWD) technology is developed, using the binary pattern of structural rock characteristics obtained from in-hole images for calibration. Data from two underground operations with different drilling technology and different rock mass characteristics are considered, which generalizes the application of the methodology to different sites and ensures the full operational integration of MWD data analysis. Two approaches are followed for site-specific structural model building: a discontinuity index (DI) built from variations in MWD parameters, and a machine learning (ML) classifier as a function of the drilling parameters and their variability. The prediction ability of the models is quantitatively assessed as the rate of recognition of discontinuities observed in borehole logs. Differences between the parameters involved in the models for each site, and differences in their weights, highlight the site-dependence of the resulting models. The ML approach offers better performance than the classical DI, with recognition rates in the range of 89% to 96%. However, the simpler DI still yields fairly accurate results, with recognition rates of 70% to 90%. These results validate the adaptive MWD-based methodology as an engineering solution to predict rock structural condition in underground mining operations.
Keywords: Drill monitoring technology; Rock mass characterization; Underground mining; Similarity metrics of binary vectors; Structural rock factor; Machine learning
9. Deep Learning Based Underground Sewer Defect Classification Using a Modified RegNet (Cited: 1)
Authors: Yu Chen, Sagar A. S. M. Sharifuzzaman, Hangxiang Wang, Yanfen Li, L. Minh Dang, Hyoung-Kyu Song, Hyeonjoon Moon. 《Computers, Materials & Continua》 SCIE EI, 2023, Issue 6, pp. 5451-5469 (19 pages)
The sewer system plays an important role in draining rainfall and treating urban wastewater. Due to the harsh internal environment and complex structure of the sewer, it is difficult to monitor the sewer system. Researchers are developing different methods, such as the Internet of Things and artificial intelligence, to monitor and detect faults in the sewer system. Deep learning is a promising artificial intelligence technology that can effectively identify and classify different sewer system defects. However, existing deep learning based solutions do not provide high-accuracy prediction, and the number of defect classes considered for classification is very small, which can affect the robustness of the model in a constrained environment. As a result, this paper proposes a sewer condition monitoring framework based on deep learning, which can effectively detect and evaluate defects in sewer pipelines with high accuracy. We also introduce a large dataset of sewer defects with 20 different defect classes found in the sewer pipeline. This study modified the original RegNet model by modifying the squeeze-and-excitation (SE) block and adding a dropout layer and the Leaky Rectified Linear Unit (LeakyReLU) activation function in the block structure of the RegNet model. This study explored different deep learning methods, such as RegNet, ResNet50, very deep convolutional networks (VGG), and GoogleNet, trained on the sewer defect dataset. The experimental results indicate that the proposed framework based on the modified RegNet (RegNet+) model achieves the highest accuracy of 99.5% compared with the commonly used deep learning models. The proposed model provides a robust deep learning model that can effectively classify 20 different sewer defects and be utilized in real-world sewer condition monitoring applications.
Keywords: Deep learning; defect classification; underground sewer; computer vision; convolutional neural network; RegNet
10. Multiple Object Tracking through Background Learning (Cited: 1)
Authors: Deependra Sharma, Zainul Abdin Jaffery. 《Computer Systems Science & Engineering》 SCIE EI, 2023, Issue 1, pp. 191-204 (14 pages)
This paper discusses a new approach to multiple object tracking relative to background information. The concept of multiple object tracking through background learning is based upon the theory of relativity, which involves a frame of reference in the spatial domain to localize and/or track any object. The field of multiple object tracking has seen a lot of research, but researchers have considered the background as redundant. However, in object tracking, the background plays a vital role and leads to definite improvement in the overall process of tracking. In the present work, an algorithm is proposed for multiple object tracking through background learning. The learning framework is based on a graph embedding approach for localizing multiple objects. The graph utilizes the inherent capabilities of depth modelling, which assist in occlusion avoidance among multiple objects prior to tracking. The proposed algorithm has been compared with recent work available in the literature on numerous performance evaluation measures, and it is observed that the proposed algorithm gives better performance.
Keywords: Object tracking; image processing; background learning; graph embedding algorithm; computer vision
11. Machine learning applications in stroke medicine: advancements, challenges, and future prospectives (Cited: 3)
Authors: Mario Daidone, Sergio Ferrantelli, Antonino Tuttolomondo. 《Neural Regeneration Research》 SCIE CAS CSCD, 2024, Issue 4, pp. 769-773 (5 pages)
Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation, and to explore the future perspectives these techniques can provide in combating this disabling disease.
Keywords: cerebrovascular disease; deep learning; machine learning; reinforcement learning; stroke; stroke therapy; supervised learning; unsupervised learning
12. Astrocytic endothelin-1 overexpression impairs learning and memory ability in ischemic stroke via altered hippocampal neurogenesis and lipid metabolism (Cited: 5)
Authors: Jie Li, Wen Jiang, Yuefang Cai, Zhenqiu Ning, Yingying Zhou, Chengyi Wang, Sookja Ki Chung, Yan Huang, Jingbo Sun, Minzhen Deng, Lihua Zhou, Xiao Cheng. 《Neural Regeneration Research》 SCIE CAS CSCD, 2024, Issue 3, pp. 650-656 (7 pages)
Vascular etiology is the second most prevalent cause of cognitive impairment globally. Endothelin-1, which is produced and secreted by endothelial cells and astrocytes, is implicated in the pathogenesis of stroke. However, the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood. Here, using mice in which astrocytic endothelin-1 was overexpressed, we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia (1 hour of ischemia; 7 days, 28 days, or 3 months of reperfusion). We also revealed that astrocytic endothelin-1 overexpression contributed to neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion. Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6, which were differentially expressed in the brain, were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke. Moreover, the levels of the enriched differentially expressed proteins were closely related to lipid metabolism, as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis. Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine, sphingomyelin, and phosphatidic acid. Overall, this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
Keywords: astrocytic endothelin-1; dentate gyrus; differentially expressed proteins; hippocampus; ischemic stroke; learning and memory deficits; lipid metabolism; neural stem cells; neurogenesis; proliferation
13. Significant risk factors for intensive care unit-acquired weakness: A processing strategy based on repeated machine learning (Cited: 9)
Authors: Ling Wang, Deng-Yan Long. 《World Journal of Clinical Cases》 SCIE, 2024, Issue 7, pp. 1235-1242 (8 pages)
BACKGROUND: Intensive care unit-acquired weakness (ICU-AW) is a common complication that significantly impacts the patient's recovery process, even leading to adverse outcomes. Currently, there is a lack of effective preventive measures.
AIM: To identify significant risk factors for ICU-AW through iterative machine learning techniques and offer recommendations for its prevention and treatment.
METHODS: Patients were categorized into ICU-AW and non-ICU-AW groups on the 14th day post-ICU admission. Relevant data from the initial 14 days of ICU stay, such as age, comorbidities, sedative dosage, vasopressor dosage, duration of mechanical ventilation, length of ICU stay, and rehabilitation therapy, were gathered, and the relationships between these variables and ICU-AW were examined. Utilizing iterative machine learning techniques, a multilayer perceptron neural network model was developed, and its predictive performance for ICU-AW was assessed using the receiver operating characteristic curve.
RESULTS: Within the ICU-AW group, age, duration of mechanical ventilation, lorazepam dosage, adrenaline dosage, and length of ICU stay were significantly higher than in the non-ICU-AW group. Additionally, the ratios of sepsis, multiple organ dysfunction syndrome, hypoalbuminemia, acute heart failure, respiratory failure, acute kidney injury, anemia, stress-related gastrointestinal bleeding, shock, hypertension, coronary artery disease, malignant tumors, and rehabilitation therapy were significantly higher in the ICU-AW group. The most influential factors contributing to ICU-AW were identified as the length of ICU stay (100.0%) and the duration of mechanical ventilation (54.9%). The neural network model predicted ICU-AW with an area under the curve of 0.941, sensitivity of 92.2%, and specificity of 82.7%.
CONCLUSION: The main factors influencing ICU-AW are the length of ICU stay and the duration of mechanical ventilation. A primary preventive strategy, when feasible, involves minimizing both ICU stay and mechanical ventilation duration.
Keywords: Intensive care unit-acquired weakness; Risk factors; Machine learning; Prevention strategies
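The model above is judged by the area under the ROC curve (0.941). The AUC has a direct probabilistic reading: the chance that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counting half. A minimal sketch of that computation (illustrative, not the study's code):

```python
def auc(labels, scores):
    """Area under the ROC curve, computed as the probability that a
    random positive outranks a random negative (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives AUC 1.0; a model whose scores carry no information about the label gives 0.5, which is why values like 0.941 indicate strong discrimination.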
14. Recent Progress in Reinforcement Learning and Adaptive Dynamic Programming for Advanced Control Applications (Cited: 4)
Authors: Ding Wang, Ning Gao, Derong Liu, Jinna Li, Frank L. Lewis. 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2024, Issue 1, pp. 18-36 (19 pages)
Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environments attract enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly promote ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey on ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era, as well as their vital role in promoting environmental protection and industrial intelligence.
Keywords: Adaptive dynamic programming (ADP); advanced control; complex environment; data-driven control; event-triggered design; intelligent control; neural networks; nonlinear systems; optimal control; reinforcement learning (RL)
15. IDS-INT: Intrusion detection system using transformer-based transfer learning for imbalanced network traffic (Cited: 3)
Authors: Farhan Ullah, Shamsher Ullah, Gautam Srivastava, Jerry Chun-Wei Lin. 《Digital Communications and Networks》 SCIE CSCD, 2024, Issue 1, pp. 190-204 (15 pages)
A network intrusion detection system is critical for cyber security against illegitimate attacks. From a feature perspective, network traffic may include a variety of elements such as attack reference, attack type, a subcategory of attack, host information, malicious scripts, etc. From a network perspective, traffic may contain an imbalanced number of harmful attacks compared to normal traffic. It is challenging to identify a specific attack due to complex features and data imbalance issues. To address these issues, this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic (IDS-INT). IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data. First, detailed information about each type of attack is gathered from network interaction descriptions, which include network nodes, attack type, reference, host information, etc. Second, the transformer-based transfer learning approach is developed to learn detailed feature representations using their semantic anchors. Third, the Synthetic Minority Oversampling Technique (SMOTE) is implemented to balance abnormal traffic and detect minority attacks. Fourth, a Convolutional Neural Network (CNN) model is designed to extract deep features from the balanced network traffic. Finally, a hybrid CNN-Long Short-Term Memory (CNN-LSTM) model is developed to detect different types of attacks from the deep features. Detailed experiments are conducted to test the proposed approach using three standard datasets, i.e., UNSW-NB15, CIC-IDS2017, and NSL-KDD. An explainable AI approach is implemented to interpret the proposed method and develop a trustable model.
Keywords: Network intrusion detection, Transfer learning, Feature extraction, Imbalanced data, Explainable AI, Cybersecurity
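The class-balancing step described in the abstract can be illustrated with a minimal, self-contained sketch of SMOTE-style oversampling. This is a simplified stand-in for the SMOTE implementation the paper uses, and the feature values and class sizes below are made up for illustration:

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating each
    randomly chosen seed sample toward one of its k nearest minority
    neighbours (the core idea of SMOTE)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per sample
    seeds = rng.integers(0, n, size=n_new)      # which minority sample to start from
    nbrs = nn[seeds, rng.integers(0, k, size=n_new)]
    gap = rng.random((n_new, 1))                # interpolation factor in [0, 1)
    return X_min[seeds] + gap * (X_min[nbrs] - X_min[seeds])

# toy imbalanced traffic features: 100 "normal" rows vs 10 "attack" rows
rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, (100, 4))
X_minor = rng.normal(3.0, 1.0, (10, 4))
X_syn = smote_oversample(X_minor, n_new=90, k=3, rng=1)
X_balanced_minor = np.vstack([X_minor, X_syn])
print(X_balanced_minor.shape)  # (100, 4) -- minority class now matches the majority
```

Because each synthetic point is a convex combination of two minority samples, it always stays inside the minority class's per-feature bounds, which is why SMOTE balances the classes without inventing out-of-distribution traffic.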
High-throughput calculations combining machine learning to investigate the corrosion properties of binary Mg alloys (Cited by 3)
16
Authors: Yaowei Wang, Tian Xie, Qingli Tang, Mingxu Wang, Tao Ying, Hong Zhu, Xiaoqin Zeng. Journal of Magnesium and Alloys, SCIE EI CAS CSCD, 2024, No. 4, pp. 1406-1418 (13 pages)
Magnesium (Mg) alloys have shown great promise as both structural and biomedical materials, while poor corrosion resistance limits their further application. In this work, to avoid time-consuming and laborious experimental trials, a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloys with intermetallics, from both thermodynamic and kinetic perspectives. Stable binary Mg intermetallics with a low equilibrium potential difference with respect to the Mg matrix are first identified. Then, the hydrogen adsorption energies on the surfaces of these intermetallics are calculated, and the corrosion exchange current density is further obtained from a hydrogen evolution reaction (HER) kinetic model. Several intermetallics, e.g., Y_(3)Mg, Y_(2)Mg and La_(5)Mg, are identified as promising candidates that might effectively hinder the cathodic HER. Furthermore, machine learning (ML) models are developed to predict Mg intermetallics with proper hydrogen adsorption energy, employing the work function (W_(f)) and weighted first ionization energy (WFIE). The generalization of the ML models is tested on five new binary Mg intermetallics, with an average root-mean-square error (RMSE) of 0.11 eV. This study not only predicts promising binary Mg intermetallics that may suppress galvanic corrosion, but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloys, which can be extended to ternary Mg alloys or other alloy systems.
Keywords: Mg intermetallics, Corrosion property, High-throughput, Density functional theory, Machine learning
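The kinetic screening step, mapping a computed hydrogen adsorption energy to an HER exchange current, can be sketched with an illustrative volcano-type relation. The simple symmetric form below and all the ΔG_H values are assumptions for illustration only, not the paper's actual kinetic model or DFT data, and the candidate names are hypothetical:

```python
import math

K_B_T = 0.0257  # k_B * T in eV at ~298 K

def exchange_current(dg_h, i0_peak=1.0):
    """Illustrative volcano relation: HER activity peaks when the hydrogen
    adsorption free energy dg_h is ~0 eV and decays exponentially as the
    binding becomes too strong (dg_h < 0) or too weak (dg_h > 0)."""
    return i0_peak / (1.0 + math.exp(abs(dg_h) / K_B_T))

# hypothetical candidates with made-up dg_h values in eV: a corrosion-resistant
# cathodic phase should sit FAR from the volcano peak, i.e. have a low
# exchange current (slow hydrogen evolution)
candidates = {"A3Mg": 0.45, "B2Mg": 0.05, "C5Mg": -0.50}
ranked = sorted(candidates, key=lambda c: exchange_current(candidates[c]))
print(ranked[0])  # the slowest-HER, most cathodically inert candidate
```

This is the screening logic the abstract describes in reverse of catalyst design: for corrosion resistance one wants the *least* HER-active intermetallic surface.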
Machine learning with active pharmaceutical ingredient/polymer interaction mechanism: Prediction for complex phase behaviors of pharmaceuticals and formulations (Cited by 2)
17
Authors: Kai Ge, Yiping Huang, Yuanhui Ji. Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2024, No. 2, pp. 263-272 (10 pages)
The high-throughput prediction of the thermodynamic phase behavior of active pharmaceutical ingredients (APIs) with pharmaceutically relevant excipients remains a major scientific challenge in the screening of pharmaceutical formulations. In this work, a machine-learning model is developed that efficiently predicts the solubility of APIs in polymers by learning the phase-equilibrium principle and using a few molecular descriptors. Under a few-shot learning framework, thermodynamic theory (the perturbed-chain statistical associating fluid theory, PC-SAFT) was used for data augmentation, and computational chemistry was applied for molecular descriptor screening. The results show that the developed model can accurately predict the API-polymer phase diagram, broaden the solubility data of APIs in polymers, and successfully reproduce the relationship between API solubility and the API-polymer interaction mechanisms, providing efficient guidance for the development of pharmaceutical formulations.
Keywords: Multi-task machine learning, Density functional theory, Hydrogen bond interaction, Miscibility, Solubility
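The data-augmentation idea, padding a few experimental solubility points with theory-generated ones before fitting a surrogate model, can be sketched as follows. Note that the Flory-Huggins expression is used here as a much simpler stand-in for the paper's PC-SAFT model, and every number (chi, size ratio, noise level) is illustrative:

```python
import numpy as np

def flory_huggins_ln_activity(phi_api, chi, m=100):
    """Flory-Huggins log-activity of an API dissolved in a polymer
    (stand-in for PC-SAFT; m = polymer/API size ratio, chi = interaction)."""
    phi_p = 1.0 - phi_api
    return np.log(phi_api) + (1.0 - 1.0 / m) * phi_p + chi * phi_p ** 2

# few-shot setting: only 4 noisy "experimental" points are available
chi_true = 1.2
phi_exp = np.array([0.05, 0.1, 0.2, 0.3])
y_exp = flory_huggins_ln_activity(phi_exp, chi_true) \
        + np.random.default_rng(0).normal(0, 0.01, 4)

# thermodynamic data augmentation: 50 extra points generated by the theory
phi_aug = np.linspace(0.02, 0.5, 50)
y_aug = flory_huggins_ln_activity(phi_aug, chi_true)

# combined training set mirrors the paper's augmentation strategy;
# a polynomial fit stands in for the actual ML model
X_train = np.concatenate([phi_exp, phi_aug])
y_train = np.concatenate([y_exp, y_aug])
coef = np.polyfit(X_train, y_train, 3)
```

The surrogate trained on the augmented set can then interpolate the phase behavior between the sparse experimental compositions, which is exactly what makes few-shot learning viable here.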
Prediction model for corrosion rate of low-alloy steels under atmospheric conditions using machine learning algorithms (Cited by 2)
18
Authors: Jingou Kuang, Zhilin Long. International Journal of Minerals, Metallurgy and Materials, SCIE EI CAS CSCD, 2024, No. 2, pp. 337-350 (14 pages)
This work constructed a machine learning (ML) model to predict the atmospheric corrosion rate of low-alloy steels (LAS). The material properties of LAS, environmental factors, and exposure time were used as inputs, with the corrosion rate as the output. Six different ML algorithms were used to construct the proposed model. Through optimization and filtering, the eXtreme Gradient Boosting (XGBoost) model exhibited good corrosion-rate prediction accuracy. The material-property features were then transformed into atomic and physical features using the proposed property-transformation approach, and the dominant descriptors affecting the corrosion rate were filtered using recursive feature elimination (RFE) together with XGBoost. The established ML models exhibited better prediction performance and generalization ability with the property-transformation descriptors. In addition, the SHapley Additive exPlanations (SHAP) method was applied to analyze the relationship between the descriptors and the corrosion rate. The results showed that the property-transformation model could effectively help analyze corrosion behavior, thereby significantly improving the generalization ability of corrosion-rate prediction models.
Keywords: Machine learning, Low-alloy steel, Atmospheric corrosion prediction, Corrosion rate, Feature fusion
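The descriptor-filtering step can be illustrated with a minimal recursive-feature-elimination loop. Here |correlation| with the target is used as a stand-in for the XGBoost importances the paper ranks features by, and the toy corrosion dataset below is synthetic:

```python
import numpy as np

def rfe_rank(X, y, keep=2):
    """Minimal RFE sketch: repeatedly drop the remaining feature whose
    |correlation| with the target is weakest, until `keep` survive."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > keep:
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in remaining]
        remaining.pop(int(np.argmin(scores)))   # eliminate weakest descriptor
    return remaining

# synthetic stand-in for the paper's inputs: exposure time and an
# environmental factor drive the corrosion rate, plus one useless descriptor
rng = np.random.default_rng(0)
n = 200
time_exposed = rng.uniform(1, 10, n)
humidity = rng.uniform(40, 95, n)
noise_feat = rng.normal(size=n)                 # irrelevant by construction
y = 0.8 * time_exposed + 0.05 * humidity + rng.normal(0, 0.1, n)
X = np.column_stack([time_exposed, humidity, noise_feat])
print(rfe_rank(X, y, keep=2))  # the two informative columns survive
```

Real RFE re-fits the model after every elimination so that importances are re-estimated on the reduced feature set; the loop structure is the same.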
Deep learning-based inpainting of saturation artifacts in optical coherence tomography images (Cited by 2)
19
Authors: Muyun Hu, Zhuoqun Yuan, Di Yang, Jingzhu Zhao, Yanmei Liang. Journal of Innovative Optical Health Sciences, SCIE EI CSCD, 2024, No. 3, pp. 1-10 (10 pages)
Limited by the dynamic range of the detector, saturation artifacts usually occur in optical coherence tomography (OCT) imaging of highly scattering media. Existing methods struggle to remove saturation artifacts and completely restore texture in OCT images. We propose a deep learning-based inpainting method for saturation artifacts in this paper. The generation mechanism of saturation artifacts is analyzed, and experimental and simulated datasets are built based on this mechanism. Enhanced super-resolution generative adversarial networks (ESRGAN) are trained on clear-saturated phantom image pairs. The faithfully reconstructed results on experimental zebrafish and thyroid OCT images demonstrate the method's feasibility, strong generalization, and robustness.
Keywords: Optical coherence tomography, Saturation artifacts, Deep learning, Image inpainting
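The simulated-dataset step, generating clear-saturated training pairs from the artifact's generation mechanism, can be sketched as below. The artifact model (full-depth saturated A-scan columns at the detector ceiling) is a deliberate simplification of the paper's mechanism, and the image content is random noise for illustration:

```python
import numpy as np

def add_saturation_artifact(bscan, cols, sat_val=255):
    """Simulate an OCT saturation artifact: the listed A-scan columns are
    driven to the detector ceiling, producing the bright vertical stripes
    that an inpainting network learns to remove. Returns the corrupted
    image plus the inpainting mask used as a training target."""
    corrupted = bscan.copy()
    corrupted[:, cols] = sat_val            # saturated A-lines span full depth
    mask = np.zeros_like(bscan, dtype=bool)
    mask[:, cols] = True                    # marks pixels the network must fill
    return corrupted, mask

# toy clean B-scan (random speckle stand-in) and its saturated counterpart
rng = np.random.default_rng(0)
clean = rng.integers(0, 200, (64, 64)).astype(np.uint8)
corrupted, mask = add_saturation_artifact(clean, cols=[10, 30, 31])
```

Pairing `corrupted` with `clean` over many such images yields exactly the supervised clear-saturated dataset on which a GAN-style inpainter can be trained.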
A game-theoretic approach for federated learning: A trade-off among privacy, accuracy and energy (Cited by 2)
20
Authors: Lihua Yin, Sixin Lin, Zhe Sun, Ran Li, Yuanyuan He, Zhiqiang Hao. Digital Communications and Networks, SCIE CSCD, 2024, No. 2, pp. 389-403 (15 pages)
Benefiting from the development of Federated Learning (FL) and distributed communication systems, large-scale intelligent applications have become possible. Distributed devices not only provide adequate training data but also cause privacy leakage and energy consumption. How to optimize energy consumption in distributed communication systems, while ensuring user privacy and model accuracy, has become an urgent challenge. In this paper, we define FL as a three-layer architecture comprising users, agents, and a server. To find a balance among model-training accuracy, the privacy-preserving effect, and energy consumption, we design the FL training process as game models. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in a single game, and then derive, through the repeated game, an incentive mechanism that meets social norms. The experimental results show that the obtained Nash equilibrium is consistent with real-world behavior, and the proposed incentive mechanism promotes users to submit high-quality data in FL. Over multiple rounds of play, the incentive mechanism helps all players find optimal strategies for the energy, privacy, and accuracy of FL in distributed communication systems.
Keywords: Federated learning, Privacy preservation, Energy optimization, Game theory, Distributed communication systems
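The incentive logic can be illustrated with a toy one-shot payoff: each user picks a data quality, trading the server's reward against quadratic energy and privacy costs. All coefficients below are invented for illustration; the paper's actual game (three layers, repeated play) is richer:

```python
def payoff(quality, reward_rate, energy_cost=0.5, privacy_cost=0.3):
    """Toy per-round payoff for an FL user submitting data of a chosen
    quality in [0, 1]: the server pays reward_rate * quality, while
    energy and privacy losses grow quadratically with quality."""
    return reward_rate * quality - (energy_cost + privacy_cost) * quality ** 2

def best_response(reward_rate, grid=101):
    """Grid-search the quality level that maximizes the user's payoff."""
    qs = [i / (grid - 1) for i in range(grid)]
    return max(qs, key=lambda q: payoff(q, reward_rate))

# with a weak reward the user rationally free-rides with near-zero quality;
# a sufficiently generous reward makes full-quality data the best response
print(best_response(0.1), best_response(1.6))
```

Analytically the interior optimum is q* = reward_rate / (2 * (energy_cost + privacy_cost)), so raising the reward rate shifts the Nash-equilibrium quality upward, which is the mechanism-design effect the abstract reports.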