The accurate prediction of peak overpressure of explosion shockwaves is significant in fields such as explosion hazard assessment and structural protection, where explosion shockwaves serve as typical destructive elem...The accurate prediction of peak overpressure of explosion shockwaves is significant in fields such as explosion hazard assessment and structural protection, where explosion shockwaves serve as typical destructive elements. Aiming at the problem of insufficient accuracy of the existing physical models for predicting the peak overpressure of ground reflected waves, two physics-informed machine learning models are constructed. The results demonstrate that the machine learning models, which incorporate physical information by predicting the deviation between the physical model and actual values and adding a physical loss term in the loss function, can accurately predict both the training and out-oftraining dataset. Compared to existing physical models, the average relative error in the predicted training domain is reduced from 17.459%-48.588% to 2%, and the proportion of average relative error less than 20% increased from 0% to 59.4% to more than 99%. In addition, the relative average error outside the prediction training set range is reduced from 14.496%-29.389% to 5%, and the proportion of relative average error less than 20% increased from 0% to 71.39% to more than 99%. The inclusion of a physical loss term enforcing monotonicity in the loss function effectively improves the extrapolation performance of machine learning. The findings of this study provide valuable reference for explosion hazard assessment and anti-explosion structural design in various fields.展开更多
In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine ...In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine learning-based technique.In order to increase the prediction accuracy of the reference point position on the data collected using the fingerprinting method over LoRa technology,this study proposed an optimized machine learning(ML)based algorithm.Received signal strength indicator(RSSI)data from the sensors at different positions was first gathered via an experiment through the LoRa network in a multistory round layout building.The noise factor is also taken into account,and the signal-to-noise ratio(SNR)value is recorded for every RSSI measurement.This study concludes the examination of reference point accuracy with the modified KNN method(MKNN).MKNN was created to more precisely anticipate the position of the reference point.The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity.展开更多
The High Altitude Detection of Astronomical Radiation(HADAR)experiment,which was constructed in Tibet,China,combines the wide-angle advantages of traditional EAS array detectors with the high-sensitivity advantages of...The High Altitude Detection of Astronomical Radiation(HADAR)experiment,which was constructed in Tibet,China,combines the wide-angle advantages of traditional EAS array detectors with the high-sensitivity advantages of focused Cherenkov detectors.Its objective is to observe transient sources such as gamma-ray bursts and the counterparts of gravitational waves.This study aims to utilize the latest AI technology to enhance the sensitivity of HADAR experiments.Training datasets and models with distinctive creativity were constructed by incorporating the relevant physical theories for various applications.These models can determine the type,energy,and direction of the incident particles after careful design.We obtained a background identification accuracy of 98.6%,a relative energy reconstruction error of 10.0%,and an angular resolution of 0.22°in a test dataset at 10 TeV.These findings demonstrate the significant potential for enhancing the precision and dependability of detector data analysis in astrophysical research.By using deep learning techniques,the HADAR experiment’s observational sensitivity to the Crab Nebula has surpassed that of MAGIC and H.E.S.S.at energies below 0.5 TeV and remains competitive with conventional narrow-field Cherenkov telescopes at higher energies.In addition,our experiment offers a new approach for dealing with strongly connected,scattered data.展开更多
Near-fault impulsive ground-shaking is highly destructive to engineering structures,so its accurate identification ground-shaking is a top priority in the engineering field.However,due to the lack of a comprehensive c...Near-fault impulsive ground-shaking is highly destructive to engineering structures,so its accurate identification ground-shaking is a top priority in the engineering field.However,due to the lack of a comprehensive consideration of the ground-shaking characteristics in traditional methods,the generalization and accuracy of the identification process are low.To address these problems,an impulsive ground-shaking identification method combined with deep learning named PCA-LSTM is proposed.Firstly,ground-shaking characteristics were analyzed and groundshaking the data was annotated using Baker’smethod.Secondly,the Principal Component Analysis(PCA)method was used to extract the most relevant features related to impulsive ground-shaking.Thirdly,a Long Short-Term Memory network(LSTM)was constructed,and the extracted features were used as the input for training.Finally,the identification results for the Artificial Neural Network(ANN),Convolutional Neural Network(CNN),LSTM,and PCA-LSTMmodels were compared and analyzed.The experimental results showed that the proposed method improved the accuracy of pulsed ground-shaking identification by>8.358%and identification speed by>26.168%,compared to other benchmark models ground-shaking.展开更多
Accessing drinking water is a global issue. This study aims to contribute to the assessment of groundwater quality in the municipality of Za-Kpota (southern Benin) using remote sensing and Machine Learning. The method...Accessing drinking water is a global issue. This study aims to contribute to the assessment of groundwater quality in the municipality of Za-Kpota (southern Benin) using remote sensing and Machine Learning. The methodological approach used consisted in linking groundwater physico-chemical parameter data collected in the field and in the laboratory using AFNOR 1994 standardized methods to satellite data (Landsat) in order to sketch out a groundwater quality prediction model. The data was processed using QGis (Semi-Automatic Plugin: SCP) and Python (Jupyter Netebook: Prediction) softwares. The results of water analysis from the sampled wells and boreholes indicated that most of the water is acidic (pH varying between 5.59 and 7.83). The water was moderately mineralized, with conductivity values of less than 1500 μs/cm overall (59 µS/cm to 1344 µS/cm), with high concentrations of nitrates and phosphates in places. The dynamics of groundwater quality in the municipality of Za-Kpota between 2008 and 2022 are also marked by a regression in land use units (a regression in vegetation and marshland formation in favor of built-up areas, bare soil, crops and fallow land) revealed by the diachronic analysis of satellite images from 2008, 2013, 2018 and 2022. Surveys of local residents revealed the use of herbicides and pesticides in agricultural fields, which are the main drivers contributing to the groundwater quality deterioration observed in the study area. Field surveys revealed the use of herbicides and pesticides in agricultural fields, which are factors contributing to the deterioration in groundwater quality observed in the study area. The results of the groundwater quality prediction models (ANN, RF and LR) developed led to the conclusion that the model based on Artificial Neural Networks (ANN: R2 = 0.97 and RMSE = 0) is the best for groundwater quality changes modelling in the Za-Kpota municipality.展开更多
This research aims to develop reliable models using machine learning algorithms to precisely predict Total Dissolved Solids (TDS) in wells of the Permian basin, Winkler County, Texas. The data for this contribution wa...This research aims to develop reliable models using machine learning algorithms to precisely predict Total Dissolved Solids (TDS) in wells of the Permian basin, Winkler County, Texas. The data for this contribution was obtained from the Texas Water Development Board website (TWDB). Five hundred and ninety-three samples were obtained from two hundred and ninety-eight wells in the study area. The wells were drilled at different county locations into five aquifers, including Pecos Valley, Dockum, Capitan Reef, Edward Trinity, and Rustler aquifers. A total of fourteen different water quality parameters were used, and they include Potential hydrogen (pH), Sodium, Chloride, Magnesium, Fluoride, TDS, Specific Conductance, Nitrate, Total Hardness, Calcium, Temperature, Well Depth, Sulphate, and Bicarbonates. Four machine learning regression algorithms were developed to get a good model to help predict TDS in this area: Decision Tree regression, Linear regression, Support Vector Regression, and K-nearest neighbor. The study showed that the Decision Tree produced the best model with attributes like the coefficient of determination R2 = 1.00 and 0.96 for the training and testing, respectively. It also produced the lowest score of mean absolute error MAE = 0.00 and 0.04 for training and testing, respectively. This study will reduce the cost of obtaining different water quality parameters in TDS determination by leveraging machine learning to use only the parameters contributing to TDS, thereby helping researchers obtain only the parameters necessary for TDS prediction. It will also help the authorities enact policies that will improve the water quality in areas where drinking water availability is a challenge by providing important information for monitoring and assessing groundwater quality.展开更多
The scale of ground-to-air confrontation task assignments is large and needs to deal with many concurrent task assignments and random events.Aiming at the problems where existing task assignment methods are applied to...The scale of ground-to-air confrontation task assignments is large and needs to deal with many concurrent task assignments and random events.Aiming at the problems where existing task assignment methods are applied to ground-to-air confrontation,there is low efficiency in dealing with complex tasks,and there are interactive conflicts in multiagent systems.This study proposes a multiagent architecture based on a one-general agent with multiple narrow agents(OGMN)to reduce task assignment conflicts.Considering the slow speed of traditional dynamic task assignment algorithms,this paper proposes the proximal policy optimization for task assignment of general and narrow agents(PPOTAGNA)algorithm.The algorithm based on the idea of the optimal assignment strategy algorithm and combined with the training framework of deep reinforcement learning(DRL)adds a multihead attention mechanism and a stage reward mechanism to the bilateral band clipping PPO algorithm to solve the problem of low training efficiency.Finally,simulation experiments are carried out in the digital battlefield.The multiagent architecture based on OGMN combined with the PPO-TAGNA algorithm can obtain higher rewards faster and has a higher win ratio.By analyzing agent behavior,the efficiency,superiority and rationality of resource utilization of this method are verified.展开更多
A procedure to recognize individual discontinuities in rock mass from measurement while drilling(MWD)technology is developed,using the binary pattern of structural rock characteristics obtained from in-hole images for...A procedure to recognize individual discontinuities in rock mass from measurement while drilling(MWD)technology is developed,using the binary pattern of structural rock characteristics obtained from in-hole images for calibration.Data from two underground operations with different drilling technology and different rock mass characteristics are considered,which generalizes the application of the methodology to different sites and ensures the full operational integration of MWD data analysis.Two approaches are followed for site-specific structural model building:a discontinuity index(DI)built from variations in MWD parameters,and a machine learning(ML)classifier as function of the drilling parameters and their variability.The prediction ability of the models is quantitatively assessed as the rate of recognition of discontinuities observed in borehole logs.Differences between the parameters involved in the models for each site,and differences in their weights,highlight the site-dependence of the resulting models.The ML approach offers better performance than the classical DI,with recognition rates in the range 89%to 96%.However,the simpler DI still yields fairly accurate results,with recognition rates 70%to 90%.These results validate the adaptive MWD-based methodology as an engineering solution to predict rock structural condition in underground mining operations.展开更多
The sewer system plays an important role in protecting rainfall and treating urban wastewater.Due to the harsh internal environment and complex structure of the sewer,it is difficult to monitor the sewer system.Resear...The sewer system plays an important role in protecting rainfall and treating urban wastewater.Due to the harsh internal environment and complex structure of the sewer,it is difficult to monitor the sewer system.Researchers are developing different methods,such as the Internet of Things and Artificial Intelligence,to monitor and detect the faults in the sewer system.Deep learning is a promising artificial intelligence technology that can effectively identify and classify different sewer system defects.However,the existing deep learning based solution does not provide high accuracy prediction and the defect class considered for classification is very small,which can affect the robustness of the model in the constraint environment.As a result,this paper proposes a sewer condition monitoring framework based on deep learning,which can effectively detect and evaluate defects in sewer pipelines with high accuracy.We also introduce a large dataset of sewer defects with 20 different defect classes found in the sewer pipeline.This study modified the original RegNet model by modifying the squeeze excitation(SE)block and adding the dropout layer and Leaky Rectified Linear Units(LeakyReLU)activation function in the Block structure of RegNet model.This study explored different deep learning methods such as RegNet,ResNet50,very deep convolutional networks(VGG),and GoogleNet to train on the sewer defect dataset.The experimental results indicate that the proposed system framework based on the modified-RegNet(RegNet+)model achieves the highest accuracy of 99.5 compared with the commonly used deep learning models.The proposed model provides a robust deep learning model that can effectively classify 20 different sewer defects and be utilized in real-world sewer condition monitoring applications.展开更多
This paper discusses about the new approach of multiple object track-ing relative to background information.The concept of multiple object tracking through background learning is based upon the theory of relativity,th...This paper discusses about the new approach of multiple object track-ing relative to background information.The concept of multiple object tracking through background learning is based upon the theory of relativity,that involves a frame of reference in spatial domain to localize and/or track any object.Thefield of multiple object tracking has seen a lot of research,but researchers have considered the background as redundant.However,in object tracking,the back-ground plays a vital role and leads to definite improvement in the overall process of tracking.In the present work an algorithm is proposed for the multiple object tracking through background learning.The learning framework is based on graph embedding approach for localizing multiple objects.The graph utilizes the inher-ent capabilities of depth modelling that assist in prior to track occlusion avoidance among multiple objects.The proposed algorithm has been compared with the recent work available in literature on numerous performance evaluation measures.It is observed that our proposed algorithm gives better performance.展开更多
Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning technique...Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning techniques have emerged as promising tools in stroke medicine,enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches.This abstract provides a comprehensive overview of machine learning’s applications,challenges,and future directions in stroke medicine.Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine.Machine learning models have demonstrated remarkable accuracy in imaging analysis,diagnosing stroke subtypes,risk stratifications,guiding medical treatment,and predicting patient prognosis.Despite the tremendous potential of machine learning in stroke medicine,several challenges must be addressed.These include the need for standardized and interoperable data collection,robust model validation and generalization,and the ethical considerations surrounding privacy and bias.In addition,integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care.Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis,tailored treatment selection,and improved prognostication.Continued research and collaboration among clinicians,researchers,and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care,ultimately leading to enhanced patient outcomes and quality of life.This review aims to summarize all the current implications of machine learning in stroke diagnosis,treatment,and prognostic evaluation.At the same time,another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.展开更多
Vascular etiology is the second most prevalent cause of cognitive impairment globally.Endothelin-1,which is produced and secreted by endothelial cells and astrocytes,is implicated in the pathogenesis of stroke.However...Vascular etiology is the second most prevalent cause of cognitive impairment globally.Endothelin-1,which is produced and secreted by endothelial cells and astrocytes,is implicated in the pathogenesis of stroke.However,the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood.Here,using mice in which astrocytic endothelin-1 was overexpressed,we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia(1 hour of ischemia;7 days,28 days,or 3 months of reperfusion).We also revealed that astrocytic endothelin-1 overexpression contributed to the role of neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion.Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6,which were differentially expressed in the brain,were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke.Moreover,the levels of the enriched differentially expressed proteins were closely related to lipid metabolism,as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis.Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine,sphingomyelin,and phosphatidic acid.Overall,this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.展开更多
BACKGROUND Intensive care unit-acquired weakness(ICU-AW)is a common complication that significantly impacts the patient's recovery process,even leading to adverse outcomes.Currently,there is a lack of effective pr...BACKGROUND Intensive care unit-acquired weakness(ICU-AW)is a common complication that significantly impacts the patient's recovery process,even leading to adverse outcomes.Currently,there is a lack of effective preventive measures.AIM To identify significant risk factors for ICU-AW through iterative machine learning techniques and offer recommendations for its prevention and treatment.METHODS Patients were categorized into ICU-AW and non-ICU-AW groups on the 14th day post-ICU admission.Relevant data from the initial 14 d of ICU stay,such as age,comorbidities,sedative dosage,vasopressor dosage,duration of mechanical ventilation,length of ICU stay,and rehabilitation therapy,were gathered.The relationships between these variables and ICU-AW were examined.Utilizing iterative machine learning techniques,a multilayer perceptron neural network model was developed,and its predictive performance for ICU-AW was assessed using the receiver operating characteristic curve.RESULTS Within the ICU-AW group,age,duration of mechanical ventilation,lorazepam dosage,adrenaline dosage,and length of ICU stay were significantly higher than in the non-ICU-AW group.Additionally,sepsis,multiple organ dysfunction syndrome,hypoalbuminemia,acute heart failure,respiratory failure,acute kidney injury,anemia,stress-related gastrointestinal bleeding,shock,hypertension,coronary artery disease,malignant tumors,and rehabilitation therapy ratios were significantly higher in the ICU-AW group,demonstrating statistical significance.The most influential factors contributing to ICU-AW were identified as the length of ICU stay(100.0%)and the duration of mechanical ventilation(54.9%).The neural network model predicted ICU-AW with an area under the curve of 0.941,sensitivity of 92.2%,and specificity of 82.7%.CONCLUSION The main factors influencing ICU-AW are the length of ICU stay and the duration of mechanical ventilation.A primary preventive strategy,when feasible,involves minimizing both ICU stay and mechanical ventilation duration.展开更多
Reinforcement learning(RL) has roots in dynamic programming and it is called adaptive/approximate dynamic programming(ADP) within the control community. This paper reviews recent developments in ADP along with RL and ...Reinforcement learning(RL) has roots in dynamic programming and it is called adaptive/approximate dynamic programming(ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results towards discrete-time systems and continuous-time systems are surveyed, respectively.Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environment is discussed, respectively, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environment attract enormous attention. The ADP architecture is revisited under the perspective of data-driven and RL frameworks,showing how they promote ADP formulation significantly.Finally, several typical control applications with respect to RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, the comprehensive survey on ADP and RL for advanced control applications has d emonstrated its remarkable potential within the artificial intelligence era. In addition, it also plays a vital role in promoting environmental protection and industrial intelligence.展开更多
A network intrusion detection system is critical for cyber security against llegitimate attacks.In terms of feature perspectives,network traffic may include a variety of elements such as attack reference,attack type,a...A network intrusion detection system is critical for cyber security against llegitimate attacks.In terms of feature perspectives,network traffic may include a variety of elements such as attack reference,attack type,a subcategory of attack,host information,malicious scripts,etc.In terms of network perspectives,network traffic may contain an imbalanced number of harmful attacks when compared to normal traffic.It is challenging to identify a specific attack due to complex features and data imbalance issues.To address these issues,this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic(IDS-INT).IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data.First,detailed information about each type of attack is gathered from network interaction descriptions,which include network nodes,attack type,reference,host information,etc.Second,the transformer-based transfer learning approach is developed to learn detailed feature representation using their semantic anchors.Third,the Synthetic Minority Oversampling Technique(SMOTE)is implemented to balance abnormal traffic and detect minority attacks.Fourth,the Convolution Neural Network(CNN)model is designed to extract deep features from the balanced network traffic.Finally,the hybrid approach of the CNN-Long Short-Term Memory(CNN-LSTM)model is developed to detect different types of attacks from the deep features.Detailed experiments are conducted to test the proposed approach using three standard datasets,i.e.,UNsWNB15,CIC-IDS2017,and NSL-KDD.An explainable AI approach is implemented to interpret the proposed method and develop a trustable model.展开更多
Magnesium(Mg)alloys have shown great prospects as both structural and biomedical materials,while poor corrosion resistance limits their further application.In this work,to avoid the time-consuming and laborious experi...Magnesium(Mg)alloys have shown great prospects as both structural and biomedical materials,while poor corrosion resistance limits their further application.In this work,to avoid the time-consuming and laborious experiment trial,a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloy with intermetallics,from both the thermodynamic and kinetic perspectives.The stable binary Mg intermetallics with low equilibrium potential difference with respect to the Mg matrix are firstly identified.Then,the hydrogen adsorption energies on the surfaces of these Mg intermetallics are calculated,and the corrosion exchange current density is further calculated by a hydrogen evolution reaction(HER)kinetic model.Several intermetallics,e.g.Y_(3)Mg,Y_(2)Mg and La_(5)Mg,are identified to be promising intermetallics which might effectively hinder the cathodic HER.Furthermore,machine learning(ML)models are developed to predict Mg intermetallics with proper hydrogen adsorption energy employing work function(W_(f))and weighted first ionization energy(WFIE).The generalization of the ML models is tested on five new binary Mg intermetallics with the average root mean square error(RMSE)of 0.11 eV.This study not only predicts some promising binary Mg intermetallics which may suppress the galvanic corrosion,but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloy,which can be extended to ternary Mg alloys or other alloy systems.展开更多
The high throughput prediction of the thermodynamic phase behavior of active pharmaceutical ingredients(APIs)with pharmaceutically relevant excipients remains a major scientific challenge in the screening of pharmaceu...The high throughput prediction of the thermodynamic phase behavior of active pharmaceutical ingredients(APIs)with pharmaceutically relevant excipients remains a major scientific challenge in the screening of pharmaceutical formulations.In this work,a developed machine-learning model efficiently predicts the solubility of APIs in polymers by learning the phase equilibrium principle and using a few molecular descriptors.Under the few-shot learning framework,thermodynamic theory(perturbed-chain statistical associating fluid theory)was used for data augmentation,and computational chemistry was applied for molecular descriptors'screening.The results showed that the developed machine-learning model can predict the API-polymer phase diagram accurately,broaden the solubility data of APIs in polymers,and reproduce the relationship between API solubility and the interaction mechanisms between API and polymer successfully,which provided efficient guidance for the development of pharmaceutical formulations.展开更多
This work constructed a machine learning(ML)model to predict the atmospheric corrosion rate of low-alloy steels(LAS).The material properties of LAS,environmental factors,and exposure time were used as the input,while ...This work constructed a machine learning(ML)model to predict the atmospheric corrosion rate of low-alloy steels(LAS).The material properties of LAS,environmental factors,and exposure time were used as the input,while the corrosion rate as the output.6 dif-ferent ML algorithms were used to construct the proposed model.Through optimization and filtering,the eXtreme gradient boosting(XG-Boost)model exhibited good corrosion rate prediction accuracy.The features of material properties were then transformed into atomic and physical features using the proposed property transformation approach,and the dominant descriptors that affected the corrosion rate were filtered using the recursive feature elimination(RFE)as well as XGBoost methods.The established ML models exhibited better predic-tion performance and generalization ability via property transformation descriptors.In addition,the SHapley additive exPlanations(SHAP)method was applied to analyze the relationship between the descriptors and corrosion rate.The results showed that the property transformation model could effectively help with analyzing the corrosion behavior,thereby significantly improving the generalization ability of corrosion rate prediction models.展开更多
Limited by the dynamic range of the detector,saturation artifacts usually occur in optical coherence tomography(OCT)imaging for high scattering media.The available methods are difficult to remove saturation artifacts ...Limited by the dynamic range of the detector,saturation artifacts usually occur in optical coherence tomography(OCT)imaging for high scattering media.The available methods are difficult to remove saturation artifacts and restore texture completely in OCT images.We proposed a deep learning-based inpainting method of saturation artifacts in this paper.The generation mechanism of saturation artifacts was analyzed,and experimental and simulated datasets were built based on the mechanism.Enhanced super-resolution generative adversarial networks were trained by the clear–saturated phantom image pairs.The perfect reconstructed results of experimental zebrafish and thyroid OCT images proved its feasibility,strong generalization,and robustness.展开更多
Benefiting from the development of Federated Learning(FL)and distributed communication systems,large-scale intelligent applications become possible.Distributed devices not only provide adequate training data,but also ...Benefiting from the development of Federated Learning(FL)and distributed communication systems,large-scale intelligent applications become possible.Distributed devices not only provide adequate training data,but also cause privacy leakage and energy consumption.How to optimize the energy consumption in distributed communication systems,while ensuring the privacy of users and model accuracy,has become an urgent challenge.In this paper,we define the FL as a 3-layer architecture including users,agents and server.In order to find a balance among model training accuracy,privacy-preserving effect,and energy consumption,we design the training process of FL as game models.We use an extensive game tree to analyze the key elements that influence the players’decisions in the single game,and then find the incentive mechanism that meet the social norms through the repeated game.The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality,and the proposed incentive mechanism can also promote users to submit high-quality data in FL.Following the multiple rounds of play,the incentive mechanism can help all players find the optimal strategies for energy,privacy,and accuracy of FL in distributed communication systems.展开更多
文摘The accurate prediction of peak overpressure of explosion shockwaves is significant in fields such as explosion hazard assessment and structural protection, where explosion shockwaves serve as typical destructive elements. Aiming at the problem of insufficient accuracy of the existing physical models for predicting the peak overpressure of ground reflected waves, two physics-informed machine learning models are constructed. The results demonstrate that the machine learning models, which incorporate physical information by predicting the deviation between the physical model and actual values and adding a physical loss term in the loss function, can accurately predict both the training and out-oftraining dataset. Compared to existing physical models, the average relative error in the predicted training domain is reduced from 17.459%-48.588% to 2%, and the proportion of average relative error less than 20% increased from 0% to 59.4% to more than 99%. In addition, the relative average error outside the prediction training set range is reduced from 14.496%-29.389% to 5%, and the proportion of relative average error less than 20% increased from 0% to 71.39% to more than 99%. The inclusion of a physical loss term enforcing monotonicity in the loss function effectively improves the extrapolation performance of machine learning. The findings of this study provide valuable reference for explosion hazard assessment and anti-explosion structural design in various fields.
基金The research will be funded by the Multimedia University,Department of Information Technology,Persiaran Multimedia,63100,Cyberjaya,Selangor,Malaysia.
文摘In situations when the precise position of a machine is unknown,localization becomes crucial.This research focuses on improving the position prediction accuracy over long-range(LoRa)network using an optimized machine learning-based technique.In order to increase the prediction accuracy of the reference point position on the data collected using the fingerprinting method over LoRa technology,this study proposed an optimized machine learning(ML)based algorithm.Received signal strength indicator(RSSI)data from the sensors at different positions was first gathered via an experiment through the LoRa network in a multistory round layout building.The noise factor is also taken into account,and the signal-to-noise ratio(SNR)value is recorded for every RSSI measurement.This study concludes the examination of reference point accuracy with the modified KNN method(MKNN).MKNN was created to more precisely anticipate the position of the reference point.The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity.
文摘The High Altitude Detection of Astronomical Radiation(HADAR)experiment,which was constructed in Tibet,China,combines the wide-angle advantages of traditional EAS array detectors with the high-sensitivity advantages of focused Cherenkov detectors.Its objective is to observe transient sources such as gamma-ray bursts and the counterparts of gravitational waves.This study aims to utilize the latest AI technology to enhance the sensitivity of HADAR experiments.Training datasets and models with distinctive creativity were constructed by incorporating the relevant physical theories for various applications.These models can determine the type,energy,and direction of the incident particles after careful design.We obtained a background identification accuracy of 98.6%,a relative energy reconstruction error of 10.0%,and an angular resolution of 0.22°in a test dataset at 10 TeV.These findings demonstrate the significant potential for enhancing the precision and dependability of detector data analysis in astrophysical research.By using deep learning techniques,the HADAR experiment’s observational sensitivity to the Crab Nebula has surpassed that of MAGIC and H.E.S.S.at energies below 0.5 TeV and remains competitive with conventional narrow-field Cherenkov telescopes at higher energies.In addition,our experiment offers a new approach for dealing with strongly connected,scattered data.
文摘Near-fault impulsive ground-shaking is highly destructive to engineering structures,so its accurate identification ground-shaking is a top priority in the engineering field.However,due to the lack of a comprehensive consideration of the ground-shaking characteristics in traditional methods,the generalization and accuracy of the identification process are low.To address these problems,an impulsive ground-shaking identification method combined with deep learning named PCA-LSTM is proposed.Firstly,ground-shaking characteristics were analyzed and groundshaking the data was annotated using Baker’smethod.Secondly,the Principal Component Analysis(PCA)method was used to extract the most relevant features related to impulsive ground-shaking.Thirdly,a Long Short-Term Memory network(LSTM)was constructed,and the extracted features were used as the input for training.Finally,the identification results for the Artificial Neural Network(ANN),Convolutional Neural Network(CNN),LSTM,and PCA-LSTMmodels were compared and analyzed.The experimental results showed that the proposed method improved the accuracy of pulsed ground-shaking identification by>8.358%and identification speed by>26.168%,compared to other benchmark models ground-shaking.
文摘Accessing drinking water is a global issue. This study aims to contribute to the assessment of groundwater quality in the municipality of Za-Kpota (southern Benin) using remote sensing and Machine Learning. The methodological approach used consisted in linking groundwater physico-chemical parameter data collected in the field and in the laboratory using AFNOR 1994 standardized methods to satellite data (Landsat) in order to sketch out a groundwater quality prediction model. The data was processed using QGis (Semi-Automatic Plugin: SCP) and Python (Jupyter Netebook: Prediction) softwares. The results of water analysis from the sampled wells and boreholes indicated that most of the water is acidic (pH varying between 5.59 and 7.83). The water was moderately mineralized, with conductivity values of less than 1500 μs/cm overall (59 µS/cm to 1344 µS/cm), with high concentrations of nitrates and phosphates in places. The dynamics of groundwater quality in the municipality of Za-Kpota between 2008 and 2022 are also marked by a regression in land use units (a regression in vegetation and marshland formation in favor of built-up areas, bare soil, crops and fallow land) revealed by the diachronic analysis of satellite images from 2008, 2013, 2018 and 2022. Surveys of local residents revealed the use of herbicides and pesticides in agricultural fields, which are the main drivers contributing to the groundwater quality deterioration observed in the study area. Field surveys revealed the use of herbicides and pesticides in agricultural fields, which are factors contributing to the deterioration in groundwater quality observed in the study area. The results of the groundwater quality prediction models (ANN, RF and LR) developed led to the conclusion that the model based on Artificial Neural Networks (ANN: R2 = 0.97 and RMSE = 0) is the best for groundwater quality changes modelling in the Za-Kpota municipality.
文摘This research aims to develop reliable models using machine learning algorithms to precisely predict Total Dissolved Solids (TDS) in wells of the Permian basin, Winkler County, Texas. The data for this contribution was obtained from the Texas Water Development Board website (TWDB). Five hundred and ninety-three samples were obtained from two hundred and ninety-eight wells in the study area. The wells were drilled at different county locations into five aquifers, including Pecos Valley, Dockum, Capitan Reef, Edward Trinity, and Rustler aquifers. A total of fourteen different water quality parameters were used, and they include Potential hydrogen (pH), Sodium, Chloride, Magnesium, Fluoride, TDS, Specific Conductance, Nitrate, Total Hardness, Calcium, Temperature, Well Depth, Sulphate, and Bicarbonates. Four machine learning regression algorithms were developed to get a good model to help predict TDS in this area: Decision Tree regression, Linear regression, Support Vector Regression, and K-nearest neighbor. The study showed that the Decision Tree produced the best model with attributes like the coefficient of determination R2 = 1.00 and 0.96 for the training and testing, respectively. It also produced the lowest score of mean absolute error MAE = 0.00 and 0.04 for training and testing, respectively. This study will reduce the cost of obtaining different water quality parameters in TDS determination by leveraging machine learning to use only the parameters contributing to TDS, thereby helping researchers obtain only the parameters necessary for TDS prediction. It will also help the authorities enact policies that will improve the water quality in areas where drinking water availability is a challenge by providing important information for monitoring and assessing groundwater quality.
基金the Project of National Natural Science Foundation of China(Grant No.62106283)the Project of National Natural Science Foundation of China(Grant No.72001214)to provide fund for conducting experimentsthe Project of Natural Science Foundation of Shaanxi Province(Grant No.2020JQ-484)。
文摘The scale of ground-to-air confrontation task assignments is large and needs to deal with many concurrent task assignments and random events.Aiming at the problems where existing task assignment methods are applied to ground-to-air confrontation,there is low efficiency in dealing with complex tasks,and there are interactive conflicts in multiagent systems.This study proposes a multiagent architecture based on a one-general agent with multiple narrow agents(OGMN)to reduce task assignment conflicts.Considering the slow speed of traditional dynamic task assignment algorithms,this paper proposes the proximal policy optimization for task assignment of general and narrow agents(PPOTAGNA)algorithm.The algorithm based on the idea of the optimal assignment strategy algorithm and combined with the training framework of deep reinforcement learning(DRL)adds a multihead attention mechanism and a stage reward mechanism to the bilateral band clipping PPO algorithm to solve the problem of low training efficiency.Finally,simulation experiments are carried out in the digital battlefield.The multiagent architecture based on OGMN combined with the PPO-TAGNA algorithm can obtain higher rewards faster and has a higher win ratio.By analyzing agent behavior,the efficiency,superiority and rationality of resource utilization of this method are verified.
基金conducted under the illu MINEation project, funded by the European Union’s Horizon 2020 research and innovation program under grant agreement (No. 869379)supported by the China Scholarship Council (No. 202006370006)
文摘A procedure to recognize individual discontinuities in rock mass from measurement while drilling(MWD)technology is developed,using the binary pattern of structural rock characteristics obtained from in-hole images for calibration.Data from two underground operations with different drilling technology and different rock mass characteristics are considered,which generalizes the application of the methodology to different sites and ensures the full operational integration of MWD data analysis.Two approaches are followed for site-specific structural model building:a discontinuity index(DI)built from variations in MWD parameters,and a machine learning(ML)classifier as function of the drilling parameters and their variability.The prediction ability of the models is quantitatively assessed as the rate of recognition of discontinuities observed in borehole logs.Differences between the parameters involved in the models for each site,and differences in their weights,highlight the site-dependence of the resulting models.The ML approach offers better performance than the classical DI,with recognition rates in the range 89%to 96%.However,the simpler DI still yields fairly accurate results,with recognition rates 70%to 90%.These results validate the adaptive MWD-based methodology as an engineering solution to predict rock structural condition in underground mining operations.
基金supported by Basic ScienceResearch Program through the National Research Foundation ofKorea(NRF)funded by the Ministry of Education(2020R1A6A1A03038540)by Korea Institute of Planning and Evaluation for Technology in Food,Agriculture,Forestry and Fisheries(IPET)through Digital Breeding Transformation Technology Development Program,funded by Ministry of Agriculture,Food and Rural Affairs(MAFRA)(322063-03-1-SB010)by the Technology development Program(RS-2022-00156456)funded by the Ministry of SMEs and Startups(MSS,Korea).
文摘The sewer system plays an important role in protecting rainfall and treating urban wastewater.Due to the harsh internal environment and complex structure of the sewer,it is difficult to monitor the sewer system.Researchers are developing different methods,such as the Internet of Things and Artificial Intelligence,to monitor and detect the faults in the sewer system.Deep learning is a promising artificial intelligence technology that can effectively identify and classify different sewer system defects.However,the existing deep learning based solution does not provide high accuracy prediction and the defect class considered for classification is very small,which can affect the robustness of the model in the constraint environment.As a result,this paper proposes a sewer condition monitoring framework based on deep learning,which can effectively detect and evaluate defects in sewer pipelines with high accuracy.We also introduce a large dataset of sewer defects with 20 different defect classes found in the sewer pipeline.This study modified the original RegNet model by modifying the squeeze excitation(SE)block and adding the dropout layer and Leaky Rectified Linear Units(LeakyReLU)activation function in the Block structure of RegNet model.This study explored different deep learning methods such as RegNet,ResNet50,very deep convolutional networks(VGG),and GoogleNet to train on the sewer defect dataset.The experimental results indicate that the proposed system framework based on the modified-RegNet(RegNet+)model achieves the highest accuracy of 99.5 compared with the commonly used deep learning models.The proposed model provides a robust deep learning model that can effectively classify 20 different sewer defects and be utilized in real-world sewer condition monitoring applications.
文摘This paper discusses about the new approach of multiple object track-ing relative to background information.The concept of multiple object tracking through background learning is based upon the theory of relativity,that involves a frame of reference in spatial domain to localize and/or track any object.Thefield of multiple object tracking has seen a lot of research,but researchers have considered the background as redundant.However,in object tracking,the back-ground plays a vital role and leads to definite improvement in the overall process of tracking.In the present work an algorithm is proposed for the multiple object tracking through background learning.The learning framework is based on graph embedding approach for localizing multiple objects.The graph utilizes the inher-ent capabilities of depth modelling that assist in prior to track occlusion avoidance among multiple objects.The proposed algorithm has been compared with the recent work available in literature on numerous performance evaluation measures.It is observed that our proposed algorithm gives better performance.
文摘Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning techniques have emerged as promising tools in stroke medicine,enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches.This abstract provides a comprehensive overview of machine learning’s applications,challenges,and future directions in stroke medicine.Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine.Machine learning models have demonstrated remarkable accuracy in imaging analysis,diagnosing stroke subtypes,risk stratifications,guiding medical treatment,and predicting patient prognosis.Despite the tremendous potential of machine learning in stroke medicine,several challenges must be addressed.These include the need for standardized and interoperable data collection,robust model validation and generalization,and the ethical considerations surrounding privacy and bias.In addition,integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care.Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis,tailored treatment selection,and improved prognostication.Continued research and collaboration among clinicians,researchers,and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care,ultimately leading to enhanced patient outcomes and quality of life.This review aims to summarize all the current implications of machine learning in stroke diagnosis,treatment,and prognostic evaluation.At the same time,another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.
基金financially supported by the National Natural Science Foundation of China,No.81303115,81774042 (both to XC)the Pearl River S&T Nova Program of Guangzhou,No.201806010025 (to XC)+3 种基金the Specialty Program of Guangdong Province Hospital of Chinese Medicine of China,No.YN2018ZD07 (to XC)the Natural Science Foundatior of Guangdong Province of China,No.2023A1515012174 (to JL)the Science and Technology Program of Guangzhou of China,No.20210201 0268 (to XC),20210201 0339 (to JS)Guangdong Provincial Key Laboratory of Research on Emergency in TCM,Nos.2018-75,2019-140 (to JS)
文摘Vascular etiology is the second most prevalent cause of cognitive impairment globally.Endothelin-1,which is produced and secreted by endothelial cells and astrocytes,is implicated in the pathogenesis of stroke.However,the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood.Here,using mice in which astrocytic endothelin-1 was overexpressed,we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia(1 hour of ischemia;7 days,28 days,or 3 months of reperfusion).We also revealed that astrocytic endothelin-1 overexpression contributed to the role of neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion.Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6,which were differentially expressed in the brain,were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke.Moreover,the levels of the enriched differentially expressed proteins were closely related to lipid metabolism,as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis.Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine,sphingomyelin,and phosphatidic acid.Overall,this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
基金Supported by Science and Technology Support Program of Qiandongnan Prefecture,No.Qiandongnan Sci-Tech Support[2021]12Guizhou Province High-Level Innovative Talent Training Program,No.Qiannan Thousand Talents[2022]201701.
文摘BACKGROUND Intensive care unit-acquired weakness(ICU-AW)is a common complication that significantly impacts the patient's recovery process,even leading to adverse outcomes.Currently,there is a lack of effective preventive measures.AIM To identify significant risk factors for ICU-AW through iterative machine learning techniques and offer recommendations for its prevention and treatment.METHODS Patients were categorized into ICU-AW and non-ICU-AW groups on the 14th day post-ICU admission.Relevant data from the initial 14 d of ICU stay,such as age,comorbidities,sedative dosage,vasopressor dosage,duration of mechanical ventilation,length of ICU stay,and rehabilitation therapy,were gathered.The relationships between these variables and ICU-AW were examined.Utilizing iterative machine learning techniques,a multilayer perceptron neural network model was developed,and its predictive performance for ICU-AW was assessed using the receiver operating characteristic curve.RESULTS Within the ICU-AW group,age,duration of mechanical ventilation,lorazepam dosage,adrenaline dosage,and length of ICU stay were significantly higher than in the non-ICU-AW group.Additionally,sepsis,multiple organ dysfunction syndrome,hypoalbuminemia,acute heart failure,respiratory failure,acute kidney injury,anemia,stress-related gastrointestinal bleeding,shock,hypertension,coronary artery disease,malignant tumors,and rehabilitation therapy ratios were significantly higher in the ICU-AW group,demonstrating statistical significance.The most influential factors contributing to ICU-AW were identified as the length of ICU stay(100.0%)and the duration of mechanical ventilation(54.9%).The neural network model predicted ICU-AW with an area under the curve of 0.941,sensitivity of 92.2%,and specificity of 82.7%.CONCLUSION The main factors influencing ICU-AW are the length of ICU stay and the duration of mechanical ventilation.A primary preventive strategy,when feasible,involves minimizing both ICU stay and mechanical ventilation duration.
基金supported in part by the National Natural Science Foundation of China(62222301, 62073085, 62073158, 61890930-5, 62021003)the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5)Beijing Natural Science Foundation (JQ19013)。
文摘Reinforcement learning(RL) has roots in dynamic programming and it is called adaptive/approximate dynamic programming(ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results towards discrete-time systems and continuous-time systems are surveyed, respectively.Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environment is discussed, respectively, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environment attract enormous attention. The ADP architecture is revisited under the perspective of data-driven and RL frameworks,showing how they promote ADP formulation significantly.Finally, several typical control applications with respect to RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, the comprehensive survey on ADP and RL for advanced control applications has d emonstrated its remarkable potential within the artificial intelligence era. In addition, it also plays a vital role in promoting environmental protection and industrial intelligence.
文摘A network intrusion detection system is critical for cyber security against llegitimate attacks.In terms of feature perspectives,network traffic may include a variety of elements such as attack reference,attack type,a subcategory of attack,host information,malicious scripts,etc.In terms of network perspectives,network traffic may contain an imbalanced number of harmful attacks when compared to normal traffic.It is challenging to identify a specific attack due to complex features and data imbalance issues.To address these issues,this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic(IDS-INT).IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data.First,detailed information about each type of attack is gathered from network interaction descriptions,which include network nodes,attack type,reference,host information,etc.Second,the transformer-based transfer learning approach is developed to learn detailed feature representation using their semantic anchors.Third,the Synthetic Minority Oversampling Technique(SMOTE)is implemented to balance abnormal traffic and detect minority attacks.Fourth,the Convolution Neural Network(CNN)model is designed to extract deep features from the balanced network traffic.Finally,the hybrid approach of the CNN-Long Short-Term Memory(CNN-LSTM)model is developed to detect different types of attacks from the deep features.Detailed experiments are conducted to test the proposed approach using three standard datasets,i.e.,UNsWNB15,CIC-IDS2017,and NSL-KDD.An explainable AI approach is implemented to interpret the proposed method and develop a trustable model.
基金financially supported by the National Key Research and Development Program of China(No.2016YFB0701202,No.2017YFB0701500 and No.2020YFB1505901)National Natural Science Foundation of China(General Program No.51474149,52072240)+3 种基金Shanghai Science and Technology Committee(No.18511109300)Science and Technology Commission of the CMC(2019JCJQZD27300)financial support from the University of Michigan and Shanghai Jiao Tong University joint funding,China(AE604401)Science and Technology Commission of Shanghai Municipality(No.18511109302).
文摘Magnesium(Mg)alloys have shown great prospects as both structural and biomedical materials,while poor corrosion resistance limits their further application.In this work,to avoid the time-consuming and laborious experiment trial,a high-throughput computational strategy based on first-principles calculations is designed for screening corrosion-resistant binary Mg alloy with intermetallics,from both the thermodynamic and kinetic perspectives.The stable binary Mg intermetallics with low equilibrium potential difference with respect to the Mg matrix are firstly identified.Then,the hydrogen adsorption energies on the surfaces of these Mg intermetallics are calculated,and the corrosion exchange current density is further calculated by a hydrogen evolution reaction(HER)kinetic model.Several intermetallics,e.g.Y_(3)Mg,Y_(2)Mg and La_(5)Mg,are identified to be promising intermetallics which might effectively hinder the cathodic HER.Furthermore,machine learning(ML)models are developed to predict Mg intermetallics with proper hydrogen adsorption energy employing work function(W_(f))and weighted first ionization energy(WFIE).The generalization of the ML models is tested on five new binary Mg intermetallics with the average root mean square error(RMSE)of 0.11 eV.This study not only predicts some promising binary Mg intermetallics which may suppress the galvanic corrosion,but also provides a high-throughput screening strategy and ML models for the design of corrosion-resistant alloy,which can be extended to ternary Mg alloys or other alloy systems.
基金the financial support from the National Natural Science Foundation of China(22278070,21978047,21776046)。
文摘The high throughput prediction of the thermodynamic phase behavior of active pharmaceutical ingredients(APIs)with pharmaceutically relevant excipients remains a major scientific challenge in the screening of pharmaceutical formulations.In this work,a developed machine-learning model efficiently predicts the solubility of APIs in polymers by learning the phase equilibrium principle and using a few molecular descriptors.Under the few-shot learning framework,thermodynamic theory(perturbed-chain statistical associating fluid theory)was used for data augmentation,and computational chemistry was applied for molecular descriptors'screening.The results showed that the developed machine-learning model can predict the API-polymer phase diagram accurately,broaden the solubility data of APIs in polymers,and reproduce the relationship between API solubility and the interaction mechanisms between API and polymer successfully,which provided efficient guidance for the development of pharmaceutical formulations.
基金the National Key R&D Program of China(No.2021YFB3701705).
文摘This work constructed a machine learning(ML)model to predict the atmospheric corrosion rate of low-alloy steels(LAS).The material properties of LAS,environmental factors,and exposure time were used as the input,while the corrosion rate as the output.6 dif-ferent ML algorithms were used to construct the proposed model.Through optimization and filtering,the eXtreme gradient boosting(XG-Boost)model exhibited good corrosion rate prediction accuracy.The features of material properties were then transformed into atomic and physical features using the proposed property transformation approach,and the dominant descriptors that affected the corrosion rate were filtered using the recursive feature elimination(RFE)as well as XGBoost methods.The established ML models exhibited better predic-tion performance and generalization ability via property transformation descriptors.In addition,the SHapley additive exPlanations(SHAP)method was applied to analyze the relationship between the descriptors and corrosion rate.The results showed that the property transformation model could effectively help with analyzing the corrosion behavior,thereby significantly improving the generalization ability of corrosion rate prediction models.
基金supported by the National Natural Science Foundation of China(62375144 and 61875092)Tianjin Foundation of Natural Science(21JCYBJC00260)Beijing-Tianjin-Hebei Basic Research Cooperation Special Program(19JCZDJC65300).
文摘Limited by the dynamic range of the detector,saturation artifacts usually occur in optical coherence tomography(OCT)imaging for high scattering media.The available methods are difficult to remove saturation artifacts and restore texture completely in OCT images.We proposed a deep learning-based inpainting method of saturation artifacts in this paper.The generation mechanism of saturation artifacts was analyzed,and experimental and simulated datasets were built based on the mechanism.Enhanced super-resolution generative adversarial networks were trained by the clear–saturated phantom image pairs.The perfect reconstructed results of experimental zebrafish and thyroid OCT images proved its feasibility,strong generalization,and robustness.
基金sponsored by the National Key R&D Program of China(No.2018YFB2100400)the National Natural Science Foundation of China(No.62002077,61872100)+4 种基金the Major Research Plan of the National Natural Science Foundation of China(92167203)the Guangdong Basic and Applied Basic Research Foundation(No.2020A1515110385)the China Postdoctoral Science Foundation(No.2022M710860)the Zhejiang Lab(No.2020NF0AB01)Guangzhou Science and Technology Plan Project(202102010440).
文摘Benefiting from the development of Federated Learning(FL)and distributed communication systems,large-scale intelligent applications become possible.Distributed devices not only provide adequate training data,but also cause privacy leakage and energy consumption.How to optimize the energy consumption in distributed communication systems,while ensuring the privacy of users and model accuracy,has become an urgent challenge.In this paper,we define the FL as a 3-layer architecture including users,agents and server.In order to find a balance among model training accuracy,privacy-preserving effect,and energy consumption,we design the training process of FL as game models.We use an extensive game tree to analyze the key elements that influence the players’decisions in the single game,and then find the incentive mechanism that meet the social norms through the repeated game.The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality,and the proposed incentive mechanism can also promote users to submit high-quality data in FL.Following the multiple rounds of play,the incentive mechanism can help all players find the optimal strategies for energy,privacy,and accuracy of FL in distributed communication systems.