Journal Articles
224,919 articles found
Recombinant chitinase-3-like protein 1 alleviates learning and memory impairments via M2 microglia polarization in postoperative cognitive dysfunction mice
1
Authors: Yujia Liu, Xue Han, Yan Su, Yiming Zhou, Minhui Xu, Jiyan Xu, Zhengliang Ma, Xiaoping Gu, Tianjiao Xia. Neural Regeneration Research (SCIE, CAS), 2025, Issue 9, pp. 2727-2736 (10 pages)
Postoperative cognitive dysfunction is a severe complication of the central nervous system that occurs after anesthesia and surgery, and has received attention for its high incidence and effect on the quality of life of patients. To date, there are no viable treatment options for postoperative cognitive dysfunction. The identification of postoperative cognitive dysfunction hub genes could provide new research directions and therapeutic targets for future research. To identify the signaling mechanisms contributing to postoperative cognitive dysfunction, we first conducted Gene Ontology and Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses of the Gene Expression Omnibus GSE95426 dataset, which consists of mRNAs and long non-coding RNAs differentially expressed in mouse hippocampus 3 days after tibial fracture. The dataset was enriched in genes associated with the biological process "regulation of immune cells," of which Chil1 was identified as a hub gene. Therefore, we investigated the contribution of chitinase-3-like protein 1 expression changes to postoperative cognitive dysfunction in the mouse model of tibial fracture surgery. Mice were intraperitoneally injected with vehicle or recombinant chitinase-3-like protein 1 at 24 hours post-surgery, and the injection groups were compared with untreated control mice for learning and memory capacities using the Y-maze and fear conditioning tests. In addition, protein expression levels of proinflammatory factors (interleukin-1β and inducible nitric oxide synthase), M2-type macrophage markers (CD206 and arginase-1), and cognition-related proteins (brain-derived neurotrophic factor and phosphorylated NMDA receptor subunit NR2B) were measured in the hippocampus by western blotting. Treatment with recombinant chitinase-3-like protein 1 prevented surgery-induced cognitive impairment, downregulated interleukin-1β and inducible nitric oxide synthase expression, and upregulated CD206, arginase-1, pNR2B, and brain-derived neurotrophic factor expression compared with vehicle treatment. Intraperitoneal administration of the specific ERK inhibitor PD98059 diminished the effects of recombinant chitinase-3-like protein 1. Collectively, our findings suggest that recombinant chitinase-3-like protein 1 ameliorates surgery-induced cognitive decline by attenuating neuroinflammation via M2 microglial polarization in the hippocampus. Therefore, recombinant chitinase-3-like protein 1 may have therapeutic potential for postoperative cognitive dysfunction.
Keywords: Chil1; hippocampus; learning and memory; M2 microglia; neuroinflammation; postoperative cognitive dysfunction (POCD); recombinant CHI3L1
Regulator of G protein signaling 6 mediates exercise-induced recovery of hippocampal neurogenesis,learning,and memory in a mouse model of Alzheimer’s disease
2
Authors: Mackenzie M. Spicer, Jianqi Yang, Daniel Fu, Alison N. DeVore, Marisol Lauffer, Nilufer S. Atasoy, Deniz Atasoy, Rory A. Fisher. Neural Regeneration Research (SCIE, CAS), 2025, Issue 10, pp. 2969-2981 (13 pages)
Hippocampal neuronal loss causes cognitive dysfunction in Alzheimer's disease. Adult hippocampal neurogenesis is reduced in patients with Alzheimer's disease. Exercise stimulates adult hippocampal neurogenesis in rodents and improves memory and slows cognitive decline in patients with Alzheimer's disease. However, the molecular pathways for exercise-induced adult hippocampal neurogenesis and improved cognition in Alzheimer's disease are poorly understood. Recently, regulator of G protein signaling 6 (RGS6) was identified as the mediator of voluntary running-induced adult hippocampal neurogenesis in mice. Here, we generated novel RGS6fl/fl;APP_(SWE) mice and used retroviral approaches to examine the impact of RGS6 deletion from dentate gyrus neuronal progenitor cells on voluntary running-induced adult hippocampal neurogenesis and cognition in an amyloid-based Alzheimer's disease mouse model. We found that voluntary running in APP_(SWE) mice restored their hippocampal cognitive performance to that of control mice. This cognitive rescue was abolished by RGS6 deletion in dentate gyrus neuronal progenitor cells, which also abolished running-mediated increases in adult hippocampal neurogenesis. Adult hippocampal neurogenesis was reduced in sedentary APP_(SWE) mice versus control mice, and basal adult hippocampal neurogenesis was reduced by RGS6 deletion in dentate gyrus neural precursor cells. RGS6 was expressed in neurons within the dentate gyrus of patients with Alzheimer's disease, with significant loss of these RGS6-expressing neurons. Thus, RGS6 mediated voluntary running-induced rescue of impaired cognition and adult hippocampal neurogenesis in APP_(SWE) mice, identifying RGS6 in dentate gyrus neural precursor cells as a possible therapeutic target in Alzheimer's disease.
Keywords: adult hippocampal neurogenesis; Alzheimer's disease; dentate gyrus; exercise; learning/memory; neural precursor cells; regulator of G protein signaling 6 (RGS6)
Early identification of stroke through deep learning with multi-modal human speech and movement data
3
Authors: Zijun Ou, Haitao Wang, Bin Zhang, Haobang Liang, Bei Hu, Longlong Ren, Yanjuan Liu, Yuhu Zhang, Chengbo Dai, Hejun Wu, Weifeng Li, Xin Li. Neural Regeneration Research (SCIE, CAS), 2025, Issue 1, pp. 234-241 (8 pages)
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration is dependent on specialized training. In this study, we proposed a novel multi-modal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had a higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Keywords: artificial intelligence; deep learning; diagnosis; early detection; FAST; screening; stroke
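The multi-modal design described in the entry above can be illustrated with a simple late-fusion classifier. The sketch below is not the authors' implementation; the branch widths, feature dimensions, and class count are illustrative assumptions, and it presumes the video and audio streams have already been reduced to fixed-length embedding vectors.

```python
# Minimal late-fusion sketch (illustrative only, not the paper's model).
# Assumes precomputed video/audio embeddings; dimensions are made up.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, num_classes=2):
        super().__init__()
        # Separate branches encode each modality independently.
        self.video_branch = nn.Sequential(nn.Linear(video_dim, 256), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        # Concatenated features feed a small classification head.
        self.head = nn.Sequential(nn.Linear(256 + 64, 64), nn.ReLU(),
                                  nn.Linear(64, num_classes))

    def forward(self, video_feat, audio_feat):
        fused = torch.cat([self.video_branch(video_feat),
                           self.audio_branch(audio_feat)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128))  # batch of 4 patients
print(logits.shape)  # torch.Size([4, 2])
```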
Assessments of Data-Driven Deep Learning Models on One-Month Predictions of Pan-Arctic Sea Ice Thickness (Cited: 1)
4
Authors: Chentao SONG, Jiang ZHU, Xichen LI. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1379-1390 (12 pages)
In recent years, deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration, but relatively little research has been conducted for larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. Meanwhile, deep learning predictions of sea ice thickness (SIT) have yet to receive ample attention. In this study, two data-driven deep learning (DL) models are built based on the ConvLSTM and fully convolutional U-net (FC-Unet) algorithms, trained on CMIP6 historical simulations for transfer learning and fine-tuned using reanalysis/observations. These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved. Through comprehensive assessments of prediction skills by season and region, the results suggest that using a broader set of CMIP6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contributes to better prediction results, although both DL models can effectively predict the spatiotemporal features of SIT anomalies. Regarding the predicted SIT anomalies of the FC-Unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. The models also demonstrate robust performance in predicting SIT and sea ice extent (SIE) during extreme events. The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions, aiding climate change research and real-time business applications.
Keywords: Arctic sea ice thickness; deep learning; spatiotemporal sequence prediction; transfer learning
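For readers unfamiliar with the ConvLSTM building block named above, a minimal cell can be sketched as follows. This is a generic ConvLSTM cell under assumed channel counts and kernel size, not the authors' SIT prediction network.

```python
# Generic ConvLSTM cell sketch (not the paper's architecture).
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size, padding=pad)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c_next = f * c + i * g            # update cell state
        h_next = o * torch.tanh(c_next)   # update hidden state
        return h_next, c_next

# Example: one time step on a 2 x 1 x 64 x 64 gridded field (made-up shapes).
cell = ConvLSTMCell(in_ch=1, hid_ch=8)
x = torch.randn(2, 1, 64, 64)
h = c = torch.zeros(2, 8, 64, 64)
h, c = cell(x, (h, c))
print(h.shape)  # torch.Size([2, 8, 64, 64])
```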
A game-theoretic approach for federated learning: A trade-off among privacy, accuracy and energy (Cited: 2)
5
Authors: Lihua Yin, Sixin Lin, Zhe Sun, Ran Li, Yuanyuan He, Zhiqiang Hao. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 2, pp. 389-403 (15 pages)
Benefiting from the development of Federated Learning (FL) and distributed communication systems, large-scale intelligent applications become possible. Distributed devices not only provide adequate training data, but also cause privacy leakage and energy consumption. How to optimize the energy consumption in distributed communication systems, while ensuring the privacy of users and model accuracy, has become an urgent challenge. In this paper, we define FL as a 3-layer architecture including users, agents, and a server. In order to find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we design the training process of FL as game models. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in the single game, and then find the incentive mechanism that meets social norms through the repeated game. The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality, and the proposed incentive mechanism can also encourage users to submit high-quality data in FL. Following multiple rounds of play, the incentive mechanism can help all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.
Keywords: federated learning; privacy preservation; energy optimization; game theory; distributed communication systems
Machine learning applications in healthcare clinical practice and research
6
Authors: Nikolaos-Achilleas Arkoudis, Stavros P Papadakos. World Journal of Clinical Cases (SCIE), 2025, Issue 1, pp. 16-21 (6 pages)
Machine learning (ML) is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis, thus creating machines that can complete tasks otherwise requiring human intelligence. Among its various applications, it has proven groundbreaking in healthcare as well, both in clinical practice and research. In this editorial, we succinctly introduce ML applications and present a study featured in the latest issue of the World Journal of Clinical Cases. The authors of this study conducted an analysis using both multiple linear regression (MLR) and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease (NAFLD). Their results implicated age as the most important determining factor in both groups, followed by lactic dehydrogenase, uric acid, forced expiratory volume in one second, and albumin. In addition, for the NAFLD− group, the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure, as compared to plasma calcium and body fat for the NAFLD+ group. However, the study's distinctive contribution lies in its adoption of ML methodologies, showcasing their superiority over traditional statistical approaches (herein MLR), thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
Keywords: machine learning; artificial intelligence; clinical practice; research; glomerular filtration rate; non-alcoholic fatty liver disease; medicine
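The MLR-versus-ML comparison described in this editorial can be mimicked in a few lines of scikit-learn. The snippet below is only a schematic illustration on synthetic data; the feature names and the random-forest model are assumptions, not the cited study's actual variables or algorithm.

```python
# Schematic comparison of multiple linear regression vs. an ML regressor
# with feature-importance ranking (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
feature_names = ["age", "LDH", "uric_acid", "FEV1", "albumin"]  # assumed names
X = rng.normal(size=(300, len(feature_names)))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=300)  # toy target

mlr = LinearRegression()
ml = RandomForestRegressor(n_estimators=200, random_state=0)
print("MLR R2:", cross_val_score(mlr, X, y, cv=5).mean())
print("ML  R2:", cross_val_score(ml, X, y, cv=5).mean())

ml.fit(X, y)
ranking = sorted(zip(feature_names, ml.feature_importances_),
                 key=lambda t: t[1], reverse=True)
print("Feature ranking:", ranking)
```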
A performance-based hybrid deep learning model for predicting TBM advance rate using Attention-ResNet-LSTM (Cited: 1)
7
Authors: Sihao Yu, Zixin Zhang, Shuaifeng Wang, Xin Huang, Qinghua Lei. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 1, pp. 65-80 (16 pages)
The technology of the tunnel boring machine (TBM) has been widely applied for underground construction worldwide; however, how to ensure that the TBM tunneling process is safe and efficient remains a major concern. Advance rate is a key parameter of TBM operation and reflects the TBM-ground interaction, for which a reliable prediction helps optimize TBM performance. Here, we develop a hybrid neural network model, called Attention-ResNet-LSTM, for accurate prediction of the TBM advance rate. A database including geological properties and TBM operational parameters from the Yangtze River Natural Gas Pipeline Project is used to train and test this deep learning model. The evolutionary polynomial regression method is adopted to aid the selection of input parameters. The results of numerical experiments show that our Attention-ResNet-LSTM model outperforms other commonly used intelligent models, with a lower root mean square error and a lower mean absolute percentage error. Further, parametric analyses are conducted to explore the effects of the sequence length of historical data and the model architecture on prediction accuracy. A correlation analysis between the input and output parameters is also implemented to provide guidance for adjusting relevant TBM operational parameters. The performance of our hybrid intelligent model is demonstrated in a case study of TBM tunneling through complex ground with variable strata. Finally, data collected from the Baimang River Tunnel Project in Shenzhen, China are used to further test the generalization of our model. The results indicate that, compared to the conventional ResNet-LSTM model, our model has a better predictive capability for scenarios with unknown datasets due to its self-adaptive characteristic.
Keywords: tunnel boring machine (TBM); advance rate; deep learning; Attention-ResNet-LSTM; evolutionary polynomial regression
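The model name above suggests a residual feature extractor followed by an LSTM and a temporal attention layer. The sketch below is a rough guess at such a pipeline for sequence regression, not the published network; layer sizes and the attention formulation are assumptions.

```python
# Rough sketch of a ResNet-style block + LSTM + attention for sequence
# regression (illustrative only; not the paper's exact architecture).
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv1d(ch, ch, 3, padding=1))
    def forward(self, x):
        return torch.relu(x + self.net(x))  # residual connection

class AttnResNetLSTM(nn.Module):
    def __init__(self, n_feat=10, ch=32, hid=64):
        super().__init__()
        self.inp = nn.Conv1d(n_feat, ch, 1)
        self.res = ResBlock1D(ch)
        self.lstm = nn.LSTM(ch, hid, batch_first=True)
        self.attn = nn.Linear(hid, 1)   # scores each time step
        self.out = nn.Linear(hid, 1)    # predicts advance rate

    def forward(self, x):               # x: (batch, time, features)
        z = self.res(self.inp(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(z)                        # (batch, time, hid)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights over time
        context = (w * h).sum(dim=1)               # weighted temporal summary
        return self.out(context).squeeze(-1)

model = AttnResNetLSTM()
print(model(torch.randn(8, 20, 10)).shape)  # torch.Size([8])
```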
A Two-Layer Encoding Learning Swarm Optimizer Based on Frequent Itemsets for Sparse Large-Scale Multi-Objective Optimization (Cited: 1)
8
Authors: Sheng Qi, Rui Wang, Tao Zhang, Xu Yang, Ruiqing Sun, Ling Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 6, pp. 1342-1357 (16 pages)
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variables Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent terms of multiple particles with better target values to find Mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
Keywords: evolutionary algorithms; learning swarm optimization; sparse large-scale optimization; sparse large-scale multi-objective problems; two-layer encoding
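The two-layer encoding mentioned above represents each candidate solution by a binary mask plus real-valued magnitudes. A minimal illustration of that representation (not of the TELSO algorithm itself) might look like the following; the toy objectives and dimensions are made up.

```python
# Minimal illustration of two-layer (Mask, Dec) encoding for sparse solutions.
# The objectives and dimensions are made up; this is not the TELSO optimizer.
import numpy as np

rng = np.random.default_rng(1)
n_var = 20

def decode(mask, dec):
    """Effective decision vector: real values kept only where mask == 1."""
    return mask * dec

def objectives(x):
    """Toy bi-objective problem: sparsity vs. distance to an all-ones target."""
    return np.array([np.count_nonzero(x), np.sum((x - 1.0) ** 2)])

# One particle in a swarm: a binary Mask layer and a real Dec layer.
mask = (rng.random(n_var) < 0.2).astype(float)   # mostly zeros (sparse)
dec = rng.uniform(-1.0, 1.0, n_var)
x = decode(mask, dec)
print("non-zero positions:", np.flatnonzero(mask))
print("objective values:", objectives(x))
```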
Knowledge-reused transfer learning for molecular and materials science
9
Authors: An Chen, Zhilong Wang, Karl Luigi Loza Vidaurre, Yanqiang Han, Simin Ye, Kehao Tao, Shiwei Wang, Jing Gao, Jinjin Li. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2024, Issue 11, pp. 149-168 (20 pages)
Leveraging big data analytics and advanced algorithms to accelerate and optimize the process of molecular and materials design, synthesis, and application has revolutionized the field of molecular and materials science, allowing researchers to gain a deeper understanding of material properties and behaviors and leading to the development of new materials that are more efficient and reliable. However, the difficulty in constructing large-scale datasets of new molecules/materials due to the high cost of data acquisition and annotation limits the development of conventional machine learning (ML) approaches. Knowledge-reused transfer learning (TL) methods are expected to break this dilemma. The application of TL lowers the data requirements for model training, which makes TL stand out in research addressing data quality issues. In this review, we summarize recent progress in TL related to molecules and materials. We focus on the application of TL methods for the discovery of advanced molecules/materials, particularly the construction of TL frameworks for different systems and how TL can enhance the performance of models. In addition, the challenges of TL are also discussed.
Keywords: machine learning; transfer learning; small data; molecule; material science
Semi-supervised learning based hybrid beamforming under time-varying propagation environments
10
Authors: Yin Long, Hang Ding, Simon Murphy. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 4, pp. 1168-1177 (10 pages)
Hybrid precoding is considered a promising low-cost technique for millimeter wave (mm-wave) massive Multi-Input Multi-Output (MIMO) systems. In this work, addressing time-varying propagation circumstances, we propose an online hybrid beamforming scheme based on semi-supervised Incremental Learning (IL). Firstly, given the constant-modulus constraint on the analog beamformer and combiner, we propose a new broad-network-based structure for the hybrid beamforming design model. Compared with the existing network structure, the proposed network structure can achieve better transmission performance and lower complexity. Moreover, to further enhance the efficiency of IL, by combining a semi-supervised graph with IL, we propose a hybrid beamforming scheme based on chunk-by-chunk semi-supervised learning, where only a few transmissions are required to calculate the label and all other unlabelled transmissions are also put into a training data chunk. Unlike the existing single-by-single approach, where transmissions during the model update are not taken into consideration, all transmissions, even those made during the model update, contribute to the model update in the proposed method. Because the amount of unlabelled transmissions during the model update is very large and they also carry some information, the prediction performance can be enhanced to some extent by these unlabelled channel data. Simulation results demonstrate that the spectral efficiency of the proposed method outperforms that of the existing single-by-single approach. Besides, we prove that the general complexity of the proposed method is lower than that of the existing approach and give the condition under which its absolute complexity outperforms that of the existing approach.
Keywords: hybrid beamforming; time-varying environments; broad network; semi-supervised learning; online learning
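The constant-modulus constraint on the analog beamformer mentioned above is commonly handled by keeping only the phase of an unconstrained solution. The NumPy snippet below illustrates that generic projection step under assumed dimensions; it is not the paper's broad-network learning scheme.

```python
# Generic constant-modulus projection for an analog beamformer (illustrative).
# Dimensions are assumed; this is not the paper's learning-based design.
import numpy as np

rng = np.random.default_rng(0)
n_antennas, n_rf = 64, 4

# Unconstrained (e.g., digitally optimized) precoder, complex-valued.
F_opt = rng.normal(size=(n_antennas, n_rf)) + 1j * rng.normal(size=(n_antennas, n_rf))

# Analog beamformer: keep phases only, fix magnitude to 1/sqrt(N) per entry.
F_rf = np.exp(1j * np.angle(F_opt)) / np.sqrt(n_antennas)

print(np.allclose(np.abs(F_rf), 1 / np.sqrt(n_antennas)))  # True: constant modulus
```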
Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
11
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4379-4397 (19 pages)
Software project outcomes heavily depend on natural language requirements, often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets; and an ensemble method, which is utilized to explore the increase in performance upon combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The results reveal that combining techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) proves helpful in optimizing software bug prediction models and providing high-performing, useful end models.
Keywords: natural language processing; software bug prediction; transfer learning; ensemble learning; feature selection
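A compressed view of the feature-selection-plus-ensemble idea is sketched below with scikit-learn on synthetic data. The chosen selector and base classifiers are assumptions for illustration, not the exact pipeline evaluated on the NASA/Promise datasets.

```python
# Schematic feature selection + soft-voting ensemble evaluated by AUC-ROC
# (synthetic data; not the exact pipeline used in the paper).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1000, n_features=40, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    voting="soft")
model = make_pipeline(SelectKBest(f_classif, k=10), ensemble)  # keep 10 best features
model.fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```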
Position-Aware and Subgraph Enhanced Dynamic Graph Contrastive Learning on Discrete-Time Dynamic Graph
12
Authors: Jian Feng, Tian Liu, Cailing Du. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 2895-2909 (15 pages)
Unsupervised learning methods such as graph contrastive learning have been used for dynamic graph representation learning to eliminate the dependence on labels. However, existing studies neglect positional information when learning discrete snapshots, resulting in insufficient network topology learning. At the same time, due to the lack of appropriate data augmentation methods, it is difficult to capture the evolving patterns of the network effectively. To address these problems, a position-aware and subgraph-enhanced dynamic graph contrastive learning method is proposed for discrete-time dynamic graphs. Firstly, a global snapshot is built from the historical snapshots to express the stable pattern of the dynamic graph, and random walks are used to obtain the position representation by learning the positional information of the nodes. Secondly, a new data augmentation method is designed from the perspectives of the short-term changes and long-term stable structures of dynamic graphs. Specifically, subgraph sampling based on snapshots and global snapshots is used to obtain two structural augmentation views, and node structures and evolving patterns are learned by combining a graph neural network, a gated recurrent unit, and an attention mechanism. Finally, the quality of the node representation is improved by combining contrastive learning between different structural augmentation views and between the two representations of structure and position. Experimental results on four real datasets show that the performance of the proposed method is better than that of existing unsupervised methods, and it is more competitive than supervised learning methods under a semi-supervised setting.
Keywords: dynamic graph representation learning; graph contrastive learning; structure representation; position representation; evolving pattern
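Contrastive learning between two augmentation views, as described above, typically reduces to an InfoNCE-style loss over node embeddings. The snippet below is a generic version of that loss, assuming embeddings for the two views have already been computed; it is not the paper's full training pipeline.

```python
# Generic InfoNCE contrastive loss between two augmentation views of the same
# nodes (illustrative; not the paper's complete method).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """z1, z2: (num_nodes, dim) embeddings of the same nodes under two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature          # pairwise cosine similarities
    targets = torch.arange(z1.size(0))       # positive pair: same node index
    return F.cross_entropy(sim, targets)

z_view1 = torch.randn(128, 64)  # e.g., embeddings from a snapshot subgraph
z_view2 = torch.randn(128, 64)  # e.g., embeddings from the global snapshot
print(info_nce(z_view1, z_view2))
```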
Securing Cloud-Encrypted Data: Detecting Ransomware-as-a-Service (RaaS) Attacks through Deep Learning Ensemble
13
Authors: Amardeep Singh, Hamad Ali Abosaq, Saad Arif, Zohaib Mushtaq, Muhammad Irfan, Ghulam Abbas, Arshad Ali, Alanoud Al Mazroa. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 857-873 (17 pages)
Data security assurance is crucial due to the increasing prevalence of cloud computing and its widespread use across different industries, especially in light of the growing number of cybersecurity threats. A major and ever-present threat is Ransomware-as-a-Service (RaaS) attacks, which enable even individuals with minimal technical knowledge to conduct ransomware operations. This study provides a new approach for RaaS attack detection which uses an ensemble of deep learning models. For this purpose, the network intrusion detection dataset "UNSW-NB15" from the Intelligent Security Group of the University of New South Wales, Australia is analyzed. In the initial phase, three separate Multi-Layer Perceptron (MLP) models based on the rectified linear unit, scaled exponential linear unit, and exponential linear unit activations are developed. Later, using the combined predictive power of these three MLPs, the RansoDetect Fusion ensemble model is introduced in the suggested methodology. The proposed ensemble technique outperforms previous studies with impressive performance metrics, including 98.79% accuracy and recall, 98.85% precision, and 98.80% F1-score. The empirical results of this study validate the ensemble model's ability to improve cybersecurity defenses by showing that it outperforms individual MLP models. In expanding the field of cybersecurity strategy, this research highlights the significance of combined deep learning models in strengthening intrusion detection systems against sophisticated cyber threats.
Keywords: cloud encryption; RaaS; ensemble; threat detection; deep learning; cybersecurity
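The ensemble idea above, three MLPs that differ only in their activation function and are fused at prediction time, can be sketched as follows. Input width, hidden sizes, and probability-averaging fusion are assumptions; the snippet is not the RansoDetect Fusion implementation.

```python
# Sketch of three MLPs with ReLU/SELU/ELU activations fused by probability
# averaging (assumed sizes; not the paper's RansoDetect Fusion model).
import torch
import torch.nn as nn

def make_mlp(activation, in_dim=42, hidden=64, n_classes=2):
    return nn.Sequential(nn.Linear(in_dim, hidden), activation,
                         nn.Linear(hidden, hidden), activation,
                         nn.Linear(hidden, n_classes))

mlps = [make_mlp(nn.ReLU()), make_mlp(nn.SELU()), make_mlp(nn.ELU())]

def ensemble_predict(x):
    # Average the softmax outputs of the three (independently trained) MLPs.
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in mlps])
    return probs.mean(dim=0)

x = torch.randn(5, 42)                     # 5 flow records, 42 assumed features
print(ensemble_predict(x).argmax(dim=-1))  # fused class predictions
```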
Machine learning-guided accelerated discovery of structure-property correlations in lean magnesium alloys for biomedical applications
14
Authors: Sreenivas Raguraman, Maitreyee Sharma Priyadarshini, Tram Nguyen, Ryan McGovern, Andrew Kim, Adam J. Griebel, Paulette Clancy, Timothy P. Weihs. Journal of Magnesium and Alloys (SCIE, EI, CAS, CSCD), 2024, Issue 6, pp. 2267-2283 (17 pages)
Magnesium alloys are emerging as promising alternatives to traditional orthopedic implant materials thanks to their biodegradability, biocompatibility, and impressive mechanical characteristics. However, their rapid in-vivo degradation presents challenges, notably in upholding mechanical integrity over time. This study investigates the impact of high-temperature thermal processing on the mechanical and degradation attributes of a lean Mg-Zn-Ca-Mn alloy, ZX10. Utilizing rapid, cost-efficient characterization methods like X-ray diffraction and optical microscopy, we swiftly examine microstructural changes after thermal treatment. Employing Pearson correlation coefficient analysis, we unveil the relationship between microstructural properties and critical targets (properties): hardness and corrosion resistance. Additionally, leveraging the least absolute shrinkage and selection operator (LASSO), we pinpoint the dominant microstructural factors among closely correlated variables. Our findings underscore the significant role of grain size refinement in strengthening and the predominance of the ternary Ca_(2)Mg_(6)Zn_(3) phase in corrosion behavior. This suggests that achieving an optimal blend of strength and corrosion resistance is attainable through fine grains and a reduced concentration of ternary phases. This thorough investigation furnishes valuable insights into the intricate interplay of processing, structure, and properties in magnesium alloys, thereby advancing the development of superior biodegradable implant materials.
Keywords: magnesium alloys; machine learning; corrosion; mechanical properties; rapid characterization
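The two statistical tools named above, Pearson correlation screening and LASSO-based selection among correlated descriptors, can be combined in a few lines of pandas/scikit-learn. The descriptor names and data below are invented for illustration and do not come from the study.

```python
# Pearson correlation screening + LASSO feature selection sketch
# (invented descriptors/targets; not the study's dataset).
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "grain_size": rng.uniform(2, 30, 60),           # microns (assumed descriptor)
    "ternary_phase_frac": rng.uniform(0, 0.1, 60),  # Ca2Mg6Zn3 fraction (assumed)
    "twin_density": rng.uniform(0, 1, 60),          # assumed descriptor
})
# Toy hardness target: finer grains -> harder (Hall-Petch-like trend).
df["hardness"] = 80 - 1.5 * df["grain_size"] + rng.normal(0, 2, 60)

print(df.corr(method="pearson")["hardness"])  # Pearson correlation with the target

X = StandardScaler().fit_transform(df.drop(columns="hardness"))
lasso = LassoCV(cv=5).fit(X, df["hardness"])
for name, coef in zip(df.columns[:-1], lasso.coef_):
    print(f"{name}: {coef:+.2f}")  # near-zero coefficients are dropped by LASSO
```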
Fundamental error in tree-based machine learning model selection for reservoir characterisation
15
Author: Daniel Asante Otchere. Energy Geoscience (EI), 2024, Issue 2, pp. 214-224 (11 pages)
Over the past two decades, machine learning techniques have been extensively used in predicting reservoir properties. While this approach has significantly contributed to the industry, selecting an appropriate model is still challenging for most researchers. Relying solely on statistical metrics to select the best model for a particular problem may not always be the most effective approach. This study encourages researchers to incorporate data visualization in their analysis and model selection process. To evaluate the suitability of different models in predicting horizontal permeability in the Volve field, wireline logs were used to train Extra-Trees, Ridge, Bagging, and XGBoost models. The Random Forest feature selection technique was applied to select the relevant logs as inputs for the models. Based on statistical metrics, the Extra-Trees model achieved the highest test accuracy of 0.996, an RMSE of 19.54 mD, and an MAE of 3.18 mD, with XGBoost coming in second. However, when the results were visualised, it was discovered that the XGBoost model was more suitable for the problem being tackled. The XGBoost model was a better predictor within the sandstone interval, while the Extra-Trees model was more appropriate in non-sandstone intervals. Since this study aims to predict permeability in the reservoir interval, the XGBoost model is the most suitable. These contrasting results demonstrate the importance of incorporating data visualisation techniques as an evaluation metric. Given the heterogeneity of the subsurface, relying solely on statistical metrics may not be sufficient to determine which model is best suited for a particular problem.
Keywords: data visualisation; permeability; machine learning; statistical metrics
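The point about checking models visually rather than by metrics alone can be illustrated with a short scikit-learn/matplotlib routine on synthetic data. The two models, the depth-interval split, and the data below are assumptions used purely to show the evaluation pattern, not the Volve workflow.

```python
# Compare two regressors by a metric and then by a depth-wise prediction plot
# (synthetic data; Volve logs are not used here).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
depth = np.linspace(2500, 2600, 400)
X = np.column_stack([np.sin(depth / 5), rng.normal(size=400)])       # toy "logs"
y = 50 * np.abs(np.sin(depth / 5)) + rng.normal(scale=2, size=400)   # toy permeability

train = depth < 2575  # hold out the deepest interval as a test zone
models = {"Extra-Trees": ExtraTreesRegressor(random_state=0),
          "Boosting": GradientBoostingRegressor(random_state=0)}

plt.plot(depth[~train], y[~train], "k", label="measured")
for name, m in models.items():
    m.fit(X[train], y[train])
    pred = m.predict(X[~train])
    print(name, "RMSE:", mean_squared_error(y[~train], pred) ** 0.5)
    plt.plot(depth[~train], pred, label=name)  # the visual check complements the metric
plt.xlabel("depth (m)"); plt.ylabel("permeability (mD)"); plt.legend()
plt.show()
```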
A deep learning framework for suppressing prestack seismic random noise without noise-free labels
16
Authors: Han Wang, Jie Zhang. Energy Geoscience (EI), 2024, Issue 3, pp. 261-274 (14 pages)
Random noise attenuation is significant in seismic data processing. Supervised deep learning-based denoising methods have been widely developed and applied in recent years. In practice, it is often time-consuming and laborious to obtain noise-free data for supervised learning. Therefore, we propose a novel deep learning framework to denoise prestack seismic data without clean labels, which trains a high-resolution residual neural network (SRResnet) with noisy data as input and the same valid data with different noise as output. Since valid signals in noisy sample pairs are spatially correlated while random noise is spatially independent and unpredictable, the model can learn the features of valid data while suppressing random noise. Noisy data targets are generated by a simple conventional method without fine-tuning parameters. The initial estimates allow signal or noise leakage, as the network does not require clean labels. A Monte Carlo strategy is applied to select training patches, increasing the number of valid patches and expanding the training datasets. Transfer learning is used to improve generalization to real data processing. The synthetic and real data tests perform better than the commonly used state-of-the-art denoising methods.
Keywords: data processing; denoising; signal processing; seismics; deep learning
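Training with two independently noisy versions of the same record, as described above, is essentially a Noise2Noise-style setup. The sketch below shows that training step with a toy denoiser on random data; the network, noise model, and shapes are assumptions rather than the paper's SRResnet configuration.

```python
# Noise2Noise-style training step: noisy input, differently-noisy target
# (toy denoiser and data; not the paper's SRResnet setup).
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.randn(8, 1, 64, 64)  # underlying "valid" signal (simulated, never used as label)
for step in range(100):
    # Two independent noise realizations of the same underlying signal.
    noisy_in = clean + 0.5 * torch.randn_like(clean)
    noisy_target = clean + 0.5 * torch.randn_like(clean)
    loss = loss_fn(denoiser(noisy_in), noisy_target)  # no clean label needed
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```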
Reinforcement Learning-Based Energy Management for Hybrid Power Systems: State-of-the-Art Survey, Review, and Perspectives
17
Authors: Xiaolin Tang, Jiaxin Chen, Yechen Qin, Teng Liu, Kai Yang, Amir Khajepour, Shen Li. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 1-25 (25 pages)
The new energy vehicle plays a crucial role in green transportation, and the energy management strategy of hybrid power systems is essential for ensuring energy-efficient driving. This paper presents a state-of-the-art survey and review of reinforcement learning-based energy management strategies for hybrid power systems. Additionally, it envisions the outlook for autonomous intelligent hybrid electric vehicles, with reinforcement learning as the foundational technology. First of all, to provide a macro view of historical development, the brief history of deep learning, reinforcement learning, and deep reinforcement learning is presented in the form of a timeline. Then, a comprehensive survey and review is conducted by collecting papers from mainstream academic databases. Enumerating most of the contributions along three main directions (algorithm innovation, powertrain innovation, and environment innovation) provides an objective review of the research status. Finally, to advance the application of reinforcement learning in autonomous intelligent hybrid electric vehicles, future research plans positioned as "Alpha HEV" are envisioned, integrating Autopilot and energy-saving control.
Keywords: new energy vehicle; hybrid power system; reinforcement learning; energy management strategy
Machine-learning-assisted efficient reconstruction of the quantum states generated from the Sagnac polarization-entangled photon source
18
Authors: 毛梦辉, 周唯, 李新慧, 杨然, 龚彦晓, 祝世宁. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 8, pp. 50-54 (5 pages)
Neural networks are becoming ubiquitous in various areas of physics as a successful machine learning (ML) technique for addressing different tasks. Based on ML techniques, we propose and experimentally demonstrate an efficient method for state reconstruction of the widely used Sagnac polarization-entangled photon source. By properly modeling the target states, a multi-output fully connected neural network is well trained using only six of the sixteen measurement bases in the standard tomography technique, and hence our method reduces the resource consumption without loss of accuracy. We demonstrate the ability of the neural network to predict state parameters with high precision by using both simulated and experimental data. Explicitly, the mean absolute error for all the parameters is below 0.05 for the simulated data, and a mean fidelity of 0.99 is achieved for experimentally generated states. Our method could be generalized to estimate other kinds of states, as well as other quantum information tasks.
Keywords: machine learning; state estimation; quantum state tomography; polarization-entangled photon source
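The "multi-output fully connected neural network" described above is, at its core, a regression network mapping a reduced set of measurement outcomes to state parameters. A generic sketch of that mapping is given below; the six-input/four-output shapes and the training data are placeholders, not the paper's physical model of the Sagnac source.

```python
# Generic multi-output regression network: 6 measurement-basis features in,
# several state parameters out (placeholder data; not the paper's model).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 4))        # e.g., 4 state parameters (assumed)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training set: simulated measurement statistics -> parameters.
X = torch.rand(2000, 6)
y = torch.rand(2000, 4)
for epoch in range(200):
    loss = loss_fn(net(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("mean absolute error:", torch.mean(torch.abs(net(X) - y)).item())
```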
Olive Leaf Disease Detection via Wavelet Transform and Feature Fusion of Pre-Trained Deep Learning Models
19
Authors: Mahmood A. Mahmood, Khalaf Alsalem. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3431-3448 (18 pages)
Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses. Early detection of these diseases is essential for effective management. We propose a novel transformed-wavelet, feature-fused, pre-trained deep learning model for detecting olive leaf diseases. The proposed model combines wavelet transforms with pre-trained deep learning models to extract discriminative features from olive leaf images. The model has four main phases: preprocessing using data augmentation, three-level wavelet transformation, learning using pre-trained deep learning models, and a fused deep learning model. In the preprocessing phase, the image dataset is augmented using techniques such as resizing, rescaling, flipping, rotation, zooming, and contrasting. In the wavelet transformation phase, the augmented images are decomposed into three frequency levels. Three pre-trained deep learning models, EfficientNet-B7, DenseNet-201, and ResNet-152-V2, are used in the learning phase. The models were trained using the approximation images of the third-level sub-band of the wavelet transform. In the fusion phase, the fused model consists of a merge layer, three dense layers, and two dropout layers. The proposed model was evaluated using a dataset of images of healthy and infected olive leaves. It achieved an accuracy of 99.72% in the diagnosis of olive leaf diseases, which exceeds the accuracy of other methods reported in the literature. This finding suggests that our proposed method is a promising tool for the early detection of olive leaf diseases.
Keywords: olive leaf diseases; wavelet transform; deep learning; feature fusion
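The three-level wavelet step described above can be reproduced with PyWavelets: decompose an image and keep the third-level approximation band as the input fed to the pre-trained backbones. The snippet below shows only that step on a random image; the wavelet family and image size are assumed choices, not necessarily those used in the paper.

```python
# Three-level 2D wavelet decomposition; keep the level-3 approximation band
# as the model input (random image; wavelet family is an assumed choice).
import numpy as np
import pywt

image = np.random.rand(224, 224)   # stand-in for a grayscale olive-leaf image
coeffs = pywt.wavedec2(image, wavelet="haar", level=3)
approx_level3 = coeffs[0]          # low-frequency approximation at level 3
print(approx_level3.shape)         # (28, 28) for a 224x224 input with 3 levels

# The approximation band would then be resized/stacked and passed to the
# pre-trained CNN backbones (e.g., EfficientNet-B7) for feature extraction.
```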
Applications of deep learning for detecting ophthalmic diseases with ultrawide-field fundus
20
Authors: Qing-Qing Tang, Xiang-Gang Yang, Hong-Qiu Wang, Da-Wen Wu, Mei-Xia Zhang. International Journal of Ophthalmology (English edition) (SCIE, CAS), 2024, Issue 1, pp. 188-200 (13 pages)
AIM: To summarize the application of deep learning in detecting ophthalmic diseases with ultrawide-field fundus images and to analyze the advantages, limitations, and possible solutions common to all tasks. METHODS: We searched three academic databases, including PubMed, Web of Science, and Ovid, as of August 2022. We matched and screened according to the target keywords and publication year and retrieved a total of 4358 research papers, of which 23 studies on applying deep learning to the diagnosis of ophthalmic disease with ultrawide-field images were retained. RESULTS: Deep learning on ultrawide-field images can detect various ophthalmic diseases and achieve great performance, including diabetic retinopathy, glaucoma, age-related macular degeneration, retinal vein occlusions, retinal detachment, and other peripheral retinal diseases. Compared to standard fundus images, ultrawide-field fundus scanning laser ophthalmoscopy enables the capture of the ocular fundus up to 200° in a single exposure, allowing more areas of the retina to be observed. CONCLUSION: The combination of ultrawide-field fundus images and artificial intelligence will achieve great performance in diagnosing multiple ophthalmic diseases in the future.
Keywords: ultrawide-field fundus images; deep learning; disease diagnosis; ophthalmic disease