Journal Articles
2,422 articles found
1. A Deep Learning-Based Computational Algorithm for Identifying Damage Load Condition: An Artificial Intelligence Inverse Problem Solution for Failure Analysis (Cited by: 6)
Authors: Shaofei Ren, Guorong Chen, Tiange Li, Qijun Chen, Shaofan Li. Computer Modeling in Engineering & Sciences (SCIE, EI), 2018, Issue 12, pp. 287-307 (21 pages)
In this work, we have developed a novel machine (deep) learning computational framework to determine and identify damage loading parameters (conditions) for structures and materials based on the permanent or residual plastic deformation distribution or damage state of the structure. We have shown that the developed machine learning algorithm can accurately and (practically) uniquely identify both prior static as well as impact loading conditions in an inverse manner, based on the residual plastic strain and plastic deformation as forensic signatures. The paper presents the detailed machine learning algorithm, data acquisition and learning processes, and validation/verification examples. This development may have significant impacts on forensic material analysis and structure failure analysis, and it provides a powerful tool for material and structure forensic diagnosis, determination, and identification of damage loading conditions in accidental failure events, such as car crashes and infrastructure or building structure collapses.
Keywords: artificial intelligence (AI); deep learning; forensic materials engineering; plastic deformation; structural failure analysis
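The inverse-identification idea above can be sketched in miniature: learn from simulated (load condition, residual signature) pairs, then map an observed signature back to the load that caused it. The `residual_signature()` forward model and the nearest-neighbour lookup below are illustrative stand-ins for the paper's deep network, not the authors' method.

```python
import math

# Toy forward model (an assumption for illustration): residual plastic strain
# profile left along a member by a load of magnitude `force` applied at a
# normalized `position`.
def residual_signature(force, position, n=8):
    return [force * math.exp(-((i / n - position) ** 2) / 0.02)
            for i in range(n)]

# "Training set": simulated damage states paired with the loads that caused
# them, mirroring how the framework learns the inverse map offline.
train = [((f, x), residual_signature(f, x))
         for f in (1.0, 2.0, 3.0) for x in (0.25, 0.5, 0.75)]

def identify_load(signature):
    # Nearest-neighbour stand-in for the learned inverse mapping: return the
    # load condition whose simulated signature best matches the observation.
    return min(train,
               key=lambda pair: sum((a - b) ** 2
                                    for a, b in zip(pair[1], signature)))[0]

print(identify_load(residual_signature(2.0, 0.5)))  # (2.0, 0.5)
```

A real implementation would replace the lookup with a trained network, but the input/output contract (forensic signature in, load condition out) is the same.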
2. Artificial Intelligence in Steam Cracking Modeling: A Deep Learning Algorithm for Detailed Effluent Prediction (Cited by: 11)
Authors: Pieter P. Plehiers, Steffen H. Symoens, Ismaël Amghizar, Guy B. Marin, Christian V. Stevens, Kevin M. Van Geem. Engineering (SCIE, EI), 2019, Issue 6, pp. 1027-1040 (14 pages)
Chemical processes can benefit tremendously from fast and accurate effluent composition prediction for plant design, control, and optimization. The Industry 4.0 revolution claims that by introducing machine learning into these fields, substantial economic and environmental gains can be achieved. The bottleneck for high-frequency optimization and process control is often the time necessary to perform the required detailed analyses of, for example, feed and product. To resolve these issues, a framework of four deep learning artificial neural networks (DL ANNs) has been developed for the largest chemical production process: steam cracking. The proposed methodology allows both a detailed characterization of a naphtha feedstock and a detailed composition of the steam cracker effluent to be determined, based on a limited number of commercial naphtha indices and rapidly accessible process characteristics. The detailed characterization of a naphtha is predicted from three points on the boiling curve and a paraffins, iso-paraffins, olefins, naphthenes, and aromatics (PIONA) characterization. If unavailable, the boiling points are also estimated. Even with estimated boiling points, the developed DL ANN outperforms several established methods such as maximization of Shannon entropy and traditional ANNs. For feedstock reconstruction, a mean absolute error (MAE) of 0.3 wt% is achieved on the test set, while the MAE of the effluent prediction is 0.1 wt%. When combining all networks, using the output of the previous as input to the next, the effluent MAE increases to 0.19 wt%. In addition to the high accuracy of the networks, a major benefit is the negligible computational cost required to obtain the predictions. On a standard Intel i7 processor, predictions are made in the order of milliseconds. Commercial software such as COILSIM1D performs slightly better in terms of accuracy, but the required central processing unit time per reaction is in the order of seconds. This tremendous speed-up and minimal accuracy loss make the presented framework highly suitable for the continuous monitoring of difficult-to-access process parameters and for the envisioned, high-frequency real-time optimization (RTO) strategy or process control. Nevertheless, the lack of a fundamental basis implies that fundamental understanding is almost completely lost, which is not always well-accepted by the engineering community. In addition, the performance of the developed networks drops significantly for naphthas that are highly dissimilar to those in the training set.
Keywords: artificial intelligence; deep learning; steam cracking; artificial neural networks
3. Toward Artificial General Intelligence: Deep Reinforcement Learning Method to AI in Medicine
Authors: Daniel Schilling Weiss Nguyen, Richard Odigie. Journal of Computer and Communications, 2023, Issue 9, pp. 84-120 (37 pages)
Artificial general intelligence (AGI) is the ability of an artificial intelligence (AI) agent to solve somewhat-arbitrary tasks in somewhat-arbitrary environments. Despite being a long-standing goal in the field of AI, achieving AGI remains elusive. In this study, we empirically assessed the generalizability of AI agents by applying a deep reinforcement learning (DRL) approach to the medical domain. Our investigation involved examining how modifying the agent's structure, task, and environment impacts its generality. Sample: an NIH chest X-ray dataset with 112,120 images and 15 medical conditions. We evaluated the agent's performance on binary and multiclass classification tasks through a baseline model, a convolutional neural network model, a deep Q network model, and a proximal policy optimization model. Results: our results suggest that DRL agents with the algorithmic flexibility to autonomously vary their macro/microstructures can generalize better across given tasks and environments.
Keywords: artificial intelligence; deep learning; general-purpose learning agent; generalizability; algorithmic flexibility; internal autonomy
4. Early identification of stroke through deep learning with multi-modal human speech and movement data
Authors: Zijun Ou, Haitao Wang, Bin Zhang, Haobang Liang, Bei Hu, Longlong Ren, Yanjuan Liu, Yuhu Zhang, Chengbo Dai, Hejun Wu, Weifeng Li, Xin Li. Neural Regeneration Research (SCIE, CAS), 2025, Issue 1, pp. 234-241 (8 pages)
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multi-modal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early identification of stroke, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Keywords: artificial intelligence; deep learning; diagnosis; early detection; FAST; screening; stroke
5. Artificial intelligence-assisted repair of peripheral nerve injury: a new research hotspot and associated challenges (Cited by: 2)
Authors: Yang Guo, Liying Sun, Wenyao Zhong, Nan Zhang, Zongxuan Zhao, Wen Tian. Neural Regeneration Research (SCIE, CAS, CSCD), 2024, Issue 3, pp. 663-670 (8 pages)
Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994–2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications, and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, there are some limitations to this technology, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Keywords: artificial intelligence; artificial prosthesis; medical-industrial integration; brain-machine interface; deep learning; machine learning; networked hand prosthesis; neural interface; neural network; neural regeneration; peripheral nerve
6. Comparative analysis of empirical and deep learning models for ionospheric sporadic E layer prediction
Authors: BingKun Yu, PengHao Tian, XiangHui Xue, Christopher J. Scott, HaiLun Ye, JianFei Wu, Wen Yi, TingDi Chen, XianKang Dou. Earth and Planetary Physics (EI, CAS), 2025, Issue 1, pp. 10-19 (10 pages)
Sporadic E (Es) layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km. Because they can significantly influence radio communications and navigation systems, accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems. In this study, we present Es predictions made by an empirical model and by a deep learning model, and analyze their differences comprehensively by comparing the model predictions to satellite radio occultation (RO) measurements and ground-based ionosonde observations. The deep learning model exhibited significantly better performance, as indicated by the high correlation coefficient (r = 0.87) between its predictions and RO observations, than did the empirical model (r = 0.53). This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally, and into predicting Es layer occurrences and characteristics in particular.
Keywords: ionospheric sporadic E layer; radio occultation; ionosondes; numerical model; deep learning model; artificial intelligence
7. Artificial Intelligence-Driven Vehicle Fault Diagnosis to Revolutionize Automotive Maintenance: A Review
Authors: Md Naeem Hossain, Md Mustafizur Rahman, Devarajan Ramasamy. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 11, pp. 951-996 (46 pages)
Conventional fault diagnosis systems have critically constrained vehicle maintenance and component longevity in the automotive industry. Hence, there is a growing demand for advanced fault diagnosis technologies to mitigate the impact of these limitations on unplanned vehicular downtime caused by unanticipated vehicle breakdowns. Due to vehicles' increasingly complex and autonomous nature, there is a growing urgency to investigate novel diagnosis methodologies for improving safety, reliability, and maintainability. While Artificial Intelligence (AI) has provided a great opportunity in this area, a systematic review of the feasibility and application of AI for Vehicle Fault Diagnosis (VFD) systems is unavailable. Therefore, this review brings new insights into the potential of AI in VFD methodologies and offers a broad analysis using multiple techniques. We focus on reviewing relevant literature in the field of machine learning as well as deep learning algorithms for fault diagnosis in engines, lifting systems (suspensions and tires), gearboxes, and brakes, among other vehicular subsystems. We then delve into some examples of the use of AI in fault diagnosis and maintenance for electric vehicles and autonomous cars. The review elucidates the transformation of VFD systems, which consequently increases accuracy, economization, and prediction in most vehicular subsystems due to AI applications. Indeed, the limited performance of systems based on only one of these AI techniques is likely to be addressed by combinations: where a single technique falls short of expectations, integration can provide more reliable and versatile diagnostic support. By synthesizing current information and distinguishing forthcoming patterns, this work aims to accelerate advancement in smart automotive innovations, conforming with the requirements of Industry 4.0 and contributing to the progression of safer, more dependable vehicles. The findings underscored the necessity for cross-disciplinary cooperation and examined the total potential of AI in vehicle fault analysis.
Keywords: artificial intelligence; machine learning; deep learning; vehicle fault diagnosis; predictive maintenance
8. Gradient Optimizer Algorithm with Hybrid Deep Learning Based Failure Detection and Classification in the Industrial Environment
Authors: Mohamed Zarouan, Ibrahim M. Mehedi, Shaikh Abdul Latif, Md. Masud Rana. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 2, pp. 1341-1364 (24 pages)
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamless operation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0. Specifically, various modernized industrial processes have been equipped with numerous sensors to collect process-based data, both to find faults arising or prevailing in processes and to monitor process status. Fault diagnosis of rotating machines serves a major role in the engineering field and industrial production. Due to the disadvantages of existing fault diagnosis approaches, which depend greatly on professional experience and human knowledge, intelligent fault diagnosis based on deep learning (DL) has attracted researchers' interest. DL achieves the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDL-FDC) technique for the industrial environment. The presented GOAHDL-FDC technique initially applies the continuous wavelet transform (CWT) to preprocess the actual vibrational signals of the rotating machinery. Next, the residual network (ResNet18) model is exploited to extract features from the vibration signals, which are then fed into the HDL model for automated fault detection. Finally, GOA-based hyperparameter tuning is performed to adjust the parameter values of the HDL model accurately. The experimental result analysis of the GOAHDL-FDC algorithm takes place using a series of simulations, and the experimentation outcomes highlight the better results of the GOAHDL-FDC technique under different aspects.
Keywords: fault detection; Industry 4.0; gradient optimizer algorithm; deep learning; rotating machinery; artificial intelligence
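The CWT preprocessing step described above turns a 1-D vibration signal into a 2-D time-scale scalogram that an image network like ResNet18 can consume. A minimal from-scratch sketch follows; the real Morlet-like wavelet, the scales, and the pure-tone test signal are assumptions for illustration, not the paper's actual settings or library.

```python
import math

# Assumed wavelet: a real Morlet-like function, normalized by sqrt(scale).
def wavelet(t, scale):
    u = t / scale
    return math.exp(-u * u / 2.0) * math.cos(5.0 * u) / math.sqrt(scale)

def cwt(signal, scales):
    """Return a scalogram: one row of wavelet coefficients per scale,
    one column per time shift (brute-force O(n^2) correlation)."""
    n = len(signal)
    return [[sum(signal[k] * wavelet(k - tau, s) for k in range(n))
             for tau in range(n)]
            for s in scales]

# A pure 20 Hz tone sampled at 1 kHz stands in for a machinery vibration record.
sig = [math.sin(2 * math.pi * 20 * k / 1000.0) for k in range(128)]
scalogram = cwt(sig, scales=[2, 4, 8, 16])
print(len(scalogram), len(scalogram[0]))  # 4 128
```

In a pipeline like the one described, each scalogram would then be rendered as an image and passed to the CNN feature extractor.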
9. SkinSage XAI: An explainable deep learning solution for skin lesion diagnosis
Authors: Geetika Munjal, Paarth Bhardwaj, Vaibhav Bhargava, Shivendra Singh, Nimish Nagpal. Health Care Science, 2024, Issue 6, pp. 438-455 (18 pages)
Background: Skin cancer poses a significant global health threat, with early detection being essential for successful treatment. While deep learning algorithms have greatly enhanced the categorization of skin lesions, the black-box nature of many models limits interpretability, posing challenges for dermatologists. Methods: To address these limitations, SkinSage XAI utilizes advanced explainable artificial intelligence (XAI) techniques for skin lesion categorization. A dataset of around 50,000 images from the customized HAM10000, selected for diversity, serves as the foundation. The Inception v3 model is used for classification, supported by gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) algorithms, which provide clear visual explanations for model outputs. Results: SkinSage XAI demonstrated high performance, accurately categorizing seven types of skin lesions: dermatofibroma, benign keratosis, melanocytic nevus, vascular lesion, actinic keratosis, basal cell carcinoma, and melanoma. It achieved an accuracy of 96%, with precision at 96.42%, recall at 96.28%, F1 score at 96.14%, and an area under the curve of 99.83%. Conclusions: SkinSage XAI represents a significant advancement in dermatology and artificial intelligence by bridging gaps in accuracy and explainability. The system provides transparent, accurate diagnoses, improving decision-making for dermatologists and potentially enhancing patient outcomes.
Keywords: deep learning; skin lesions; explainable artificial intelligence; HAM10000; Grad-CAM; LIME
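As a reminder of how the figures quoted above relate to one another, accuracy, precision, recall, and F1 can all be derived from a confusion matrix. The binary toy matrix below is hypothetical (the paper reports a seven-class problem, where these metrics are typically averaged per class).

```python
# Standard classification metrics from a binary confusion matrix.
# tp/fp/fn/tn counts below are made up for illustration only.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)          # of predicted positives, fraction correct
    recall = tp / (tp + fn)             # of actual positives, fraction found
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        # F1 is the harmonic mean of precision and recall.
        "f1": 2 * precision * recall / (precision + recall),
    }

m = metrics(tp=90, fp=5, fn=10, tn=95)
print(round(m["accuracy"], 3), round(m["f1"], 3))  # 0.925 0.923
```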
10. Innovative deep learning artificial intelligence applications for predicting relationships between individual tree height and diameter at breast height (Cited by: 7)
Authors: Ilker Ercanli. Forest Ecosystems (SCIE, CSCD), 2020, Issue 2, pp. 141-158 (18 pages)
Background: Deep learning algorithms (DLA) have become prominent as an application of artificial intelligence (AI) techniques since 2010. This paper introduces DLA to predict the relationships between individual tree height (ITH) and diameter at breast height (DBH). Methods: A set of 2024 pairs of individual height and diameter at breast height measurements, originating from 150 sample plots located in stands of even-aged and pure Anatolian Crimean Pine (Pinus nigra J.F. Arnold ssp. pallasiana (Lamb.) Holmboe) in Konya Forest Enterprise, was used. The present study primarily investigated the capability and usability of DLA models for predicting the relationships between the ITH and the DBH sampled from stands with different growth structures. Eighty different DLA models, involving alternative numbers of hidden layers and neurons, were trained and compared to determine the optimal and most predictive DLA network structure. Results: The DLA model with 9 layers and 100 neurons was the best predictive network model compared with the other DLA, artificial neural network, nonlinear regression, and nonlinear mixed effect models. The alternative of 100 neurons and 9 hidden layers in the deep learning algorithm resulted in the best predictive ITH values, with a root mean squared error (RMSE) of 0.5575, percent root mean squared error (RMSE%) of 4.9504%, Akaike information criterion (AIC) of -998.9540, Bayesian information criterion (BIC) of 884.6591, fit index (FI) of 0.9436, average absolute error (AAE) of 0.4077, maximum absolute error (max. AE) of 2.5106, bias of 0.0057, and percent bias (Bias%) of 0.0502%. In addition, these predictive results with DLAs were further validated by equivalence tests, which showed that the DLA models successfully predicted tree height in the independent dataset. Conclusion: This study has emphasized the capability of DLA models, a novel artificial intelligence technique, for predicting the relationships between individual tree height and diameter at breast height, information that can be required for the management of forests.
Keywords: artificial intelligence; prediction; deep learning algorithms; individual tree height
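Several of the goodness-of-fit statistics listed above (RMSE, RMSE%, AAE, bias) can be sketched directly from their definitions. The observed and predicted tree heights below are hypothetical values for illustration, not the study's data.

```python
import math

def fit_stats(observed, predicted):
    """Goodness-of-fit statistics for paired observed/predicted values
    (heights in metres here): RMSE, RMSE% of the observed mean, average
    absolute error, and mean bias (observed minus predicted)."""
    n = len(observed)
    residuals = [o - p for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    mean_obs = sum(observed) / n
    return {
        "RMSE": rmse,
        "RMSE%": 100.0 * rmse / mean_obs,
        "AAE": sum(abs(r) for r in residuals) / n,
        "Bias": sum(residuals) / n,
    }

obs = [12.0, 14.5, 16.0, 18.2]   # hypothetical measured heights (m)
pred = [11.5, 14.8, 16.4, 18.0]  # hypothetical model predictions (m)
stats = fit_stats(obs, pred)
print(round(stats["AAE"], 2), round(stats["Bias"], 2))  # 0.35 0.0
```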
11. Evaluation of artificial intelligence models for osteoarthritis of the knee using deep learning algorithms for orthopedic radiographs
Authors: Anjali Tiwari, Murali Poduval, Vaibhav Bagaria. World Journal of Orthopedics, 2022, Issue 6, pp. 603-614 (12 pages)
BACKGROUND: Deep learning, a form of artificial intelligence, has shown promising results for interpreting radiographs. In order to develop this niche machine learning (ML) program of interpreting orthopedic radiographs with accuracy, a project named deep learning algorithm for orthopedic radiographs was conceived. In the first phase, the diagnosis of knee osteoarthritis (KOA) as per the standard Kellgren-Lawrence (KL) scale in medical images was conducted using the deep learning algorithm for orthopedic radiographs. AIM: To compare the efficacy and accuracy of eight different transfer learning deep learning models for detecting the grade of KOA from a radiograph, and to identify the most appropriate ML-based model for detecting the grade of KOA. METHODS: The study was performed on 2068 radiograph exams conducted at the Department of Orthopedic Surgery, Sir HN Reliance Hospital and Research Centre (Mumbai, India) during 2019-2021. Three orthopedic surgeons reviewed these independently, graded them for the severity of KOA as per the KL scale, and settled disagreements through a consensus session. Eight models, namely ResNet50, VGG-16, InceptionV3, MobilenetV2, EfficientnetB7, DenseNet201, Xception, and NasNetMobile, were used to evaluate the efficacy of ML in accurately classifying radiographs for KOA as per the KL scale. Of the 2068 images, 70% were used initially to train each model, 10% were used subsequently to test it, and 20% were used finally to determine its accuracy and validate it. The idea behind transfer learning for KOA grade image classification is that models already trained on a large and general dataset will effectively serve as generic models to fulfill the study's objectives. Finally, in order to benchmark efficacy, the results of the models were also compared to a first-year orthopedic trainee who independently classified these radiographs according to the KL scale. RESULTS: Our networks yielded an overall high accuracy for detecting KOA, ranging from 54% to 93%. The most successful of these was the DenseNet model, with accuracy up to 93%; interestingly, it even outperformed the human first-year trainee, who had an accuracy of 74%. CONCLUSION: The study paves the way for extrapolating the learning using ML to develop an automated KOA classification tool and enables healthcare professionals with better decision-making.
Keywords: osteoarthritis; artificial intelligence; knee; computer vision; machine learning; deep learning
12. Artificial Intelligence Driven Resiliency with Machine Learning and Deep Learning Components
Authors: Bahman Zohuri, Farhang Mossavar Rahmani. 通讯和计算机(中英文版) (Communication and Computer, Chinese-English Edition), 2019, Issue 1, pp. 1-13 (13 pages)
The future of any business, from banking, e-commerce, real estate, homeland security, healthcare, marketing, the stock market, manufacturing, education, and retail to government organizations, depends on the data and analytics capabilities that are built and scaled. The speed of change in technology in recent years has been a real challenge for all businesses. To manage that, a significant number of organizations are exploring Big Data (BD) infrastructure, which helps them take advantage of new opportunities while saving costs. Timely transformation of information is also critical for the survivability of an organization. Having the right information at the right time will not only enhance the knowledge of stakeholders within an organization but also provide them with a tool to make the right decision at the right moment. It is no longer enough to rely on a sampling of information about an organization's customers. Decision-makers need vital insights into customers' actual behavior, which requires enormous volumes of data to be processed. We believe that Big Data infrastructure is the key to successful Artificial Intelligence (AI) deployments and accurate, unbiased real-time insights. Big Data solutions have a direct impact on, and are changing, the way organizations need to work, with help from AI and its components, machine learning (ML) and deep learning (DL). In this article, we discuss these topics.
Keywords: artificial intelligence; resilience system; machine learning; deep learning; big data
13. Artificial intelligence, machine learning and deep learning for eye care specialists
Authors: Rory Sayres, Naama Hammel, Yun Liu. Annals of Eye Science, 2020, Issue 2, pp. 82-94 (13 pages)
Artificial intelligence (AI) methods have become a focus of intense interest within the eye care community. This parallels a wider interest in AI, which has started impacting many facets of society. However, understanding across the community has not kept pace with technical developments. What is AI, and how does it relate to other terms like machine learning or deep learning? How is AI currently used within eye care, and how might it be used in the future? This review paper provides an overview of these concepts for eye care specialists. We explain core concepts in AI, describe how these methods have been applied in ophthalmology, and consider future directions and challenges. We walk through the steps needed to develop an AI system for eye disease, and discuss the challenges in validating and deploying such technology. We argue that among medical fields, ophthalmology may be uniquely positioned to benefit from the thoughtful deployment of AI to improve patient care.
Keywords: artificial intelligence (AI); ophthalmology; deep learning; eye diseases
14. Research on the Application of Deep Learning in Artificial Intelligence Courses
Authors: Ruijue Wang. Journal of Electronic Research and Application, 2021, Issue 6, pp. 14-18 (5 pages)
With the deepening reform of education, an important goal of current artificial intelligence curriculum teaching reform is to put students at the center, enabling them to master basic theoretical knowledge while also developing their practical skills. As a new learning method, deep learning can be applied to the teaching of artificial intelligence courses, which not only gives play to students' subjective initiative but also improves the efficiency of their classroom learning. In order to explore the specific application of deep learning in the teaching of artificial intelligence courses, this article analyzes the key points of applying deep learning in artificial intelligence courses, and further explores the corresponding application strategies. It aims to provide useful references for improving the actual efficiency of artificial intelligence course teaching.
Keywords: deep learning; artificial intelligence courses; application strategies
15. Toward a Learnable Climate Model in the Artificial Intelligence Era (Cited by: 3)
Authors: Gang Huang, Ya Wang, Yoo-Geun Ham, Bin Mu, Weichen Tao, Chaoyang Xie. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1281-1288 (8 pages)
Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an "either/or" scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments and constraints for AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of achieving global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model, balancing AI and physics, is an achievable goal.
Keywords: artificial intelligence; deep learning; learnable climate model
Research on simulation of gun muzzle flow field empowered by artificial intelligence (Cited by 1)
16
Authors: Mengdi Zhou, Linfang Qian, Congyong Cao, Guangsong Chen, Jin Kong, Ming-hao Tong. Defence Technology (防务技术) (SCIE, EI, CAS, CSCD), 2024, Issue 2: 196-208 (13 pages)
Artificial intelligence technology is introduced into the simulation of the muzzle flow field to improve its simulation efficiency in this paper. A data-physical fusion driven framework is proposed. First, known flow field data are used to initialize the model parameters, so that the parameters to be trained start close to their optimal values. Then, physical prior knowledge is introduced into the training process so that the prediction results not only fit the known flow field information but also satisfy the physical conservation laws. Through two examples, it is shown that a model under the fusion driven framework can solve strongly nonlinear flow field problems and has stronger generalization and extensibility. The proposed model is used to solve a muzzle flow field and to delimit the safety clearance behind the barrel side. It is found that the shape of the safety clearance under different launch speeds is roughly the same, and that the pressure disturbance in the area within 9.2 m behind the muzzle section exceeds the safety threshold, making it a dangerous area. Comparison with CFD results shows that the computational efficiency of the proposed model is greatly improved at the same accuracy. The proposed model can quickly and accurately simulate the muzzle flow field under various launch conditions.
Keywords: muzzle flow field; artificial intelligence; deep learning; data-physical fusion driven; shock wave
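The data-physical fusion idea this abstract describes (fit the known flow-field samples while penalizing violations of a physical conservation law) can be sketched as a combined training loss. This is only a minimal illustration under stated assumptions, not the paper's implementation: the names `fused_loss` and `physics_residual`, the weight `lam`, and the toy steady 1-D advection law standing in for the real conservation laws are all assumptions.

```python
def mse(pred, target):
    """Mean squared error over two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def physics_residual(u, dx, c=1.0):
    """Forward-difference residual of a toy steady 1-D advection law
    c * du/dx = 0, standing in for the paper's conservation laws."""
    return [c * (u[i + 1] - u[i]) / dx for i in range(len(u) - 1)]

def fused_loss(pred, target, dx, lam=0.1):
    """Data-fit term plus lam-weighted physics term: predictions should
    both match the known flow-field samples and satisfy the prior."""
    res = physics_residual(pred, dx)
    phys = sum(r ** 2 for r in res) / len(res)
    return mse(pred, target) + lam * phys
```

A prediction that matches the data but violates the physics prior still incurs a nonzero loss, which is the point of the fusion-driven framework.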
Automatic detection of small bowel lesions with different bleeding risks based on deep learning models (Cited by 1)
17
Authors: Rui-Ya Zhang, Peng-Peng Qiang, Ling-Jun Cai, Tao Li, Yan Qin, Yu Zhang, Yi-Qing Zhao, Jun-Ping Wang. World Journal of Gastroenterology (SCIE, CAS), 2024, Issue 2: 170-183 (14 pages)
BACKGROUND: Deep learning provides an efficient automatic image recognition method for small bowel (SB) capsule endoscopy (CE) that can assist physicians in diagnosis. However, existing deep learning models still face unresolved challenges. AIM: To propose a novel and effective classification and detection model that automatically identifies various SB lesions and their bleeding risks and labels the lesions accurately, so as to enhance physicians' diagnostic efficiency and their ability to identify high-risk bleeding groups. METHODS: The proposed model is a two-stage method combining image classification with object detection. First, an improved ResNet-50 classification model classifies endoscopic images into SB lesion images, normal SB mucosa images, and invalid images. Then, an improved YOLO-V5 detection model detects the type of lesion and its bleeding risk and marks the location of the lesion. We constructed training and testing sets and compared model-assisted reading with physician reading. RESULTS: The accuracy of the model constructed in this study reached 98.96%, higher than that of other systems using only a single module. The sensitivity, specificity, and accuracy of model-assisted reading across all images were 99.17%, 99.92%, and 99.86%, significantly higher than those of the endoscopists' diagnoses. The image processing time of the model was 48 ms/image, versus 0.40 ± 0.24 s/image for the physicians (P < 0.001). CONCLUSION: The deep learning model combining image classification with object detection exhibits a satisfactory diagnostic effect on a variety of SB lesions and their bleeding risks in CE images, enhancing physicians' diagnostic efficiency and improving their ability to identify high-risk bleeding groups.
Keywords: artificial intelligence; deep learning; capsule endoscopy; image classification; object detection; bleeding risk
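The two-stage reading flow the abstract describes can be sketched as a small routing function: a classifier first labels every frame, and only frames classified as lesions are passed to the (more expensive) detector that identifies lesion type and bleeding risk. The stub `classify` and `detect` callables stand in for the paper's improved ResNet-50 and YOLO-V5 models; everything else here is an illustrative assumption.

```python
LESION, NORMAL, INVALID = "lesion", "normal", "invalid"

def read_study(frames, classify, detect):
    """Two-stage reading: route every frame through the classifier, and
    run the detector only on frames classified as lesions.
    Returns (detections, number of skipped frames)."""
    detections, skipped = [], 0
    for frame in frames:
        if classify(frame) != LESION:     # normal mucosa or invalid frame
            skipped += 1
            continue
        detections.append(detect(frame))  # e.g. (lesion type, bleeding risk)
    return detections, skipped
```

Filtering out normal and invalid frames before detection is what keeps the per-image processing time low in a study with many thousands of frames.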
Exploring deep learning for landslide mapping: A comprehensive review (Cited by 1)
18
Authors: Zhi-qiang Yang, Wen-wen Qi, Chong Xu, Xiao-yi Shao. China Geology (CAS, CSCD), 2024, Issue 2: 330-350 (21 pages)
A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning. Traditional methods relying on change detection and object-oriented approaches have been criticized for their dependence on expert knowledge and subjective factors. Recent advancements in high-resolution satellite imagery, coupled with the rapid development of artificial intelligence, particularly data-driven deep learning (DL) algorithms such as convolutional neural networks (CNN), have provided rich feature indicators for landslide mapping, overcoming previous limitations. In this review paper, 77 representative DL-based landslide detection methods applied in various environments over the past seven years are examined. This study analyzes the structures of different DL networks, discusses five main application scenarios, and assesses both the advancements and the limitations of DL in geological hazard analysis. The results indicate that the increasing number of articles per year reflects growing interest in landslide mapping by artificial intelligence, with U-Net-based structures gaining prominence due to their flexibility in feature extraction and generalization. Finally, we explore the obstacles facing DL in landslide hazard research. Challenges such as black-box operation and sample dependence persist, warranting further theoretical research and future application of DL in landslide detection.
Keywords: landslide mapping; quantitative hazard assessment; deep learning; artificial intelligence; neural network; big data; geological hazard survey engineering
A Survey of Collaborative Intelligence for AIoT
19
Authors: 罗宇哲, 李玲, 侯朋朋, 于佳耕, 程丽敏, 张常有, 武延军, 赵琛. Journal of Computer Research and Development (计算机研究与发展) (Peking University Core), 2025, Issue 1: 179-206 (28 pages)
The integrated development of deep learning and the Internet of Things has strongly promoted the prosperity of the AIoT ecosystem. On the one hand, AIoT devices provide massive data resources for deep learning; on the other hand, deep learning makes AIoT devices more intelligent. To protect user data privacy and overcome the resource bottleneck of any single AIoT device, federated learning and collaborative inference have become important supports for the wide application of deep learning in AIoT scenarios. Federated learning can effectively use users' data resources to train deep learning models while preserving privacy, and collaborative inference can draw on the computing resources of multiple devices to improve inference performance. This survey introduces the basic concepts of collaborative intelligence for AIoT and, centered on achieving efficient and secure knowledge transfer and computing-power supply, summarizes the technical progress of the past decade in three areas: federated learning and collaborative inference algorithms, system architectures, and privacy and security. It also describes the intrinsic connection between federated learning and collaborative inference in AIoT application scenarios, and looks ahead to the future development of collaborative intelligence for AIoT in terms of device sharing, model sharing, coordinated privacy and security mechanisms, and coordinated incentive mechanisms.
Keywords: collaborative intelligence; federated learning; collaborative inference; AIoT; intelligent computing systems
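The privacy-preserving training this survey covers rests on a simple aggregation primitive: each device trains locally and only parameter updates leave the device. A minimal sketch of one federated-averaging (FedAvg) step is below; flat Python lists stand in for real model weight tensors, and the function name and weighting-by-sample-count scheme follow the standard FedAvg formulation rather than any specific system from the survey.

```python
def fedavg(client_weights, client_sizes):
    """One federated-averaging aggregation step: combine per-client
    parameter vectors into a global model, weighting each client by
    its number of local training samples. Raw data never leaves the
    devices; only these parameter vectors are exchanged."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

Weighting by local sample count means a device with three times as much data pulls the global model three times as hard, which is what makes the aggregate behave like training on the pooled (but never centralized) data.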
Artificial intelligence and liver transplantation: Looking for the best donor-recipient pairing (Cited by 5)
20
Authors: Javier Briceno, Rafael Calleja, César Hervás. Hepatobiliary & Pancreatic Diseases International (SCIE, CAS, CSCD), 2022, Issue 4: 347-353 (7 pages)
Decision-making based on artificial intelligence (AI) methodology is increasingly present in all areas of modern medicine. In recent years, models based on deep learning have begun to be used in organ transplantation. Given the huge number of factors and variables involved in donor-recipient (D-R) matching, AI models may be well suited to improve organ allocation. AI-based models should provide two things: complement decision-making based on current logistic-regression metrics, and improve their predictability. Hundreds of classifiers could be applied to this problem, but not all of them are really useful for D-R pairing. Essentially, the decision to assign a given donor to a candidate on the waiting list involves a multitude of variables, including donor, recipient, logistic, and perioperative variables; some of the latter two can be inferred indirectly from the team's previous experience. Two groups of AI models have been used in D-R matching: artificial neural networks (ANN) and random forests (RF). The former mimics the functional architecture of neurons, with input layers and output layers, and the algorithms can be uni- or multi-objective. In general, ANNs can be used with large databases, where their generalizability improves. However, they are very sensitive to the quality of the databases and are, in essence, black-box models in which all variables are important; unfortunately, they do not allow the weight of each variable to be known reliably. RF, on the other hand, builds decision trees and works well with small cohorts, and it can select top variables as logistic regression does. However, RF is not useful with large databases, because of the extreme number of decision trees it would generate, making it impractical. Both ANN and RF achieve a successful donor allocation in over 80% of D-R pairings, a figure much higher than that obtained with the best statistical metrics such as the model for end-stage liver disease, the balance of risk score, and the survival outcomes following liver transplantation score. Many barriers need to be overcome before these deep-learning-based models can be adopted for D-R matching, chief among them clinicians' resistance to ceding their own decisions to autonomous computational models.
Keywords: donor-recipient matching; artificial intelligence; deep learning; artificial neural networks; random forest; liver transplantation outcome
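The allocation step the abstract frames (a trained classifier scores each waiting-list candidate for a given donor, and the best-scoring pairing is preferred) can be sketched as a small ranking function. The `predict_survival` callable stands in for the trained ANN or RF model the paper discusses; the feature names in the usage below are hypothetical, not from the article.

```python
def rank_candidates(donor, candidates, predict_survival):
    """Score every waiting-list candidate for a given donor with a
    trained model (an ANN or random forest in the abstract; a stub
    here) and return candidate ids ordered from highest to lowest
    predicted post-transplant survival."""
    scored = sorted(
        ((predict_survival(donor, cand), cand["id"]) for cand in candidates),
        reverse=True,
    )
    return [cand_id for _, cand_id in scored]
```

For example, with a hypothetical scorer that simply prefers a smaller donor-recipient age gap, `rank_candidates({"age": 40}, candidates, scorer)` orders the list by age proximity; a real system would replace the scorer with a model trained on donor, recipient, logistic, and perioperative variables.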