In this work,we have developed a novel machine(deep)learning computational framework to determine and identify damage loading parameters(conditions)for structures and materials based on the permanent or residual plast...In this work,we have developed a novel machine(deep)learning computational framework to determine and identify damage loading parameters(conditions)for structures and materials based on the permanent or residual plastic deformation distribution or damage state of the structure.We have shown that the developed machine learning algorithm can accurately and(practically)uniquely identify both prior static as well as impact loading conditions in an inverse manner,based on the residual plastic strain and plastic deformation as forensic signatures.The paper presents the detailed machine learning algorithm,data acquisition and learning processes,and validation/verification examples.This development may have significant impacts on forensic material analysis and structure failure analysis,and it provides a powerful tool for material and structure forensic diagnosis,determination,and identification of damage loading conditions in accidental failure events,such as car crashes and infrastructure or building structure collapses.展开更多
Chemical processes can bene t tremendously from fast and accurate ef uent composition prediction for plant design, control, and optimization. The Industry 4.0 revolution claims that by introducing machine learning int...Chemical processes can bene t tremendously from fast and accurate ef uent composition prediction for plant design, control, and optimization. The Industry 4.0 revolution claims that by introducing machine learning into these elds, substantial economic and environmental gains can be achieved. The bottleneck for high-frequency optimization and process control is often the time necessary to perform the required detailed analyses of, for example, feed and product. To resolve these issues, a framework of four deep learning arti cial neural networks (DL ANNs) has been developed for the largest chemicals production process steam cracking. The proposed methodology allows both a detailed characterization of a naphtha feedstock and a detailed composition of the steam cracker ef uent to be determined, based on a limited number of commercial naphtha indices and rapidly accessible process characteristics. The detailed char- acterization of a naphtha is predicted from three points on the boiling curve and paraf ns, iso-paraf ns, ole ns, naphthenes, and aronatics (PIONA) characterization. If unavailable, the boiling points are also estimated. Even with estimated boiling points, the developed DL ANN outperforms several established methods such as maximization of Shannon entropy and traditional ANNs. For feedstock reconstruction, a mean absolute error (MAE) of 0.3 wt% is achieved on the test set, while the MAE of the ef uent predic- tion is 0.1 wt%. When combining all networks using the output of the previous as input to the next the ef uent MAE increases to 0.19 wt%. In addition to the high accuracy of the networks, a major bene t is the negligible computational cost required to obtain the predictions. On a standard Intel i7 processor, predictions are made in the order of milliseconds. Commercial software such as COILSIM1D performs slightly better in terms of accuracy, but the required central processing unit time per reaction is in the order of seconds. This tremendous speed-up and minimal accuracy loss make the presented framework highly suitable for the continuous monitoring of dif cult-to-access process parameters and for the envi- sioned, high-frequency real-time optimization (RTO) strategy or process control. Nevertheless, the lack of a fundamental basis implies that fundamental understanding is almost completely lost, which is not always well-accepted by the engineering community. In addition, the performance of the developed net- works drops signi cantly for naphthas that are highly dissimilar to those in the training set.展开更多
Artificial general intelligence (AGI) is the ability of an artificial intelligence (AI) agent to solve somewhat-arbitrary tasks in somewhat-arbitrary environments. Despite being a long-standing goal in the field of AI...Artificial general intelligence (AGI) is the ability of an artificial intelligence (AI) agent to solve somewhat-arbitrary tasks in somewhat-arbitrary environments. Despite being a long-standing goal in the field of AI, achieving AGI remains elusive. In this study, we empirically assessed the generalizability of AI agents by applying a deep reinforcement learning (DRL) approach to the medical domain. Our investigation involved examining how modifying the agent’s structure, task, and environment impacts its generality. Sample: An NIH chest X-ray dataset with 112,120 images and 15 medical conditions. We evaluated the agent’s performance on binary and multiclass classification tasks through a baseline model, a convolutional neural network model, a deep Q network model, and a proximal policy optimization model. Results: Our results suggest that DRL agents with the algorithmic flexibility to autonomously vary their macro/microstructures can generalize better across given tasks and environments.展开更多
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life.Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale(CPSS)and the Face Arm Speech Test(FAST)are...Early identification and treatment of stroke can greatly improve patient outcomes and quality of life.Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale(CPSS)and the Face Arm Speech Test(FAST)are commonly used for stroke screening,accurate administration is dependent on specialized training.In this study,we proposed a novel multimodal deep learning approach,based on the FAST,for assessing suspected stroke patients exhibiting symptoms such as limb weakness,facial paresis,and speech disorders in acute settings.We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements,facial expressions,and speech tests based on the FAST.We compared the constructed deep learning model,which was designed to process multi-modal datasets,with six prior models that achieved good action classification performance,including the I3D,SlowFast,X3D,TPN,TimeSformer,and MViT.We found that the findings of our deep learning model had a higher clinical value compared with the other approaches.Moreover,the multi-modal model outperformed its single-module variants,highlighting the benefit of utilizing multiple types of patient data,such as action videos and speech audio.These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification of stroke,thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.展开更多
Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury.Specifically,it can be used to analyze and process data regarding peripheral nerve injury and repair,while study findings on p...Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury.Specifically,it can be used to analyze and process data regarding peripheral nerve injury and repair,while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms.To investigate advances in the use of artificial intelligence in the diagnosis,rehabilitation,and scientific examination of peripheral nerve injury,we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994–2023.We identified the following research hotspots in peripheral nerve injury and repair:(1)diagnosis,classification,and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques,such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy;(2)motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms,such as wearable devices and assisted wheelchair systems;(3)improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning,such as implantable peripheral nerve interfaces;(4)the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility,enabling them to control devices such as networked hand prostheses;(5)artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation,thereby reducing surgical risk and complications,and facilitating postoperative recovery.Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair,there are some limitations to this technology,such as the consequences of missing or imbalanced data,low data accuracy and reproducibility,and ethical issues(e.g.,privacy,data security,research transparency).Future research should address the issue of data collection,as large-scale,high-quality clinical datasets are required to establish effective artificial intelligence models.Multimodal data processing is also necessary,along with interdisciplinary collaboration,medical-industrial integration,and multicenter,large-sample clinical studies.展开更多
Sporadic E(Es)layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km.Because they can significantly influence radio communications and navigation systems,...Sporadic E(Es)layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km.Because they can significantly influence radio communications and navigation systems,accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems.In this study,we present Es predictions made by an empirical model and by a deep learning model,and analyze their differences comprehensively by comparing the model predictions to satellite RO measurements and ground-based ionosonde observations.The deep learning model exhibited significantly better performance,as indicated by its high coefficient of correlation(r=0.87)with RO observations and predictions,than did the empirical model(r=0.53).This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally,and into predicting Es layer occurrences and characteristics,in particular.展开更多
Conventional fault diagnosis systems have constrained the automotive industry to damage vehicle maintenance and component longevity critically.Hence,there is a growing demand for advanced fault diagnosis technologies ...Conventional fault diagnosis systems have constrained the automotive industry to damage vehicle maintenance and component longevity critically.Hence,there is a growing demand for advanced fault diagnosis technologies to mitigate the impact of these limitations on unplanned vehicular downtime caused by unanticipated vehicle break-downs.Due to vehicles’increasingly complex and autonomous nature,there is a growing urgency to investigate novel diagnosis methodologies for improving safety,reliability,and maintainability.While Artificial Intelligence(AI)has provided a great opportunity in this area,a systematic review of the feasibility and application of AI for Vehicle Fault Diagnosis(VFD)systems is unavailable.Therefore,this review brings new insights into the potential of AI in VFD methodologies and offers a broad analysis using multiple techniques.We focus on reviewing relevant literature in the field of machine learning as well as deep learning algorithms for fault diagnosis in engines,lifting systems(suspensions and tires),gearboxes,and brakes,among other vehicular subsystems.We then delve into some examples of the use of AI in fault diagnosis and maintenance for electric vehicles and autonomous cars.The review elucidates the transformation of VFD systems that consequently increase accuracy,economization,and prediction in most vehicular sub-systems due to AI applications.Indeed,the limited performance of systems based on only one of these AI techniques is likely to be addressed by combinations:The integration shows that a single technique or method fails its expectations,which can lead to more reliable and versatile diagnostic support.By synthesizing current information and distinguishing forthcoming patterns,this work aims to accelerate advancement in smart automotive innovations,conforming with the requests of Industry 4.0 and adding to the progression of more secure,more dependable vehicles.The findings underscored the necessity for cross-disciplinary cooperation and examined the total potential of AI in vehicle default analysis.展开更多
Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamlessoperation of the system. Current industrial processes are getting smarter with the emergence of Indu...Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamlessoperation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0.Specifically, various modernized industrial processes have been equipped with quite a few sensors to collectprocess-based data to find faults arising or prevailing in processes along with monitoring the status of processes.Fault diagnosis of rotating machines serves a main role in the engineering field and industrial production. Dueto the disadvantages of existing fault, diagnosis approaches, which greatly depend on professional experienceand human knowledge, intellectual fault diagnosis based on deep learning (DL) has attracted the researcher’sinterest. DL reaches the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDLFDC)in the industrial environment. The presented GOAHDL-FDC technique initially applies continuous wavelettransform (CWT) for preprocessing the actual vibrational signals of the rotating machinery. Next, the residualnetwork (ResNet18) model was exploited for the extraction of features from the vibration signals which are thenfed into theHDLmodel for automated fault detection. Finally, theGOA-based hyperparameter tuning is performedtoadjust the parameter valuesof theHDLmodel accurately.The experimental result analysis of the GOAHDL-FD Calgorithm takes place using a series of simulations and the experimentation outcomes highlight the better resultsof the GOAHDL-FDC technique under different aspects.展开更多
Background:Skin cancer poses a significant global health threat,with early detection being essential for successful treatment.While deep learning algorithms have greatly enhanced the categorization of skin lesions,the...Background:Skin cancer poses a significant global health threat,with early detection being essential for successful treatment.While deep learning algorithms have greatly enhanced the categorization of skin lesions,the black-box nature of many models limits interpretability,posing challenges for dermatologists.Methods:To address these limitations,SkinSage XAI utilizes advanced explainable artificial intelligence(XAI)techniques for skin lesion categorization.A data set of around 50,000 images from the Customized HAM10000,selected for diversity,serves as the foundation.The Inception v3 model is used for classification,supported by gradient-weighted class activation mapping and local interpretable model-agnostic explanations algorithms,which provide clear visual explanations for model outputs.Results:SkinSage XAI demonstrated high performance,accurately categorizing seven types of skin lesions—dermatofibroma,benign keratosis,melanocytic nevus,vascular lesion,actinic keratosis,basal cell carcinoma,and melanoma.It achieved an accuracy of 96%,with precision at 96.42%,recall at 96.28%,F1 score at 96.14%,and an area under the curve of 99.83%.Conclusions:SkinSage XAI represents a significant advancement in dermatology and artificial intelligence by bridging gaps in accuracy and explainability.The system provides transparent,accurate diagnoses,improving decision-making for dermatologists and potentially enhancing patient outcomes.展开更多
Background:Deep Learning Algorithms(DLA)have become prominent as an application of Artificial Intelligence(Al)Techniques since 2010.This paper introduces the DLA to predict the relationships between individual tree he...Background:Deep Learning Algorithms(DLA)have become prominent as an application of Artificial Intelligence(Al)Techniques since 2010.This paper introduces the DLA to predict the relationships between individual tree height(ITH)and the diameter at breast height(DBH).Methods:A set of 2024 pairs of individual height and diameter at breast height measurements,originating from 150 sample plots located in stands of even aged and pure Anatolian Crimean Pine(Pinus nigra J.F.Arnold ssp.pallasiana(Lamb.)Holmboe)in Konya Forest Enterprise.The present study primarily investigated the capability and usability of DLA models for predicting the relationships between the ITH and the DBH sampled from some stands with different growth structures.The 80 different DLA models,which involve different the alternatives for the numbers of hidden layers and neuron,have been trained and compared to determine optimum and best predictive DLAs network structure.Results:It was determined that the DLA model with 9 layers and 100 neurons has been the best predictive network model compared as those by other different DLA,Artificial Neural Network,Nonlinear Regression and Nonlinear Mixed Effect models.The alternative of 100#neurons and 9#hidden layers in deep learning algorithms resulted in best predictive ITH values with root mean squared error(RMSE,0.5575),percent of the root mean squared error(RMSE%,4.9504%),Akaike information criterion(AIC,-998.9540),Bayesian information criterion(BIC,884.6591),fit index(Fl,0.9436),average absolute error(AAE,0.4077),maximum absolute error(max.AE,2.5106),Bias(0.0057)and percent Bias(Bias%,0.0502%).In addition,these predictive results with DLAs were further validated by the Equivalence tests that showed the DLA models successfully predicted the tree height in the independent dataset.Conclusion:This study has emphasized the capability of the DLA models,novel artificial intelligence technique,for predicting the relationships between individual tree height and the diameter at breast height that can be required information for the management of forests.展开更多
BACKGROUND Deep learning,a form of artificial intelligence,has shown promising results for interpreting radiographs.In order to develop this niche machine learning(ML)program of interpreting orthopedic radiographs wit...BACKGROUND Deep learning,a form of artificial intelligence,has shown promising results for interpreting radiographs.In order to develop this niche machine learning(ML)program of interpreting orthopedic radiographs with accuracy,a project named deep learning algorithm for orthopedic radiographs was conceived.In the first phase,the diagnosis of knee osteoarthritis(KOA)as per the standard Kellgren-Lawrence(KL)scale in medical images was conducted using the deep learning algorithm for orthopedic radiographs.AIM To compare efficacy and accuracy of eight different transfer learning deep learning models for detecting the grade of KOA from a radiograph and identify the most appropriate ML-based model for the detecting grade of KOA.METHODS The study was performed on 2068 radiograph exams conducted at the Department of Orthopedic Surgery,Sir HN Reliance Hospital and Research Centre(Mumbai,India)during 2019-2021.Three orthopedic surgeons reviewed these independently,graded them for the severity of KOA as per the KL scale and settled disagreement through a consensus session.Eight models,namely ResNet50,VGG-16,InceptionV3,MobilnetV2,EfficientnetB7,DenseNet201,Xception and NasNetMobile,were used to evaluate the efficacy of ML in accurately classifying radiographs for KOA as per the KL scale.Out of the 2068 images,70%were used initially to train the model,10%were used subsequently to test the model,and 20%were used finally to determine the accuracy of and validate each model.The idea behind transfer learning for KOA grade image classification is that if the existing models are already trained on a large and general dataset,these models will effectively serve as generic models to fulfill the study’s objectives.Finally,in order to benchmark the efficacy,the results of the models were also compared to a first-year orthopedic trainee who independently classified these models according to the KL scale.RESULTS Our network yielded an overall high accuracy for detecting KOA,ranging from 54%to 93%.The most successful of these was the DenseNet model,with accuracy up to 93%;interestingly,it even outperformed the human first-year trainee who had an accuracy of 74%.CONCLUSION The study paves the way for extrapolating the learning using ML to develop an automated KOA classification tool and enable healthcare professionals with better decision-making.展开更多
The future of any business from banking,e-commerce,real estate,homeland security,healthcare,marketing,the stock market,manufacturing,education,retail to government organizations depends on the data and analytics capab...The future of any business from banking,e-commerce,real estate,homeland security,healthcare,marketing,the stock market,manufacturing,education,retail to government organizations depends on the data and analytics capabilities that are built and scaled.The speed of change in technology in recent years has been a real challenge for all businesses.To manage that,a significant number of organizations are exploring the Big Data(BD)infrastructure that helps them to take advantage of new opportunities while saving costs.Timely transformation of information is also critical for the survivability of an organization.Having the right information at the right time will enhance not only the knowledge of stakeholders within an organization but also providing them with a tool to make the right decision at the right moment.It is no longer enough to rely on a sampling of information about the organizations'customers.The decision-makers need to get vital insights into the customers'actual behavior,which requires enormous volumes of data to be processed.We believe that Big Data infrastructure is the key to successful Artificial Intelligence(AI)deployments and accurate,unbiased real-time insights.Big data solutions have a direct impact and changing the way the organization needs to work with help from AI and its components ML and DL.In this article,we discuss these topics.展开更多
Artificial intelligence(AI)methods have become a focus of intense interest within the eye care community.This parallels a wider interest in AI,which has started impacting many facets of society.However,understanding a...Artificial intelligence(AI)methods have become a focus of intense interest within the eye care community.This parallels a wider interest in AI,which has started impacting many facets of society.However,understanding across the community has not kept pace with technical developments.What is AI,and how does it relate to other terms like machine learning or deep learning?How is AI currently used within eye care,and how might it be used in the future?This review paper provides an overview of these concepts for eye care specialists.We explain core concepts in AI,describe how these methods have been applied in ophthalmology,and consider future directions and challenges.We walk through the steps needed to develop an AI system for eye disease,and discuss the challenges in validating and deploying such technology.We argue that among medical fields,ophthalmology may be uniquely positioned to benefit from the thoughtful deployment of AI to improve patient care.展开更多
With the in-depth reform of education,taking students as the center,letting students master the basic knowledge of the theory,but also training students’practical skills,is an important goal of the current artificial...With the in-depth reform of education,taking students as the center,letting students master the basic knowledge of the theory,but also training students’practical skills,is an important goal of the current artificial intelligence curriculum teaching reform.As a new learning method,deep learning is applied to the teaching of artificial intelligence courses,which can not only give play to students’subjective initiative,but also improve the efficiency of students’classroom learning.In order to explore the specific application of deep learning in the teaching of artificial intelligence courses,this article analyzes the key points of the application of deep learning in artificial intelligence courses.In addition,further explores the application strategies of deep learning in artificial intelligence courses.As it aims to provide some useful references to improve the actual efficiency of artificial intelligence course teaching.展开更多
Artificial intelligence(AI)models have significantly impacted various areas of the atmospheric sciences,reshaping our approach to climate-related challenges.Amid this AI-driven transformation,the foundational role of ...Artificial intelligence(AI)models have significantly impacted various areas of the atmospheric sciences,reshaping our approach to climate-related challenges.Amid this AI-driven transformation,the foundational role of physics in climate science has occasionally been overlooked.Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics,rather than an“either/or”scenario.Scrutinizing controversies around current physical inconsistencies in large AI models,we stress the critical need for detailed dynamic diagnostics and physical constraints.Furthermore,we provide illustrative examples to guide future assessments and constraints for AI models.Regarding AI integration with numerical models,we argue that offline AI parameterization schemes may fall short of achieving global optimality,emphasizing the importance of constructing online schemes.Additionally,we highlight the significance of fostering a community culture and propose the OCR(Open,Comparable,Reproducible)principles.Through a better community culture and a deep integration of physics and AI,we contend that developing a learnable climate model,balancing AI and physics,is an achievable goal.展开更多
Artificial intelligence technology is introduced into the simulation of muzzle flow field to improve its simulation efficiency in this paper.A data-physical fusion driven framework is proposed.First,the known flow fie...Artificial intelligence technology is introduced into the simulation of muzzle flow field to improve its simulation efficiency in this paper.A data-physical fusion driven framework is proposed.First,the known flow field data is used to initialize the model parameters,so that the parameters to be trained are close to the optimal value.Then physical prior knowledge is introduced into the training process so that the prediction results not only meet the known flow field information but also meet the physical conservation laws.Through two examples,it is proved that the model under the fusion driven framework can solve the strongly nonlinear flow field problems,and has stronger generalization and expansion.The proposed model is used to solve a muzzle flow field,and the safety clearance behind the barrel side is divided.It is pointed out that the shape of the safety clearance under different launch speeds is roughly the same,and the pressure disturbance in the area within 9.2 m behind the muzzle section exceeds the safety threshold,which is a dangerous area.Comparison with the CFD results shows that the calculation efficiency of the proposed model is greatly improved under the condition of the same calculation accuracy.The proposed model can quickly and accurately simulate the muzzle flow field under various launch conditions.展开更多
BACKGROUND Deep learning provides an efficient automatic image recognition method for small bowel(SB)capsule endoscopy(CE)that can assist physicians in diagnosis.However,the existing deep learning models present some ...BACKGROUND Deep learning provides an efficient automatic image recognition method for small bowel(SB)capsule endoscopy(CE)that can assist physicians in diagnosis.However,the existing deep learning models present some unresolved challenges.AIM To propose a novel and effective classification and detection model to automatically identify various SB lesions and their bleeding risks,and label the lesions accurately so as to enhance the diagnostic efficiency of physicians and the ability to identify high-risk bleeding groups.METHODS The proposed model represents a two-stage method that combined image classification with object detection.First,we utilized the improved ResNet-50 classification model to classify endoscopic images into SB lesion images,normal SB mucosa images,and invalid images.Then,the improved YOLO-V5 detection model was utilized to detect the type of lesion and its risk of bleeding,and the location of the lesion was marked.We constructed training and testing sets and compared model-assisted reading with physician reading.RESULTS The accuracy of the model constructed in this study reached 98.96%,which was higher than the accuracy of other systems using only a single module.The sensitivity,specificity,and accuracy of the model-assisted reading detection of all images were 99.17%,99.92%,and 99.86%,which were significantly higher than those of the endoscopists’diagnoses.The image processing time of the model was 48 ms/image,and the image processing time of the physicians was 0.40±0.24 s/image(P<0.001).CONCLUSION The deep learning model of image classification combined with object detection exhibits a satisfactory diagnostic effect on a variety of SB lesions and their bleeding risks in CE images,which enhances the diagnostic efficiency of physicians and improves the ability of physicians to identify high-risk bleeding groups.展开更多
A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning.Traditional methods relying on change detection and object-oriented approaches have been criticized f...A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning.Traditional methods relying on change detection and object-oriented approaches have been criticized for their dependence on expert knowledge and subjective factors.Recent advancements in highresolution satellite imagery,coupled with the rapid development of artificial intelligence,particularly datadriven deep learning algorithms(DL)such as convolutional neural networks(CNN),have provided rich feature indicators for landslide mapping,overcoming previous limitations.In this review paper,77representative DL-based landslide detection methods applied in various environments over the past seven years were examined.This study analyzed the structures of different DL networks,discussed five main application scenarios,and assessed both the advancements and limitations of DL in geological hazard analysis.The results indicated that the increasing number of articles per year reflects growing interest in landslide mapping by artificial intelligence,with U-Net-based structures gaining prominence due to their flexibility in feature extraction and generalization.Finally,we explored the hindrances of DL in landslide hazard research based on the above research content.Challenges such as black-box operations and sample dependence persist,warranting further theoretical research and future application of DL in landslide detection.展开更多
Decision-making based on artificial intelligence(AI)methodology is increasingly present in all areas of modern medicine.In recent years,models based on deep-learning have begun to be used in organ transplantation.Taki...Decision-making based on artificial intelligence(AI)methodology is increasingly present in all areas of modern medicine.In recent years,models based on deep-learning have begun to be used in organ transplantation.Taking into account the huge number of factors and variables involved in donor-recipient(DR)matching,AI models may be well suited to improve organ allocation.AI-based models should provide two solutions:complement decision-making with current metrics based on logistic regression and improve their predictability.Hundreds of classifiers could be used to address this problem.However,not all of them are really useful for D-R pairing.Basically,in the decision to assign a given donor to a candidate in waiting list,a multitude of variables are handled,including donor,recipient,logistic and perioperative variables.Of these last two,some of them can be inferred indirectly from the team’s previous experience.Two groups of AI models have been used in the D-R matching:artificial neural networks(ANN)and random forest(RF).The former mimics the functional architecture of neurons,with input layers and output layers.The algorithms can be uni-or multi-objective.In general,ANNs can be used with large databases,where their generalizability is improved.However,they are models that are very sensitive to the quality of the databases and,in essence,they are black-box models in which all variables are important.Unfortunately,these models do not allow to know safely the weight of each variable.On the other hand,RF builds decision trees and works well with small cohorts.In addition,they can select top variables as with logistic regression.However,they are not useful with large databases,due to the extreme number of decision trees that they would generate,making them impractical.Both ANN and RF allow a successful donor allocation in over 80%of D-R pairing,a number much higher than that obtained with the best statistical metrics such as model for end-stage liver disease,balance of risk score,and survival outcomes following liver transplantation scores.Many barriers need to be overcome before these deeplearning-based models can be included for D-R matching.The main one of them is the resistance of the clinicians to leave their own decision to autonomous computational models.展开更多
文摘In this work,we have developed a novel machine(deep)learning computational framework to determine and identify damage loading parameters(conditions)for structures and materials based on the permanent or residual plastic deformation distribution or damage state of the structure.We have shown that the developed machine learning algorithm can accurately and(practically)uniquely identify both prior static as well as impact loading conditions in an inverse manner,based on the residual plastic strain and plastic deformation as forensic signatures.The paper presents the detailed machine learning algorithm,data acquisition and learning processes,and validation/verification examples.This development may have significant impacts on forensic material analysis and structure failure analysis,and it provides a powerful tool for material and structure forensic diagnosis,determination,and identification of damage loading conditions in accidental failure events,such as car crashes and infrastructure or building structure collapses.
文摘Chemical processes can bene t tremendously from fast and accurate ef uent composition prediction for plant design, control, and optimization. The Industry 4.0 revolution claims that by introducing machine learning into these elds, substantial economic and environmental gains can be achieved. The bottleneck for high-frequency optimization and process control is often the time necessary to perform the required detailed analyses of, for example, feed and product. To resolve these issues, a framework of four deep learning arti cial neural networks (DL ANNs) has been developed for the largest chemicals production process steam cracking. The proposed methodology allows both a detailed characterization of a naphtha feedstock and a detailed composition of the steam cracker ef uent to be determined, based on a limited number of commercial naphtha indices and rapidly accessible process characteristics. The detailed char- acterization of a naphtha is predicted from three points on the boiling curve and paraf ns, iso-paraf ns, ole ns, naphthenes, and aronatics (PIONA) characterization. If unavailable, the boiling points are also estimated. Even with estimated boiling points, the developed DL ANN outperforms several established methods such as maximization of Shannon entropy and traditional ANNs. For feedstock reconstruction, a mean absolute error (MAE) of 0.3 wt% is achieved on the test set, while the MAE of the ef uent predic- tion is 0.1 wt%. When combining all networks using the output of the previous as input to the next the ef uent MAE increases to 0.19 wt%. In addition to the high accuracy of the networks, a major bene t is the negligible computational cost required to obtain the predictions. On a standard Intel i7 processor, predictions are made in the order of milliseconds. Commercial software such as COILSIM1D performs slightly better in terms of accuracy, but the required central processing unit time per reaction is in the order of seconds. This tremendous speed-up and minimal accuracy loss make the presented framework highly suitable for the continuous monitoring of dif cult-to-access process parameters and for the envi- sioned, high-frequency real-time optimization (RTO) strategy or process control. Nevertheless, the lack of a fundamental basis implies that fundamental understanding is almost completely lost, which is not always well-accepted by the engineering community. In addition, the performance of the developed net- works drops signi cantly for naphthas that are highly dissimilar to those in the training set.
文摘Artificial general intelligence (AGI) is the ability of an artificial intelligence (AI) agent to solve somewhat-arbitrary tasks in somewhat-arbitrary environments. Despite being a long-standing goal in the field of AI, achieving AGI remains elusive. In this study, we empirically assessed the generalizability of AI agents by applying a deep reinforcement learning (DRL) approach to the medical domain. Our investigation involved examining how modifying the agent’s structure, task, and environment impacts its generality. Sample: An NIH chest X-ray dataset with 112,120 images and 15 medical conditions. We evaluated the agent’s performance on binary and multiclass classification tasks through a baseline model, a convolutional neural network model, a deep Q network model, and a proximal policy optimization model. Results: Our results suggest that DRL agents with the algorithmic flexibility to autonomously vary their macro/microstructures can generalize better across given tasks and environments.
基金supported by the Ministry of Science and Technology of China,No.2020AAA0109605(to XL)Meizhou Major Scientific and Technological Innovation PlatformsProjects of Guangdong Provincial Science & Technology Plan Projects,No.2019A0102005(to HW).
文摘Early identification and treatment of stroke can greatly improve patient outcomes and quality of life.Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale(CPSS)and the Face Arm Speech Test(FAST)are commonly used for stroke screening,accurate administration is dependent on specialized training.In this study,we proposed a novel multimodal deep learning approach,based on the FAST,for assessing suspected stroke patients exhibiting symptoms such as limb weakness,facial paresis,and speech disorders in acute settings.We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements,facial expressions,and speech tests based on the FAST.We compared the constructed deep learning model,which was designed to process multi-modal datasets,with six prior models that achieved good action classification performance,including the I3D,SlowFast,X3D,TPN,TimeSformer,and MViT.We found that the findings of our deep learning model had a higher clinical value compared with the other approaches.Moreover,the multi-modal model outperformed its single-module variants,highlighting the benefit of utilizing multiple types of patient data,such as action videos and speech audio.These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification of stroke,thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
基金supported by the Capital’s Funds for Health Improvement and Research,No.2022-2-2072(to YG).
文摘Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury.Specifically,it can be used to analyze and process data regarding peripheral nerve injury and repair,while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms.To investigate advances in the use of artificial intelligence in the diagnosis,rehabilitation,and scientific examination of peripheral nerve injury,we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994–2023.We identified the following research hotspots in peripheral nerve injury and repair:(1)diagnosis,classification,and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques,such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy;(2)motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms,such as wearable devices and assisted wheelchair systems;(3)improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning,such as implantable peripheral nerve interfaces;(4)the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility,enabling them to control devices such as networked hand prostheses;(5)artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation,thereby reducing surgical risk and complications,and facilitating postoperative recovery.Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair,there are some limitations to this technology,such as the consequences of missing or imbalanced data,low data accuracy and reproducibility,and ethical issues(e.g.,privacy,data security,research transparency).Future research should address the issue of data collection,as large-scale,high-quality clinical datasets are required to establish effective artificial intelligence models.Multimodal data processing is also necessary,along with interdisciplinary collaboration,medical-industrial integration,and multicenter,large-sample clinical studies.
基金supported by the Project of Stable Support for Youth Team in Basic Research Field,CAS(grant No.YSBR-018)the National Natural Science Foundation of China(grant Nos.42188101,42130204)+4 种基金the B-type Strategic Priority Program of CAS(grant no.XDB41000000)the National Natural Science Foundation of China(NSFC)Distinguished Overseas Young Talents Program,Innovation Program for Quantum Science and Technology(2021ZD0300301)the Open Research Project of Large Research Infrastructures of CAS-“Study on the interaction between low/mid-latitude atmosphere and ionosphere based on the Chinese Meridian Project”.The project was supported also by the National Key Laboratory of Deep Space Exploration(Grant No.NKLDSE2023A002)the Open Fund of Anhui Provincial Key Laboratory of Intelligent Underground Detection(Grant No.APKLIUD23KF01)the China National Space Administration(CNSA)pre-research Project on Civil Aerospace Technologies No.D010305,D010301.
文摘Sporadic E(Es)layers in the ionosphere are characterized by intense plasma irregularities in the E region at altitudes of 90-130 km.Because they can significantly influence radio communications and navigation systems,accurate forecasting of Es layers is crucial for ensuring the precision and dependability of navigation satellite systems.In this study,we present Es predictions made by an empirical model and by a deep learning model,and analyze their differences comprehensively by comparing the model predictions to satellite RO measurements and ground-based ionosonde observations.The deep learning model exhibited significantly better performance,as indicated by its high coefficient of correlation(r=0.87)with RO observations and predictions,than did the empirical model(r=0.53).This study highlights the importance of integrating artificial intelligence technology into ionosphere modelling generally,and into predicting Es layer occurrences and characteristics,in particular.
基金funding provided through University Distinguished Research Grants(Project No.RDU223016)as well as financial assistance provided through the Fundamental Research Grant Scheme(No.FRGS/1/2022/TK10/UMP/02/35).
文摘Conventional fault diagnosis systems have constrained the automotive industry to damage vehicle maintenance and component longevity critically.Hence,there is a growing demand for advanced fault diagnosis technologies to mitigate the impact of these limitations on unplanned vehicular downtime caused by unanticipated vehicle break-downs.Due to vehicles’increasingly complex and autonomous nature,there is a growing urgency to investigate novel diagnosis methodologies for improving safety,reliability,and maintainability.While Artificial Intelligence(AI)has provided a great opportunity in this area,a systematic review of the feasibility and application of AI for Vehicle Fault Diagnosis(VFD)systems is unavailable.Therefore,this review brings new insights into the potential of AI in VFD methodologies and offers a broad analysis using multiple techniques.We focus on reviewing relevant literature in the field of machine learning as well as deep learning algorithms for fault diagnosis in engines,lifting systems(suspensions and tires),gearboxes,and brakes,among other vehicular subsystems.We then delve into some examples of the use of AI in fault diagnosis and maintenance for electric vehicles and autonomous cars.The review elucidates the transformation of VFD systems that consequently increase accuracy,economization,and prediction in most vehicular sub-systems due to AI applications.Indeed,the limited performance of systems based on only one of these AI techniques is likely to be addressed by combinations:The integration shows that a single technique or method fails its expectations,which can lead to more reliable and versatile diagnostic support.By synthesizing current information and distinguishing forthcoming patterns,this work aims to accelerate advancement in smart automotive innovations,conforming with the requests of Industry 4.0 and adding to the progression of more secure,more dependable vehicles.The findings underscored the necessity for cross-disciplinary cooperation and examined the total potential of AI in vehicle default analysis.
基金The Deanship of Scientific Research(DSR)at King Abdulaziz University(KAU),Jeddah,Saudi Arabia has funded this project under Grant No.(G:651-135-1443).
文摘Failure detection is an essential task in industrial systems for preventing costly downtime and ensuring the seamlessoperation of the system. Current industrial processes are getting smarter with the emergence of Industry 4.0.Specifically, various modernized industrial processes have been equipped with quite a few sensors to collectprocess-based data to find faults arising or prevailing in processes along with monitoring the status of processes.Fault diagnosis of rotating machines serves a main role in the engineering field and industrial production. Dueto the disadvantages of existing fault, diagnosis approaches, which greatly depend on professional experienceand human knowledge, intellectual fault diagnosis based on deep learning (DL) has attracted the researcher’sinterest. DL reaches the desired fault classification and automatic feature learning. Therefore, this article designs a Gradient Optimizer Algorithm with Hybrid Deep Learning-based Failure Detection and Classification (GOAHDLFDC)in the industrial environment. The presented GOAHDL-FDC technique initially applies continuous wavelettransform (CWT) for preprocessing the actual vibrational signals of the rotating machinery. Next, the residualnetwork (ResNet18) model was exploited for the extraction of features from the vibration signals which are thenfed into theHDLmodel for automated fault detection. Finally, theGOA-based hyperparameter tuning is performedtoadjust the parameter valuesof theHDLmodel accurately.The experimental result analysis of the GOAHDL-FD Calgorithm takes place using a series of simulations and the experimentation outcomes highlight the better resultsof the GOAHDL-FDC technique under different aspects.
文摘Background:Skin cancer poses a significant global health threat,with early detection being essential for successful treatment.While deep learning algorithms have greatly enhanced the categorization of skin lesions,the black-box nature of many models limits interpretability,posing challenges for dermatologists.Methods:To address these limitations,SkinSage XAI utilizes advanced explainable artificial intelligence(XAI)techniques for skin lesion categorization.A data set of around 50,000 images from the Customized HAM10000,selected for diversity,serves as the foundation.The Inception v3 model is used for classification,supported by gradient-weighted class activation mapping and local interpretable model-agnostic explanations algorithms,which provide clear visual explanations for model outputs.Results:SkinSage XAI demonstrated high performance,accurately categorizing seven types of skin lesions—dermatofibroma,benign keratosis,melanocytic nevus,vascular lesion,actinic keratosis,basal cell carcinoma,and melanoma.It achieved an accuracy of 96%,with precision at 96.42%,recall at 96.28%,F1 score at 96.14%,and an area under the curve of 99.83%.Conclusions:SkinSage XAI represents a significant advancement in dermatology and artificial intelligence by bridging gaps in accuracy and explainability.The system provides transparent,accurate diagnoses,improving decision-making for dermatologists and potentially enhancing patient outcomes.
文摘Background:Deep Learning Algorithms(DLA)have become prominent as an application of Artificial Intelligence(Al)Techniques since 2010.This paper introduces the DLA to predict the relationships between individual tree height(ITH)and the diameter at breast height(DBH).Methods:A set of 2024 pairs of individual height and diameter at breast height measurements,originating from 150 sample plots located in stands of even aged and pure Anatolian Crimean Pine(Pinus nigra J.F.Arnold ssp.pallasiana(Lamb.)Holmboe)in Konya Forest Enterprise.The present study primarily investigated the capability and usability of DLA models for predicting the relationships between the ITH and the DBH sampled from some stands with different growth structures.The 80 different DLA models,which involve different the alternatives for the numbers of hidden layers and neuron,have been trained and compared to determine optimum and best predictive DLAs network structure.Results:It was determined that the DLA model with 9 layers and 100 neurons has been the best predictive network model compared as those by other different DLA,Artificial Neural Network,Nonlinear Regression and Nonlinear Mixed Effect models.The alternative of 100#neurons and 9#hidden layers in deep learning algorithms resulted in best predictive ITH values with root mean squared error(RMSE,0.5575),percent of the root mean squared error(RMSE%,4.9504%),Akaike information criterion(AIC,-998.9540),Bayesian information criterion(BIC,884.6591),fit index(Fl,0.9436),average absolute error(AAE,0.4077),maximum absolute error(max.AE,2.5106),Bias(0.0057)and percent Bias(Bias%,0.0502%).In addition,these predictive results with DLAs were further validated by the Equivalence tests that showed the DLA models successfully predicted the tree height in the independent dataset.Conclusion:This study has emphasized the capability of the DLA models,novel artificial intelligence technique,for predicting the relationships between individual tree height and the diameter at breast height that can be required information for the management of forests.
文摘BACKGROUND Deep learning,a form of artificial intelligence,has shown promising results for interpreting radiographs.In order to develop this niche machine learning(ML)program of interpreting orthopedic radiographs with accuracy,a project named deep learning algorithm for orthopedic radiographs was conceived.In the first phase,the diagnosis of knee osteoarthritis(KOA)as per the standard Kellgren-Lawrence(KL)scale in medical images was conducted using the deep learning algorithm for orthopedic radiographs.AIM To compare efficacy and accuracy of eight different transfer learning deep learning models for detecting the grade of KOA from a radiograph and identify the most appropriate ML-based model for the detecting grade of KOA.METHODS The study was performed on 2068 radiograph exams conducted at the Department of Orthopedic Surgery,Sir HN Reliance Hospital and Research Centre(Mumbai,India)during 2019-2021.Three orthopedic surgeons reviewed these independently,graded them for the severity of KOA as per the KL scale and settled disagreement through a consensus session.Eight models,namely ResNet50,VGG-16,InceptionV3,MobilnetV2,EfficientnetB7,DenseNet201,Xception and NasNetMobile,were used to evaluate the efficacy of ML in accurately classifying radiographs for KOA as per the KL scale.Out of the 2068 images,70%were used initially to train the model,10%were used subsequently to test the model,and 20%were used finally to determine the accuracy of and validate each model.The idea behind transfer learning for KOA grade image classification is that if the existing models are already trained on a large and general dataset,these models will effectively serve as generic models to fulfill the study’s objectives.Finally,in order to benchmark the efficacy,the results of the models were also compared to a first-year orthopedic trainee who independently classified these models according to the KL scale.RESULTS Our network yielded an overall high accuracy for detecting KOA,ranging from 54%to 93%.The most successful of these was the DenseNet model,with accuracy up to 93%;interestingly,it even outperformed the human first-year trainee who had an accuracy of 74%.CONCLUSION The study paves the way for extrapolating the learning using ML to develop an automated KOA classification tool and enable healthcare professionals with better decision-making.
文摘The future of any business from banking,e-commerce,real estate,homeland security,healthcare,marketing,the stock market,manufacturing,education,retail to government organizations depends on the data and analytics capabilities that are built and scaled.The speed of change in technology in recent years has been a real challenge for all businesses.To manage that,a significant number of organizations are exploring the Big Data(BD)infrastructure that helps them to take advantage of new opportunities while saving costs.Timely transformation of information is also critical for the survivability of an organization.Having the right information at the right time will enhance not only the knowledge of stakeholders within an organization but also providing them with a tool to make the right decision at the right moment.It is no longer enough to rely on a sampling of information about the organizations'customers.The decision-makers need to get vital insights into the customers'actual behavior,which requires enormous volumes of data to be processed.We believe that Big Data infrastructure is the key to successful Artificial Intelligence(AI)deployments and accurate,unbiased real-time insights.Big data solutions have a direct impact and changing the way the organization needs to work with help from AI and its components ML and DL.In this article,we discuss these topics.
文摘Artificial intelligence(AI)methods have become a focus of intense interest within the eye care community.This parallels a wider interest in AI,which has started impacting many facets of society.However,understanding across the community has not kept pace with technical developments.What is AI,and how does it relate to other terms like machine learning or deep learning?How is AI currently used within eye care,and how might it be used in the future?This review paper provides an overview of these concepts for eye care specialists.We explain core concepts in AI,describe how these methods have been applied in ophthalmology,and consider future directions and challenges.We walk through the steps needed to develop an AI system for eye disease,and discuss the challenges in validating and deploying such technology.We argue that among medical fields,ophthalmology may be uniquely positioned to benefit from the thoughtful deployment of AI to improve patient care.
文摘With the in-depth reform of education,taking students as the center,letting students master the basic knowledge of the theory,but also training students’practical skills,is an important goal of the current artificial intelligence curriculum teaching reform.As a new learning method,deep learning is applied to the teaching of artificial intelligence courses,which can not only give play to students’subjective initiative,but also improve the efficiency of students’classroom learning.In order to explore the specific application of deep learning in the teaching of artificial intelligence courses,this article analyzes the key points of the application of deep learning in artificial intelligence courses.In addition,further explores the application strategies of deep learning in artificial intelligence courses.As it aims to provide some useful references to improve the actual efficiency of artificial intelligence course teaching.
基金supported by the National Natural Science Foundation of China(Grant Nos.42141019 and 42261144687)and STEP(Grant No.2019QZKK0102)supported by the Korea Environmental Industry&Technology Institute(KEITI)through the“Project for developing an observation-based GHG emissions geospatial information map”,funded by the Korea Ministry of Environment(MOE)(Grant No.RS-2023-00232066).
文摘Artificial intelligence(AI)models have significantly impacted various areas of the atmospheric sciences,reshaping our approach to climate-related challenges.Amid this AI-driven transformation,the foundational role of physics in climate science has occasionally been overlooked.Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics,rather than an“either/or”scenario.Scrutinizing controversies around current physical inconsistencies in large AI models,we stress the critical need for detailed dynamic diagnostics and physical constraints.Furthermore,we provide illustrative examples to guide future assessments and constraints for AI models.Regarding AI integration with numerical models,we argue that offline AI parameterization schemes may fall short of achieving global optimality,emphasizing the importance of constructing online schemes.Additionally,we highlight the significance of fostering a community culture and propose the OCR(Open,Comparable,Reproducible)principles.Through a better community culture and a deep integration of physics and AI,we contend that developing a learnable climate model,balancing AI and physics,is an achievable goal.
基金Supported by the Natural Science Foundation of Jiangsu Province of China(Grant No.BK20210347)Supported by the National Natural Science Foundation of China(Grant No.U2141246).
文摘Artificial intelligence technology is introduced into the simulation of muzzle flow field to improve its simulation efficiency in this paper.A data-physical fusion driven framework is proposed.First,the known flow field data is used to initialize the model parameters,so that the parameters to be trained are close to the optimal value.Then physical prior knowledge is introduced into the training process so that the prediction results not only meet the known flow field information but also meet the physical conservation laws.Through two examples,it is proved that the model under the fusion driven framework can solve the strongly nonlinear flow field problems,and has stronger generalization and expansion.The proposed model is used to solve a muzzle flow field,and the safety clearance behind the barrel side is divided.It is pointed out that the shape of the safety clearance under different launch speeds is roughly the same,and the pressure disturbance in the area within 9.2 m behind the muzzle section exceeds the safety threshold,which is a dangerous area.Comparison with the CFD results shows that the calculation efficiency of the proposed model is greatly improved under the condition of the same calculation accuracy.The proposed model can quickly and accurately simulate the muzzle flow field under various launch conditions.
基金The Shanxi Provincial Administration of Traditional Chinese Medicine,No.2023ZYYDA2005.
文摘BACKGROUND Deep learning provides an efficient automatic image recognition method for small bowel(SB)capsule endoscopy(CE)that can assist physicians in diagnosis.However,the existing deep learning models present some unresolved challenges.AIM To propose a novel and effective classification and detection model to automatically identify various SB lesions and their bleeding risks,and label the lesions accurately so as to enhance the diagnostic efficiency of physicians and the ability to identify high-risk bleeding groups.METHODS The proposed model represents a two-stage method that combined image classification with object detection.First,we utilized the improved ResNet-50 classification model to classify endoscopic images into SB lesion images,normal SB mucosa images,and invalid images.Then,the improved YOLO-V5 detection model was utilized to detect the type of lesion and its risk of bleeding,and the location of the lesion was marked.We constructed training and testing sets and compared model-assisted reading with physician reading.RESULTS The accuracy of the model constructed in this study reached 98.96%,which was higher than the accuracy of other systems using only a single module.The sensitivity,specificity,and accuracy of the model-assisted reading detection of all images were 99.17%,99.92%,and 99.86%,which were significantly higher than those of the endoscopists’diagnoses.The image processing time of the model was 48 ms/image,and the image processing time of the physicians was 0.40±0.24 s/image(P<0.001).CONCLUSION The deep learning model of image classification combined with object detection exhibits a satisfactory diagnostic effect on a variety of SB lesions and their bleeding risks in CE images,which enhances the diagnostic efficiency of physicians and improves the ability of physicians to identify high-risk bleeding groups.
基金supported by the National Key Research and Development Program of China(2021YFB3901205)the National Institute of Natural Hazards,Ministry of Emergency Management of China(2023-JBKY-57)。
文摘A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning.Traditional methods relying on change detection and object-oriented approaches have been criticized for their dependence on expert knowledge and subjective factors.Recent advancements in highresolution satellite imagery,coupled with the rapid development of artificial intelligence,particularly datadriven deep learning algorithms(DL)such as convolutional neural networks(CNN),have provided rich feature indicators for landslide mapping,overcoming previous limitations.In this review paper,77representative DL-based landslide detection methods applied in various environments over the past seven years were examined.This study analyzed the structures of different DL networks,discussed five main application scenarios,and assessed both the advancements and limitations of DL in geological hazard analysis.The results indicated that the increasing number of articles per year reflects growing interest in landslide mapping by artificial intelligence,with U-Net-based structures gaining prominence due to their flexibility in feature extraction and generalization.Finally,we explored the hindrances of DL in landslide hazard research based on the above research content.Challenges such as black-box operations and sample dependence persist,warranting further theoretical research and future application of DL in landslide detection.
基金supported by a grant from Mutua Madrile?a XVIII Convovatoria de ayudas a la investigación。
文摘Decision-making based on artificial intelligence(AI)methodology is increasingly present in all areas of modern medicine.In recent years,models based on deep-learning have begun to be used in organ transplantation.Taking into account the huge number of factors and variables involved in donor-recipient(DR)matching,AI models may be well suited to improve organ allocation.AI-based models should provide two solutions:complement decision-making with current metrics based on logistic regression and improve their predictability.Hundreds of classifiers could be used to address this problem.However,not all of them are really useful for D-R pairing.Basically,in the decision to assign a given donor to a candidate in waiting list,a multitude of variables are handled,including donor,recipient,logistic and perioperative variables.Of these last two,some of them can be inferred indirectly from the team’s previous experience.Two groups of AI models have been used in the D-R matching:artificial neural networks(ANN)and random forest(RF).The former mimics the functional architecture of neurons,with input layers and output layers.The algorithms can be uni-or multi-objective.In general,ANNs can be used with large databases,where their generalizability is improved.However,they are models that are very sensitive to the quality of the databases and,in essence,they are black-box models in which all variables are important.Unfortunately,these models do not allow to know safely the weight of each variable.On the other hand,RF builds decision trees and works well with small cohorts.In addition,they can select top variables as with logistic regression.However,they are not useful with large databases,due to the extreme number of decision trees that they would generate,making them impractical.Both ANN and RF allow a successful donor allocation in over 80%of D-R pairing,a number much higher than that obtained with the best statistical metrics such as model for end-stage liver disease,balance of risk score,and survival outcomes following liver transplantation scores.Many barriers need to be overcome before these deeplearning-based models can be included for D-R matching.The main one of them is the resistance of the clinicians to leave their own decision to autonomous computational models.