With the integration of large-scale renewable energy, new controllable devices, and the required reinforcement of power grids, modern power systems exhibit typical characteristics such as uncertainty, vulnerability, and openness, which confront power grid operation and control with severe security challenges. The application of artificial intelligence (AI) technologies represented by machine learning in power grid regulation is limited by the reliability, interpretability, and generalization ability of complex modeling. The hybrid-augmented intelligence (HAI) mode based on human-machine collaboration (HMC) is a pivotal direction for the future development of AI technology in this field. Based on the characteristics of applications in power grid regulation, this paper discusses the system architecture and key technologies of a human-machine hybrid-augmented intelligence (HHI) system for large-scale power grid dispatching and control (PGDC). First, the theory and application scenarios of HHI are introduced and analyzed; then the physical and functional architectures of the HHI system and the human-machine collaborative regulation process are proposed. Key technologies are discussed to achieve a thorough integration of human and machine intelligence. Finally, the state of the art and future development of HHI in power grid regulation are summarized, aiming to efficiently improve the intelligence level of power grid regulation in a human-machine interactive and collaborative way.
The long-term goal of artificial intelligence (AI) is to make machines learn and think like human beings. Due to the high levels of uncertainty and vulnerability in human life and the open-ended nature of the problems that humans face, no matter how intelligent machines are, they are unable to completely replace humans. Therefore, it is necessary to introduce human cognitive capabilities or human-like cognitive models into AI systems to develop a new form of AI, that is, hybrid-augmented intelligence. This form of AI or machine intelligence is a feasible and important development model. Hybrid-augmented intelligence can be divided into two basic models: one is human-in-the-loop augmented intelligence with human-computer collaboration, and the other is cognitive-computing-based augmented intelligence, in which a cognitive model is embedded in the machine learning system. This survey describes a basic framework for human-computer collaborative hybrid-augmented intelligence, and the basic elements of hybrid-augmented intelligence based on cognitive computing. These elements include intuitive reasoning, causal models, and the evolution of memory and knowledge, especially the role and basic principles of intuitive reasoning in complex problem solving, and the cognitive learning framework for visual scene understanding based on memory and reasoning. Several typical applications of hybrid-augmented intelligence in related fields are given.
DURING our discussions at the workshops for writing “What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence” [1], we had expected the next milestone for Artificial Intelligence (AI) to be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movie/theater technology that could be used for conducting new “Artificiofactual Experiments” [2] to replace conventional “Counterfactual Experiments” in scientific research and technical development for both natural and social studies [2]-[6]. Now we have OpenAI’s Sora, so soon; but this is not the final form, which is in fact still far away, and it is just the beginning.
In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next-generation HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and the high-speed railway smart platform. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
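The traffic-flow-forecasting direction listed above lends itself to a minimal illustration. The sketch below is a generic one-step-ahead baseline (simple exponential smoothing), not a method taken from any of the reviewed papers; the passenger-flow counts and the smoothing factor `alpha` are hypothetical.

```python
def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead forecast of the next traffic-flow value
    via simple exponential smoothing of the observed series."""
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    level = series[0]
    for value in series[1:]:
        # The new level blends the latest observation with the old level.
        level = alpha * value + (1 - alpha) * level
    return level

# Hypothetical hourly passenger-flow counts at one station.
flows = [120, 132, 128, 140, 138]
print(exp_smooth_forecast(flows))  # → 135.75
```

In practice the reviewed literature uses far richer models (e.g., neural networks over spatio-temporal data); this baseline only makes the forecasting task concrete.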
Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an “either/or” scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments of and constraints on AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of achieving global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model that balances AI and physics is an achievable goal.
THE tremendous impact of large models represented by ChatGPT [1]-[3] makes it necessary to consider the practical applications of such models [4]. However, for an artificial intelligence (AI) to truly evolve, it needs to possess a physical “body” to transition from the virtual world to the real world and evolve through interaction with real environments. In this context, “embodied intelligence” has sparked a new wave of research and technology, leading AI beyond the digital realm into a new paradigm that can actively act and perceive in a physical environment through tangible entities such as robots and automated devices [5].
AUTOMATION has come a long way since the early days of mechanization, i.e., the transition from working exclusively by hand or with animals to working with machinery. The rise of steam engines and water wheels represented the first generation of industry, which is now called Industry 1.0.
Citation: L. Vlacic, H. Huang, M. Dotoli, Y. Wang, P. Ioanno, L. Fan, X. Wang, R. Carli, C. Lv, L. Li, X. Na, Q.-L. Han, and F.-Y. Wang, “Automation 5.0: The key to systems intelligence and Industry 5.0,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 8, pp. 1723-1727, Aug. 2024.
Background: The growth and use of Artificial Intelligence (AI) in the medical field is rapidly rising, and AI is proving to be a practical tool in patient care across the healthcare industry. The objective of this review is to assess and analyze the use of AI in orthopedic practice, including its applications, limitations, and pitfalls. Methods: A review of all relevant databases, including EMBASE, the Cochrane Database of Systematic Reviews, MEDLINE, Science Citation Index, Scopus, and Web of Science, was conducted using the keywords AI, orthopedic surgery, applications, and drawbacks. All related articles on AI and orthopedic practice were reviewed. A total of 3210 articles were included in the review. Results: Data from 351 studies were analyzed. In orthopedic surgery, AI is being used for diagnostic procedures, radiological diagnosis, models of clinical care, and utilization of hospital and bed resources. AI has also gained a substantial role in robot-assisted orthopedic surgery. Conclusions: AI has now become part of orthopedic practice and will further increase its stake in the healthcare industry. Nonetheless, clinicians should remain aware of AI’s serious limitations and pitfalls and consider the drawbacks and errors in its use.
Introduction: Ultrafast recent developments in artificial intelligence (AI) have multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image-recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. To score the reliability of the responses received from the image-recognition bot, two corresponding scales ranging from 0 to 5 were developed. The set of images was divided into two groups, unlabeled (Group A) and labeled (Group B), and further according to the type of surgical procedure or image resolution. Results: The AI was able to correctly recognize the context of surgery-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled ones it scored 2.905/5 (58.1%). After each successful interpretation, the phases of the procedure were commented on in detail. With scores of 4-5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications, and outcome rates of the operation in question. Conclusion: Interaction between surgeon and chatbot appears to be an interesting front end for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts using commercially available software. Further development of medically oriented AI software and clinical-world awareness are expected to bring fruitful information on the topic in the years to come.
This editorial provides commentary on an article titled “Potential and limitations of ChatGPT and generative artificial intelligence (AI) in medical safety education” recently published in the World Journal of Clinical Cases. AI has enormous potential for various applications in the field of Kawasaki disease (KD). One is machine learning (ML) to assist in the diagnosis of KD, and clinical prediction models have been constructed worldwide using ML; the second is using a gene-signal calculation toolbox to identify KD, which can be used to monitor key clinical features and laboratory parameters of disease severity; and the third is using deep learning (DL) to assist in cardiac ultrasound detection. The performance of the DL algorithm is similar to that of experienced cardiac experts in detecting coronary artery lesions, thereby promoting the diagnosis of KD. To effectively utilize AI in the diagnosis and treatment of KD, it is crucial to improve the accuracy of AI decision-making using more medical data, while addressing issues related to patient personal information protection and AI decision-making responsibility. AI progress is expected to provide patients with accurate and effective medical services that will positively impact the diagnosis and treatment of KD in the future.
Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture. Recent advancements in high-throughput phenotyping (HTP) technologies and artificial intelligence (AI) have revolutionized the field, enabling rapid and accurate assessment of crop traits on a large scale. The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement. AI algorithms can analyze and interpret large datasets, and extract meaningful patterns and correlations between phenotypic traits and genetic factors. These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection, thereby reducing the time and cost required for variety development. However, further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement. By leveraging AI algorithms, researchers can efficiently analyze phenotypic data, uncover complex patterns, and establish predictive models that enable precise trait selection and crop breeding. The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement. This review will encompass an in-depth analysis of recent advances and applications, highlighting the numerous benefits and challenges associated with HTP and AI.
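As a toy illustration of the predictive-modeling step described above, the sketch below fits an ordinary least-squares line relating a single HTP measurement to a trait value. The variables (canopy height, grain yield) and all numbers are hypothetical; real breeding pipelines use multivariate genomic and phenomic models.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: y ≈ a + b * x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x), intercept from the means.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    b = num / den
    a = mean_y - b * mean_x
    return a, b

# Hypothetical plot-level data: canopy height (m) vs. grain yield (t/ha).
heights = [0.8, 1.0, 1.2, 1.4]
yields_ = [4.1, 4.9, 5.9, 7.1]
a, b = fit_line(heights, yields_)
predicted = a + b * 1.1  # predicted yield for a 1.1 m canopy
```

A breeder could rank candidate plots by `predicted` trait values, which is the "precise trait selection" the review refers to, in miniature.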
Modern medicine relies on various medical imaging technologies for non-invasively observing patients’ anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
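One of the statistical-modeling challenges named above, imbalanced datasets, has a simple baseline remedy that can be sketched in a few lines: random oversampling of the minority class. This is a generic illustration with made-up feature vectors, not the specific technique of any study reviewed here; stronger alternatives (e.g., synthetic-sample methods) exist.

```python
import random

def random_oversample(features, labels, seed=0):
    """Duplicate minority-class samples at random until every class
    has as many samples as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(features, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# Hypothetical radiomic feature vectors: 4 benign (0) vs. 1 malignant (1).
X = [[0.1, 2.3], [0.2, 2.1], [0.1, 2.4], [0.3, 2.2], [0.9, 5.7]]
y = [0, 0, 0, 0, 1]
Xb, yb = random_oversample(X, y)  # now 4 samples of each class
```

Oversampling must be applied only to the training split, never before cross-validation, or the resulting performance estimates are optimistically biased.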
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as “AI2AI” is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which achieves 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
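The internals of GAADPSDNN are not given in this abstract, but the core idea of genetic-algorithm-driven adaptive feature selection can be sketched generically. Below, a toy GA evolves a binary feature mask; the per-feature `scores` and the fitness penalty (a stand-in for the computational load on IoT devices) are entirely hypothetical.

```python
import random

def fitness(mask, scores, penalty=0.1):
    """Toy surrogate fitness: total usefulness of the active features
    minus a per-feature cost (stand-in for compute load)."""
    return sum(s for m, s in zip(mask, scores) if m) - penalty * sum(mask)

def ga_select(scores, pop_size=20, generations=30, seed=1):
    """Evolve a binary feature mask via elitist selection,
    one-point crossover, and single-bit mutation."""
    rng = random.Random(seed)
    n = len(scores)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, scores), reverse=True)
        parents = pop[: pop_size // 2]       # elitism: keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1     # mutate one random bit
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda m: fitness(m, scores))

scores = [0.9, 0.05, 0.8, 0.02, 0.6]  # hypothetical per-feature usefulness
best_mask = ga_select(scores)
```

In a real pipeline the fitness of a mask would be the validation accuracy of the downstream DNN trained on the selected features, which is far more expensive to evaluate than this surrogate.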
The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and the data they generate to cyber security threats that need to be addressed. This work investigates hybrid cyber threats (HCTs), which now operate on an entirely new level in the increasingly adopted IIoT. It focuses on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework is proposed, in which L1 regularisation and Random Forest are used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model is employed to construct the correlation between IIoT components and different threats.
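Grey relation analysis, mentioned above for correlating IIoT components with threats, ranks how closely each comparison series tracks a reference series. Below is a minimal generic sketch (min-max normalization and the conventional distinguishing coefficient ρ = 0.5); the series values are hypothetical, and the cited work's exact formulation may differ.

```python
def grey_relational_grades(reference, series, rho=0.5):
    """Grey relational grade of each comparison series against the
    reference: mean over k of (d_min + rho*d_max) / (d_k + rho*d_max)."""
    def minmax(seq):
        lo, hi = min(seq), max(seq)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in seq]

    ref = minmax(reference)
    deltas = [[abs(r - v) for r, v in zip(ref, minmax(s))] for s in series]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    if d_max == 0:  # every series coincides with the reference
        return [1.0 for _ in series]
    grades = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Hypothetical: threat intensity (reference) vs. two component signals.
threat = [1, 2, 3, 4]
components = [[2, 4, 6, 8], [4, 3, 2, 1]]
grades = grey_relational_grades(threat, components)
```

The first component rises in lockstep with the threat series and so receives the maximum grade of 1.0; the second moves in the opposite direction and scores much lower.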
Under the background of “artificial intelligence + X”, the development of the landscape architecture industry is ushering in new opportunities, and the cultivation of professional talents needs to be updated to meet social demand. This paper analyzes the cultivation demands of landscape architecture graduate students in the context of the new era, and identifies problems by examining the original professional graduate training mode. A new cultivation mode for graduate students in landscape architecture is proposed, including updating the target orientation of the discipline, optimizing the teaching system, building a “dual-teacher” tutor team, and improving the “industry-university-research-utilization” integrated cultivation, so as to cultivate high-quality compound talents with disciplinary characteristics.
Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994-2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications, and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, there are some limitations to this technology, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
With the significant and widespread application of lithium-ion batteries, there is a growing demand for improved performance of lithium-ion batteries. The intricate degradation throughout the whole lifecycle profoundly impacts the safety, durability, and reliability of lithium-ion batteries. To ensure the long-term, safe, and efficient operation of lithium-ion batteries in various fields, there is a pressing need for enhanced battery intelligence that can withstand extreme events. This work reviews the current status of intelligent battery technology from three perspectives: intelligent response, intelligent sensing, and intelligent management. The intelligent response of battery materials forms the foundation for battery stability, the intelligent sensing of multi-dimensional signals is essential for battery management, and intelligent management ensures the long-term stable operation of lithium-ion batteries. The critical challenges encountered in the development of intelligent battery technology from each perspective are thoroughly analyzed, and potential solutions are proposed, aiming to facilitate the rapid development of intelligent battery technologies.
The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, to ensure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions made by these algorithms. These models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the utilization of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI framework reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The study also discusses the potential applications of the proposed XAI models in the smart healthcare environment, helping to ensure trust and accountability in AI-based decisions, which is essential for achieving a safe and reliable smart healthcare environment.
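The metrics reported above (precision, recall, F1, FDR, FOR, DOR) all derive from the four cells of a binary confusion matrix. A self-contained sketch with illustrative counts (not the paper's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    fdr = fp / (fp + tp)                     # false discovery rate = 1 - precision
    false_omission = fn / (fn + tn)          # false omission rate (FOR)
    dor = (tp * tn) / (fp * fn)              # diagnostic odds ratio
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "fdr": fdr, "for": false_omission, "dor": dor}

# Illustrative counts: 90 true positives, 10 false positives,
# 5 false negatives, 95 true negatives.
m = diagnostic_metrics(tp=90, fp=10, fn=5, tn=95)
```

Note that FDR is by definition 1 − precision, a useful internal-consistency check when reading reported results.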
Owing to the rapid development of modern computer technologies, artificial intelligence (AI) has emerged as an essential instrument for intelligent analysis across a range of fields. AI has been proven to be highly effective in ophthalmology, where it is frequently used for identifying, diagnosing, and typing retinal diseases. An increasing number of researchers have begun to comprehensively map patients’ retinal diseases using AI, which has made individualized clinical prediction and treatment possible. These include prognostic improvement, risk prediction, progression assessment, and interventional therapies for retinal diseases. Researchers have used a range of input data methods to increase the accuracy and dependability of the results, including the use of tabular, textual, or image-based input data. They also combined the analyses of multiple types of input data. To give ophthalmologists access to precise, individualized, and high-quality treatment strategies that will further optimize treatment outcomes, this review summarizes the latest findings in AI research related to the prediction and guidance of clinical diagnosis and treatment of retinal diseases.
Artificial intelligence (AI) is making significant strides in revolutionizing the detection of Barrett’s esophagus (BE), a precursor to esophageal adenocarcinoma. In the research article by Tsai et al, researchers utilized endoscopic images to train an AI model, challenging the traditional distinction between endoscopic and histological BE. This approach yielded remarkable results, with the AI system achieving an accuracy of 94.37%, sensitivity of 94.29%, and specificity of 94.44%. The study’s extensive dataset enhances the AI model’s practicality, offering valuable support to endoscopists by minimizing unnecessary biopsies. However, questions about the applicability to different endoscopic systems remain. The study underscores the potential of AI in BE detection while highlighting the need for further research to assess its adaptability to diverse clinical settings.
Funding: supported by the National Key R&D Program of China (2018AAA0101500).
Funding: Project supported by the Chinese Academy of Engineering, the National Natural Science Foundation of China (No. L1522023), the National Basic Research Program (973) of China (No. 2015CB351703), and the National Key Research and Development Plan (Nos. 2016YFB1001004 and 2016YFB1000903).
基金Funding: Supported by the National Natural Science Foundation of China (62271485, 61903363, U1811463, 62103411, 62203250) and the Science and Technology Development Fund of Macao SAR (0093/2023/RIA2, 0050/2020/A1).
文摘Abstract: During our discussions at the workshops for writing "What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence" [1], we had expected the next milestone for artificial intelligence (AI) to be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movie/theater technology that could be used for conducting new "Artificiofactual Experiments" [2] to replace conventional "Counterfactual Experiments" in scientific research and technical development for both natural and social studies [2]-[6]. Now we have OpenAI's Sora, so soon; but this is not the final step, which is actually far away, and it is just the beginning.
基金Funding: Supported by the National Natural Science Foundation of China (62172033).
文摘Abstract: In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next-generation HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and high-speed railway smart platforms. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
基金Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42141019 and 42261144687) and STEP (Grant No. 2019QZKK0102), and by the Korea Environmental Industry & Technology Institute (KEITI) through the "Project for developing an observation-based GHG emissions geospatial information map", funded by the Korea Ministry of Environment (MOE) (Grant No. RS-2023-00232066).
文摘Abstract: Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an "either/or" scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments and constraints for AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of achieving global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model, balancing AI and physics, is an achievable goal.
基金Funding: Supported by the National Natural Science Foundation of China (62302047, 62203250) and the Science and Technology Development Fund of Macao SAR (0093/2023/RIA2, 0050/2020/A1).
文摘Abstract: The tremendous impact of large models represented by ChatGPT [1]-[3] makes it necessary to consider the practical applications of such models [4]. However, for an artificial intelligence (AI) to truly evolve, it needs to possess a physical "body" to transition from the virtual world to the real world and evolve through interaction with real environments. In this context, "embodied intelligence" has sparked a new wave of research and technology, leading AI beyond the digital realm into a new paradigm that can actively act and perceive in a physical environment through tangible entities such as robots and automated devices [5].
基金Funding: Supported in part by the Hong Kong Polytechnic University via the project P0038447, and by the Science and Technology Development Fund, Macao SAR (0093/2023/RIA2 and 0145/2023/RIA3).
文摘Abstract: Automation has come a long way since the early days of mechanization, i.e., working exclusively by hand or using animals to work with machinery. The rise of steam engines and water wheels represented the first generation of industry, which is now called Industry 1.0. Citation: L. Vlacic, H. Huang, M. Dotoli, Y. Wang, P. Ioannou, L. Fan, X. Wang, R. Carli, C. Lv, L. Li, X. Na, Q.-L. Han, and F.-Y. Wang, "Automation 5.0: The key to systems intelligence and Industry 5.0," IEEE/CAA J. Autom. Sinica, vol. 11, no. 8, pp. 1723-1727, Aug. 2024.
文摘Abstract: Background: The growth and use of Artificial Intelligence (AI) in the medical field is rising rapidly, and AI is proving to be a practical tool for patient care in the healthcare industry. The objective of this review is to assess and analyze the use of AI in orthopedic practice, including its applications, limitations, and pitfalls. Methods: A review of all relevant databases, such as EMBASE, the Cochrane Database of Systematic Reviews, MEDLINE, Science Citation Index, Scopus, and Web of Science, was conducted with the keywords AI, orthopedic surgery, applications, and drawbacks. All related articles on AI and orthopedic practice were reviewed, and a total of 3210 articles were included. Results: Data from 351 studies were analyzed, showing that in orthopedic surgery AI is being used for diagnostic procedures, radiological diagnosis, models of clinical care, and utilization of hospital and bed resources. AI has also taken a significant share in robot-assisted orthopedic surgery. Conclusions: AI has become part of orthopedic practice and will further increase its stake in the healthcare industry. Nonetheless, clinicians should remain aware of AI's serious limitations and pitfalls and consider the drawbacks and errors in its use.
文摘Abstract: Introduction: Ultrafast recent developments in artificial intelligence (AI) have multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel, commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image-recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. In order to score the reliability of the responses received from the image-recognition bot, two corresponding scales were developed, ranging from 0-5. The set of images was divided into two groups, unlabeled (Group A) and labeled (Group B), and according to the type of surgical procedure or image resolution. Results: The AI was able to correctly recognize the context of surgery-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled it scored 2.905/5 (58.1%). The phases of the procedure were commented on in detail after all successful interpretations. With rates of 4-5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications, and outcome rates of the operation in question. Conclusion: Interaction between surgeon and chatbot appears to be an interesting frontier for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts with commercially available software. Further development of medically oriented AI software and clinical-world awareness are expected to bring fruitful information on the topic in the years to come.
文摘Abstract: This editorial provides commentary on an article titled "Potential and limitations of ChatGPT and generative artificial intelligence (AI) in medical safety education" recently published in the World Journal of Clinical Cases. AI has enormous potential for various applications in the field of Kawasaki disease (KD). One is machine learning (ML) to assist in the diagnosis of KD, and clinical prediction models have been constructed worldwide using ML; the second is using a gene-signal calculation toolbox to identify KD, which can be used to monitor key clinical features and laboratory parameters of disease severity; and the third is using deep learning (DL) to assist in cardiac ultrasound detection. The performance of the DL algorithm is similar to that of experienced cardiac experts in detecting coronary artery lesions, promoting the diagnosis of KD. To effectively utilize AI in the diagnosis and treatment of KD, it is crucial to improve the accuracy of AI decision-making using more medical data, while addressing issues related to the protection of patients' personal information and responsibility for AI decision-making. AI progress is expected to provide patients with accurate and effective medical services that will positively impact the diagnosis and treatment of KD in the future.
基金Funding: Supported by a grant from the Standardization and Integration of Resources Information for Seed-cluster in Hub-Spoke Material Bank Program, Rural Development Administration, Republic of Korea (PJ01587004).
文摘Abstract: Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture. Recent advancements in high-throughput phenotyping (HTP) technologies and artificial intelligence (AI) have revolutionized the field, enabling rapid and accurate assessment of crop traits on a large scale. The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement. AI algorithms can analyze and interpret large datasets and extract meaningful patterns and correlations between phenotypic traits and genetic factors. These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection, thereby reducing the time and cost required for variety development. However, further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement. By leveraging AI algorithms, researchers can efficiently analyze phenotypic data, uncover complex patterns, and establish predictive models that enable precise trait selection and crop breeding. The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement, encompassing an in-depth analysis of recent advances and applications and highlighting the numerous benefits and challenges associated with HTP and AI.
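As a rough illustration of the kind of predictive model this abstract describes (mapping phenotypic/genetic features to a trait), the sketch below fits a ridge regression on synthetic marker data and checks its predictive ability on held-out plants. All data, dimensions, and the regularization strength are invented for the example; the review itself does not prescribe a specific model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 300 plants, 50 marker features (0/1/2 allele counts);
# the trait depends on a handful of markers plus noise.
X = rng.integers(0, 3, size=(300, 50)).astype(float)
true_effects = np.zeros(50)
true_effects[:5] = [1.5, -1.0, 0.8, 0.5, -0.7]
y = X @ true_effects + rng.normal(scale=0.5, size=300)

# Simple train/test split.
X_train, X_test = X[:240], X[240:]
y_train, y_test = y[:240], y[240:]

# Ridge regression in closed form: w = (X'X + alpha*I)^-1 X'y.
alpha = 1.0
w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(50),
                    X_train.T @ y_train)

# Predictive ability: correlation between predicted and observed
# trait values on the held-out plants.
pred = X_test @ w
r = np.corrcoef(pred, y_test)[0, 1]
print(f"held-out correlation: {r:.3f}")
```

In breeding terms, the held-out correlation is a stand-in for how reliably such a model could rank candidate lines before field trials.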
基金Funding: Supported in part by the National Natural Science Foundation of China (82072019), the Shenzhen Basic Research Program (JCYJ20210324130209023), the Shenzhen-Hong Kong-Macao S&T Program (Category C) (SGDX20201103095002019), the Mainland-Hong Kong Joint Funding Scheme (MHKJFS) (MHP/005/20), the Project of Strategic Importance Fund (P0035421) and the Projects of RISA (P0043001) from the Hong Kong Polytechnic University, the Natural Science Foundation of Jiangsu Province (BK20201441), the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research (SBGJ202103038, SBGJ202102056), the Henan Province Key R&D and Promotion Project (Science and Technology Research) (222102310015), the Natural Science Foundation of Henan Province (222300420575), and the Henan Province Science and Technology Research (222102310322).
文摘Abstract: Modern medicine relies on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
文摘Abstract: While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
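The GAADPSDNN system itself is not reproduced in the abstract; purely as a sketch of the genetic-algorithm feature-selection idea it builds on ("adaptive selection of active features"), the toy below evolves a bit mask over features to maximize a made-up fitness: total feature relevance minus a fixed cost per selected feature. The relevance scores, population size, and fitness function are all invented for the illustration.

```python
import random

random.seed(42)

# Invented per-feature "relevance" scores; selecting a feature costs 0.5,
# so only features with relevance above 0.5 are worth keeping.
relevance = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2, 0.95, 0.3]
COST = 0.5

def fitness(mask):
    return sum(r for r, m in zip(relevance, mask) if m) - COST * sum(mask)

def mutate(mask, rate=0.1):
    return [1 - m if random.random() < rate else m for m in mask]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Initial population; include the all-features mask as a baseline.
pop = [[1] * 8] + [[random.randint(0, 1) for _ in range(8)]
                   for _ in range(19)]

for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:4]                      # elitism keeps the best masks
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(16)]
    pop = elite + children

best = max(pop, key=fitness)
print(best, round(fitness(best), 2))
```

Thanks to elitism, the best mask found is never worse than the all-features baseline; in a real system the fitness would instead be a classifier's validation accuracy minus a compute penalty.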
文摘Abstract: The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and the data they generate to cyber security threats that need to be addressed. This work investigates hybrid cyber threats (HCTs), which are now operating on an entirely new level with the increasing adoption of the IIoT. It focuses on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework is proposed, in which L1 regularisation and Random Forest are used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model is employed to construct the correlation between IIoT components and different threats.
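The framework's details are not given in the abstract; purely as a sketch of how L1 regularisation can flag which threat features matter, the snippet below runs a minimal ISTA (iterative soft-thresholding) Lasso on synthetic data, where only the first two of five features influence the response. The data and penalty are invented, and a production pipeline would use an established library rather than this hand-rolled solver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic threat-feature matrix: only the first two of five
# features actually influence the response.
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=n)

def lasso_ista(X, y, lam=0.1, n_iter=500):
    """Minimise (1/2n)||Xw - y||^2 + lam*||w||_1 via ISTA."""
    n = X.shape[0]
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()  # 1/Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - step * grad
        # Soft-thresholding drives irrelevant coefficients toward zero.
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w

w = lasso_ista(X, y)
print(np.round(w, 3))  # uninformative features should end up near 0
```

The nonzero pattern of `w` is the feature ranking such a framework would hand to a downstream classifier like Random Forest.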
基金Funding: University-level Graduate Education Reform Project of Yangtze University (YJY202329).
文摘Abstract: Against the background of "artificial intelligence + X", the development of the landscape architecture industry is ushering in new opportunities, and the cultivation of professional talent needs to be updated to meet social demand. This paper analyzes the cultivation needs of landscape architecture graduate students in the context of the new era and identifies problems by examining the original professional graduate training mode. A new cultivation mode for graduate students in landscape architecture is proposed, including updating the target orientation of the discipline, optimizing the teaching system, building a "dual-teacher" tutor team, and improving integrated "industry-university-research-application" cultivation, so as to cultivate high-quality, well-rounded talents with disciplinary characteristics.
基金Funding: Supported by the Capital's Funds for Health Improvement and Research, No. 2022-2-2072 (to YG).
文摘Abstract: Artificial intelligence can be applied indirectly to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994-2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, the technology has some limitations, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
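CiteSpace and VOSviewer are interactive tools, but the keyword co-occurrence counting at the heart of the maps they draw is easy to sketch. The toy records below are invented stand-ins for indexed articles' keyword lists; the pair counts are the edge weights such a bibliometric map would visualise.

```python
from collections import Counter
from itertools import combinations

# Invented keyword lists standing in for indexed articles.
articles = [
    ["artificial intelligence", "peripheral nerve injury", "deep learning"],
    ["peripheral nerve injury", "rehabilitation", "machine learning"],
    ["artificial intelligence", "peripheral nerve injury", "rehabilitation"],
    ["deep learning", "brain-machine interface", "artificial intelligence"],
]

# Count how often each pair of keywords appears in the same article.
pairs = Counter()
for kws in articles:
    for a, b in combinations(sorted(set(kws)), 2):
        pairs[(a, b)] += 1

# The heaviest edges correspond to the "research hotspots".
for (a, b), n in pairs.most_common(3):
    print(f"{a} -- {b}: {n}")
```

Real tools add clustering and burst detection on top of exactly this kind of co-occurrence matrix.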
基金Funding: Supported by the National Natural Science Foundation of China (NSFC, Nos. 52176199 and U20A20310) and the Program of Shanghai Academic/Technology Research Leader (22XD1423800).
文摘Abstract: With the significant and widespread application of lithium-ion batteries, there is a growing demand for improved battery performance. The intricate degradation throughout the whole lifecycle profoundly impacts the safety, durability, and reliability of lithium-ion batteries. To ensure their long-term, safe, and efficient operation in various fields, there is a pressing need for enhanced battery intelligence that can withstand extreme events. This work reviews the current status of intelligent battery technology from three perspectives: intelligent response, intelligent sensing, and intelligent management. The intelligent response of battery materials forms the foundation for battery stability, the intelligent sensing of multi-dimensional signals is essential for battery management, and intelligent management ensures the long-term stable operation of lithium-ion batteries. The critical challenges encountered in the development of intelligent battery technology from each perspective are thoroughly analyzed, and potential solutions are proposed, aiming to facilitate the rapid development of intelligent battery technologies.
基金Funding: Supported by CONAHCYT (Consejo Nacional de Humanidades, Ciencias y Tecnologías).
文摘Abstract: The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, to ensure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions these algorithms make. Such models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates the use of sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The potential applications of the proposed XAI models in the smart healthcare environment are also discussed; they will help ensure trust and accountability in AI-based decisions, which is essential for achieving a safe and reliable smart healthcare environment.
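The abstract reports FDR, FOR, and DOR alongside the usual metrics; since these are less common, the helper below shows how all of them derive from a single confusion matrix. The example counts are invented and unrelated to the paper's dataset.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    fdr = fp / (fp + tp)          # false discovery rate = 1 - precision
    fomr = fn / (fn + tn)         # false omission rate (FOR)
    dor = (tp * tn) / (fp * fn)   # diagnostic odds ratio
    return {"precision": precision, "recall": recall, "f1": f1,
            "FDR": fdr, "FOR": fomr, "DOR": dor}

# Invented example: 90 true positives, 10 false positives,
# 5 false negatives, 95 true negatives.
m = diagnostic_metrics(tp=90, fp=10, fn=5, tn=95)
print({k: round(v, 4) for k, v in m.items()})
```

Note that DOR is an odds ratio, not a percentage, so values well above 1 (here 171) indicate strong discriminative power.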
基金Funding: Supported by the National Natural Science Foundation of China (No. 82171080) and the Nanjing Health Science and Technology Development Special Fund (No. YKK23264).
文摘Abstract: Owing to the rapid development of modern computer technologies, artificial intelligence (AI) has emerged as an essential instrument for intelligent analysis across a range of fields. AI has proven to be highly effective in ophthalmology, where it is frequently used for identifying, diagnosing, and typing retinal diseases. An increasing number of researchers have begun to comprehensively map patients' retinal diseases using AI, which has made individualized clinical prediction and treatment possible, including prognostic improvement, risk prediction, progression assessment, and interventional therapies for retinal diseases. Researchers have used a range of input data methods to increase the accuracy and dependability of the results, including tabular, textual, or image-based input data, and have also combined analyses of multiple types of input data. To give ophthalmologists access to precise, individualized, and high-quality treatment strategies that further optimize treatment outcomes, this review summarizes the latest findings in AI research related to the prediction and guidance of clinical diagnosis and treatment of retinal diseases.
文摘Abstract: Artificial intelligence (AI) is making significant strides in revolutionizing the detection of Barrett's esophagus (BE), a precursor to esophageal adenocarcinoma. In the research article by Tsai et al, researchers utilized endoscopic images to train an AI model, challenging the traditional distinction between endoscopic and histological BE. This approach yielded remarkable results, with the AI system achieving an accuracy of 94.37%, a sensitivity of 94.29%, and a specificity of 94.44%. The study's extensive dataset enhances the AI model's practicality, offering valuable support to endoscopists by minimizing unnecessary biopsies. However, questions about applicability to different endoscopic systems remain. The study underscores the potential of AI in BE detection while highlighting the need for further research to assess its adaptability to diverse clinical settings.