DURING our discussion at workshops for writing “What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence” [1], we had expected the next milestone for Artificial Intelligence (AI) would be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movies/theater technology that could be used for conducting new “Artificiofactual Experiments” [2] to replace conventional “Counterfactual Experiments” in scientific research and technical development for both natural and social studies [2]-[6]. Now we have OpenAI’s Sora, so soon, but this is not the final destination, which is actually still far away; it is just the beginning.
Funding: the National Natural Science Foundation of China (62271485, 61903363, U1811463, 62103411, 62203250); the Science and Technology Development Fund of Macao SAR (0093/2023/RIA2, 0050/2020/A1).

In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next-generation HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and the high-speed railway smart platform. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
Funding: the National Natural Science Foundation of China (62172033).

Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an “either/or” scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments and constraints for AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of achieving global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model that balances AI and physics is an achievable goal.
Funding: the National Natural Science Foundation of China (42141019, 42261144687); STEP (2019QZKK0102); the Korea Environmental Industry & Technology Institute (KEITI) through the “Project for developing an observation-based GHG emissions geospatial information map”, funded by the Korea Ministry of Environment (MOE) (RS-2023-00232066).

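To make the notion of a physical constraint concrete, the following sketch adds a soft conservation-law penalty to a standard regression loss, in the spirit of the constraints the authors call for. It is a minimal illustration assuming PyTorch, not the paper's method; the toy mass-conservation residual, the tensor names, and the weight lam are all assumptions.

import torch

def physics_constrained_loss(pred, target, dm_dt, flux_div, lam=0.1):
    # Data-fit term: ordinary mean-squared error against observations.
    mse = torch.mean((pred - target) ** 2)
    # Physics term: penalize violation of a toy conservation law,
    # d(mass)/dt + div(flux) = 0 (illustrative residual only).
    residual = dm_dt + flux_div
    return mse + lam * torch.mean(residual ** 2)

# Hypothetical tensors standing in for model output and diagnostics.
pred, target = torch.randn(32, 10), torch.randn(32, 10)
dm_dt, flux_div = torch.randn(32, 10), torch.randn(32, 10)
print(float(physics_constrained_loss(pred, target, dm_dt, flux_div)))

Raising lam trades data fit for physical consistency; diagnosing where that trade-off bites is exactly the kind of dynamic diagnostic the authors advocate.
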
THE tremendous impact of large models represented by ChatGPT [1]-[3] makes it necessary to consider the practical applications of such models [4]. However, for an artificial intelligence (AI) to truly evolve, it needs to possess a physical “body” to transition from the virtual world to the real world and evolve through interaction with real environments. In this context, “embodied intelligence” has sparked a new wave of research and technology, leading AI beyond the digital realm into a new paradigm that can actively act and perceive in a physical environment through tangible entities such as robots and automated devices [5].
Funding: the National Natural Science Foundation of China (62302047, 62203250); the Science and Technology Development Fund of Macao SAR (0093/2023/RIA2, 0050/2020/A1).

AUTOMATION has come a long way since the early days of mechanization, i.e., the process of working exclusively by hand or using animals to work with machinery. The rise of steam engines and water wheels represented the first generation of industry, which is now called Industry 1.0.
Citation: L. Vlacic, H. Huang, M. Dotoli, Y. Wang, P. Ioannou, L. Fan, X. Wang, R. Carli, C. Lv, L. Li, X. Na, Q.-L. Han, and F.-Y. Wang, “Automation 5.0: The key to systems intelligence and Industry 5.0,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 8, pp. 1723-1727, Aug. 2024.
Funding: the Hong Kong Polytechnic University (project P0038447); the Science and Technology Development Fund, Macao SAR (0093/2023/RIA2, 0145/2023/RIA3).

Humans can perceive our complex world through multi-sensory fusion. Under limited visual conditions, people can sense a variety of tactile signals to identify objects accurately and rapidly. However, replicating this unique capability in robots remains a significant challenge. Here, we present a new form of ultralight multifunctional tactile nano-layered carbon aerogel sensor that provides pressure, temperature, material recognition and 3D location capabilities, which is combined with multimodal supervised learning algorithms for object recognition. The sensor exhibits human-like pressure (0.04–100 kPa) and temperature (21.5–66.2°C) detection, millisecond response times (11 ms), a pressure sensitivity of 92.22 kPa^(−1) and triboelectric durability of over 6000 cycles. The devised algorithm has universality and can accommodate a range of application scenarios. The tactile system can identify common foods in a kitchen scene with 94.63% accuracy and explore the topographic and geomorphic features of a Mars scene with 100% accuracy. This sensing approach empowers robots with versatile tactile perception to advance future society toward heightened sensing, recognition and intelligence.
Funding: the National Natural Science Foundation of China (52072041); the Beijing Natural Science Foundation (JQ21007); the University of Chinese Academy of Sciences (Y8540XX2D2); the Robotics Rhino-Bird Focused Research Project (2020-01-002); the Tencent Robotics X Laboratory.

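The paper pairs the sensor with multimodal supervised learning for object recognition. As a hedged illustration of feature-level fusion, the sketch below concatenates hypothetical pressure, temperature, and triboelectric feature vectors and trains a generic classifier; the array shapes, the five object classes, the synthetic data, and the choice of Random Forest are assumptions, not the authors' pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature arrays: one row per touch event.
rng = np.random.default_rng(0)
pressure = rng.random((500, 8))      # e.g., pressure time-series features
temperature = rng.random((500, 4))   # e.g., temperature features
tribo = rng.random((500, 6))         # e.g., triboelectric features
X = np.hstack([pressure, temperature, tribo])   # early (feature-level) fusion
y = rng.integers(0, 5, size=500)     # 5 hypothetical object classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

With real sensor data the fused feature matrix would replace the random arrays; the fusion step itself is just the horizontal stack of per-modality features.
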
Background: The growth and use of Artificial Intelligence (AI) in the medical field is rapidly rising. AI is proving a practical tool for patient care in the healthcare industry. The objective of this review is to assess and analyze the use of AI in orthopedic practice, as well as its applications, limitations, and pitfalls. Methods: A review of all relevant databases, including EMBASE, the Cochrane Database of Systematic Reviews, MEDLINE, Science Citation Index, Scopus, and Web of Science, was conducted with the keywords AI, orthopedic surgery, applications, and drawbacks. All related articles on AI and orthopedic practice were reviewed. A total of 3210 articles were included in the review. Results: Data from 351 studies were analyzed. In orthopedic surgery, AI is being used for diagnostic procedures, radiological diagnosis, models of clinical care, and utilization of hospital and bed resources. AI has also taken a significant share in robot-assisted orthopedic surgery. Conclusions: AI has now become part of orthopedic practice and will further increase its stake in the healthcare industry. Nonetheless, clinicians should remain aware of AI’s serious limitations and pitfalls and consider the drawbacks and errors in its use.

Low Earth Orbit (LEO) satellites have become an important complement to terrestrial communication due to their lower orbital altitude and smaller propagation delay compared with Geostationary satellites. However, the LEO satellite communication system cannot meet user requirements when the satellite-terrestrial link is blocked by obstacles. To solve this problem, we introduce an Intelligent Reflecting Surface (IRS) to improve the achievable rate of terrestrial users in LEO satellite communication. We investigate a joint IRS scheduling, user scheduling, power and bandwidth allocation (JIRPB) optimization algorithm for improving LEO satellite system throughput. The optimization problem of joint user scheduling and resource allocation is formulated as a non-convex optimization problem. To cope with this, the non-convex problem is first divided into a resource allocation optimization sub-problem and a scheduling optimization sub-problem. Second, we repeatedly optimize the resource allocation sub-problem via the alternating direction method of multipliers (ADMM) and the scheduling sub-problem via the Lagrangian dual method. Third, we prove theoretically that the proposed ADMM-based resource allocation algorithm achieves sublinear convergence. Finally, we demonstrate that the proposed JIRPB optimization algorithm improves LEO satellite communication system throughput.
Funding: the National Key R&D Program of China (2020YFB1807900); the National Natural Science Foundation of China (NSFC) (61931005); the Beijing University of Posts and Telecommunications-China Mobile Research Institute Joint Innovation Center.

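The alternate-and-repeat structure of the JIRPB algorithm can be sketched as block-coordinate optimization: fix the schedule and solve for resources, then fix resources and re-solve the schedule. The sketch below shows only that skeleton; the closed-form allocation and greedy scheduler are stand-ins for the paper's ADMM and Lagrangian dual steps, and the channel gains and user counts are hypothetical.

import numpy as np

def solve_resource_allocation(s, H):
    # Stand-in for the ADMM step: allocate power in proportion to the
    # scheduled users' channel gains (not the paper's update).
    w = H * s
    return w / w.sum() if w.sum() > 0 else np.zeros_like(w)

def solve_scheduling(p, H, k):
    # Stand-in for the Lagrangian-dual step: greedily schedule the k
    # users with the best rate under the current power allocation.
    rate = np.log2(1 + H * p)
    s = np.zeros_like(p)
    s[np.argsort(rate)[-k:]] = 1
    return s

H = np.array([0.9, 0.4, 1.5, 0.7, 1.1])   # hypothetical channel gains
s = np.ones_like(H)                        # start: all users scheduled
for _ in range(20):                        # alternate between sub-problems
    p = solve_resource_allocation(s, H)
    s = solve_scheduling(p, H, k=3)
print("schedule:", s, "power:", p)

Splitting a joint non-convex problem this way does not guarantee a global optimum, which is why the paper proves convergence properties for its specific ADMM sub-solver.
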
Introduction: Ultrafast recent developments in artificial intelligence (AI) have multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel, commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. To score the reliability of the responses received from the image-recognition bot, two corresponding scales were developed, ranging from 0 to 5. The set of images was divided into two groups, unlabeled (Group A) and labeled (Group B), and further according to the type of surgical procedure and image resolution. Results: The AI was able to correctly recognize the context of surgery-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled it scored 2.905/5 (58.1%). After all successful interpretations, the phases of the procedure were commented on in detail. With scores of 4-5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications, and outcome rates of the operation in question. Conclusion: Interaction between surgeon and chatbot appears to be an interesting front end for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts with commercially available software. Further development of medically oriented AI software and clinical-world awareness are expected to bring fruitful information on the topic in the years to come.

This editorial provides commentary on an article titled “Potential and limitations of ChatGPT and generative artificial intelligence (AI) in medical safety education” recently published in the World Journal of Clinical Cases. AI has enormous potential for various applications in the field of Kawasaki disease (KD). One is machine learning (ML) to assist in the diagnosis of KD, and clinical prediction models have been constructed worldwide using ML; the second is using a gene signal calculation toolbox to identify KD, which can be used to monitor key clinical features and laboratory parameters of disease severity; and the third is using deep learning (DL) to assist in cardiac ultrasound detection. The performance of the DL algorithm is similar to that of experienced cardiac experts in detecting coronary artery lesions, thus promoting the diagnosis of KD. To effectively utilize AI in the diagnosis and treatment of KD, it is crucial to improve the accuracy of AI decision-making using more medical data, while addressing issues related to the protection of patients’ personal information and responsibility for AI decision-making. AI progress is expected to provide patients with accurate and effective medical services that will positively impact the diagnosis and treatment of KD in the future.

Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture. Recent advancements in high-throughput phenotyping (HTP) technologies and artificial intelligence (AI) have revolutionized the field, enabling rapid and accurate assessment of crop traits on a large scale. The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement. AI algorithms can analyze and interpret large datasets and extract meaningful patterns and correlations between phenotypic traits and genetic factors. These technologies have the potential to transform plant breeding programs by providing breeders with efficient and accurate tools for trait selection, thereby reducing the time and cost required for variety development. However, further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement. By leveraging AI algorithms, researchers can efficiently analyze phenotypic data, uncover complex patterns, and establish predictive models that enable precise trait selection and crop breeding. The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement, encompassing an in-depth analysis of recent advances and applications and highlighting the associated benefits and challenges.
Funding: the Standardization and Integration of Resources Information for Seed-cluster in Hub-Spoke Material Bank Program, Rural Development Administration, Republic of Korea (PJ01587004).

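As a minimal, hypothetical example of the predictive-modeling step described above, the sketch below fits a ridge regression from image-derived plot features to a yield-like trait. The feature names, sample sizes, and synthetic data are assumptions for illustration only; real HTP pipelines are far richer.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical HTP dataset: one row of image-derived features per plot
# (e.g., canopy cover, NDVI, plant height), with a yield-like target.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))
trait = X @ rng.normal(size=30) + rng.normal(scale=0.3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, trait, random_state=0)
model = RidgeCV().fit(X_tr, y_tr)          # cross-validated ridge penalty
print("R^2 on held-out plots:", round(model.score(X_te, y_te), 3))

A breeder would rank candidate lines by such predictions to shortlist crosses before field validation, which is where the claimed time and cost savings come from.
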
Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients’ anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Funding: the National Natural Science Foundation of China (82072019); the Shenzhen Basic Research Program (JCYJ20210324130209023); the Shenzhen-Hong Kong-Macao S&T Program (Category C) (SGDX20201103095002019); the Mainland-Hong Kong Joint Funding Scheme (MHKJFS) (MHP/005/20); the Project of Strategic Importance Fund (P0035421) and the Projects of RISA (P0043001) from the Hong Kong Polytechnic University; the Natural Science Foundation of Jiangsu Province (BK20201441); the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research (SBGJ202103038, SBGJ202102056); the Henan Province Key R&D and Promotion Project (Science and Technology Research) (222102310015); the Natural Science Foundation of Henan Province (222300420575); the Henan Province Science and Technology Research (222102310322).

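A hedged sketch of two of the modeling concerns named above: sparse (L1) feature selection over a wide radiomics matrix, and a simple correction for class imbalance. The synthetic data, penalty strength, and choice of logistic models are illustrative assumptions, not a recommended radiomics pipeline.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a radiomics matrix: many features, few samples,
# imbalanced labels (a common situation in radiomics studies).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 400))
y = (rng.random(120) < 0.2).astype(int)   # ~20% positive class

model = Pipeline([
    ("scale", StandardScaler()),
    # L1-penalized selector: keeps a sparse subset of radiomic features.
    ("select", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    # class_weight="balanced" is one simple answer to label imbalance.
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),
]).fit(X, y)
print("features kept:", model["select"].get_support().sum())

The stability and reproducibility questions the review raises would then be probed by re-running such a pipeline across resamples and scanners and checking how much the selected feature set changes.
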
This study investigates resilient platoon control for constrained intelligent and connected vehicles (ICVs) against F-local Byzantine attacks. We introduce a resilient distributed model-predictive platooning control framework for such ICVs. This framework seamlessly integrates the predesigned optimal control with distributed model predictive control (DMPC) optimization and introduces a unique distributed attack detector to ensure the reliability of the information transmitted among vehicles. Notably, our strategy uses previously broadcast information and a specialized convex set, termed the “resilience set”, to identify unreliable data. This approach significantly eases graph robustness prerequisites, requiring only an (F+1)-robust graph, in contrast to established mean-sequence-reduced algorithms, which require at least a (2F+1)-robust graph. Additionally, we introduce a verification algorithm to restore trust in vehicles under minor attacks, further reducing the required robustness of the communication network. Our analysis demonstrates the recursive feasibility of the DMPC optimization. Furthermore, the proposed method achieves exceptional control performance by minimizing the discrepancies between the DMPC control inputs and the predesigned platoon control inputs, while ensuring constraint compliance and cybersecurity. Simulation results verify the effectiveness of our theoretical findings.
Funding: the Natural Sciences and Engineering Research Council of Canada (NSERC).

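The role of the “resilience set” can be illustrated with a toy membership test: a neighbor's newly broadcast trajectory is trusted only if it stays close to what that neighbor previously broadcast. The box-shaped set, the threshold, and the numbers below are assumptions; the paper's convex set and detector are more elaborate.

import numpy as np

def in_resilience_set(candidate, previous, max_step):
    # Toy membership test: accept a neighbor's newly broadcast predicted
    # trajectory only if it deviates from the previously broadcast one by
    # at most max_step at every step (a box-shaped stand-in for the
    # paper's convex resilience set).
    return bool(np.all(np.abs(candidate - previous) <= max_step))

prev = np.array([0.0, 1.0, 2.0, 3.0])            # previously broadcast positions
honest = prev + 0.05                              # small, plausible update
faulty = prev + np.array([0.0, 0.0, 5.0, 5.0])    # Byzantine jump
print(in_resilience_set(honest, prev, max_step=0.2))   # True  -> trust
print(in_resilience_set(faulty, prev, max_step=0.2))   # False -> discard

Because each vehicle filters implausible broadcasts locally, the network as a whole needs less redundancy, which is the intuition behind the weaker (F+1)-robust graph requirement.
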
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that such techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as “AI2AI” is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved by alternative systems based on diverse optimization techniques applied to the same dataset. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.

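To illustrate the genetic-algorithm feature selection wrapped around a neural classifier that the GAADPSDNN name suggests, the sketch below evolves binary feature masks scored by the cross-validated accuracy of a small MLP. The population size, crossover and mutation rates, synthetic data, and tiny network are all assumptions; this is not the authors' system.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 3] + X[:, 7] > 0).astype(int)   # only features 3 and 7 matter

def fitness(mask):
    # Score a candidate feature subset by classifier accuracy.
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(10, X.shape[1]))   # random initial masks
for gen in range(5):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-4:]]                 # selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 4, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])         # crossover
        flip = rng.random(X.shape[1]) < 0.05               # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)
best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))

Shrinking the active feature set in this way is also what reduces the inference-time computational load on constrained IoT devices, as the abstract notes.
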
AIM: To develop an artificial intelligence (AI) diagnosis model based on a deep learning (DL) algorithm to diagnose different types of retinal vein occlusion (RVO) by recognizing color fundus photographs (CFPs). METHODS: In total, 914 CFPs of healthy people and patients with RVO were collected as experimental data sets and used to train, verify, and test the RVO diagnostic model. All the images were divided into four categories [normal, central retinal vein occlusion (CRVO), branch retinal vein occlusion (BRVO), and macular retinal vein occlusion (MRVO)] by three fundus disease experts. Swin Transformer was used to build the RVO diagnosis model, and diagnosis experiments on the different types of RVO were conducted. The model’s performance was compared to that of the experts. RESULTS: The accuracy of the model in the diagnosis of normal, CRVO, BRVO, and MRVO reached 1.000, 0.978, 0.957, and 0.978; the specificity reached 1.000, 0.986, 0.982, and 0.976; the sensitivity reached 1.000, 0.955, 0.917, and 1.000; and the F1-score reached 1.000, 0.955, 0.943, and 0.887, respectively. In addition, the areas under the curve for normal, CRVO, BRVO, and MRVO diagnosed by the model were 1.000, 0.900, 0.959, and 0.970, respectively. The diagnostic results were highly consistent with those of the fundus disease experts, and the diagnostic performance was superior. CONCLUSION: The diagnostic model developed in this study can accurately diagnose different types of RVO, effectively relieve the workload of clinicians, and support the subsequent clinical diagnosis and treatment of RVO patients.
Funding: the Shenzhen Fund for Guangdong Provincial High-level Clinical Key Specialties (SZGSP014); the Sanming Project of Medicine in Shenzhen (SZSM202011015); the Shenzhen Science and Technology Planning Project (KCXFZ20211020163813019).

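The abstract states only that Swin Transformer was used to build the four-class model; the sketch below shows one plausible way to set that up with the timm library. The specific variant, optimizer, learning rate, and the random stand-in mini-batch are assumptions, and real training would iterate over labeled CFPs with augmentation and validation.

import torch
import timm

# A Swin Transformer re-headed for 4 fundus classes
# (normal / CRVO / BRVO / MRVO); pretrained=True downloads ImageNet weights.
model = timm.create_model("swin_base_patch4_window7_224",
                          pretrained=True, num_classes=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)     # stand-in for a CFP mini-batch
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
logits = model(images)                   # one illustrative training step
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))

Per-class accuracy, sensitivity, specificity, and F1 as reported in the abstract would then be computed from the model's predictions on a held-out test split.
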
The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and the data they generate to cyber security threats that need to be addressed. This work investigates hybrid cyber threats (HCTs), which operate on an entirely new level in the increasingly adopted IIoT. It focuses on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework is proposed, in which L1 regularisation and Random Forest are used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model is employed to construct the correlation between IIoT components and different threats.

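A minimal sketch of the two techniques named above, on synthetic data: an L1-regularised linear model prunes features whose coefficients shrink to zero, and a Random Forest then ranks the survivors by importance. The feature count, the alpha value, and the fabricated labels are assumptions for illustration, not the paper's framework.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 12))                    # hypothetical IIoT features
y = (X[:, 0] - 2 * X[:, 5] + rng.normal(scale=0.5, size=400) > 0).astype(int)

# L1 regularisation: coefficients driven to zero mark redundant features
# (Lasso on 0/1 labels is used here purely as a screening step).
lasso = Lasso(alpha=0.05).fit(X, y)
kept = np.flatnonzero(lasso.coef_)

# Random Forest: rank the surviving features by importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, kept], y)
for idx, imp in sorted(zip(kept, rf.feature_importances_),
                       key=lambda t: -t[1]):
    print(f"feature {idx}: importance {imp:.3f}")

In the paper's setting, comparing such importance rankings between individual threats and blended HCTs is what exposes which features a hybrid attack shares with its components.
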
Under the background of “artificial intelligence + X”, the development of the landscape architecture industry is ushering in new opportunities, and the training of professional talent needs to be updated to meet social demand. This paper analyzes the cultivation demands for landscape architecture graduate students in the context of the new era and identifies problems by examining the original professional graduate training mode. A new cultivation mode for graduate students in landscape architecture is proposed, including updating the target orientation of the discipline, optimizing the teaching system, building a “dual-teacher” tutor team, and improving integrated “industry-university-research-utilization” cultivation, so as to cultivate high-quality compound talents with disciplinary characteristics.
Funding: University-level Graduate Education Reform Project of Yangtze University (YJY202329).

Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994 to 2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, the technology has some limitations, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Funding: the Capital’s Funds for Health Improvement and Research (2022-2-2072) (to YG).

With the significant and widespread application of lithium-ion batteries, there is a growing demand for improved battery performance. The intricate degradation throughout the whole lifecycle profoundly impacts the safety, durability, and reliability of lithium-ion batteries. To ensure their long-term, safe, and efficient operation in various fields, there is a pressing need for enhanced battery intelligence that can withstand extreme events. This work reviews the current status of intelligent battery technology from three perspectives: intelligent response, intelligent sensing, and intelligent management. The intelligent response of battery materials forms the foundation of battery stability, the intelligent sensing of multi-dimensional signals is essential for battery management, and intelligent management ensures the long-term stable operation of lithium-ion batteries. The critical challenges encountered in the development of intelligent battery technology are thoroughly analyzed from each perspective, and potential solutions are proposed, aiming to facilitate the rapid development of intelligent battery technologies.
Funding: the National Natural Science Foundation of China (NSFC) (52176199, U20A20310); the Program of Shanghai Academic/Technology Research Leader (22XD1423800).

With the advancement of Artificial Intelligence (AI) technology, traditional industrial systems are undergoing an intelligent transformation, bringing together advanced computing, communication, and control technologies. Machine Learning (ML)-based intelligent modelling has become a new paradigm for solving problems in the industrial domain [1-3]. With numerous applications and diverse data types in the industrial domain, algorithmic and data-driven ML techniques can intelligently learn potential correlations between complex data and make efficient decisions while reducing human intervention. However, in real-world application scenarios, existing algorithms may have a variety of limitations, such as small data volumes, small detection targets, low efficiency, and algorithmic gaps in specific application domains [4]. Therefore, many new algorithms and strategies have been proposed to address the challenges in industrial applications [5-8].
Funding: the Beijing Natural Science Foundation (L211020, M21032); the National Natural Science Foundation of China (U1836106, 62271045, U2133218).