Journal Articles
167,405 articles found
1. A review of artificial intelligence applications in high-speed railway systems (Cited by 2)
Authors: Xuehan Li, Minghao Zhu, Boyang Zhang, Xiaoxuan Wang, Zha Liu, Liang Han. High-Speed Railway, 2024, No. 1, pp. 11-16.
In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next-gen HSR, fueled by emerging services like video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given regarding the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and the high-speed railway smart platform. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
Keywords: High-speed railway; Artificial intelligence; Intelligent distribution; Intelligent control; Intelligent scheduling
2. Automation 5.0: The Key to Systems Intelligence and Industry 5.0 (Cited by 1)
Authors: Ljubo Vlacic, Hailong Huang, Mariagrazia Dotoli, Yutong Wang, Petros A. Ioannou, Lili Fan, Xingxia Wang, Raffaele Carli, Chen Lv, Lingxi Li, Xiaoxiang Na, Qing-Long Han, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 8, pp. 1723-1727.
AUTOMATION has come a long way since the early days of mechanization, i.e., the process of working exclusively by hand or using animals to work with machinery. The rise of steam engines and water wheels represented the first generation of industry, which is now called Industry 1.0. Citation: L. Vlacic, H. Huang, M. Dotoli, Y. Wang, P. Ioannou, L. Fan, X. Wang, R. Carli, C. Lv, L. Li, X. Na, Q.-L. Han, and F.-Y. Wang, "Automation 5.0: The key to systems intelligence and Industry 5.0," IEEE/CAA J. Autom. Sinica, vol. 11, no. 8, pp. 1723-1727, Aug. 2024.
Keywords: Automation; Machinery; Intelligence
3. Systematic bibliometric and visualized analysis of research hotspots and trends on the application of artificial intelligence in glaucoma from 2013 to 2022
Authors: Chun Liu, Lu-Yao Wang, Ke-Yu Zhu, Chun-Meng Liu, Jun-Guo Duan. International Journal of Ophthalmology (English edition) (SCIE, CAS), 2024, No. 9, pp. 1731-1742.
AIM: To conduct a bibliometric analysis of research on artificial intelligence (AI) in the field of glaucoma to gain a comprehensive understanding of the current state of research and identify potential new directions for future studies. METHODS: Relevant articles on the application of AI in the field of glaucoma from the Web of Science Core Collection were retrieved, covering the period from January 1, 2013, to December 31, 2022. In order to assess the contributions and co-occurrence relationships among different countries/regions, institutions, authors, and journals, CiteSpace and VOSviewer software were employed, and the research hotspots and future trends within the field were identified. RESULTS: A total of 750 English articles published between 2013 and 2022 were collected, and the number of publications exhibited an overall increasing trend. The majority of the articles were from China, followed by the United States and India. National University of Singapore, Chinese Academy of Sciences, and Sun Yat-sen University made significant contributions to the published works. Weinreb RN and Fu HZ ranked first among authors and cited authors. American Journal of Ophthalmology is the most impactful academic journal in the field of AI application in glaucoma. The disciplinary scope of this field includes ophthalmology, computer science, mathematics, molecular biology, genetics, and other related disciplines. The clustering and identification of keyword nodes in the co-occurrence network reveal the evolving landscape of AI application in the field of glaucoma. Initially, the hot topics in this field were primarily "segmentation", "classification" and "diagnosis". However, in recent years, the focus has shifted to "deep learning", "convolutional neural network" and "artificial intelligence". CONCLUSION: With the rapid development of AI technology, scholars have shown increasing interest in its application in the field of glaucoma. Moreover, the application of AI in assisting treatment and predicting prognosis in glaucoma may become a future research hotspot. However, the reliability and interpretability of AI data remain pressing issues that require resolution.
Keywords: Glaucoma; Artificial intelligence; Bibliometrics
4. Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems
Authors: Rabia Abid, Muhammad Rizwan, Abdulatif Alabdulatif, Abdullah Alnajim, Meznah Alamro, Mourade Azrour. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 3413-3429.
Explainable Artificial Intelligence (XAI) has an advanced feature to enhance decision-making and improve rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle a large amount of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate obtained with 5 epochs, a batch size of 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in an e-healthcare system.
Keywords: Artificial intelligence; Data privacy; Federated machine learning; Healthcare system; Security
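The federated averaging step mentioned in this abstract can be illustrated with a minimal sketch. The model shapes, client sample counts, and weighting below are illustrative assumptions rather than the authors' implementation; only the idea of size-weighted parameter averaging across clients is taken from the abstract.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation).

    client_weights: one list of numpy arrays (layer parameters) per client.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    averaged = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(w[layer_idx] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
        averaged.append(layer)
    return averaged

# Illustrative setup mirroring the abstract's configuration: 5 clients, each of which
# would train locally (e.g., batch size 16) before sending its updated weights.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(5)]
sizes = [120, 80, 100, 150, 90]            # assumed local dataset sizes
global_weights = federated_average(clients, sizes)
print([w.shape for w in global_weights])
```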
5. Artificial intelligence for the detection of glaucoma with SD-OCT images: a systematic review and Meta-analysis
Authors: Nan-Nan Shi, Jing Li, Guang-Hui Liu, Ming-Fang Cao. International Journal of Ophthalmology (English edition) (SCIE, CAS), 2024, No. 3, pp. 408-419.
●AIM: To quantify the performance of artificial intelligence (AI) in detecting glaucoma with spectral-domain optical coherence tomography (SD-OCT) images. ●METHODS: Electronic databases including PubMed, Embase, Scopus, ScienceDirect, ProQuest, and the Cochrane Library were searched before May 31, 2023 for studies that adopted AI for glaucoma detection with SD-OCT images. All pieces of the literature were screened and extracted by two investigators. Meta-analysis, Meta-regression, subgroup analysis, and publication bias assessment were conducted with Stata 16.0. The risk of bias assessment was performed in RevMan 5.4 using the QUADAS-2 tool. ●RESULTS: Twenty studies and 51 models were selected for systematic review and Meta-analysis. The pooled sensitivity and specificity were 0.91 (95%CI: 0.86–0.94, I2=94.67%) and 0.90 (95%CI: 0.87–0.92, I2=89.24%). The pooled positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 8.79 (95%CI: 6.93–11.15, I2=89.31%) and 0.11 (95%CI: 0.07–0.16, I2=95.25%). The pooled diagnostic odds ratio (DOR) and area under the curve (AUC) were 83.58 (95%CI: 47.15–148.15, I2=100%) and 0.95 (95%CI: 0.93–0.97). There was no threshold effect (Spearman correlation coefficient=0.22, P>0.05). ●CONCLUSION: AI shows high accuracy for the detection of glaucoma with SD-OCT images. The application of AI-based algorithms, used together with a "doctor + artificial intelligence" workflow, can improve the diagnosis of glaucoma.
Keywords: Artificial intelligence; Spectral-domain optical coherence tomography; Glaucoma; Meta-analysis
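For readers unfamiliar with the pooled metrics quoted above, the likelihood ratios and diagnostic odds ratio follow directly from sensitivity and specificity. The short sketch below recomputes them from the pooled values; small differences from the reported PLR, NLR, and DOR are expected because each quantity is pooled separately in the meta-analysis.

```python
# Relationships among the diagnostic accuracy metrics reported in the abstract.
sensitivity = 0.91
specificity = 0.90

plr = sensitivity / (1 - specificity)   # positive likelihood ratio ~ 9.1 (pooled: 8.79)
nlr = (1 - sensitivity) / specificity   # negative likelihood ratio ~ 0.10 (pooled: 0.11)
dor = plr / nlr                         # diagnostic odds ratio ~ 91 (pooled: 83.58)
print(round(plr, 2), round(nlr, 2), round(dor, 1))
```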
6. Battlefield target intelligence system architecture modeling and system optimization
Authors: LI Wei, WANG Yue, JIA Lijuan, PENG Senran, HE Ruixi. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, No. 5, pp. 1190-1210.
To address the current problems of poor generality, poor real-time performance, and imperfect information transmission in battlefield target intelligence systems, this paper studies the battlefield target intelligence system from the top-level perspective of multi-service joint warfare. First, an overall planning and analysis method for architecture modeling is proposed with the idea of a bionic analogy for battlefield target intelligence system architecture modeling, which reduces the difficulty of the planning and design process. The method introduces the Department of Defense Architecture Framework (DoDAF) modeling method, the multi-living agent (MLA) theory modeling method, and other combinations for planning and modeling. A set of rapid planning methods that can be applied to model the architecture of various types of complex systems is formed. Further, the liveness analysis of the battlefield target intelligence system is carried out, and the problems of the existing system are presented from several aspects. A technical prediction of development and construction is given, which provides directional ideas for the subsequent research and development of the battlefield target intelligence system. In the end, the proposed architecture model of the battlefield target intelligence system is simulated and verified by applying the colored Petri nets (CPN) simulation software. The analysis demonstrates the reasonable integrity of its logic.
Keywords: Battlefield target intelligence system; Architecture modeling; Bionic design; System optimization; Simulation verification
7. Advanced Optimized Anomaly Detection System for IoT Cyberattacks Using Artificial Intelligence
Authors: Ali Hamid Farea, Omar H. Alhazmi, Kerem Kucuk. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1525-1545.
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system that can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier with 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
Keywords: Internet of Things; Security; Anomaly detection and prevention system; Artificial intelligence; Optimization techniques
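A rough sketch of genetic-algorithm feature selection wrapped around a classifier, in the spirit of the "adaptive selection of active features" mentioned above. The synthetic data, the Random Forest fitness proxy, and the GA parameters are assumptions for illustration only; they do not reproduce the authors' deep neural network system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    # Fitness of a binary feature mask = cross-validated accuracy on the selected columns.
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1]))            # random initial feature masks
for generation in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]                  # keep the best half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])          # single-point crossover
        flip = rng.random(X.shape[1]) < 0.05                # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```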
8. Artificial intelligence powered glucose monitoring and controlling system: Pumping module
Authors: Sravani Medanki, Nikhil Dommati, Hema Harshitha Bodapati, Venkata Naga Sai Kowsik Katru, Gollapalli Moses Abhishek, Komaraju Nanda Sai, Donepudi Dhanya, Yalamanchili Jasti Sateesh, Pratap Turimerla. World Journal of Experimental Medicine, 2024, No. 1, pp. 100-112.
BACKGROUND: Diabetes, a globally escalating health concern, necessitates innovative solutions for efficient detection and management. Blood glucose control is an essential aspect of managing diabetes and finding the most effective ways to control it. The latest findings suggest that a basal insulin administration rate and a single, high-concentration injection before a meal may not be sufficient to maintain healthy blood glucose levels. While the basal insulin rate treatment can stabilize blood glucose levels over the long term, it may not be enough to bring the levels below the post-meal limit after 60 min. The short-term impacts of meals can be greatly reduced by high-concentration injections, which can help stabilize blood glucose levels. Unfortunately, they cannot provide long-term stability to satisfy the post-meal or pre-meal restrictions. However, proportional-integral-derivative (PID) control with a basal dose maintains the blood glucose levels within the range for a longer period. AIM: To develop a closed-loop electronic system to pump required insulin into the patient's body automatically in synchronization with glucose sensor readings. METHODS: The proposed system integrates a glucose sensor, decision unit, and pumping module to specifically address the pumping of insulin and enhance system effectiveness. Serving as the intelligence hub, the decision unit analyzes data from the glucose sensor to determine the optimal insulin dosage, guided by a pre-existing glucose and insulin level table. The artificial intelligence detection block processes this information, providing decision instructions to the pumping module. Equipped with communication antennas, the glucose sensor and micropump operate in a feedback loop, creating a closed-loop system that eliminates the need for manual intervention. RESULTS: The incorporation of a PID controller to assess and regulate blood glucose and insulin levels in individuals with diabetes introduces a sophisticated and dynamic element to diabetes management. The simulation not only allows visualization of how the body responds to different inputs but also offers a valuable tool for predicting and testing the effects of various interventions over time. The PID controller's role in adjusting insulin dosage based on the discrepancy between desired setpoints and actual measurements showcases a proactive strategy for maintaining blood glucose levels within a healthy range. This dynamic feedback loop not only delays the onset of steady-state conditions but also effectively counteracts post-meal spikes in blood glucose. CONCLUSION: The WiFi-controlled voltage controller and the PID controller simulation collectively underscore the ongoing efforts to enhance efficiency, safety, and personalized care within the realm of diabetes management. These technological advancements not only contribute to the optimization of insulin delivery systems but also have the potential to reshape our understanding of glucose and insulin dynamics, fostering a new era of precision medicine in the treatment of diabetes.
Keywords: Diabetes; Hyperglycemia; Insulin; Micropump; Closed loop systems; Artificial intelligence automation
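The closed-loop behaviour described in this abstract can be sketched with a simple discrete PID loop. The one-compartment glucose dynamics, gains, and units below are rough illustrative assumptions, not the paper's patient model or controller tuning; only the PID structure (error, integral, derivative driving the insulin rate) reflects the abstract.

```python
# Minimal discrete PID loop driving insulin delivery toward a glucose set point.
def simulate(setpoint=110.0, minutes=300, kp=0.05, ki=0.0005, kd=0.1):
    glucose, insulin_effect = 180.0, 0.0          # mg/dL after a meal (assumed start)
    integral, prev_error = 0.0, 0.0
    trace = []
    for _ in range(minutes):                      # one control step per minute
        error = glucose - setpoint
        integral += error
        derivative = error - prev_error
        prev_error = error
        insulin_rate = max(0.0, kp * error + ki * integral + kd * derivative)
        # Toy dynamics: delivered insulin acts with a lag; the liver drifts glucose upward.
        insulin_effect += 0.1 * (insulin_rate - insulin_effect)
        glucose += -1.5 * insulin_effect + 0.05 * (140.0 - glucose)
        trace.append(glucose)
    return trace

trace = simulate()
print(f"glucose after 5 h: {trace[-1]:.1f} mg/dL")
```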
9. World Industrial Revolutions and the Development of Artificial Intelligence System
Author: Khatira Guliyeva. Chinese Business Review, 2024, No. 1, pp. 47-51.
The objective-scientific conclusions obtained from research conducted in various fields of science prove that era and worldview are in unity and are phenomena that determine one another, and that era and worldview are the most important phenomena in the understanding of geniuses, historical events, including personalities who have left a mark on the history of politics, and every individual as a whole. It is appropriate to briefly consider the problem in the context of human and personality factors. It is known that man has tried to understand natural phenomena since the beginning of time. Contact with the material world naturally affects his consciousness and even his subconscious as he solves problems that are important or useful for human life. During this understanding, the worldview changes and is formed. Thus, depending on the material and moral development of all spheres of life and the content and essence of progress events, as civilizations replaced each other in different periods, the event of periodization took place and became a system. If we take Europe, the people of the Ice Age of 300,000 years ago, who engaged in hunting to solve their hunger needs, in other words, the age of dinosaurs, spread to many parts of the world from Africa, where they lived, in order to survive and meet more of their daily needs. The extensive integration of agricultural Ice Age people into the Earth included farming, fishing, animal husbandry, hunting, as well as handicrafts, etc., and has led to the revolutionary development of these fields. As economic activities led these first inhabitants of the planet from caves to less comfortable shelters, then to good houses, then to palaces, labor activities in various occupations, including crafts, developed rapidly. Thus, those of the era who differed from the crowd (later this class will be called personalities, geniuses... -Kh.G.) began to appear. If we approach the issue from the point of view of history, we witness that the worldview determines development in different periods. This idea can be expressed in such a way that each period can be considered to have developed or experienced a crisis according to the level of its worldview. In this direction of our thoughts, the question arises: So, what is the phenomenon of worldview of this era, the XXI century? Based on the general content of current events, characterized as the globalization stage of the modern world, we can say that the outlook of the historical stage we live in is based on the achievements of the last stage of the industrial revolution. In this article, by analyzing the history of the artificial intelligence system during the world industrial revolutions, we study both the concept of progress of the industrial revolutions and the progressive, and at the same time regressive, development of the artificial intelligence system.
Keywords: World industrial revolutions; Artificial intelligence; Development
10. Comparative evaluation of artificial intelligence systems' accuracy in providing medical drug dosages: A methodological study
Authors: Swaminathan Ramasubramanian, Sangeetha Balaji, Tejashri Kannan, Naveen Jeyaraman, Shilpa Sharma, Filippo Migliorini, Suhasini Balasubramaniam, Madhan Jeyaraman. World Journal of Methodology, 2024, No. 4, pp. 121-130.
BACKGROUND: Medication errors, especially in dosage calculation, pose risks in healthcare. Artificial intelligence (AI) systems like ChatGPT and Google Bard may help reduce errors, but their accuracy in providing medication information remains to be evaluated. AIM: To evaluate the accuracy of AI systems (ChatGPT 3.5, ChatGPT 4, Google Bard) in providing drug dosage information per Harrison's Principles of Internal Medicine. METHODS: A set of natural language queries mimicking real-world medical dosage inquiries was presented to the AI systems. Responses were analyzed using a 3-point Likert scale. The analysis, conducted with Python and its libraries, focused on basic statistics, overall system accuracy, and disease-specific and organ system accuracies. RESULTS: ChatGPT 4 outperformed the other systems, showing the highest rate of correct responses (83.77%) and the best overall weighted accuracy (0.6775). Disease-specific accuracy varied notably across systems, with some diseases being accurately recognized, while others demonstrated significant discrepancies. Organ system accuracy also showed variable results, underscoring system-specific strengths and weaknesses. CONCLUSION: ChatGPT 4 demonstrates superior reliability in medical dosage information, yet variations across diseases emphasize the need for ongoing improvements. These results highlight AI's potential in aiding healthcare professionals, urging continuous development for dependable accuracy in critical medical situations.
Keywords: Dosage calculation; Artificial intelligence; ChatGPT; Drug dosage; Healthcare; Large language models
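As a rough illustration of the scoring approach described above (a 3-point Likert grading summarized as a correct-response rate and a weighted accuracy), the sketch below uses an assumed weight mapping of 1.0/0.5/0.0 and made-up response counts; the actual rubric and data are the study's own.

```python
from collections import Counter

# Assumed illustrative ratings for 100 responses graded on a 3-point Likert scale.
ratings = ["correct"] * 80 + ["partially correct"] * 12 + ["incorrect"] * 8
weights = {"correct": 1.0, "partially correct": 0.5, "incorrect": 0.0}

counts = Counter(ratings)
correct_rate = counts["correct"] / len(ratings)
weighted_accuracy = sum(weights[r] for r in ratings) / len(ratings)
print(f"correct: {correct_rate:.2%}, weighted accuracy: {weighted_accuracy:.4f}")
```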
11. Artificial intelligence for characterization of diminutive colorectal polyps: A feasibility study comparing two computer-aided diagnosis systems
Authors: Quirine Eunice Wennie van der Zander, Ramon M Schreuder, Ayla Thijssen, Carolus H J Kusters, Nikoo Dehghani, Thom Scheeve, Bjorn Winkens, Mirjam C M van der Ende-van Loon, Peter H N de With, Fons van der Sommen, Ad A M Masclee, Erik J Schoon. Artificial Intelligence in Gastrointestinal Endoscopy, 2024, No. 1, pp. 11-22.
BACKGROUND: Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM: To evaluate the feasibility of the real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps and to compare its performance with CAD EYE™ (Fujifilm, Tokyo, Japan). CADx influence on the optical diagnosis of an expert endoscopist was also investigated. METHODS: AI4CRP was developed in-house and CAD EYE was proprietary software provided by Fujifilm. Both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, and histopathology was used as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence characterization value (range 0.0-1.0). A predefined cut-off value of 0.6 was set, with values <0.6 indicating benign and values ≥0.6 indicating premalignant colorectal polyps. Low confidence characterizations were defined as values within 40% around the cut-off value of 0.6 (between 0.36 and 0.76). Self-critical AI4CRP's diagnostic performances excluded low confidence characterizations. RESULTS: AI4CRP use was feasible and performed on 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, which was higher compared to AI4CRP. CAD EYE had 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. Diagnostic performances of the endoscopist alone (before AI) increased nonsignificantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). Diagnostic performances of the AI-assisted endoscopist were higher compared to both CADx systems, except for specificity, for which CAD EYE performed best. CONCLUSION: Real-time use of AI4CRP was feasible. Objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performances compared to AI4CRP.
Keywords: Artificial intelligence; Colorectal polyp characterization; Computer aided diagnosis; Diminutive colorectal polyps; Optical diagnosis; Self-critical artificial intelligence
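The characterization rule stated in the abstract (a calibrated confidence value in 0.0-1.0, a 0.6 cut-off, and a low-confidence band around the cut-off that the self-critical mode excludes) can be written as a small decision function. This is a sketch of the stated rule only, not the AI4CRP code.

```python
# Confidence-based polyp characterization rule as described in the abstract.
CUTOFF, LOW, HIGH = 0.6, 0.36, 0.76

def characterize(confidence, self_critical=False):
    # Self-critical mode abstains inside the low-confidence band around the cut-off.
    if self_critical and LOW < confidence < HIGH:
        return "abstain (low confidence)"
    return "premalignant" if confidence >= CUTOFF else "benign"

for value in (0.12, 0.45, 0.70, 0.92):
    print(value, characterize(value, self_critical=True))
```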
12. Research on a Comprehensive Monitoring System for Tunnel Operation based on the Internet of Things and Artificial Intelligence Identification Technology
Authors: Xingxing Wang, Donglin Dai, Xiangjun Fan. Journal of Architectural Research and Development, 2024, No. 2, pp. 84-89.
This article proposes a comprehensive monitoring system for tunnel operation to address the risks associated with tunnel operations. These risks include safety control risks, increased traffic flow, extreme weather events, and movement of tectonic plates. The proposed system is based on the Internet of Things and artificial intelligence identification technology. The monitoring system will cover various aspects of tunnel operations, such as the slope of the entrance, the structural safety of the cave body, toxic and harmful gases that may appear during operation, excessively high or low temperature and humidity, poor illumination, water leakage or road water accumulation caused by extreme weather, combustion and smoke caused by fires, and more. The system will enable comprehensive monitoring and early warning of fire protection systems, accident vehicles, and overheating vehicles. This will effectively improve safety during tunnel operation.
Keywords: Internet of Things; Artificial intelligence; Operation tunnel; Monitoring
13. Construction and Practice of Education and Teaching Quality Assurance Systems in Applied Colleges and Universities under the Background of Digital Intelligence
Author: Hui Cheng. Journal of Contemporary Educational Research, 2024, No. 9, pp. 265-270.
This paper discusses the optimization strategy of education and teaching quality assurance systems in applied colleges and universities under the background of digital intelligence. It first summarizes the relevant theories of digital intelligence transformation and analyzes the impact of digital intelligence transformation on higher education. Secondly, this paper puts forward the principles of constructing the quality assurance system of applied colleges, including strengthening quality assurance consciousness, improving teachers' digital literacy, and implementing digital intelligence governance. From the practical perspective, this paper expounds on strategies such as optimizing educational teaching resource allocation, constructing a diversified evaluation system of teaching quality, strengthening the construction and training of teaching staff, and innovating teaching management methods. Specific optimization measures are put forward, such as improving policies, regulations, and system guarantees, strengthening cooperation between schools and enterprises, integrating industry, school, and research, building an educational information platform, and improving the monitoring and feedback mechanism of educational quality.
Keywords: Digital intelligence transformation; Applied colleges and universities; Education and teaching quality; Assurance system; Optimization strategy
14. When Does Sora Show: The Beginning of TAO to Imaginative Intelligence and Scenarios Engineering (Cited by 13)
Authors: Fei-Yue Wang, Qinghai Miao, Lingxi Li, Qinghua Ni, Xuan Li, Juanjuan Li, Lili Fan, Yonglin Tian, Qing-Long Han. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 4, pp. 809-815.
DURING our discussion at workshops for writing "What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence" [1], we had expected that the next milestone for Artificial Intelligence (AI) would be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movies/theater technology that could be used for conducting new "Artificiofactual Experiments" [2] to replace conventional "Counterfactual Experiments" in scientific research and technical development for both natural and social studies [2]-[6]. Now we have OpenAI's Sora, so soon, but this is not the final milestone; the final is actually far away, and this is just the beginning.
Keywords: Something; Intelligence; Replace
15. The Journey/DAO/TAO of Embodied Intelligence: From Large Models to Foundation Intelligence and Parallel Intelligence (Cited by 1)
Authors: Tianyu Shen, Jinlin Sun, Shihan Kong, Yutong Wang, Juanjuan Li, Xuan Li, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 6, pp. 1313-1316.
THE tremendous impact of large models represented by ChatGPT [1]-[3] makes it necessary to consider the practical applications of such models [4]. However, for an artificial intelligence (AI) to truly evolve, it needs to possess a physical "body" to transition from the virtual world to the real world and evolve through interaction with the real environments. In this context, "embodied intelligence" has sparked a new wave of research and technology, leading AI beyond the digital realm into a new paradigm that can actively act and perceive in a physical environment through tangible entities such as robots and automated devices [5].
Keywords: Intelligence; DAO; TAO
16. Artificial Intelligence and Computer Vision during Surgery: Discussing Laparoscopic Images with ChatGPT4—Preliminary Results (Cited by 1)
Authors: Savvas Hirides, Petros Hirides, Kouloufakou Kalliopi, Constantinos Hirides. Surgical Science, 2024, No. 3, pp. 169-181.
Introduction: Ultrafast latest developments in artificial intelligence (AI) have recently multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel, commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image recognition plugin, which was fed with a list of 100 selected laparoscopic snapshots from common surgical procedures. In order to score the reliability of responses received from the image-recognition bot, two corresponding scales were developed, ranging from 0 - 5. The set of images was divided into two groups: unlabeled (Group A) and labeled (Group B), and according to the type of surgical procedure or image resolution. Results: AI was able to correctly recognize the context of surgery-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled, it scored 2.905/5 (58.1%). Phases of the procedure were commented on in detail after all successful interpretations. With rates of 4 - 5/5, the chatbot was able to talk in detail about the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation discussed. Conclusion: Interaction between surgeon and chatbot appears to be an interesting frontend for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts with commercially available software. Further development of medically-oriented AI software and clinical world awareness are expected to bring fruitful information on the topic in the years to come.
Keywords: Artificial intelligence; Surgery; Image recognition; Autonomous surgery
17. Application of artificial intelligence in the diagnosis and treatment of Kawasaki disease (Cited by 1)
Authors: Yan Pan, Fu-Yong Jiao. World Journal of Clinical Cases (SCIE), 2024, No. 23, pp. 5304-5307.
This editorial provides commentary on an article titled "Potential and limitations of ChatGPT and generative artificial intelligence (AI) in medical safety education" recently published in the World Journal of Clinical Cases. AI has enormous potential for various applications in the field of Kawasaki disease (KD). One is machine learning (ML) to assist in the diagnosis of KD, and clinical prediction models have been constructed worldwide using ML; the second is using a gene signal calculation toolbox to identify KD, which can be used to monitor key clinical features and laboratory parameters of disease severity; and the third is using deep learning (DL) to assist in cardiac ultrasound detection. The performance of the DL algorithm is similar to that of experienced cardiac experts in detecting coronary artery lesions, thereby promoting the diagnosis of KD. To effectively utilize AI in the diagnosis and treatment process of KD, it is crucial to improve the accuracy of AI decision-making using more medical data, while addressing issues related to patient personal information protection and AI decision-making responsibility. AI progress is expected to provide patients with accurate and effective medical services that will positively impact the diagnosis and treatment of KD in the future.
Keywords: Artificial intelligence; Kawasaki disease; Diagnosis; Prediction; Image
18. Use of machine learning models for the prognostication of liver transplantation: A systematic review (Cited by 2)
Authors: Gidion Chongo, Jonathan Soldera. World Journal of Transplantation, 2024, No. 1, pp. 164-188.
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is undergoing a transformation with the integration of machine learning (ML) and artificial intelligence models. AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems. METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis guidelines, we conducted a thorough and standardized literature search using the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, or those exhibiting evident methodological flaws. RESULTS: Our search yielded a total of 64 articles, with 23 meeting the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined. Only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capabilities for 90-d mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Keywords: Liver transplantation; Machine learning models; Prognostication; Allograft allocation; Artificial intelligence
19. Artificial intelligence-driven radiomics study in cancer: the role of feature engineering and modeling (Cited by 1)
Authors: Yuan-Peng Zhang, Xin-Yun Zhang, Yu-Ting Cheng, Bing Li, Xin-Zhi Teng, Jiang Zhang, Saikit Lam, Ta Zhou, Zong-Rui Ma, Jia-Bao Sheng, Victor CW Tam, Shara WY Lee, Hong Ge, Jing Cai. Military Medical Research (SCIE, CAS, CSCD), 2024, No. 1, pp. 115-147.
Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches, for feature extraction and selection during feature engineering and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we introduce the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Keywords: Artificial intelligence; Radiomics; Feature extraction; Feature selection; Modeling; Interpretability; Multimodalities; Head and neck cancer
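One feature-engineering and modeling step this review surveys, sparse feature selection on a wide radiomics feature matrix with class imbalance handled by weighting, can be sketched as follows. The synthetic feature matrix and parameter choices are assumptions for illustration, not data or settings from the review.

```python
# L1-penalised feature selection plus classification on a wide, imbalanced feature matrix.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=300, n_informative=10,
                           weights=[0.8, 0.2], random_state=1)  # wide and imbalanced

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                       class_weight="balanced"),  # sparsity + imbalance handling
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```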
20. A Review of Hybrid Cyber Threats Modelling and Detection Using Artificial Intelligence in IIoT (Cited by 1)
Authors: Yifan Liu, Shancang Li, Xinheng Wang, Li Xu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 8, pp. 1233-1261.
The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and data generated to cyber security threats that need to be addressed. This work investigates hybrid cyber threats (HCTs), which are now working on an entirely new level with the increasingly adopted IIoT. This work focuses on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework was proposed, in which L1 regularisation and Random Forest were used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model was employed to construct the correlation between IIoT components and different threats.
Keywords: Cyber security; Industrial Internet of Things; Artificial intelligence; Machine learning algorithms; Hybrid cyber threats
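A minimal sketch of the Random Forest feature-importance analysis mentioned in the abstract: a forest is fitted to threat records and features are ranked by importance. The synthetic records and the flow_stat_* feature names are placeholders, not the IIoT data or the paper's full L1-plus-grey-relation pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = [f"flow_stat_{i}" for i in range(12)]      # assumed feature names
X, y = make_classification(n_samples=500, n_features=12, n_informative=5, random_state=7)

forest = RandomForestClassifier(n_estimators=200, random_state=7).fit(X, y)
ranked = sorted(zip(feature_names, forest.feature_importances_),
                key=lambda item: item[1], reverse=True)
for name, importance in ranked[:5]:                        # top 5 most influential features
    print(f"{name}: {importance:.3f}")
```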