In recent years, the global surge of high-speed railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next-generation HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of artificial intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and the high-speed railway smart platform. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
AUTOMATION has come a long way since the early days of mechanization, i.e., the transition from working exclusively by hand or with animals to working with machinery. The rise of steam engines and water wheels represented the first generation of industry, which is now called Industry 1.0. Citation: L. Vlacic, H. Huang, M. Dotoli, Y. Wang, P. Ioannou, L. Fan, X. Wang, R. Carli, C. Lv, L. Li, X. Na, Q.-L. Han, and F.-Y. Wang, "Automation 5.0: The key to systems intelligence and Industry 5.0," IEEE/CAA J. Autom. Sinica, vol. 11, no. 8, pp. 1723-1727, Aug. 2024.
AIM: To conduct a bibliometric analysis of research on artificial intelligence (AI) in the field of glaucoma, to gain a comprehensive understanding of the current state of research and identify potential new directions for future studies. METHODS: Relevant articles on the application of AI in the field of glaucoma were retrieved from the Web of Science Core Collection, covering the period from January 1, 2013, to December 31, 2022. CiteSpace and VOSviewer software were employed to assess the contributions and co-occurrence relationships among countries/regions, institutions, authors, and journals, and to identify research hotspots and future trends within the field. RESULTS: A total of 750 English articles published between 2013 and 2022 were collected, and the number of publications exhibited an overall increasing trend. The majority of the articles were from China, followed by the United States and India. The National University of Singapore, the Chinese Academy of Sciences, and Sun Yat-sen University made significant contributions to the published works. Weinreb RN and Fu HZ ranked first among authors and cited authors, respectively. American Journal of Ophthalmology is the most impactful academic journal in the field of AI application in glaucoma. The disciplinary scope of this field includes ophthalmology, computer science, mathematics, molecular biology, genetics, and other related disciplines. The clustering and identification of keyword nodes in the co-occurrence network reveal the evolving landscape of AI application in the field of glaucoma. Initially, the hot topics in this field were primarily "segmentation", "classification", and "diagnosis"; in recent years, the focus has shifted to "deep learning", "convolutional neural network", and "artificial intelligence". CONCLUSION: With the rapid development of AI technology, scholars have shown increasing interest in its application in the field of glaucoma. The application of AI in assisting treatment and predicting prognosis in glaucoma may become a future research hotspot. However, the reliability and interpretability of AI data remain pressing issues that require resolution.
Plants sequester carbon through photosynthesis and provide primary productivity for the ecosystem. However, they also simultaneously consume water through transpiration, leading to a carbon-water balance relationship. Agricultural production can be regarded as a form of carbon sequestration behavior. From the perspective of the natural-social-economic complex ecosystem, excessive water usage in food production will aggravate regional water pressure for both domestic and industrial purposes. Hence, achieving a harmonious equilibrium between carbon and water resources during the food production process is a key scientific challenge for ensuring food security and sustainability. Digital intelligence (DI) and cyber-physical-social systems (CPSS) are emerging as new research paradigms that are causing a substantial shift in conventional thinking and methodologies across various scientific fields, including ecological science and sustainability studies. This paper outlines our recent efforts in using advanced technologies such as big data, artificial intelligence (AI), digital twins, metaverses, and parallel intelligence to model, analyze, and manage the intricate dynamics and equilibrium among plants, carbon, and water in arid and semiarid ecosystems. It introduces the concept of the carbon-water balance and explores its management at three levels: the individual plant level, the community level, and the natural-social-economic complex ecosystem level. Additionally, we elucidate the significance of agricultural foundation models as fundamental technologies within this context.
A case analysis of water usage shows that, given the limited availability of water resources in the context of the carbon-water balance, regional collaboration and optimized allocation have the potential to enhance the utilization efficiency of water resources in the river basin. A suggested approach is to consider the river basin as a unified entity and coordinate the relationship between the upstream, midstream, and downstream areas. Furthermore, establishing mechanisms for water resource transfer and trade among different industries can be instrumental in maximizing the benefits derived from water resources. Finally, we envisage a future of agriculture characterized by the integration of digital, robotic, and biological farming techniques. This vision aims to incorporate small tasks, big models, and deep intelligence into the regular ecological practices of intelligent agriculture.
Explainable artificial intelligence (XAI) enhances decision-making and improves rule-based techniques by using more advanced machine learning (ML) and deep learning (DL) algorithms. In this paper, we target e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision support. Federated machine learning (FML) is a new and advanced technology that helps maintain privacy for personal health records (PHR) and handle large amounts of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source federated learning (FL) platform. The experimental evaluation demonstrates the accuracy rate obtained with an epoch count of 5, a batch size of 16, and 5 clients, which yields a higher accuracy rate (19,104). We conclude the paper by discussing existing gaps and future work in e-healthcare systems.
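The federated averaging step mentioned above can be sketched as follows. This is a minimal illustration of the generic FedAvg idea, weighting each client's model parameters by its share of the training data, not the paper's actual implementation; the five clients, layer shapes, and sample counts are made-up values for demonstration.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    client_weights: list with one entry per client, each a list of layer arrays
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        # Weight each client's layer by its share of the total data.
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(acc)
    return averaged

# Five clients (as in the reported experiment), each with a toy 2x2 "layer".
clients = [[np.full((2, 2), float(i))] for i in range(5)]
sizes = [16] * 5  # equal local data per client, for illustration
global_weights = federated_average(clients, sizes)
print(global_weights[0])  # mean of 0..4 -> all entries 2.0
```

With equal client sizes this reduces to a plain mean; unequal sizes tilt the global model toward clients holding more data, which is the property that lets FML aggregate PHR-derived updates without pooling the raw records.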
●AIM: To quantify the performance of artificial intelligence (AI) in detecting glaucoma with spectral-domain optical coherence tomography (SD-OCT) images. ●METHODS: Electronic databases including PubMed, Embase, Scopus, ScienceDirect, ProQuest, and the Cochrane Library were searched for studies published before May 31, 2023 that adopted AI for glaucoma detection with SD-OCT images. All of the literature was screened and extracted by two investigators. Meta-analysis, meta-regression, subgroup analysis, and publication bias assessment were conducted in Stata 16.0. The risk-of-bias assessment was performed in RevMan 5.4 using the QUADAS-2 tool. ●RESULTS: Twenty studies and 51 models were selected for systematic review and meta-analysis. The pooled sensitivity and specificity were 0.91 (95%CI: 0.86-0.94, I2=94.67%) and 0.90 (95%CI: 0.87-0.92, I2=89.24%). The pooled positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 8.79 (95%CI: 6.93-11.15, I2=89.31%) and 0.11 (95%CI: 0.07-0.16, I2=95.25%). The pooled diagnostic odds ratio (DOR) and area under the curve (AUC) were 83.58 (95%CI: 47.15-148.15, I2=100%) and 0.95 (95%CI: 0.93-0.97). There was no threshold effect (Spearman correlation coefficient=0.22, P>0.05). ●CONCLUSION: AI detects glaucoma from SD-OCT images with high accuracy. Applying AI-based algorithms in a "doctor + artificial intelligence" mode can improve the diagnosis of glaucoma.
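The pooled metrics above are related by standard formulas: PLR = sensitivity / (1 - specificity), NLR = (1 - sensitivity) / specificity, and DOR = PLR / NLR. The sketch below applies them to the pooled sensitivity (0.91) and specificity (0.90). The results (about 9.1, 0.10, and 91) differ slightly from the reported pooled PLR, NLR, and DOR because meta-analytic pooling is done per metric across studies rather than derived from the pooled sensitivity and specificity.

```python
def likelihood_ratios(sensitivity, specificity):
    plr = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    nlr = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    dor = plr / nlr                           # diagnostic odds ratio
    return plr, nlr, dor

plr, nlr, dor = likelihood_ratios(0.91, 0.90)
print(round(plr, 1), round(nlr, 2), round(dor, 1))  # -> 9.1 0.1 91.0
```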
To address the current problems of poor generality, poor real-time performance, and imperfect information transmission in battlefield target intelligence systems, this paper studies the battlefield target intelligence system from the top-level perspective of multi-service joint warfare. First, an overall planning and analysis method for architecture modeling is proposed, using the idea of a bionic analogy for battlefield target intelligence system architecture modeling, which reduces the difficulty of the planning and design process. The method combines the Department of Defense Architecture Framework (DoDAF) modeling method, the multi-living agent (MLA) theory modeling method, and others, forming a set of rapid planning methods that can be applied to model the architecture of various types of complex systems. Further, a liveness analysis of the battlefield target intelligence system is carried out, and the problems of the existing system are presented from several aspects. A technical prediction of its development and construction is given, providing directional ideas for subsequent research and development of the battlefield target intelligence system. Finally, the proposed architecture model of the battlefield target intelligence system is simulated and verified using colored Petri net (CPN) simulation software, and the analysis demonstrates the reasonable integrity of its logic.
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize the security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Network (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved by alternative systems based on diverse optimization techniques on the same dataset. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
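Genetic-algorithm-driven feature selection of the kind described can be sketched as below. This is a generic illustration, not the GAADPSDNN implementation: the per-feature usefulness scores stand in for the validation accuracy of a classifier retrained on each candidate subset, and the size penalty models the goal of cutting computational load on IoT devices.

```python
import random

random.seed(0)

N_FEATURES = 12
# Hypothetical per-feature usefulness scores; a real system would retrain
# the DNN (or random forest) on each candidate subset and use its accuracy.
USEFULNESS = [0.9, 0.1, 0.8, 0.2, 0.7, 0.1, 0.6, 0.3, 0.5, 0.2, 0.4, 0.1]

def fitness(mask):
    # Reward useful features; penalize subset size to reduce compute cost.
    gain = sum(u for u, bit in zip(USEFULNESS, mask) if bit)
    return gain - 0.15 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)  # one-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in mask]

population = [[random.randint(0, 1) for _ in range(N_FEATURES)]
              for _ in range(20)]
for _ in range(40):  # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]              # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(best, round(fitness(best), 2))
```

The evolved mask tends to keep high-usefulness features and drop the cheap-to-remove ones, which is exactly the "adaptive selection of active features" trade-off the abstract credits for the reduced device load.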
BACKGROUND: Diabetes, a globally escalating health concern, necessitates innovative solutions for efficient detection and management. Blood glucose control is an essential aspect of managing diabetes, as is finding the most effective ways to achieve it. The latest findings suggest that a basal insulin administration rate and a single high-concentration injection before a meal may not be sufficient to maintain healthy blood glucose levels. While basal-rate treatment can stabilize blood glucose levels over the long term, it may not be enough to bring the levels below the post-meal limit after 60 min. The short-term impact of meals can be greatly reduced by high-concentration injections, which help stabilize blood glucose levels; unfortunately, they cannot provide long-term stability that satisfies the post-meal or pre-meal restrictions. However, proportional-integral-derivative (PID) control combined with a basal dose maintains blood glucose levels within the target range for a longer period. AIM: To develop a closed-loop electronic system that automatically pumps the required insulin into the patient's body in synchronization with glucose sensor readings. METHODS: The proposed system integrates a glucose sensor, a decision unit, and a pumping module to administer insulin and enhance system effectiveness. Serving as the intelligence hub, the decision unit analyzes data from the glucose sensor to determine the optimal insulin dosage, guided by a pre-existing glucose and insulin level table. The artificial intelligence detection block processes this information and provides decision instructions to the pumping module. Equipped with communication antennas, the glucose sensor and micropump operate in a feedback loop, creating a closed-loop system that eliminates the need for manual intervention. RESULTS: The incorporation of a PID controller to assess and regulate blood glucose and insulin levels in individuals with diabetes introduces a sophisticated and dynamic element to diabetes management. The simulation not only allows visualization of how the body responds to different inputs but also offers a valuable tool for predicting and testing the effects of various interventions over time. The PID controller's role in adjusting insulin dosage based on the discrepancy between desired setpoints and actual measurements showcases a proactive strategy for maintaining blood glucose levels within a healthy range. This dynamic feedback loop not only delays the onset of steady-state conditions but also effectively counteracts post-meal spikes in blood glucose. CONCLUSION: The WiFi-controlled voltage controller and the PID controller simulation collectively underscore the ongoing efforts to enhance efficiency, safety, and personalized care in diabetes management. These technological advancements not only contribute to the optimization of insulin delivery systems but also have the potential to reshape our understanding of glucose and insulin dynamics, fostering a new era of precision medicine in the treatment of diabetes.
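A PID insulin loop of the kind described can be sketched in a few lines. The plant model and all gains below are toy, illustrative values chosen only to show the control structure (error, integral, derivative, and a non-negative pump command); they are not clinical parameters and not the paper's simulation.

```python
# Minimal discrete PID loop on a toy one-compartment glucose model.
# All gains and model constants are illustrative, not clinical values.

SETPOINT = 100.0   # target blood glucose, mg/dL
KP, KI, KD = 0.5, 0.01, 0.1
DT = 1.0           # minutes per step

glucose = 180.0    # post-meal starting level
integral = 0.0
prev_error = glucose - SETPOINT

for step in range(300):
    error = glucose - SETPOINT
    integral += error * DT
    derivative = (error - prev_error) / DT
    prev_error = error

    # Insulin command: only non-negative doses make sense for a pump.
    insulin = max(0.0, KP * error + KI * integral + KD * derivative)

    # Toy plant: insulin lowers glucose; the body slowly drifts upward.
    glucose += (-0.1 * insulin + 0.02 * (140.0 - glucose)) * DT

print(round(glucose, 1))
```

The integral term is what lets the loop hold glucose at the setpoint despite the constant upward drift, mirroring the abstract's point that PID with a basal dose sustains control where single injections cannot.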
The objective scientific conclusions obtained from research conducted in various fields of science show that era and worldview exist in unity and determine one another, and that they are the most important phenomena for understanding geniuses, historical events, personalities who have left a mark on political history, and every individual as a whole. It is appropriate to briefly consider the problem in the context of human and personality factors. It is known that man has tried to understand natural phenomena since the beginning of time. Contact with the material world naturally affects his consciousness and even his subconscious as he solves problems that are important or useful for human life. Through this understanding, the worldview changes and is formed. Thus, depending on the material and moral development of all spheres of life and on the content and essence of progress, as civilizations replaced each other in different periods, periodization took place and became a system. Taking Europe as an example, the Ice Age people of some 300,000 years ago, who engaged in hunting to meet their need for food, spread from Africa, where they had lived, to many parts of the world in order to survive and meet more of their daily needs. The extensive settlement of Ice Age peoples across the Earth encompassed farming, fishing, animal husbandry, hunting, and handicrafts, and led to the revolutionary development of these fields. As economic activity led these first inhabitants of the planet from caves to modest shelters, then to good houses, and then to palaces, labor in various occupations, including crafts, developed rapidly. Thus, the outstanding figures of the era who differed from the crowd (later this class will be called personalities, geniuses... - Kh.G.) began to appear. If we approach the issue from the point of view of history, we witness that worldview determines development in different periods: each period can be considered to have developed or experienced a crisis according to the level of its worldview. In this direction of our thoughts, the question arises: what, then, is the worldview phenomenon of this era, the twenty-first century? Based on the general content of current events, characterized as the globalization stage of the modern world, we can say that the outlook of the historical stage we live in is based on the achievements of the last stage of the industrial revolution. In this article, by analyzing the history of artificial intelligence across the world's industrial revolutions, we study both the concept of progress of the industrial revolutions and the progressive, and at the same time regressive, development of artificial intelligence.
Legacy threat detection systems have not been able to keep up with the exponential growth in the scope, frequency, and impact of cybersecurity threats, and artificial intelligence is being used to help address the issue. This paper's primary goal is to examine how African nations are utilizing artificial intelligence to defend their infrastructure against cyberattacks. Artificial intelligence (AI) systems will make decisions that impact Africa's future. The lack of technical expertise, a limited labor pool, scarce financial resources, data limitations, uncertainty, a lack of structured data, the absence of government policies, ethics concerns, user attitudes, insufficient investment in research and development, and the requirement for more adaptable and dynamic regulatory systems all pose obstacles to the adoption of AI technologies in Africa. The paper discusses how African countries are adopting artificial intelligence solutions for cybersecurity, and it shows the impact of AI in identifying shadow data, monitoring for anomalies in data access, and alerting cybersecurity professionals to potential threats from anyone accessing data or sensitive information, saving valuable time in detecting and remediating issues in real time. The study finds that 69.16% of African companies are implementing information security strategies, and of these, 45% said they use technologies based on AI algorithms. It also finds that a large number of African businesses use tools that can track and analyze user behaviour in designated areas and spot anomalies, such as new users, strange IP addresses and login activity, changes to permissions on files, folders, and other resources, and the copying or erasure of massive amounts of data.
Further, just 18.18% of the targeted countries have no national cybersecurity strategy or policy. The study proposes integrating AI with big data security analytics; adopting this approach would benefit all African nations, as it provides a range of cyberattack defense techniques.
The intelligent security system is a family of systems that uses modern information technologies such as artificial intelligence, cloud computing, big data, and face recognition to carry out comprehensive monitoring, early warning, prevention and control, and disposal for security protection. It is the development trend of future security systems and the basis for open sharing between higher education parks and universities. Using content analysis, unstructured interviews, and other research methods, this paper studies in depth the feasibility and basic ideas of constructing an intelligent security system in Shahe Higher Education Park, and it distills basic experience and typical practices from the project's construction, further promoting more intelligent, standardized, and scientific safety management in colleges and universities. It provides an important theoretical basis and practical guidance for opening and sharing between higher education parks and universities.
BACKGROUND: Medication errors, especially in dosage calculation, pose risks in healthcare. Artificial intelligence (AI) systems like ChatGPT and Google Bard may help reduce errors, but their accuracy in providing medication information remains to be evaluated. AIM: To evaluate the accuracy of AI systems (ChatGPT 3.5, ChatGPT 4, Google Bard) in providing drug dosage information per Harrison's Principles of Internal Medicine. METHODS: A set of natural language queries mimicking real-world medical dosage inquiries was presented to the AI systems. Responses were analyzed using a 3-point Likert scale. The analysis, conducted with Python and its libraries, focused on basic statistics, overall system accuracy, and disease-specific and organ-system accuracies. RESULTS: ChatGPT 4 outperformed the other systems, showing the highest rate of correct responses (83.77%) and the best overall weighted accuracy (0.6775). Disease-specific accuracy varied notably across systems, with some diseases being accurately recognized while others showed significant discrepancies. Organ-system accuracy also showed variable results, underscoring system-specific strengths and weaknesses. CONCLUSION: ChatGPT 4 demonstrates superior reliability in medical dosage information, yet variation across diseases emphasizes the need for ongoing improvement. These results highlight AI's potential to aid healthcare professionals, urging continuous development toward dependable accuracy in critical medical situations.
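One plausible way to turn 3-point Likert grades into a "correct response rate" and a "weighted accuracy" like those reported above is sketched below. The grade-to-weight mapping (1.0 / 0.5 / 0.0) and the sample grades are assumptions for illustration, not the study's actual scoring scheme.

```python
from collections import Counter

# Hypothetical grading of AI responses on a 3-point Likert scale:
# 2 = fully correct, 1 = partially correct, 0 = incorrect.
WEIGHTS = {2: 1.0, 1: 0.5, 0: 0.0}

def summarize(grades):
    counts = Counter(grades)
    correct_rate = counts[2] / len(grades)                    # strict
    weighted = sum(WEIGHTS[g] for g in grades) / len(grades)  # partial credit
    return correct_rate, weighted

grades = [2] * 8 + [1] * 1 + [0] * 1  # ten sample queries
rate, weighted = summarize(grades)
print(rate, weighted)  # -> 0.8 0.85
```

Separating the strict rate from the weighted score explains how a system can have a high correct-response rate yet a noticeably lower weighted accuracy: partial answers count against the latter more heavily, depending on the weights chosen.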
The multi-mode integrated railway system, anchored by the high-speed railway, caters to diverse travel requirements both within and between cities, offering safe, comfortable, punctual, and eco-friendly transportation services. With the expansion of railway networks, enhancing the efficiency and safety of the comprehensive system has become a crucial issue in the advanced development of railway transportation. In light of the prevailing application of artificial intelligence technologies within railway systems, this study leverages large-model technology, characterized by robust learning capability, efficient association, and linkage analysis, to propose an artificial intelligence (AI)-powered railway control and dispatching system. The system is designed around four core functions: globally optimal unattended dispatching, synergetic transportation across multiple modes, high-speed automatic control, and precise maintenance decision-making and execution. The deployment pathway and essential tasks of the system are further delineated, alongside the challenges and obstacles encountered. The AI-powered system promises a significant enhancement in the operational efficiency and safety of the composite railway system, ensuring a more effective alignment between transportation services and passenger demands.
BACKGROUND: Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM: To evaluate the feasibility of real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps, and to compare its performance with CAD EYE™ (Fujifilm, Tokyo, Japan). The influence of CADx on the optical diagnosis of an expert endoscopist was also investigated. METHODS: AI4CRP was developed in-house, while CAD EYE is proprietary software provided by Fujifilm. Both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, with histopathology used as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence value (range 0.0-1.0). A predefined cut-off value of 0.6 was set, with values below 0.6 indicating benign and values of 0.6 or above indicating premalignant colorectal polyps. Low-confidence characterizations were defined as values within 40% around the cut-off value of 0.6 (between 0.36 and 0.76). The self-critical AI4CRP's diagnostic performance excluded low-confidence characterizations. RESULTS: AI4CRP use was feasible; it was performed on 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low-confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, higher than plain AI4CRP. CAD EYE had 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. The diagnostic performance of the endoscopist alone (before AI) increased non-significantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). The diagnostic performance of the AI-assisted endoscopist was higher than that of both CADx systems, except for specificity, for which CAD EYE performed best. CONCLUSION: Real-time use of AI4CRP was feasible. Objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performance than plain AI4CRP.
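The cut-off and self-critical logic described above can be sketched as follows. The band limits are one reading of "40% around the cut-off" (0.6 − 0.4·0.6 = 0.36 on the benign side, 0.6 + 0.4·(1 − 0.6) = 0.76 on the premalignant side); that interpretation, like the function names, is an assumption for illustration rather than the paper's code.

```python
CUTOFF = 0.6
LOW, HIGH = 0.36, 0.76  # assumed low-confidence band around the cut-off

def characterize(confidence):
    """Label a polyp from a calibrated confidence value in [0.0, 1.0]."""
    label = "premalignant" if confidence >= CUTOFF else "benign"
    low_confidence = LOW < confidence < HIGH
    return label, low_confidence

def self_critical(confidences):
    # Self-critical mode: drop low-confidence characterizations entirely.
    return [(c, characterize(c)[0]) for c in confidences
            if not characterize(c)[1]]

print(characterize(0.82))  # -> ('premalignant', False)
print(characterize(0.55))  # -> ('benign', True): dropped when self-critical
print(self_critical([0.82, 0.55, 0.20]))
```

Refusing the near-threshold cases is what trades coverage (14 of 51 polyps excluded) for the higher accuracy the self-critical results report.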
This article proposes a comprehensive monitoring system for tunnel operation to address the associated risks, which include safety control risks, increased traffic flow, extreme weather events, and movement of tectonic plates. The proposed system is based on the Internet of Things and artificial intelligence identification technology. The monitoring system covers various aspects of tunnel operations, such as the slope of the entrance; the structural safety of the tunnel body; toxic and harmful gases that may appear during operation; abnormally high or low temperature and humidity; poor illumination; water leakage or road water accumulation caused by extreme weather; and combustion and smoke caused by fires. The system enables comprehensive monitoring and early warning for fire protection systems, accident vehicles, and overheating vehicles, effectively improving safety during tunnel operation.
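A rule-based early-warning layer over the monitored quantities listed above might look like the sketch below. Every threshold here is a made-up placeholder, not a value from the proposed system, and real deployments would combine such rules with the AI identification components the article describes.

```python
# Illustrative threshold rules for tunnel sensor readings; the limits
# are placeholders, not values from the proposed system.
RULES = {
    "co_ppm":        lambda v: v > 50,            # toxic gas build-up
    "temperature_c": lambda v: v < -5 or v > 45,  # abnormal temperature
    "humidity_pct":  lambda v: v > 95,            # condensation / leakage
    "lux":           lambda v: v < 30,            # poor illumination
    "smoke_density": lambda v: v > 0.1,           # possible fire
}

def evaluate(reading):
    """Return the names of all rules a sensor reading violates."""
    return [name for name, triggered in RULES.items()
            if name in reading and triggered(reading[name])]

alerts = evaluate({"co_ppm": 72, "temperature_c": 21, "smoke_density": 0.02})
print(alerts)  # -> ['co_ppm']
```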
This paper discusses optimization strategies for education and teaching quality assurance systems in applied colleges and universities against the background of digital intelligence. It first summarizes the relevant theories of digital intelligence transformation and analyzes the impact of this transformation on higher education. It then puts forward principles for constructing the quality assurance system of applied colleges, including strengthening quality assurance consciousness, improving teachers' digital literacy, and implementing digital intelligence governance. From a practical perspective, the paper expounds strategies such as optimizing the allocation of educational teaching resources, constructing a diversified teaching quality evaluation system, strengthening the construction and training of teaching staff, and innovating teaching management methods. Specific optimization measures are proposed, such as improving policies, regulations, and institutional guarantees; strengthening school-enterprise cooperation that integrates industry, academia, and research; building an educational information platform; and improving the monitoring and feedback mechanisms for educational quality.
Acute pancreatitis(AP)is a potentially life-threatening inflammatory disease of the pancreas,with clinical management determined by the severity of the disease.Diagnosis,severity prediction,and prognosis assessment of...Acute pancreatitis(AP)is a potentially life-threatening inflammatory disease of the pancreas,with clinical management determined by the severity of the disease.Diagnosis,severity prediction,and prognosis assessment of AP typically involve the use of imaging technologies,such as computed tomography,magnetic resonance imaging,and ultrasound,and scoring systems,including Ranson,Acute Physiology and Chronic Health Evaluation II,and Bedside Index for Severity in AP scores.Computed tomography is considered the gold standard imaging modality for AP due to its high sensitivity and specificity,while magnetic resonance imaging and ultrasound can provide additional information on biliary obstruction and vascular complications.Scoring systems utilize clinical and laboratory parameters to classify AP patients into mild,moderate,or severe categories,guiding treatment decisions,such as intensive care unit admission,early enteral feeding,and antibiotic use.Despite the central role of imaging technologies and scoring systems in AP management,these methods have limitations in terms of accuracy,reproducibility,practicality and economics.Recent advancements of artificial intelligence(AI)provide new opportunities to enhance their performance by analyzing vast amounts of clinical and imaging data.AI algorithms can analyze large amounts of clinical and imaging data,identify scoring system patterns,and predict the clinical course of disease.AI-based models have shown promising results in predicting the severity and mortality of AP,but further validation and standardization are required before widespread clinical application.In addition,understanding the correlation between these three technologies will aid in developing new methods that can accurately,sensitively,and 
specifically be used in the diagnosis,severity prediction,and prognosis assessment of AP through complementary advantages.展开更多
Abstract: The integration of digital twin (DT) and 6G edge intelligence provides accurate forecasting for distributed resource control in smart parks. However, the adverse impact of model poisoning attacks on DT model training cannot be ignored. To address this issue, we first construct models of DT model training and of model poisoning attacks. An optimization problem is formulated to minimize the weighted sum of the DT loss function and the DT model training delay. The problem is then transformed and solved by the proposed Multi-timescAle endogenouS securiTy-aware DQN-based rEsouRce management algorithm (MASTER), based on DT-assisted state information evaluation and attack detection. MASTER adopts multi-timescale deep Q-learning (DQN) networks to jointly schedule local training epochs and devices, and actively adjusts resource management strategies based on the estimated attack probability to achieve endogenous security awareness. Simulation results demonstrate that MASTER achieves excellent DT model training accuracy and delay.
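The scheduling idea in the abstract above, learning which (local epochs, devices) choice minimizes a weighted sum of loss and delay, can be illustrated with a toy stand-in. The sketch below uses a single-state tabular epsilon-greedy Q-learner rather than the paper's multi-timescale DQN, and the environment dynamics (how loss and delay respond to the action, and the attack-noise term) are invented purely for illustration.

```python
import random

# Toy stand-in for the MASTER-style scheduler: pick (local epochs, devices)
# to minimize a weighted sum of model loss and training delay.
# All dynamics below are invented for illustration only.
ACTIONS = [(e, d) for e in (1, 2, 4) for d in (2, 4, 8)]  # (epochs, devices)
W_LOSS, W_DELAY = 1.0, 0.1

def simulated_round(epochs, devices, attack_prob=0.1):
    """Invented environment: more epochs/devices lower loss but raise delay."""
    loss = 1.0 / (1 + epochs * devices) + attack_prob * random.random()
    delay = 0.5 * epochs + 0.2 * devices
    return W_LOSS * loss + W_DELAY * delay  # weighted cost to minimize

def train(episodes=500, eps=0.2, lr=0.1):
    random.seed(0)
    q = {a: 0.0 for a in ACTIONS}  # single-state Q-table of expected costs
    for _ in range(episodes):
        # epsilon-greedy: explore, else pick the lowest estimated cost
        a = random.choice(ACTIONS) if random.random() < eps else min(q, key=q.get)
        cost = simulated_round(*a)
        q[a] += lr * (cost - q[a])  # running estimate of expected cost
    return min(q, key=q.get)

print(train())
```

The paper's actual algorithm operates over multiple timescales with deep function approximation and attack detection; this sketch only shows the cost-minimizing action-selection core.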
Abstract: DURING our discussions at workshops for writing "What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence" [1], we had expected that the next milestone for Artificial Intelligence (AI) would be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movie/theater technology that could be used for conducting new "Artificiofactual Experiments" [2] to replace conventional "Counterfactual Experiments" in scientific research and technical development for both natural and social studies [2]-[6]. Now we have OpenAI's Sora, so soon; but this is not the final step, which is actually far away, and it is just the beginning.
Funding: Supported by the National Natural Science Foundation of China (62172033).
Abstract: In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next-generation HSR, fueled by emerging services like video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and the high-speed railway smart platform. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
Funding: Supported in part by the Hong Kong Polytechnic University via project P0038447, and by the Science and Technology Development Fund, Macao SAR (0093/2023/RIA2 and 0145/2023/RIA3).
Abstract: AUTOMATION has come a long way since the early days of mechanization, i.e., the process of working exclusively by hand or using animals to work with machinery. The rise of steam engines and water wheels represented the first generation of industry, which is now called Industry 1.0. Citation: L. Vlacic, H. Huang, M. Dotoli, Y. Wang, P. Ioannou, L. Fan, X. Wang, R. Carli, C. Lv, L. Li, X. Na, Q.-L. Han, and F.-Y. Wang, "Automation 5.0: The key to systems intelligence and Industry 5.0," IEEE/CAA J. Autom. Sinica, vol. 11, no. 8, pp. 1723-1727, Aug. 2024.
Funding: Supported by the National Natural Science Foundation of China (No. 82074335).
Abstract: AIM: To conduct a bibliometric analysis of research on artificial intelligence (AI) in the field of glaucoma to gain a comprehensive understanding of the current state of research and identify potential new directions for future studies. METHODS: Relevant articles on the application of AI in the field of glaucoma were retrieved from the Web of Science Core Collection, covering the period from January 1, 2013, to December 31, 2022. To assess the contributions and co-occurrence relationships among different countries/regions, institutions, authors, and journals, CiteSpace and VOSviewer software were employed, and the research hotspots and future trends within the field were identified. RESULTS: A total of 750 English articles published between 2013 and 2022 were collected, and the number of publications exhibited an overall increasing trend. The majority of the articles were from China, followed by the United States and India. The National University of Singapore, the Chinese Academy of Sciences, and Sun Yat-sen University made significant contributions to the published works. Weinreb RN and Fu HZ ranked first among authors and cited authors. The American Journal of Ophthalmology is the most impactful academic journal in the field of AI application in glaucoma. The disciplinary scope of this field includes ophthalmology, computer science, mathematics, molecular biology, genetics, and other related disciplines. The clustering and identification of keyword nodes in the co-occurrence network reveal the evolving landscape of AI application in the field of glaucoma. Initially, the hot topics in this field were primarily "segmentation", "classification", and "diagnosis"; in recent years, the focus has shifted to "deep learning", "convolutional neural network", and "artificial intelligence". CONCLUSION: With the rapid development of AI technology, scholars have shown increasing interest in its application in the field of glaucoma. The application of AI in assisting treatment and predicting prognosis in glaucoma may become a future research hotspot. However, the reliability and interpretability of AI data remain pressing issues that require resolution.
Funding: Supported in part by the National Key Research and Development Program of China (2021ZD0113704), the National Natural Science Foundation of China (62076239, 42041005, 62103411), and the Science and Technology Development Fund, Macao SAR (0050/2020/A1).
Abstract: Plants sequester carbon through photosynthesis and provide primary productivity for the ecosystem. However, they also simultaneously consume water through transpiration, leading to a carbon-water balance relationship. Agricultural production can be regarded as a form of carbon sequestration behavior. From the perspective of the natural-social-economic complex ecosystem, excessive water usage in food production will aggravate regional water pressure for both domestic and industrial purposes. Hence, achieving a harmonious equilibrium between carbon and water resources during the food production process is a key scientific challenge for ensuring food security and sustainability. Digital intelligence (DI) and cyber-physical-social systems (CPSS) are emerging as new research paradigms that are causing a substantial shift in conventional thinking and methodologies across various scientific fields, including ecological science and sustainability studies. This paper outlines our recent efforts in using advanced technologies such as big data, artificial intelligence (AI), digital twins, metaverses, and parallel intelligence to model, analyze, and manage the intricate dynamics and equilibrium among plants, carbon, and water in arid and semiarid ecosystems. It introduces the concept of the carbon-water balance and explores its management at three levels: the individual plant level, the community level, and the natural-social-economic complex ecosystem level. Additionally, we elucidate the significance of agricultural foundation models as fundamental technologies within this context. A case analysis of water usage shows that, given the limited availability of water resources in the context of the carbon-water balance, regional collaboration and optimized allocation have the potential to enhance the utilization efficiency of water resources in the river basin. A suggested approach is to consider the river basin as a unified entity and coordinate the relationship between the upstream, midstream, and downstream areas. Furthermore, establishing mechanisms for water resource transfer and trade among different industries can be instrumental in maximizing the benefits derived from water resources. Finally, we envisage a future of agriculture characterized by the integration of digital, robotic, and biological farming techniques. This vision aims to incorporate small tasks, big models, and deep intelligence into the regular ecological practices of intelligent agriculture.
Abstract: Explainable Artificial Intelligence (XAI) offers advanced features to enhance decision-making and improve rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle large amounts of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate with epoch size 5, batch size 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
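The federated averaging step the abstract refers to can be sketched in a few lines: a server combines client model parameters, weighted by each client's local sample count. Here client parameters are plain lists of floats standing in for full model tensors; the client values and sizes are invented for the example.

```python
# Minimal sketch of federated averaging (FedAvg): the server returns the
# sample-count-weighted mean of per-client parameter vectors.
def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Example: 5 clients (as in the abstract's setup), 3 parameters each.
clients = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [2.0, 2.0, 2.0],
           [0.0, 4.0, 2.0], [4.0, 0.0, 2.0]]
sizes = [16, 16, 16, 16, 16]  # e.g. one batch of 16 records per client
print(federated_average(clients, sizes))  # → [2.0, 2.0, 2.0]
```

In a real FL round this average replaces the global model, which is then redistributed to clients for the next round of local training.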
Abstract: ●AIM: To quantify the performance of artificial intelligence (AI) in detecting glaucoma from spectral-domain optical coherence tomography (SD-OCT) images. ●METHODS: Electronic databases including PubMed, Embase, Scopus, ScienceDirect, ProQuest, and the Cochrane Library were searched before May 31, 2023 for studies that adopted AI for glaucoma detection with SD-OCT images. All of the literature was screened and extracted by two investigators. Meta-analysis, Meta-regression, subgroup analysis, and publication bias assessment were conducted in Stata 16.0. The risk of bias assessment was performed in RevMan 5.4 using the QUADAS-2 tool. ●RESULTS: Twenty studies and 51 models were selected for systematic review and Meta-analysis. The pooled sensitivity and specificity were 0.91 (95%CI: 0.86-0.94, I²=94.67%) and 0.90 (95%CI: 0.87-0.92, I²=89.24%). The pooled positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 8.79 (95%CI: 6.93-11.15, I²=89.31%) and 0.11 (95%CI: 0.07-0.16, I²=95.25%). The pooled diagnostic odds ratio (DOR) and area under the curve (AUC) were 83.58 (95%CI: 47.15-148.15, I²=100%) and 0.95 (95%CI: 0.93-0.97). There was no threshold effect (Spearman correlation coefficient=0.22, P>0.05). ●CONCLUSION: AI achieves high accuracy for the detection of glaucoma with SD-OCT images. The application of AI-based algorithms in a "doctor + artificial intelligence" workflow can improve the diagnosis of glaucoma.
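The metrics reported above are related, per individual study, by standard identities: PLR = sensitivity / (1 - specificity), NLR = (1 - sensitivity) / specificity, and DOR = PLR / NLR. The sketch below recomputes them from a single sensitivity/specificity pair. Note that the pooled meta-analytic values in the abstract need not satisfy these identities exactly, because each metric is pooled separately across studies.

```python
# Per-study diagnostic-accuracy identities (not the pooling procedure itself).
def likelihood_ratios(sens, spec):
    plr = sens / (1 - spec)   # positive likelihood ratio
    nlr = (1 - sens) / spec   # negative likelihood ratio
    dor = plr / nlr           # diagnostic odds ratio
    return plr, nlr, dor

# Using the pooled sensitivity/specificity from the abstract as inputs:
plr, nlr, dor = likelihood_ratios(0.91, 0.90)
print(round(plr, 2), round(nlr, 2), round(dor, 1))  # → 9.1 0.1 91.0
```

The recomputed PLR of 9.1 is close to, but not identical with, the separately pooled 8.79, which is the expected behavior for bivariate pooling.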
Funding: Supported by the National Natural Science Foundation of China (41927801).
Abstract: To address the current problems of poor generality, poor real-time performance, and imperfect information transmission in battlefield target intelligence systems, this paper studies the battlefield target intelligence system from the top-level perspective of multi-service joint warfare. First, an overall planning and analysis method for architecture modeling is proposed, using the idea of a bionic analogy for battlefield target intelligence system architecture modeling, which reduces the difficulty of the planning and design process. The method combines the Department of Defense Architecture Framework (DoDAF) modeling method, the multi-living agent (MLA) theory modeling method, and others for planning and modeling. A set of rapid planning methods that can be applied to model the architecture of various types of complex systems is formed. Further, a liveness analysis of the battlefield target intelligence system is carried out, and the problems of the existing system are presented from several aspects. A technical forecast of development and construction is given, which provides directional ideas for subsequent research and development of the battlefield target intelligence system. Finally, the proposed architecture model of the battlefield target intelligence system is simulated and verified using colored Petri net (CPN) simulation software. The analysis demonstrates the reasonable integrity of its logic.
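The executable check that CPN software performs on such an architecture model can be illustrated with a minimal place/transition net: transitions fire only when their input places hold enough tokens, so reachability of a final marking verifies the model's logic. The places and transitions below are invented for illustration, not taken from the paper's model.

```python
# Toy place/transition Petri net: target data is collected, fused, then
# distributed. A transition is enabled when its pre-places hold the
# required tokens; firing moves tokens from pre- to post-places.
marking = {"collected": 1, "fused": 0, "distributed": 0}
transitions = {
    "fuse": ({"collected": 1}, {"fused": 1}),
    "distribute": ({"fused": 1}, {"distributed": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    assert enabled(name), f"{name} not enabled"
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

fire("fuse")
fire("distribute")
print(marking)  # → {'collected': 0, 'fused': 0, 'distributed': 1}
```

Colored Petri nets extend this by attaching typed data ("colors") to tokens and guards to transitions, but the enable/fire semantics shown here is the core of the verification.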
Abstract: While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance of and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
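The "adaptive selection of active features" via a genetic algorithm can be sketched as follows: chromosomes are feature bitmasks, and fitness rewards keeping informative features while penalizing feature count. The fitness function below is an invented proxy (with an assumed set of informative feature indices) standing in for a real classifier's validation accuracy; the paper's actual pipeline scores subsets with its DNN.

```python
import random

# Toy GA feature selection in the spirit of the GAADPSDNN pipeline.
random.seed(1)
N_FEATURES = 10
INFORMATIVE = {0, 3, 7}  # assumed ground truth, for the toy fitness only

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.05 * sum(mask)  # accuracy proxy minus complexity cost

def evolve(pop_size=20, generations=30, mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)            # one-point crossover
            child = [g ^ (random.random() < mut)             # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([i for i, g in enumerate(best) if g])  # selected feature indices
```

With a real classifier in place of `fitness`, the same loop trades off detection accuracy against the on-device cost of computing each feature, which is the stated motivation in the abstract.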
Abstract: BACKGROUND: Diabetes, a globally escalating health concern, necessitates innovative solutions for efficient detection and management. Blood glucose control is an essential aspect of managing diabetes, as is finding the most effective ways to achieve it. The latest findings suggest that a basal insulin administration rate and a single, high-concentration injection before a meal may not be sufficient to maintain healthy blood glucose levels. While basal-rate insulin treatment can stabilize blood glucose levels over the long term, it may not be enough to bring the levels below the post-meal limit after 60 min. The short-term impacts of meals can be greatly reduced by high-concentration injections, which can help stabilize blood glucose levels; unfortunately, they cannot provide long-term stability that satisfies the post-meal or pre-meal restrictions. However, proportional-integral-derivative (PID) control with a basal dose maintains blood glucose levels within the target range for a longer period. AIM: To develop a closed-loop electronic system that automatically pumps the required insulin into the patient's body in synchronization with glucose sensor readings. METHODS: The proposed system integrates a glucose sensor, a decision unit, and a pumping module to specifically address the pumping of insulin and enhance system effectiveness. Serving as the intelligence hub, the decision unit analyzes data from the glucose sensor to determine the optimal insulin dosage, guided by a pre-existing glucose and insulin level table. The artificial intelligence detection block processes this information, providing decision instructions to the pumping module. Equipped with communication antennas, the glucose sensor and micropump operate in a feedback loop, creating a closed-loop system that eliminates the need for manual intervention. RESULTS: The incorporation of a PID controller to assess and regulate blood glucose and insulin levels in individuals with diabetes introduces a sophisticated and dynamic element to diabetes management. The simulation not only allows visualization of how the body responds to different inputs but also offers a valuable tool for predicting and testing the effects of various interventions over time. The PID controller's role in adjusting insulin dosage based on the discrepancy between desired setpoints and actual measurements showcases a proactive strategy for maintaining blood glucose levels within a healthy range. This dynamic feedback loop not only delays the onset of steady-state conditions but also effectively counteracts post-meal spikes in blood glucose. CONCLUSION: The WiFi-controlled voltage controller and the PID controller simulation collectively underscore the ongoing efforts to enhance efficiency, safety, and personalized care within the realm of diabetes management. These technological advancements not only contribute to the optimization of insulin delivery systems but also have the potential to reshape our understanding of glucose and insulin dynamics, fostering a new era of precision medicine in the treatment of diabetes.
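The PID loop described above, insulin rate adjusted by the error between measured glucose and a setpoint on top of a basal dose, can be sketched in a few lines. The one-compartment glucose response and all gain values below are invented purely to make the loop runnable; they are not the authors' patient model or tuning.

```python
# Illustrative discrete PID insulin loop (toy dynamics, untuned gains).
def simulate(setpoint=100.0, glucose=180.0, kp=0.05, ki=0.001, kd=0.02,
             basal=1.0, dt=1.0, steps=240):
    integral, prev_err = 0.0, glucose - setpoint
    for _ in range(steps):
        err = glucose - setpoint
        integral += err * dt
        deriv = (err - prev_err) / dt
        # PID law on top of the basal rate; a pump cannot deliver negative insulin
        insulin = max(0.0, basal + kp * err + ki * integral + kd * deriv)
        # toy dynamics: insulin lowers glucose, endogenous production raises it
        glucose += dt * (-0.5 * insulin + 0.5)
        prev_err = err
    return glucose

print(round(simulate(), 1))
```

The proportional term reacts to the current error, the integral term removes steady-state offset (the long-term role of the basal rate), and the derivative term damps the response to rapid post-meal rises, which mirrors the division of labor the abstract describes.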
Abstract: The objective, scientific conclusions obtained from research conducted in various fields of science show that era and worldview exist in unity and determine one another, and that era and worldview are the most important phenomena in understanding geniuses, historical events, personalities who have left a mark on the history of politics, and every individual as a whole. It is appropriate to briefly consider the problem in the context of human and personality factors. It is known that man has tried to understand natural phenomena since the beginning of time. Contact with the material world naturally affects his consciousness and even his subconscious as he solves problems that are important or useful for human life. Through this understanding, the worldview changes and is formed. Thus, depending on the material and moral development of all spheres of life and on the content and essence of events of progress, as civilizations replaced each other in different periods, periodization took place and became a system. If we take Europe, the hunting peoples of the Ice Age of 300,000 years ago spread to many parts of the world from Africa, where they had lived, in order to survive and meet more of their daily needs. The extensive integration of agricultural Ice Age people across the Earth included farming, fishing, animal husbandry, hunting, as well as handicrafts, and led to the revolutionary development of these fields. As economic activities led these first inhabitants of the planet from caves to more comfortable shelters, then to good houses, then to palaces, labor activities in various occupations, including crafts, developed rapidly. Thus, individuals who differed from the crowd (later this class would be called personalities, geniuses... -Kh.G.) began to appear. If we approach the issue from the point of view of history, we witness that worldview determines development in different periods. This idea can be expressed in such a way that each period can be considered to have developed or experienced a crisis according to the level of its worldview. In this direction of our thoughts, a question arises: what, then, is the worldview phenomenon of this era, the XXI century? Based on the general content of current events, characterized as the globalization stage of the modern world, we can say that the outlook of the historical stage we live in is based on the achievements of the last stage of the industrial revolution. In this article, by analyzing the history of the artificial intelligence system across the world's industrial revolutions, we study both the concept of progress of the industrial revolutions and the progressive, and at the same time regressive, development of the artificial intelligence system.
Abstract: Legacy-based threat detection systems have not been able to keep up with the exponential growth in the scope, frequency, and effect of cybersecurity threats, and artificial intelligence is being used as a result to help with the issue. This paper's primary goal is to examine how African nations are utilizing artificial intelligence to defend their infrastructure against cyberattacks. Artificial intelligence (AI) systems will make decisions that impact Africa's future. The lack of technical expertise, the labor pool, financial resources, data limitations, uncertainty, lack of structured data, absence of government policies, ethics, user attitudes, insufficient investment in research and development, and the requirement for more adaptable and dynamic regulatory systems all pose obstacles to the adoption of AI technologies in Africa. The paper discusses how African countries are adopting artificial intelligence solutions for cybersecurity, and it shows the impact of AI in identifying shadow data, monitoring for abnormalities in data access, and alerting cybersecurity professionals about potential threats from anyone accessing the data or sensitive information, saving valuable time in detecting and remediating issues in real time. The study finds that 69.16% of African companies are implementing information security strategies and, of these, 45% said they use technologies based on AI algorithms. This study finds that a large number of African businesses use tools that can track and analyze user behaviour in designated areas and spot anomalies, such as new users, strange IP addresses and login activity, changes to permissions on files, folders, and other resources, and the copying or erasure of massive amounts of data. We also find that just 18.18% of the surveyed countries have no national cybersecurity strategy or policy. The study proposes using big data security analytics to integrate AI.
Adopting it would be beneficial for all African nations, as it provides a range of cyberattack defense techniques.
Abstract: An intelligent security system is a set of systems that use modern information technologies such as artificial intelligence, cloud computing, big data, and face recognition to carry out comprehensive monitoring, early warning, prevention and control, and disposal for security protection. It is the development trend of future security systems, and it is also the basis for open sharing between higher education parks and universities. Using content analysis, unstructured interviews, and other research methods, this paper studies in depth the feasibility and basic ideas of constructing an intelligent security system in Shahe Higher Education Park, and distills basic experience and typical practices from the project's construction, which further promotes more intelligent, standardized, and scientific safety management in colleges and universities. It provides an important theoretical basis and practical guidance for opening and sharing between higher education parks and universities.
Abstract: BACKGROUND: Medication errors, especially in dosage calculation, pose risks in healthcare. Artificial intelligence (AI) systems like ChatGPT and Google Bard may help reduce errors, but their accuracy in providing medication information remains to be evaluated. AIM: To evaluate the accuracy of AI systems (ChatGPT 3.5, ChatGPT 4, Google Bard) in providing drug dosage information per Harrison's Principles of Internal Medicine. METHODS: A set of natural language queries mimicking real-world medical dosage inquiries was presented to the AI systems. Responses were analyzed using a 3-point Likert scale. The analysis, conducted with Python and its libraries, focused on basic statistics, overall system accuracy, and disease-specific and organ system accuracies. RESULTS: ChatGPT 4 outperformed the other systems, showing the highest rate of correct responses (83.77%) and the best overall weighted accuracy (0.6775). Disease-specific accuracy varied notably across systems, with some diseases being accurately recognized while others demonstrated significant discrepancies. Organ system accuracy also showed variable results, underscoring system-specific strengths and weaknesses. CONCLUSION: ChatGPT 4 demonstrates superior reliability in medical dosage information, yet variations across diseases emphasize the need for ongoing improvements. These results highlight AI's potential in aiding healthcare professionals, urging continuous development for dependable accuracy in critical medical situations.
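One way to read the "weighted accuracy" above is as a mean score on the 3-point Likert scale, normalized by the maximum score; the paper's exact definition is not given in the abstract, so the scoring scheme and sample ratings below are assumptions for illustration only.

```python
# Hypothetical reconstruction of 3-point Likert grading of AI answers:
# 2 = correct, 1 = partially correct, 0 = incorrect. "Weighted accuracy"
# is taken here as mean score / maximum score; the paper may define it
# differently.
def weighted_accuracy(scores, max_score=2):
    return sum(scores) / (max_score * len(scores))

ratings = [2, 2, 1, 0, 2, 2, 1, 2]  # invented sample of graded responses
print(weighted_accuracy(ratings))  # → 0.75
```

Under this reading, a weighted accuracy of 0.6775 can sit below the 83.77% fully-correct rate because partially correct and incorrect answers drag the normalized mean down.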
Funding: Supported by the National Key R&D Program of China (2022YFB4300500).
Abstract: The multi-mode integrated railway system, anchored by the high-speed railway, caters to diverse travel requirements both within and between cities, offering safe, comfortable, punctual, and eco-friendly transportation services. With the expansion of railway networks, enhancing the efficiency and safety of the comprehensive system has become a crucial issue in the advanced development of railway transportation. In light of the prevailing application of artificial intelligence technologies within railway systems, this study leverages large-model technology, characterized by robust learning capabilities, efficient associative abilities, and linkage analysis, to propose an artificial intelligence (AI)-powered railway control and dispatching system. The system is designed around four core functions: globally optimal unattended dispatching, synergetic transportation in multiple modes, high-speed automatic control, and precise maintenance decision and execution. The deployment pathway and essential tasks of the system are further delineated, alongside the challenges and obstacles encountered. The AI-powered system promises a significant enhancement in the operational efficiency and safety of the composite railway system, ensuring a more effective alignment between transportation services and passenger demands.
Abstract: BACKGROUND: Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM: To evaluate the feasibility of real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps, and to compare its performance with CAD EYE™ (Fujifilm, Tokyo, Japan). The influence of CADx on the optical diagnosis of an expert endoscopist was also investigated. METHODS: AI4CRP was developed in-house, and CAD EYE was proprietary software provided by Fujifilm. Both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, and histopathology was used as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence characterization value (range 0.0-1.0). A predefined cut-off value of 0.6 was set, with values <0.6 indicating benign and values ≥0.6 indicating premalignant colorectal polyps. Low-confidence characterizations were defined as values within 40% around the cut-off value of 0.6 (between 0.36 and 0.76). Self-critical AI4CRP's diagnostic performances excluded low-confidence characterizations. RESULTS: AI4CRP use was feasible and was performed on 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low-confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, which was higher than that of AI4CRP without exclusion. CAD EYE had 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. The diagnostic performances of the endoscopist alone (before AI) increased nonsignificantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). The diagnostic performances of the AI-assisted endoscopist were higher than those of both CADx systems, except for specificity, for which CAD EYE performed best. CONCLUSION: Real-time use of AI4CRP was feasible. The objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performances than AI4CRP alone.
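The self-critical characterization rule described above can be sketched as a thresholded classifier with abstention. Note the band interpretation is an assumption on my part: "around the cut-off" is read here as the interval between 0.36 and 0.76, so retained calls are confidently benign (<0.36) or confidently premalignant (>0.76).

```python
# Sketch of confidence-thresholded CADx characterization with abstention.
# Band edges taken from the abstract; treating the band as *low* confidence
# (i.e. abstain near the cut-off) is an assumed reading.
def characterize(conf, lo=0.36, hi=0.76, cutoff=0.6):
    if lo < conf < hi:
        return "low-confidence"  # excluded in self-critical mode
    return "premalignant" if conf >= cutoff else "benign"

print([characterize(c) for c in (0.2, 0.5, 0.9)])
# → ['benign', 'low-confidence', 'premalignant']
```

Excluding the abstained cases shrinks the evaluated set (14 of 51 here) but raises accuracy on the remainder, which is the trade-off the abstract reports.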
Abstract: This article proposes a comprehensive monitoring system for tunnel operation to address the associated risks. These risks include safety control risks, increased traffic flow, extreme weather events, and movement of tectonic plates. The proposed system is based on the Internet of Things and artificial intelligence identification technology. The monitoring system covers various aspects of tunnel operations, such as the slope of the entrance, the structural safety of the tunnel body, toxic and harmful gases that may appear during operation, excessively high or low temperature and humidity, poor illumination, water leakage or road water accumulation caused by extreme weather, combustion and smoke caused by fires, and more. The system enables comprehensive monitoring and early warning of fire protection systems, accident vehicles, and overheating vehicles. This will effectively improve safety during tunnel operation.
Funding: 2023 Annual Funded Projects for Educational Scientific Research at Xuzhou University of Technology, "Construction and Practice of the Quality Assurance System for Education and Teaching in Applied Undergraduate Colleges under the Background of Digitalization" (YGJ2345).
Abstract: This paper discusses optimization strategies for education and teaching quality assurance systems in applied colleges and universities against the background of digital intelligence. It first summarizes the relevant theories of digital-intelligence transformation and analyzes its impact on higher education. It then puts forward principles for constructing the quality assurance system of applied colleges, including strengthening quality-assurance consciousness, improving teachers' digital literacy, and implementing digital-intelligence governance. From a practical perspective, the paper expounds strategies such as optimizing the allocation of educational and teaching resources, constructing a diversified teaching-quality evaluation system, strengthening the construction and training of the teaching staff, and innovating teaching management methods. Specific optimization measures are put forward, such as improving policies, regulations, and system guarantees; strengthening school-enterprise cooperation that integrates industry, academia, and research; building an educational information platform; and improving the monitoring and feedback mechanism for educational quality.
Funding: Fujian Provincial Health Technology Project, No. 2020GGA079; Natural Science Foundation of Fujian Province, No. 2021J011380; National Natural Science Foundation of China, No. 62276146.
Abstract: Acute pancreatitis (AP) is a potentially life-threatening inflammatory disease of the pancreas, with clinical management determined by the severity of the disease. Diagnosis, severity prediction, and prognosis assessment of AP typically involve the use of imaging technologies, such as computed tomography, magnetic resonance imaging, and ultrasound, and scoring systems, including the Ranson, Acute Physiology and Chronic Health Evaluation II, and Bedside Index for Severity in AP scores. Computed tomography is considered the gold-standard imaging modality for AP due to its high sensitivity and specificity, while magnetic resonance imaging and ultrasound can provide additional information on biliary obstruction and vascular complications. Scoring systems utilize clinical and laboratory parameters to classify AP patients into mild, moderate, or severe categories, guiding treatment decisions such as intensive care unit admission, early enteral feeding, and antibiotic use. Despite the central role of imaging technologies and scoring systems in AP management, these methods have limitations in terms of accuracy, reproducibility, practicality, and economics. Recent advancements in artificial intelligence (AI) provide new opportunities to enhance their performance: AI algorithms can analyze vast amounts of clinical and imaging data, identify patterns in scoring systems, and predict the clinical course of the disease. AI-based models have shown promising results in predicting the severity and mortality of AP, but further validation and standardization are required before widespread clinical application. In addition, understanding the correlation among these three technologies will aid in developing new methods that can be used accurately, sensitively, and specifically in the diagnosis, severity prediction, and prognosis assessment of AP through their complementary advantages.
Funding: Supported by the Science and Technology Project of State Grid Corporation of China under Grant Number 52094021N010 (5400-202199534A-05-ZN).
Abstract: The integration of digital twins (DT) and 6G edge intelligence provides accurate forecasting for distributed resource control in smart parks. However, the adverse impact of model poisoning attacks on DT model training cannot be ignored. To address this issue, we first construct models of DT model training and of model poisoning attacks. An optimization problem is formulated to minimize the weighted sum of the DT loss function and the DT model training delay. The problem is then transformed and solved by the proposed Multi-timescAle endogenouS securiTy-aware DQN-based rEsouRce management algorithm (MASTER), based on DT-assisted state-information evaluation and attack detection. MASTER adopts multi-timescale deep Q-networks (DQNs) to jointly schedule local training epochs and devices, and it actively adjusts resource management strategies based on the estimated attack probability to achieve endogenous security awareness. Simulation results demonstrate that MASTER performs excellently in terms of DT model training accuracy and delay.
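As a rough illustration of the scheduling loop this abstract describes, the sketch below combines a weighted-sum objective with a security-aware epsilon-greedy action selection. All names, weights, and the exploration heuristic are hypothetical stand-ins; the actual MASTER algorithm uses multi-timescale DQNs trained on DT-assisted state information.

```python
import random

def weighted_cost(dt_loss: float, train_delay: float,
                  w_loss: float = 0.7, w_delay: float = 0.3) -> float:
    # The weighted sum the formulated problem minimizes (weights assumed).
    return w_loss * dt_loss + w_delay * train_delay

def choose_action(q_values: list, attack_prob: float,
                  epsilon: float = 0.1) -> int:
    """Security-aware epsilon-greedy pick over (epochs, device-set) actions.

    A higher estimated attack probability raises the exploration rate so
    the scheduler re-evaluates devices more often under suspected
    poisoning; this is an assumed heuristic, not MASTER's actual rule.
    """
    eps = min(1.0, epsilon + 0.5 * attack_prob)
    if random.random() < eps:
        return random.randrange(len(q_values))  # explore
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit
```

With exploration disabled (`epsilon=0.0`, `attack_prob=0.0`), `choose_action([0.2, 0.9, 0.4], 0.0, 0.0)` deterministically returns the greedy index 1; raising `attack_prob` makes random re-scheduling more likely.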
Funding: Supported by the National Natural Science Foundation of China (62271485, 61903363, U1811463, 62103411, 62203250) and the Science and Technology Development Fund of Macao SAR (0093/2023/RIA2, 0050/2020/A1).
Abstract: During our discussions at workshops for writing "What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence" [1], we had expected the next milestone for Artificial Intelligence (AI) to be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movie/theater technology that could be used for conducting new "Artificiofactual Experiments" [2] to replace conventional "Counterfactual Experiments" in scientific research and technical development for both natural and social studies [2]-[6]. Now we have OpenAI's Sora, so soon; but this is far from the final step, and it is just the beginning.