Blasting is the live wire of mining operations, with air overpressure (AOp) recognised as an end product of blasting. AOp is known to be one of the most important environmental hazards of mining, and further research in this area is required to help improve the safety of the working environment. A review of previous studies shows that many empirical and artificial intelligence (AI) methods have been proposed as forecasting models. As an alternative to these methods, this study proposes a new class of advanced artificial neural network known as the brain-inspired emotional neural network (BI-ENN) to predict AOp. The proposed BI-ENN approach is compared with two classical AOp predictors (the generalised predictor and the McKenzie formula) and three established AI methods: backpropagation neural network (BPNN), group method of data handling (GMDH), and support vector machine (SVM). From the analysis of the results, BI-ENN performs best, achieving the lowest RMSE, MAPE, and NRMSE and the highest R, VAF, and PI values of 1.0941, 0.8339%, 0.1243%, 0.8249, 68.0512%, and 1.2367, respectively, and can thus be used for monitoring and controlling AOp.
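The abstract ranks the predictors by a set of standard goodness-of-fit statistics. A minimal sketch of how those figures can be computed from measured versus predicted AOp values follows; the range-normalised form of NRMSE is an assumption here (conventions vary), and the paper's PI is omitted because its exact definition differs between studies.

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAPE (%), NRMSE (% of observed range), Pearson R, and VAF (%)
    for a predicted series against measurements. The NRMSE normalisation
    is an illustrative assumption; the paper may use another convention."""
    n = len(y_true)
    resid = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    mape = 100.0 * sum(abs(r) / abs(t) for r, t in zip(resid, y_true)) / n
    nrmse = 100.0 * rmse / (max(y_true) - min(y_true))
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    r = cov / (st * sp)
    mr = sum(resid) / n
    var_res = sum((e - mr) ** 2 for e in resid) / n
    var_t = sum((t - mt) ** 2 for t in y_true) / n
    vaf = 100.0 * (1.0 - var_res / var_t)  # variance accounted for
    return {"RMSE": rmse, "MAPE": mape, "NRMSE": nrmse, "R": r, "VAF": vaf}
```

A perfect predictor scores RMSE = 0, R = 1, and VAF = 100%, which is the direction in which the abstract ranks BI-ENN against the baselines.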
Rock thin-section identification is an indispensable geological exploration tool for understanding and recognizing the composition of the earth, and an important evaluation method for oil and gas exploration and development. It can be used to identify the petrological characteristics of reservoirs, determine the type of diagenesis, and distinguish the characteristics of reservoir space and pore structure. It is also necessary for understanding the physical properties and sedimentary environment of a reservoir, obtaining the relevant reservoir parameters, formulating the oil and gas development plan, and calculating reserves. The traditional thin-section identification method has a history of more than one hundred years. It depends mainly on geological experts' visual observation with the optical microscope and suffers from strong subjectivity, high dependence on experience, heavy workload, long identification cycles, and an inability to achieve complete and accurate quantification. In this paper, models for particle segmentation, mineralogy identification, and pore type identification are constructed using deep learning, computer vision, and other technologies, realizing intelligent thin-section identification. The paper overcomes the problem of multi-target recognition in image sequences, constructs a fine-grained classification network for multi-mode, multi-light-source imaging, and proposes a scheme of annotating data while building models, forming a scientific, quantitative, and efficient thin-section identification method. Experimental results and practical application show that the proposed intelligent thin-section identification technology not only greatly improves identification efficiency but also delivers intuitive, accurate, and quantitative identification results, a disruptive innovation relative to traditional thin-section identification practice.
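The particle-segmentation step described above ultimately separates a thin-section image into individual grains so that each can be classified. The paper's segmentation is a deep model, but the underlying idea of turning a binary grain mask into separately labelled particles can be sketched with classic connected-component labelling:

```python
from collections import deque

def label_particles(mask):
    """Label 4-connected foreground regions of a binary grid. This is only
    a sketch of the labelling concept behind particle segmentation; the
    paper's actual segmentation network is a learned deep model."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1            # new particle found: flood-fill it
                queue = deque([(y, x)])
                labels[y][x] = count
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return count, labels
```

Each labelled region can then be cropped and passed to the fine-grained mineralogy classifier independently, which is what makes per-grain quantification possible.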
The application of various artificial intelligence (AI) techniques, namely artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), genetic algorithm optimized least square support vector machine (GA-LSSVM), and multivariable regression (MVR) models, is presented to identify the real power transfer between generators and loads. These AI techniques adopt supervised learning: the modified nodal equation (MNE) method is first used to determine the real power contribution from each generator to the loads, and the MNE results together with load flow information are then used to train the AI techniques to estimate the power transfer. The 25-bus equivalent system of south Malaysia is used as a test system to illustrate the effectiveness of the various AI methods compared with the MNE method.
A large amount of mobile data from growing numbers of high-speed train (HST) users is bringing intelligent HST communications into the era of big data, and artificial intelligence (AI) based HST channel modeling is becoming a trend. This paper provides an AI-based channel characteristic prediction and scenario classification model for millimeter wave (mmWave) HST communications. First, a ray tracing method verified by measurement data is applied to reconstruct four representative HST scenarios. By setting the positions of the transmitter (Tx) and receiver (Rx) and other parameters, multi-scenario wireless channel big data is acquired. Then, based on the obtained channel database, a radial basis function neural network (RBF-NN) and a back propagation neural network (BP-NN) are trained for channel characteristic prediction and scenario classification. Finally, the prediction and classification capabilities of the networks are evaluated by calculating the root mean square error (RMSE). The results show that the RBF-NN generally achieves better performance than the BP-NN and is more applicable to the prediction of HST scenarios.
Eye diagnosis is a method of inspecting systemic diseases and syndromes by observing the eyes. With the development of intelligent diagnosis in traditional Chinese medicine (TCM), artificial intelligence (AI) can improve the accuracy and efficiency of eye diagnosis. However, research on intelligent eye diagnosis still faces many challenges, including the lack of standardized and precisely labeled data, of multi-modal information analysis, and of AI models for syndrome differentiation. The widespread application of AI models in medicine provides new insights and opportunities for this research. This study elaborates on three key technologies of AI models in the intelligent application of TCM eye diagnosis and explores their implications. First, a database for eye diagnosis was established based on self-supervised learning to address the lack of standardized and precisely labeled data. Next, cross-modal understanding and generation with deep neural network models was used to address the lack of multi-modal information analysis. Last, data-driven models for eye diagnosis were built to tackle the absence of syndrome differentiation models. In summary, research on intelligent eye diagnosis has great potential to benefit from the surge of AI model applications.
The multi-mode integrated railway system, anchored by the high-speed railway, caters to diverse travel requirements both within and between cities, offering safe, comfortable, punctual, and eco-friendly transportation services. With the expansion of railway networks, enhancing the efficiency and safety of the comprehensive system has become a crucial issue in the advanced development of railway transportation. In light of the prevailing application of artificial intelligence technologies within railway systems, this study leverages large model technology, characterized by robust learning capabilities, efficient associative abilities, and linkage analysis, to propose an artificial intelligence (AI) powered railway control and dispatching system. The system is designed around four core functions: globally optimal unattended dispatching, synergetic transportation across multiple modes, high-speed automatic control, and precise maintenance decision-making and execution. The deployment pathway and essential tasks of the system are further delineated, alongside the challenges and obstacles encountered. The AI-powered system promises a significant enhancement in the operational efficiency and safety of the composite railway system, ensuring a more effective alignment between transportation services and passenger demands.
With the rapid development of artificial intelligence, large language models (LLMs) have shown promising capabilities in mimicking human-level language comprehension and reasoning. This has sparked significant interest in applying LLMs to enhance various aspects of healthcare, ranging from medical education to clinical decision support. However, medicine involves multifaceted data modalities and nuanced reasoning skills, presenting challenges for integrating LLMs. This review introduces the fundamental applications of general-purpose and specialized LLMs, demonstrating their utility in knowledge retrieval, research support, clinical workflow automation, and diagnostic assistance. Recognizing the inherent multimodality of medicine, the review emphasizes multimodal LLMs and discusses their ability to process diverse data types, such as medical imaging and electronic health records, to augment diagnostic accuracy. To address LLMs' limitations regarding personalization and complex clinical reasoning, the review further explores the emerging development of LLM-powered autonomous agents for healthcare. Moreover, it summarizes evaluation methodologies for assessing LLMs' reliability and safety in medical contexts. LLMs have transformative potential in medicine; however, continuous optimization and ethical oversight are needed before these models can be effectively integrated into clinical practice.
BACKGROUND: Rapid on-site triage is critical after mass-casualty incidents (MCIs) and other mass injury events. Unmanned aerial vehicles (UAVs) have been used in MCIs to search for and rescue wounded individuals, but they depend mainly on the UAV operator's experience. We used UAVs and artificial intelligence (AI) to provide a new technique for MCI triage and more efficient solutions for emergency rescue. METHODS: This was a preliminary experimental study. We developed an intelligent triage system based on two AI algorithms, OpenPose and YOLO. Volunteers were recruited to simulate an MCI scene and triage, combined with UAV and fifth-generation (5G) mobile communication real-time transmission, to achieve triage in the simulated MCI scene. RESULTS: Seven postures were designed and recognized to achieve brief but meaningful triage in MCIs. Eight volunteers participated in the MCI simulation scenario. The simulation results showed that the proposed method was feasible for MCI triage tasks. CONCLUSION: The proposed technique may provide an alternative for MCI triage and is an innovative method in emergency rescue.
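The system maps detected body postures to triage categories. The paper's seven postures and their exact triage meanings are not given in the abstract, so the joint names and rules below are purely illustrative assumptions; they only sketch how pose keypoints from an OpenPose-style detector could be turned into a coarse triage tag.

```python
def triage_from_keypoints(kp):
    """Toy rule-based posture-to-triage mapping. `kp` maps joint names to
    (x, y) image coordinates with y growing downward. Joint names and
    rules are hypothetical, not the paper's actual seven postures."""
    def arm_raised(wrist, shoulder):
        return kp[wrist][1] < kp[shoulder][1]  # wrist above shoulder
    # Assumption: a casualty who can wave an arm overhead is ambulatory.
    if arm_raised("r_wrist", "r_shoulder") or arm_raised("l_wrist", "l_shoulder"):
        return "minor"
    # Assumption: head and hip at similar height means the person is lying down.
    if abs(kp["head"][1] - kp["hip"][1]) < 0.2 * abs(kp["head"][0] - kp["hip"][0]):
        return "immediate"
    return "delayed"
```

In a deployed system the keypoints would arrive over the 5G link from the UAV's pose detector, and the rule set would be replaced by the recognizer trained on the designed postures.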
Wearable and flexible electronics are shaping our lives with their unique advantages of light weight, good compliance, and desirable comfort. As we march into the era of the Internet of Things (IoT), numerous sensor nodes are distributed throughout networks to capture, process, and transmit diverse sensory information, which gives rise to a demand for self-powered sensors that reduce power consumption. Meanwhile, the rapid development of artificial intelligence (AI) and fifth-generation (5G) technologies provides an opportunity to enable smart decision-making and instantaneous data transmission in IoT systems. As sensor counts and dataset sizes continue to grow, conventional computing based on the von Neumann architecture can no longer meet the needs of brain-like, highly efficient sensing and computing applications. Neuromorphic electronics, drawing inspiration from the human brain, provide an alternative approach for efficient, low-power information processing. Hence, this review presents a general technology roadmap of self-powered sensors, with detailed discussion of their diversified applications in healthcare, human-machine interaction, smart homes, etc. Leveraging AI and virtual reality/augmented reality (VR/AR) techniques, the development from single sensors to intelligent integrated systems is reviewed in terms of step-by-step system integration and algorithm improvement. To realize efficient sensing and computing, brain-inspired neuromorphic electronics are then briefly discussed. Last, the review concludes by highlighting challenges and opportunities in materials, miniaturization, integration, multimodal information fusion, and artificial sensory systems.
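The neuromorphic electronics the review points to process information with spiking dynamics rather than clocked von Neumann steps. The textbook building block is the leaky integrate-and-fire (LIF) neuron; the sketch below uses arbitrary illustrative parameters (not values from the review) to show why such a unit computes only when input arrives, which is the source of the low-power appeal.

```python
def lif_run(i_in, v_rest=0.0, v_th=1.0, tau=10.0, r_m=1.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire neuron. Returns the time
    steps at which spikes occur. All parameter values are illustrative
    assumptions, not taken from the paper."""
    v = v_rest
    spikes = []
    for step, i in enumerate(i_in):
        # membrane leaks toward rest while integrating the input current
        v += dt / tau * (-(v - v_rest) + r_m * i)
        if v >= v_th:          # threshold crossing emits a spike...
            spikes.append(step)
            v = v_rest         # ...and resets the membrane potential
    return spikes
```

With no input the neuron stays silent and dissipates nothing in an analog realization; a sustained supra-threshold input produces a regular spike train, an event-driven code well matched to self-powered sensor outputs.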
Artificial intelligence can be applied indirectly to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can in turn provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific study of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature indexed in the Web of Science from 1994 to 2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; and (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, the technology has limitations, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, and research transparency). Future research should address data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective is that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an "either/or" scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints, and we provide illustrative examples to guide future assessments of and constraints on AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model that balances AI and physics is an achievable goal.
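One concrete way to impose the physical constraints the perspective calls for is to add a soft penalty to the training loss when a prediction violates a conserved quantity. The sketch below uses a toy "conserve the domain total" penalty; the actual constraints one would impose (mass, energy, or momentum budgets) and the weight `lam` are assumptions for illustration, not the paper's formulation.

```python
def constrained_loss(pred, target, lam=1.0):
    """Data-fit MSE plus a soft physical-constraint term that penalizes
    any mismatch between the domain totals of prediction and target
    (a stand-in for a real conservation law). `lam` weights the
    constraint and is an assumed hyperparameter."""
    n = len(pred)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    budget = abs(sum(pred) - sum(target)) / n  # violation of the conserved total
    return mse + lam * budget
```

Of two predictions with identical pointwise error, the one that respects the budget scores strictly lower, so gradient-based training is nudged toward physically consistent solutions; this is the "offline constraint" flavor, whereas the online schemes the authors advocate enforce consistency inside the running numerical model.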
Modern medicine relies on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored in clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, enabling quantitative analysis and the prediction of various clinical endpoints. Studies have reported that radiomics shows promising performance in diagnosis and in predicting treatment response and prognosis, demonstrating its potential as a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application to the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches for feature extraction and selection during feature engineering, and for handling imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
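The "high-throughput feature extraction" the review describes starts with simple first-order statistics over a region of interest's voxel intensities. A minimal sketch of three such features follows; real radiomics pipelines compute hundreds of standardized features (shape, texture, wavelet), and the bin count here is an arbitrary illustrative choice.

```python
import math

def first_order_features(intensities, bins=8):
    """Mean, variance, and histogram entropy of an ROI's intensity values,
    a tiny illustrative subset of first-order radiomic features."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / bins or 1.0        # guard against a constant ROI
    counts = [0] * bins
    for x in intensities:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    # Shannon entropy of the discretized intensity distribution
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "variance": var, "entropy": entropy}
```

Feature vectors like this, computed per lesion across a cohort, are what the feature-selection and statistical-modeling stages discussed in the review then operate on.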
Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture. Recent advancements in high-throughput phenotyping (HTP) technologies and artificial intelligence (AI) have revolutionized the field, enabling rapid and accurate assessment of crop traits on a large scale. The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement: AI algorithms can analyze and interpret large datasets, uncover complex patterns, and extract meaningful correlations between phenotypic traits and genetic factors, establishing predictive models that enable precise trait selection. These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection, thereby reducing the time and cost required for variety development. However, further research and collaboration are needed to overcome existing challenges and fully unlock the power of HTP and AI in crop improvement. The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement through an in-depth analysis of recent advances and applications, highlighting the associated benefits and challenges.
With the growing exploration of ideologies and technologies for potential applications of artificial intelligence (AI) in oncology, we here describe a holistic and structured concept termed intelligent oncology. Intelligent oncology is defined as a cross-disciplinary specialty that integrates oncology, radiology, pathology, molecular biology, multi-omics, and computer science, aiming to promote cancer prevention, screening, early diagnosis, and precision treatment. Its development has been facilitated by rapid advances in AI technologies such as natural language processing, machine/deep learning, computer vision, and robotic process automation. While the concept and applications of intelligent oncology are still in their infancy and many hurdles and challenges remain, we are optimistic that it will play a pivotal role in the future of basic, translational, and clinical oncology.
AIM: To develop an artificial intelligence (AI) diagnostic model based on a deep learning (DL) algorithm to diagnose different types of retinal vein occlusion (RVO) by recognizing color fundus photographs (CFPs). METHODS: In total, 914 CFPs of healthy people and patients with RVO were collected as experimental datasets and used to train, validate, and test the RVO diagnostic model. All images were divided into four categories [normal, central retinal vein occlusion (CRVO), branch retinal vein occlusion (BRVO), and macular retinal vein occlusion (MRVO)] by three fundus disease experts. A Swin Transformer was used to build the RVO diagnostic model, and diagnostic experiments for the different RVO types were conducted. The model's performance was compared with that of the experts. RESULTS: The accuracy of the model in diagnosing normal, CRVO, BRVO, and MRVO reached 1.000, 0.978, 0.957, and 0.978; the specificity reached 1.000, 0.986, 0.982, and 0.976; the sensitivity reached 1.000, 0.955, 0.917, and 1.000; and the F1-score reached 1.000, 0.955, 0.943, and 0.887, respectively. In addition, the areas under the curve for normal, CRVO, BRVO, and MRVO were 1.000, 0.900, 0.959, and 0.970, respectively. The diagnostic results were highly consistent with those of the fundus disease experts, and the diagnostic performance was superior. CONCLUSION: The diagnostic model developed in this study can accurately diagnose different types of RVO, effectively relieve the workload of clinicians, and support the follow-up clinical diagnosis and treatment of RVO patients.
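The per-class accuracy, specificity, sensitivity, and F1 figures reported above are conventionally derived one-vs-rest from the model's four-class confusion matrix. A minimal sketch of that derivation (not the paper's code) follows:

```python
def per_class_metrics(cm, cls):
    """One-vs-rest sensitivity, specificity, and F1 for class `cls` of a
    square confusion matrix cm[true_label][predicted_label]. Generic
    derivation, applicable to the four RVO categories or any other
    multi-class classifier."""
    n = len(cm)
    total = sum(cm[i][j] for i in range(n) for j in range(n))
    tp = cm[cls][cls]
    fn = sum(cm[cls][j] for j in range(n)) - tp   # missed cases of cls
    fp = sum(cm[i][cls] for i in range(n)) - tp   # false alarms for cls
    tn = total - tp - fn - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1
```

Running this over each of the four rows of the study's confusion matrix would reproduce the per-category table in the RESULTS section.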
The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and the data they generate to cyber security threats that must be addressed. This work investigates hybrid cyber threats (HCTs), which operate on an entirely new level with the increasing adoption of IIoT, and focuses on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework is proposed, in which L1 regularisation and Random Forest are used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model is employed to construct the correlation between IIoT components and different threats.
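The grey relation analysis mentioned above scores how closely a comparison sequence (e.g. a threat indicator over time) tracks a reference sequence (e.g. an IIoT component's behavior). A minimal sketch of the grey relational grade follows; the min-max normalisation and the standard distinguishing coefficient ζ = 0.5 are conventional choices assumed here, not details from the paper.

```python
def grey_relational_grade(reference, series, zeta=0.5):
    """Grey relational grade in [0, 1] between two equal-length sequences;
    1.0 means the normalised series track each other exactly. Sequences
    must be non-constant for the min-max normalisation to be defined."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) for x in xs]
    r, s = norm(reference), norm(series)
    deltas = [abs(a - b) for a, b in zip(r, s)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:
        return 1.0  # identical after normalisation
    coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Computing this grade between each IIoT component's profile and each threat signature yields the component-to-threat correlation table the framework builds.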
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that such techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research on conventional AI methods exists, there is a paucity of studies on advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system, denoted "AI2AI", is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize the security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Network (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system is adaptable to both federated and centralized learning environments and accommodates a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which achieves 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved by alternative systems based on other optimization techniques on the same dataset. The proposed optimization techniques increase the effectiveness of the anomaly detection system by achieving high accuracy while reducing the computational load on IoT devices through the adaptive selection of active features.
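The "adaptive selection of active features" by a genetic algorithm can be sketched in miniature: candidate 0/1 feature masks evolve under selection, crossover, and mutation toward whatever scoring function the detector defines. The real GAADPSDNN couples the GA to a deep network's validation accuracy; here a toy scorer stands in, and all GA hyperparameters are illustrative assumptions.

```python
import random

def ga_select(fitness, n_feats, pop=12, gens=25, seed=7):
    """Tiny genetic algorithm over 0/1 feature masks. `fitness` is any
    user-supplied scorer of a mask; in the paper's setting it would be
    detector accuracy minus a cost for each active feature."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop // 2]            # elitist truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_feats)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_feats)          # single-bit mutation
            child[i] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy scorer: reward keeping features 0 and 2, penalize every active feature.
def toy_fitness(mask):
    return 2 * mask[0] + 2 * mask[2] - sum(mask)
```

Because the best parents are carried forward unchanged, the best fitness in the population is non-decreasing across generations, mirroring how the paper's optimizer trims inactive features without losing detection accuracy.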
In recent years, the global surge of high-speed railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next generation of HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of artificial intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, railroad maintenance forecasting, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and the high-speed railway smart platform. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
This editorial provides commentary on the article "Potential and limitations of ChatGPT and generative artificial intelligence (AI) in medical safety education" recently published in the World Journal of Clinical Cases. AI has enormous potential for various applications in the field of Kawasaki disease (KD). The first is machine learning (ML) to assist in the diagnosis of KD; clinical prediction models have been constructed worldwide using ML. The second is using a gene signal calculation toolbox to identify KD, which can be used to monitor key clinical features and laboratory parameters of disease severity. The third is using deep learning (DL) to assist in cardiac ultrasound detection; the performance of DL algorithms in detecting coronary artery lesions is similar to that of experienced cardiac experts, promoting the diagnosis of KD. To effectively utilize AI in the diagnosis and treatment of KD, it is crucial to improve the accuracy of AI decision-making using more medical data, while addressing issues related to the protection of patients' personal information and responsibility for AI decisions. AI progress is expected to provide patients with accurate and effective medical services that will positively impact the diagnosis and treatment of KD in the future.
Obesity poses several challenges to healthcare and individual well-being and can be linked to several life-threatening diseases. In some instances, surgery is a viable option for reducing obesity-related risks and enabling weight loss. State-of-the-art technologies have the potential to provide long-term benefits in post-surgery living. In this work, an Internet of Things (IoT) framework is proposed to effectively communicate the daily living data and exercise routines of surgery patients and patients with excessive weight. The proposed IoT framework aims to enable seamless communication from wearable sensors and body networks to the cloud to create an accurate profile of each patient. It also attempts to automate data analysis and to represent the facts about a patient. The framework includes a co-channel interference avoidance mechanism and the ability to communicate high-volume activity data with minimal impact on the bandwidth requirements of the system. It also benefits from machine learning based activity classification systems with relatively high accuracy, which allow the communicated data to be translated into meaningful information.
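The activity-classification step that turns raw wearable data into "meaningful information" can be sketched with the simplest possible classifier: summarize each labelled activity's feature windows by a centroid and assign new windows to the nearest one. The paper does not specify its classifier, so this nearest-centroid scheme and the two-feature windows below are illustrative assumptions only.

```python
import math

def classify_activity(train, query):
    """Nearest-centroid classification of a wearable-sensor feature vector.
    `train` maps activity labels to lists of feature vectors (e.g. mean
    and variance of acceleration over a window); `query` is one new
    window's feature vector."""
    centroids = {}
    for label, vecs in train.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(dim)]
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], query))
```

In the proposed framework such a classifier could run either on the device (transmitting only labels, saving bandwidth) or in the cloud on the communicated windows; that placement trade-off is exactly what the co-channel and bandwidth mechanisms manage.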
Funding: This work was supported by the Ghana National Petroleum Corporation (GNPC) through the GNPC Professorial Chair in Mining Engineering at the University of Mines and Technology (UMaT), Ghana. The authors thank the Ghana National Petroleum Corporation (GNPC) for providing funding to support this work through the GNPC Professorial Chair in Mining Engineering at the University of Mines and Technology (UMaT), Ghana.
Abstract: Blasting is the lifeblood of mining operations, and air overpressure (AOp) is recognised as one of its by-products. AOp is known to be one of the most important environmental hazards of mining, and further research in this area is required to help improve the safety of the working environment. A review of previous studies shows that many empirical and artificial intelligence (AI) methods have been proposed as forecasting models. As an alternative to these methods, this study proposes a new class of advanced artificial neural network known as the brain-inspired emotional neural network (BI-ENN) to predict AOp. The proposed BI-ENN approach is compared with two classical AOp predictors (the generalised predictor and the McKenzie formula) and three established AI methods: the backpropagation neural network (BPNN), the group method of data handling (GMDH), and the support vector machine (SVM). The analysis shows that BI-ENN performs best, achieving the lowest RMSE, MAPE, and NRMSE and the highest R, VAF, and PI values of 1.0941, 0.8339%, 0.1243%, 0.8249, 68.0512%, and 1.2367, respectively, and can therefore be used for monitoring and controlling AOp.
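The abstract above ranks models by standard regression scores. As an illustration only (the paper's exact formulas, e.g. for the PI index, are not reproduced here), the commonly used definitions of RMSE, MAPE, NRMSE, R, and VAF can be sketched as:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard regression scores; the paper's own formulas may differ
    slightly (e.g. in how NRMSE is normalised)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y_true)) * 100.0           # percent
    nrmse = rmse / (y_true.max() - y_true.min()) * 100.0   # percent of range
    r = np.corrcoef(y_true, y_pred)[0, 1]                  # Pearson correlation
    vaf = (1.0 - np.var(err) / np.var(y_true)) * 100.0     # variance accounted for
    return {"RMSE": rmse, "MAPE": mape, "NRMSE": nrmse, "R": r, "VAF": vaf}
```

A perfect predictor scores RMSE = 0, MAPE = 0, R = 1, and VAF = 100%, which is the direction of "best" used in the comparison above.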
Funding: Supported by the Project of Basic Science Center for the National Natural Science Foundation of China (Grant No. 72088101).
Abstract: Rock thin-section identification is an indispensable geological exploration tool for understanding and recognizing the composition of the earth. It is also an important evaluation method for oil and gas exploration and development: it can be used to identify the petrological characteristics of reservoirs, determine the type of diagenesis, and distinguish the characteristics of reservoir space and pore structure. It is necessary for understanding the physical properties and sedimentary environment of the reservoir, obtaining the relevant reservoir parameters, formulating the oil and gas development plan, and calculating reserves. The traditional thin-section identification method has a history of more than one hundred years. It depends mainly on geological experts' visual observation with the optical microscope and suffers from strong subjectivity, high dependence on experience, heavy workload, long identification cycles, and an inability to achieve complete and accurate quantification. In this paper, models for particle segmentation, mineralogy identification, and intelligent pore-type identification are constructed using deep learning, computer vision, and other technologies, and intelligent thin-section identification is realized. This paper overcomes the problem of multi-target recognition in image sequences, constructs a fine-grained classification network for multi-mode, multi-light-source imaging, and proposes a scheme of annotating data while building models, forming a scientific, quantitative, and efficient thin-section identification method. Experimental results and practical applications show that the intelligent thin-section identification technology proposed in this paper not only greatly improves identification efficiency but also delivers intuitive, accurate, and quantitative identification results, a disruptive change to traditional thin-section identification practice.
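Particle-segmentation output is typically scored against expert annotation; a standard metric for that (not specific to this paper, which does not name its evaluation measure) is intersection-over-union between binary masks:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union between two binary masks, a common way
    to score grain-segmentation output against expert annotation."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0          # both masks empty: treat as a perfect match
    return np.logical_and(pred, gt).sum() / union
```

An IoU of 1.0 means the predicted grain boundary exactly matches the annotation; thresholds such as IoU > 0.5 are often used to count a grain as correctly segmented.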
Funding: The authors thank the Ministry of Higher Education, Malaysia (MOHE) for the financial funding of this project, and Universiti Kebangsaan Malaysia and Universiti Teknologi Malaysia for providing infrastructure and moral support for the research work.
Abstract: The application of various artificial intelligence (AI) techniques, namely the artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), genetic algorithm optimized least-squares support vector machine (GA-LSSVM), and multivariable regression (MVR) models, is presented to identify the real power transfer between generators and loads. These AI techniques adopt supervised learning, which first uses the modified nodal equation (MNE) method to determine the real power contribution from each generator to the loads. The results of the MNE method and load flow information are then used to estimate the power transfer with the AI techniques. The 25-bus equivalent system of south Malaysia is used as a test system to illustrate the effectiveness of the various AI methods compared with that of the MNE method.
Funding: Supported by the National Key R&D Program of China under Grant 2021YFB1407001; the National Natural Science Foundation of China (NSFC) under Grants 62001269 and 61960206006; the State Key Laboratory of Rail Traffic Control and Safety under Grant RCS2022K009, Beijing Jiaotong University; the Future Plan Program for Young Scholars of Shandong University; and the EU H2020 RISE TESTBED2 project under Grant 872172.
Abstract: The large amount of mobile data from growing numbers of high-speed train (HST) users is bringing intelligent HST communications into the era of big data, and artificial intelligence (AI) based HST channel modeling is becoming a trend. This paper provides an AI-based channel characteristic prediction and scenario classification model for millimeter-wave (mmWave) HST communications. First, a ray tracing method verified by measurement data is applied to reconstruct four representative HST scenarios. By setting the positions of the transmitter (Tx) and receiver (Rx) and other parameters, multi-scenario wireless channel big data is acquired. Then, based on the obtained channel database, a radial basis function neural network (RBF-NN) and a back propagation neural network (BP-NN) are trained for channel characteristic prediction and scenario classification. Finally, the prediction and classification capabilities of the networks are evaluated by calculating the root mean square error (RMSE). The results show that the RBF-NN generally achieves better performance than the BP-NN and is more applicable to prediction in HST scenarios.
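The RBF-NN named above fits a layer of Gaussian basis functions followed by a linear read-out. A minimal numpy sketch on synthetic data (the real channel dataset and the paper's network settings are not public here, so the target function, centre count, and width below are illustrative) shows the fit and the RMSE evaluation:

```python
import numpy as np

# Toy stand-in for a channel-characteristic curve: input x -> target y.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)

# Gaussian RBF design matrix with fixed centres and width (hyperparameters
# chosen for illustration, not taken from the paper).
centres = np.linspace(0.0, 1.0, 20)
width = 0.05
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width))

# Output-layer weights by linear least squares: the classic RBF-NN fit,
# which avoids the iterative backpropagation a BP-NN needs.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w

# The evaluation criterion used in the paper.
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
```

The closed-form output layer is one reason RBF networks are often quick to train and compare favourably on smooth regression targets like channel characteristics.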
Funding: National Natural Science Foundation of China (82274265 and 82274588); Hunan University of Traditional Chinese Medicine Research Unveiled Marshal Programs (2022XJJB003).
Abstract: Eye diagnosis is a method for inspecting systemic diseases and syndromes by observing the eyes. With the development of intelligent diagnosis in traditional Chinese medicine (TCM), artificial intelligence (AI) can improve the accuracy and efficiency of eye diagnosis. However, research on intelligent eye diagnosis still faces many challenges, including the lack of standardized and precisely labeled data, multi-modal information analysis, and AI models for syndrome differentiation. The widespread application of AI models in medicine provides new insights and opportunities for research on intelligent eye diagnosis. This study elaborates on three key technologies for the intelligent application of TCM eye diagnosis and explores their implications for the field. First, a database for eye diagnosis was established based on self-supervised learning to address the lack of standardized and precisely labeled data. Next, cross-modal understanding and generation with deep neural network models were used to address the lack of multi-modal information analysis. Last, data-driven models for eye diagnosis were built to tackle the absence of syndrome differentiation models. In summary, research on intelligent eye diagnosis has great potential to benefit from the surge of AI model applications.
Funding: Supported by the National Key R&D Program of China (2022YFB4300500).
Abstract: The multi-mode integrated railway system, anchored by the high-speed railway, caters to diverse travel requirements both within and between cities, offering safe, comfortable, punctual, and eco-friendly transportation services. With the expansion of railway networks, enhancing the efficiency and safety of the comprehensive system has become a crucial issue in the advanced development of railway transportation. In light of the prevailing application of artificial intelligence technologies within railway systems, this study leverages large-model technology, characterized by robust learning capabilities, efficient associative abilities, and linkage analysis, to propose an artificial intelligence (AI)-powered railway control and dispatching system. The system is designed around four core functions: globally optimal unattended dispatching, synergetic transportation across multiple modes, high-speed automatic control, and precise maintenance decision-making and execution. The deployment pathway and essential tasks of the system are further delineated, alongside the challenges and obstacles encountered. The AI-powered system promises a significant enhancement in the operational efficiency and safety of the composite railway system, ensuring a more effective alignment between transportation services and passenger demands.
Funding: Supported by the National Natural Science Foundation of China (91959205, U22A20327, 82203881, 12090022, 11831002, and 81801778); Beijing Natural Science Foundation (7222021); Beijing Hospitals Authority Youth Programme (QML20231115); Clinical Medicine Plus X-Young Scholars Project of Peking University (PKU2023LCXQ041); and the Guangdong Provincial Key Laboratory of Precision Medicine for Gastrointestinal Cancer (2020B121201004).
Abstract: With the rapid development of artificial intelligence, large language models (LLMs) have shown promising capabilities in mimicking human-level language comprehension and reasoning. This has sparked significant interest in applying LLMs to enhance various aspects of healthcare, ranging from medical education to clinical decision support. However, medicine involves multifaceted data modalities and nuanced reasoning skills, presenting challenges for integrating LLMs. This review introduces the fundamental applications of general-purpose and specialized LLMs, demonstrating their utility in knowledge retrieval, research support, clinical workflow automation, and diagnostic assistance. Recognizing the inherent multimodality of medicine, the review emphasizes multimodal LLMs and discusses their ability to process diverse data types, such as medical imaging and electronic health records, to augment diagnostic accuracy. To address LLMs' limitations regarding personalization and complex clinical reasoning, the review further explores the emerging development of LLM-powered autonomous agents for healthcare. Moreover, it summarizes evaluation methodologies for assessing LLMs' reliability and safety in medical contexts. LLMs have transformative potential in medicine; however, continuous optimization and ethical oversight are needed before these models can be effectively integrated into clinical practice.
Funding: Sanming Project of Medicine in Shenzhen (No. SZSM201911007); Shenzhen Stability Support Plan (20200824145152001).
Abstract: BACKGROUND: Rapid on-site triage is critical after mass-casualty incidents (MCIs) and other mass injury events. Unmanned aerial vehicles (UAVs) have been used in MCIs to search for and rescue wounded individuals, but their use mainly depends on the UAV operator's experience. We used UAVs and artificial intelligence (AI) to provide a new technique for MCI triage and more efficient solutions for emergency rescue. METHODS: This was a preliminary experimental study. We developed an intelligent triage system based on two AI algorithms, OpenPose and YOLO. Volunteers were recruited to simulate an MCI scene, and triage was performed using UAVs combined with fifth-generation (5G) mobile communication real-time transmission. RESULTS: Seven postures were designed and recognized to achieve brief but meaningful triage in MCIs. Eight volunteers participated in the MCI simulation scenario. The simulation results showed that the proposed method was feasible for MCI triage tasks. CONCLUSION: The proposed technique may provide an alternative approach to MCI triage and is an innovative method in emergency rescue.
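Once OpenPose has extracted 2-D keypoints, posture recognition reduces to rules or a classifier over joint coordinates. The seven postures and the rules used in the study are not specified in the abstract, so the joint names, thresholds, and triage categories below are purely illustrative:

```python
def triage_from_keypoints(kp):
    """Hypothetical triage rules from 2-D pose keypoints.

    `kp` maps joint names to (x, y) image coordinates, with y increasing
    downwards. These rules are a sketch only, not the study's classifier.
    """
    head_y = kp["head"][1]
    hip_y = kp["hip"][1]
    # The higher (smaller-y) of the two wrists.
    wrist_y = min(kp["left_wrist"][1], kp["right_wrist"][1])

    if wrist_y < head_y:              # a hand raised above the head
        return "conscious - can signal"
    if abs(head_y - hip_y) < 20:      # head and hip at similar height: lying down
        return "lying - immediate assessment"
    return "standing - delayed"
```

A production system would replace these thresholds with a trained classifier over the full keypoint vector, but the interface (keypoints in, triage category out) is the same.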
Funding: Supported by the Reimagine Research Scheme (RRSC) grant "Scalable AI Phenome Platform towards Fast-Forward Plant Breeding (Sensor)" (Nos. A-0009037-02-00 and A-0009037-03-00) at NUS, Singapore; the RRSC grant "Under-utilised Potential of Micro-biomes (soil) in Sustainable Urban Agriculture" (No. A-0009454-01-00) at NUS, Singapore; and the RIE Advanced Manufacturing and Engineering (AME) programmatic grant "Nanosystems at the Edge" (No. A18A4b0055) at NUS, Singapore.
Abstract: Wearable and flexible electronics are shaping our lives with their unique advantages of light weight, good compliance, and desirable comfort. As we march into the era of the Internet of Things (IoT), numerous sensor nodes are distributed throughout networks to capture, process, and transmit diverse sensory information, giving rise to a demand for self-powered sensors that reduce power consumption. Meanwhile, the rapid development of artificial intelligence (AI) and fifth-generation (5G) technologies provides an opportunity to enable smart decision-making and instantaneous data transmission in IoT systems. As sensor counts and dataset sizes continuously increase, conventional computing based on the von Neumann architecture can no longer meet the needs of brain-like, highly efficient sensing and computing applications. Neuromorphic electronics, drawing inspiration from the human brain, provide an alternative approach to efficient, low-power information processing. Hence, this review presents the general technology roadmap of self-powered sensors, with detailed discussion of their diversified applications in healthcare, human-machine interaction, smart homes, etc. Leveraging AI and virtual reality/augmented reality (VR/AR) techniques, the development from single sensors to intelligent integrated systems is reviewed in terms of step-by-step system integration and algorithm improvement. To realize efficient sensing and computing, brain-inspired neuromorphic electronics are then briefly discussed. Last, the review concludes and highlights challenges and opportunities in materials, miniaturization, integration, multimodal information fusion, and artificial sensory systems.
Funding: Supported by the Capital's Funds for Health Improvement and Research, No. 2022-2-2072 (to YG).
Abstract: Artificial intelligence can be applied indirectly to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific study of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature indexed in the Web of Science from 1994 to 2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; and (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, the technology has limitations, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, and research transparency). Future research should address data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42141019 and 42261144687) and STEP (Grant No. 2019QZKK0102), and by the Korea Environmental Industry & Technology Institute (KEITI) through the "Project for developing an observation-based GHG emissions geospatial information map", funded by the Korea Ministry of Environment (MOE) (Grant No. RS-2023-00232066).
Abstract: Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an "either/or" scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments and constraints for AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of achieving global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model that balances AI and physics is an achievable goal.
Funding: Supported in part by the National Natural Science Foundation of China (82072019); the Shenzhen Basic Research Program (JCYJ20210324130209023); the Shenzhen-Hong Kong-Macao S&T Program (Category C) (SGDX20201103095002019); the Mainland-Hong Kong Joint Funding Scheme (MHKJFS) (MHP/005/20); the Project of Strategic Importance Fund (P0035421) and the Projects of RISA (P0043001) from the Hong Kong Polytechnic University; the Natural Science Foundation of Jiangsu Province (BK20201441); the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research (SBGJ202103038, SBGJ202102056); the Henan Province Key R&D and Promotion Project (Science and Technology Research) (222102310015); the Natural Science Foundation of Henan Province (222300420575); and the Henan Province Science and Technology Research (222102310322).
Abstract: Modern medicine relies on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored in clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, enabling quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential as a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application to diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: feature extraction and selection during feature engineering, and handling of imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
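Feature selection is one of the feature-engineering steps discussed above. High-throughput radiomic pipelines often extract hundreds of near-duplicate features, so one common (generic, not paper-specific) first filter is greedy removal of highly correlated columns:

```python
import numpy as np

def drop_correlated(X, threshold=0.95):
    """Greedily keep the first feature of each highly correlated group.

    X has one sample per row and one radiomic feature per column.
    Returns the indices of the columns to keep.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        # Keep column j only if it is not near-duplicate of a kept column.
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep
```

More elaborate selectors (LASSO, mRMR, stability selection) are usually layered on top, but a correlation filter like this one cheaply shrinks the feature set before statistical modeling.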
Funding: Supported by a grant from the Standardization and Integration of Resources Information for Seed-cluster in Hub-Spoke Material Bank Program, Rural Development Administration, Republic of Korea (PJ01587004).
Abstract: Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture. Recent advancements in high-throughput phenotyping (HTP) technologies and artificial intelligence (AI) have revolutionized the field, enabling rapid and accurate assessment of crop traits on a large scale. The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement. AI algorithms can analyze and interpret large datasets and extract meaningful patterns and correlations between phenotypic traits and genetic factors. These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection, thereby reducing the time and cost required for variety development. However, further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement. By leveraging AI algorithms, researchers can efficiently analyze phenotypic data, uncover complex patterns, and establish predictive models that enable precise trait selection and crop breeding. The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement, encompassing an in-depth analysis of recent advances and applications and highlighting the associated benefits and challenges.
Funding: Supported by the National Natural Science Foundation of China (grant numbers 81974464 and 61906022); the Chongqing Natural Science Foundation (grant number cstc2020jcyj-msxmX0482); and the Chongqing University Research Fund (grant number 2021CDJXKJC004).
Abstract: With increasingly explored ideologies and technologies for potential applications of artificial intelligence (AI) in oncology, we here describe a holistic and structured concept termed intelligent oncology. Intelligent oncology is defined as a cross-disciplinary specialty that integrates oncology, radiology, pathology, molecular biology, multi-omics, and computer science, aiming to promote cancer prevention, screening, early diagnosis, and precision treatment. The development of intelligent oncology has been facilitated by rapid advances in AI technologies such as natural language processing, machine/deep learning, computer vision, and robotic process automation. While the concept and applications of intelligent oncology are still in their infancy and many hurdles and challenges remain, we are optimistic that it will play a pivotal role in the future of basic, translational, and clinical oncology.
Funding: Supported by the Shenzhen Fund for Guangdong Provincial High-level Clinical Key Specialties (No. SZGSP014); the Sanming Project of Medicine in Shenzhen (No. SZSM202011015); and the Shenzhen Science and Technology Planning Project (No. KCXFZ20211020163813019).
Abstract: AIM: To develop an artificial intelligence (AI) diagnosis model based on a deep learning (DL) algorithm to diagnose different types of retinal vein occlusion (RVO) by recognizing color fundus photographs (CFPs). METHODS: In total, 914 CFPs of healthy people and patients with RVO were collected as experimental datasets and used to train, verify, and test the RVO diagnostic model. All images were divided into four categories [normal, central retinal vein occlusion (CRVO), branch retinal vein occlusion (BRVO), and macular retinal vein occlusion (MRVO)] by three fundus disease experts. Swin Transformer was used to build the RVO diagnosis model, and diagnosis experiments for the different RVO types were conducted. The model's performance was compared with that of the experts. RESULTS: The accuracy of the model in diagnosing normal, CRVO, BRVO, and MRVO reached 1.000, 0.978, 0.957, and 0.978; the specificity reached 1.000, 0.986, 0.982, and 0.976; the sensitivity reached 1.000, 0.955, 0.917, and 1.000; and the F1-score reached 1.000, 0.955, 0.943, and 0.887, respectively. In addition, the areas under the curve for normal, CRVO, BRVO, and MRVO were 1.000, 0.900, 0.959, and 0.970, respectively. The diagnostic results were highly consistent with those of the fundus disease experts, and the diagnostic performance was superior. CONCLUSION: The diagnostic model developed in this study can reliably diagnose different types of RVO, effectively relieve the workload of clinicians, and support the subsequent clinical diagnosis and treatment of RVO patients.
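The per-class figures reported above (accuracy, specificity, sensitivity, F1) are one-vs-rest statistics: each RVO type is scored against all other classes combined. Given the confusion counts for one class, the standard definitions are:

```python
def per_class_metrics(tp, fp, fn, tn):
    """One-vs-rest metrics for a single class from its confusion counts:
    tp/fp/fn/tn = true/false positives and false/true negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)            # recall: true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1
```

For a four-class model like the one above, these are computed once per class (normal, CRVO, BRVO, MRVO) from the same confusion matrix, which is why four values of each metric are reported.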
Abstract: The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and the data they generate to cyber security threats that need to be addressed. This work investigates hybrid cyber threats (HCTs), which operate on an entirely new level in the increasingly adopted IIoT, and focuses on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework is proposed, in which L1 regularisation and Random Forest are used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model is employed to construct the correlation between IIoT components and different threats.
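The L1-plus-Random-Forest combination named above is a common pipeline: L1 regularisation zeroes out uninformative features, and the forest then ranks the survivors by importance. A sketch on synthetic data (the paper's IIoT feature set and exact models are not reproduced here; the classifier choices below are illustrative) using scikit-learn:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for IIoT traffic features labelled benign/threat.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# L1 regularisation drives the weights of uninformative features to zero.
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
kept = np.flatnonzero(l1.coef_[0])          # indices of surviving features

# Random Forest ranks the surviving features by impurity-based importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, kept], y)
ranking = kept[np.argsort(rf.feature_importances_)[::-1]]
```

`ranking` lists the retained feature indices from most to least important, which is the kind of per-feature impact analysis the framework applies to individual threats and HCTs.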
Abstract: While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies on advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved by alternative systems based on diverse optimization techniques and the same dataset. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
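The "adaptive selection of active features" above is driven by a genetic algorithm. The authors' exact algorithm is not given in the abstract, but the general mechanism (evolving binary feature masks through selection, crossover, and mutation, scored by a cheap classifier) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: only the first 3 of 10 features carry any class signal.
n, d = 300, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += y[:, None] * 2.0

def fitness(mask):
    """Nearest-centroid training accuracy on the selected features
    (a stand-in for the DNN the real system would train)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

# Minimal GA: truncation selection, uniform crossover, bit-flip mutation.
pop = rng.random((20, d)) < 0.5           # 20 random binary feature masks
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[::-1][:10]]        # keep the fittest half
    pa = elite[rng.integers(0, 10, 10)]               # parent pairs
    pb = elite[rng.integers(0, 10, 10)]
    kids = np.where(rng.random((10, d)) < 0.5, pa, pb)  # uniform crossover
    kids ^= rng.random((10, d)) < 0.05                  # bit-flip mutation
    pop = np.vstack([elite, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
```

Masks that include the informative features score higher, so the population drifts toward small, discriminative feature subsets, which is how such selection reduces the computational load on IoT devices.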
Funding: Supported by the National Natural Science Foundation of China (62172033).
Abstract: In recent years, the global surge of high-speed railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next generation of HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, accelerating the integration and application of artificial intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, railroad maintenance forecasting, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and the high-speed railway smart platform. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
Abstract: This editorial provides commentary on an article titled "Potential and limitations of ChatGPT and generative artificial intelligence (AI) in medical safety education" recently published in the World Journal of Clinical Cases. AI has enormous potential for various applications in the field of Kawasaki disease (KD). One is machine learning (ML) to assist in the diagnosis of KD, and clinical prediction models have been constructed worldwide using ML; the second is using a gene signal calculation toolbox to identify KD, which can be used to monitor key clinical features and laboratory parameters of disease severity; and the third is using deep learning (DL) to assist in cardiac ultrasound detection. The performance of the DL algorithm is similar to that of experienced cardiac experts in detecting coronary artery lesions, thereby promoting the diagnosis of KD. To effectively utilize AI in the diagnosis and treatment of KD, it is crucial to improve the accuracy of AI decision-making using more medical data, while addressing issues related to the protection of patients' personal information and responsibility for AI decisions. AI progress is expected to provide patients with accurate and effective medical services that will positively impact the diagnosis and treatment of KD in the future.
Funding: The authors would like to acknowledge the support of the Deputy for Research and Innovation, Ministry of Education, Kingdom of Saudi Arabia, for this research through a grant (NU/IFC/ENT/01/020) under the Institutional Funding Committee at Najran University, Kingdom of Saudi Arabia.
Abstract: Obesity poses several challenges to healthcare and the well-being of individuals, and can be linked to several life-threatening diseases. Surgery is a viable option in some instances to reduce obesity-related risks and enable weight loss. State-of-the-art technologies have the potential for long-term benefits in post-surgery living. In this work, an Internet of Things (IoT) framework is proposed to effectively communicate the daily living data and exercise routines of surgery patients and patients with excessive weight. The proposed IoT framework aims to enable seamless communication from wearable sensors and body networks to the cloud to create an accurate profile of each patient. It also attempts to automate data analysis and present the facts about a patient. The framework proposes a co-channel interference avoidance mechanism and the ability to communicate high-volume activity data with minimal impact on the bandwidth requirements of the system. It also benefits from machine learning based activity classification systems with relatively high accuracy, which allow the communicated data to be translated into meaningful information.
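The activity-classification step mentioned above typically works on short windows of wearable-sensor readings. The paper's actual classifier, features, and sensor set are not specified in the abstract, so the features, thresholds, and class names below are purely illustrative:

```python
import numpy as np

def activity_features(window):
    """Simple features from one window of accelerometer magnitudes:
    mean level, variability, and mean sample-to-sample change."""
    w = np.asarray(window, dtype=float)
    return np.array([w.mean(), w.std(), np.abs(np.diff(w)).mean()])

def classify(window, rest_threshold=0.1, walk_threshold=0.5):
    """Hypothetical threshold rules on signal variability; a real system
    would train a classifier on labelled windows instead."""
    _, std, _ = activity_features(window)
    if std < rest_threshold:
        return "resting"
    if std < walk_threshold:
        return "walking"
    return "exercising"
```

Classifying on-device and transmitting only the resulting labels (rather than raw samples) is one way such a framework keeps activity reporting within tight bandwidth budgets.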