Explainable Artificial Intelligence (XAI) enhances decision-making and improves on rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and to handle large amounts of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate with an epoch size of 5, a batch size of 16, and 5 clients, which yields a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
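The federated averaging step mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: client "models" are flat weight vectors, the five clients and all numbers are hypothetical, and the server computes a data-size-weighted average each round.

```python
# Minimal federated averaging (FedAvg) sketch: the server averages client
# weight vectors, weighting each client by its local dataset size.

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round over flat weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Five hypothetical clients with 2-dimensional weight vectors and equal
# dataset sizes (mirroring the 5-client setup described above).
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [1.0, 0.0], [0.0, 8.0]]
sizes = [10, 10, 10, 10, 10]
global_w = fed_avg(weights, sizes)
print(global_w)  # equal sizes -> plain mean: [2.0, 4.0]
```

With unequal sizes the aggregate shifts toward the larger clients, which is what lets FedAvg learn from imbalanced PHR silos without moving the raw records.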
A large amount of mobile data from growing numbers of high-speed train (HST) users has brought intelligent HST communications into the era of big data, and artificial intelligence (AI) based HST channel modeling has become a trend. This paper provides an AI-based channel characteristic prediction and scenario classification model for millimeter wave (mmWave) HST communications. Firstly, a ray tracing method verified by measurement data is applied to reconstruct four representative HST scenarios. By setting the positions of the transmitter (Tx), receiver (Rx), and other parameters, multi-scenario wireless channel big data is acquired. Then, based on the obtained channel database, a radial basis function neural network (RBF-NN) and a back propagation neural network (BP-NN) are trained for channel characteristic prediction and scenario classification. Finally, the channel characteristic prediction and scenario classification capabilities of the networks are evaluated by calculating the root mean square error (RMSE). The results show that the RBF-NN generally achieves better performance than the BP-NN and is more applicable to prediction in HST scenarios.
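The RMSE comparison used above to rank the two networks is straightforward to reproduce. The sketch below uses invented numbers standing in for a measured channel characteristic and for hypothetical RBF-NN and BP-NN outputs; only the metric itself is standard.

```python
# Root mean square error: the evaluation metric used to compare the two
# neural-network channel predictors. All values here are illustrative.
import math

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

true_vals = [10.0, 12.0, 9.5, 11.0]   # e.g. a channel characteristic per snapshot
rbf_pred  = [10.1, 11.8, 9.6, 11.2]   # hypothetical RBF-NN predictions
bp_pred   = [10.6, 12.9, 8.8, 10.2]   # hypothetical BP-NN predictions

# A lower RMSE means a better predictor; here the RBF-NN wins, as in the paper.
print(rmse(true_vals, rbf_pred), rmse(true_vals, bp_pred))
```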
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research on conventional AI methods exists, there is a paucity of studies on advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system adapts to both federated and centralized learning environments and accommodates a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which reaches 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
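The "adaptive selection of active features" via a genetic algorithm can be illustrated with a toy example. This is not the GAADPSDNN pipeline: the fitness function below is a stand-in that pretends features 0 and 3 are informative and penalizes large feature sets, where the real system would score each feature mask by classifier accuracy.

```python
import random

# Toy genetic-algorithm feature selection. Chromosomes are bitmasks over
# 6 candidate features; fitness rewards informative features (assumed to
# be indices 0 and 3 for illustration) and penalizes feature-set size.
random.seed(0)

N_FEATURES, POP, GENS = 6, 20, 30

def fitness(mask):
    informative = mask[0] + mask[3]       # stand-in for detection accuracy
    return informative - 0.1 * sum(mask)  # penalty models compute load

def mutate(mask):
    child = list(mask)
    child[random.randrange(N_FEATURES)] ^= 1  # flip one random bit
    return child

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]             # truncation selection (elitist)
    pop = parents + [mutate(random.choice(parents)) for _ in parents]

best = max(pop, key=fitness)
print(best)  # expected to keep features 0 and 3 and drop most others
```

Because the elite half survives unchanged each generation, the best fitness never decreases, which is why this style of search reliably prunes inactive features.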
●AIM: To quantify the performance of artificial intelligence (AI) in detecting glaucoma with spectral-domain optical coherence tomography (SD-OCT) images. ●METHODS: Electronic databases including PubMed, Embase, Scopus, ScienceDirect, ProQuest, and the Cochrane Library were searched for studies published before May 31, 2023 that adopted AI for glaucoma detection with SD-OCT images. All the literature was screened and extracted by two investigators. Meta-analysis, Meta-regression, subgroup analysis, and publication bias assessment were conducted in Stata 16.0. The risk of bias assessment was performed in RevMan 5.4 using the QUADAS-2 tool. ●RESULTS: Twenty studies and 51 models were selected for systematic review and Meta-analysis. The pooled sensitivity and specificity were 0.91 (95%CI: 0.86–0.94, I²=94.67%) and 0.90 (95%CI: 0.87–0.92, I²=89.24%). The pooled positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 8.79 (95%CI: 6.93–11.15, I²=89.31%) and 0.11 (95%CI: 0.07–0.16, I²=95.25%). The pooled diagnostic odds ratio (DOR) and area under the curve (AUC) were 83.58 (95%CI: 47.15–148.15, I²=100%) and 0.95 (95%CI: 0.93–0.97). There was no threshold effect (Spearman correlation coefficient=0.22, P>0.05). ●CONCLUSION: AI achieves high accuracy in detecting glaucoma from SD-OCT images. The application of AI-based algorithms, in a "doctor + artificial intelligence" model, can improve the diagnosis of glaucoma.
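The pooled metrics above follow the standard definitions PLR = sens/(1−spec), NLR = (1−sens)/spec, and DOR = PLR/NLR. A worked example from a single hypothetical 2×2 table (counts chosen so sensitivity and specificity roughly match the pooled 0.91 and 0.90):

```python
# Diagnostic test metrics from one 2x2 confusion table (TP/FP/FN/TN).

def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1 - spec)      # positive likelihood ratio
    nlr = (1 - sens) / spec      # negative likelihood ratio
    dor = plr / nlr              # diagnostic odds ratio
    return sens, spec, plr, nlr, dor

# Hypothetical counts: 100 glaucoma eyes, 100 healthy eyes.
sens, spec, plr, nlr, dor = diagnostic_metrics(tp=91, fp=10, fn=9, tn=90)
print(round(sens, 2), round(spec, 2), round(plr, 1), round(nlr, 2), round(dor, 1))
# 0.91 0.9 9.1 0.1 91.0
```

Note that meta-analytic pooling (bivariate random-effects models, as Stata implements) is more involved than this single-study calculation; the example only shows how each per-study metric is derived.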
BACKGROUND Diabetes, a globally escalating health concern, necessitates innovative solutions for efficient detection and management. Blood glucose control is an essential aspect of managing diabetes, as is finding the most effective ways to achieve it. The latest findings suggest that a basal insulin administration rate and a single high-concentration injection before a meal may not be sufficient to maintain healthy blood glucose levels. While basal-rate insulin treatment can stabilize blood glucose levels over the long term, it may not be enough to bring the levels below the post-meal limit after 60 min. The short-term impacts of meals can be greatly reduced by high-concentration injections, which help stabilize blood glucose levels; unfortunately, they cannot provide the long-term stability needed to satisfy the post-meal or pre-meal restrictions. However, proportional-integral-derivative (PID) control with a basal dose maintains blood glucose levels within the desired range for a longer period. AIM To develop a closed-loop electronic system that automatically pumps the required insulin into the patient's body in synchronization with glucose sensor readings. METHODS The proposed system integrates a glucose sensor, a decision unit, and a pumping module to specifically address the pumping of insulin and enhance system effectiveness. Serving as the intelligence hub, the decision unit analyzes data from the glucose sensor to determine the optimal insulin dosage, guided by a pre-existing glucose and insulin level table. The artificial intelligence detection block processes this information and provides decision instructions to the pumping module. Equipped with communication antennas, the glucose sensor and micropump operate in a feedback loop, creating a closed-loop system that eliminates the need for manual intervention. RESULTS The incorporation of a PID controller to assess and regulate blood glucose and insulin levels in individuals with diabetes introduces a sophisticated and dynamic element to diabetes management. The simulation not only allows visualization of how the body responds to different inputs but also offers a valuable tool for predicting and testing the effects of various interventions over time. The PID controller's role in adjusting insulin dosage based on the discrepancy between desired setpoints and actual measurements showcases a proactive strategy for maintaining blood glucose levels within a healthy range. This dynamic feedback loop not only delays the onset of steady-state conditions but also effectively counteracts post-meal spikes in blood glucose. CONCLUSION The WiFi-controlled voltage controller and the PID controller simulation collectively underscore the ongoing efforts to enhance efficiency, safety, and personalized care within the realm of diabetes management. These technological advancements not only contribute to the optimization of insulin delivery systems but also have the potential to reshape our understanding of glucose and insulin dynamics, fostering a new era of precision medicine in the treatment of diabetes.
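The PID feedback idea described above can be sketched with a toy simulation. This is not the paper's model: the one-line "patient" (a constant glucose appearance rate countered linearly by insulin) and every gain and unit below are illustrative assumptions, chosen only to show the setpoint-error-dose loop.

```python
# Toy closed-loop PID sketch: the controller sets an insulin infusion rate
# from the error between a glucose setpoint and the measured glucose.

def pid_step(error, state, kp, ki, kd, dt):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

setpoint, glucose = 100.0, 180.0       # mg/dL target vs. post-meal level
state, dt = (0.0, glucose - setpoint), 1.0
for _ in range(200):
    error = glucose - setpoint
    insulin, state = pid_step(error, state, kp=0.5, ki=0.05, kd=0.1, dt=dt)
    insulin = max(insulin, 0.0)        # a pump cannot infuse negative insulin
    # Toy physiology: constant glucose appearance (+2) minus insulin effect.
    glucose += (2.0 - 0.5 * insulin) * dt

print(round(glucose, 1))  # settles near the 100 mg/dL setpoint in this toy model
```

The integral term is what drives the steady-state error to zero despite the constant glucose appearance, mirroring the basal-rate role discussed above, while the proportional and derivative terms handle the post-meal transient.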
In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next generation of HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and the high-speed railway smart platform. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
The objective scientific conclusions obtained from research conducted in various fields of science prove that era and worldview exist in unity and are phenomena that determine one another, and that they are the most important phenomena for understanding geniuses, historical events, including personalities who have left a mark on political history, and every individual as a whole. It is appropriate to briefly consider the problem in the context of the human and personality factors. It is known that man has tried to understand natural phenomena since the beginning of time. Contact with the material world naturally affects his consciousness, and even his subconscious, as he solves problems that are important or useful for human life. Through this understanding, the worldview changes and is formed. Thus, depending on the material and moral development of all spheres of life and on the content and essence of events of progress, as civilizations replaced one another in different periods, periodization took place and became a system. If we take Europe, the people of the Ice Age of 300,000 years ago, who engaged in hunting to satisfy their hunger, in other words, the age of dinosaurs, spread to many parts of the world from Africa, where they had lived, in order to survive and better meet their daily needs. The extensive integration of agricultural Ice Age people across the Earth included farming, fishing, animal husbandry, hunting, as well as handicrafts, and led to the revolutionary development of these fields. As economic activities led these first inhabitants of the planet from caves to more comfortable shelters, then to good houses, then to palaces, labor activities in various occupations, including crafts, developed rapidly. Thus, individuals of the era who differed from the crowd (later this class would be called personalities, geniuses... -Kh.G.) began to appear. If we approach the issue from the point of view of history, we witness that the worldview determines development in different periods. This idea can be expressed in such a way that each period can be considered to have developed, or to have experienced a crisis, according to the level of its worldview. In this direction of our thoughts, the question arises: what, then, is the worldview phenomenon of this era, the XXI century? Based on the general content of current events, characterized as the globalization stage of the modern world, we can say that the outlook of the historical stage we live in is based on the achievements of the last stage of the industrial revolution. In this article, by analyzing the history of the artificial intelligence system during the world industrial revolutions, we study both the concept of progress of the industrial revolutions and the progressive, and at the same time regressive, development of the artificial intelligence system.
BACKGROUND Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM To evaluate the feasibility of the real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps and to compare its performance with CAD EYE^(TM) (Fujifilm, Tokyo, Japan). The influence of CADx on the optical diagnosis of an expert endoscopist was also investigated. METHODS AI4CRP was developed in-house, and CAD EYE is proprietary software provided by Fujifilm. Both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, and histopathology was used as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence characterization value (range 0.0–1.0). A predefined cut-off value of 0.6 was set, with values <0.6 indicating benign and values ≥0.6 indicating premalignant colorectal polyps. Low-confidence characterizations were defined as values in a 40% band around the cut-off value of 0.6 (<0.36 and >0.76). Self-critical AI4CRP's diagnostic performance excluded low-confidence characterizations. RESULTS AI4CRP use was feasible and performed on 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low-confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, which was higher than AI4CRP. CAD EYE had 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. The diagnostic performance of the endoscopist alone (before AI) increased nonsignificantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). The diagnostic performance of the AI-assisted endoscopist was higher than that of both CADx systems, except for specificity, for which CAD EYE performed best. CONCLUSION Real-time use of AI4CRP was feasible. Objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performance than AI4CRP.
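The self-critical rule above combines a decision cutoff with an abstention band. The sketch below is one reading of that rule, using the stated boundaries of 0.36 and 0.76: values near the 0.6 cutoff, inside the band, are flagged low-confidence so the self-critical variant can exclude them. The function names and the band interpretation are assumptions for illustration, not the AI4CRP code.

```python
# Confidence-banded characterization: cutoff 0.6 decides benign vs
# premalignant; values inside the band around the cutoff are flagged as
# low-confidence (to be excluded in "self-critical" evaluation).

def characterize(conf_value, cutoff=0.6, band=(0.36, 0.76)):
    """Return (label, high_confidence_flag) for a calibrated confidence value."""
    label = "premalignant" if conf_value >= cutoff else "benign"
    high_confidence = not (band[0] <= conf_value <= band[1])
    return label, high_confidence

print(characterize(0.92))  # ('premalignant', True)
print(characterize(0.55))  # ('benign', False) -- excluded by self-critical mode
print(characterize(0.20))  # ('benign', True)
```

Excluding the in-band cases trades coverage (here 14/51 polyps) for accuracy, which matches the reported jump in self-critical performance.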
This article proposes a comprehensive monitoring system for tunnel operation to address the associated risks, which include safety control risks, increased traffic flow, extreme weather events, and movement of tectonic plates. The proposed system is based on the Internet of Things and artificial intelligence identification technology. The monitoring system covers various aspects of tunnel operations, such as the slope of the entrance; the structural safety of the tunnel body; toxic and harmful gases that may appear during operation; excessively high or low temperature and humidity; poor illumination; water leakage or road water accumulation caused by extreme weather; combustion and smoke caused by fires; and more. The system enables comprehensive monitoring and early warning for fire protection systems, accident vehicles, and overheating vehicles, which will effectively improve safety during tunnel operation.
BACKGROUND: Rapid on-site triage is critical after mass-casualty incidents (MCIs) and other mass injury events. Unmanned aerial vehicles (UAVs) have been used in MCIs to search for and rescue wounded individuals, but they depend mainly on the UAV operator's experience. We used UAVs and artificial intelligence (AI) to provide a new technique for the triage of MCIs and more efficient solutions for emergency rescue. METHODS: This was a preliminary experimental study. We developed an intelligent triage system based on two AI algorithms, OpenPose and YOLO. Volunteers were recruited to simulate an MCI scene and triage, combined with UAV and fifth-generation (5G) mobile communication real-time transmission, to achieve triage in the simulated MCI scene. RESULTS: Seven postures were designed and recognized to achieve brief but meaningful triage in MCIs. Eight volunteers participated in the MCI simulation scenario. The results of the simulation scenarios showed that the proposed method was feasible for triage tasks in MCIs. CONCLUSION: The proposed technique may provide an alternative for the triage of MCIs and is an innovative method in emergency rescue.
Acute pancreatitis (AP) is a potentially life-threatening inflammatory disease of the pancreas, with clinical management determined by the severity of the disease. Diagnosis, severity prediction, and prognosis assessment of AP typically involve the use of imaging technologies, such as computed tomography, magnetic resonance imaging, and ultrasound, and scoring systems, including the Ranson, Acute Physiology and Chronic Health Evaluation II, and Bedside Index for Severity in AP scores. Computed tomography is considered the gold-standard imaging modality for AP due to its high sensitivity and specificity, while magnetic resonance imaging and ultrasound can provide additional information on biliary obstruction and vascular complications. Scoring systems utilize clinical and laboratory parameters to classify AP patients into mild, moderate, or severe categories, guiding treatment decisions such as intensive care unit admission, early enteral feeding, and antibiotic use. Despite the central role of imaging technologies and scoring systems in AP management, these methods have limitations in terms of accuracy, reproducibility, practicality, and cost. Recent advances in artificial intelligence (AI) provide new opportunities to enhance their performance by analyzing vast amounts of clinical and imaging data. AI algorithms can analyze large amounts of clinical and imaging data, identify scoring system patterns, and predict the clinical course of disease. AI-based models have shown promising results in predicting the severity and mortality of AP, but further validation and standardization are required before widespread clinical application. In addition, understanding the correlation between these three technologies will aid in developing new methods that can be used accurately, sensitively, and specifically in the diagnosis, severity prediction, and prognosis assessment of AP through complementary advantages.
BACKGROUND Barrett's esophagus (BE), which has increased in prevalence worldwide, is a precursor of esophageal adenocarcinoma. Although current research shows a gap between the detection rates of endoscopic BE and histological BE, we trained our artificial intelligence (AI) system with images of endoscopic BE and tested the system with images of histological BE. AIM To assess whether an AI system can aid in the detection of BE in our setting. METHODS Endoscopic narrow-band imaging (NBI) was collected from Chung Shan Medical University Hospital and Changhua Christian Hospital, resulting in 724 cases, with 86 patients having pathological results. Three senior endoscopists, who were instructing physicians of the Digestive Endoscopy Society of Taiwan, independently annotated the images in the development set to determine whether each image was classified as endoscopic BE. The test set consisted of 160 endoscopic images of the 86 cases with histological results. RESULTS Six pre-trained models were compared, and EfficientNetV2B2 (accuracy [ACC]: 0.8) was selected as the backbone architecture for further evaluation due to its better ACC results. In the final test, the AI system correctly identified 66 of 70 cases of BE and 85 of 90 cases without BE, resulting in an ACC of 94.37%. CONCLUSION Our AI system, which was trained on NBI of endoscopic BE, can adequately predict endoscopic images of histological BE. The ACC, sensitivity, and specificity are 94.37%, 94.29%, and 94.44%, respectively.
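The reported test-set figures can be reproduced directly from the stated counts (66 of 70 BE cases and 85 of 90 non-BE cases correct):

```python
# Confusion-matrix arithmetic behind the reported ACC/sensitivity/specificity.
tp, fn = 66, 70 - 66        # BE cases: correctly identified / missed
tn, fp = 85, 90 - 85        # non-BE cases: correctly identified / false alarms

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # 151 / 160
sensitivity = tp / (tp + fn)                    # 66 / 70
specificity = tn / (tn + fp)                    # 85 / 90

print(round(accuracy * 100, 2), round(sensitivity * 100, 2), round(specificity * 100, 2))
# 94.38 94.29 94.44  (the abstract reports accuracy as 94.37%, i.e. 94.375% truncated)
```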
Artificial intelligence can be applied indirectly to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994–2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; and (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, this technology has some limitations, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Modern medicine relies on various medical imaging technologies to observe patients' anatomy non-invasively. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential as a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application to the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
In the context of the era of continuous development of artificial intelligence, the labor value of university students is affected by technological substitution, and university students are required to constantly update their skills; both of these pose challenges for university students' employment prospects. However, artificial intelligence will also bring new opportunities, stimulating the innovation ability of university students and opening new directions for employment. In order to better cope with the possible impact of artificial intelligence, universities should incorporate employment guidance services into the "three-wide education" system. To achieve this, universities need to take the following measures: developing a dynamic monitoring system of university employment based on big data, constructing an employment guidance curriculum system that accompanies university students throughout the whole process, updating the mode of diversified employment guidance services, and establishing a team of employment guidance teachers who keep pace with the times. These measures aim to better adapt to job market demands in the context of artificial intelligence, guide students to respond actively to the possible impact of artificial intelligence technology, cultivate the core competencies and qualities that are less likely to be replaced by artificial intelligence, and promote the high-quality employment of university students.
Clinical applications of Artificial Intelligence (AI) for mental health care have experienced a meteoric rise in the past few years. AI-enabled chatbot software and applications have been administering significant medical treatments that were previously only available from experienced and competent healthcare professionals. Such initiatives, which range from "virtual psychiatrists" to "social robots" in mental health, strive to improve nursing performance and cost management, as well as to meet the mental health needs of vulnerable and underserved populations. Nevertheless, there is still a substantial gap between recent progress in AI mental health and the widespread use of these solutions by healthcare practitioners in clinical settings. Furthermore, treatments are frequently developed without clear ethical consideration. While AI-enabled solutions show promise in the realm of mental health, further research is needed to address the ethical and social aspects of these technologies, as well as to establish efficient research and medical practices in this innovative sector. Moreover, the current relevant literature still lacks a formal and objective review that specifically focuses on research questions from both developers and psychiatrists in the development of AI-enabled chatbot psychologists. Taking into account all the problems outlined in this study, we conducted a systematic review of AI-enabled chatbots in mental healthcare covering issues concerning psychotherapy and artificial intelligence. In this systematic review, we pose five research questions related to technologies in chatbot development, psychological disorders that can be treated by using chatbots, types of therapies that are enabled in chatbots, machine learning models and techniques in chatbot psychologists, and ethical challenges.
Obesity poses several challenges to healthcare and the well-being of individuals, and can be linked to several life-threatening diseases. In some instances, surgery is a viable option to reduce obesity-related risks and enable weight loss. State-of-the-art technologies have the potential for long-term benefits in post-surgery living. In this work, an Internet of Things (IoT) framework is proposed to effectively communicate the daily living data and exercise routines of surgery patients and patients with excessive weight. The proposed IoT framework aims to enable seamless communication from wearable sensors and body networks to the cloud to create an accurate profile of the patients. It also attempts to automate the data analysis and represent the facts about a patient. The IoT framework proposes a co-channel interference avoidance mechanism and the ability to communicate higher volumes of activity data with minimal impact on the bandwidth requirements of the system. The proposed IoT framework also benefits from machine learning based activity classification systems, with relatively high accuracy, which allow the communicated data to be translated into meaningful information.
With the increasing and rapid growth of COVID-19 cases, the healthcare schemes of several developed countries have reached the point of collapse. An important and critical step in fighting COVID-19 is powerful screening of diseased patients, such that positive patients can be treated and isolated. A chest radiology image-based diagnosis scheme might have several benefits over the traditional approach. The accomplishments of artificial intelligence (AI)-based techniques in automated diagnosis in the healthcare sector and the rapid increase in COVID-19 cases have created demand for AI-based automated diagnosis and recognition systems. This study develops an Intelligent Firefly Algorithm Deep Transfer Learning Based COVID-19 Monitoring System (IFFA-DTLMS). The proposed IFFA-DTLMS model mainly aims at identifying and categorizing the occurrence of COVID-19 on chest radiographs. To attain this, the presented IFFA-DTLMS model primarily applies a densely connected network (DenseNet121) model to generate a collection of feature vectors. In addition, the firefly algorithm (FFA) is applied for the hyperparameter optimization of the DenseNet121 model. Moreover, an autoencoder-long short-term memory (AE-LSTM) model is exploited for the classification and identification of COVID-19. To ensure the enhanced performance of the IFFA-DTLMS model, wide-ranging experiments were performed and the results are reviewed under distinctive aspects. The experimental values demonstrate the improvement of the IFFA-DTLMS model over recent approaches.
With the rise of the Internet of Vehicles (IoV) and the number of connected vehicles increasing on the roads, Cooperative Intelligent Transportation Systems (C-ITSs) have become an important area of research. As the number of Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication links increases, the amount of data received and processed in the network also increases. In addition, networking interfaces need to be made more secure, for which existing cryptography-based security schemes may not be sufficient. Thus, there is a need to augment them with intelligent network intrusion detection techniques. Some machine learning-based intrusion detection and anomaly detection techniques for vehicular networks have been proposed in recent times. However, given the expected large network size, extensive data processing is necessary for such anomaly detection methods. Deep learning solutions are lucrative options as they remove the necessity for feature selection. Therefore, with the amount of vehicular network traffic increasing at an unprecedented rate in the C-ITS scenario, the need for deep learning-based techniques is all the more heightened. This work presents three deep learning-based misbehavior classification schemes for intrusion detection in IoV networks using Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs). The proposed Deep Learning Classification Engines (DCLE) comprise single- or multi-step classification performed by deep learning models deployed on vehicular edge servers. Vehicular data received by the Road Side Units (RSUs) is pre-processed and forwarded to the edge server for classification following the three schemes proposed in this paper. The proposed classifiers identify 18 different vehicular behavior types, with F1-scores ranging from 95.58% to 96.75%, much higher than those of existing works. By running the classifiers on testbeds emulating edge servers, the prediction performance and prediction time of the proposed schemes are compared with those of existing studies.
Lung cancer is the leading cause of cancer-related death around the globe. The treatment and survival rates among lung cancer patients are significantly impacted by early diagnosis. Most diagnostic techniques can identify and classify only one type of lung cancer. It is crucial to close this gap with a system that detects all lung cancer types. This paper proposes an intelligent decision support system for this purpose. This system aims to support the quick and early detection and classification of all lung cancer types and subtypes to improve treatment and save lives. Its algorithm uses a Convolutional Neural Network (CNN) to perform deep learning and a Random Forest Algorithm (RFA) to help classify the type of cancer present using several extracted features, including histograms and energy. Numerous simulation experiments were conducted in MATLAB, evidencing that this system achieves 98.7% accuracy and over 98% precision and recall. A comparative assessment of accuracy, recall, precision, specificity, and F-score between the proposed algorithm and works from the literature shows that the proposed system outperforms existing methods in all considered metrics. This study found that using CNNs and RFAs is highly effective in detecting lung cancer, given the high accuracy, precision, and recall results. These results lead us to believe that bringing this kind of technology to doctors diagnosing lung cancer is critical.
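Two of the feature types named in this abstract, intensity histograms and "energy", can be sketched in plain Python. The paper's exact feature definitions (e.g., GLCM-based energy) may differ; the bin count and toy pixel data below are illustrative assumptions.

```python
# Sketch of a gray-level histogram and an "energy" feature
# (sum of squared bin probabilities) from a region of pixel intensities.
# Bin count and the toy pixel values are illustrative, not the paper's setup.

def histogram_energy(pixels, bins=8, levels=256):
    """Return (bin probabilities, energy) for a list of 0..levels-1 intensities."""
    counts = [0] * bins
    for p in pixels:
        counts[p * bins // levels] += 1
    probs = [c / len(pixels) for c in counts]
    energy = sum(q * q for q in probs)  # high when intensities concentrate in few bins
    return probs, energy

uniform = histogram_energy(list(range(256)))  # intensities spread over all bins
flat = histogram_energy([128] * 256)          # single-intensity region
print(uniform[1], flat[1])  # 0.125 1.0 -- energy separates the two textures
```

Higher energy indicates a more homogeneous region, which is the kind of texture cue a Random Forest can use for classification.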
Abstract: Explainable Artificial Intelligence (XAI) offers advanced features to enhance decision-making and improve rule-based techniques by using more advanced Machine Learning (ML)- and Deep Learning (DL)-based algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle a large amount of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate obtained with an epoch size of 5, a batch size of 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
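The federated averaging step this abstract refers to can be sketched in plain Python: the server combines client model weights as a sample-count-weighted mean each round. The client weights and sample sizes below are illustrative toy values, not the paper's actual model parameters.

```python
# Minimal sketch of federated averaging (FedAvg). Toy values only;
# only the client count (5) follows the paper's reported setup.

def federated_average(client_weights, client_sizes):
    """Sample-count-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    averaged = [0.0] * len(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += w * (size / total)
    return averaged

# Five clients, each with a toy 3-parameter "model" trained on 10 local samples:
weights = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [2.0, 2.0, 2.0],
           [1.0, 1.0, 1.0], [3.0, 3.0, 3.0]]
sizes = [10, 10, 10, 10, 10]
print(federated_average(weights, sizes))  # equal sizes reduce to a plain mean
```

In a real FL round, each client would first train locally (here, 5 epochs with batch size 16 per the abstract) before the server averages the resulting weights.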
Funding: supported by the National Key R&D Program of China under Grant 2021YFB1407001; the National Natural Science Foundation of China (NSFC) under Grants 62001269 and 61960206006; the State Key Laboratory of Rail Traffic Control and Safety under Grant RCS2022K009; Beijing Jiaotong University; the Future Plan Program for Young Scholars of Shandong University; and the EU H2020 RISE TESTBED2 project under Grant 872172.
Abstract: A large amount of mobile data from growing high-speed train (HST) users makes intelligent HST communications enter the era of big data. The corresponding artificial intelligence (AI)-based HST channel modeling is becoming a trend. This paper provides an AI-based channel characteristic prediction and scenario classification model for millimeter-wave (mmWave) HST communications. Firstly, a ray-tracing method verified by measurement data is applied to reconstruct four representative HST scenarios. By setting the positions of the transmitter (Tx), receiver (Rx), and other parameters, multi-scenario wireless channel big data is acquired. Then, based on the obtained channel database, a radial basis function neural network (RBF-NN) and a back propagation neural network (BP-NN) are trained for channel characteristic prediction and scenario classification. Finally, the channel characteristic prediction and scenario classification capabilities of the networks are evaluated by calculating the root mean square error (RMSE). The results show that the RBF-NN can generally achieve better performance than the BP-NN and is more applicable to prediction of HST scenarios.
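The RMSE criterion used to compare the two networks is straightforward to state in code; the toy prediction values below are illustrative only, not measured channel data.

```python
import math

def rmse(predicted, actual):
    """Root mean square error between predicted and true channel characteristics."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Toy channel-characteristic values (illustrative, not from the paper's database):
truth   = [1.0, 2.0, 3.0, 4.0]
rbf_out = [1.1, 1.9, 3.0, 4.2]  # closer predictions
bp_out  = [1.4, 2.5, 2.6, 4.5]  # looser predictions
print(rmse(rbf_out, truth) < rmse(bp_out, truth))  # True: lower RMSE is better
```

A lower RMSE over the held-out channel database is what the abstract means by the RBF-NN "achieving better performance".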
Abstract: While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system that can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier with 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
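The "adaptive selection of active features" via a genetic algorithm can be sketched generically: a GA evolves binary feature masks toward higher fitness. This is a toy GA, not the actual GAADPSDNN implementation; the fitness function and all parameters below are illustrative assumptions.

```python
import random

def ga_select(fitness, n_features, pop_size=20, generations=30, seed=1):
    """Toy genetic algorithm over binary feature masks (sketch only)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection: keep top half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # occasional bit-flip mutation
                i = rng.randrange(n_features)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: features 0-2 are informative; each extra active feature costs 1
# (standing in for the computational-load penalty the abstract mentions).
fit = lambda mask: 3 * sum(mask[:3]) - sum(mask[3:])
best = ga_select(fit, n_features=8)
print(best)
```

In the real system, fitness would be a validation score of the DNN trained on the masked features, trading detection accuracy against on-device cost.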
Abstract: ●AIM: To quantify the performance of artificial intelligence (AI) in detecting glaucoma with spectral-domain optical coherence tomography (SD-OCT) images. ●METHODS: Electronic databases including PubMed, Embase, Scopus, ScienceDirect, ProQuest, and Cochrane Library were searched before May 31, 2023 for studies that adopted AI for glaucoma detection with SD-OCT images. All pieces of literature were screened and extracted by two investigators. Meta-analysis, Meta-regression, subgroup analysis, and publication bias assessment were conducted in Stata 16.0. The risk of bias assessment was performed in RevMan 5.4 using the QUADAS-2 tool. ●RESULTS: Twenty studies and 51 models were selected for systematic review and Meta-analysis. The pooled sensitivity and specificity were 0.91 (95%CI: 0.86–0.94, I2=94.67%) and 0.90 (95%CI: 0.87–0.92, I2=89.24%). The pooled positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 8.79 (95%CI: 6.93–11.15, I2=89.31%) and 0.11 (95%CI: 0.07–0.16, I2=95.25%). The pooled diagnostic odds ratio (DOR) and area under the curve (AUC) were 83.58 (95%CI: 47.15–148.15, I2=100%) and 0.95 (95%CI: 0.93–0.97). There was no threshold effect (Spearman correlation coefficient=0.22, P>0.05). ●CONCLUSION: AI achieves high accuracy for the detection of glaucoma with SD-OCT images. The application of AI-based algorithms, together with a "doctor + artificial intelligence" workflow, can improve the diagnosis of glaucoma.
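As a sanity check, the likelihood ratios and diagnostic odds ratio follow directly from sensitivity and specificity. Plugging in the pooled point estimates (0.91, 0.90) gives values close to, but not identical to, the separately pooled PLR/NLR/DOR reported above, since bivariate meta-analytic pooling is not simple point arithmetic.

```python
def diagnostic_ratios(sensitivity, specificity):
    """PLR, NLR, and DOR from sensitivity and specificity."""
    plr = sensitivity / (1 - specificity)   # positive likelihood ratio
    nlr = (1 - sensitivity) / specificity   # negative likelihood ratio
    dor = plr / nlr                         # diagnostic odds ratio
    return plr, nlr, dor

plr, nlr, dor = diagnostic_ratios(0.91, 0.90)
print(plr, nlr, dor)  # approximately 9.1, 0.10, and 91
```

Compare with the pooled values in the abstract (PLR 8.79, NLR 0.11, DOR 83.58): the point-estimate arithmetic lands in the same range, which is a useful consistency check when reading such meta-analyses.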
Abstract: BACKGROUND Diabetes, a globally escalating health concern, necessitates innovative solutions for efficient detection and management. Blood glucose control is an essential aspect of managing diabetes, as is finding the most effective ways to control it. The latest findings suggest that a basal insulin administration rate and a single, high-concentration injection before a meal may not be sufficient to maintain healthy blood glucose levels. While the basal insulin rate treatment can stabilize blood glucose levels over the long term, it may not be enough to bring the levels below the post-meal limit after 60 min. The short-term impacts of meals can be greatly reduced by high-concentration injections, which can help stabilize blood glucose levels. Unfortunately, they cannot provide long-term stability to satisfy the post-meal or pre-meal restrictions. However, proportional-integral-derivative (PID) control with a basal dose maintains blood glucose levels within the range for a longer period. AIM To develop a closed-loop electronic system that automatically pumps the required insulin into the patient's body in synchronization with glucose sensor readings. METHODS The proposed system integrates a glucose sensor, a decision unit, and a pumping module to specifically address the pumping of insulin and enhance system effectiveness. Serving as the intelligence hub, the decision unit analyzes data from the glucose sensor to determine the optimal insulin dosage, guided by a pre-existing glucose and insulin level table. The artificial intelligence detection block processes this information, providing decision instructions to the pumping module. Equipped with communication antennas, the glucose sensor and micropump operate in a feedback loop, creating a closed-loop system that eliminates the need for manual intervention. RESULTS The incorporation of a PID controller to assess and regulate blood glucose and insulin levels in individuals with diabetes introduces a sophisticated and dynamic element to diabetes management. The simulation not only allows visualization of how the body responds to different inputs but also offers a valuable tool for predicting and testing the effects of various interventions over time. The PID controller's role in adjusting insulin dosage based on the discrepancy between desired setpoints and actual measurements showcases a proactive strategy for maintaining blood glucose levels within a healthy range. This dynamic feedback loop not only delays the onset of steady-state conditions but also effectively counteracts post-meal spikes in blood glucose. CONCLUSION The WiFi-controlled voltage controller and the PID controller simulation collectively underscore the ongoing efforts to enhance efficiency, safety, and personalized care within the realm of diabetes management. These technological advancements not only contribute to the optimization of insulin delivery systems but also have the potential to reshape our understanding of glucose and insulin dynamics, fostering a new era of precision medicine in the treatment of diabetes.
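The PID adjustment described in this abstract can be sketched as a discrete-time loop. The gains, the setpoint, and the one-compartment "plant" model below are illustrative placeholders, not the paper's tuned values or physiological model.

```python
# Minimal discrete PID controller sketch for glucose regulation.
# Gains and the toy plant dynamics are illustrative assumptions only.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = measurement - self.setpoint  # positive when glucose is too high
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Insulin dose, clamped non-negative (the pump cannot remove insulin):
        return max(0.0, self.kp * error + self.ki * self.integral + self.kd * derivative)

pid = PID(kp=0.05, ki=0.001, kd=0.01, setpoint=100.0)   # target 100 mg/dL
glucose = 180.0                                          # simulated post-meal spike
for _ in range(200):
    dose = pid.update(glucose, dt=1.0)
    glucose += -0.5 * dose - 0.02 * (glucose - 100.0)    # toy plant dynamics
print(round(glucose))  # settles near the 100 mg/dL setpoint
```

The clamp on the dose mirrors a physical constraint of insulin pumps; recovery from undershoot then relies on the body's own dynamics, which is one reason controller tuning matters in this application.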
Funding: supported by the National Natural Science Foundation of China (62172033).
Abstract: In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next-generation HSR, fueled by emerging services like video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and high-speed railway smart platforms. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
Abstract: The objective scientific conclusions obtained from research conducted in various fields of science prove that era and worldview exist in unity and are phenomena that determine one another, and that era and worldview are the most important phenomena in understanding geniuses, historical events, including personalities who have left a mark on the history of politics, and every individual as a whole. It is appropriate to briefly consider the problem in the context of human and personality factors. It is known that man has tried to understand natural phenomena since the beginning of time. Contact with the material world naturally affects his consciousness and even his subconscious as he solves problems that are important or useful for human life. Through this understanding, the worldview changes and is formed. Thus, depending on the material and moral development of all spheres of life, and on the content and essence of the events of progress as civilizations replaced one another in different periods, periodization took place and became a system. If we take Europe, the people of the Ice Age of 300,000 years ago, who engaged in hunting to meet their hunger needs, spread to many parts of the world from Africa, where they lived, in order to survive and meet more of their daily needs. The extensive integration of agricultural Ice Age people across the Earth included farming, fishing, animal husbandry, hunting, as well as handicrafts, etc., and led to the revolutionary development of these fields. As economic activities led these first inhabitants of the planet from caves to more comfortable shelters, then to good houses, then to palaces, labor activities in various occupations, including crafts, developed rapidly. Thus, individuals of the era who differed from the crowd (later this class would be called personalities, geniuses... -Kh.G.) began to appear. If we approach the issue from the point of view of history, we witness that worldview determines development in different periods. This idea can be expressed in such a way that each period can be considered to have developed or experienced a crisis according to the level of its worldview. In this direction of our thoughts, the question arises: So, what is the worldview phenomenon of this era, the XXI century? Based on the general content of current events, characterized as the globalization stage of the modern world, we can say that the outlook of the historical stage we live in is based on the achievements of the last stage of the industrial revolution. In this article, by analyzing the history of the artificial intelligence system during the world's industrial revolutions, we study both the concept of progress of the industrial revolutions and the progressive, and at the same time regressive, development of the artificial intelligence system.
Abstract: BACKGROUND Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM To evaluate the feasibility of real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps and to compare its performance with CAD EYE^(TM) (Fujifilm, Tokyo, Japan). The influence of CADx on the optical diagnosis of an expert endoscopist was also investigated. METHODS AI4CRP was developed in-house and CAD EYE was proprietary software provided by Fujifilm. Both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, and histopathology was used as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence characterization value (range 0.0-1.0). A predefined cut-off value of 0.6 was set, with values <0.6 indicating benign and values ≥0.6 indicating premalignant colorectal polyps. Low-confidence characterizations were defined as values within a 40% band around the cut-off value of 0.6 (<0.36 and >0.76). Self-critical AI4CRP's diagnostic performances excluded low-confidence characterizations. RESULTS AI4CRP use was feasible and performed on 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low-confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, which was higher compared to AI4CRP. CAD EYE had 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. Diagnostic performances of the endoscopist alone (before AI) increased nonsignificantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). Diagnostic performances of the AI-assisted endoscopist were higher compared to both CADx systems, except for specificity, for which CAD EYE performed best. CONCLUSION Real-time use of AI4CRP was feasible. Objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performances compared to AI4CRP.
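The confidence-based characterization logic described in this abstract can be sketched as follows. The 0.6 cut-off labels a polyp, and values falling in a band around the cut-off (read here as 0.36-0.76) are flagged as low confidence and excluded under "self-critical" operation; this band interpretation and the toy confidence values are assumptions for illustration.

```python
# Sketch of AI4CRP-style confidence thresholding (interpretation of the
# abstract's band definition; values below are illustrative).

CUTOFF = 0.6
LOW_BAND = (0.36, 0.76)  # characterizations inside this band count as low confidence

def characterize(confidence):
    """Map a calibrated confidence value to a label plus a low-confidence flag."""
    label = "premalignant" if confidence >= CUTOFF else "benign"
    low_confidence = LOW_BAND[0] <= confidence <= LOW_BAND[1]
    return label, low_confidence

print(characterize(0.12))  # ('benign', False): confidently benign
print(characterize(0.55))  # ('benign', True): excluded when self-critical
print(characterize(0.90))  # ('premalignant', False)
```

Excluding the middle band is what lets "self-critical" operation trade coverage (14 of 51 polyps excluded) for higher accuracy on the remainder.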
Abstract: This article proposes a comprehensive monitoring system for tunnel operation to address the risks associated with tunnel operations. These risks include safety control risks, increased traffic flow, extreme weather events, and movement of tectonic plates. The proposed system is based on the Internet of Things and artificial intelligence identification technology. The monitoring system covers various aspects of tunnel operations, such as the slope of the entrance, the structural safety of the tunnel body, toxic and harmful gases that may appear during operation, excessively high or low temperature and humidity, poor illumination, water leakage or road water accumulation caused by extreme weather, combustion and smoke caused by fires, and more. The system enables comprehensive monitoring and early warning of fire protection systems, accident vehicles, and overheating vehicles. This will effectively improve safety during tunnel operation.
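A minimal form of the early-warning logic for the monitored quantities listed above is threshold checking per sensor. All threshold values and sensor names below are illustrative placeholders, not engineering limits from the article.

```python
# Sketch of threshold-based early warning for tunnel sensor readings.
# Limits are (low, high) bounds; None means that side is unbounded.
# All values are illustrative assumptions.

LIMITS = {
    "co_ppm":         (None, 24.0),   # toxic gas: upper bound only
    "temperature_c":  (-10.0, 40.0),  # excessively low or high temperature
    "humidity_pct":   (20.0, 80.0),
    "illuminance_lx": (50.0, None),   # poor illumination: lower bound only
}

def check(sensor, value):
    """Return 'ok' or a warning string for one sensor reading."""
    low, high = LIMITS[sensor]
    if low is not None and value < low:
        return "warning: low"
    if high is not None and value > high:
        return "warning: high"
    return "ok"

print(check("co_ppm", 60.0))          # warning: high
print(check("illuminance_lx", 10.0))  # warning: low
print(check("humidity_pct", 55.0))    # ok
```

In the proposed system, the AI identification layer (e.g., for smoke, accidents, or overheating vehicles) would sit on top of such per-sensor checks rather than replace them.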
Funding: Sanming Project of Medicine in Shenzhen (No. SZSM201911007); Shenzhen Stability Support Plan (20200824145152001).
Abstract: BACKGROUND: Rapid on-site triage is critical after mass-casualty incidents (MCIs) and other mass injury events. Unmanned aerial vehicles (UAVs) have been used in MCIs to search for and rescue wounded individuals, but they mainly depend on the UAV operator's experience. We used UAVs and artificial intelligence (AI) to provide a new technique for the triage of MCIs and more efficient solutions for emergency rescue. METHODS: This was a preliminary experimental study. We developed an intelligent triage system based on two AI algorithms, namely OpenPose and YOLO. Volunteers were recruited to simulate an MCI scene and triage, combined with UAV and Fifth-Generation (5G) Mobile Communication Technology real-time transmission, to achieve triage in the simulated MCI scene. RESULTS: Seven postures were designed and recognized to achieve brief but meaningful triage in MCIs. Eight volunteers participated in the MCI simulation scenario. The results of the simulation scenarios showed that the proposed method was feasible for triage tasks in MCIs. CONCLUSION: The proposed technique may provide an alternative technique for the triage of MCIs and is an innovative method in emergency rescue.
Funding: Fujian Provincial Health Technology Project, No. 2020GGA079; Natural Science Foundation of Fujian Province, No. 2021J011380; National Natural Science Foundation of China, No. 62276146.
Abstract: Acute pancreatitis (AP) is a potentially life-threatening inflammatory disease of the pancreas, with clinical management determined by the severity of the disease. Diagnosis, severity prediction, and prognosis assessment of AP typically involve the use of imaging technologies, such as computed tomography, magnetic resonance imaging, and ultrasound, and scoring systems, including the Ranson, Acute Physiology and Chronic Health Evaluation II, and Bedside Index for Severity in AP scores. Computed tomography is considered the gold-standard imaging modality for AP due to its high sensitivity and specificity, while magnetic resonance imaging and ultrasound can provide additional information on biliary obstruction and vascular complications. Scoring systems utilize clinical and laboratory parameters to classify AP patients into mild, moderate, or severe categories, guiding treatment decisions, such as intensive care unit admission, early enteral feeding, and antibiotic use. Despite the central role of imaging technologies and scoring systems in AP management, these methods have limitations in terms of accuracy, reproducibility, practicality, and economics. Recent advancements in artificial intelligence (AI) provide new opportunities to enhance their performance by analyzing vast amounts of clinical and imaging data. AI algorithms can analyze large amounts of clinical and imaging data, identify scoring system patterns, and predict the clinical course of the disease. AI-based models have shown promising results in predicting the severity and mortality of AP, but further validation and standardization are required before widespread clinical application. In addition, understanding the correlation between these three technologies will aid in developing new methods that can be used accurately, sensitively, and specifically in the diagnosis, severity prediction, and prognosis assessment of AP through complementary advantages.
Abstract: BACKGROUND Barrett's esophagus (BE), which has increased in prevalence worldwide, is a precursor for esophageal adenocarcinoma. Although there is a gap between the detection rates of endoscopic BE and histological BE in current research, we trained our artificial intelligence (AI) system with images of endoscopic BE and tested the system with images of histological BE. AIM To assess whether an AI system can aid in the detection of BE in our setting. METHODS Endoscopic narrow-band imaging (NBI) was collected from Chung Shan Medical University Hospital and Changhua Christian Hospital, resulting in 724 cases, with 86 patients having pathological results. Three senior endoscopists, who were instructing physicians of the Digestive Endoscopy Society of Taiwan, independently annotated the images in the development set to determine whether each image should be classified as endoscopic BE. The test set consisted of 160 endoscopic images of 86 cases with histological results. RESULTS Six pre-trained models were compared, and EfficientNetV2B2 (accuracy [ACC]: 0.8) was selected as the backbone architecture for further evaluation due to better ACC results. In the final test, the AI system correctly identified 66 of 70 cases of BE and 85 of 90 cases without BE, resulting in an ACC of 94.37%. CONCLUSION Our AI system, which was trained on NBI of endoscopic BE, can adequately predict endoscopic images of histological BE. The ACC, sensitivity, and specificity are 94.37%, 94.29%, and 94.44%, respectively.
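The reported test-set figures can be reproduced from the stated confusion counts (66 of 70 BE cases and 85 of 90 non-BE cases correct, 160 images total):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# 66/70 BE cases correct (tp=66, fn=4); 85/90 non-BE correct (tn=85, fp=5):
acc, sens, spec = diagnostic_metrics(tp=66, fn=4, tn=85, fp=5)
print(acc, sens, spec)  # approximately 0.9437, 0.9429, 0.9444 as reported
```

Being able to recover an abstract's headline metrics from its raw counts is a quick consistency check when reading diagnostic AI studies.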
Funding: supported by the Capital's Funds for Health Improvement and Research, No. 2022-2-2072 (to YG).
Abstract: Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994-2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, there are some limitations to this technology, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Funding: supported in part by the National Natural Science Foundation of China (82072019); the Shenzhen Basic Research Program (JCYJ20210324130209023); the Shenzhen-Hong Kong-Macao S&T Program (Category C) (SGDX20201103095002019); the Mainland-Hong Kong Joint Funding Scheme (MHKJFS) (MHP/005/20); the Project of Strategic Importance Fund (P0035421) and the Projects of RISA (P0043001) from the Hong Kong Polytechnic University; the Natural Science Foundation of Jiangsu Province (BK20201441); the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research (SBGJ202103038, SBGJ202102056); the Henan Province Key R&D and Promotion Project (Science and Technology Research) (222102310015); the Natural Science Foundation of Henan Province (222300420575); and the Henan Province Science and Technology Research (222102310322).
Abstract: Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we introduce the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Abstract: In the context of the era of continuous development of artificial intelligence, the labor value of university students is impacted by technological substitution. Simultaneously, university students are also required to constantly update their skills. All of the above will challenge university students' employment prospects. However, artificial intelligence will also bring new opportunities, which will stimulate the innovation ability of university students and bring new directions for employment. In order to better cope with the possible impact of artificial intelligence, universities should incorporate employment guidance services into the "three-wide education" system. To achieve this, universities need to take the following measures: developing a dynamic monitoring system of university employment based on big data, constructing an employment guidance curriculum system for university students throughout the whole process, updating the mode of diversified employment guidance services, and establishing a team of employment guidance teachers keeping pace with the times. These measures aim to better adapt to job market demands in the context of artificial intelligence, guide students to actively respond to the possible impact of artificial intelligence technology, cultivate their core competencies and qualities that are less likely to be replaced by artificial intelligence, and promote the high-quality employment of university students.
Funding: This work was supported by the grant “Development of an intellectual system prototype for online psychological support that can diagnose and improve youth's psychoemotional state”, funded by the Ministry of Education of the Republic of Kazakhstan, Grant No. IRN AP09259140.
Abstract: Clinical applications of Artificial Intelligence (AI) for mental health care have experienced a meteoric rise in the past few years. AI-enabled chatbot software and applications have been administering significant medical treatments that were previously only available from experienced and competent healthcare professionals. Such initiatives, which range from “virtual psychiatrists” to “social robots” in mental health, strive to improve nursing performance and cost management, as well as to meet the mental health needs of vulnerable and underserved populations. Nevertheless, there is still a substantial gap between recent progress in AI mental health and the widespread use of these solutions by healthcare practitioners in clinical settings. Furthermore, treatments are frequently developed without clear consideration of ethical concerns. While AI-enabled solutions show promise in the realm of mental health, further research is needed to address the ethical and social aspects of these technologies, as well as to establish efficient research and medical practices in this innovative sector. Moreover, the current literature still lacks a formal and objective review that specifically addresses research questions from both developers and psychiatrists on the development of AI-enabled chatbot psychologists. Taking into account all the problems outlined in this study, we conducted a systematic review of AI-enabled chatbots in mental healthcare covering issues at the intersection of psychotherapy and artificial intelligence. In this systematic review, we pose five research questions concerning technologies used in chatbot development, psychological disorders that can be treated using chatbots, types of therapies that chatbots enable, machine learning models and techniques in chatbot psychologists, and ethical challenges.
Funding: The authors would like to acknowledge the support of the Deputy for Research and Innovation, Ministry of Education, Kingdom of Saudi Arabia, for this research through grant NU/IFC/ENT/01/020 under the Institutional Funding Committee at Najran University, Kingdom of Saudi Arabia.
Abstract: Obesity poses several challenges to healthcare and the well-being of individuals, and can be linked to several life-threatening diseases. In some instances, surgery is a viable option to reduce obesity-related risks and enable weight loss. State-of-the-art technologies have the potential to provide long-term benefits in post-surgery living. In this work, an Internet of Things (IoT) framework is proposed to effectively communicate the daily living data and exercise routines of surgery patients and patients with excessive weight. The proposed IoT framework aims to enable seamless communication from wearable sensors and body networks to the cloud to create an accurate profile of each patient. It also attempts to automate data analysis and present the facts about a patient. The framework incorporates a co-channel interference avoidance mechanism and the ability to communicate higher volumes of activity data with minimal impact on the bandwidth requirements of the system. It also benefits from machine learning-based activity classification with relatively high accuracy, which allows the communicated data to be translated into meaningful information.
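The activity classification component of such a framework can be sketched, under assumptions, as a nearest-centroid classifier over simple statistics of tri-axial accelerometer windows. The feature set and classifier below are illustrative stand-ins, not the framework's actual models.

```python
import numpy as np

def window_features(accel_window):
    """Summarize a window of tri-axial accelerometer samples (N x 3 array)."""
    mag = np.linalg.norm(accel_window, axis=1)   # per-sample magnitude
    return np.array([mag.mean(), mag.std(), mag.max() - mag.min()])

class NearestCentroidActivity:
    """Toy nearest-centroid activity classifier over window features."""

    def fit(self, windows, labels):
        feats = np.array([window_features(w) for w in windows])
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[np.array(labels) == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, window):
        f = window_features(window)
        d = np.linalg.norm(self.centroids_ - f, axis=1)
        return self.labels_[int(np.argmin(d))]
```

Classifying on compact window features rather than raw samples is one way the framework's goal of "higher activity data with minimal impact on bandwidth" can be served: only a few statistics per window need to leave the body network.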
Funding: The Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant no. G:366-140-38.
Abstract: With the rapidly growing number of COVID-19 cases, the healthcare systems of several developed countries have reached the point of collapse. An important and critical step in fighting COVID-19 is effective screening of infected patients, so that positive patients can be treated and isolated. A diagnosis scheme based on chest radiology images can have several benefits over traditional approaches. The success of artificial intelligence (AI)-based techniques for automated diagnosis in the healthcare sector, together with the rapid increase in COVID-19 cases, has created demand for AI-based automated diagnosis and recognition systems. This study develops an Intelligent Firefly Algorithm Deep Transfer Learning Based COVID-19 Monitoring System (IFFA-DTLMS). The proposed IFFA-DTLMS model mainly aims at identifying and categorizing the occurrence of COVID-19 in chest radiographs. To attain this, the model first applies a densely connected network (DenseNet121) to generate a collection of feature vectors. In addition, the firefly algorithm (FFA) is applied for hyperparameter optimization of the DenseNet121 model. Moreover, an autoencoder-long short-term memory (AE-LSTM) model is exploited for the classification and identification of COVID-19. To verify the enhanced performance of the IFFA-DTLMS model, wide-ranging experiments were performed and the results were reviewed under distinctive aspects. The experimental results demonstrate the improvement of the IFFA-DTLMS model over recent approaches.
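The firefly algorithm used for hyperparameter optimization can be sketched in its generic continuous form. In the paper's setting the objective would be the validation loss of DenseNet121 as a function of its hyperparameters; here it is any callable, and the parameter defaults (beta0, gamma, alpha) are illustrative assumptions, not those of the IFFA-DTLMS model.

```python
import numpy as np

def firefly_minimize(objective, bounds, n_fireflies=15, n_iter=50,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Minimal firefly algorithm for continuous minimization.

    Each firefly moves toward brighter (lower-objective) fireflies with
    distance-decaying attraction, plus a small random walk. gamma is
    tuned to the search-domain scale here (assumption).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))
    light = np.array([objective(p) for p in x])   # lower value = brighter
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:           # move i toward brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
                    x[i] = np.clip(x[i], lo, hi)
                    light[i] = objective(x[i])
    best = int(np.argmin(light))
    return x[best], light[best]
```

For hyperparameter search, each dimension of `bounds` would correspond to one tunable quantity (e.g. a learning rate or dropout rate), with the objective evaluated by a short training run.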
Funding: The work of Vinay Chamola and F. Richard Yu was supported in part by the SICI SICRG Grant through the project Artificial Intelligence Enabled Security Provisioning and Vehicular Vision Innovations for Autonomous Vehicles, and in part by the Government of Canada's National Crime Prevention Strategy and the Natural Sciences and Engineering Research Council of Canada (NSERC) CREATE Program for Building Trust in Connected and Autonomous Vehicles (TrustCAV).
Abstract: With the rise of the Internet of Vehicles (IoV) and the growing number of connected vehicles on the roads, Cooperative Intelligent Transportation Systems (C-ITSs) have become an important area of research. As the number of Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication links increases, the amount of data received and processed in the network also increases. In addition, networking interfaces need to be made more secure, and existing cryptography-based security schemes may not be sufficient; thus, there is a need to augment them with intelligent network intrusion detection techniques. Some machine learning-based intrusion detection and anomaly detection techniques for vehicular networks have been proposed in recent times. However, given the expected large network size, extensive data processing is necessary for such anomaly detection methods. Deep learning solutions are attractive options as they remove the need for feature selection. Therefore, with vehicular network traffic increasing at an unprecedented rate in the C-ITS scenario, the need for deep learning-based techniques is all the more heightened. This work presents three deep learning-based misbehavior classification schemes for intrusion detection in IoV networks using Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs). The proposed Deep Learning Classification Engines (DCLE) comprise single- or multi-step classification performed by deep learning models deployed on vehicular edge servers. Vehicular data received by the Road Side Units (RSUs) are pre-processed and forwarded to the edge server for classification following the three schemes proposed in this paper. The proposed classifiers identify 18 different vehicular behavior types, with F1-scores ranging from 95.58% to 96.75%, much higher than in existing works. By running the classifiers on testbeds emulating edge servers, the prediction performance and prediction time of the proposed scheme are compared with those of existing studies.
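The single- or multi-step classification idea behind the DCLE can be sketched, under assumptions, as a two-stage pipeline: a binary detector first flags misbehavior, then a second model names the behavior type. Both stages here are pluggable callables standing in for the paper's LSTM/CNN models; the stage names and interfaces are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class TwoStepClassifier:
    """Illustrative multi-step classification scheme for an edge server.

    detect: stage-1 model returning True when the feature vector looks
            like misbehavior (stands in for a trained LSTM/CNN).
    classify_type: stage-2 model naming the behavior type, invoked only
            when stage 1 fires.
    """
    detect: Callable[[Sequence[float]], bool]
    classify_type: Callable[[Sequence[float]], str]

    def predict(self, features):
        if not self.detect(features):
            return "genuine"          # fast path: most traffic is benign
        return self.classify_type(features)
```

A design point this makes concrete: the cheaper binary stage filters the bulk of genuine traffic so the costlier multi-class model runs only on suspected misbehavior, which matters on resource-constrained edge servers.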
Funding: The authors would like to confirm that this research work was funded by Institutional Fund Projects under Grant No. IFPIP: 646-829-1443.
Abstract: Lung cancer is the leading cause of cancer-related death around the globe. Treatment and survival rates among lung cancer patients are significantly impacted by early diagnosis. Most diagnostic techniques can identify and classify only one type of lung cancer; it is crucial to close this gap with a system that detects all lung cancer types. This paper proposes an intelligent decision support system for this purpose. The system aims to support the quick and early detection and classification of all lung cancer types and subtypes to improve treatment and save lives. Its algorithm uses a Convolutional Neural Network (CNN) to perform deep learning and a Random Forest Algorithm (RFA) to classify the type of cancer present using several extracted features, including histograms and energy. Numerous simulation experiments were conducted in MATLAB, evidencing that the system achieves 98.7% accuracy and over 98% precision and recall. A comparative assessment of accuracy, recall, precision, specificity, and F-score between the proposed algorithm and works from the literature shows that the proposed system outperforms existing methods in all considered metrics. This study found that using CNNs and RFAs is highly effective in detecting lung cancer, given the high accuracy, precision, and recall results. These results lead us to believe that bringing this kind of technology to doctors diagnosing lung cancer is critical.
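The histogram and energy features named above can be sketched as follows. The paper's exact feature definitions are not given, so this uses a common textbook formulation (normalized intensity histogram plus its energy) as an assumption, written in Python rather than the paper's MATLAB.

```python
import numpy as np

def image_features(image, bins=16):
    """Extract the two feature families named in the abstract from a
    grayscale image patch with intensities in [0, 1]: a normalized
    intensity histogram and a histogram-energy measure.
    """
    img = np.asarray(image, dtype=float)
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    hist = hist / img.size                 # normalized histogram (sums to 1)
    energy = float(np.sum(hist ** 2))      # high for homogeneous regions
    return np.concatenate([hist, [energy]])
```

In the paper's pipeline, vectors like this (alongside CNN-derived features) would be fed to the random forest, whose trees vote on the cancer type; a uniform patch yields the maximum energy of 1.0, while textured tissue spreads mass across bins and scores lower.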