Objective To observe the value of artificial intelligence (AI) models based on non-contrast chest CT for measuring bone mineral density (BMD). Methods Totally 380 subjects who underwent both non-contrast chest CT and quantitative CT (QCT) BMD examination were retrospectively enrolled and divided into a training set (n=304) and a test set (n=76) at a ratio of 8:2. The mean BMD of the L1-L3 vertebrae was measured based on QCT. Spongy bones of the T5-T10 vertebrae were segmented as ROI, radiomics (Rad) features were extracted, and machine learning (ML), Rad, and deep learning (DL) models were constructed for classification of osteoporosis (OP) and evaluation of BMD, respectively. Receiver operating characteristic curves were drawn, and areas under the curves (AUC) were calculated to evaluate the efficacy of each model for classification of OP. Bland-Altman analysis and Pearson correlation analysis were performed to explore the consistency and correlation of each model with QCT for measuring BMD. Results Among the ML and Rad models, ML Bagging-OP and Rad Bagging-OP had the best performances for classification of OP. In the test set, the AUC of ML Bagging-OP, Rad Bagging-OP, and DL OP for classification of OP was 0.943, 0.944, and 0.947, respectively, with no significant difference (all P>0.05). BMD values obtained with all the above models had good consistency with those measured with QCT (most of the differences fell within the range of mean difference±1.96s), and were highly positively correlated (r=0.910-0.974, all P<0.001). Conclusion AI models based on non-contrast chest CT had high efficacy for classification of OP, and good consistency of BMD measurements was found between the AI models and QCT.
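The Bland-Altman consistency check and Pearson correlation used above can be sketched in a few lines of plain Python; the BMD values in the usage example are invented for illustration, not data from the study:

```python
import math

def bland_altman_limits(a, b):
    """Limits-of-agreement analysis for two paired measurement series.

    Returns (mean_diff, lower, upper): good agreement means most paired
    differences fall within mean_diff +/- 1.96 * SD of the differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d, mean_d - 1.96 * sd, mean_d + 1.96 * sd

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Hypothetical AI-model vs. QCT BMD values (mg/cm^3), for illustration only
ai_bmd  = [120.0, 95.5, 110.2, 130.8, 88.9]
qct_bmd = [118.5, 97.0, 108.9, 129.5, 90.1]
mean_d, lo, hi = bland_altman_limits(ai_bmd, qct_bmd)
r = pearson_r(ai_bmd, qct_bmd)
```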
Objective To observe the value of self-supervised deep learning artificial intelligence (AI) noise reduction technology based on the nearest adjacent layer applied to ultra-low dose CT (ULDCT) for urinary calculi. Methods Eighty-eight patients with urinary calculi were prospectively enrolled. Low dose CT (LDCT) and ULDCT scanning were performed, and the effective dose (ED) of each scanning protocol was calculated. The patients were then randomly divided into a training set (n=75) and a test set (n=13), and a self-supervised deep learning AI noise reduction system based on the nearest adjacent layer, constructed with ULDCT images in the training set, was used to reduce noise in ULDCT images in the test set. In the test set, the quality of ULDCT images before and after AI noise reduction was compared with LDCT images using Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) scores, image noise (SD ROI), and signal-to-noise ratio (SNR). Results The tube current, volume CT dose index, and dose length product of the abdominal ULDCT scanning protocol were all lower than those of the LDCT scanning protocol (all P<0.05), with a decrease of ED of approximately 82.66%. For the 13 patients with urinary calculi in the test set, BRISQUE scores showed that the quality of ULDCT images before AI noise reduction reached 54.42% of the level of LDCT images, rising to 95.76% after AI noise reduction. Both ULDCT images after AI noise reduction and LDCT images had lower SD ROI and higher SNR than ULDCT images before AI noise reduction (all adjusted P<0.05), whereas no significant difference was found between the former two (both adjusted P>0.05). Conclusion Self-supervised learning AI noise reduction technology based on the nearest adjacent layer could effectively reduce noise and improve image quality of urinary calculi ULDCT images, being conducive to clinical application of ULDCT.
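The two objective image-quality metrics above can be computed directly from a homogeneous region of interest; the mean/SD convention for SNR and the sample HU values below are assumptions for illustration, not details taken from the study:

```python
import statistics

def roi_noise_and_snr(roi_values):
    """Image noise (SD of ROI pixel values) and SNR for a homogeneous ROI.

    roi_values is a flat list of attenuation values (e.g. HU) sampled from
    the ROI. Noise is the standard deviation; SNR is taken here as
    mean / SD, one common convention (an assumption, not from the paper).
    Effective denoising should lower SD ROI and raise SNR.
    """
    sd_roi = statistics.stdev(roi_values)
    snr = statistics.mean(roi_values) / sd_roi
    return sd_roi, snr

# Illustrative values: same mean signal, different noise levels
clean = [100, 102, 98, 101, 99]
noisy = [100, 110, 90, 105, 95]
```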
BACKGROUND With the increasingly extensive application of artificial intelligence (AI) in medical systems, the accuracy of AI in medical diagnosis in the real world deserves attention and objective evaluation. AIM To investigate the accuracy of AI diagnostic software (Shukun) in assessing ischemic penumbra/core infarction in acute ischemic stroke patients due to large vessel occlusion. METHODS From November 2021 to March 2022, consecutive acute stroke patients with large vessel occlusion who underwent mechanical thrombectomy (MT) after Shukun AI penumbra assessment were included. Computed tomography angiography (CTA) and perfusion exams were analyzed by AI and reviewed by senior neurointerventional experts. In the case of divergences among the three experts, discussions were held to reach a final conclusion. When the results of AI were inconsistent with the neurointerventional experts' diagnosis, the diagnosis by AI was considered inaccurate. RESULTS A total of 22 patients were included in the study. The vascular recanalization rate was 90.9%, and 63.6% of patients had modified Rankin scale scores of 0-2 at the 3-month follow-up. The computed tomography (CT) perfusion diagnosis by Shukun (AI) was confirmed to be invalid in 3 patients (inaccuracy rate: 13.6%). CONCLUSION AI (Shukun) has limits in assessing ischemic penumbra. Integrating clinical and imaging data (CT, CTA, and even magnetic resonance imaging) is crucial for MT decision-making.
The missile interception problem can be regarded as a two-person zero-sum differential game, which depends on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation. It has been proved impossible to obtain a closed-form solution due to the nonlinearity of the HJI equation, and many iterative algorithms have been proposed to solve it. The simultaneous policy updating algorithm (SPUA) is an effective algorithm for solving the HJI equation, but it is an on-policy integral reinforcement learning (IRL) method. For online implementation of SPUA, the disturbance signals need to be adjustable, which is unrealistic. In this paper, an off-policy IRL algorithm based on SPUA is proposed without making use of any knowledge of the system dynamics. Then, a neural-network based online adaptive critic implementation scheme of the off-policy IRL algorithm is presented. Based on the online off-policy IRL method, a computational intelligence interception guidance (CIIG) law is developed for intercepting high-maneuvering targets. As a model-free method, it achieves interception by measuring system data online. The effectiveness of the CIIG law is verified through two missile-target engagement scenarios.
Computational intelligence (CI) is a group of nature-inspired computational models and processes for addressing difficult real-life problems. CI is useful in the UAV domain as it produces efficient, precise, and rapid solutions. Besides, unmanned aerial vehicles (UAV) have become a hot research topic in the smart city environment. Despite the benefits of UAVs, security remains a major challenge. In addition, deep learning (DL) enabled image classification is useful for several applications such as land cover classification, smart buildings, etc. This paper proposes a novel meta-heuristics with deep learning-driven secure UAV image classification (MDLS-UAVIC) model in a smart city environment. The major purpose of the MDLS-UAVIC algorithm is to securely encrypt the images and classify them into distinct class labels. The proposed MDLS-UAVIC model follows a two-stage process: encryption and image classification. The encryption technique effectively encrypts the UAV images. Next, the image classification process involves an Xception-based deep convolutional neural network for feature extraction. Finally, shuffled shepherd optimization (SSO) with a recurrent neural network (RNN) model is applied for UAV image classification, showing the novelty of the work. The experimental validation of the MDLS-UAVIC approach is tested on a benchmark dataset, and the outcomes are examined with various measures. It achieved a high accuracy of 98%.
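The abstract does not specify the encryption technique, so the encrypt-then-classify pipeline idea can only be illustrated generically. The sketch below is a toy, reversible stream cipher over flattened 8-bit pixels; it is NOT the MDLS-UAVIC scheme and is not cryptographically secure, but it shows the key property the pipeline relies on: encryption with a key, exact recovery with the same key:

```python
import hashlib

def xor_keystream(pixels, key):
    """Toy image scrambler (illustration only, not a secure cipher):
    derive a keystream by iterated SHA-256 hashing of the key and XOR it
    with the flattened pixel bytes. Applying the function twice with the
    same key recovers the original image exactly."""
    stream = b""
    block = key.encode()
    while len(stream) < len(pixels):
        block = hashlib.sha256(block).digest()
        stream += block
    return bytes(p ^ s for p, s in zip(pixels, stream))

# A hypothetical 4x4 8-bit "UAV image", flattened to bytes
img = bytes(range(16))
enc = xor_keystream(img, "uav-key")      # scrambled before transmission
dec = xor_keystream(enc, "uav-key")      # recovered before classification
```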
Computational psychiatry is an emerging field that not only explores the biological basis of mental illness but also informs diagnoses and identifies underlying mechanisms. One of its key strengths is that it may identify patterns in large datasets that are not otherwise easily identifiable, which may help researchers develop more effective treatments and interventions for mental health problems. This paper is a narrative review of the literature that produces an artificial intelligence ecosystem for computational psychiatry, comprising data acquisition, preparation, modeling, application, and evaluation. This approach allows researchers to integrate data from a variety of sources, such as brain imaging, genetics, and behavioral experiments, to obtain a more complete understanding of mental health conditions. Through the process of data preprocessing, training, and testing, the data required for model building can be prepared. By using machine learning, neural networks, and other artificial intelligence methods, researchers have been able to develop diagnostic tools that can accurately identify mental health conditions based on a patient's symptoms and other factors. Despite the continuous development and breakthroughs of computational psychiatry, it has not yet influenced routine clinical practice and still faces many challenges, such as data availability and quality, biological risks, equity, and data protection. As progress is made in this field, it is vital to ensure that computational psychiatry remains accessible and inclusive so that all researchers may contribute to this significant and exciting field.
Our living environments are gradually being occupied by an abundant number of digital objects that have networking and computing capabilities. After these devices are plugged into a network, they initially advertise their presence and capabilities in the form of services so that they can be discovered and, if desired, exploited by the user or other networked devices. With the increasing number of these devices attached to the network, the complexity of configuring and controlling them increases, which may lead to major processing and communication overhead. Hence, the devices are no longer expected to just act as primitive stand-alone appliances that only provide the facilities and services they are designed for, but also to offer complex services that emerge from unique combinations of devices. This creates the necessity for these devices to be equipped with some sort of intelligence and self-awareness to enable them to be self-configuring and self-programming. However, with this "smart evolution", the cognitive load to configure and control such spaces becomes immense. One way to relieve this load is by employing artificial intelligence (AI) techniques to create an intelligent "presence" where the system will be able to recognize the users and autonomously program the environment to be energy efficient and responsive to the user's needs and behaviours. These AI mechanisms should be embedded in the user's environments and should operate in a non-intrusive manner. This paper shows how computational intelligence (CI), an emerging domain of AI, could be employed and embedded in our living spaces to help such environments become more energy efficient, intelligent, adaptive and convenient to their users.
Software is among the most significant inventions of recent years, serving a wide variety of applications. Developing a faultless software system requires the software system design to be resilient. To make the software design more efficient, it is essential to assess the reusability of the components used. This paper proposes a software reusability prediction model named Flexible Random Fit (FRF) based on aging resilience for a Service Net (SN) software system. The reusability prediction model is developed using a multilevel optimization technique based on software characteristics such as cohesion, coupling, and complexity. Metrics are obtained from the SN software system and then subjected to min-max normalization to avoid any saturation during the learning process. The feature extraction process is made more feasible by enriching data quality via outlier detection. The reusability of the classes is estimated with a tool called Soft Audit. Software reusability can be predicted more effectively with the proposed FRF-ANN (Flexible Random Fit-Artificial Neural Network) algorithm. Performance evaluation shows that the proposed algorithm outperforms all the other techniques, thus ensuring the optimization of software reusability based on aging resilience. The model is then tested using constraint-based testing techniques to confirm that it is effective at optimizing and making predictions.
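The min-max normalization step mentioned above is a standard rescaling of each metric into a fixed range so that no single feature saturates the learning process; a minimal sketch (the metric values are hypothetical):

```python
def min_max_normalize(values, lo=0.0, hi=1.0):
    """Rescale a list of metric values linearly into [lo, hi].

    A constant feature (zero span) carries no information, so every value
    is mapped to the lower bound rather than dividing by zero.
    """
    v_min, v_max = min(values), max(values)
    span = v_max - v_min
    if span == 0:
        return [lo for _ in values]
    return [lo + (v - v_min) * (hi - lo) / span for v in values]

# Hypothetical coupling metric values for a handful of SN classes
coupling = [2, 4, 6]
normalized = min_max_normalize(coupling)  # each value now in [0, 1]
```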
White blood cells (WBC), or leukocytes, are a vital component of the blood that forms the immune system, which is responsible for fighting foreign elements. WBC images can be subjected to different data analysis approaches that categorize different kinds of WBC. Conventionally, laboratory tests are carried out to determine the kind of WBC, which is error-prone and time consuming. Recently, deep learning (DL) models have been employed for automated investigation of WBC images in short duration. Therefore, this paper introduces an Aquila Optimizer with Transfer Learning based Automated White Blood Cells Classification (AOTL-WBCC) technique. The presented AOTL-WBCC model executes data normalization and a data augmentation process (rotation and zooming) at the initial stage. In addition, the residual network (ResNet) approach is used for feature extraction, in which the initial hyperparameter values of the ResNet model are tuned by the AO algorithm. Finally, a Bayesian neural network (BNN) classification technique is applied for the identification of WBC images into distinct classes. The experimental validation of the AOTL-WBCC methodology is performed with the help of a Kaggle dataset. The experimental results show that the AOTL-WBCC model outperforms other techniques based on image processing and manual feature engineering approaches under different dimensions.
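The rotation and zooming augmentations mentioned above can be sketched with plain NumPy; the exact angles and zoom factors used in AOTL-WBCC are not given in the abstract, so 90-degree rotation and a 1.2x center-crop zoom with nearest-neighbour resampling are assumptions for illustration:

```python
import numpy as np

def augment(image, k=1, zoom=1.2):
    """Hypothetical rotation + zoom augmentation for a 2-D image array:
    rotate by k * 90 degrees, then zoom in by center-cropping and
    resizing back to the original shape with nearest-neighbour sampling."""
    rotated = np.rot90(image, k)
    h, w = rotated.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)          # crop size after zoom
    top, left = (h - ch) // 2, (w - cw) // 2       # centered crop window
    crop = rotated[top:top + ch, left:left + cw]
    rows = (np.arange(h) * ch / h).astype(int)     # nearest-neighbour maps
    cols = (np.arange(w) * cw / w).astype(int)
    return crop[np.ix_(rows, cols)]
```

In practice one would draw k and zoom at random per training image so each epoch sees slightly different views of the same cells.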
In Internet of Things (IoT) based systems, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). The UCS necessitates heterogeneity, management level, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study is carried out, and the outcomes are assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and accuracy of 99.38%.
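The network-level metrics reported above follow standard definitions, which a short helper makes explicit (packet loss rate as the complement of the delivery ratio, throughput as bits delivered per second; the packet counts below are invented to match the reported ratios, not the study's raw data):

```python
def network_metrics(sent, delivered, bits_received, duration_s):
    """Standard network evaluation metrics.

    Returns (packet delivery ratio %, packet loss rate %, throughput Mbps).
    """
    pdr = 100.0 * delivered / sent          # fraction of packets delivered
    plr = 100.0 - pdr                       # complement of delivery ratio
    throughput_mbps = bits_received / duration_s / 1e6
    return pdr, plr, throughput_mbps

# Hypothetical counts chosen to reproduce the reported 99.29% / 0.71%
pdr, plr, tp = network_metrics(sent=10000, delivered=9929,
                               bits_received=92.95e6 * 10, duration_s=10)
```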
Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of human-centric IoT (HIoT) applications that transform human physical monitoring into machine monitoring. HIoT systems find use in several applications such as smart cities, healthcare, transportation, etc. Besides, HIoT systems and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at the initial stage. This article presents a new quantum-inspired differential evolution with explainable artificial intelligence based COVID-19 Detection and Classification (QIDEXAI-CDC) model for HIoT systems. The QIDEXAI-CDC model aims to identify the occurrence of COVID-19 using XAI tools on HIoT systems. The QIDEXAI-CDC model primarily uses bilateral filtering (BF) as a preprocessing tool to eradicate noise. In addition, RetinaNet is applied for the generation of useful feature vectors from radiological images. For COVID-19 detection and classification, a quantum-inspired differential evolution (QIDE) with kernel extreme learning machine (KELM) model is utilized. The QIDE algorithm helps to appropriately choose the weight and bias values of the KELM model. In order to report the enhanced COVID-19 detection outcomes of the QIDEXAI-CDC model, a wide range of simulations was carried out. Extensive comparative studies reported the supremacy of the QIDEXAI-CDC model over recent approaches.
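The KELM classifier at the heart of the pipeline above admits a compact closed-form training rule; the sketch below shows only that half, with fixed hyperparameters (an RBF kernel width and a regularization constant) standing in for the QIDE search, and toy 1-D data replacing the RetinaNet feature vectors:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between two sets of row vectors."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

class KELM:
    """Minimal kernel extreme learning machine: the output weights are a
    closed-form ridge solution, alpha = (K + I/C)^-1 T, so no iterative
    training is needed (hyperparameters c, gamma are assumed fixed here;
    the paper tunes the model with QIDE instead)."""
    def __init__(self, c=1.0, gamma=1.0):
        self.c, self.gamma = c, gamma

    def fit(self, X, T):
        self.X = np.asarray(X, float)
        K = rbf_kernel(self.X, self.X, self.gamma)
        n = len(self.X)
        self.alpha = np.linalg.solve(K + np.eye(n) / self.c,
                                     np.asarray(T, float))
        return self

    def predict(self, Xq):
        return rbf_kernel(np.asarray(Xq, float), self.X, self.gamma) @ self.alpha

# Toy 2-class problem: one-hot targets, classes separable at x = 1.5
X = np.array([[0.0], [1.0], [2.0], [3.0]])
T = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
model = KELM(c=100.0, gamma=1.0).fit(X, T)
```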
Computed tomography (CT) has made significant advances since its introduction in the early 1970s, with researchers mainly focused on the quality of image reconstruction in the early stage. However, radiation exposure poses a health risk, prompting demand for the lowest possible dose when carrying out CT examinations. To acquire high-quality reconstruction images with low dose radiation, CT reconstruction techniques have evolved from conventional methods, such as analytical and iterative reconstruction, to reconstruction methods based on artificial intelligence (AI). All these efforts are devoted to constructing high-quality images using only low doses with fast reconstruction speed. In particular, conventional reconstruction methods usually optimize one aspect at a time, while AI-based reconstruction has finally managed to attain all goals in one shot. However, AI-based reconstruction methods have limitations such as the requirement for large datasets, unstable performance, and weak generalizability. This work presents a review and discussion of the classification, commercial use, advantages, and limitations of AI-based image reconstruction methods in CT.
Regenerative medicine and anti-aging research have made great strides at the molecular and cellular levels in dermatology and the medical aesthetic field, targeting potential treatments with skin therapeutic and intervention pathways, which make it possible to develop effective skin regeneration and repair ingredients. With the rapid development of computational biology, bioinformatics, and artificial intelligence (AI), the development of new ingredients for regenerative medicine has been greatly accelerated, and the success rate has been improved. Some application cases have appeared in topical skin regeneration and repair scenarios. This review briefly introduces the application of bioactive peptides in skin repair and anti-aging as emerging ingredients in cosmeceutics, and emphasizes how AI-based computational biology technology may accelerate the development of innovative peptide molecules and ultimately translate them into potential skin regenerative and anti-aging scenarios. Two typical research routines are summarized, and current limitations as well as directions for broader applications in future research are discussed.
AI development has brought great success in upgrading the information age. At the same time, the large-scale artificial neural networks used for building AI systems are thirsty for computing power, which is barely satisfied by conventional computing hardware. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand for AI computing power. To address the issue, technical approaches like neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms far more parallelly and energy efficiently. Inspired by the human neural network architecture, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture like the spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures could reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
The application of computational technology for medical purposes is a very interesting topic. Knowledge content development and new technology searches using computational technology have become the newest approach in medicine. With advanced computational technology, several omics sciences are available for clarification and prediction in medicine. Computational intelligence is an important application that should be mentioned. Here, the author details and discusses computational intelligence in tropical medicine.
The paper proposes an innovative approach aimed at fostering AI literacy through interactive gaming experiences, designing a game-based prototype for preparing pre-service teachers to innovate teaching practices across disciplines. The simulation, Color Conquest, serves as a strategic game to encourage educators to reconsider their pedagogical practices. It allows teachers to use and develop various scenarios by customizing maps, giving students agency to engage in the complex decision-making process. Additionally, this engagement process provides teachers with an opportunity to develop students' artificial intelligence literacy, as students actively develop strategic thinking, problem-solving, and critical reasoning skills.
The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflective surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server in the system model. Specifically, the user node accesses the primary user spectrum while adhering to the constraint of satisfying the primary user peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system in Rayleigh channels is derived and analyzed. The study investigates, through simulation, the impact of various system parameters, including the number of UAVs, the peak interference power, and the TS and PS factors, on the system's outage performance. The proposed system is also compared with two conventional benchmark schemes: optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmark schemes.
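The best-UAV selection rule and the Rayleigh-fading outage event above can be illustrated with a Monte-Carlo sketch. This is a simplified stand-in, not the paper's system model: Rayleigh fading makes each link's instantaneous SNR exponentially distributed, the UAV with the maximum instantaneous SNR is selected, and an outage occurs when even that SNR falls below the threshold (interference constraints and energy harvesting are omitted):

```python
import math
import random

def outage_probability(n_uav, snr_th, mean_snr, trials=20000, seed=1):
    """Monte-Carlo estimate of outage probability with best-UAV selection.

    Each of the n_uav links draws an exponential instantaneous SNR
    (Rayleigh fading) with the given mean; outage = best SNR < threshold.
    For this simplified i.i.d. setup the closed form is
    P_out = (1 - exp(-snr_th / mean_snr)) ** n_uav.
    """
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        best = max(rng.expovariate(1.0 / mean_snr) for _ in range(n_uav))
        outages += best < snr_th
    return outages / trials
```

Increasing the number of candidate UAVs drives the outage probability down geometrically, which is the selection-diversity gain the study measures.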
In this paper, we investigate energy efficiency maximization for mobile edge computing (MEC) in intelligent reflecting surface (IRS) assisted unmanned aerial vehicle (UAV) communications. In particular, the UAV can collect the computing tasks of the terrestrial users and transmit the results back to them after computing. We jointly optimize the users' transmit beamforming and uploading ratios, the phase shift matrix of the IRS, and the UAV trajectory to improve energy efficiency. The formulated optimization problem is highly non-convex and difficult to solve directly, so we decompose it into three sub-problems. We first propose a successive convex approximation (SCA) based method to design the beamforming of the users and the phase shift matrix of the IRS, and apply the Lagrange dual method to obtain a closed-form expression for the uploading ratios. For the trajectory optimization, we propose a block coordinate descent (BCD) based method to obtain a locally optimal solution. Finally, we propose the overall alternating optimization (AO) based algorithm and analyze its complexity, which is equivalent to or lower than that of existing algorithms. Simulation results show the superiority of the proposed method over existing schemes in energy efficiency.
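The alternating-optimization pattern above (solve each block while holding the others fixed, then cycle) can be shown on a toy problem; the quadratic objective below is a stand-in chosen so each block update has a closed form, not anything from the paper's beamforming/phase-shift/trajectory formulation:

```python
def bcd_minimize(iters=50):
    """Block coordinate descent on f(x, y) = (x - 2y)^2 + (y - 1)^2.

    Each step solves one block exactly with the other fixed:
      argmin_x f(x, y) = 2y,  argmin_y f(x, y) = (4x + 2) / 10.
    The iterates converge to the global minimizer (x, y) = (2, 1).
    """
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = 2.0 * y                  # block 1: exact update in x
        y = (4.0 * x + 2.0) / 10.0   # block 2: exact update in y
    return x, y
```

In the paper's setting the blocks are non-trivial (SCA for beamforming and phase shifts, a Lagrange-dual closed form for uploading ratios, BCD for the trajectory), but the outer AO loop follows exactly this fix-one-solve-the-rest structure.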
文摘Objective To observe the value of artificial intelligence(AI)models based on non-contrast chest CT for measuring bone mineral density(BMD).Methods Totally 380 subjects who underwent both non-contrast chest CT and quantitative CT(QCT)BMD examination were retrospectively enrolled and divided into training set(n=304)and test set(n=76)at a ratio of 8∶2.The mean BMD of L1—L3 vertebrae were measured based on QCT.Spongy bones of T5—T10 vertebrae were segmented as ROI,radiomics(Rad)features were extracted,and machine learning(ML),Rad and deep learning(DL)models were constructed for classification of osteoporosis(OP)and evaluating BMD,respectively.Receiver operating characteristic curves were drawn,and area under the curves(AUC)were calculated to evaluate the efficacy of each model for classification of OP.Bland-Altman analysis and Pearson correlation analysis were performed to explore the consistency and correlation of each model with QCT for measuring BMD.Results Among ML and Rad models,ML Bagging-OP and Rad Bagging-OP had the best performances for classification of OP.In test set,AUC of ML Bagging-OP,Rad Bagging-OP and DL OP for classification of OP was 0.943,0.944 and 0.947,respectively,with no significant difference(all P>0.05).BMD obtained with all the above models had good consistency with those measured with QCT(most of the differences were within the range of Ax-G±1.96 s),which were highly positively correlated(r=0.910—0.974,all P<0.001).Conclusion AI models based on non-contrast chest CT had high efficacy for classification of OP,and good consistency of BMD measurements were found between AI models and QCT.
文摘Objective To observe the value of self-supervised deep learning artificial intelligence(AI)noise reduction technology based on the nearest adjacent layer applicated in ultra-low dose CT(ULDCT)for urinary calculi.Methods Eighty-eight urinary calculi patients were prospectively enrolled.Low dose CT(LDCT)and ULDCT scanning were performed,and the effective dose(ED)of each scanning protocol were calculated.The patients were then randomly divided into training set(n=75)and test set(n=13),and a self-supervised deep learning AI noise reduction system based on the nearest adjacent layer constructed with ULDCT images in training set was used for reducing noise of ULDCT images in test set.In test set,the quality of ULDCT images before and after AI noise reduction were compared with LDCT images,i.e.Blind/Referenceless Image Spatial Quality Evaluator(BRISQUE)scores,image noise(SD ROI)and signal-to-noise ratio(SNR).Results The tube current,the volume CT dose index and the dose length product of abdominal ULDCT scanning protocol were all lower compared with those of LDCT scanning protocol(all P<0.05),with a decrease of ED for approximately 82.66%.For 13 patients with urinary calculi in test set,BRISQUE score showed that the quality level of ULDCT images before AI noise reduction reached 54.42%level but raised to 95.76%level of LDCT images after AI noise reduction.Both ULDCT images after AI noise reduction and LDCT images had lower SD ROI and higher SNR than ULDCT images before AI noise reduction(all adjusted P<0.05),whereas no significant difference was found between the former two(both adjusted P>0.05).Conclusion Self-supervised learning AI noise reduction technology based on the nearest adjacent layer could effectively reduce noise and improve image quality of urinary calculi ULDCT images,being conducive for clinical application of ULDCT.
Abstract: BACKGROUND With the increasingly extensive application of artificial intelligence (AI) in medical systems, the accuracy of AI in real-world medical diagnosis deserves attention and objective evaluation. AIM To investigate the accuracy of AI diagnostic software (Shukun) in assessing ischemic penumbra/core infarction in acute ischemic stroke patients with large vessel occlusion. METHODS From November 2021 to March 2022, consecutive acute stroke patients with large vessel occlusion who underwent mechanical thrombectomy (MT) after Shukun AI penumbra assessment were included. Computed tomography angiography (CTA) and perfusion exams were analyzed by the AI and reviewed by senior neurointerventional experts. In the case of divergences among the three experts, discussions were held to reach a final conclusion. When the results of the AI were inconsistent with the neurointerventional experts' diagnosis, the AI diagnosis was considered inaccurate. RESULTS A total of 22 patients were included in the study. The vascular recanalization rate was 90.9%, and 63.6% of patients had modified Rankin scale scores of 0-2 at the 3-month follow-up. The computed tomography (CT) perfusion diagnosis by Shukun (AI) was confirmed to be invalid in 3 patients (inaccuracy rate: 13.6%). CONCLUSION AI (Shukun) has limitations in assessing the ischemic penumbra. Integrating clinical and imaging data (CT, CTA, and even magnetic resonance imaging) is crucial for MT decision-making.
Abstract: The missile interception problem can be regarded as a two-person zero-sum differential game, which depends on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation. It has been proved impossible to obtain a closed-form solution due to the nonlinearity of the HJI equation, and many iterative algorithms have been proposed to solve it. The simultaneous policy updating algorithm (SPUA) is an effective algorithm for solving the HJI equation, but it is an on-policy integral reinforcement learning (IRL) method. For online implementation of SPUA, the disturbance signals need to be adjustable, which is unrealistic. In this paper, an off-policy IRL algorithm based on SPUA is proposed without making use of any knowledge of the system dynamics. Then, a neural-network based online adaptive critic implementation scheme of the off-policy IRL algorithm is presented. Based on the online off-policy IRL method, a computational intelligence interception guidance (CIIG) law is developed for intercepting highly maneuvering targets. As a model-free method, it achieves target interception by measuring system data online. The effectiveness of the CIIG law is verified through two missile and target engagement scenarios.
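For context, one common form of the HJI equation for a two-player zero-sum game with dynamics $\dot{x} = f(x) + g(x)u + k(x)w$ and an $H_\infty$-type cost is the following (a standard textbook formulation, not necessarily the exact one used in the paper):

```latex
0 = Q(x) + (\nabla V)^{\mathsf T}\bigl(f(x) + g(x)u^* + k(x)w^*\bigr)
    + (u^*)^{\mathsf T} R\, u^* - \gamma^2 (w^*)^{\mathsf T} w^*,
\qquad
u^* = -\tfrac{1}{2} R^{-1} g(x)^{\mathsf T} \nabla V,
\quad
w^* = \tfrac{1}{2\gamma^2}\, k(x)^{\mathsf T} \nabla V .
```

The nonlinearity that prevents a closed-form solution enters through the products of $\nabla V$ with the state-dependent terms, which is what iterative schemes such as SPUA approximate.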
Funding: The Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, funded this research work through Project Number RI-44-0446.
Abstract: Computational intelligence (CI) is a group of nature-inspired computational models and processes for addressing difficult real-life problems. CI is useful in the UAV domain as it produces efficient, precise, and rapid solutions. Unmanned aerial vehicles (UAV) have also become a hot research topic in the smart city environment. Despite the benefits of UAVs, security remains a major challenge. In addition, deep learning (DL) enabled image classification is useful for several applications such as land cover classification, smart buildings, etc. This paper proposes a novel meta-heuristics with deep learning-driven secure UAV image classification (MDLS-UAVIC) model in a smart city environment. The major purpose of the MDLS-UAVIC algorithm is to securely encrypt the images and classify them into distinct class labels. The proposed MDLS-UAVIC model follows a two-stage process: encryption and image classification. In the first stage, an encryption technique effectively encrypts the UAV images. Next, the image classification process involves an Xception-based deep convolutional neural network for feature extraction. Finally, shuffled shepherd optimization (SSO) with a recurrent neural network (RNN) model is applied for UAV image classification, showing the novelty of the work. The experimental validation of the MDLS-UAVIC approach is performed on a benchmark dataset, and the outcomes are examined with various measures. It achieved a high accuracy of 98%.
Abstract: Computational psychiatry is an emerging field that not only explores the biological basis of mental illness but also informs diagnoses and identifies the underlying mechanisms. One of its key strengths is that it may identify patterns in large datasets that are not easily identifiable otherwise. This may help researchers develop more effective treatments and interventions for mental health problems. This paper is a narrative review that surveys the literature and produces an artificial intelligence ecosystem for computational psychiatry. The ecosystem comprises data acquisition, preparation, modeling, application, and evaluation. This approach allows researchers to integrate data from a variety of sources, such as brain imaging, genetics, and behavioral experiments, to obtain a more complete understanding of mental health conditions. Through the process of data preprocessing, training, and testing, the data required for model building can be prepared. By using machine learning, neural networks, artificial intelligence, and other methods, researchers have been able to develop diagnostic tools that can accurately identify mental health conditions based on a patient's symptoms and other factors. Despite its continuous development and breakthroughs, computational psychiatry has not yet influenced routine clinical practice and still faces many challenges, such as data availability and quality, biological risks, equity, and data protection. As we make progress in this field, it is vital to ensure that computational psychiatry remains accessible and inclusive so that all researchers may contribute to this significant and exciting field.
Abstract: Our living environments are gradually being occupied by an abundance of digital objects that have networking and computing capabilities. After these devices are plugged into a network, they initially advertise their presence and capabilities in the form of services so that they can be discovered and, if desired, exploited by the user or other networked devices. With the increasing number of these devices attached to the network, the complexity of configuring and controlling them increases, which may lead to major processing and communication overhead. Hence, the devices are no longer expected to just act as primitive stand-alone appliances that only provide the facilities and services they are designed for, but also to offer complex services that emerge from unique combinations of devices. This creates the necessity for these devices to be equipped with some sort of intelligence and self-awareness to enable them to be self-configuring and self-programming. However, with this "smart evolution", the cognitive load to configure and control such spaces becomes immense. One way to relieve this load is by employing artificial intelligence (AI) techniques to create an intelligent "presence" where the system will be able to recognize the users and autonomously program the environment to be energy efficient and responsive to the user's needs and behaviours. These AI mechanisms should be embedded in the user's environments and should operate in a non-intrusive manner. This paper shows how computational intelligence (CI), an emerging domain of AI, could be employed and embedded in our living spaces to help such environments become more energy efficient, intelligent, adaptive and convenient to their users.
Abstract: Software is among the most significant inventions of recent years, serving a wide range of applications. Developing a faultless software system requires the software system design to be resilient. To make the software design more efficient, it is essential to assess the reusability of the components used. This paper proposes a software reusability prediction model named Flexible Random Fit (FRF) based on aging resilience for a Service Net (SN) software system. The reusability prediction model is developed using a multilevel optimization technique based on software characteristics such as cohesion, coupling, and complexity. Metrics are obtained from the SN software system and then subjected to min-max normalization to avoid saturation during the learning process. The feature extraction process is made more feasible by enriching data quality via outlier detection. The reusability of the classes is estimated with a tool called Soft Audit. Software reusability can be predicted more effectively with the proposed FRF-ANN (Flexible Random Fit-Artificial Neural Network) algorithm. Performance evaluation shows that the proposed algorithm outperforms all the other techniques, ensuring the optimization of software reusability based on aging resilience. The model is then validated using constraint-based testing techniques to confirm its optimization and prediction performance.
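Min-max normalization, as applied to the extracted metrics above, rescales each feature column to [0, 1] so that no single metric saturates the learning process. A minimal numpy sketch (the epsilon guard for constant columns is an assumption, not from the paper):

```python
import numpy as np

def min_max_normalize(features, eps=1e-12):
    """Rescale each column of a feature matrix to the [0, 1] range."""
    x = np.asarray(features, dtype=float)
    lo = x.min(axis=0)
    hi = x.max(axis=0)
    return (x - lo) / (hi - lo + eps)  # eps avoids division by zero
```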
Funding: The Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia has funded this project under Grant No. KEP-1-120-42.
Abstract: White blood cells (WBC), or leukocytes, are a vital component of the blood and form the immune system, which is responsible for fighting foreign elements. WBC images can be processed with different data analysis approaches that categorize the different kinds of WBC. Conventionally, laboratory tests are carried out to determine the kind of WBC, which is error-prone and time-consuming. Recently, deep learning (DL) models have been employed for automated investigation of WBC images in a short duration. Therefore, this paper introduces an Aquila Optimizer with Transfer Learning based Automated White Blood Cells Classification (AOTL-WBCC) technique. The presented AOTL-WBCC model executes data normalization and a data augmentation process (rotation and zooming) at the initial stage. In addition, the residual network (ResNet) approach is used for feature extraction, in which the initial hyperparameter values of the ResNet model are tuned by the AO algorithm. Finally, a Bayesian neural network (BNN) classification technique is applied for the identification of WBC images into distinct classes. The experimental validation of the AOTL-WBCC methodology is performed with the help of a Kaggle dataset. The experimental results show that the AOTL-WBCC model outperforms other techniques based on image processing and manual feature engineering approaches under different dimensions.
Funding: This research work was funded by Institutional Fund Projects under grant no. (IFPIP: 624-611-1443).
Abstract: In an Internet of Things (IoT) based system, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks, called ubiquitous computing systems (UCS). The UCS necessitates heterogeneity, management levels, and data transmission for distributed users. At the same time, security remains a major issue in IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficiency and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimental study is conducted, and the outcomes are assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, an energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
Abstract: Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of human-centric IoT (HIoT) applications that shift human physical monitoring to machine-based monitoring. HIoT systems find use in several applications such as smart cities, healthcare, transportation, etc. Moreover, HIoT systems and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at an early stage. This article presents a new quantum-inspired differential evolution with explainable artificial intelligence based COVID-19 Detection and Classification (QIDEXAI-CDC) model for HIoT systems. The QIDEXAI-CDC model aims to identify the occurrence of COVID-19 using XAI tools on HIoT systems. The QIDEXAI-CDC model primarily uses bilateral filtering (BF) as a preprocessing tool to eradicate noise. In addition, RetinaNet is applied for the generation of useful feature vectors from radiological images. For COVID-19 detection and classification, a quantum-inspired differential evolution (QIDE) with kernel extreme learning machine (KELM) model is utilized. The QIDE algorithm helps to appropriately choose the weight and bias values of the KELM model. To report the enhanced COVID-19 detection outcomes of the QIDEXAI-CDC model, a wide range of simulations was carried out. Extensive comparative studies reported the supremacy of the QIDEXAI-CDC model over recent approaches.
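Bilateral filtering, the preprocessing step above, smooths noise while preserving edges by weighting each neighbor with both a spatial Gaussian and an intensity (range) Gaussian. A naive single-channel numpy sketch of the idea (parameter values are illustrative assumptions; production code would typically use an optimized implementation such as OpenCV's `cv2.bilateralFilter`):

```python
import numpy as np

def bilateral_filter_1ch(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Naive single-channel bilateral filter: each output pixel is a
    weighted average over a (2*radius+1)^2 window, with weights equal to
    spatial Gaussian × range Gaussian on the intensity difference."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j])**2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

Large intensity jumps get near-zero range weight, which is why edges survive the smoothing.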
Funding: This work is supported by the National Key Research and Development Program of China (2020YFC2003400). Qiang Ni's work was funded by the UK EPSRC project under grant number EP/K011693/1.
Abstract: Computed tomography has made significant advances since its introduction in the early 1970s, when researchers mainly focused on the quality of image reconstruction. However, radiation exposure poses a health risk, prompting demand for the lowest possible dose when carrying out CT examinations. To acquire high-quality reconstructed images at low radiation dose, CT reconstruction techniques have evolved from conventional approaches, such as analytical and iterative reconstruction, to reconstruction methods based on artificial intelligence (AI). All these efforts are devoted to constructing high-quality images using only low doses with fast reconstruction speed. In particular, conventional reconstruction methods usually optimize one aspect at a time, while AI-based reconstruction has finally managed to attain all goals in one shot. However, AI-based reconstruction methods have limitations such as the requirement for large datasets, unstable performance, and weak generalizability. This work presents a review and discussion of the classification, commercial use, advantages, and limitations of AI-based image reconstruction methods in CT.
Funding: Supported by the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515030047) and the Zhejiang Provincial Department of Agriculture and Rural Affairs (2022SNJF078).
Abstract: Regenerative medicine and anti-aging research have made great strides at the molecular and cellular levels in dermatology and medical aesthetics, identifying potential treatments through skin therapeutic and intervention pathways and making it possible to develop effective skin regeneration and repair ingredients. With the rapid development of computational biology, bioinformatics, and artificial intelligence (A.I.), the development of new ingredients for regenerative medicine has been greatly accelerated and the success rate improved, with application cases already appearing in topical skin regeneration and repair scenarios. This review briefly introduces the application of bioactive peptides in skin repair and anti-aging as emerging ingredients in cosmeceuticals, and emphasizes how A.I.-based computational biology technology may accelerate the development of innovative peptide molecules and ultimately translate them into potential skin regeneration and anti-aging scenarios. Two typical research routines are summarized, and current limitations as well as directions for broader applications in future research are discussed.
Funding: Project supported in part by the National Key Research and Development Program of China (Grant No. 2021YFA0716400), the National Natural Science Foundation of China (Grant Nos. 62225405, 62150027, 61974080, 61991443, 61975093, 61927811, 61875104, 62175126, and 62235011), the Ministry of Science and Technology of China (Grant Nos. 2021ZD0109900 and 2021ZD0109903), the Collaborative Innovation Center of Solid-State Lighting and Energy-Saving Electronics, and the Tsinghua University Initiative Scientific Research Program.
Abstract: AI development has brought great success to the information age. At the same time, the large-scale artificial neural networks used to build AI systems demand computing power that conventional computing hardware can barely satisfy. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand for AI computing power. To address this issue, technical approaches such as neuromorphic computing attract great attention because they break with the von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of the human neural network, neuromorphic computing hardware is built on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in neuromorphic architectures such as spiking neural networks (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures could reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNN and the artificial neuron devices supporting neuromorphic computing, then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
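The basic unit of an SNN is often modeled as a leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest, integrates input current, and emits a spike and resets when it crosses a threshold. A minimal discrete-time sketch in Python (the time constant, threshold, and reset values are illustrative assumptions):

```python
import numpy as np

def lif_simulate(current, dt=1.0, tau=10.0, v_th=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron:
    dv/dt = (-v + I) / tau; emit a spike and reset when v >= v_th.
    Returns the membrane potential trace and a boolean spike train."""
    v = v_reset
    vs, spikes = [], []
    for i_t in current:
        v += dt * (-v + i_t) / tau  # leaky integration (forward Euler)
        if v >= v_th:
            spikes.append(True)
            v = v_reset              # fire and reset
        else:
            spikes.append(False)
        vs.append(v)
    return np.array(vs), np.array(spikes)
```

A constant supra-threshold input produces a regular spike train, while sub-threshold input never fires; this event-driven sparsity is the source of the energy efficiency discussed above.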
Abstract: The application of computational technology for medical purposes is a very interesting topic. Knowledge content development and new technology discovery using computational technology have become the newest approach in medicine. With advanced computational technology, several omics sciences are available for clarification and prediction in medicine. Computational intelligence is an important application that should be mentioned. Here, the author details and discusses computational intelligence in tropical medicine.
Abstract: This paper proposes an innovative approach aimed at fostering AI literacy through interactive gaming experiences, designing a game-based prototype that prepares pre-service teachers to innovate teaching practices across disciplines. The simulation, Color Conquest, serves as a strategic game that encourages educators to reconsider their pedagogical practices. It allows teachers to create and develop various scenarios by customizing maps, giving students agency to engage in complex decision-making. This engagement also gives teachers an opportunity to develop students' artificial intelligence literacy, as students actively build strategic thinking, problem-solving, and critical reasoning skills.
基金the National Natural Science Foundation of China(62271192)Henan Provincial Scientists Studio(GZS2022015)+10 种基金Central Plains Talents Plan(ZYYCYU202012173)NationalKeyR&DProgramofChina(2020YFB2008400)the Program ofCEMEE(2022Z00202B)LAGEO of Chinese Academy of Sciences(LAGEO-2019-2)Program for Science&Technology Innovation Talents in the University of Henan Province(20HASTIT022)Natural Science Foundation of Henan under Grant 202300410126Program for Innovative Research Team in University of Henan Province(21IRTSTHN015)Equipment Pre-Research Joint Research Program of Ministry of Education(8091B032129)Training Program for Young Scholar of Henan Province for Colleges and Universities(2020GGJS172)Program for Science&Technology Innovation Talents in Universities of Henan Province under Grand(22HASTIT020)Henan Province Science Fund for Distinguished Young Scholars(222300420006).
Abstract: The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflecting surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server. In the system model, the user node accesses the primary user's spectrum while adhering to the constraint of satisfying the primary user's peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system over Rayleigh channels is derived and analyzed. The study investigates, through simulation, the impact of various system parameters, including the number of UAVs, peak interference power, and the TS and PS factors, on the system's outage performance. The proposed system is also compared with two conventional benchmark schemes: optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmarks.
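Selecting the UAV with the highest instantaneous SNR over independent Rayleigh-faded links admits a simple Monte Carlo check against the textbook closed-form outage probability $(1 - e^{-\gamma_{th}/\bar\gamma})^N$. A numpy sketch under i.i.d. links (an idealized assumption: the paper's actual model also includes interference and energy-harvesting constraints):

```python
import numpy as np

def outage_prob_best_uav(n_uav, mean_snr, snr_th, trials=200_000, seed=0):
    """Monte Carlo outage probability when the UAV with the highest
    instantaneous SNR is selected; over Rayleigh fading the per-link SNR
    is exponentially distributed with mean `mean_snr`."""
    rng = np.random.default_rng(seed)
    best = rng.exponential(mean_snr, size=(trials, n_uav)).max(axis=1)
    return float((best < snr_th).mean())

def outage_closed_form(n_uav, mean_snr, snr_th):
    """Closed form for i.i.d. links: outage occurs only if every link is
    below threshold, i.e. (1 - exp(-snr_th / mean_snr)) ** n_uav."""
    return (1.0 - np.exp(-snr_th / mean_snr)) ** n_uav
```

Adding UAVs drives the outage probability down geometrically, which matches the diversity-order intuition behind opportunistic relay selection.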
Funding: Supported by the Key Scientific and Technological Project of Henan Province (Grant Number 222102210212), Doctoral Research Start Projects of Henan Institute of Technology (Grant Numbers KQ2005 and KQ2110), and Key Research Projects of Colleges and Universities in Henan Province (Grant Number 23B510006).
Abstract: In this paper, we investigate energy efficiency maximization for mobile edge computing (MEC) in intelligent reflecting surface (IRS) assisted unmanned aerial vehicle (UAV) communications. In particular, the UAV can collect the computing tasks of the terrestrial users and transmit the results back to them after computing. We jointly optimize the users' transmit beamforming and uploading ratios, the phase shift matrix of the IRS, and the UAV trajectory to improve energy efficiency. The formulated optimization problem is highly non-convex and difficult to solve directly, so we decompose the original problem into three sub-problems. We first propose a successive convex approximation (SCA) based method to design the users' beamforming and the IRS phase shift matrix, and apply the Lagrange dual method to obtain a closed-form expression for the uploading ratios. For the trajectory optimization, we propose a block coordinate descent (BCD) based method to obtain a locally optimal solution. Finally, we propose the overall alternating optimization (AO) based algorithm and analyze its complexity, which is equivalent to or lower than that of existing algorithms. Simulation results show the superiority of the proposed method over existing schemes in energy efficiency.
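Block coordinate descent, used above for the trajectory sub-problem, fixes all variable blocks but one and minimizes exactly over that block in turn. A toy Python sketch on a two-block quadratic (purely illustrative, unrelated to the paper's actual objective):

```python
def bcd_minimize(iters=60):
    """Minimize f(x, y) = (x - y)**2 + (x - 3)**2 by alternating
    exact minimization over each scalar block."""
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (y + 3.0) / 2.0   # argmin over x with y fixed
        y = x                 # argmin over y with x fixed
    return x, y               # converges to the global minimizer (3, 3)
```

Each block update never increases the objective, which (for suitably structured problems) is what guarantees convergence to a locally optimal point, as claimed for the trajectory sub-problem.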