Journal literature: 7,078 articles found
1. Artificial intelligence models based on non-contrast chest CT for measuring bone mineral density
Authors: DUAN Wei, YANG Guoqing, LI Yang, SHI Feng, YANG Lian, XIONG Xin, CHEN Bei, LI Yong, FU Quanshui. 《中国医学影像技术》, CSCD, Peking University Core Journal, 2024, No. 8, pp. 1231-1235 (5 pages)
Objective To observe the value of artificial intelligence (AI) models based on non-contrast chest CT for measuring bone mineral density (BMD). Methods Totally 380 subjects who underwent both non-contrast chest CT and quantitative CT (QCT) BMD examination were retrospectively enrolled and divided into a training set (n=304) and a test set (n=76) at a ratio of 8:2. The mean BMD of the L1-L3 vertebrae was measured based on QCT. Spongy bones of the T5-T10 vertebrae were segmented as ROI, radiomics (Rad) features were extracted, and machine learning (ML), Rad and deep learning (DL) models were constructed for classification of osteoporosis (OP) and evaluation of BMD, respectively. Receiver operating characteristic curves were drawn, and areas under the curves (AUC) were calculated to evaluate the efficacy of each model for classification of OP. Bland-Altman analysis and Pearson correlation analysis were performed to explore the consistency and correlation of each model with QCT for measuring BMD. Results Among the ML and Rad models, ML Bagging-OP and Rad Bagging-OP had the best performances for classification of OP. In the test set, the AUC of ML Bagging-OP, Rad Bagging-OP and DL OP for classification of OP was 0.943, 0.944 and 0.947, respectively, with no significant difference (all P>0.05). BMD obtained with all the above models had good consistency with that measured with QCT (most differences fell within the Bland-Altman limits of agreement, mean difference ±1.96 SD), and the measurements were highly positively correlated (r=0.910-0.974, all P<0.001). Conclusion AI models based on non-contrast chest CT had high efficacy for classification of OP, and good consistency of BMD measurements was found between AI models and QCT.
Keywords: osteoporosis; bone density; tomography, X-ray computed; artificial intelligence
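For illustration, the agreement analysis used in this study (Pearson correlation plus Bland-Altman limits of agreement, mean difference ±1.96 SD) can be reproduced with a few lines of NumPy/SciPy. This is a minimal sketch, not the authors' code; the array names and synthetic data are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bmd_qct = rng.normal(120, 30, size=76)           # reference QCT BMD values (hypothetical)
bmd_ai = bmd_qct + rng.normal(0, 5, size=76)     # AI-model BMD values (hypothetical)

# Pearson correlation between AI and QCT measurements
r, p = stats.pearsonr(bmd_ai, bmd_qct)

# Bland-Altman: bias (mean difference) and 95% limits of agreement (±1.96 SD)
diff = bmd_ai - bmd_qct
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
inside = np.mean(np.abs(diff - bias) <= loa)     # fraction of points within the limits

print(f"r = {r:.3f} (P = {p:.3g}), bias = {bias:.2f}, LoA = +/-{loa:.2f}, inside = {inside:.1%}")
```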
2. Self-supervised learning artificial intelligence noise reduction technology based on the nearest adjacent layer in ultra-low dose CT of urinary calculi
Authors: ZHOU Cheng, LIU Yang, QIU Yingwei, HE Daijun, YAN Yu, LUO Min, LEI Youyuan. 《中国医学影像技术》, CSCD, Peking University Core Journal, 2024, No. 8, pp. 1249-1253 (5 pages)
Objective To observe the value of self-supervised deep learning artificial intelligence (AI) noise reduction technology based on the nearest adjacent layer applied in ultra-low dose CT (ULDCT) for urinary calculi. Methods Eighty-eight urinary calculi patients were prospectively enrolled. Low dose CT (LDCT) and ULDCT scanning were performed, and the effective dose (ED) of each scanning protocol was calculated. The patients were then randomly divided into a training set (n=75) and a test set (n=13), and a self-supervised deep learning AI noise reduction system based on the nearest adjacent layer, constructed with ULDCT images in the training set, was used to reduce noise in the ULDCT images of the test set. In the test set, the quality of ULDCT images before and after AI noise reduction was compared with LDCT images using Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) scores, image noise (SD ROI) and signal-to-noise ratio (SNR). Results The tube current, volume CT dose index and dose length product of the abdominal ULDCT scanning protocol were all lower than those of the LDCT scanning protocol (all P<0.05), with a decrease in ED of approximately 82.66%. For the 13 patients with urinary calculi in the test set, the BRISQUE score showed that the quality of ULDCT images before AI noise reduction reached 54.42% of the level of LDCT images but rose to 95.76% after AI noise reduction. Both ULDCT images after AI noise reduction and LDCT images had lower SD ROI and higher SNR than ULDCT images before AI noise reduction (all adjusted P<0.05), whereas no significant difference was found between the former two (both adjusted P>0.05). Conclusion Self-supervised learning AI noise reduction technology based on the nearest adjacent layer could effectively reduce noise and improve image quality of urinary calculi ULDCT images, which is conducive to the clinical application of ULDCT.
Keywords: urinary calculi; tomography, X-ray computed; artificial intelligence; prospective studies
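The two reference-based quality metrics reported above, ROI noise (the standard deviation inside a homogeneous ROI) and SNR, are straightforward to compute. Below is a minimal NumPy sketch with assumed ROI coordinates and simulated noise levels; it is illustrative, not the authors' measurement pipeline.

```python
import numpy as np

def roi_noise_and_snr(ct_slice, roi):
    """Return (SD_ROI, SNR) for a homogeneous region of a CT slice in HU.

    SD_ROI is the standard deviation of the ROI (image noise);
    SNR is |mean(ROI)| / SD(ROI), the usual region-based definition.
    """
    region = ct_slice[roi].astype(np.float64)
    sd_roi = region.std(ddof=1)
    snr = abs(region.mean()) / sd_roi
    return sd_roi, snr

# Hypothetical usage on simulated 512x512 slices with Gaussian noise
rng = np.random.default_rng(1)
noisy = 40 + rng.normal(0, 25, (512, 512))     # ULDCT-like noise level (assumed)
denoised = 40 + rng.normal(0, 8, (512, 512))   # after AI noise reduction (assumed)
roi = (slice(200, 264), slice(200, 264))
print(roi_noise_and_snr(noisy, roi), roi_noise_and_snr(denoised, roi))
```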
3. Artificial intelligence software for assessing brain ischemic penumbra/core infarction on computed tomography perfusion: A real-world accuracy study
Authors: Zhu-Qin Li, Wu Liu, Wei-Liang Luo, Su-Qin Chen, Yu-Ping Deng. World Journal of Radiology, 2024, No. 8, pp. 329-336 (8 pages)
BACKGROUND With the increasingly extensive application of artificial intelligence (AI) in medical systems, the accuracy of AI in medical diagnosis in the real world deserves attention and objective evaluation. AIM To investigate the accuracy of AI diagnostic software (Shukun) in assessing ischemic penumbra/core infarction in acute ischemic stroke patients with large vessel occlusion. METHODS From November 2021 to March 2022, consecutive acute stroke patients with large vessel occlusion who underwent mechanical thrombectomy (MT) after Shukun AI penumbra assessment were included. Computed tomography angiography (CTA) and perfusion exams were analyzed by AI and reviewed by senior neurointerventional experts. In the case of divergences among the three experts, discussions were held to reach a final conclusion. When the results of AI were inconsistent with the neurointerventional experts' diagnosis, the diagnosis by AI was considered inaccurate. RESULTS A total of 22 patients were included in the study. The vascular recanalization rate was 90.9%, and 63.6% of patients had modified Rankin scale scores of 0-2 at the 3-month follow-up. The computed tomography (CT) perfusion diagnosis by Shukun (AI) was confirmed to be invalid in 3 patients (inaccuracy rate: 13.6%). CONCLUSION AI (Shukun) has limits in assessing ischemic penumbra. Integrating clinical and imaging data (CT, CTA, and even magnetic resonance imaging) is crucial for MT decision-making.
Keywords: artificial intelligence; acute ischemic stroke; penumbra; core infarction; computed tomography perfusion
4. Computational intelligence interception guidance law using online off-policy integral reinforcement learning
Authors: WANG Qi, LIAO Zhizhong. Journal of Systems Engineering and Electronics, SCIE, CSCD, 2024, No. 4, pp. 1042-1052 (11 pages)
The missile interception problem can be regarded as a two-person zero-sum differential game, whose solution depends on the Hamilton-Jacobi-Isaacs (HJI) equation. It has been proved impossible to obtain a closed-form solution due to the nonlinearity of the HJI equation, and many iterative algorithms have been proposed to solve it. The simultaneous policy updating algorithm (SPUA) is an effective algorithm for solving the HJI equation, but it is an on-policy integral reinforcement learning (IRL) method. For online implementation of SPUA, the disturbance signals need to be adjustable, which is unrealistic. In this paper, an off-policy IRL algorithm based on SPUA is proposed without making use of any knowledge of the system dynamics. Then, a neural-network based online adaptive critic implementation scheme of the off-policy IRL algorithm is presented. Based on the online off-policy IRL method, a computational intelligence interception guidance (CIIG) law is developed for intercepting a high-maneuvering target. As a model-free method, target interception can be achieved by measuring system data online. The effectiveness of the CIIG law is verified through two missile and target engagement scenarios.
Keywords: two-person zero-sum differential games; Hamilton-Jacobi-Isaacs (HJI) equation; off-policy integral reinforcement learning (IRL); online learning; computational intelligence interception guidance (CIIG) law
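For readers unfamiliar with the setting, the value function V(x) of such a zero-sum game satisfies an HJI equation of the standard form shown below; the dynamics f, g, k, the cost weights Q, R and the attenuation level gamma are generic placeholders from the textbook formulation, not necessarily the exact model used in the paper.

```latex
% Standard HJI equation for a two-player zero-sum game on dynamics
% \dot{x} = f(x) + g(x)u + k(x)w with cost r(x,u,w) = Q(x) + u^{\top}Ru - \gamma^{2}w^{\top}w.
\begin{equation}
0 = \min_{u}\max_{w}\Big[\nabla V(x)^{\top}\big(f(x)+g(x)u+k(x)w\big)
    + Q(x) + u^{\top}Ru - \gamma^{2}w^{\top}w\Big], \qquad V(0)=0.
\end{equation}
% Stationarity gives the saddle-point policies:
% u^{*}(x) = -\tfrac{1}{2}R^{-1}g(x)^{\top}\nabla V(x), \quad
% w^{*}(x) = \tfrac{1}{2\gamma^{2}}k(x)^{\top}\nabla V(x).
```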
5. Computational Intelligence Driven Secure Unmanned Aerial Vehicle Image Classification in Smart City Environment
Authors: Firas Abedi, Hayder M. A. Ghanimi, Abeer D. Algarni, Naglaa F. Soliman, Walid El-Shafai, Ali Hashim Abbas, Zahraa H. Kareem, Hussein Muhi Hariz, Ahmed Alkhayyat. Computer Systems Science & Engineering, SCIE, EI, 2023, No. 12, pp. 3127-3144 (18 pages)
Computational intelligence (CI) is a group of nature-simulated computational models and processes for addressing difficult real-life problems. CI is useful in the UAV domain as it produces efficient, precise, and rapid solutions. Besides, unmanned aerial vehicles (UAV) have developed into a hot research topic in the smart city environment. Despite the benefits of UAVs, security remains a major challenging issue. In addition, deep learning (DL) enabled image classification is useful for several applications such as land cover classification, smart buildings, etc. This paper proposes a novel meta-heuristics with deep learning-driven secure UAV image classification (MDLS-UAVIC) model in a smart city environment. The major purpose of the MDLS-UAVIC algorithm is to securely encrypt the images and classify them into distinct class labels. The proposed MDLS-UAVIC model follows a two-stage process: encryption and image classification. The encryption technique effectively encrypts the UAV images. Next, the image classification process involves an Xception-based deep convolutional neural network for feature extraction. Finally, shuffled shepherd optimization (SSO) with a recurrent neural network (RNN) model is applied for UAV image classification, showing the novelty of the work. The experimental validation of the MDLS-UAVIC approach is tested on a benchmark dataset, and the outcomes are examined with various measures. It achieved a high accuracy of 98%.
Keywords: computational intelligence; unmanned aerial vehicles; deep learning; metaheuristics; smart city; image encryption; image classification
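As a rough illustration of the Xception-based feature-extraction stage described above (not the authors' implementation; the input shape, pooling choice, and downstream use are assumptions), a pretrained Xception backbone in Keras can be used as a frozen feature extractor:

```python
import numpy as np
import tensorflow as tf

# Pretrained Xception backbone used as a frozen feature extractor (assumed 299x299 RGB input).
backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3))
backbone.trainable = False

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: uint8 array of shape (N, 299, 299, 3); returns (N, 2048) feature vectors."""
    x = tf.keras.applications.xception.preprocess_input(images.astype("float32"))
    return backbone.predict(x, verbose=0)

# Hypothetical usage: these 2048-D features would then feed a classifier such as the SSO-tuned RNN.
dummy = np.random.randint(0, 256, size=(4, 299, 299, 3), dtype=np.uint8)
print(extract_features(dummy).shape)  # (4, 2048)
```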
6. Artificial intelligence ecosystem for computational psychiatry: Ideas to practice
Authors: Xin-Qiao Liu, Xin-Yu Ji, Xing Weng, Yi-Fan Zhang. World Journal of Meta-Analysis, 2023, No. 4, pp. 79-91 (13 pages)
Computational psychiatry is an emerging field that not only explores the biological basis of mental illness but also considers diagnosis and identifies the underlying mechanisms. One of the key strengths of computational psychiatry is that it may identify patterns in large datasets that are not easily identifiable. This may help researchers develop more effective treatments and interventions for mental health problems. This paper is a narrative review that surveys the literature and produces an artificial intelligence ecosystem for computational psychiatry. The artificial intelligence ecosystem for computational psychiatry includes data acquisition, preparation, modeling, application, and evaluation. This approach allows researchers to integrate data from a variety of sources, such as brain imaging, genetics, and behavioral experiments, to obtain a more complete understanding of mental health conditions. Through the process of data preprocessing, training, and testing, the data required for model building can be prepared. By using machine learning, neural networks, artificial intelligence, and other methods, researchers have been able to develop diagnostic tools that can accurately identify mental health conditions based on a patient's symptoms and other factors. Despite the continuous development and breakthroughs of computational psychiatry, it has not yet influenced routine clinical practice and still faces many challenges, such as data availability and quality, biological risks, equity, and data protection. As we make progress in this field, it is vital to ensure that computational psychiatry remains accessible and inclusive so that all researchers may contribute to this significant and exciting field.
Keywords: computational psychiatry; big data; artificial intelligence; medical ethics; large-scale online data
7. Construction and application practice of the CIFLog well-logging digital-intelligent cloud platform in Daqing Oilfield (Cited: 1)
Authors: 李宁, 刘英明, 王才志, 原野, 夏守姬. 《大庆石油地质与开发》, CAS, Peking University Core Journal, 2024, No. 3, pp. 17-25 (9 pages)
To address the problems of large volume, diverse types, and complex sources of well-logging data in Daqing Oilfield production, and taking CIFLog, the large-scale well-log processing and interpretation software of China National Petroleum Corporation, as the foundation, a business-requirement-driven development effort adopted a microservice architecture and a distributed cloud-computing technology system for well logging. New functions were developed, including well-logging big-data storage and management, a middle service layer, and cloud-based well-log processing and interpretation applications, forming the Daqing Oilfield well-logging digital-intelligent cloud application platform. At present, the platform has been fully installed and deployed in the relevant units of Daqing Oilfield, with remarkable application results. In particular, at the Daqing Oilfield intelligent decision-making center, the platform has been used directly for on-site geosteering-while-drilling decisions on key horizontal wells, greatly increasing the drilling encounter rate of Class I reservoirs. Future work will focus on new function development, construction of digital-intelligent application scenarios for oilfields, and establishment of a standardized technology system, and the results will be promptly replicated and extended to other oil and gas fields such as the Southwest Oilfield and the Tarim Oilfield. As a pioneering example of the digital-intelligent construction and application of China's oil and gas industrial software, the CIFLog cloud platform is bound to play an increasingly important demonstrative and leading role.
Keywords: Daqing Oilfield; CIFLog well-logging digital-intelligent cloud platform; big data; artificial intelligence; microservice architecture; distributed cloud computing
8. AI+BCI: the beginning of a new intelligence fusing silicon-based and carbon-based computation
Authors: 尹奎英, 遇涛. 《指挥控制与仿真》, 2024, No. 3, pp. 1-11 (11 pages)
We are entering the fourth wave of human development and are at a critical transition from the information society toward an intelligent society that fuses human society, the physical world, and cyberspace. In recent years, computing and information technologies have advanced rapidly, and the unprecedented popularity and success of deep learning have established artificial intelligence (AI) as the frontier of humanity's exploration of machine intelligence. At the same time, benefiting from revolutionary advances in devices and from the development of AI, brain-computer interface (BCI) implantation technology has likewise been rapidly put into practice, which marks the beginning of BCI+AI carbon-silicon fusion. However, the underlying logic of silicon-based and carbon-based computation differs fundamentally, and the intelligence mechanisms of the brain remain to be further explored. The visual-cognition-guided twin AI deep network proposed in this study is a deep network technology driven by individual consciousness; by capturing and interpreting an individual's thinking patterns and creative inspiration, it tailors a unique visual world for each user. In such an environment, everyone becomes the visual master of a world of their own creation, breaking down the barrier between matter and consciousness and displaying rich individuality and creativity.
Keywords: artificial intelligence; brain-computer interface; human brain visual representation; brain visual reconstruction; consciousness twinning
9. Employing Computational Intelligence to Generate More Intelligent and Energy Efficient Living Spaces (Cited: 2)
Authors: Hani Hagras. International Journal of Automation and Computing, EI, 2008, No. 1, pp. 1-9 (9 pages)
Our living environments are gradually being occupied by an abundant number of digital objects that have networking and computing capabilities. After these devices are plugged into a network, they initially advertise their presence and capabilities in the form of services so that they can be discovered and, if desired, exploited by the user or other networked devices. With the increasing number of these devices attached to the network, the complexity of configuring and controlling them increases, which may lead to major processing and communication overhead. Hence, the devices are no longer expected to just act as primitive stand-alone appliances that only provide the facilities and services to the user they are designed for, but also to offer complex services that emerge from unique combinations of devices. This creates the necessity for these devices to be equipped with some sort of intelligence and self-awareness to enable them to be self-configuring and self-programming. However, with this "smart evolution", the cognitive load to configure and control such spaces becomes immense. One way to relieve this load is by employing artificial intelligence (AI) techniques to create an intelligent "presence" where the system will be able to recognize the users and autonomously program the environment to be energy efficient and responsive to the user's needs and behaviours. These AI mechanisms should be embedded in the user's environments and should operate in a non-intrusive manner. This paper shows how computational intelligence (CI), which is an emerging domain of AI, could be employed and embedded in our living spaces to help such environments be more energy efficient, intelligent, adaptive and convenient to the users.
Keywords: computational intelligence (CI); fuzzy systems; neural networks (NNs); genetic algorithms (GAs); intelligent buildings; energy efficiency
10. Artificial Intelligence Model for Software Reusability Prediction System
Authors: R. Subha, Anandakumar Haldorai, Arulmurugan Ramu. Intelligent Automation & Soft Computing, SCIE, 2023, No. 3, pp. 2639-2654 (16 pages)
The most significant invention made in recent years to serve various applications is software. Developing a faultless software system requires the software system design to be resilient. To make the software design more efficient, it is essential to assess the reusability of the components used. This paper proposes a software reusability prediction model named Flexible Random Fit (FRF) based on aging resilience for a Service Net (SN) software system. The reusability prediction model is developed based on a multilevel optimization technique using software characteristics such as cohesion, coupling, and complexity. Metrics are obtained from the SN software system and then subjected to min-max normalization to avoid any saturation during the learning process. The feature extraction process is made more feasible by enriching the data quality via outlier detection. The reusability of the classes is estimated with a tool called Soft Audit. Software reusability can be predicted more effectively with the proposed FRF-ANN (Flexible Random Fit-Artificial Neural Network) algorithm. Performance evaluation shows that the proposed algorithm outperforms all the other techniques, thus ensuring the optimization of software reusability based on aging resilience. The model is then tested using constraint-based testing techniques to make sure that it is effective at optimizing and making predictions.
Keywords: service net; aging resilient; software reusability; evolutionary computing; intelligent computing
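The data-preparation steps mentioned above, min-max normalization of the software metrics and outlier screening, are standard; a minimal sketch follows, with an IQR rule standing in for whatever outlier detector the authors actually used (an assumption on my part).

```python
import numpy as np

def minmax_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each metric column (cohesion, coupling, complexity, ...) into [0, 1]."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / np.where(hi > lo, hi - lo, 1.0)

def iqr_inlier_mask(x: np.ndarray, k: float = 1.5) -> np.ndarray:
    """True for rows whose every column lies within [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75], axis=0)
    iqr = q3 - q1
    ok = (x >= q1 - k * iqr) & (x <= q3 + k * iqr)
    return ok.all(axis=1)

# Hypothetical metric table: rows = components, columns = cohesion, coupling, complexity
metrics = np.array([[0.8, 3, 12], [0.6, 5, 20], [0.9, 2, 9], [0.1, 40, 300]])
clean = metrics[iqr_inlier_mask(metrics)]   # drop the outlying component
print(minmax_normalize(clean))
```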
11. Automated Artificial Intelligence Empowered White Blood Cells Classification Model
Authors: Mohammad Yamin, Abdullah M. Basahel, Mona Abusurrah, Sulafah M. Basahel, Sachi Nandan Mohanty, E. Laxmi Lydia. Computers, Materials & Continua, SCIE, EI, 2023, No. 4, pp. 409-425 (17 pages)
White blood cells (WBC), or leukocytes, are a vital component of the blood that forms the immune system, which is responsible for fighting foreign elements. WBC images can be processed with different data analysis approaches that categorize different kinds of WBC. Conventionally, laboratory tests are carried out to determine the kind of WBC, which is error-prone and time consuming. Recently, deep learning (DL) models have been employed for automated investigation of WBC images in a short time. Therefore, this paper introduces an Aquila Optimizer with Transfer Learning based Automated White Blood Cells Classification (AOTL-WBCC) technique. The presented AOTL-WBCC model executes data normalization and a data augmentation process (rotation and zooming) at the initial stage. In addition, the residual network (ResNet) approach is used for feature extraction, in which the initial hyperparameter values of the ResNet model are tuned by the AO algorithm. Finally, a Bayesian neural network (BNN) classification technique is applied for the identification of WBC images into distinct classes. The experimental validation of the AOTL-WBCC methodology is performed with the help of a Kaggle dataset. The experimental results found that the AOTL-WBCC model outperformed other techniques based on image processing and manual feature engineering approaches under different dimensions.
Keywords: white blood cells; cell engineering; computational intelligence; image classification; transfer learning
12. Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for Clustered IoT Driven Ubiquitous Computing System
Authors: Reda Salama, Mahmoud Ragab. Computer Systems Science & Engineering, SCIE, EI, 2023, No. 9, pp. 2917-2932 (16 pages)
In an Internet of Things (IoT) based system, multi-level client requirements can be fulfilled by incorporating communication technologies with distributed homogeneous networks called ubiquitous computing systems (UCS). The UCS necessitates heterogeneity, management level, and data transmission for distributed users. Simultaneously, security remains a major issue in the IoT-driven UCS. Besides, energy-limited IoT devices need an effective clustering strategy for optimal energy utilization. Recent developments in explainable artificial intelligence (XAI) concepts can be employed to effectively design intrusion detection systems (IDS) for accomplishing security in UCS. In this view, this study designs a novel Blockchain with Explainable Artificial Intelligence Driven Intrusion Detection for IoT Driven Ubiquitous Computing System (BXAI-IDCUCS) model. The major intention of the BXAI-IDCUCS model is to accomplish energy efficacy and security in the IoT environment. To accomplish this, the BXAI-IDCUCS model initially clusters the IoT nodes using an energy-aware duck swarm optimization (EADSO) algorithm. Besides, a deep neural network (DNN) is employed for detecting and classifying intrusions in the IoT network. Lastly, blockchain technology is exploited for secure inter-cluster data transmission processes. To ensure the productive performance of the BXAI-IDCUCS model, a comprehensive experimentation study is applied, and the outcomes are assessed under different aspects. The comparison study emphasized the superiority of the BXAI-IDCUCS model over current state-of-the-art approaches, with a packet delivery ratio of 99.29%, a packet loss rate of 0.71%, a throughput of 92.95 Mbps, energy consumption of 0.0891 mJ, a lifetime of 3529 rounds, and an accuracy of 99.38%.
Keywords: blockchain; internet of things; ubiquitous computing; explainable artificial intelligence; clustering; deep learning
13. Quantum Inspired Differential Evolution with Explainable Artificial Intelligence-Based COVID-19 Detection
Authors: Abdullah M. Basahel, Mohammad Yamin. Computer Systems Science & Engineering, SCIE, EI, 2023, No. 7, pp. 209-224 (16 pages)
Recent advancements in the Internet of Things (IoT), 5G networks, and cloud computing (CC) have led to the development of Human-centric IoT (HIoT) applications that transform human physical monitoring based on machine monitoring. HIoT systems find use in several applications such as smart cities, healthcare, transportation, etc. Besides, HIoT systems and explainable artificial intelligence (XAI) tools can be deployed in the healthcare sector for effective decision-making. The COVID-19 pandemic has become a global health issue that necessitates automated and effective diagnostic tools to detect the disease at the initial stage. This article presents a new quantum-inspired differential evolution with explainable artificial intelligence based COVID-19 Detection and Classification (QIDEXAI-CDC) model for HIoT systems. The QIDEXAI-CDC model aims to identify the occurrence of COVID-19 using XAI tools on HIoT systems. The QIDEXAI-CDC model primarily uses bilateral filtering (BF) as a preprocessing tool to eradicate noise. In addition, RetinaNet is applied for the generation of useful feature vectors from radiological images. For COVID-19 detection and classification, a quantum-inspired differential evolution (QIDE) with kernel extreme learning machine (KELM) model is utilized. The QIDE algorithm helps to appropriately choose the weight and bias values of the KELM model. In order to report the enhanced COVID-19 detection outcomes of the QIDEXAI-CDC model, a wide range of simulations was carried out. Extensive comparative studies reported the supremacy of the QIDEXAI-CDC model over recent approaches.
Keywords: human-centric IoT; quantum computing; explainable artificial intelligence; healthcare; COVID-19 diagnosis
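The bilateral-filter preprocessing step named above is available off the shelf in OpenCV; the sketch below shows the idea on a radiological image loaded from an assumed file path (the path and the filter parameters are illustrative, not values from the paper).

```python
import cv2
import numpy as np

# Load a radiological image in grayscale (hypothetical path).
img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)
if img is None:  # fall back to a synthetic noisy image so the sketch still runs
    rng = np.random.default_rng(0)
    img = np.clip(rng.normal(128, 40, (256, 256)), 0, 255).astype(np.uint8)

# Bilateral filter: smooths noise while preserving edges.
# d = pixel neighbourhood diameter; sigmaColor/sigmaSpace control range/spatial weighting.
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

cv2.imwrite("chest_xray_bf.png", denoised)
```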
14. Artificial Intelligence-Based Image Reconstruction for Computed Tomography: A Survey
Authors: Quan Yan, Yunfan Ye, Jing Xia, Zhiping Cai, Zhilin Wang, Qiang Ni. Intelligent Automation & Soft Computing, SCIE, 2023, No. 6, pp. 2545-2558 (14 pages)
Computed tomography has made significant advances since its introduction in the early 1970s, where researchers mainly focused on the quality of image reconstruction in the early stage. However, radiation exposure poses a health risk, prompting the demand for the lowest possible dose when carrying out CT examinations. To acquire high-quality reconstructed images with low-dose radiation, CT reconstruction techniques have evolved from conventional reconstruction, such as analytical and iterative reconstruction, to reconstruction methods based on artificial intelligence (AI). All these efforts are devoted to constructing high-quality images using only low doses with fast reconstruction speed. In particular, conventional reconstruction methods usually optimize one aspect, while AI-based reconstruction has finally managed to attain all goals in one shot. However, AI-based reconstruction methods have limitations such as the requirement for large datasets, unstable performance, and weak generalizability. This work presents a review and discussion of the classification, commercial use, advantages, and limitations of AI-based image reconstruction methods in CT.
Keywords: computed tomography; image reconstruction; artificial intelligence
15. Computational biology in topical bioactive peptide discovery for cosmeceutical application: a concise review
Authors: Xu-Hui Li, Wen-Rou Su, Fei-Fei Wang, Ke Li, Jing-Yong Zhu, Si-Yu Zhu, Si-Ning Kang, Cong-Fen He, Jun-Xiang Li, Xiao Lin. Biomedical Engineering Communications, 2023, No. 3, pp. 7-15 (9 pages)
Regenerative medicine and anti-aging research have made great strides at the molecular and cellular levels in dermatology and the medical aesthetic field, targeting potential treatments through skin therapeutic and intervention pathways, which makes it possible to develop effective skin regeneration and repair ingredients. With the rapid development of computational biology, bioinformatics, and artificial intelligence (A.I.), the development of new ingredients for regenerative medicine has been greatly accelerated, and the success rate has improved. Some application cases have appeared in topical skin regeneration and repair scenarios. This review briefly introduces the application of bioactive peptides in skin repair and anti-aging as emerging ingredients in cosmeceutics and emphasizes how A.I.-based computational biology technology may accelerate the development of innovative peptide molecules and ultimately translate them into potential skin regenerative and anti-aging scenarios. Two typical research routines are summarized, and current limitations as well as future directions are discussed for broader applications in future research.
Keywords: computational biology; artificial intelligence; cosmeceutical; bioactive peptide; regenerative medicine
16. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. Chinese Physics B, SCIE, EI, CAS, CSCD, 2024, No. 3, pp. 1-23 (23 pages)
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are thirsty for computing power, which is barely satisfied by conventional computing hardware. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand for AI computing power. To address the issue, technical approaches like neuromorphic computing attract great attention because they break the von Neumann architecture and deal with AI algorithms much more parallelly and energy efficiently. Inspired by the architecture of the human neural network, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture like the spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
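As a small, generic illustration of the spiking-neuron dynamics surveyed above, here is a leaky integrate-and-fire (LIF) neuron simulated in discrete time; the time constant, threshold, and input current are arbitrary textbook values, not parameters from the review.

```python
import numpy as np

def lif_neuron(i_in, dt=1e-3, tau=20e-3, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dv/dt = (-(v - v_rest) + i_in) / tau.
    Returns the membrane-potential trace and a boolean spike train."""
    v = np.zeros(len(i_in))
    spikes = np.zeros(len(i_in), dtype=bool)
    vm = v_rest
    for t, i_t in enumerate(i_in):
        vm += dt * (-(vm - v_rest) + i_t) / tau   # leaky integration step
        if vm >= v_th:                            # threshold crossing emits a spike
            spikes[t] = True
            vm = v_reset                          # reset after the spike
        v[t] = vm
    return v, spikes

# A constant supra-threshold input for 200 ms produces a regular spike train.
current = np.full(200, 1.5)
_, s = lif_neuron(current)
print("spike count:", int(s.sum()))
```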
17. Computational intelligence in tropical medicine
Authors: Somsri Wiwanitkit, Viroj Wiwanitkit. Asian Pacific Journal of Tropical Biomedicine, SCIE, CAS, 2016, No. 4, pp. 350-352 (3 pages)
The application of computational technology for medical purposes is a very interesting topic. Knowledge content development and new technology searching using computational technology have become the newest approach in medicine. With advanced computational technology, several omics sciences are available for clarification and prediction in medicine. Computational intelligence is an important application that should be mentioned. Here, the authors detail and discuss computational intelligence in tropical medicine.
Keywords: computer; tropical medicine; computational intelligence; application
18. Play by Design: Developing Artificial Intelligence Literacy through Game-based Learning
Authors: Xiaoxue Du, Xi Wang. Journal of Computer Science Research, 2023, No. 4, pp. 1-12 (12 pages)
The paper proposes an innovative approach aimed at fostering AI literacy through interactive gaming experiences. It designs a game-based prototype for preparing pre-service teachers to innovate teaching practices across disciplines. The simulation, Color Conquest, serves as a strategic game to encourage educators to reconsider their pedagogical practices. It allows teachers to use and develop various scenarios by customizing maps, giving students agency to engage in the complex decision-making process. Additionally, this engagement process provides teachers with an opportunity to develop students' skills in artificial intelligence literacy, as students actively develop strategic thinking, problem-solving, and critical reasoning skills.
Keywords: game-based learning; game-based assessment; artificial intelligence literacy; design thinking; computational thinking; teacher education
19. Outage Analysis of Optimal UAV Cooperation with IRS via Energy Harvesting Enhancement Assisted Computational Offloading
Authors: Baofeng Ji, Ying Wang, Weixing Wang, Shahid Mumtaz, Charalampos Tsimenidis. Computer Modeling in Engineering & Sciences, SCIE, EI, 2024, No. 2, pp. 1885-1905 (21 pages)
The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflective surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server system model. Specifically, the user node accesses the primary user spectrum while adhering to the constraint of satisfying the primary user peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system in Rayleigh channels is derived and analyzed. The study investigates the impact of various system parameters, including the number of UAVs, the peak interference power, and the TS and PS factors, on the system's outage performance through simulation. The proposed system is also compared to two conventional benchmark schemes: optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmark schemes.
Keywords: unmanned aerial vehicle (UAV); intelligent reflective surface (IRS); energy harvesting; computational offloading; outage probability
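To make the outage analysis above concrete, the sketch below Monte Carlo estimates the outage probability of best-UAV selection (maximum instantaneous SNR) over independent Rayleigh fading links; the SNR threshold, average SNR, and number of UAVs are arbitrary illustrative values, and the model omits the IRS and energy-harvesting details of the paper.

```python
import numpy as np

def outage_probability(n_uav: int, avg_snr_db: float, snr_th_db: float,
                       trials: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo outage probability when the UAV relay with the largest
    instantaneous SNR is selected over i.i.d. Rayleigh fading links."""
    rng = np.random.default_rng(seed)
    avg_snr = 10 ** (avg_snr_db / 10)
    snr_th = 10 ** (snr_th_db / 10)
    # |h|^2 is exponential(1) under Rayleigh fading, so instantaneous SNR = avg_snr * |h|^2.
    gains = rng.exponential(1.0, size=(trials, n_uav))
    best_snr = avg_snr * gains.max(axis=1)
    return float(np.mean(best_snr < snr_th))

# Outage drops as more UAVs are available to choose from (selection diversity).
for n in (1, 2, 4):
    print(n, outage_probability(n, avg_snr_db=10, snr_th_db=5))
```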
20. Energy Efficiency Maximization in Mobile Edge Computing Networks via IRS assisted UAV Communications
Authors: Ying Zhang, Weiming Niu, Supu Xiu, Guangchen Mu. Computer Modeling in Engineering & Sciences, SCIE, EI, 2024, No. 2, pp. 1865-1884 (20 pages)
In this paper, we investigate energy efficiency maximization for mobile edge computing (MEC) in intelligent reflecting surface (IRS) assisted unmanned aerial vehicle (UAV) communications. In particular, the UAV can collect the computing tasks of the terrestrial users and transmit the results back to them after computing. We jointly optimize the users' transmit beamforming and uploading ratios, the phase shift matrix of the IRS, and the UAV trajectory to improve the energy efficiency. The formulated optimization problem is highly non-convex and difficult to solve directly. Therefore, we decompose the original problem into three sub-problems. We first propose a successive convex approximation (SCA) based method to design the beamforming of the users and the phase shift matrix of the IRS, and apply the Lagrange dual method to obtain a closed-form expression for the uploading ratios. For the trajectory optimization, we propose a block coordinate descent (BCD) based method to obtain a locally optimal solution. Finally, we propose the overall alternating optimization (AO) based algorithm and analyze its complexity, which is equivalent to or lower than that of existing algorithms. Simulation results show the superiority of the proposed method over existing schemes in terms of energy efficiency.
Keywords: mobile edge computing (MEC); unmanned aerial vehicle (UAV); intelligent reflecting surface (IRS); energy efficiency
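The alternating-optimization structure described above (cycle through the sub-problems, each solved with the other variable blocks held fixed, until the objective stops improving) can be summarized by a generic skeleton such as the one below; the toy objective and closed-form block updates are placeholders, not the paper's actual sub-problem solvers.

```python
import numpy as np

def alternating_optimization(f, update_blocks, x0, tol=1e-6, max_iter=100):
    """Generic AO/BCD loop: repeatedly apply each block update (which optimizes
    one group of variables with the others fixed) until f stops improving."""
    x = dict(x0)
    prev = f(x)
    cur = prev
    for _ in range(max_iter):
        for update in update_blocks:
            x = update(x)
        cur = f(x)
        if abs(prev - cur) < tol * max(1.0, abs(prev)):
            break
        prev = cur
    return x, cur

# Toy example: minimize f(a, b) = (a - 1)^2 + (b + 2)^2 + a*b by alternating over a and b.
f = lambda x: (x["a"] - 1) ** 2 + (x["b"] + 2) ** 2 + x["a"] * x["b"]
upd_a = lambda x: {**x, "a": (2 - x["b"]) / 2}    # closed-form argmin over a with b fixed
upd_b = lambda x: {**x, "b": (-4 - x["a"]) / 2}   # closed-form argmin over b with a fixed
sol, val = alternating_optimization(f, [upd_a, upd_b], {"a": 0.0, "b": 0.0})
print(sol, val)
```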