Abstract: To run an eye-movement signal classification algorithm on a microprocessor, simplify the embedded system design, and improve system efficiency, this paper compares two techniques for deploying artificial intelligence algorithms on microprocessors: STM32CubeMX AI and NanoEdge AI. First, using electrooculography (EOG) data, the classification algorithm is deployed on a microprocessor with each of the two techniques; next, the classification algorithm is run on the microprocessor to classify the EOG signals; finally, the advantages and disadvantages of the two methods are compared. The experimental results show that each deployment approach has trade-offs: deployment with STM32CubeMX AI first requires implementing the classification algorithm on a host computer, which adds implementation difficulty but can improve classification accuracy more effectively; deployment with NanoEdge AI avoids debugging the algorithm on the host computer but cannot be tailored to specific signals.
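Since the STM32CubeMX AI route requires the classifier to be built on a host computer first, a minimal sketch of that host-side step is given below, assuming a small Keras network over fixed-length EOG feature vectors; the feature length, class count, and data loading are illustrative placeholders, not the paper's actual configuration. STM32CubeMX AI (X-CUBE-AI) can then import the saved model and generate C code for the target microcontroller.

```python
# Host-side step for the STM32CubeMX AI route: train a small classifier on
# EOG feature vectors and save it in a format the X-CUBE-AI importer accepts
# (e.g., Keras .h5). Shapes, class count, and data are illustrative
# assumptions, not the paper's actual setup.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 32   # assumed length of one EOG feature vector
NUM_CLASSES = 4     # assumed eye-movement classes (e.g., up/down/left/right)

# Placeholder data; in practice these come from the recorded EOG dataset.
x_train = np.random.randn(1000, NUM_FEATURES).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=1000)

# Keep the network tiny so it fits the microcontroller's flash/RAM budget.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, batch_size=32, verbose=0)

# STM32CubeMX AI converts this file into C code for the target MCU.
model.save("eog_classifier.h5")
```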
Funding: Financed by a grant from the National Social Science Foundation General Project (No. 23BZS010).
Abstract: Jiu Ai Tu (The Moxa Treatment), from the Song dynasty, is the earliest surviving painting devoted to the subject of acupuncture and moxibustion. This paper takes the medical activities depicted in the artwork as its research object and systematically analyzes the external treatment methods for abscesses in the Song dynasty as reflected in Jiu Ai Tu. By examining how abscesses were understood in that period, the paper explores the level of development of external medicine techniques. By analyzing the medical awareness and behavior of patients facing such severe illness, it aims to explore societal cognition of and experience with health and disease. The paper attempts to present the folk medical ecology of the Song dynasty as represented by Jiu Ai Tu.
Abstract: The risk of bias is widely observed throughout the pipeline of generative artificial intelligence (generative AI) systems. To protect the rights of the public and improve the effectiveness of AI regulations, feasible measures to address the bias problem in the context of large-scale data should be proposed as soon as possible. Since bias originates in every part and aspect of the AI product lifecycle, laws and technical measures should consider each of these layers and account for the different causes of bias across data training, modeling, and application design. The Interim Measures for the Administration of Generative AI Services (the Interim Measures), formulated by the Office of the Central Cyberspace Affairs Commission (CAC) and other departments, have taken the initiative to govern AI. However, they lack specific details on issues such as how to prevent the risk of bias and reduce its effect on decision-making. The Interim Measures also fail to account for the causes of bias, and several of their principles require further interpretation. Meanwhile, regulation of generative AI at the global level is still in its early stages. By forming a governance framework, this paper could provide the community with useful experience and play a leading role. The framework includes at least three parts: first, determining the realm of governance and unifying related concepts; second, developing measures at different layers to identify the causes and specific aspects of bias; and third, identifying parties with the skills to take responsibility for detecting bias intrusions and proposing a program for allocating liability among large-scale platform developers.
Abstract: Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousands of alarmed tech leaders recently signed an open letter calling for a pause in AI research to prepare for the catastrophic threats to humanity from uncontrolled AGI (Artificial General Intelligence). Perceived as an “epistemological nightmare”, AGI is believed to be on the anvil with GPT-5. Two computing rules appear responsible for these risks: 1) mandatory third-party permissions, which allow computers to run applications at the expense of introducing vulnerabilities; and 2) the Halting Problem of Turing-complete AI programming languages, which potentially renders AGI unstoppable. The double whammy of these inherent weaknesses remains invincible under legacy systems. A recent cybersecurity breakthrough shows that banning all permissions reduces the computer attack surface to zero, delivering a new zero vulnerability computing (ZVC) paradigm. Deploying ZVC and blockchain, this paper formulates and supports a hypothesis: “Safe, secure, ethical, controllable AGI/QC is possible by conquering the two unassailable rules of computability.” Pursued by a European consortium, testing and proving the proposed hypothesis will have a groundbreaking impact on the future digital infrastructure when AGI/QC starts powering the 75 billion Internet devices expected by 2025.
Abstract: Prompt engineering, the art of crafting effective prompts for artificial intelligence models, has emerged as a pivotal factor in determining the quality and usefulness of AI (Artificial Intelligence)-generated outputs. This practice involves strategically designing and structuring prompts to guide AI models toward desired outcomes, ensuring that they generate relevant, informative, and accurate responses. The significance of prompt engineering cannot be overstated. Well-crafted prompts can significantly enhance the capabilities of AI models, enabling them to perform tasks once thought to be exclusively the domain of humans. By providing clear and concise instructions, prompts can guide AI models to generate creative text, translate languages, write many kinds of creative content, and answer questions in an informative way. Moreover, prompt engineering can help mitigate biases and ensure that AI models produce outputs that are fair, equitable, and inclusive. However, prompt engineering is not without its challenges. Crafting effective prompts requires a deep understanding of both the AI model’s capabilities and the specific task at hand. Additionally, the quality of the prompts can be influenced by factors such as the model’s training data [1] and the complexity of the task. As AI models continue to evolve, prompt engineering will likely become even more critical in unlocking their full potential.
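To make the contrast concrete, here is a minimal, library-free sketch of the structuring idea described above: the same request expressed as a vague one-liner versus a prompt with an explicit role, constraints, and output format. The template fields and example task are illustrative, not a prescribed standard.

```python
# A minimal sketch of prompt engineering: the same task as an unstructured
# request vs. a structured prompt with an explicit role, constraints, and
# output format. The template fields are illustrative assumptions.
def build_prompt(role: str, task: str, constraints: list[str], output_format: str) -> str:
    """Assemble a structured prompt from clearly separated components."""
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Respond in the following format: {output_format}",
    ]
    return "\n".join(lines)

vague = "Tell me about solar panels."
engineered = build_prompt(
    role="a technical writer for a general audience",
    task="Explain how residential solar panels convert sunlight to electricity.",
    constraints=["Use no more than 150 words", "Avoid unexplained jargon"],
    output_format="one short paragraph followed by a three-item summary list",
)
print(engineered)
```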
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62372083, 62072074, 62076054, 62027827, 62002047); the Sichuan Provincial Science and Technology Innovation Platform and Talent Program (Grant No. 2022JDJQ0039); the Sichuan Provincial Science and Technology Support Program (Grant Nos. 2022YFQ0045, 2022YFS0220, 2021YFG0131, 2023YFS0020, 2023YFS0197, 2023YFG0148); and the CCF-Baidu Open Fund (Grant No. 202312).
Abstract: In intelligent medical diagnosis, Artificial Intelligence (AI)’s trustworthiness, reliability, and interpretability are critical, especially in cancer diagnosis. Traditional neural networks, while excellent at processing natural images, often lack interpretability and adaptability when processing high-resolution digital pathological images. This limitation is particularly evident in pathological diagnosis, which is the gold standard of cancer diagnosis and relies on a pathologist’s careful examination and analysis of digital pathological slides to identify the features and progression of the disease. Therefore, the integration of interpretable AI into smart medical diagnosis is not only an inevitable technological trend but also key to improving diagnostic accuracy and reliability. In this paper, we introduce an innovative Multi-Scale Multi-Branch Feature Encoder (MSBE) and present the design of the CrossLinkNet Framework. The MSBE enhances the network’s capability for feature extraction by allowing hyperparameters to configure the number of branches and modules. The CrossLinkNet Framework, serving as a versatile image segmentation network architecture, employs cross-layer encoder-decoder connections for multi-level feature fusion, thereby enhancing feature integration and segmentation accuracy. Comprehensive quantitative and qualitative experiments on two datasets demonstrate that CrossLinkNet, equipped with the MSBE encoder, not only achieves accurate segmentation results but is also adaptable to various tumor segmentation tasks and scenarios by swapping in different feature encoders. Crucially, CrossLinkNet emphasizes the interpretability of the AI model, an essential consideration for medical professionals, providing an in-depth understanding of the model’s decisions and thereby enhancing trust and reliability in AI-assisted diagnostics.
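A multi-scale, multi-branch encoder block in the spirit of the MSBE can be sketched as below, assuming parallel dilated convolutions fused by a 1x1 convolution. The abstract does not specify the actual MSBE design, so the kernel sizes, dilation rates, and the branch-count hyperparameter are assumptions.

```python
# Illustrative Keras sketch of a multi-scale, multi-branch encoder block:
# parallel dilated convolutions with different receptive fields, fused by a
# 1x1 convolution. The branch count is exposed as a hyperparameter, as the
# abstract describes; all layer choices are assumptions, not the paper's design.
import tensorflow as tf

def multi_scale_branch_block(x, filters=64, num_branches=3):
    """One encoder block: num_branches parallel conv paths, then fusion."""
    branches = []
    for d in range(1, num_branches + 1):
        # Growing dilation enlarges each branch's receptive field.
        b = tf.keras.layers.Conv2D(filters, 3, padding="same",
                                   dilation_rate=d, use_bias=False)(x)
        b = tf.keras.layers.BatchNormalization()(b)
        branches.append(tf.keras.layers.ReLU()(b))
    merged = tf.keras.layers.Concatenate()(branches)
    return tf.keras.layers.Conv2D(filters, 1)(merged)  # fuse branches

# A 256x256 RGB patch from a digital pathology slide.
inp = tf.keras.Input(shape=(256, 256, 3))
out = multi_scale_branch_block(inp, filters=64, num_branches=3)
encoder = tf.keras.Model(inp, out)
encoder.summary()
```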
Abstract: This paper presents a detailed and systematic review of one of the most active applications of computational fluid dynamics (CFD): biomedical applications. Beyond its various engineering applications, CFD has started to establish a presence in the biomedical field. Cardiac abnormalities, a common health issue, are an essential subject of investigation for researchers. Diagnostic modalities provide cardiovascular structural information but give insufficient information about the hemodynamics of blood. The study of hemodynamic parameters can be a potential measure for determining cardiovascular abnormalities. Numerous studies have explored the rheological behavior of blood experimentally and numerically. This paper provides insight into how researchers have incorporated the pulsatile nature of blood experimentally, numerically, or through various simulations over the years. It focuses on how machine learning platforms derive outputs based on mass and momentum conservation to predict velocity and pressure profiles, analyzing various cardiac diseases for clinical applications. This will pave the way toward responsive AI in cardiac healthcare, improving productivity and quality in the healthcare industry. The paper shows how CFD is a vital tool for efficiently studying the flow in arteries. The review surveys this biomedical simulation and its applications in healthcare using machine learning and AI. Developing AI-based CFD models can impact society and foster the advancement toward responsive AI.
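For readers unfamiliar with how the pulsatile nature of blood is typically imposed in such simulations, a minimal sketch follows: the inlet velocity is written as a truncated Fourier series over one cardiac cycle and sampled at each solver time step. The heart rate, mean velocity, and harmonic coefficients below are illustrative placeholders, not physiological data.

```python
# Sketch of a pulsatile CFD inlet condition: velocity as a truncated Fourier
# series over one cardiac cycle. Coefficients are placeholders, not data.
import numpy as np

HEART_RATE_HZ = 1.2          # ~72 beats per minute
MEAN_VELOCITY = 0.3          # m/s, assumed mean inlet velocity
# (amplitude, phase) pairs for a few harmonics -- placeholder values.
HARMONICS = [(0.15, 0.0), (0.05, 1.1), (0.02, 2.3)]

def inlet_velocity(t: np.ndarray) -> np.ndarray:
    """Pulsatile inlet velocity u(t) = mean + sum of cosine harmonics."""
    u = np.full_like(t, MEAN_VELOCITY)
    for n, (amp, phase) in enumerate(HARMONICS, start=1):
        u += amp * np.cos(2 * np.pi * n * HEART_RATE_HZ * t + phase)
    return u

t = np.linspace(0.0, 1.0 / HEART_RATE_HZ, 200)  # one cardiac cycle
print(inlet_velocity(t)[:5])  # sampled at solver time steps
```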
Abstract: In the era of the Internet of Things (IoT), the proliferation of connected devices has raised security concerns, increasing the risk of intrusions into diverse systems. Despite the convenience and efficiency offered by IoT technology, the growing number of IoT devices escalates the likelihood of attacks, emphasizing the need for robust security tools to automatically detect and explain threats. This paper introduces a deep learning methodology for detecting and classifying distributed denial of service (DDoS) attacks, addressing a significant security concern within IoT environments. An effective deep transfer learning procedure is applied to leverage deep learning backbones, which are then evaluated on two benchmark datasets of DDoS attacks in terms of accuracy and time complexity. By leveraging several deep architectures, the study conducts thorough binary and multiclass experiments, each varying in the complexity of the attack types classified and reflecting real-world scenarios. Additionally, the study employs an explainable artificial intelligence (XAI) technique to elucidate the contribution of the extracted features to attack detection. The experimental results demonstrate the effectiveness of the proposed method, with the XAI bidirectional long short-term memory (XAI-BiLSTM) model achieving a recall of 99.39%.
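As a reference point, a bidirectional LSTM classifier of the general kind evaluated above can be sketched in Keras as follows, operating on short sequences of per-flow features. The sequence length, feature width, and class count are assumptions; the paper's XAI-BiLSTM configuration is not given in the abstract.

```python
# Minimal Keras sketch of a BiLSTM attack classifier: each sample is a short
# sequence of flow-level feature vectors; the model outputs class
# probabilities (binary here; multiclass needs only a wider final layer).
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES, N_CLASSES = 10, 20, 2  # assumed dimensions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # fwd + bwd pass
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder traffic windows; real inputs come from the benchmark datasets.
x = np.random.randn(256, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=256)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:4], verbose=0).round(3))
```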
Abstract: The issue of opacity within data-driven artificial intelligence (AI) algorithms has become an impediment to their extensive utilization, especially within sensitive domains concerning health, safety, and high profitability, such as chemical engineering (CE). In order to promote reliable AI utilization in CE, this review discusses the concept of transparency within AI utilizations, defined on the basis of both explainable AI (XAI) concepts and key features from within the CE field. This review also highlights the requirements of reliable AI from the aspects of causality (i.e., the correlations between the predictions and inputs of an AI), explainability (i.e., the operational rationales of the workflows), and informativeness (i.e., the mechanistic insights into the systems under investigation). Related techniques are evaluated together with state-of-the-art applications to highlight the significance of establishing reliable AI applications in CE. Furthermore, a comprehensive transparency analysis case study is provided as an example to enhance understanding. Overall, this work provides a thorough discussion of this subject matter in a way that, for the first time, is particularly geared toward chemical engineers in order to raise awareness of responsible AI utilization. With this vital missing link in place, AI is anticipated to serve as a novel and powerful tool that can tremendously aid chemical engineers in solving bottleneck challenges in CE.
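As a concrete illustration of the causality aspect (how strongly predictions depend on each input), a small sketch using permutation importance on a surrogate model is given below; the reactor-style feature names and synthetic data are purely illustrative.

```python
# Sketch of one transparency check in the sense used above (causality: how
# strongly predictions depend on each input) via permutation importance.
# Feature names and data are illustrative, not from any real process.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["temperature", "pressure", "feed_rate", "catalyst_age"]
X = rng.normal(size=(500, 4))
# Assumed ground truth: yield depends mainly on temperature and feed rate.
y = 2.0 * X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name:>14}: {imp:.3f}")  # large drop => prediction relies on it
```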
Funding: Supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project (No. PNURSP2024R432), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Breast cancer is a type of cancer responsible for high mortality rates among women, and its severity demands promising approaches to earlier detection. In light of this, the proposed research leverages the representation ability of the pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors. In addition, the transfer learning model is modified so that it focuses more on tumor cells in the input mammogram. Accordingly, the work proposes an EfficientNet-B0 with a Spatial Attention Layer and XGBoost (ESA-XGBNet) for the binary classification of mammograms. The model is trained, tested, and validated using original and augmented mammogram images from three public datasets: the CBIS-DDSM, INbreast, and MIAS databases. Maximum classification accuracies of 97.585% (CBIS-DDSM), 98.255% (INbreast), and 98.91% (MIAS) are obtained using the proposed ESA-XGBNet architecture, compared with existing models. Furthermore, the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the Attention-Guided Grad-CAM-based explainable AI technique.
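A rough sketch of this two-stage design (pretrained backbone with a spatial-attention gate feeding an XGBoost classifier) might look as follows in Keras; the attention layer here is a generic learned sigmoid gate and the data are placeholders, since the abstract does not specify the actual ESA layer or training details.

```python
# Hedged sketch: EfficientNet-B0 features, re-weighted by a simple learned
# spatial gate, pooled, and classified by XGBoost. Not the paper's exact
# ESA-XGBNet; the gate design and all sizes are assumptions.
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

inp = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.EfficientNetB0(include_top=False,
                                         weights="imagenet")(inp)
# Spatial attention: a sigmoid gate over every position of the feature map
# (1280 = EfficientNet-B0's final feature depth, so shapes match exactly).
gate = tf.keras.layers.Conv2D(1280, 1, activation="sigmoid")(x)
x = tf.keras.layers.Multiply()([x, gate])
x = tf.keras.layers.GlobalAveragePooling2D()(x)
extractor = tf.keras.Model(inp, x)

# Placeholder mammograms; real inputs come from CBIS-DDSM/INbreast/MIAS.
images = np.random.rand(16, 224, 224, 3).astype("float32") * 255.0
labels = np.random.randint(0, 2, size=16)  # benign vs. malignant

features = extractor.predict(images, verbose=0)
clf = XGBClassifier(n_estimators=100).fit(features, labels)
print(clf.predict(features[:4]))
```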
Funding: Supported by the Deanship for Research Innovation, Ministry of Education in Saudi Arabia, through project number IFKSUDR-H122.
Abstract: In the current landscape of the COVID-19 pandemic, the utilization of deep learning in medical imaging, especially in chest computed tomography (CT) scan analysis for virus detection, has become increasingly significant. Despite its potential, deep learning’s “black box” nature has been a major impediment to its broader acceptance in clinical environments, where transparency in decision-making is imperative. To bridge this gap, our research integrates Explainable AI (XAI) techniques, specifically the Local Interpretable Model-Agnostic Explanations (LIME) method, with advanced deep learning models. This integration forms a sophisticated and transparent framework for COVID-19 identification, enhancing the capability of standard Convolutional Neural Network (CNN) models through transfer learning and data augmentation. Our approach leverages the refined DenseNet201 architecture for superior feature extraction and employs data augmentation strategies to foster robust model generalization. The pivotal element of our methodology is the use of LIME, which demystifies the AI decision-making process, providing clinicians with clear, interpretable insights into the AI’s reasoning. This unique combination of an optimized Deep Neural Network (DNN) with LIME not only elevates the precision in detecting COVID-19 cases but also equips healthcare professionals with a deeper understanding of the diagnostic process. Our method, validated on the SARS-COV-2 CT-Scan dataset, demonstrates exceptional diagnostic accuracy, with performance metrics that reinforce its potential for seamless integration into modern healthcare systems. This innovative approach marks a significant advancement in creating explainable and trustworthy AI tools for medical decision-making in the ongoing battle against COVID-19.
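A condensed sketch of the LIME step described above follows: a DenseNet201-based classifier is wrapped in a prediction function, and LIME highlights the CT regions that drove the top prediction. The two-class head, preprocessing, and placeholder CT slice are assumptions; the paper's full pipeline is not reproduced here.

```python
# Hedged sketch of LIME over a DenseNet201-based classifier: wrap the model
# in a predict function and ask LIME which superpixels drove the prediction.
import numpy as np
import tensorflow as tf
from lime import lime_image

# DenseNet201 backbone with an assumed binary head (COVID vs. non-COVID).
base = tf.keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                         pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])

def predict_fn(images: np.ndarray) -> np.ndarray:
    """LIME passes perturbed image batches here; return class probabilities."""
    x = tf.keras.applications.densenet.preprocess_input(images.copy())
    return model.predict(x, verbose=0)

ct_slice = np.random.rand(224, 224, 3)  # placeholder for a real CT slice
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(ct_slice, predict_fn,
                                         top_labels=2, num_samples=200)
# Mask of the superpixels most responsible for the top predicted class.
_, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                         positive_only=True, num_features=5)
print(mask.shape)
```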
Abstract: The rapid integration of artificial intelligence (AI) into critical sectors has revealed a complex landscape of cybersecurity challenges that are unique to these advanced technologies. AI systems, with their extensive data dependencies and algorithmic complexities, are susceptible to a broad spectrum of cyber threats that can undermine their functionality and compromise their integrity. This paper provides a detailed analysis of these threats, which include data poisoning, adversarial attacks, and systemic vulnerabilities that arise from the AI’s operational and infrastructural frameworks. This paper critically examines the effectiveness of existing defensive mechanisms, such as adversarial training and threat modeling, that aim to fortify AI systems against such vulnerabilities. In response to the limitations of current approaches, this paper explores a comprehensive framework for the design and implementation of robust AI systems. This framework emphasizes the development of dynamic, adaptive security measures that can evolve in response to new and emerging cyber threats, thereby enhancing the resilience of AI systems. Furthermore, the paper addresses the ethical dimensions of AI cybersecurity, highlighting the need for strategies that not only protect systems but also preserve user privacy and ensure fairness across all operations. In addition to current strategies and ethical concerns, this paper explores future directions in AI cybersecurity.
Funding: Funded by Woosong University Academic Research 2024.
Abstract: Machine fault diagnostics are essential for industrial operations, and advancements in machine learning have significantly advanced these systems by providing accurate predictions and expedited solutions. Machine learning models, especially those utilizing complex algorithms like deep learning, have demonstrated major potential in extracting important information from large operational datasets. Despite their efficiency, machine learning models face challenges, making Explainable AI (XAI) crucial for improving their understandability and fine-tuning. This study examines the importance of feature contribution and selection using XAI in the diagnosis of machine faults. The technique is applied to evaluate different machine learning algorithms: Extreme Gradient Boosting, Support Vector Machine, Gaussian Naive Bayes, and Random Forest classifiers are used alongside Logistic Regression (LR) as a baseline model, and their efficacy and simplicity are evaluated thoroughly with empirical analysis. XAI is used as a targeted feature-selection technique to select among 29 time- and frequency-domain features. The XAI approach is lightweight, is trained with only the targeted features, and achieves results similar to the traditional approach: accuracy without XAI on the baseline LR is 79.57%, whereas the approach with XAI on LR reaches 80.28%.
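One plausible reading of this pipeline, sketched below: rank the 29 features by mean absolute SHAP value from a tree model, keep the top k, and retrain the logistic-regression baseline on just those. The synthetic data, the value of k, and the choice of SHAP's TreeExplainer are assumptions for illustration.

```python
# Sketch of XAI-driven feature selection: rank 29 time/frequency-domain
# features by mean |SHAP| from a tree model, keep the top k, and retrain a
# lightweight logistic-regression baseline. Data and k are illustrative.
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 29))                  # 29 extracted features
y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)  # assumed fault label

ranker = XGBClassifier(n_estimators=100).fit(X, y)
shap_values = shap.TreeExplainer(ranker).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature

k = 8
top_features = np.argsort(importance)[::-1][:k]
lr = LogisticRegression(max_iter=1000).fit(X[:, top_features], y)
print("selected:", sorted(top_features.tolist()),
      "accuracy:", lr.score(X[:, top_features], y))
```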
Funding: This research is supported by “Research on Legal Issues Caused by Sora from the Perspective of Copyright Law” (YK20240094) of the Xihua University Science and Technology Innovation Competition Project for Postgraduate Students (cultivation project).
Abstract: Text-to-video artificial intelligence (AI) is a new product of the continuous development of digital technology over recent years. The emergence of various text-to-video AI models, including Sora, is driving the proliferation of content generated through concrete imagery. However, the content generated by text-to-video AI raises significant issues, such as unclear work identification, ambiguous copyright ownership, and widespread copyright infringement. These issues can hinder the development of text-to-video AI in the creative fields and impede the prosperity of China’s social and cultural arts. Therefore, this paper proposes three recommendations within a legal framework: (a) categorizing the content generated by text-to-video AI as audiovisual works; (b) clarifying the copyright ownership model for text-to-video AI works; and (c) reasonably delineating the responsibilities of the parties involved in text-to-video AI works. The aim is to mitigate the copyright risks associated with content generated by text-to-video AI and to promote the healthy development of text-to-video AI in the creative fields.
Funding: This research is funded by the Researchers Supporting Project (No. RSPD2024R1027), King Saud University, Riyadh, Saudi Arabia.
Abstract: Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality. This study addresses the pressing issue of brain tumor classification using magnetic resonance imaging (MRI). It focuses on distinguishing between Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG): LGGs are benign and typically manageable with surgical resection, while HGGs are malignant and more aggressive. The research introduces an innovative custom convolutional neural network (CNN) model, GliomaCNN, which stands out as a lightweight CNN model compared to its predecessors. The research utilized the BraTS 2020 dataset for its experiments. Integrated with a gradient-boosting algorithm, GliomaCNN has achieved an impressive accuracy of 99.1569%. The model’s interpretability is ensured through SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM++), which provide insights into the regions critical to classification outcomes. Despite the challenge of identifying tumors in images without visible signs, the model demonstrates remarkable performance in this critical medical application, offering a promising tool for accurate brain tumor diagnosis and paving the way for enhanced early detection and treatment of brain tumors.
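To ground the interpretability claim, a compact Grad-CAM sketch (the plain-gradient precursor of the Grad-CAM++ method named above) for an arbitrary Keras CNN is shown below. The stand-in CNN, input size, and two-class head are placeholders; GliomaCNN's architecture is not given in the abstract.

```python
# Hedged Grad-CAM sketch: locate which regions of the last conv feature map
# pushed the model toward its top predicted class. The tiny CNN below is a
# placeholder for GliomaCNN, whose architecture the abstract does not state.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, activation="relu", name="last_conv"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # LGG vs. HGG (assumed)
])

def grad_cam(model, image, conv_layer="last_conv"):
    """Return a normalized heatmap over the named conv layer's positions."""
    grad_model = tf.keras.Model(model.input,
                                [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis])
        class_score = preds[:, int(tf.argmax(preds[0]))]
    grads = tape.gradient(class_score, conv_out)       # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))       # one weight per channel
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8))[0].numpy()

heatmap = grad_cam(model, np.random.rand(128, 128, 1).astype("float32"))
print(heatmap.shape)  # spatial importance map over the last conv features
```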