Journal Literature
233,615 articles found
1. Federated Learning on Internet of Things: Extensive and Systematic Review
Authors: Meenakshi Aggarwal, Vikas Khullar, Sunita Rani, Thomas André Prola, Shyama Barna Bhattacharjee, Sarowar Morshed Shawon, Nitin Goyal. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 1795-1834 (40 pages)
The proliferation of IoT devices requires innovative approaches to gaining insights while preserving privacy and resources amid unprecedented data generation. However, FL development for IoT is still in its infancy and needs to be explored in various areas to understand the key challenges for deployment in real-world scenarios. The paper systematically reviewed the available literature using the PRISMA guiding principle. The study aims to provide a detailed overview of the increasing use of FL in IoT networks, including the architecture and challenges. A systematic review approach is used to collect, categorize, and analyze FL-IoT-based articles. A search was performed in the IEEE, Elsevier, Arxiv, ACM, and WOS databases, and 92 articles were finally examined. Inclusion criteria were publication in English and the keywords "FL" and "IoT". The methodology begins with an overview of recent advances in FL and the IoT, followed by a discussion of how these two technologies can be integrated. More specifically, we examine and evaluate the capabilities of FL by discussing communication protocols, frameworks, and architecture. We then present a comprehensive analysis of the use of FL in a number of key IoT applications, including smart healthcare, smart transportation, smart cities, smart industry, smart finance, and smart agriculture. The key findings from this analysis of FL IoT services and applications are also presented. Finally, we performed a comparative analysis of FL with IID (independent and identically distributed) and non-IID data against traditional centralized deep learning (DL) approaches. We concluded that FL has better performance, especially in terms of privacy protection and resource utilization. FL is excellent for preserving privacy because model training takes place on individual devices or edge nodes, eliminating the need for centralized data aggregation, which poses significant privacy risks. To facilitate development in this rapidly evolving field, the insights presented are intended to help practitioners and researchers navigate the complex terrain of FL and IoT.
Keywords: Internet of Things; federated learning; PRISMA; framework of FL; applications of FL; data privacy; communication
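A minimal sketch of the federated-averaging workflow that the review contrasts with centralized DL; the toy linear model and synthetic client data below are illustrative assumptions, not material from the paper.

```python
import numpy as np

def local_update(weights, client_data, lr=0.01, epochs=1):
    """One client's local training step (toy linear model, squared loss)."""
    w = weights.copy()
    X, y = client_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, clients):
    """Aggregate client updates weighted by local dataset size (FedAvg)."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_models = [local_update(global_w, c) for c in clients]
    return np.average(local_models, axis=0, weights=sizes)

# toy non-IID setting: each "IoT device" holds its own slice of data
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
w = np.zeros(3)
for rnd in range(10):          # communication rounds
    w = fed_avg(w, clients)    # raw data never leaves the devices
```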
2. Astrocytic endothelin-1 overexpression impairs learning and memory ability in ischemic stroke via altered hippocampal neurogenesis and lipid metabolism [Cited: 2]
Authors: Jie Li, Wen Jiang, Yuefang Cai, Zhenqiu Ning, Yingying Zhou, Chengyi Wang, Sookja Ki Chung, Yan Huang, Jingbo Sun, Minzhen Deng, Lihua Zhou, Xiao Cheng. Neural Regeneration Research (SCIE, CAS, CSCD), 2024, No. 3, pp. 650-656 (7 pages)
Vascular etiology is the second most prevalent cause of cognitive impairment globally. Endothelin-1, which is produced and secreted by endothelial cells and astrocytes, is implicated in the pathogenesis of stroke. However, the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood. Here, using mice in which astrocytic endothelin-1 was overexpressed, we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia (1 hour of ischemia; 7 days, 28 days, or 3 months of reperfusion). We also revealed that astrocytic endothelin-1 overexpression contributed to neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion. Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6, which were differentially expressed in the brain, were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke. Moreover, the levels of the enriched differentially expressed proteins were closely related to lipid metabolism, as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis. Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine, sphingomyelin, and phosphatidic acid. Overall, this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
Keywords: astrocytic endothelin-1; dentate gyrus; differentially expressed proteins; hippocampus; ischemic stroke; learning and memory deficits; lipid metabolism; neural stem cells; neurogenesis; proliferation
3. ThyroidNet: A Deep Learning Network for Localization and Classification of Thyroid Nodules
Authors: Lu Chen, Huaqiang Chen, Zhikai Pan, Sheng Xu, Guangsheng Lai, Shuwen Chen, Shuihua Wang, Xiaodong Gu, Yudong Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 361-382 (22 pages)
Aim: This study aims to establish an artificial intelligence model, ThyroidNet, to accurately diagnose thyroid nodules using deep learning techniques. Methods: A novel method, ThyroidNet, is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules. First, we propose the multitask TransUnet, which combines the TransUnet encoder and decoder with multitask learning. Second, we propose the DualLoss function, tailored to the thyroid nodule localization and classification tasks. It balances the learning of the localization and classification tasks to help improve the model's generalization ability. Third, we introduce strategies for augmenting the data. Finally, we submit a novel deep learning model, ThyroidNet, to accurately detect thyroid nodules. Results: ThyroidNet was evaluated on private datasets and compared with other existing methods, including U-Net and TransUnet. Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules, improving accuracy by 3.9% and 1.5%, respectively. Conclusion: ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks. Future research directions include optimization of the model structure, expansion of the dataset size, reduction of computational complexity and memory requirements, and exploration of additional applications of ThyroidNet in medical image analysis.
Keywords: ThyroidNet; deep learning; TransUnet; multitask learning; medical image analysis
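The paper's exact DualLoss formulation is not given in the abstract; the sketch below only illustrates the general idea of a weighted combination of a localization (segmentation) loss and a classification loss, with the balancing factor alpha as an assumption.

```python
import torch
import torch.nn as nn

class DualLoss(nn.Module):
    """Weighted sum of a localization (segmentation) loss and a
    classification loss; alpha is a hypothetical balancing factor."""
    def __init__(self, alpha=0.5):
        super().__init__()
        self.alpha = alpha
        self.seg_loss = nn.BCEWithLogitsLoss()   # nodule mask vs. prediction
        self.cls_loss = nn.CrossEntropyLoss()    # nodule class label

    def forward(self, seg_logits, seg_target, cls_logits, cls_target):
        return (self.alpha * self.seg_loss(seg_logits, seg_target)
                + (1.0 - self.alpha) * self.cls_loss(cls_logits, cls_target))

# usage with dummy shapes: (batch, 1, H, W) masks and (batch, n_classes) logits
loss_fn = DualLoss(alpha=0.6)
loss = loss_fn(torch.randn(2, 1, 64, 64), torch.rand(2, 1, 64, 64).round(),
               torch.randn(2, 2), torch.tensor([0, 1]))
```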
4. Machine Learning Empowered Security and Privacy Architecture for IoT Networks with the Integration of Blockchain
Authors: Sohaib Latif, M. Saad Bin Ilyas, Azhar Imran, Hamad Ali Abosaq, Abdulaziz Alzubaidi, Vincent Karovic Jr. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 353-379 (27 pages)
The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection owing to their vulnerability to a single point of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices. Previous machine learning approaches were also unable to detect denial-of-service (DoS) attacks. This study introduces a novel decentralized and secure framework for blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify the signature for access. Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to effectively validate devices with intelligent contracts as trusted or untrusted, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results show that the suggested framework performs well, with accuracy, precision, sensitivity, recall, and F-measure of 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Keywords: machine learning; Internet of Things; blockchain; data privacy; security; Industry 4.0
5. Model Agnostic Meta-Learning (MAML)-Based Ensemble Model for Accurate Detection of Wheat Diseases Using Vision Transformer and Graph Neural Networks
Authors: Yasir Maqsood, Syed Muhammad Usman, Musaed Alhussein, Khursheed Aurangzeb, Shehzad Khalid, Muhammad Zubair. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2795-2811 (17 pages)
Wheat is a critical crop, extensively consumed worldwide, and its production enhancement is essential to meet escalating demand. The presence of diseases like stem rust, leaf rust, yellow rust, and tan spot significantly diminishes wheat yield, making the early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises due to the scarcity of RGB images for multiple diseases, class imbalance in existing public datasets, and the difficulty in extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized architecture of Vision Transformers (ViT), where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks. This paper also proposes a Model Agnostic Meta Learning (MAML)-based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model performs better in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included for detection, along with other modalities such as pests and weeds.
Keywords: wheat disease detection; deep learning; vision transformer; graph neural network; model agnostic meta learning
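A minimal first-order MAML-style meta-update, sketched under the assumption of a plain linear classifier head; the paper's ViT+GNN feature extractor and ensemble construction are not reproduced here, and the class count and data are synthetic placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(128, 4)                      # 4 hypothetical disease classes
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inner_lr = 0.01

def inner_adapt(x_s, y_s):
    """One gradient step on a task's support set, returning adapted weights."""
    grads = torch.autograd.grad(loss_fn(model(x_s), y_s), model.parameters())
    return [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

def query_loss(adapted, x_q, y_q):
    w, b = adapted                             # adapted weight and bias of the linear head
    return loss_fn(x_q @ w.t() + b, y_q)

for step in range(100):                        # meta-training loop on random toy tasks
    meta_opt.zero_grad()
    for _ in range(4):                         # a few tasks per meta-batch
        x_s, y_s = torch.randn(8, 128), torch.randint(0, 4, (8,))
        x_q, y_q = torch.randn(8, 128), torch.randint(0, 4, (8,))
        query_loss(inner_adapt(x_s, y_s), x_q, y_q).backward()
    meta_opt.step()                            # first-order meta-gradient step
```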
6. Cybernet Model: A New Deep Learning Model for Cyber DDoS Attacks Detection and Recognition
Authors: Azar Abid Salih, Maiwan Bahjat Abdulrazaq. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1275-1295 (21 pages)
Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep Learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and analyzing extensive data volumes lead to the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., less than 225,000, hence the name "lightweight." This not only helps reduce the number of computations required but also results in faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), which makes them unique compared to earlier existing architectures and results in better performance measures. To validate their robustness and effectiveness, they were tested on the CIC-DDoS2019 dataset, which is an imbalanced and large dataset that contains different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score. Furthermore, they outperformed the existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cyber security research domains to successfully identify different types of attacks with a high detection and recognition rate.
Keywords: deep learning; CNN; LSTM; Cybernet model; DDoS recognition
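A hedged sketch of the parallel-branch idea (1D CNN and LSTM features concatenated before classification); the layer sizes, input width, and the way the flow record is fed to each branch are illustrative assumptions, not the Cybernet configuration itself.

```python
import torch
import torch.nn as nn

class LightweightDDoSNet(nn.Module):
    """Parallel 1D-CNN and LSTM branches whose features are concatenated."""
    def __init__(self, n_features=77, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))              # -> (batch, 32, 1)
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=32,
                            batch_first=True)
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, x):                         # x: (batch, n_features)
        c = self.cnn(x.unsqueeze(1)).squeeze(-1)  # CNN branch over the feature axis
        _, (h, _) = self.lstm(x.unsqueeze(1))     # LSTM branch (length-1 sequence here)
        return self.head(torch.cat([c, h[-1]], dim=1))

net = LightweightDDoSNet()
print(sum(p.numel() for p in net.parameters()))   # stays well under 225,000
```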
7. Machine learning and bioinformatics to identify biomarkers in response to Burkholderia pseudomallei infection in mice
Authors: Yao Fang, Fei Xia, Feifei Tian, Lei Qu, Fang Yang, Juan Fang, Zhenhong Hu, Haichao Liu. BIOCELL (SCIE), 2024, No. 4, pp. 613-621 (9 pages)
Objective: In the realm of Class I pathogens, Burkholderia pseudomallei (BP) stands out for its propensity to induce severe pathogenicity. Investigating the intricate interactions between BP and host cells is imperative for comprehending the dynamics of BP infection and discerning biomarkers indicative of the host cell response process. Methods: mRNA extraction from BP-infected mouse macrophages constituted the initial step of our study. Employing gene expression arrays, the extracted RNA underwent conversion into digital signals. The percentile shift method facilitated data processing, with the identification of genes manifesting significant differences accomplished through the application of the t-test. Subsequently, a comprehensive analysis involving Gene Ontology enrichment and Kyoto Encyclopedia of Genes and Genomes pathways was conducted on the differentially expressed genes (DEGs). Leveraging the ESTIMATE algorithm, gene signatures were utilized to compute risk scores for gene expression data. Support vector machine analysis and gene enrichment scores were instrumental in establishing correlations between biomarkers and macrophages, followed by an evaluation of the predictive power of the identified biomarkers. Results: The functional and pathway associations of the DEGs predominantly centered around G protein-coupled receptors. A noteworthy positive correlation emerged between the blue module, consisting of 416 genes, and the StromaScore. FZD4, identified through support vector machine analysis among intersecting genes, indicated a robust interaction with macrophages, suggesting its potential as a robust biomarker. FZD4 exhibited commendable predictive efficacy, with BP infection inducing its expression in both macrophages and mouse lung tissue. Western blotting in macrophages confirmed a significant upregulation of FZD4 expression from 0.5 to 24 h post-infection. In mouse lung tissue, FZD4 manifested higher expression in the cytoplasm of pulmonary epithelial cells in BP-infected lungs than in the control group. Conclusion: These findings underscore the upregulation of FZD4 expression by BP in both macrophages and lung tissue, pointing to its prospective role as a biomarker in the pathogenesis of BP infection.
Keywords: Burkholderia pseudomallei; microarray assay; machine learning; bioinformatics; FZD4
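A hedged sketch of the DEG-selection step described in the Methods (t-test on infected vs. control expression); the expression matrix, gene names, and thresholds below are synthetic illustrations, not the study's data or cutoffs.

```python
import numpy as np
from scipy import stats

# Hypothetical log-scale expression matrix: rows = genes, columns = samples.
rng = np.random.default_rng(1)
genes = [f"gene_{i}" for i in range(500)]
infected = rng.normal(loc=5.0, scale=1.0, size=(500, 6))
control = rng.normal(loc=5.0, scale=1.0, size=(500, 6))
infected[:20] += 2.0                      # spike in some "true" DEGs for the toy example

t_stat, p_val = stats.ttest_ind(infected, control, axis=1)
log2_fc = infected.mean(axis=1) - control.mean(axis=1)   # difference of log values
degs = [g for g, p, fc in zip(genes, p_val, log2_fc)
        if p < 0.05 and abs(fc) > 1.0]                   # illustrative thresholds
print(len(degs), degs[:5])
```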
8. Autonomous Vehicle Platoons in Urban Road Networks: A Joint Distributed Reinforcement Learning and Model Predictive Control Approach
Authors: Luigi D'Alfonso, Francesco Giannini, Giuseppe Franzè, Giuseppe Fedele, Francesco Pupo, Giancarlo Fortino. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 1, pp. 141-156 (16 pages)
In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. Such tasks are here combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit for improving its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand, it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure, involving the SUMO and MATLAB platforms, is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally, by considering as an operating scenario a real entire city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed with the aim of highlighting the main features of the proposed approach. Moreover, it is important to underline that in different operating scenarios the proposed reinforcement learning scheme is capable of significantly reducing traffic congestion phenomena when compared with well-reputed competitors.
Keywords: distributed model predictive control; distributed reinforcement learning; routing decisions; urban road networks
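A hedged sketch of the RL-to-MPC linkage for a single double-integrator vehicle: a placeholder routing policy emits a position set-point, and an unconstrained finite-horizon MPC (solved as a least-squares tracking problem) applies the first control move. The horizon, weights, and the stand-in policy are assumptions; the paper's trained deep-RL unit and constrained MPC bank are not reproduced.

```python
import numpy as np

dt, H, lam = 0.1, 20, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])            # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
C = np.array([[1.0, 0.0]])                       # track position only

def mpc_step(x0, setpoint):
    """Unconstrained finite-horizon MPC: least-squares tracking, apply first move."""
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(H)])
    Gamma = np.zeros((2 * H, H))
    for k in range(H):
        for j in range(k + 1):
            Gamma[2*k:2*k+2, j:j+1] = np.linalg.matrix_power(A, k - j) @ B
    Cs = np.kron(np.eye(H), C)                   # picks positions out of the stacked state
    M = np.vstack([Cs @ Gamma, np.sqrt(lam) * np.eye(H)])
    r = np.concatenate([setpoint - (Cs @ Phi @ x0).ravel(), np.zeros(H)])
    U = np.linalg.lstsq(M, r, rcond=None)[0]
    return U[0]

def rl_routing_policy(state):
    """Placeholder for the trained deep-RL unit: maps traffic data to the next
    waypoint; here it simply advances the set-point by a fixed distance."""
    return state[0] + 5.0                        # hypothetical next junction position

x = np.array([0.0, 0.0])
for step in range(100):
    setpoint = rl_routing_policy(x)              # RL action -> set-point
    u = mpc_step(x, setpoint)                    # MPC tracks the set-point
    x = A @ x + B.flatten() * u                  # vehicle state fed back to the RL unit
```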
9. Heterophilic Graph Neural Network Based on Spatial and Frequency Domain Adaptive Embedding Mechanism
Authors: Lanze Zhang, Yijun Gu, Jingjie Peng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 5, pp. 1701-1731 (31 pages)
Graph Neural Networks (GNNs) play a significant role in tasks related to homophilic graphs. Traditional GNNs, based on the assumption of homophily, employ low-pass filters for neighboring nodes to achieve information aggregation and embedding. However, in heterophilic graphs, nodes from different categories often establish connections, while nodes of the same category are located further apart in the graph topology. This characteristic poses challenges to traditional GNNs, leading to issues of "distant node modeling deficiency" and "failure of the homophily assumption". In response, this paper introduces the Spatial-Frequency domain Adaptive Heterophilic Graph Neural Network (SFA-HGNN), which integrates adaptive embedding mechanisms for both spatial and frequency domains to address the aforementioned issues. Specifically, for the first problem, we propose the "Distant Spatial Embedding Module", aiming to select and aggregate distant nodes through high-order random walk transition probabilities to enhance modeling capabilities. For the second issue, we design the "Proximal Frequency Domain Embedding Module", constructing adaptive filters to separate high- and low-frequency signals of nodes, and introduce frequency-domain guided attention mechanisms to fuse the relevant information, thereby reducing the noise introduced by the failure of the homophily assumption. We deploy SFA-HGNN on six publicly available heterophilic networks, achieving state-of-the-art results on four of them. Furthermore, we elaborate on the hyperparameter selection mechanism and validate the performance of each module through experimentation, demonstrating a positive correlation between "node structural similarity", "node attribute vector similarity", and "node homophily" in heterophilic networks.
Keywords: heterophilic graph; graph neural network; graph representation learning; failure of the homophily assumption
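A small sketch of the idea behind selecting distant nodes via high-order random-walk transition probabilities: compute the k-step transition matrix of the row-normalized adjacency and keep the highest-probability non-local candidates per node. The walk length, candidate count, and toy graph are assumptions, not the module's actual configuration.

```python
import numpy as np

def high_order_neighbors(adj, k=3, top=5):
    """Score distant nodes with P^k = (D^-1 A)^k and keep the `top` per row."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.clip(deg, 1, None)              # row-stochastic transition matrix
    Pk = np.linalg.matrix_power(P, k)            # k-hop transition probabilities
    np.fill_diagonal(Pk, 0.0)                    # ignore returning to the node itself
    return np.argsort(-Pk, axis=1)[:, :top]      # indices of candidate distant nodes

adj = (np.random.rand(10, 10) < 0.3).astype(float)
adj = np.maximum(adj, adj.T)                     # make the toy graph undirected
print(high_order_neighbors(adj, k=3, top=3))
```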
10. Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models
Authors: Weisi Chen, Walayat Hussain, Francesco Cauteruccio, Xu Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 187-224 (38 pages)
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have experienced mediocre results, deep learning has largely contributed to the elevation of prediction performance. Currently, an up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components, and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models like convolutional neural networks (CNN) that are capable of extracting spatial dependencies within data, and long short-term memory (LSTM) that is designed for handling temporal dependencies; and hybrid models integrating CNN, LSTM, attention mechanism (AM), and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models like CNN-LSTM and CNN-LSTM-AM have in general been reported to be superior in performance to standalone models like the CNN-only model. Some remaining challenges have been discussed, including non-friendliness for finance domain experts, delayed prediction, domain knowledge negligence, lack of standards, and inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare, and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Keywords: financial time series prediction; convolutional neural network; long short-term memory; deep learning; attention mechanism; finance
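An illustrative CNN-LSTM-AM hybrid of the kind the review discusses, with a convolutional front end, an LSTM over time, and additive attention pooling; the layer sizes, attention form, and input window are assumptions rather than any specific surveyed model.

```python
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    """Toy CNN-LSTM-AM hybrid for one-step-ahead prediction."""
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # additive attention scores
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                           # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        seq, _ = self.lstm(h)                       # (batch, time, hidden)
        w = torch.softmax(self.attn(seq), dim=1)    # attention over time steps
        context = (w * seq).sum(dim=1)              # weighted temporal summary
        return self.out(context)

model = CNNLSTMAttention()
pred = model(torch.randn(8, 30, 5))                 # 30-step window of 5 indicators
```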
11. Automated machine learning for rainfall-induced landslide hazard mapping in Luhe County of Guangdong Province, China
Authors: Tao Li, Chen-chen Xie, Chong Xu, Wen-wen Qi, Yuan-dong Huang, Lei Li. China Geology (CAS, CSCD), 2024, No. 2, pp. 315-329 (15 pages)
Landslide hazard mapping is essential for regional landslide hazard management. The main objective of this study is to construct a rainfall-induced landslide hazard map of Luhe County, China based on an automated machine learning framework (AutoGluon). A total of 2241 landslides were identified from satellite images before and after the rainfall event, and 10 impact factors, including elevation, slope, aspect, normalized difference vegetation index (NDVI), topographic wetness index (TWI), lithology, land cover, distance to roads, distance to rivers, and rainfall, were selected as indicators. The WeightedEnsemble model, which is an ensemble of 13 basic machine learning models weighted together, was used to output the landslide hazard assessment results. The results indicate that landslides mainly occurred in the central part of the study area, especially in Hetian and Shanghu. In total, 102.44 s were spent training all the models, and the ensemble model WeightedEnsemble achieved an Area Under the Curve (AUC) value of 92.36% on the test set. In addition, 14.95% of the study area was determined to be at very high hazard, with a landslide density of 12.02 per square kilometer. This study serves as a significant reference for the prevention and mitigation of geological hazards and land use planning in Luhe County.
Keywords: landslide hazard; heavy rainfall; hazard mapping; hazard assessment; automated machine learning; shallow landslide; visual interpretation; Luhe County; geological hazards survey engineering
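A hedged sketch of the AutoGluon workflow described above; the CSV file names and the label column are hypothetical stand-ins for the study's sample table built from the ten conditioning factors, and the time limit is illustrative.

```python
import pandas as pd
from autogluon.tabular import TabularPredictor

train_df = pd.read_csv("luhe_train.csv")      # assumed pre-built factor table with a
test_df = pd.read_csv("luhe_test.csv")        # binary "landslide" label column

# AutoGluon fits a set of base models and stacks them into a WeightedEnsemble.
predictor = TabularPredictor(label="landslide", eval_metric="roc_auc").fit(
    train_df, time_limit=120)                 # seconds of training budget

print(predictor.leaderboard(test_df))          # per-model and ensemble AUC
hazard_prob = predictor.predict_proba(test_df) # per-cell hazard score for mapping
```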
12. Security Monitoring and Management for the Network Services in the Orchestration of SDN-NFV Environment Using Machine Learning Techniques
Authors: Nasser Alshammari, Shumaila Shahzadi, Saad Awadh Alanazi, Shahid Naseem, Muhammad Anwar, Madallah Alruwaili, Muhammad Rizwan Abid, Omar Alruwaili, Ahmed Alsayat, Fahad Ahmad. Computer Systems Science & Engineering, 2024, No. 2, pp. 363-394 (32 pages)
Software Defined Network (SDN) and Network Function Virtualization (NFV) technology promote several benefits to network operators, including reduced maintenance costs, increased network operational performance, a simplified network lifecycle, and policy management. Network vulnerabilities try to modify services provided by Network Function Virtualization MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) lifecycle management related to network services or individual Virtualized Network Functions (VNF). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and manages them promptly and adaptively to implement and handle security functions in order to enhance the quality of experience for end users. An anomaly detector investigates these identified risks and provides secure network services. It enables virtual network security functions and identifies anomalies in Kubernetes (a cloud-based platform). For training and testing purposes of the proposed approach, an intrusion-containing dataset is used that holds multiple malicious activities like Smurf, Neptune, Teardrop, Pod, Land, IPsweep, etc., categorized as Probing (Prob), Denial of Service (DoS), User to Root (U2R), and Remote to User (R2L) attacks. An anomaly detector is anticipated with the capabilities of a Machine Learning (ML) technique, making use of supervised learning techniques like Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework has been evaluated by deploying the identified ML algorithms on a Jupyter notebook in Kubeflow to simulate Kubernetes for validation purposes. The RF classifier has shown better outcomes (99.90% accuracy) than other classifiers in detecting anomalies/intrusions in the containerized environment.
Keywords: software defined network; network function virtualization; network function virtualization management and orchestration; virtual infrastructure manager; virtual network function; Kubernetes; Kubectl; artificial intelligence; machine learning
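A hedged sketch of training the best-performing detector reported above (Random Forest) on a KDD-style intrusion table; the file name and column names are hypothetical placeholders for the actual dataset and preprocessing used in the study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("intrusion_records.csv")               # assumed intrusion dataset
X = pd.get_dummies(df.drop(columns=["label"]))          # one-hot protocol/service fields
y = df["label"]                                         # Normal / Prob / DoS / U2R / R2L

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                           random_state=42)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))   # precision/recall/F1 per attack class
```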
13. Towards machine-learning-driven effective mashup recommendations from big data in mobile networks and the Internet-of-Things
Authors: Yueshen Xu, Zhiying Wang, Honghao Gao, Zhiping Jiang, Yuyu Yin, Rui Li. Digital Communications and Networks (SCIE, CSCD), 2023, No. 1, pp. 138-145 (8 pages)
A large number of Web APIs have been released as services in mobile communications, but the service provided by a single Web API is usually limited. To enrich the services in mobile communications, developers have combined Web APIs and developed a new type of service, known as a mashup. The emergence of mashups greatly increases the number of services in mobile communications, especially in mobile networks and the Internet-of-Things (IoT), and has encouraged companies and individuals to develop even more mashups, which has led to a dramatic increase in the number of mashups. Such a trend brings with it big data, such as the massive text data from the mashups themselves and continually generated usage data. Thus, the question of how to determine the most suitable mashups from big data has become a challenging problem. In this paper, we propose a mashup recommendation framework from big data in mobile networks and the IoT. The proposed framework is driven by machine learning techniques, including neural embedding, clustering, and matrix factorization. We employ neural embedding to learn the distributed representation of mashups and propose to use cluster analysis to learn the relationship among the mashups. We also develop a novel Joint Matrix Factorization (JMF) model to complete the mashup recommendation task, where we design a new objective function and an optimization algorithm. We then crawl a real-world large mashup dataset and perform experiments. The experimental results demonstrate that our framework achieves high accuracy in mashup recommendation and performs better than all compared baselines.
Keywords: mashup recommendation; big data; machine learning; mobile networks; Internet-of-Things
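The JMF objective itself is not given in the abstract; the toy factorization below only illustrates the "joint" idea of coupling a usage-matrix factorization with cluster information, using an assumed regularizer that pulls mashups of the same cluster together. Matrix sizes, learning rate, and weights are illustrative.

```python
import numpy as np

def joint_mf(R, clusters, k=16, lam=0.05, gamma=0.05, lr=0.01, epochs=200):
    """Gradient-descent MF with an extra cluster-cohesion term (illustrative only)."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    mask = R > 0                                          # observed interactions only
    clusters = np.asarray(clusters)
    for _ in range(epochs):
        E = mask * (R - U @ V.T)                          # error on observed cells
        U += lr * (E @ V - lam * U)
        centers = {c: V[clusters == c].mean(axis=0) for c in set(clusters.tolist())}
        pull = np.stack([centers[c] for c in clusters]) - V
        V += lr * (E.T @ U - lam * V + gamma * pull)      # pull items toward cluster centroid
    return U, V

usage = (np.random.rand(50, 30) < 0.1) * np.random.rand(50, 30)  # sparse usage data
cluster_id = np.arange(30) % 5                                   # from the clustering step
U, V = joint_mf(usage, cluster_id)
scores = U @ V.T                                                 # recommendation scores
```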
14. Defect inspection of indoor components in buildings using deep learning object detection and augmented reality
Authors: Shun-Hsiang Hsu, Ho-Tin Hung, Yu-Qi Lin, Chia-Ming Chang. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2023, No. 1, pp. 41-54 (14 pages)
Visual inspection is commonly adopted for building operation, maintenance, and safety. The durability and defects of components or materials in buildings can be quickly assessed through visual inspection. However, implementations of visual inspection are substantially time-consuming, labor-intensive, and error-prone because useful auxiliary tools that can instantly highlight defects or damage locations from images are not available. Therefore, an advanced building inspection framework is developed and implemented with augmented reality (AR) and real-time damage detection in this study. In this framework, engineers walk around and film every corner of the building interior to generate the three-dimensional (3D) environment through ARKit. Meanwhile, a trained YOLOv5 model detects defects in real time during this process, even in a large-scale field, and the locations of the detected defects are then marked in this 3D environment. The defect areas can be measured with centimeter-level accuracy using the light detection and ranging (LiDAR) sensors on devices. All required damage information, including defect positions and sizes, is collected at once and can be rendered in the 2D and 3D views. Finally, this visual inspection can be efficiently conducted, and the previously generated environment can also be loaded to re-localize existing defect marks for future maintenance and change observation. Moreover, the proposed framework is implemented and verified in an underground parking lot of a building to detect and quantify surface defects on concrete components. As seen in the results, conventional building inspection is significantly improved with the aid of the proposed framework in terms of damage localization, damage quantification, and inspection efficiency.
Keywords: visual inspection; damage detection; augmented reality; damage quantification; deep learning
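A hedged sketch of running a trained YOLOv5 detector on an inspection frame via the public torch.hub interface; the weights file, image name, and confidence threshold are hypothetical stand-ins for the model and data used in the study.

```python
import torch

# Load custom-trained YOLOv5 weights (hypothetical file) through torch.hub.
model = torch.hub.load("ultralytics/yolov5", "custom", path="defect_weights.pt")
model.conf = 0.4                                   # confidence threshold for detections

results = model("parking_lot_frame.jpg")            # one frame from the AR walk-through
detections = results.pandas().xyxy[0]                # xmin, ymin, xmax, ymax, confidence, name
for _, det in detections.iterrows():
    # In the proposed framework these pixel boxes would be anchored into the ARKit
    # 3D environment and sized with LiDAR depth; here we only print them.
    print(det["name"], det["confidence"], det[["xmin", "ymin", "xmax", "ymax"]].tolist())
```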
15. GaitDONet: Gait Recognition Using Deep Features Optimization and Neural Network
Authors: Muhammad Attique Khan, Awais Khan, Majed Alhaisoni, Abdullah Alqahtani, Ammar Armghan, Sara A. Althubiti, Fayadh Alenezi, Senghour Mey, Yunyoung Nam. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5087-5103 (17 pages)
Human gait recognition (HGR) is the process of identifying a subject (human) based on their walking pattern. Each subject has a unique walking pattern that cannot be simulated by other subjects. However, gait recognition is not easy, and the task becomes more difficult if the subject carries an object, such as a bag or coat. This article proposes an automated architecture based on deep features optimization for HGR. To our knowledge, it is the first architecture in which features are fused using multiset canonical correlation analysis (MCCA). In the proposed method, original video frames are processed for all 11 selected angles of the CASIA B dataset and utilized to train two fine-tuned deep learning models, namely SqueezeNet and EfficientNet. Deep transfer learning was used to train both fine-tuned models on the selected angles, yielding two new targeted models that were later used for feature engineering. Features are extracted from the deep layer of both fine-tuned models and fused into one vector using MCCA. An improved manta ray foraging optimization algorithm is also proposed to select the best features from the fused feature matrix, which are then classified using a narrow neural network classifier. The experimental process was conducted on all 11 angles of the large multi-view gait dataset (CASIA B) and obtained improved accuracy compared with state-of-the-art techniques. Moreover, a detailed confidence-interval-based analysis also shows the effectiveness of the proposed architecture for HGR.
Keywords: human gait recognition; biometric; deep learning; features fusion; optimization; neural network
16. Understanding the hydrogen evolution reaction activity of doped single-atom catalysts on two-dimensional GaPS_(4) by DFT and machine learning
Authors: Tianyun Liu, Xin Zhao, Xuefei Liu, Wenjun Xiao, Zijiang Luo, Wentao Wang, Yuefei Zhang, Jin-Cheng Liu. Journal of Energy Chemistry (SCIE, EI, CAS, CSCD), 2023, No. 6, pp. 93-100, I0004 (9 pages)
As a zero-carbon fuel, hydrogen can be produced via electrochemical water splitting using clean electric energy through the hydrogen evolution reaction (HER) process. The ultimate goal of HER catalyst research is to replace the expensive Pt metal benchmark with a cheap catalyst of equivalent activity. In this work, we investigated the possibility of the HER process on single-atom catalysts (SACs) doped on two-dimensional (2D) GaPS_(4) materials, which have a large intrinsic band gap that can be regulated by doping and tensile strain. Based on the machine learning regression analysis, we can expand the prediction of HER performance to more catalysts without expensive DFT calculations. The electron affinity and first ionization energy are the two most important descriptors related to the HER behavior. Furthermore, constrained molecular dynamics with solvation models and constant potentials was applied to understand the dynamic barrier of the HER process of the Pt SAC on GaPS_(4) materials. These findings not only provide important insights into the catalytic properties of single-atom catalysts on 2D GaPS_(4) materials, but also provide a theoretical guidance paradigm for the exploration of new catalysts.
Keywords: two-dimensional GaPS_(4); hydrogen evolution reaction; single-atom catalysis; first-principles calculation; machine learning
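A hedged sketch of the descriptor-based regression idea: fit a model that maps elemental descriptors (electron affinity, first ionization energy, etc.) to a HER-related target and rank descriptor importance. The data, the extra descriptor, and the toy target relation below are synthetic; the real features and targets would come from the DFT calculations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 40                                              # candidate single-atom dopants
X = np.column_stack([
    rng.uniform(0.0, 3.5, n),                       # electron affinity (eV)
    rng.uniform(5.0, 10.0, n),                      # first ionization energy (eV)
    rng.uniform(1.2, 2.6, n),                       # electronegativity (assumed descriptor)
])
y = 0.4 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.05, n)  # toy HER-activity target

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
print(model.feature_importances_)                   # ranks descriptor importance
```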
17. Machine learning applications in stroke medicine: advancements, challenges, and future prospectives [Cited: 1]
Authors: Mario Daidone, Sergio Ferrantelli, Antonino Tuttolomondo. Neural Regeneration Research (SCIE, CAS, CSCD), 2024, No. 4, pp. 769-773 (5 pages)
Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize all the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation. At the same time, another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.
Keywords: cerebrovascular disease; deep learning; machine learning; reinforcement learning; stroke; stroke therapy; supervised learning; unsupervised learning
18. Deep Learning-Based Secure Transmission Strategy with Sensor-Transmission-Computing Linkage for Power Internet of Things
Authors: Bin Li, Linghui Kong, Xiangyi Zhang, Bochuo Kou, Hui Yu, Bowen Liu. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 3267-3282 (16 pages)
The automatic collection of power grid situation information, along with real-time multimedia interaction between the front and back ends during the accident handling process, has generated a massive amount of power grid data. While wireless communication offers a convenient channel for grid terminal access and data transmission, it is important to note that the bandwidth of wireless communication is limited. Additionally, the broadcast nature of wireless transmission raises concerns about the potential for unauthorized eavesdropping during data transmission. To address these challenges and achieve reliable, secure, and real-time transmission of power grid data, an intelligent security transmission strategy with sensor-transmission-computing linkage is proposed in this paper. The primary objective of this strategy is to maximize the confidentiality capacity of the system. To tackle this, an optimization problem is formulated, taking into consideration interruption probability and interception probability as constraints. To efficiently solve this optimization problem, a low-complexity algorithm rooted in deep reinforcement learning is designed, which aims to derive a suboptimal solution for the problem at hand. Ultimately, through simulation results, the validity of the proposed strategy in guaranteeing communication security, stability, and timeliness is substantiated. The results confirm that the proposed intelligent security transmission strategy significantly contributes to the safeguarding of communication integrity, system stability, and timely data delivery.
Keywords: secure transmission; deep learning; power Internet of Things; sensor-transmission-computing
19. RL and AHP-Based Multi-Timescale Multi-Clock Source Time Synchronization for Distribution Power Internet of Things
Authors: Jiangang Lu, Ruifeng Zhao, Zhiwen Yu, Yue Dai, Kaiwen Zeng. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4453-4469 (17 pages)
Time synchronization (TS) is crucial for ensuring the secure and reliable functioning of the distribution power Internet of Things (IoT). Multi-clock source time synchronization (MTS) has significant advantages in reliability and accuracy but still faces challenges such as the optimization of multi-clock source selection and clock source weight calculation at different timescales, and the coupling of synchronization latency jitter and pulse phase difference. In this paper, a multi-timescale MTS model is constructed, and a reinforcement learning (RL) and analytic hierarchy process (AHP)-based multi-timescale MTS algorithm is designed to optimize the weighted summation of the synchronization latency jitter standard deviation and the average pulse phase difference. Specifically, the multi-clock source selection is optimized based on Softmax on the large timescale, and the clock source weight calculation is optimized based on lower-confidence-bound-assisted AHP on the small timescale. Simulation shows that the proposed algorithm can effectively reduce the time synchronization delay standard deviation and the average pulse phase difference.
Keywords: multi-clock source time synchronization (TS); power Internet of Things; reinforcement learning; analytic hierarchy process
20. Smart Healthcare Activity Recognition Using Statistical Regression and Intelligent Learning
Authors: K. Akilandeswari, Nithya Rekha Sivakumar, Hend Khalid Alkahtani, Shakila Basheer, Sara Abdelwahab Ghorashi. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1189-1205 (17 pages)
At present, Human Activity Recognition (HAR) is of considerable aid in health monitoring and recovery. The exploitation of machine learning with an intelligent agent on health information gathered using HAR augments decision-making quality and significance. Although many research works have been conducted on smart healthcare monitoring, certain pitfalls remain, such as the time, overhead, and falsification involved during analysis. Therefore, this paper proposes a Statistical Partial Regression and Support Vector Intelligent Agent Learning (SPR-SVIAL) method for smart healthcare monitoring. At first, the Statistical Partial Regression Feature Extraction model is used for data preprocessing along with the dimensionality-reduced feature extraction process. Here, the input dataset (continuous beat-to-beat heart data, triaxial accelerometer data, and psychological characteristics) was acquired from IoT wearable devices. To attain highly accurate smart healthcare monitoring in less time, Partial Least Squares helps extract the dimensionality-reduced features. After that, with these resulting features, SVIAL is proposed for smart healthcare monitoring with the help of machine learning and intelligent agents to minimize both analysis falsification and overhead. Experimental evaluation is carried out for factors such as time, overhead, false positive rate, and accuracy across several instances. The quantitatively analyzed results indicate the better performance of our proposed SPR-SVIAL method when compared with two state-of-the-art methods.
Keywords: Internet of Things; smart health care monitoring; human activity recognition; intelligent agent learning; statistical partial regression; support vector
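A hedged sketch of the two-stage idea behind SPR-SVIAL: partial least squares reduces the wearable-sensor feature space, then a support vector classifier recognizes the activity/health state. The synthetic data, feature count, and component count are assumptions standing in for the heart-rate and accelerometer features described above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                        # 40 raw sensor features (synthetic)
y = (X[:, :5].sum(axis=1) > 0).astype(int)            # toy binary activity label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=6).fit(X_tr, y_tr)   # dimensionality-reduced features
svm = SVC(kernel="rbf", C=1.0).fit(pls.transform(X_tr), y_tr)
print("accuracy:", svm.score(pls.transform(X_te), y_te))
```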