In this paper, we consider mobile edge computing (MEC) networks under proactive eavesdropping. To maximize the transmission rate, IRS-assisted UAV communications are applied. We jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase-shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain closed-form solutions for the boundary conditions of the two modes. Then we transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero-forcing (ZF) based method as a sub-optimal solution, and the simulation section shows that the two proposed schemes outperform traditional schemes.
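The AO idea named above can be sketched in a few lines: fix all variable blocks but one, solve the resulting subproblem, and cycle until the objective stalls. The sketch below is a generic illustration on a toy objective; the two closed-form update functions stand in for the trajectory, beamforming, and phase-shift subproblems, whose actual forms the abstract does not give.

```python
# Minimal sketch of alternating optimization (AO). The per-block update
# steps below are illustrative stand-ins, not the paper's actual solvers.
import numpy as np

def ao_solve(objective, updates, x0, max_iter=100, tol=1e-6):
    """Cycle through per-block updates until the objective stops improving."""
    x = list(x0)
    prev = objective(x)
    for _ in range(max_iter):
        for i, update in enumerate(updates):
            x[i] = update(x)          # solve subproblem i with other blocks fixed
        cur = objective(x)
        if abs(prev - cur) < tol:     # monotone decrease of objective values
            break
        prev = cur
    return x, cur

# Toy example: f(u, v) = (u - 1)^2 + (v + 2)^2 + 0.5*u*v, convex in each block.
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2 + 0.5 * x[0] * x[1]
upd_u = lambda x: 1 - 0.25 * x[1]     # argmin over u with v fixed (closed form)
upd_v = lambda x: -2 - 0.25 * x[0]    # argmin over v with u fixed (closed form)
print(ao_solve(f, [upd_u, upd_v], [0.0, 0.0]))
```

Because each step can only decrease the objective, the value sequence converges, which is the same argument behind the local optimality claimed in the abstract.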
Edge technology aims to bring cloud resources (specifically, computation, storage, and network) into close proximity to edge devices, i.e., the smart devices where data are produced and consumed. Embedding computing and applications in edge devices has led to two new concepts in edge technology: edge computing and edge analytics. Edge analytics uses techniques or algorithms to analyse the data generated by edge devices. With the emergence of edge analytics, edge devices have become a complete set. Currently, edge analytics is unable to provide full support to analytic techniques: edge devices cannot execute advanced and sophisticated analytic algorithms owing to constraints such as a limited power supply, small memory size, and limited resources. This article aims to provide a detailed discussion of edge analytics. The key contributions of the paper are as follows: a clear explanation distinguishing the three concepts of edge technology, namely edge devices, edge computing, and edge analytics, along with their issues. In addition, the article discusses the implementation of edge analytics to solve many problems and applications in various areas such as retail, agriculture, industry, and healthcare. Moreover, state-of-the-art edge analytics research papers are rigorously reviewed to explore existing issues, emerging challenges, research opportunities and their directions, and applications.
Integrating Tiny Machine Learning (TinyML) with edge computing in remotely sensed images enhances the capabilities of road anomaly detection on a broader level. Constrained devices efficiently implement a Binary Neural Network (BNN) for road feature extraction, utilizing quantization and compression through a pruning strategy. The modifications resulted in a 28-fold decrease in memory usage and a 25% enhancement in inference speed, with only a 2.5% decrease in accuracy, and the approach showcases its superiority over conventional detection algorithms in different road image scenarios. Although constrained by computing resources and training datasets, our results indicate opportunities for future research, demonstrating that quantization and focused optimization can significantly improve the accuracy and operational efficiency of machine learning models. Deploying our optimized BNN model on an ARM Cortex-M0 demonstrates the practical feasibility and substantial benefits of advanced machine learning in edge computing on a low-power device. The analysis also delves into the educational significance of TinyML and its essential function in analyzing road networks using remote sensing, suggesting ways to improve smart-city frameworks in road network assessment, traffic management, and autonomous vehicle navigation systems by emphasizing the importance of new technologies for maintaining and safeguarding road networks.
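As background for how a BNN achieves memory savings of this magnitude, the snippet below sketches XNOR-Net-style 1-bit weight quantization, one common BNN recipe; the abstract does not specify the exact scheme used, so the `binarize` function and its scaling rule are illustrative assumptions.

```python
# Illustrative 1-bit weight quantization in the XNOR-Net style: each
# weight tensor is replaced by sign(W) times a per-tensor scale equal to
# the mean absolute value. This is a common BNN recipe, not necessarily
# the paper's exact scheme.
import numpy as np

def binarize(W: np.ndarray):
    alpha = np.abs(W).mean()          # scale minimizing ||W - alpha*B||^2
    B = np.where(W >= 0, 1.0, -1.0)   # 1-bit weights in {-1, +1}
    return alpha, B

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
alpha, B = binarize(W)
print("reconstruction error:", np.linalg.norm(W - alpha * B))
# Storage drops from 32 bits to 1 bit per weight plus one float per tensor,
# which is where large memory reductions of the kind reported above come from.
```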
In mega-constellation communication systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of satellites necessitate the use of edge computing to enhance secure communication. While edge computing reduces the burden on cloud computing, it introduces security and reliability challenges in open satellite communication channels. To address these challenges, we propose a blockchain architecture specifically designed for edge computing in mega-constellation communication systems. This architecture narrows the consensus scope of the blockchain to meet the requirements of edge computing while ensuring comprehensive log storage across the network. Additionally, we introduce a reputation management mechanism for nodes within the blockchain, evaluating their trustworthiness, workload, and efficiency. Nodes with higher reputation scores are selected to participate in tasks and are appropriately incentivized. Simulation results demonstrate that our approach achieves a task result reliability of 95% while improving computational speed.
With the rapid development of information technology, IoT devices play a huge role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of the edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and a high access hit rate. However, current schemes cannot guarantee that every sub-message of a complete data item stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and dynamic data integrity has therefore become a challenging problem. Our paper proposes a redundant data detection method that meets privacy protection requirements: by scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements of hot data. This has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that does not meet the requirements of hot data, our paper proposes a redundant data deletion scheme that preserves dynamic data integrity. We use Content Extraction Signature (CES) to generate the signature of the remaining hot data after the redundant data is deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
Mobile edge computing (MEC) is a promising paradigm that deploys edge servers (nodes) with computation and storage capacity close to IoT devices. Content providers can cache data in edge servers and provide services for IoT devices, which effectively reduces the delay in acquiring data. With an increasing number of IoT devices requesting services, spectrum resources are generally limited. To effectively meet the challenge of limited spectrum resources, Non-Orthogonal Multiple Access (NOMA) is adopted to improve transmission efficiency. In this paper, we consider the caching scenario in a NOMA-enabled MEC system. All devices compete for the limited resources and seek to minimize their own cost. We formulate the caching problem with the goal of minimizing the delay cost of each individual device subject to resource constraints, and reformulate the optimization as a non-cooperative game model. We prove the existence of a Nash equilibrium (NE) solution in the game model. Then, we design the Game-based Cost-Efficient Edge Caching Algorithm (GCECA) to solve the problem. The effectiveness of the GCECA algorithm is validated by both parameter analysis and comparison experiments.
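To make the game-theoretic formulation concrete, here is a minimal best-response iteration on a toy congestion-style caching game, in which each device chooses between a shared edge cache (delay grows with load) and the remote cloud (fixed delay); the cost model (`edge_delay`, `CLOUD_DELAY`) is an assumption for illustration, not GCECA's actual cost function.

```python
# Toy best-response dynamics: iterate until no device wants to switch,
# which by definition is a Nash equilibrium. Schematic stand-in for GCECA.
N = 10
CLOUD_DELAY = 6.0                     # fixed delay of fetching from the cloud
edge_delay = lambda load: 1.0 * load  # edge delay grows linearly with load

choice = ["cloud"] * N                # start with every device on the cloud
changed = True
while changed:
    changed = False
    for i in range(N):
        # edge load if device i (also) chooses the edge
        load_if_edge = sum(c == "edge" for c in choice) + (choice[i] != "edge")
        best = "edge" if edge_delay(load_if_edge) < CLOUD_DELAY else "cloud"
        if best != choice[i]:
            choice[i], changed = best, True

print(choice.count("edge"), "devices cache at the edge at equilibrium")
```

Congestion games of this form have the finite improvement property, so the loop is guaranteed to terminate at an equilibrium, mirroring the NE existence result stated in the abstract.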
The Autonomous Underwater Glider (AUG) is a prevailing type of underwater intelligent internet vehicle and occupies a dominant position in industrial applications, in which path planning is an essential problem. Due to the complexity and variability of the ocean, accurate environment modeling and flexible path planning algorithms are pivotal challenges. Traditional models mainly utilize mathematical functions, which are neither complete nor reliable, and most existing path planning algorithms depend on the environment and lack flexibility. To overcome these challenges, we propose a path planning system for underwater intelligent internet vehicles. It applies digital twins and sensor data to map the real ocean environment to a virtual digital space, which provides a comprehensive and reliable environment for path simulation. We design a value-based reinforcement learning path planning algorithm and explore the optimal network structure parameters. The path simulation is controlled by a closed-loop model integrated into the terminal vehicle through edge computing. The integration of state input enriches the learning of the neural networks and helps to improve generalization and flexibility, while the task-related reward function promotes rapid convergence of the training. The experimental results prove that our reinforcement learning based path planning algorithm has great flexibility and can effectively adapt to a variety of different ocean conditions.
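At the heart of any value-based reinforcement learning planner is a Bellman-style value update. The sketch below shows it in its simplest tabular Q-learning form on a one-dimensional corridor; the paper's algorithm uses a neural network and a digital-twin ocean environment, so every parameter here is an illustrative assumption.

```python
# Minimal tabular Q-learning on a 1-D corridor as a schematic of the
# value-based update behind the (neural-network-based) planner above.
# States 0..4; actions -1/+1; reaching state 4 yields reward 1.
import random
random.seed(0)

Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.1     # step size, discount, exploration

for _ in range(500):
    s = 0
    while s != 4:
        a = random.choice((-1, 1)) if random.random() < eps else \
            max((-1, 1), key=lambda act: Q[(s, act)])
        s2 = min(4, max(0, s + a))
        r = 1.0 if s2 == 4 else 0.0
        # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

# State values rise as the goal nears (the terminal state stays at 0).
print([round(max(Q[(s, -1)], Q[(s, 1)]), 2) for s in range(5)])
```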
The Internet of Things (IoT) is characterized by node mobility, node heterogeneity, link heterogeneity, and topology heterogeneity. Facing these characteristics and the explosive growth of IoT nodes, which brings large-scale data processing requirements, edge computing has become an emerging network architecture to support IoT applications, thanks to its powerful computing capabilities and good service functions. However, the defense mechanisms of Edge Computing-enabled IoT Nodes (ECIoTNs) are still weak due to their limited resources, so they are susceptible to the spread of malicious software, which can compromise data confidentiality and network service availability. Facing this situation, we put forward an epidemiology-based susceptible-curb-infectious-removed-dead (SCIRD) model. We analyze the dynamics of ECIoTNs with different infection levels under different initial conditions to obtain the dynamic differential equations, and we establish the presence of equilibrium states in the SCIRD model. Furthermore, we analyze the model's stability and examine the conditions under which malicious software will either spread or disappear within Edge Computing-enabled IoT (ECIoT) networks. Lastly, we validate the efficacy and superiority of the SCIRD model through MATLAB simulations. These findings offer a theoretical foundation for suppressing the propagation of malicious software in ECIoT networks: the SCIRD model deeply reveals the principles of malicious software propagation and determines the propagation threshold, which lays the foundation for building more secure and reliable ECIoT networks.
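The abstract does not reproduce the SCIRD differential equations, but an epidemiological compartment model of this kind typically takes the following general form, with S, C, I, R, D the fractions of susceptible, curbed, infectious, removed, and dead nodes; every rate symbol below (β, κ, ε, γ, δ) is a placeholder assumption, not the paper's coefficients.

```latex
% Illustrative compartmental form only; not the paper's exact equations.
\begin{aligned}
\dot{S} &= -\beta S I - \kappa S,                       \\% infection + curbing
\dot{C} &= \kappa S - \varepsilon \beta C I,            \\% curbed, reduced risk
\dot{I} &= \beta S I + \varepsilon \beta C I - (\gamma + \delta) I, \\
\dot{R} &= \gamma I,                                    \\% removal (cleaned)
\dot{D} &= \delta I.                                     % node death
\end{aligned}
```

The compartments sum to a constant, and the spread-or-die-out threshold mentioned in the abstract is obtained by analyzing the stability of the infection-free equilibrium of such a system.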
By pushing computation, caching, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in the fifth generation (5G) and future sixth generation (6G). Nevertheless, facing ubiquitous, fast-growing computational demands, it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers in the air and on the ground in the envisioned 6G, using a variety of collaborative ways to provide the best possible computation services for UEs. Firstly, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in the AGC-MEC as well as their potential solutions. Next, we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy. Finally, we highlight several potential research directions for the AGC-MEC.
Mobile Edge Computing (MEC) is a promising technology that provides on-demand computing and efficient storage services as close to end users as possible. In an MEC environment, servers are deployed closer to mobile terminals to exploit storage infrastructure, improve content delivery efficiency, and enhance user experience. However, due to the limited capacity of edge servers, it remains a significant challenge to meet the changing, time-varying, and customized needs of users for highly diversified content. Recently, techniques for caching content at the edge have become popular for addressing these challenges: they can fill the communication gap between users and content providers while relieving pressure on remote cloud servers. However, existing static caching strategies are still inefficient in handling the dynamics of time-varying content popularity and meeting users' demands for highly diversified entity data. To address this challenge, we introduce PRIME, a novel method for content caching over MEC. It synthesizes a content popularity prediction model, which takes users' stay time and their request traces as inputs, and a deep reinforcement learning model for yielding dynamic caching schedules. Experimental results demonstrate that PRIME, when tested upon the MovieLens 1M dataset for user request patterns and the Shanghai Telecom dataset for user mobility, outperforms its peers in terms of cache hit rates, transmission latency, and system cost.
To support the explosive growth of Information and Communications Technology (ICT), Mobile Edge Computing (MEC) provides users with low-latency and high-bandwidth service by offloading computational tasks to the network's edge. However, resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications. To address the difficulty of running computationally intensive applications on resource-constrained clients, this paper studies a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers. A user benefit function, EoU (Experience of Users), is proposed that jointly considers energy consumption and time delay. The EoU maximization problem is decomposed into two steps, i.e., resource allocation and offloading decision. The offloading decision is usually given by heuristic algorithms, which often face slow convergence and poor stability. Thus, a combined offloading algorithm, a Gini coefficient-based adaptive genetic algorithm (GCAGA), is proposed to alleviate this dilemma. The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates convergence with the Gini coefficient. The simulation compares the proposed algorithm with the genetic algorithm (GA) and adaptive genetic algorithm (AGA). Experiment results show that the Gini coefficient and the adaptive heuristic operators accelerate the convergence speed, and the proposed algorithm converges better while obtaining higher EoU. The simulation code of the proposed algorithm is available at: https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
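As a sketch of how a Gini coefficient can steer genetic-algorithm operators, the snippet below computes the Gini coefficient of the population's fitness values and raises the mutation rate as fitness becomes uniform (a sign of premature convergence); the `adaptive_mutation_rate` rule and its constants are illustrative assumptions, not GCAGA's actual operators.

```python
# Gini coefficient of population fitness as a convergence signal: a low
# value means fitness is nearly uniform, so exploration is increased.
import numpy as np

def gini(x: np.ndarray) -> float:
    """Gini coefficient via the sorted mean-absolute-difference formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    idx = np.arange(1, n + 1)
    # G = sum_i (2i - n - 1) x_i / (n * sum_i x_i), x ascending, i = 1..n
    return float(((2 * idx - n - 1) * x).sum() / (n * x.sum()))

def adaptive_mutation_rate(fitness, base=0.01, boost=0.2):
    g = gini(fitness)
    return base + boost * (1.0 - g)   # more uniform fitness -> mutate more

pop_fitness = np.array([3.1, 3.0, 3.2, 3.05, 3.15])   # nearly converged
print(round(gini(pop_fitness), 3), round(adaptive_mutation_rate(pop_fitness), 3))
```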
Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, so the incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium blockchain enabled collaborative edge computing framework, where users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of the computing capability of each ECS, and we apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation based node selection approach to facilitate the consensus process, and also consider a completion time based primary node selection to avoid monopolization by certain edge nodes and enhance the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm; the total delay can be reduced by up to 40% compared with the non-cooperative case.
The edge computing paradigm for 5G architecture is considered one of the most effective ways to realize low-latency and highly reliable communication, bringing computing tasks and network resources to the edge of the network. The deployment of edge computing nodes is a key factor affecting the service performance of edge computing systems. In this paper, we propose a method for deploying edge computing nodes based on user location. Through the combination of Simulation of Urban Mobility (SUMO) and Network Simulator-3 (NS-3), a simulation platform is built to generate data of hotspot areas in an IoT scenario. By effectively using the data generated by communication between users in the IoT scenario, the location area of each user terminal can be obtained. On this basis, the deployment problem is expressed as a mixed-integer linear problem, which can be solved by the Simulated Annealing (SA) method. The analysis of the results shows that, compared with the traditional method, the proposed method has a faster convergence speed and better performance.
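A minimal sketch of the simulated annealing mechanics for such a placement problem follows: choose k node sites from a candidate set to minimize total user-to-nearest-node distance. The paper's formulation is a mixed-integer linear program with further constraints, so the cost function, neighborhood move, and cooling schedule here are illustrative assumptions.

```python
# Schematic simulated annealing for edge-node placement.
import math, random
random.seed(1)

users = [(random.random(), random.random()) for _ in range(60)]
candidates = [(random.random(), random.random()) for _ in range(15)]
k = 3

def cost(sites):
    # Total distance from each user to its nearest deployed node.
    return sum(min(math.dist(u, s) for s in sites) for u in users)

current = random.sample(candidates, k)
cur_cost, T = cost(current), 1.0
while T > 1e-3:
    neighbor = current.copy()
    neighbor[random.randrange(k)] = random.choice(candidates)  # swap one site
    new_cost = cost(neighbor)
    # Accept improvements always; accept worse moves with prob e^(-delta/T),
    # which lets the search escape local minima early on.
    if new_cost < cur_cost or random.random() < math.exp((cur_cost - new_cost) / T):
        current, cur_cost = neighbor, new_cost
    T *= 0.995                                                 # geometric cooling
print(round(cur_cost, 3))
```

A duplicate site produced by the swap move is harmless in this sketch; it merely wastes one of the k slots for that iteration.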
In vehicle edge computing (VEC), asynchronous federated learning (AFL) is used, where the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Because vehicles differ in the amount of local data, computing capability, and location, renewing the global model with the same weight for every vehicle is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also be subject to Byzantine attacks that corrupt its data. Based on deep reinforcement learning (DRL), however, we can consider these factors comprehensively, eliminate vehicles with poor performance as far as possible, and exclude vehicles that have suffered Byzantine attacks before AFL. At the same time, when aggregating in AFL, we can focus on vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a DRL-based vehicle selection scheme for VEC. The scheme takes into account each vehicle's mobility, temporally varying channel conditions, temporally varying computational resources, data amount, transmission channel status, and exposure to Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
In this paper, we present a comprehensive system model for Industrial Internet of Things (IIoT) networks empowered by Non-Orthogonal Multiple Access (NOMA) and Mobile Edge Computing (MEC) technologies. The network comprises essential components such as base stations, edge servers, and numerous IIoT devices characterized by limited energy and computing capacities. The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption. The system operates in discrete time slots and employs a quasi-static approach, with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context. This study contributes to the field by enhancing the understanding of resource-efficient management and task allocation, which is particularly relevant to real-time industrial applications. Experimental results indicate that our proposed algorithm significantly outperforms existing approaches, reducing queue backlog by 45.32% and 17.25% compared with SMRA and ACRA, respectively, while achieving 27.31% and 74.12% improvements in QnO. Moreover, the algorithm effectively balances complexity and network performance: reducing the number of devices in each group (Ng) from 200 to 50 results in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption. This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
The Internet of Things (IoT) has revolutionized how we interact with and gather data from our surrounding environment. IoT devices, equipped with various sensors and actuators, continuously generate vast volumes of data that can be harnessed to derive valuable insights. However, transmitting all this data to a centralized cloud infrastructure for processing and analysis can be inefficient and impractical due to bandwidth limitations, network latency, and scalability issues. This paper proposes a Self-Learning Internet Traffic Fuzzy Classifier (SLItFC) for traffic data analysis. The proposed technique effectively combines clustering and classification procedures to improve classification accuracy when analyzing network traffic data. SLItFC addresses the intricate task of efficiently managing and analyzing IoT data traffic at the edge. It employs a sophisticated combination of fuzzy clustering and self-learning techniques, allowing it to adapt and improve its classification accuracy over time. This adaptability is a crucial feature, given the dynamic nature of IoT environments, where data patterns and traffic characteristics can evolve rapidly. With the fuzzy classifier, the accuracy of the clustering process is improved while computational time is reduced; such efficiency is paramount in edge computing, where resource constraints demand streamlined data processing, and it makes SLItFC a compelling choice for organizations seeking real-time insights and decision-making from IoT data. Through the self-learning process, the SLItFC model monitors the network traffic data acquired from the IoT devices, and the Sugeno fuzzy model is implemented within the edge computing environment for improved classification accuracy. Simulation analysis shows that the proposed SLItFC achieves 94.5% classification accuracy with reduced classification time.
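For readers unfamiliar with the Sugeno model named above, the snippet below runs a minimal zero-order Sugeno (TSK) inference over a single traffic feature: each rule fires with a membership degree, and the output is the firing-strength-weighted average of the rule consequents. The membership functions and consequents are invented for illustration, not SLItFC's learned ones.

```python
# Minimal zero-order Sugeno (TSK) fuzzy inference over one feature
# (e.g., a packet rate), with two rules and triangular memberships.
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno(x):
    w_low  = tri(x, 0.0, 0.0, 50.0)      # rule 1: rate is LOW  -> score 0.1
    w_high = tri(x, 30.0, 100.0, 100.0)  # rule 2: rate is HIGH -> score 0.9
    weights, outputs = [w_low, w_high], [0.1, 0.9]
    total = sum(weights)
    # Weighted average of consequents; 0.0 if no rule fires at all.
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(sugeno(40.0))   # lies between the two consequents, nearer "low"
```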
The development of communication technology will promote the application of the Internet of Things, and Beyond 5G will become a new technology promoter as well as one of the important supports for the development of edge computing technology. This paper proposes a communication task allocation algorithm based on deep reinforcement learning for vehicle-to-pedestrian communication scenarios in edge computing. Through the agent's trial-and-error learning, the optimal spectrum and power for transmission can be determined without global information, so as to balance vehicle-to-pedestrian and vehicle-to-infrastructure communication. The results show that the agent can effectively improve the vehicle-to-infrastructure communication rate while meeting the delay constraints on the vehicle-to-pedestrian link.
Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Because the request tasks from one MWE are homogeneous over a long time period, it is vital to predeploy at the MEC server the particular service cachings required by those tasks. In this paper, we model a service caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another edge server hosting the service caching the tasks require. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS learns a near-optimal offloading and migrating decision-making policy through centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that the proposed MBOMS converges well after training and outperforms the other five baseline algorithms.
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated and decomposed into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
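The UCB mechanics behind such an offloading decision can be sketched as a standard UCB1 bandit, where each arm is a candidate offloading target and the reward is a normalized score (e.g., a negative delay mapped into [0, 1]); the device-cooperation aspect of the paper's algorithm, which shares observations across devices, is omitted from this sketch, and all numbers are illustrative.

```python
# UCB1 for picking an offloading target: the index adds an exploration
# bonus sqrt(2 ln t / n_i) to each arm's empirical mean reward.
import math, random
random.seed(2)

mean_reward = [0.4, 0.7, 0.55]             # true means, unknown to the learner
counts, sums = [0, 0, 0], [0.0, 0.0, 0.0]

for t in range(1, 2001):
    if 0 in counts:                        # initialization: play each arm once
        arm = counts.index(0)
    else:
        ucb = [sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
               for i in range(3)]
        arm = ucb.index(max(ucb))
    reward = min(1.0, max(0.0, random.gauss(mean_reward[arm], 0.1)))
    counts[arm] += 1
    sums[arm] += reward

print(counts)   # pulls concentrate on the best target (index 1)
```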
In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they lack the memory and processing capacity to support such computations. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as edge data centers or remote cloud servers. For various reasons, it is more appropriate to offload different tasks to specific destinations depending on the properties and state of the environment and the nature of the tasks. At the same time, establishing an optimal offloading policy that ensures all tasks are executed within the required latency while avoiding excessive workload on specific computing centers is not easy. This study presents two alternatives for solving the offloading decision problem by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies the alternatives in a well-known edge computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution, while providing similar results in terms of energy efficiency. Finally, the success rates of the different computing centers are tested, and the inability of remote cloud servers to respond to applications in real time is demonstrated. These novel ways of finding an offloading strategy in a local network environment are unique in that they emulate the state and structure of the environment innovatively, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. Simultaneously, the suitability of Reinforcement Learning (RL) techniques is demonstrated, owing to the dynamism of the network environment and considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
基金This work was supported by the Key Scientific and Technological Project of Henan Province(Grant Number 222102210212)Doctoral Research Start Project of Henan Institute of Technology(Grant Number KQ2005)Key Research Projects of Colleges and Universities in Henan Province(Grant Number 23B510006).
文摘In this paper,we consider mobile edge computing(MEC)networks against proactive eavesdropping.To maximize the transmission rate,IRS assisted UAV communications are applied.We take the joint design of the trajectory of UAV,the transmitting beamforming of users,and the phase shift matrix of IRS.The original problem is strong non-convex and difficult to solve.We first propose two basic modes of the proactive eavesdropper,and obtain the closed-form solution for the boundary conditions of the two modes.Then we transform the original problem into an equivalent one and propose an alternating optimization(AO)based method to obtain a local optimal solution.The convergence of the algorithm is illustrated by numerical results.Further,we propose a zero forcing(ZF)based method as sub-optimal solution,and the simulation section shows that the proposed two schemes could obtain better performance compared with traditional schemes.
文摘Edge technology aims to bring cloud resources(specifically,the computation,storage,and network)to the closed proximity of the edge devices,i.e.,smart devices where the data are produced and consumed.Embedding computing and application in edge devices lead to emerging of two new concepts in edge technology:edge computing and edge analytics.Edge analytics uses some techniques or algorithms to analyse the data generated by the edge devices.With the emerging of edge analytics,the edge devices have become a complete set.Currently,edge analytics is unable to provide full support to the analytic techniques.The edge devices cannot execute advanced and sophisticated analytic algorithms following various constraints such as limited power supply,small memory size,limited resources,etc.This article aims to provide a detailed discussion on edge analytics.The key contributions of the paper are as follows-a clear explanation to distinguish between the three concepts of edge technology:edge devices,edge computing,and edge analytics,along with their issues.In addition,the article discusses the implementation of edge analytics to solve many problems and applications in various areas such as retail,agriculture,industry,and healthcare.Moreover,the research papers of the state-of-the-art edge analytics are rigorously reviewed in this article to explore the existing issues,emerging challenges,research opportunities and their directions,and applications.
基金supported by the National Natural Science Foundation of China(61170147)Scientific Research Project of Zhejiang Provincial Department of Education in China(Y202146796)+2 种基金Natural Science Foundation of Zhejiang Province in China(LTY22F020003)Wenzhou Major Scientific and Technological Innovation Project of China(ZG2021029)Scientific and Technological Projects of Henan Province in China(202102210172).
文摘Integrating Tiny Machine Learning(TinyML)with edge computing in remotely sensed images enhances the capabilities of road anomaly detection on a broader level.Constrained devices efficiently implement a Binary Neural Network(BNN)for road feature extraction,utilizing quantization and compression through a pruning strategy.The modifications resulted in a 28-fold decrease in memory usage and a 25%enhancement in inference speed while only experiencing a 2.5%decrease in accuracy.It showcases its superiority over conventional detection algorithms in different road image scenarios.Although constrained by computer resources and training datasets,our results indicate opportunities for future research,demonstrating that quantization and focused optimization can significantly improve machine learning models’accuracy and operational efficiency.ARM Cortex-M0 gives practical feasibility and substantial benefits while deploying our optimized BNN model on this low-power device:Advanced machine learning in edge computing.The analysis work delves into the educational significance of TinyML and its essential function in analyzing road networks using remote sensing,suggesting ways to improve smart city frameworks in road network assessment,traffic management,and autonomous vehicle navigation systems by emphasizing the importance of new technologies for maintaining and safeguarding road networks.
基金supported in part by the National Natural Science Foundation of China under Grant No.U2268204,62172061 and 61871422National Key R&D Program of China under Grant No.2020YFB1711800 and 2020YFB1707900+2 种基金the Science and Technology Project of Sichuan Province under Grant No.2023ZHCG0014,2023ZHCG0011,2022YFG0155,2022YFG0157,2021GFW019,2021YFG0152,2021YFG0025,2020YFG0322Central Universities of Southwest Minzu University under Grant No.ZYN2022032,2023NYXXS034the State Scholarship Fund of the China Scholarship Council under Grant No.202008510081。
文摘In mega-constellation Communication Systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of satellites necessitate the use of edge computing to enhance secure communication.While edge computing reduces the burden on cloud computing, it introduces security and reliability challenges in open satellite communication channels. To address these challenges, we propose a blockchain architecture specifically designed for edge computing in mega-constellation communication systems. This architecture narrows down the consensus scope of the blockchain to meet the requirements of edge computing while ensuring comprehensive log storage across the network. Additionally, we introduce a reputation management mechanism for nodes within the blockchain, evaluating their trustworthiness, workload, and efficiency. Nodes with higher reputation scores are selected to participate in tasks and are appropriately incentivized. Simulation results demonstrate that our approach achieves a task result reliability of 95% while improving computational speed.
基金sponsored by the National Natural Science Foundation of China under grant number No. 62172353, No. 62302114, No. U20B2046 and No. 62172115Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of Ministry of Education No.1331007 and No. 1311022+1 种基金Natural Science Foundation of the Jiangsu Higher Education Institutions Grant No. 17KJB520044Six Talent Peaks Project in Jiangsu Province No.XYDXX-108
文摘With the rapid development of information technology,IoT devices play a huge role in physiological health data detection.The exponential growth of medical data requires us to reasonably allocate storage space for cloud servers and edge nodes.The storage capacity of edge nodes close to users is limited.We should store hotspot data in edge nodes as much as possible,so as to ensure response timeliness and access hit rate;However,the current scheme cannot guarantee that every sub-message in a complete data stored by the edge node meets the requirements of hot data;How to complete the detection and deletion of redundant data in edge nodes under the premise of protecting user privacy and data dynamic integrity has become a challenging problem.Our paper proposes a redundant data detection method that meets the privacy protection requirements.By scanning the cipher text,it is determined whether each sub-message of the data in the edge node meets the requirements of the hot data.It has the same effect as zero-knowledge proof,and it will not reveal the privacy of users.In addition,for redundant sub-data that does not meet the requirements of hot data,our paper proposes a redundant data deletion scheme that meets the dynamic integrity of the data.We use Content Extraction Signature(CES)to generate the remaining hot data signature after the redundant data is deleted.The feasibility of the scheme is proved through safety analysis and efficiency analysis.
基金supported in part by Beijing Natural Science Foundation under Grant L232050in part by the Project of Cultivation for young topmotch Talents of Beijing Municipal Institutions under Grant BPHR202203225in part by Young Elite Scientists Sponsorship Program by BAST under Grant BYESS2023031.
文摘Mobile edge computing(MEC)is a promising paradigm by deploying edge servers(nodes)with computation and storage capacity close to IoT devices.Content Providers can cache data in edge servers and provide services for IoT devices,which effectively reduces the delay for acquiring data.With the increasing number of IoT devices requesting for services,the spectrum resources are generally limited.In order to effectively meet the challenge of limited spectrum resources,the Non-Orthogonal Multiple Access(NOMA)is proposed to improve the transmission efficiency.In this paper,we consider the caching scenario in a NOMA-enabled MEC system.All the devices compete for the limited resources and tend to minimize their own cost.We formulate the caching problem,and the goal is to minimize the delay cost for each individual device subject to resource constraints.We reformulate the optimization as a non-cooperative game model.We prove the existence of Nash equilibrium(NE)solution in the game model.Then,we design the Game-based Cost-Efficient Edge Caching Algorithm(GCECA)to solve the problem.The effectiveness of our GCECA algorithm is validated by both parameter analysis and comparison experiments.
基金supported by the National Natural Science Foundation of China(No.61871283).
文摘The Autonomous Underwater Glider(AUG)is a kind of prevailing underwater intelligent internet vehicle and occupies a dominant position in industrial applications,in which path planning is an essential problem.Due to the complexity and variability of the ocean,accurate environment modeling and flexible path planning algorithms are pivotal challenges.The traditional models mainly utilize mathematical functions,which are not complete and reliable.Most existing path planning algorithms depend on the environment and lack flexibility.To overcome these challenges,we propose a path planning system for underwater intelligent internet vehicles.It applies digital twins and sensor data to map the real ocean environment to a virtual digital space,which provides a comprehensive and reliable environment for path simulation.We design a value-based reinforcement learning path planning algorithm and explore the optimal network structure parameters.The path simulation is controlled by a closed-loop model integrated into the terminal vehicle through edge computing.The integration of state input enriches the learning of neural networks and helps to improve generalization and flexibility.The task-related reward function promotes the rapid convergence of the training.The experimental results prove that our reinforcement learning based path planning algorithm has great flexibility and can effectively adapt to a variety of different ocean conditions.
基金in part by National Undergraduate Innovation and Entrepreneurship Training Program under Grant No.202310347039Zhejiang Provincial Natural Science Foundation of China under Grant No.LZ22F020002Huzhou Science and Technology Planning Foundation under Grant No.2023GZ04.
文摘The Internet of Things(IoT)has characteristics such as node mobility,node heterogeneity,link heterogeneity,and topology heterogeneity.In the face of the IoT characteristics and the explosive growth of IoT nodes,which brings about large-scale data processing requirements,edge computing architecture has become an emerging network architecture to support IoT applications due to its ability to provide powerful computing capabilities and good service functions.However,the defense mechanism of Edge Computing-enabled IoT Nodes(ECIoTNs)is still weak due to their limited resources,so that they are susceptible to malicious software spread,which can compromise data confidentiality and network service availability.Facing this situation,we put forward an epidemiology-based susceptible-curb-infectious-removed-dead(SCIRD)model.Then,we analyze the dynamics of ECIoTNs with different infection levels under different initial conditions to obtain the dynamic differential equations.Additionally,we establish the presence of equilibrium states in the SCIRD model.Furthermore,we conduct an analysis of the model’s stability and examine the conditions under which malicious software will either spread or disappear within Edge Computing-enabled IoT(ECIoT)networks.Lastly,we validate the efficacy and superiority of the SCIRD model through MATLAB simulations.These research findings offer a theoretical foundation for suppressing the propagation of malicious software in ECIoT networks.The experimental results indicate that the theoretical SCIRD model has instructive significance,deeply revealing the principles of malicious software propagation in ECIoT networks.This study solves a challenging security problem of ECIoT networks by determining the malicious software propagation threshold,which lays the foundation for buildingmore secure and reliable ECIoT networks.
基金supported in part by the National Natural Science Foundation of China under Grant 62171465,62072303,62272223,U22A2031。
文摘By pushing computation,cache,and network control to the edge,mobile edge computing(MEC)is expected to play a leading role in fifth generation(5G)and future sixth generation(6G).Nevertheless,facing ubiquitous fast-growing computational demands,it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments(UEs).To address this issue,we propose an air-ground collaborative MEC(AGCMEC)architecture in this article.The proposed AGCMEC integrates all potentially available MEC servers within air and ground in the envisioned 6G,by a variety of collaborative ways to provide computation services at their best for UEs.Firstly,we introduce the AGC-MEC architecture and elaborate three typical use cases.Then,we discuss four main challenges in the AGC-MEC as well as their potential solutions.Next,we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy.Finally,we highlight several potential research directions of the AGC-MEC.
文摘Mobile Edge Computing(MEC)is a promising technology that provides on-demand computing and efficient storage services as close to end users as possible.In an MEC environment,servers are deployed closer to mobile terminals to exploit storage infrastructure,improve content delivery efficiency,and enhance user experience.However,due to the limited capacity of edge servers,it remains a significant challenge to meet the changing,time-varying,and customized needs for highly diversified content of users.Recently,techniques for caching content at the edge are becoming popular for addressing the above challenges.It is capable of filling the communication gap between the users and content providers while relieving pressure on remote cloud servers.However,existing static caching strategies are still inefficient in handling the dynamics of the time-varying popularity of content and meeting users’demands for highly diversified entity data.To address this challenge,we introduce a novel method for content caching over MEC,i.e.,PRIME.It synthesizes a content popularity prediction model,which takes users’stay time and their request traces as inputs,and a deep reinforcement learning model for yielding dynamic caching schedules.Experimental results demonstrate that PRIME,when tested upon the MovieLens 1M dataset for user request patterns and the Shanghai Telecom dataset for user mobility,outperforms its peers in terms of cache hit rates,transmission latency,and system cost.
文摘To support the explosive growth of Information and Communications Technology(ICT),Mobile Edge Comput-ing(MEC)provides users with low latency and high bandwidth service by offloading computational tasks to the network’s edge.However,resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications.To address the difficulty of running computationally intensive applications on resource-constrained clients,a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers is studied in this paper.Then a user benefit function EoU(Experience of Users)is proposed jointly considering energy consumption and time delay.The EoU maximization problem is decomposed into two steps,i.e.,resource allocation and offloading decision.The offloading decision is usually given by heuristic algorithms which are often faced with the challenge of slow convergence and poor stability.Thus,a combined offloading algorithm,i.e.,a Gini coefficient-based adaptive genetic algorithm(GCAGA),is proposed to alleviate the dilemma.The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates the convergence with the Gini coefficient.The simulation compares the proposed algorithm with the genetic algorithm(GA)and adaptive genetic algorithm(AGA).Experiment results show that the Gini coefficient and the adaptive heuristic operators can accelerate the convergence speed,and the proposed algorithm performs better in terms of convergence while obtaining higher EoU.The simulation code of the proposed algorithm is available:https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
基金supported in part by the National Key R&D Program of China under Grant 2020YFB1005900the National Natural Science Foundation of China under Grant 62001220+3 种基金the Jiangsu Provincial Key Research and Development Program under Grants BE2022068the Natural Science Foundation of Jiangsu Province under Grants BK20200440the Future Network Scientific Research Fund Project FNSRFP-2021-YB-03the Young Elite Scientist Sponsorship Program,China Association for Science and Technology.
文摘Collaborative edge computing is a promising direction to handle the computation intensive tasks in B5G wireless networks.However,edge computing servers(ECSs)from different operators may not trust each other,and thus the incentives for collaboration cannot be guaranteed.In this paper,we propose a consortium blockchain enabled collaborative edge computing framework,where users can offload computing tasks to ECSs from different operators.To minimize the total delay of users,we formulate a joint task offloading and resource optimization problem,under the constraint of the computing capability of each ECS.We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution.Finally,we propose a reputation based node selection approach to facilitate the consensus process,and also consider a completion time based primary node selection to avoid monopolization of certain edge node and enhance the security of the blockchain.Simulation results validate the effectiveness of the proposed algorithm,and the total delay can be reduced by up to 40%compared with the non-cooperative case.
基金supported in part by the Beijing Natural Science Foundation under Grant L201011in part by the National Natural Science Foundation of China(U2001213 and 61971191)in part by National Key Research and Development Project(2020YFB1807204)。
文摘Edge computing paradigm for 5G architecture has been considered as one of the most effective ways to realize low latency and highly reliable communication,which brings computing tasks and network resources to the edge of network.The deployment of edge computing nodes is a key factor affecting the service performance of edge computing systems.In this paper,we propose a method for deploying edge computing nodes based on user location.Through the combination of Simulation of Urban Mobility(SUMO)and Network Simulator-3(NS-3),a simulation platform is built to generate data of hotspot areas in Io T scenario.By effectively using the data generated by the communication between users in Io T scenario,the location area of the user terminal can be obtained.On this basis,the deployment problem is expressed as a mixed integer linear problem,which can be solved by Simulated Annealing(SA)method.The analysis of the results shows that,compared with the traditional method,the proposed method has faster convergence speed and better performance.
基金supported in part by the National Natural Science Foundation of China(No.61701197)in part by the National Key Research and Development Program of China(No.2021YFA1000500(4))in part by the 111 Project(No.B23008).
文摘In vehicle edge computing(VEC),asynchronous federated learning(AFL)is used,where the edge receives a local model and updates the global model,effectively reducing the global aggregation latency.Due to different amounts of local data,computing capabilities and locations of the vehicles,renewing the global model with same weight is inappropriate.The above factors will affect the local calculation time and upload time of the local model,and the vehicle may also be affected by Byzantine attacks,leading to the deterioration of the vehicle data.However,based on deep reinforcement learning(DRL),we can consider these factors comprehensively to eliminate vehicles with poor performance as much as possible and exclude vehicles that have suffered Byzantine attacks before AFL.At the same time,when aggregating AFL,we can focus on those vehicles with better performance to improve the accuracy and safety of the system.In this paper,we proposed a vehicle selection scheme based on DRL in VEC.In this scheme,vehicle’s mobility,channel conditions with temporal variations,computational resources with temporal variations,different data amount,transmission channel status of vehicles as well as Byzantine attacks were taken into account.Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
基金the Deanship of Scientific Research at King Khalid University for funding this work through large group research project under Grant Number RGP2/474/44.
文摘In this paper,we present a comprehensive system model for Industrial Internet of Things(IIoT)networks empowered by Non-Orthogonal Multiple Access(NOMA)and Mobile Edge Computing(MEC)technologies.The network comprises essential components such as base stations,edge servers,and numerous IIoT devices characterized by limited energy and computing capacities.The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption.The system operates in discrete time slots and employs a quasi-static approach,with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context.This study makes valuable contributions to the field by enhancing the understanding of resourceefficient management and task allocation,particularly relevant in real-time industrial applications.Experimental results indicate that our proposed algorithmsignificantly outperforms existing approaches,reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA while achieving a 27.31% and 74.12% improvement in Qn O.Moreover,the algorithmeffectively balances complexity and network performance,as demonstratedwhen reducing the number of devices in each group(Ng)from 200 to 50,resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption.This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
基金This research is funded by 2023 Henan Province Science and Technology Research Projects:Key Technology of Rapid Urban Flood Forecasting Based onWater Level Feature Analysis and Spatio-Temporal Deep Learning(No.232102320015)Henan Provincial Higher Education Key Research Project Program(Project No.23B520024)a Multi-Sensor-Based Indoor Environmental Parameters Monitoring and Control System.
文摘The Internet of Things(IoT)has revolutionized how we interact with and gather data from our surrounding environment.IoT devices with various sensors and actuators generate vast amounts of data that can be harnessed to derive valuable insights.The rapid proliferation of Internet of Things(IoT)devices has ushered in an era of unprecedented data generation and connectivity.These IoT devices,equipped with many sensors and actuators,continuously produce vast volumes of data.However,the conventional approach of transmitting all this data to centralized cloud infrastructures for processing and analysis poses significant challenges.However,transmitting all this data to a centralized cloud infrastructure for processing and analysis can be inefficient and impractical due to bandwidth limitations,network latency,and scalability issues.This paper proposed a Self-Learning Internet Traffic Fuzzy Classifier(SLItFC)for traffic data analysis.The proposed techniques effectively utilize clustering and classification procedures to improve classification accuracy in analyzing network traffic data.SLItFC addresses the intricate task of efficiently managing and analyzing IoT data traffic at the edge.It employs a sophisticated combination of fuzzy clustering and self-learning techniques,allowing it to adapt and improve its classification accuracy over time.This adaptability is a crucial feature,given the dynamic nature of IoT environments where data patterns and traffic characteristics can evolve rapidly.With the implementation of the fuzzy classifier,the accuracy of the clustering process is improvised with the reduction of the computational time.SLItFC can reduce computational time while maintaining high classification accuracy.This efficiency is paramount in edge computing,where resource constraints demand streamlined data processing.Additionally,SLItFC’s performance advantages make it a compelling choice for organizations seeking to harness the potential of IoT data for real-time insights and decision-making.With the Self-Learning process,the SLItFC model monitors the network traffic data acquired from the IoT Devices.The Sugeno fuzzy model is implemented within the edge computing environment for improved classification accuracy.Simulation analysis stated that the proposed SLItFC achieves 94.5%classification accuracy with reduced classification time.
Funding: Supported by the National Natural Science Foundation of China (No. 61871283), the Foundation of Pre-Research on Equipment of China (No. 61400010304), and the Major Civil-Military Integration Project in Tianjin, China (No. 18ZXJMTG00170).
Abstract: The development of communication technology will promote the application of the Internet of Things, and Beyond 5G will become a new technology promoter. At the same time, Beyond 5G will become one of the important supports for the development of edge computing technology. This paper proposes a communication task allocation algorithm based on deep reinforcement learning for vehicle-to-pedestrian communication scenarios in edge computing. Through the agent's trial-and-error learning, the optimal spectrum and power can be determined for transmission without global information, so as to balance vehicle-to-pedestrian and vehicle-to-infrastructure communication. The results show that the agent can effectively improve the vehicle-to-infrastructure communication rate while meeting the delay constraints on the vehicle-to-pedestrian link.
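The paper's algorithm is deep-RL-based; as a simplified stand-in, the sketch below shows how a tabular epsilon-greedy agent could pick a (spectrum, power) action from local feedback only, with a reward that would combine the V2I rate and a V2P delay-violation penalty. The class name, the discrete power levels, and the hyperparameters are all assumptions.

```python
import random
from collections import defaultdict

# Candidate actions: a spectrum sub-band paired with a transmit power level
# (dBm values assumed for illustration).
ACTIONS = [(ch, p) for ch in range(4) for p in (5, 10, 23)]

class SpectrumPowerAgent:
    def __init__(self, eps=0.1, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)          # Q-values keyed by (state, action)
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        if random.random() < self.eps:       # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])  # exploit

    def learn(self, s, a, r, s_next):
        """Standard Q-learning update; the reward r would combine the V2I
        rate with a penalty when the V2P delay constraint is violated."""
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])
```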
Funding: Supported by the Jilin Provincial Science and Technology Department Natural Science Foundation of China (20210101415JC) and the Jilin Provincial Science and Technology Department Free Exploration Research Project of China (YDZJ202201ZYTS642).
Abstract: Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Because the request tasks from one MWE are homogeneous over a long-term time period, it is vital to predeploy the particular service caches required by those tasks at the MEC server. In this paper, we model a service-caching-assisted MEC framework that accounts for both the constraint on the number of service caches hosted by each edge server and the migration of request tasks from the current edge server to another edge server that hosts the service cache the tasks require. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS learns a near-optimal offloading and migrating decision-making policy through centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that MBOMS converges well after training and outperforms five baseline algorithms.
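The abstract only names a "long-term average weighted cost" and a migration rule tied to service caches, so the fragment below is a hypothetical per-slot version: a task migrates only if some server that already caches its required service offers a lower weighted delay-energy cost. The weights, function names, and dictionary layout are invented for illustration.

```python
def weighted_cost(delay, energy, w_delay=0.7, w_energy=0.3):
    """Per-slot cost combining task completion delay and energy consumption;
    the weights are illustrative assumptions."""
    return w_delay * delay + w_energy * energy

def migrate_or_stay(task, servers, current):
    """Keep the task on `current` unless another edge server that already
    hosts the required service cache has a lower weighted cost."""
    candidates = [s for s in servers if task["service"] in s["cached"]]
    return min(candidates,
               key=lambda s: weighted_cost(s["delay"], s["energy"]),
               default=current)

# Example: the task migrates to the cache-hosting server with the lower cost.
servers = [{"cached": {"ocr"}, "delay": 4.0, "energy": 2.0},
           {"cached": {"ocr"}, "delay": 9.0, "energy": 1.0}]
print(migrate_or_stay({"service": "ocr"}, servers, current=servers[1]))
```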
Funding: Supported by the National Key Research and Development Program of China (2018YFC1504502).
Abstract: Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, consisting of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme outperforms other baseline schemes.
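The offloading decision builds on a UCB algorithm; the standard UCB1 index is sketched below, treating each candidate offloading target as a bandit arm whose reward could be, for example, the negative task completion delay. The device-cooperation extension described in the abstract is omitted, and the function name and arguments are illustrative.

```python
import math

def ucb1_select(counts, mean_rewards, t):
    """Classic UCB1 arm selection. counts[k] is how many times target k was
    chosen so far, mean_rewards[k] its empirical mean reward, and t >= 1 the
    current round. Unplayed arms are tried first; afterwards the arm with the
    largest mean-plus-exploration-bonus index is chosen."""
    for k, n in enumerate(counts):
        if n == 0:
            return k
    return max(range(len(counts)),
               key=lambda k: mean_rewards[k]
               + math.sqrt(2 * math.log(t) / counts[k]))

# Example: after a warm-up, the rarely tried arm gets an exploration boost.
print(ucb1_select(counts=[10, 2], mean_rewards=[-3.0, -3.5], t=12))
```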
Funding: This work received funding from TECNALIA, Basque Research and Technology Alliance (BRTA), and was supported by the project Optimization of Deep Learning Algorithms for Edge IoT Devices for Sensorization and Control in Buildings and Infrastructures (EMBED), funded by the Gipuzkoa Provincial Council and approved under the 2023 call of the Guipuzcoan Network of Science, Technology and Innovation Program with File Number 2023-CIEN-000051-01.
Abstract: In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, computing many types of tasks is not feasible, because the devices lack the available memory and processing capacity such computations require. In this scenario, it is worth transferring these tasks to resource-rich platforms such as Edge Data Centers or remote cloud servers. For various reasons, it is more appropriate to offload different tasks to specific destinations depending on the properties and state of the environment and the nature of the tasks. At the same time, establishing an optimal offloading policy, one that ensures all tasks are executed within the required latency and avoids excessive workload on specific computing centers, is not easy. This study presents two alternatives for solving the offloading decision problem by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies the alternatives in a well-known edge computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution, while providing similar results in terms of energy efficiency. Finally, the success rates of the different computing centers are tested, and the inability of remote cloud servers to respond to applications in real time is demonstrated. These novel ways of finding an offloading strategy in a local network environment are unique in that they emulate the state and structure of the environment innovatively, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. At the same time, the suitability of Reinforcement Learning (RL) techniques is demonstrated by the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
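The abstract highlights an offloading score that grades candidate paths for GNN training but does not state its formula, so the function below is a purely hypothetical scoring rule combining link quality, destination load, and historical success rate; every weight and parameter name is an assumption made for illustration.

```python
def offload_score(latency_ms, bandwidth_mbps, cpu_load, success_rate):
    """Higher is better: favour fast, lightly loaded destinations with
    reliable links. The four weights and the normalization constants are
    illustrative, not from the paper."""
    return (0.4 * success_rate
            + 0.3 * min(bandwidth_mbps / 100.0, 1.0)
            + 0.2 * (1.0 - cpu_load)
            + 0.1 * (1.0 / (1.0 + latency_ms / 10.0)))

# Example: rank an Edge Data Center against a remote cloud server; for a
# latency-sensitive task the nearby edge path typically scores higher.
edge = offload_score(latency_ms=5, bandwidth_mbps=80, cpu_load=0.6, success_rate=0.95)
cloud = offload_score(latency_ms=60, bandwidth_mbps=40, cpu_load=0.2, success_rate=0.90)
print(edge > cloud)
```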