The sixth generation (6G) of wireless cellular networks is expected to incorporate the latest developments in network infrastructure and emerging technological advances. In the 6G era, edge caching technology will evolve toward intelligence, dynamics, and security. However, the security problems of edge caching, including data tampering and eavesdropping, are seldom considered in the literature. In this paper, we consider two-hop edge caching in which blockchain and physical-layer security technologies are adopted to prevent data from being tampered with or eavesdropped. We design a blockchain-based framework to guarantee the reliability of important data, such as content request frequencies, and jointly optimize the content caching probability and the redundancy rate to maximize the secure transmission probability. Extensive simulations show that our optimization scheme significantly improves the secure transmission probability of the edge cache network, whether facing independent or joint eavesdropping.
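The joint optimization of caching probability and redundancy rate can be illustrated with a minimal sketch. The model below is our own toy assumption, not the paper's analysis: the secure transmission probability is the product of a cache-hit term driven by the caching probability q, a connection term that degrades as the redundancy rate r grows (redundant bits lower the useful rate), and a secrecy term that improves with r (redundancy confuses the eavesdropper).

```python
import math

# Toy model (assumption, not the paper's): secure transmission probability
# as a product of cache-hit, connection, and secrecy terms.
def secure_tx_probability(q, r):
    p_hit = q                              # request served from a nearby cache
    p_connect = math.exp(-2.0 * r)         # higher redundancy -> lower useful rate
    p_secrecy = 1.0 - math.exp(-5.0 * r)   # redundancy degrades the eavesdropper's channel
    return p_hit * p_connect * p_secrecy

# Joint optimization of (q, r) by exhaustive grid search over [0, 1] x [0, 1].
candidates = [(q / 100, r / 100) for q in range(101) for r in range(101)]
best_q, best_r = max(candidates, key=lambda qr: secure_tx_probability(*qr))
best_p = secure_tx_probability(best_q, best_r)
```

The interesting feature the sketch preserves is the interior optimum in r: too little redundancy leaks to the eavesdropper, too much throttles the legitimate link.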
Edge caching is an emerging technology for supporting massive content access in mobile edge networks, addressing rapidly growing Internet of Things (IoT) services and content applications. However, the edge server is limited in computation and storage capacity, which causes a low cache hit ratio. Cooperative edge caching that joins neighboring edge servers is regarded as a promising technique to improve the cache hit ratio and reduce network congestion. Further, recommender systems can provide personalized content services to meet users' requirements in entertainment-oriented mobile networks. Therefore, we investigate joint cooperative edge caching and recommendation to achieve additional cache gains through a soft caching framework. To measure the cache profits, the optimization problem is formulated as a 0-1 Integer Linear Program (ILP), which is NP-hard. Specifically, the method of processing content requests is defined as server actions, and we determine the server actions that maximize the quality of experience (QoE). We propose a cache-friendly heuristic algorithm to solve the problem. Simulation results demonstrate that the proposed framework achieves superior performance in improving QoE.
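A common way to approach such an NP-hard 0-1 caching ILP heuristically is a value-density greedy fill. The instance below (content names, sizes, QoE gains, capacity) is hypothetical and the greedy rule is a standard relaxation, not the paper's exact algorithm:

```python
# Hypothetical instance: each candidate content has a size and a QoE gain if
# cached; the edge server has a cache capacity budget.
contents = [  # (name, size, qoe_gain)
    ("a", 4, 10.0), ("b", 3, 9.0), ("c", 2, 5.0), ("d", 5, 6.0),
]
capacity = 7

def cache_friendly_greedy(contents, capacity):
    # Rank contents by QoE gain per unit of cache space, then fill greedily --
    # a classic knapsack-style relaxation of the 0-1 ILP.
    chosen, used, qoe = [], 0, 0.0
    for name, size, gain in sorted(contents, key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= capacity:
            chosen.append(name)
            used += size
            qoe += gain
    return chosen, qoe

chosen, qoe = cache_friendly_greedy(contents, capacity)
```

Here "b" (3.0 QoE per unit) and "a" (2.5) fill the 7-unit budget exactly, leaving no room for the rest.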
To relieve backhaul link stress and reduce content acquisition delay, mobile edge caching has become one of the most promising approaches. In this paper, a novel federated reinforcement learning (FRL) method with adaptive training times is proposed for edge caching. Through a new federated learning process with an asynchronous model training process and a synchronous global aggregation process, the proposed FRL-based edge caching algorithm mitigates the performance degradation caused by the non-identically and independently distributed (non-i.i.d.) characteristics of content popularity among edge nodes. The theoretical bound of the loss function difference is analyzed, based on which a training-times adaptation mechanism is proposed to handle the tradeoff between local training and global aggregation for each edge node in the federation. Numerical simulations verify that the proposed FRL-based edge caching method outperforms baseline methods in terms of caching benefit, cache hit ratio, and convergence speed.
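The interplay between local training and synchronous global aggregation can be sketched on a deliberately simple 1-D problem. Everything below is illustrative: each node minimizes a quadratic loss with a different optimum (standing in for non-i.i.d. content popularity), and the adaptive rule — fewer local steps for nodes that drift further from the global model — is our own stand-in for the paper's bound-driven mechanism.

```python
# Each edge node k locally minimizes (w - t_k)^2; targets t_k differ across
# nodes to mimic non-i.i.d. content popularity.
def local_sgd(w, target, steps, lr=0.1):
    for _ in range(steps):
        w -= lr * 2 * (w - target)   # gradient of (w - target)^2
    return w

def federated_round(w_global, targets, base_steps=10):
    updates = []
    for t in targets:
        # Adaptive training times (assumed rule): larger drift from the global
        # model -> fewer local epochs, limiting client drift.
        steps = max(1, base_steps - int(abs(w_global - t)))
        updates.append(local_sgd(w_global, t, steps))
    return sum(updates) / len(updates)   # synchronous global aggregation

w = 0.0
targets = [1.0, 2.0, 6.0]   # non-i.i.d. local optima
for _ in range(50):
    w = federated_round(w, targets)
```

After a few rounds the global model settles near the consensus of the heterogeneous local optima rather than oscillating toward any single node.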
With the rapid development of mobile communication technology and intelligent applications, the number of mobile devices and the volume of data traffic in networks have been growing exponentially, which places a great burden on networks and brings huge challenges to serving user demand. Edge caching, which utilizes the storage and computation resources of the edge to bring resources closer to end users, is a promising way to relieve the network burden and enhance user experience. In this paper, we survey edge caching techniques from a comprehensive and systematic perspective. We first present an overview of edge caching, summarizing its three key issues, i.e., where, what, and how to cache, and then introduce several significant caching metrics. We then elaborate on these three issues in depth, which correspond to caching locations, caching objects, and caching strategies, respectively. In particular, we reinterpret the issue of "what to cache" as the classification of caching objects, which can be further divided into content cache, data cache, and service cache. Finally, we discuss several open issues and challenges of edge caching to inspire future investigations in this research area.
Computation-intensive and latency-sensitive user requests pose significant challenges to traditional cloud computing. In response, mobile edge computing (MEC) has emerged as a new paradigm that extends the computational, caching, and communication capabilities of cloud computing. By caching certain services on edge nodes, computational support can be provided for requests that are offloaded to the edge. However, previous studies on task offloading have generally not considered the impact of caching mechanisms or the cache space occupied by services. This oversight can lead to problems such as high task execution delays and invalidated offloading decisions. To optimize task response time and ensure the availability of task offloading decisions, we investigate a task offloading method that accounts for the caching mechanism. First, we incorporate the cache information of MEC into the task offloading model and formulate the task offloading problem as a mixed-integer nonlinear programming (MINLP) problem. Then, we propose an integer particle swarm optimization and improved genetic algorithm (IPSO_IGA) to solve the MINLP. IPSO_IGA exploits the evolutionary framework of particle swarm optimization, using a crossover operator to update the positions of particles and an improved mutation operator to maintain particle diversity. Finally, extensive simulation experiments evaluate the performance of the proposed algorithm. The experimental results demonstrate that IPSO_IGA can save 20% to 82% of task completion time compared with state-of-the-art and classical algorithms. Moreover, IPSO_IGA is suitable for scenarios with complex network structures and computation-intensive tasks.
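The hybrid idea — a PSO-style swarm whose position update is a genetic crossover with the personal and global bests, plus mutation for diversity — can be sketched on a tiny binary offloading instance. The cost model (per-task local vs. edge costs with a congestion penalty that grows with the number of offloaded tasks) is an assumption for illustration, not the paper's MINLP:

```python
import random

# Toy instance: x[i] = 1 offloads task i to the edge. Offloading saves time
# but every offloaded task pays a congestion penalty on the shared link.
local = [9.0, 4.0, 8.0, 3.0]
edge = [2.0, 3.0, 2.5, 2.8]

def completion_time(x):
    penalty = 0.5 * sum(x)
    return sum((e + penalty) if b else l for b, l, e in zip(x, local, edge))

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(x, rate=0.1):
    return [1 - b if random.random() < rate else b for b in x]

def ipso_iga(iters=200, swarm=12, seed=1):
    random.seed(seed)
    particles = [[random.randint(0, 1) for _ in local] for _ in range(swarm)]
    pbest = list(particles)
    gbest = min(particles, key=completion_time)
    for _ in range(iters):
        for i, x in enumerate(particles):
            # Position update in the spirit of IPSO_IGA: crossover with the
            # personal best, then with the global best, then mutation.
            cand = mutate(crossover(crossover(x, pbest[i]), gbest))
            particles[i] = cand
            if completion_time(cand) < completion_time(pbest[i]):
                pbest[i] = cand
            if completion_time(cand) < completion_time(gbest):
                gbest = cand
    return gbest, completion_time(gbest)

best, cost = ipso_iga()
```

On this instance the optimum offloads tasks 0 and 2 only: offloading everything triggers the congestion penalty, while offloading nothing wastes the fast edge.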
The edge caching resource allocation problem in Fog Radio Access Networks (F-RANs) is investigated. An incentive mechanism is introduced to motivate Content Providers (CPs) to participate in the resource allocation procedure. We formulate the interaction between the cloud server and the CPs as a Stackelberg game, where the cloud server sets non-uniform prices for the Fog Access Points (F-APs) while the CPs lease the F-APs to cache their most popular contents. Then, by exploiting the multiplier penalty function method, we transform the cloud server's constrained optimization problem into an equivalent unconstrained one, which is solved using the simplex search method. Moreover, the existence and uniqueness of the Nash Equilibrium (NE) of the Stackelberg game are analyzed theoretically. Furthermore, we propose a uniform-pricing-based resource allocation strategy that eliminates the competition among the CPs, and we theoretically analyze the factors that affect the cloud server's uniform pricing strategy. We also propose a global-optimization-based resource allocation strategy that further eliminates the competition between the cloud server and the CPs. Simulation results quantify the proposed strategies, showing their efficiency in pricing and resource allocation.
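The leader-follower structure of such a pricing game can be sketched with a single CP and a quadratic utility. The utility v*d - p*d - 0.5*c*d^2 and its parameters are assumptions for illustration (the paper handles multiple CPs and non-uniform F-AP prices), and a grid search stands in for the simplex search:

```python
# Follower: one CP leasing d units of caching at unit price p; its concave
# utility v*d - p*d - 0.5*c*d**2 has a closed-form best response.
v, c = 10.0, 2.0

def cp_best_response(p):
    return max(0.0, (v - p) / c)

# Leader: the cloud server anticipates the CP's best response and picks the
# price maximizing its revenue (grid search in place of simplex search).
def cloud_revenue(p):
    return p * cp_best_response(p)

best_p = max((p / 1000 for p in range(0, 10001)), key=cloud_revenue)
```

The Stackelberg solution here is p = v/2: the leader internalizes exactly how demand shrinks as the price rises.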
Cache-enabled unmanned aerial vehicles (UAVs) are considered for storing popular contents and providing downlink data offloading in cellular networks. In this context, we formulate a joint optimization problem of user association, caching placement, and backhaul bandwidth allocation to minimize content acquisition delay under the UAVs' energy constraint. We decompose the formulated problem into two subproblems: i) user association and caching placement and ii) backhaul bandwidth allocation. We first obtain the optimal bandwidth allocation for a given user association and caching placement by the Lagrangian multiplier approach. After that, embedding the backhaul bandwidth allocation algorithm, we solve the user association and caching placement problem by a three-dimensional (3D) matching method, which we decompose into two two-dimensional (2D) matching problems with low-complexity algorithms. The proposed scheme converges and exhibits low computational complexity. Simulation results demonstrate that the proposed cache-enabled UAV framework outperforms conventional UAV-assisted cellular networks in terms of content acquisition delay, and it achieves significantly lower content acquisition delay than two other benchmark schemes.
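The Lagrangian multiplier step can be illustrated on a simplified delay model (our assumption, not the paper's exact formulation): user i downloads s_i bits over its backhaul share b_i, the total delay is the sum of s_i / b_i, and the shares must sum to the budget B. Stationarity of the Lagrangian gives b_i proportional to sqrt(s_i):

```python
import math

# Minimize sum(s_i / b_i) subject to sum(b_i) = B.
# d/db_i [s_i/b_i + lam*b_i] = 0  =>  b_i = sqrt(s_i / lam); enforcing the
# budget yields the closed form below.
def allocate_bandwidth(sizes, B):
    norm = sum(math.sqrt(s) for s in sizes)
    return [B * math.sqrt(s) / norm for s in sizes]

sizes = [1.0, 4.0, 9.0]
alloc = allocate_bandwidth(sizes, B=12.0)
```

The square-root rule splits the difference between equal shares and proportional shares: larger downloads get more bandwidth, but not linearly more.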
With increasing maritime activities and a rapidly developing maritime economy, the fifth-generation (5G) mobile communication system is expected to be deployed at sea. New technologies need to be explored to meet the requirements of ultra-reliable and low-latency communications (URLLC) in the maritime communication network (MCN). Mobile edge computing (MEC) can achieve high energy efficiency in MCN at the cost of high control plane latency and low reliability. To address this issue, the mobile edge communications, computing, and caching (MEC3) technology is proposed to sink mobile computing, network control, and storage to the edge of the network. New methods that enable resource-efficient configurations and reduce redundant data transmissions can enable the reliable implementation of computation-intensive and latency-sensitive applications. The key technologies of MEC3 for enabling URLLC in MCN are analyzed and optimized. The best-response-based offloading algorithm (BROA) is adopted to optimize task offloading. The simulation results show that task latency can be decreased by 26.5 ms, and the energy consumption of terminal users can be reduced to 66.6%.
Due to the explosion of network data traffic and IoT devices, edge servers are overloaded and slow to respond to the massive volume of online requests. Many studies have shown that edge caching can solve this problem effectively. This paper proposes a distributed edge collaborative caching mechanism for the Internet online request service scenario. It solves the problem of high average access delay caused by unbalanced load across edge servers, meets users' differentiated service demands, and improves user experience. In particular, the edge cache node selection algorithm is optimized, and a novel edge cache replacement strategy considering differentiated user requests is proposed. This mechanism can shorten the response time to a large number of user requests. Experimental results show that, compared with a state-of-the-art online edge caching algorithm, the proposed edge collaborative caching strategy reduces the average response delay by 9%. It also increases user utility by 4.5 times in differentiated service scenarios and significantly reduces the time complexity of the edge caching algorithm.
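One way a replacement strategy can account for differentiated user requests is to weight each object's request frequency by a per-user service class when choosing a victim. The priority function and the cache contents below are hypothetical — the paper's exact strategy is not reproduced here:

```python
# Hypothetical cache state: name -> (size, request_freq, service_class_weight).
cache = {
    "video": (8, 3, 1.0),
    "map":   (2, 5, 2.0),
    "log":   (4, 1, 0.5),
}

def evict_candidate(cache):
    # Evict the object with the lowest class-weighted value per unit of space,
    # so high-priority users' content survives longer (assumed priority rule).
    return min(cache, key=lambda k: cache[k][1] * cache[k][2] / cache[k][0])
```

Under this rule the rarely requested, low-priority "log" object is evicted first even though it is not the largest.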
Emerging mobile edge networks with content caching capability allow end users to receive information directly from adjacent edge servers instead of a centralized data warehouse, so network transmission delay and system throughput can be improved significantly. Since duplicate content transmissions between the edge network and the remote cloud are reduced, an appropriate caching strategy can also greatly improve the system energy efficiency of mobile edge networks. This paper focuses on improving network energy efficiency and proposes an intelligent caching strategy based on a cached content distribution model for mobile edge networks, built on a promising deep reinforcement learning algorithm. A deep neural network (DNN) and the Q-learning algorithm are combined into a deep reinforcement learning framework named the deep-Q neural network (DQN), in which the DNN approximates the action-state value function of the Q-learning solution. The parameter iteration strategy in the proposed DQN algorithm is improved through the stochastic gradient descent method, so the DQN algorithm can converge to the optimal solution quickly and the network performance of the content caching policy can be optimized. The simulation results show that, given enough training steps, the proposed intelligent DQN-based content caching strategy can significantly improve the energy efficiency of mobile edge networks.
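The Q-learning core that the DQN builds on can be shown compactly. As a stand-in for the DNN approximator, this sketch keeps the same update rule with a lookup table, on a toy two-content cache of our own design: the state is the content currently cached, the action is the content to cache next, and the reward is 1 when the next request hits the cache.

```python
import random

random.seed(0)
popularity = [0.8, 0.2]          # request probabilities for contents 0 and 1
Q = [[0.0, 0.0], [0.0, 0.0]]     # Q[state][action]; a DNN replaces this table in DQN
alpha, gamma, eps = 0.1, 0.9, 0.1

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: Q[state][a])
    request = 0 if random.random() < popularity[0] else 1
    reward = 1.0 if action == request else 0.0
    nxt = action
    # Q-learning target: r + gamma * max_a' Q(s', a')
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt
```

After training, the learned values prefer caching the popular content from every state, which is exactly the policy the DQN would generalize from raw state features.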
The rapid growth of Internet content, applications, and services requires more computing and storage capacity and higher bandwidth. Traditionally, Internet services are provided from the cloud (i.e., from far away) and consumed on increasingly smart devices. Edge computing and caching provide these services from nearby smart devices; blending both approaches should combine the power of cloud services with the responsiveness of edge networks. This paper investigates how to intelligently use the caching and computing capabilities of edge nodes/cloudlets through artificial-intelligence-based policies. We first analyze scenarios of mobile edge networks with edge computing and caching abilities, then design a paradigm of virtualized edge networks that includes an efficient way of isolating traffic flows in the physical network layer. We develop caching and communication resource virtualization in the virtual layer and formulate the dynamic resource allocation problem as a reinforcement learning model. With the proposed self-adaptive and self-learning management, more flexible, better-performing, and more secure network services are obtained at lower cost. Simulation results and analyses show that addressing cached contents at proper edge nodes through a trained model is more efficient than requesting them from the cloud.
To accommodate the tremendous increase in mobile data traffic, cache-enabled device-to-device (D2D) communication has been regarded as a promising technique to relieve the heavy burden on cellular networks, since popular contents can be pre-fetched at user devices and shared among subscribers. As a result, cellular traffic can be offloaded and enhanced system performance becomes attainable. However, due to the limited cache capacity of mobile devices and the heterogeneous preferences among different users, the requested contents are most likely not proactively cached, inducing a lower cache hit ratio. A recommendation system, on the other hand, is able to reshape users' request patterns, mitigating the heterogeneity to some extent, and hence can boost the gain of edge caching. In this paper, the cost minimization problem for social-aware cache-enabled D2D networks with recommendation is investigated, taking into account the constraints on the cache capacity budget and the total number of recommended files per user, where contents are shared between users that trust each other. The minimization problem is an integer non-convex and non-linear program, which is in general NP-hard. Accordingly, we propose a time-efficient joint recommendation and caching decision scheme. Extensive simulation results show that the proposed scheme converges quickly and significantly reduces the average cost compared with various benchmark strategies.
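The coupling between recommendation and caching can be sketched with a small greedy heuristic. All numbers and the boost rule are hypothetical, not the paper's scheme: recommending a file multiplies that user's request probability for it by a fixed factor (then renormalizes), and the cache keeps the files with the highest post-recommendation aggregate demand.

```python
# Hypothetical per-user request probabilities over three files.
prefs = {
    "u1": {"f1": 0.6, "f2": 0.3, "f3": 0.1},
    "u2": {"f2": 0.5, "f3": 0.4, "f1": 0.1},
}
R, C, boost = 1, 2, 1.5   # recommendations per user, cache slots, boost factor

def recommend_and_cache(prefs):
    demand = {}
    for user, p in prefs.items():
        # Recommend each user's top-R files, reshaping its request distribution.
        top = sorted(p, key=p.get, reverse=True)[:R]
        boosted = {f: q * (boost if f in top else 1.0) for f, q in p.items()}
        norm = sum(boosted.values())
        for f, q in boosted.items():
            demand[f] = demand.get(f, 0.0) + q / norm
    # Cache the C files with the highest reshaped aggregate demand.
    return sorted(demand, key=demand.get, reverse=True)[:C], demand

cached, demand = recommend_and_cache(prefs)
```

Recommendation concentrates each user's demand on files that are already popular, so the two cached files cover most of the reshaped request mass.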
The authors of this paper have previously proposed the global virtual data space system (GVDS) to aggregate the scattered and autonomous storage resources in China's national supercomputer grid (the National Supercomputing Centers in Guangzhou, Jinan, and Changsha, the Shanghai Supercomputing Center, and the Computer Network Information Center of the Chinese Academy of Sciences) into a storage system that spans the wide area network (WAN), realizing unified management of global storage resources in China. At present, the GVDS has been successfully deployed in the China National Grid environment. However, when accessing and sharing remote data over the WAN, the GVDS causes redundant data transmission and wastes a large amount of network bandwidth. In this paper, we propose an edge cache system as a supplementary system of the GVDS to improve the performance of upper-level applications accessing and sharing remote data. Specifically, we first design the architecture of the edge cache system and then study its key technologies: an edge cache index mechanism based on double-layer hashing, an edge cache replacement strategy based on the GDSF algorithm, request routing based on consistent hashing, and a cluster membership maintenance method based on the SWIM protocol. The experimental results show that the edge cache system successfully implements the relevant operations (read, write, deletion, modification, etc.) and is functionally compatible with the POSIX interface. Further, it can greatly reduce the amount of data transmitted and increase the data access bandwidth when the accessed file is located in the edge cache system; that is, its performance is close to that of a network file system in a local area network (LAN).
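The consistent-hashing request routing mentioned above follows a well-known pattern: cache nodes (with virtual replicas) and file keys are mapped onto a hash ring, and a request is routed to the first node clockwise from the key's position, so adding or removing a node only remaps a small fraction of keys. The node names and replica count below are illustrative, not taken from the paper:

```python
import hashlib
from bisect import bisect_right

def _h(key):
    # Deterministic hash of a string onto the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, replicas=50):
        # Each node is placed at `replicas` points to smooth the load.
        self.ring = sorted((_h(f"{n}#{i}"), n) for n in nodes for i in range(replicas))
        self.keys = [h for h, _ in self.ring]

    def route(self, key):
        # First node clockwise from the key's hash (wrapping around the ring).
        idx = bisect_right(self.keys, _h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
```

Because the placement depends only on the hashes, every client that builds the same ring routes a given file to the same cache node without any coordination.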
Funding: This work was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 824019, in part by the Special Funds for Central Universities Construction of World-Class Universities (Disciplines), and in part by the China 111 Project (B16037).
Funding: Supported in part by the National Key R&D Program of China under Grant Nos. 2018YFB2100100 and 2018YFF0214700; the National NSFC under Grant Nos. 61902044 and 62072060; the Chongqing Research Program of Basic Research and Frontier Technology under Grant No. CSTC2019-jcyjmsxmX0589; the Key Research Program of Chongqing Science and Technology Commission under Grant Nos. CSTC2017jcyjBX0025 and CSTC2019jscxzdztzxX0031; the Fundamental Research Funds for the Central Universities under Grant No. 2020CDJQY-A022; the Chinese National Engineering Laboratory for Big Data System Computing Technology; and the Canadian NSERC.
Funding: Supported by the National Key R&D Program of China (2020YFB1807800).
Funding: Supported by the National Natural Science Foundation of China (No. 92267104); the Natural Science Foundation of Jiangsu Province of China (No. BK20211284); and the Financial and Science Technology Plan Project of Xinjiang Production and Construction Corps (No. 2020DB005).
Funding: Supported by the Key Scientific and Technological Projects of Henan Province under Grant Nos. 232102211084 and 222102210137, both received by B.W. (sponsor website: https://kjt.henan.gov.cn/), and the National Natural Science Foundation of China under Grant No. 61975187, received by Z.Z. (sponsor website: https://www.nsfc.gov.cn/).
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 61971129); the Natural Science Foundation of Jiangsu Province (No. BK20181264); the Research Fund of the State Key Laboratory of Integrated Services Networks (Xidian University) (No. ISN19-10); the Research Fund of the Key Laboratory of Wireless Sensor Network & Communication (Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences) (No. 2017002); and the UK Engineering and Physical Sciences Research Council (No. EP/K040685/2).
Funding: Supported by the National Natural Science Foundation of China (No. 61971060) and the Beijing Natural Science Foundation (4222010).
Funding: Supported by the National S&T Major Project (No. 2018ZX03001011); the National Key R&D Program (No. 2018YFB1801102); the National Natural Science Foundation of China (No. 61671072); and the Beijing Natural Science Foundation (No. L192025).
Funding: This work is supported by the National Natural Science Foundation of China (62072465) and the Key-Area Research and Development Program of Guangdong Province (2019B010107001).
Abstract: Owing to the explosion of network data traffic and IoT devices, edge servers are overloaded and slow to respond to the massive volume of online requests. A large number of studies have shown that edge caching can solve this problem effectively. This paper proposes a distributed edge collaborative caching mechanism for Internet online request service scenarios. It addresses the large average access delay caused by the unbalanced load of edge servers, meets users' differentiated service demands, and improves the user experience. In particular, the edge cache node selection algorithm is optimized, and a novel edge cache replacement strategy that accounts for differentiated user requests is proposed. This mechanism shortens the response time for large numbers of user requests. Experimental results show that, compared with a state-of-the-art online edge caching algorithm, the proposed edge collaborative caching strategy reduces the average response delay by 9%, increases user utility by 4.5 times in differentiated service scenarios, and significantly reduces the time complexity of the edge caching algorithm.
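A replacement strategy that "accounts for differentiated user requests" can be sketched as a utility-style eviction rule: each cached item is scored by a priority-weighted request frequency per unit size, and the lowest-scoring item is evicted. The scoring function here is a generic assumption for illustration; the paper's exact strategy is not specified in the abstract.

```python
def evict_victim(cache):
    """cache: {content_id: (req_freq, user_priority, size)}.
    Score each item by priority-weighted request frequency per unit
    size and evict the lowest-scoring one (an assumed utility-style
    policy, not the paper's exact rule)."""
    def score(item):
        freq, prio, size = cache[item]
        return prio * freq / size
    return min(cache, key=score)

# "c" is requested often but by low-priority users, so it goes first:
victim = evict_victim({"a": (10, 1.0, 2.0),   # score 5.0
                       "b": (3, 1.0, 1.0),    # score 3.0
                       "c": (8, 0.2, 1.0)})   # score 1.6
```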
Funding: This work was supported by the National Natural Science Foundation of China (61871058, WYF, http://www.nsfc.gov.cn/).
Abstract: Emerging mobile edge networks with content caching capability allow end users to receive information directly from adjacent edge servers instead of a centralized data warehouse, so the network transmission delay and system throughput can be improved significantly. Since duplicate content transmissions between the edge network and the remote cloud can be reduced, an appropriate caching strategy can also substantially improve the system energy efficiency of mobile edge networks. This paper focuses on improving network energy efficiency and proposes an intelligent caching strategy based on a cached-content distribution model for mobile edge networks, built on a promising deep reinforcement learning algorithm. A deep neural network (DNN) and the Q-learning algorithm are combined into a deep reinforcement learning framework named the deep Q-network (DQN), in which the DNN approximates the action-state value function of the Q-learning solution. The parameter update strategy of the proposed DQN algorithm is improved through the stochastic gradient descent method, so the DQN algorithm converges to the optimal solution quickly and the network performance of the content caching policy can be optimized. Simulation results show that the proposed intelligent DQN-based content caching strategy, given enough training steps, significantly improves the energy efficiency of mobile edge networks.
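The core update of such a framework is the stochastic-gradient step on the temporal-difference (TD) error. As a minimal sketch, the DNN is replaced by a linear approximator Q(s, a) = w[a] · s (an assumption to keep the example self-contained); the TD target r + γ·max Q(s', ·) and the gradient step are the same as in a full DQN.

```python
def dot(u, v):
    """Plain dot product (stands in for a forward pass of the DNN)."""
    return sum(x * y for x, y in zip(u, v))

def dqn_sgd_step(w, s, a, r, s_next, actions, gamma=0.9, lr=0.01):
    """One stochastic-gradient update of an approximate action-value
    function with Q(s, a) = w[a] . s.  The TD target is
    r + gamma * max_a' Q(s', a'); the taken action's weights move
    along the gradient of the squared TD error."""
    target = r + gamma * max(dot(w[b], s_next) for b in actions)
    td_error = target - dot(w[a], s)
    w[a] = [wi + lr * td_error * si for wi, si in zip(w[a], s)]
    return td_error
```

Starting from zero weights, a reward of 1.0 produces a TD error of 1.0 and nudges the taken action's weight toward the observed state.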
Funding: This work was supported by the National Natural Science Foundation of China (61871058) and the Key Special Project in Intergovernmental International Scientific and Technological Innovation Cooperation of the National Key Research and Development Program (2017YFE0118600).
Abstract: The rapid growth of Internet content, applications, and services requires more computing and storage capacity and higher bandwidth. Traditionally, Internet services are provided from the cloud (i.e., from far away) and consumed on increasingly smart devices; edge computing and caching provide these services from nearby smart devices. Blending both approaches should combine the power of cloud services with the responsiveness of edge networks. This paper investigates how to intelligently use the caching and computing capabilities of edge nodes/cloudlets through artificial-intelligence-based policies. We first analyze the scenarios of mobile edge networks with edge computing and caching abilities, then design a virtualized edge network paradigm that includes an efficient way of isolating traffic flows in the physical network layer. We develop caching and communication resource virtualization in the virtual layer and formulate the dynamic resource allocation problem as a reinforcement learning model; with the proposed self-adaptive and self-learning management, more flexible, better-performing, and more secure network services can be obtained at lower cost. Simulation results and analyses show that addressing cached contents at the proper edge nodes through a trained model is more efficient than requesting them from the cloud.
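The final claim, routing requests to the proper edge node via a trained model rather than always going to the cloud, can be sketched as a lookup step: among the edge nodes that hold the content, pick the one with the best learned score, and fall back to the cloud otherwise. The score semantics and the fallback cost are assumptions for illustration.

```python
def route_request(content, node_scores, cached, cloud_cost):
    """Serve a request from the best edge node that caches the content,
    ranked by a learned per-node score (higher = better, e.g. a trained
    model's predicted quality); fall back to the cloud when no edge
    node holds it.  Scores and cloud_cost are illustrative."""
    candidates = {n: s for n, s in node_scores.items() if content in cached[n]}
    if not candidates:
        return ("cloud", cloud_cost)
    best = max(candidates, key=candidates.get)
    return (best, candidates[best])
```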
Funding: supported in part by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project Reference No. UGC/FDS16/E09/21); in part by the Hong Kong President's Advisory Committee on Research and Development (PACRD) under Project No. 2020/1.6; in part by the National Natural Science Foundation of China (NSFC) under Grants No. 61971239 and No. 92067201; in part by the Jiangsu Provincial Key Research and Development Program under Grant No. BE2020084-4; and in part by the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant KYCX200714.
Abstract: To accommodate the tremendous increase in mobile data traffic, cache-enabled device-to-device (D2D) communication has been taken as a promising technique to relieve the heavy burden on cellular networks, since popular contents can be pre-fetched at user devices and shared among subscribers. As a result, cellular traffic can be offloaded and enhanced system performance becomes attainable. However, due to the limited cache capacity of mobile devices and the heterogeneous preferences among different users, the requested contents are most likely not proactively cached, inducing a low cache hit ratio. A recommendation system, on the other hand, is able to reshape users' request patterns, mitigating the heterogeneity to some extent, and hence it can boost the gain of edge caching. In this paper, the cost minimization problem for social-aware cache-enabled D2D networks with recommendation is investigated, taking into account the constraints on the cache capacity budget and the total number of recommended files per user, where contents are shared between users that trust each other. The minimization problem is an integer non-convex and non-linear program, which is in general NP-hard. We therefore propose a time-efficient joint recommendation and caching decision scheme. Extensive simulation results show that the proposed scheme converges quickly and significantly reduces the average cost compared with various benchmark strategies.
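The interplay between the two constrained decisions (what to cache under a capacity budget, what to recommend under a per-user budget) can be sketched with a simple greedy heuristic: cache the files with the largest aggregate preference, then recommend to each user its most-preferred files among those cached, which steers requests toward cached content. This is an assumed illustrative heuristic, not the paper's scheme, and it omits the social-trust weighting.

```python
def joint_cache_and_recommend(pref, capacity, rec_budget):
    """pref[u][f]: user u's preference for file f.
    Greedy sketch: cache the `capacity` files with the largest
    aggregate preference, then recommend each user its `rec_budget`
    most-preferred files among the cached ones."""
    files = {f for p in pref.values() for f in p}
    agg = {f: sum(p.get(f, 0) for p in pref.values()) for f in files}
    cached = sorted(agg, key=agg.get, reverse=True)[:capacity]
    recs = {u: sorted((f for f in cached if f in p),
                      key=p.get, reverse=True)[:rec_budget]
            for u, p in pref.items()}
    return cached, recs
```

With two users and a cache of two files, the globally popular files are cached, and each user is steered to its best cached file.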
Funding: supported by the National Key Research and Development Program of China (2018YFB0203901), the National Natural Science Foundation of China (Grant No. 61772053), the Hebei Youth Talents Support Project (BJ2019008), and the Natural Science Foundation of Hebei Province (F2020204003).
Abstract: The authors of this paper have previously proposed the global virtual data space system (GVDS) to aggregate the scattered and autonomous storage resources in China's national supercomputer grid (the National Supercomputing Centers in Guangzhou, Jinan, and Changsha, the Shanghai Supercomputing Center, and the Computer Network Information Center of the Chinese Academy of Sciences) into a storage system that spans the wide area network (WAN) and realizes the unified management of global storage resources in China. At present, the GVDS has been successfully deployed in the China National Grid environment. However, when accessing and sharing remote data over the WAN, the GVDS causes redundant data transmissions and wastes a great deal of network bandwidth. In this paper, we propose an edge cache system as a supplement to the GVDS to improve the performance of upper-level applications accessing and sharing remote data. Specifically, we first design the architecture of the edge cache system and then study the key technologies of this architecture: an edge cache index mechanism based on double-layer hashing, an edge cache replacement strategy based on the GDSF algorithm, request routing based on consistent hashing, and a cluster membership maintenance method based on the SWIM protocol. The experimental results show that the edge cache system successfully implements the relevant operations (read, write, delete, modify, etc.) and is compatible with the POSIX interface in terms of functionality. Moreover, it greatly reduces the amount of data transmitted and increases the data access bandwidth when the accessed file is located in the edge cache, i.e., its performance approaches that of a network file system in a local area network (LAN).
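The consistent-hashing request-routing layer named above can be sketched as a hash ring with virtual replicas: both nodes and request keys are hashed onto the ring, and a request is routed to the first node clockwise from its key, so adding or removing a node remaps only a small fraction of keys. The replica count and hash choice below are assumptions, not the GVDS implementation.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent-hashing ring for routing cache requests to nodes,
    with virtual replicas for load balance (replica count and MD5
    hashing are illustrative choices, not the GVDS implementation)."""

    def __init__(self, nodes, replicas=100):
        # Each node appears `replicas` times on the ring under
        # distinct virtual identities "node#i".
        self.ring = sorted((self._hash(f"{n}#{i}"), n)
                           for n in nodes for i in range(replicas))
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def route(self, request_key):
        # First ring position clockwise from the key's hash (wrap around).
        idx = bisect.bisect(self.keys, self._hash(request_key)) % len(self.keys)
        return self.ring[idx][1]
```

Routing is deterministic: the same key always maps to the same node for a fixed node set, which is what lets independent edge cache clients agree on placement without coordination.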