Abstract: Mobile Edge Computing (MEC) is a promising technology that provides on-demand computing and efficient storage services as close to end users as possible. In an MEC environment, servers are deployed close to mobile terminals to exploit storage infrastructure, improve content delivery efficiency, and enhance user experience. However, due to the limited capacity of edge servers, it remains a significant challenge to meet users' changing, time-varying, and personalized demands for highly diversified content. Recently, techniques for caching content at the edge have become popular for addressing these challenges: edge caching can fill the communication gap between users and content providers while relieving pressure on remote cloud servers. However, existing static caching strategies are still inefficient in handling the dynamics of time-varying content popularity and in meeting users' demands for highly diversified entity data. To address this challenge, we introduce PRIME, a novel method for content caching over MEC. It synthesizes a content popularity prediction model, which takes users' stay time and request traces as inputs, with a deep reinforcement learning model that yields dynamic caching schedules. Experimental results demonstrate that PRIME, when tested on the MovieLens 1M dataset for user request patterns and the Shanghai Telecom dataset for user mobility, outperforms its peers in terms of cache hit rate, transmission latency, and system cost.
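The popularity-driven caching idea behind such schemes can be sketched in a few lines. The following is a minimal, hypothetical illustration (an exponentially decayed request counter plus lowest-score eviction), not the actual PRIME prediction or reinforcement learning model:

```python
from collections import defaultdict

class PopularityCache:
    """Toy popularity-driven cache: scores items by an exponentially
    decayed request count and evicts the lowest-scoring item when full.
    Illustrative sketch only; PRIME itself combines a learned popularity
    predictor with deep reinforcement learning."""

    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity
        self.decay = decay              # weight given to historical popularity
        self.score = defaultdict(float)
        self.cache = set()

    def request(self, item):
        # Decayed counter approximates time-varying popularity.
        self.score[item] = self.decay * self.score[item] + 1.0
        if item in self.cache:
            return True                 # cache hit
        if len(self.cache) >= self.capacity:
            victim = min(self.cache, key=lambda x: self.score[x])
            if self.score[victim] >= self.score[item]:
                return False            # newcomer not popular enough to cache
            self.cache.discard(victim)
        self.cache.add(item)
        return False                    # cache miss (now cached)

cache = PopularityCache(capacity=2)
hits = sum(cache.request(x) for x in ["a", "b", "a", "c", "a", "b"])  # 3 of 6 hit
```

Because the score decays, an item whose popularity fades is eventually displaced by newly popular content, which is the dynamic behavior static caching strategies lack.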
Abstract: The Web offers a very convenient way to access remote information resources, and an important measure of Web service quality is how long it takes to search for and retrieve information. By caching a Web server's dynamic content, repeated database queries can be avoided and the access frequency of the original resources reduced, thus improving the server's response speed. This paper describes the concept, advantages, principles, and concrete realization procedure of a dynamic content cache module for a Web server.
Keywords: dynamic content caching; network acceleration; Apache module
CLC number: TP 393.09
Foundation item: Supported by the Science Committee of Wuhan
Biography: LIU Dan (1980-), male, Master candidate; research directions: high-speed computer networks, high-performance server cluster systems.
Funding: Supported by the National Natural Science Foundation of China (62231020, 62101401) and the Youth Innovation Team of Shaanxi Universities.
Abstract: The growing demand for low-delay vehicular content has put tremendous strain on the backbone network. As a promising alternative, cooperative content caching among different cache nodes can reduce content access delay. However, heterogeneous cache nodes have different communication modes and limited caching capacities. In addition, the high mobility of vehicles makes the caching environment more complicated. Therefore, performing efficient cooperative caching becomes a key issue. In this paper, we propose a cross-tier cooperative caching architecture for all contents, which allows distributed cache nodes to cooperate. Then, we devise the communication link and content caching models to facilitate timely content delivery. Aiming at minimizing transmission delay and cache cost, an optimization problem is formulated. Furthermore, we use a multi-agent deep reinforcement learning (MADRL) approach to model the decision-making process for caching among heterogeneous cache nodes, where each agent interacts with the environment collectively, receives its own observations but a common reward, and learns its own optimal policy. Extensive simulations validate that the MADRL approach can enhance the hit ratio while reducing transmission delay and cache cost.
Abstract: The evolution of smart mobile devices has significantly changed the way we generate and share content and has introduced a huge volume of Internet traffic. To address this issue and take advantage of the short-range communication capabilities of smart mobile devices, decentralized content sharing has emerged as a suitable and promising alternative. Decentralized content sharing uses a peer-to-peer network among co-located smart mobile device users to fulfil content requests. Several articles have been published to date that address its different aspects, including group management, interest extraction, message forwarding, participation incentives, and content replication. This survey paper summarizes and critically analyzes recent advancements in decentralized content sharing and highlights potential research issues that need further consideration.
Funding: Supported in part by the National Science Foundation of China (NSFC) under Grants 61631005, 61771065, and 61901048; in part by the Zhijiang Laboratory Open Project Fund 2020LCOAB01; and in part by the Beijing Municipal Science and Technology Commission Research under Project Z181100003218015.
Abstract: The efficient integration of satellite and terrestrial networks has become an important component of 6G wireless architectures, providing highly reliable and secure connectivity over a wide geographical area. As satellite and cellular networks have developed separately in recent years, the integrated network should synergize the communication, storage, and computation capabilities of both sides towards an intelligent system, rather than merely considering their coexistence. This has motivated us to develop double-edge intelligent integrated satellite and terrestrial networks (DILIGENT). Leveraging the rapid development of multi-access edge computing (MEC) technology and artificial intelligence (AI), the framework is endowed with systematic learning and adaptive network management of satellite and cellular networks. In this article, we provide a brief review of state-of-the-art contributions from the perspectives of academic research and standardization. Then we present the overall design of the proposed DILIGENT architecture, whose advantages are discussed and summarized. Strategies for task offloading, content caching, and content distribution are presented. Numerical results show that the proposed network architecture outperforms existing integrated networks.
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 61361166005), the State Major Science and Technology Special Projects (Grant No. 2016ZX03001020006), and the National Program for Support of Top-notch Young Professionals.
Abstract: In order to alleviate capacity constraints on the fronthaul and decrease transmission latency, a hierarchical content caching paradigm is applied in fog radio access networks (F-RANs). In particular, a specific cluster of remote radio heads is formed through a common centralized cloud at the baseband unit pool, while local content is delivered directly at fog access points equipped with edge caches and distributed radio signal processing capability. Focusing on a downlink F-RAN, explicit expressions of the ergodic rate for the hierarchical paradigm are derived. Meanwhile, both the waiting delay and the latency ratio for users requesting a single content object are analyzed. According to the evaluation of the ergodic rate's effect on waiting delay, transmission latency can be effectively reduced by improving the capacity of both the fronthaul and the radio access links. Moreover, to fully explore the potential of hierarchical content caching, the transmission latency for users requesting multiple content objects is also optimized, in three content transmission cases with different radio access links. Simulation results verify the accuracy of the analysis and further show that latency decreases significantly under the hierarchical paradigm.
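The latency benefit of serving requests at the edge can be illustrated with a simple expected-delay model. This is a hedged sketch with made-up delay values, not the paper's ergodic-rate-based analysis:

```python
def expected_delay(p_edge_hit, d_edge, d_fronthaul_cloud):
    """Expected per-request delay in a two-tier cache hierarchy:
    requests served at the fog access point see only the edge delay,
    while misses traverse the fronthaul to the centralized cloud.
    A toy weighted-average model; all numbers below are hypothetical."""
    return p_edge_hit * d_edge + (1.0 - p_edge_hit) * d_fronthaul_cloud

# Raising the edge hit probability from 0.2 to 0.6 cuts the average delay:
d_low  = expected_delay(0.2, d_edge=5.0, d_fronthaul_cloud=50.0)   # 41.0 ms
d_high = expected_delay(0.6, d_edge=5.0, d_fronthaul_cloud=50.0)   # 23.0 ms
```

The same structure explains why both a higher edge hit ratio and a faster fronthaul reduce overall latency, as the abstract's evaluation indicates.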
Funding: Supported in part by the US NSF under Grants CNS 1650831 and HRD 1828811, and by the U.S. Department of Homeland Security under Grant DHS 2017-ST-062-000003.
Abstract: The demand for digital media services is increasing as the number of wireless subscriptions grows exponentially. To meet this growing need, mobile wireless networks have advanced at a tremendous pace in recent years. However, the centralized architecture of existing mobile networks, with the limited capacity and bandwidth of the radio access network and a low-bandwidth backhaul network, cannot handle the exponentially increasing mobile traffic. Recently, we have seen the growth of new mechanisms for data caching and delivery through intermediate caching servers. In this paper, we present a survey of recent advances in mobile edge computing and content caching, including caching insertion and expulsion policies, the behavior of the caching system, and caching optimization in wireless networks. Some of the important open challenges in mobile edge computing with content caching are identified and discussed. We also compare edge, fog, and cloud computing in terms of delay. Readers of this paper will gain a thorough understanding of recent advances in mobile edge computing and content caching in mobile wireless networks.
Funding: Supported by the Beijing Natural Science Foundation (Grant L182039) and the National Natural Science Foundation of China (Grant 61971061).
Abstract: Although content caching and recommendation are two complementary approaches to improving the user experience, it is still challenging to provide an integrated paradigm that fully explores their potential, due to the high complexity and the complicated tradeoff relationship between them. To provide an efficient management framework, the joint design of content delivery and recommendation in wireless content caching networks is studied in this paper. First, a joint transmission scheme for content objects and recommendation lists is designed with edge caching, and an optimization problem is formulated to balance the utility and cost of content caching and recommendation; this is a mixed-integer nonlinear programming problem. Second, a reinforcement learning based algorithm is proposed to implement real-time management of content caching, recommendation, and delivery, which can approach the optimal solution without iterations during each decision epoch. Finally, simulation results are provided to evaluate the performance of the proposed scheme, showing that it achieves lower cost than existing content caching and recommendation schemes.
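A simple greedy baseline conveys the flavor of the joint decision: choose what to cache under a budget, then recommend from the cached set. This is a hypothetical heuristic for illustration, not the paper's reinforcement learning controller, and all item names and values are made up:

```python
def greedy_cache_and_recommend(items, cache_budget, rec_slots):
    """Greedy heuristic for the joint caching + recommendation problem:
    cache the items with the best utility-to-cost ratio, then recommend
    the cached items with the highest predicted acceptance.
    `items` maps name -> (utility, cost, acceptance); all hypothetical."""
    ranked = sorted(items, key=lambda k: items[k][0] / items[k][1], reverse=True)
    cached, spent = [], 0.0
    for k in ranked:
        if spent + items[k][1] <= cache_budget:   # respect cache capacity
            cached.append(k)
            spent += items[k][1]
    # Recommending only cached items steers requests toward cache hits.
    recommended = sorted(cached, key=lambda k: items[k][2], reverse=True)[:rec_slots]
    return cached, recommended

items = {"news": (9.0, 3.0, 0.7), "clip": (8.0, 2.0, 0.9), "film": (6.0, 6.0, 0.4)}
cached, recs = greedy_cache_and_recommend(items, cache_budget=5.0, rec_slots=1)
```

Such a two-stage greedy scheme ignores the coupling between the two decisions, which is exactly the gap a learned joint policy is meant to close.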
Funding: Supported by the National Natural Science Foundation of China (90204008).
Abstract: The Web cluster has become a popular solution for network server systems because of its scalability and cost effectiveness, and caches configured in the servers can significantly increase performance. In this paper, we discuss suitable configuration strategies for caching dynamic content, based on our experimental results. Considering that the system itself already supports caching of static Web pages, for example through the computer's memory cache and the disk's own cache, we adopt a special pattern that caches only dynamic Web pages in some experiments in order to enlarge the cache space. The paper introduces three different replacement algorithms in our cache proxy module to test the practical effects of caching dynamic pages under different conditions. It chiefly analyzes the influence of generation time and access frequency on caching dynamic Web pages, and provides detailed experimental results and the main conclusions.
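Replacement policies that weigh both generation time and access frequency often use a Greedy-Dual-Size-Frequency-style priority. The weighting below is a hypothetical illustration of that family, not one of the exact replacement algorithms tested in the paper:

```python
def gdsf_priority(frequency, gen_time, size, clock=0.0):
    """GDSF-style priority for a dynamically generated page: pages that
    are requested often, are expensive to regenerate, and are small are
    worth keeping; the lowest-priority page is evicted first. The exact
    formula here is an illustrative assumption."""
    return clock + frequency * gen_time / size

# A page that is costly to regenerate can outrank a more popular one:
p_popular = gdsf_priority(frequency=10, gen_time=2.0, size=4.0)    # 5.0
p_costly  = gdsf_priority(frequency=3,  gen_time=40.0, size=8.0)   # 15.0
```

Pure LRU or LFU would keep the popular page; a cost-aware priority keeps the page whose regeneration (e.g., a database query) is expensive, which is why generation time matters for dynamic content.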
Funding: Supported in part by the MOE ARF Tier 2 under Grant MOE2015-T2-2-104, the Singapore University of Technology and Design-Zhejiang University (SUTD-ZJU) Research Collaboration under Grant SUTD-ZJU/RES/01/2016, and the SUTD-ZJU Research Collaboration under Grant SUTD-ZJU/RES/05/2016.
Abstract: Recommendation-aware Content Caching (RCC) at the edge enables a significant reduction of network latency and backhaul load, thereby invigorating ubiquitous latency-sensitive innovative services. However, the effectiveness of RCC strategies depends heavily on explicit information about subscribers' content request patterns, a sophisticated cache placement policy, and personalized recommendation tactics. In this article, we investigate how the potential of Artificial Intelligence (AI) and optimization techniques can be harnessed to address these core issues and facilitate the full implementation of RCC for the upcoming intelligent 6G era. Towards this end, we first elaborate on the hierarchical RCC network architecture. Then, the devised AI- and optimization-empowered paradigm is introduced, in which AI and optimization techniques are leveraged, respectively, to predict users' content preferences in real time with the assistance of their historical behavior data, and to determine the cache pushing and recommendation decisions. Through extensive case studies, we validate the effectiveness of AI-based predictors in estimating users' content preferences and the superiority of optimized RCC policies over conventional benchmarks. Finally, we shed light on future opportunities and challenges.
Abstract: As the Internet and the World Wide Web grow at a fast pace, it is essential that the Web's performance keep up with increased demand and expectations. Web caching technology has been widely accepted as one of the effective approaches to alleviating Web traffic and increasing the Web Quality of Service (QoS). This paper provides an up-to-date survey of the rapidly expanding Web caching literature. It discusses state-of-the-art Web caching schemes and techniques, with emphasis on recent developments such as differentiated Web services, heterogeneous caching network structures, and dynamic content caching.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 61572104 and 61402076), the Startup Fund for the Doctoral Program of Liaoning Province (No. 20141023), and the Fundamental Research Funds for the Central Universities (Nos. DUT15RC(3)088, DUT15QY26, and DUT14QY06).
Abstract: For desirable quality of service, content providers aim to cover content requests with large network caches, and content caching has been considered a fundamental module in network architectures. However, few studies exist on the optimization of content caching: most existing works focus on the design of content measurement, where cached content is replaced by new content according to the given metric, which degrades performance for service provision with multiple levels. This paper investigates the problem of finding the optimal timer for each content object. Given the timers, the caching policy determines whether to cache a content object and which existing object should be replaced when a cache miss occurs. Aiming to maximize the aggregate utility under a capacity constraint, the problem is formalized as an integer optimization problem. A linear programming based approximation algorithm is proposed, and its approximation ratio is proved. Furthermore, the problem of content caching with relaxed constraints is considered, and a Lagrange multiplier based approximation algorithm with polynomial time complexity is proposed. Experimental results show that the proposed algorithms achieve better performance.
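The timer-driven caching policy described above can be sketched with a per-item TTL cache. This is a minimal illustration of how per-content timers govern admission and replacement, using a logical clock for determinism; it is not the paper's optimized-timer algorithm, and the timer values are hypothetical:

```python
class TimerCache:
    """Toy per-content timer (TTL) cache: each item carries its own
    timer; a request within the timer is a hit, and on a miss the item
    is admitted, evicting the entry that expires soonest if the cache
    is full. Sketch only; the paper optimizes the timers themselves."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.expiry = {}            # item -> logical expiry time
        self.now = 0

    def tick(self, dt=1):
        self.now += dt              # advance the logical clock

    def get(self, item, ttl):
        if item in self.expiry and self.expiry[item] > self.now:
            return True             # hit: content still within its timer
        # Miss: drop the stale entry, evict the soonest-expiring if full.
        self.expiry.pop(item, None)
        if len(self.expiry) >= self.capacity:
            victim = min(self.expiry, key=self.expiry.get)
            del self.expiry[victim]
        self.expiry[item] = self.now + ttl
        return False

c = TimerCache(capacity=2)
r1 = c.get("a", ttl=3)   # miss: first request, "a" is admitted
r2 = c.get("a", ttl=3)   # hit: timer has not expired
c.tick(4)
r3 = c.get("a", ttl=3)   # miss: timer expired, "a" is re-admitted
```

Choosing each item's TTL to maximize aggregate utility under the capacity constraint is precisely the integer optimization problem the paper's approximation algorithms target.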