Evidence indicates that, due to limited caching capacity or inaccurate estimation of users' preferences, requested files may not be fully cached at the network edge. Transmitting the uncached files then leads to duplicated transmissions over backhaul channels. Buffer-aided relaying has been proposed to improve the transmission performance of uncached files. Because of the limited buffer capacity and the information-asymmetric environment, how to allocate the limited buffer capacity and how to incentivize users to participate in buffer-aided relaying have become critical issues. In this work, an incentive scheme based on contract theory is proposed. Specifically, the backlog violation probability, i.e., the buffer overflow probability, is derived based on martingale theory. Next, based on the backlog violation probability, the utility functions of the relay node and the users are constructed. To maximize the utility of the relay node, the optimal contract problem is formulated. The feasibility of the contract is then demonstrated, and the optimal solution can be obtained by the interior point method. Finally, numerical results are presented to demonstrate the effectiveness of the proposed contract-theoretic scheme.
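The paper's martingale analysis yields the backlog (buffer overflow) violation probability; the exact derivation is in the paper, but the flavor of such bounds can be sketched under simplifying assumptions. Assuming i.i.d. Bernoulli(p) packet arrivals and a constant per-slot service rate c (both hypothetical choices, not from the paper), the Chernoff/martingale argument gives P(backlog > b) <= exp(-theta* b), where theta* > 0 is the unique positive root of log E[exp(theta (A - c))] = 0:

```python
import math

def decay_rate(p, c, lo=1e-6, hi=50.0, iters=100):
    """Find theta* > 0 solving log E[exp(theta*(A - c))] = 0 for
    Bernoulli(p) arrivals and constant per-slot service c, with p < c < 1.
    f(theta) = log((1-p) + p*exp(theta)) - theta*c is convex with f(0) = 0
    and f'(0) = p - c < 0, so a unique positive root exists; bisection
    starts from f(lo) < 0 and f(hi) > 0."""
    def f(theta):
        return math.log((1 - p) + p * math.exp(theta)) - theta * c
    while f(hi) <= 0:          # grow the bracket until it straddles the root
        hi *= 2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def overflow_bound(p, c, b):
    """Martingale (Chernoff-type) bound: P(backlog > b) <= exp(-theta* b)."""
    return math.exp(-decay_rate(p, c) * b)
```

For p = 0.3 and c = 0.5 the root has the closed form theta* = 2 ln(7/3), which the bisection recovers; the bound decays exponentially in the buffer threshold b, which is what makes it usable inside the utility functions of the relay node and the users.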
Mobile Edge Computing (MEC) is a promising technology that provides on-demand computing and efficient storage services as close to end users as possible. In an MEC environment, servers are deployed close to mobile terminals to exploit storage infrastructure, improve content delivery efficiency, and enhance user experience. However, due to the limited capacity of edge servers, it remains a significant challenge to meet users' time-varying and customized demands for highly diversified content. Recently, caching content at the edge has become a popular approach to these challenges: it fills the communication gap between users and content providers while relieving pressure on remote cloud servers. However, existing static caching strategies remain inefficient at handling the time-varying popularity of content and at meeting users' demands for highly diversified data. To address this challenge, we introduce PRIME, a novel method for content caching over MEC. It combines a content popularity prediction model, which takes users' stay time and request traces as inputs, with a deep reinforcement learning model that yields dynamic caching schedules. Experimental results demonstrate that PRIME, when tested on the MovieLens 1M dataset for user request patterns and the Shanghai Telecom dataset for user mobility, outperforms its peers in terms of cache hit rate, transmission latency, and system cost.
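PRIME's prediction and deep-reinforcement-learning components are beyond an abstract-sized sketch, but the gap it targets, namely static caches falling behind drifting content popularity, can be illustrated with a toy simulation (all parameters and the drift model are hypothetical, chosen only for illustration). Requests follow a Zipf distribution whose ranking is periodically reshuffled; a static cache frozen at t = 0 is compared against a dynamic cache that keeps the top-k items of a sliding window of recent requests:

```python
import random
from collections import Counter, deque

def zipf_request(ranking, s=1.0, rng=random):
    """Draw one item: the i-th entry of `ranking` has Zipf weight 1/(i+1)^s."""
    weights = [1.0 / (i + 1) ** s for i in range(len(ranking))]
    return rng.choices(ranking, weights=weights, k=1)[0]

def simulate(n_items=200, cache_size=20, horizon=10000,
             window=400, drift_every=1000, seed=7):
    """Return (static, dynamic) cache hit rates under drifting Zipf popularity."""
    rng = random.Random(seed)
    ranking = list(range(n_items))            # popularity order, drifts over time
    static_cache = set(ranking[:cache_size])  # frozen snapshot of the initial top-k
    recent = deque(maxlen=window)             # sliding window of recent requests
    hits_static = hits_dynamic = 0
    for t in range(horizon):
        if t > 0 and t % drift_every == 0:
            rng.shuffle(ranking)              # popularity drift: re-rank the catalogue
        # dynamic cache: top-k items by windowed request count
        dyn_cache = {x for x, _ in Counter(recent).most_common(cache_size)}
        req = zipf_request(ranking, rng=rng)
        hits_static += req in static_cache
        hits_dynamic += req in dyn_cache
        recent.append(req)
    return hits_static / horizon, hits_dynamic / horizon
```

Once popularity drifts, the static cache's hit rate collapses toward that of a random item subset, while the windowed cache re-adapts after each drift; PRIME replaces this naive windowed heuristic with learned popularity prediction and a DRL-scheduled cache.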
Funding: the National Natural Science Foundation of China (No. 61702258), the Key Projects of Natural Science Research in Colleges and Universities of Jiangsu Province (No. 19KJA410001), and the Foundation of Jiangsu Advanced Numerical Control Technology Key Laboratory (No. SYKJ201901).