Abstract: Key challenges for 5G and Beyond networks relate to the requirements for exceptionally low latency, high reliability, and extremely high data rates. The Ultra-Reliable Low Latency Communication (URLLC) use case is the most difficult to support: current research focuses on physical- or MAC-layer solutions, while proposals targeting the network layer with Machine Learning (ML) and Artificial Intelligence (AI) algorithms running on base stations and User Equipment (UE) or Internet of Things (IoT) devices are still at an early stage. In this paper, we describe the operating rationale of the most recent relevant ML algorithms and techniques, and we propose and validate ML algorithms running on both cells (base stations/gNBs) and UEs or IoT devices to handle URLLC service control. One ML algorithm runs on base stations to evaluate latency demands and offload traffic when needed, while another, lightweight algorithm runs on UEs and IoT devices to rank cells by URLLC service quality in real time and indicate the best cell for a UE or IoT device to camp on. We show that the interplay of these algorithms leads to good service control and, under slow load mobility, eventually to optimal load allocation.
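The abstract only describes the UE-side ranking at a high level, so the following is a minimal, hypothetical Python sketch of what such a lightweight cell-ranking step could look like. The state kept per cell, the scoring rule, and all names and parameters (CellStats, rank_cells, alpha, deadline_ms) are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a UE-side URLLC cell ranker. The paper does not
# publish its exact algorithm; the EWMA-based score below is an assumption
# chosen only to illustrate a "lightweight" per-cell ranking.
from dataclasses import dataclass


@dataclass
class CellStats:
    """Running URLLC statistics a UE might keep per candidate cell."""
    ewma_latency_ms: float = 1.0  # smoothed observed packet latency
    success_rate: float = 1.0     # fraction of packets meeting the deadline
    samples: int = 0

    def update(self, latency_ms: float, ok: bool, alpha: float = 0.2) -> None:
        # Exponentially weighted averages keep the UE-side state tiny,
        # matching the abstract's emphasis on a lightweight algorithm.
        self.ewma_latency_ms = (1 - alpha) * self.ewma_latency_ms + alpha * latency_ms
        self.success_rate = (1 - alpha) * self.success_rate + alpha * (1.0 if ok else 0.0)
        self.samples += 1


def rank_cells(cells: dict[str, CellStats], deadline_ms: float) -> list[str]:
    """Rank candidate cells; the best cell to camp on comes first.

    The score rewards reliability and penalises latency in excess of the
    URLLC deadline -- a stand-in for the paper's real-time ranking.
    """
    def score(s: CellStats) -> float:
        return s.success_rate - max(0.0, s.ewma_latency_ms / deadline_ms - 1.0)

    return sorted(cells, key=lambda cid: score(cells[cid]), reverse=True)


# Toy usage: "gNB-2" meets a 1 ms deadline consistently, so it ranks first.
cells = {"gNB-1": CellStats(), "gNB-2": CellStats()}
for latency, ok in [(1.8, False), (1.5, False), (1.9, False)]:
    cells["gNB-1"].update(latency, ok)
for latency, ok in [(0.6, True), (0.8, True), (0.7, True)]:
    cells["gNB-2"].update(latency, ok)
print(rank_cells(cells, deadline_ms=1.0))  # ['gNB-2', 'gNB-1']
```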
Abstract: This work studies how variation in Network-on-Chip (NoC) latency among memory requests affects the fairness of the memory system in chip multiprocessors (Chip Multiple Processors, CMPs) built on an NoC architecture. The problem is analyzed theoretically and abstracted, an experimental model is constructed, and the causes of NoC latency variation among memory requests are discussed from four aspects, including network scale and packet ratio. Finally, a memory-access scheduling method based on NoC latency (Scheduling Based on NoC Latency, SBNL) is proposed; compared with conventional methods, it reduces the impact of NoC latency variation on the fairness of memory requests by about 20% and delivers a 15.7% improvement in execution efficiency.
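To make the SBNL idea concrete, here is a minimal, hypothetical Python sketch of a memory controller that orders pending requests by their NoC traversal latency instead of arrival order, so cores far from the controller are not systematically penalised. The data structures, the simple "largest NoC delay first" policy, and all names (MemRequest, SBNLScheduler) are illustrative assumptions; the paper's actual scheduler is not shown in the abstract.

```python
# Hypothetical sketch of NoC-latency-aware memory scheduling in the spirit
# of SBNL. The policy here (serve the request with the largest measured NoC
# delay first) is an assumption for illustration only.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class MemRequest:
    # heapq pops the smallest item, so the NoC latency is stored negated to
    # obtain "largest network delay first" ordering.
    neg_noc_latency: float
    core_id: int = field(compare=False)
    addr: int = field(compare=False)


class SBNLScheduler:
    """Toy memory-access scheduler ordering requests by NoC latency."""

    def __init__(self) -> None:
        self._queue: list[MemRequest] = []

    def enqueue(self, core_id: int, addr: int, noc_latency_cycles: float) -> None:
        heapq.heappush(self._queue, MemRequest(-noc_latency_cycles, core_id, addr))

    def next_request(self) -> MemRequest | None:
        return heapq.heappop(self._queue) if self._queue else None


# Toy usage: the far-corner core (high NoC delay) is served first, keeping
# its end-to-end memory latency comparable to that of cores near the
# controller -- the fairness effect the abstract attributes to SBNL.
sched = SBNLScheduler()
sched.enqueue(core_id=0, addr=0x1000, noc_latency_cycles=4)    # near the controller
sched.enqueue(core_id=15, addr=0x2000, noc_latency_cycles=22)  # far corner of the mesh
first = sched.next_request()
print(first.core_id)  # 15
```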