Journal Articles
3 articles found
1. Latency-Aware Dynamic Second Offloading Service in SDN-Based Fog Architecture
Authors: Samah Ibrahim AlShathri, Dina S.M. Hassan, Samia Allaoua Chelloug. Computers, Materials & Continua (SCIE, EI), 2023, No. 4, pp. 1501-1526 (26 pages).
Task offloading is a key strategy in Fog Computing (FC). The definition of resource-constrained devices no longer applies to sensors and Internet of Things (IoT) embedded system devices alone. Smart and mobile units can also be viewed as resource-constrained devices if the power, cloud applications, and data cloud are included in the set of required resources. In a cloud-fog-based architecture, a task instance running on an end device may need to be offloaded to a fog node to complete its execution. However, in a busy network, a second offloading decision is required when the fog node becomes overloaded. The possibility of offloading a task, for the second time, to a fog or a cloud node depends to a great extent on task importance, latency constraints, and required resources. This paper presents a dynamic service that determines which tasks can endure a second offloading. The task type, latency constraints, and amount of required resources are used to select the offloading destination node. This study proposes three heuristic offloading algorithms, each targeting a specific task type. An overloaded fog node can only issue one offloading request to execute one of these algorithms according to the task offloading priority. Offloading requests are sent to a Software Defined Networking (SDN) controller, and the fog node and controller together determine the number of offloaded tasks. Simulation results show that the average time required to select offloading nodes was improved by 33% compared to the dynamic fog-to-fog offloading algorithm. The distribution of workload converges to a uniform distribution when offloading latency-sensitive non-urgent tasks. The lowest offloading priority is assigned to latency-sensitive tasks with hard deadlines; at least 70% of these tasks are offloaded to fog nodes that are one to three hops away from the overloaded node. (An illustrative sketch of this destination-selection logic follows the keywords below.)
Keywords: fog computing; offloading algorithm; latency-aware; software defined networking (SDN)
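The listing does not reproduce the paper's three heuristics, so the sketch below is only a rough Python illustration of the kind of second-offloading destination selection the abstract describes: feasibility is checked against the task's resource demand, hard-deadline tasks are kept on nearby fog nodes, latency-sensitive non-urgent tasks are spread for load balance, and latency-tolerant tasks may be pushed to the cloud. Every name (Task, Node, select_second_offload_target), the 3-hop threshold, and the selection rules are assumptions for illustration, not the authors' algorithms.

```python
# Minimal illustrative sketch (not the paper's algorithms): pick a destination
# for a task that an overloaded fog node must offload a second time, using the
# task type, its latency constraint, and its resource demand.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Task:
    latency_sensitive: bool   # task has a latency constraint
    hard_deadline: bool       # latency-sensitive with a hard deadline
    required_cpu: float       # resources needed on the destination node

@dataclass
class Node:
    kind: str                 # "fog" or "cloud"
    free_cpu: float           # spare capacity known to the SDN controller
    hops: int                 # hop distance from the overloaded fog node

def select_second_offload_target(task: Task, candidates: List[Node]) -> Optional[Node]:
    feasible = [n for n in candidates if n.free_cpu >= task.required_cpu]
    if not feasible:
        return None               # no node can host the task; keep it queued
    if task.hard_deadline:
        # Hard-deadline tasks stay close: prefer fog nodes within a few hops.
        near_fog = [n for n in feasible if n.kind == "fog" and n.hops <= 3]
        return min(near_fog, key=lambda n: n.hops) if near_fog else None
    if task.latency_sensitive:
        # Latency-sensitive but non-urgent tasks: spread load across fog nodes.
        fog = [n for n in feasible if n.kind == "fog"]
        return max(fog, key=lambda n: n.free_cpu) if fog else None
    # Latency-tolerant tasks can be pushed all the way to the cloud.
    cloud = [n for n in feasible if n.kind == "cloud"]
    return cloud[0] if cloud else max(feasible, key=lambda n: n.free_cpu)
```

In the paper, the overloaded fog node issues a single offloading request to the SDN controller, which holds the view of candidate nodes; the sketch covers only the per-task selection step.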
2. An intelligent task offloading algorithm (iTOA) for UAV edge computing network (Cited 8 times)
Authors: Jienan Chen, Siyu Chen, Siyu Luo, Qi Wang, Bin Cao, Xiaoqian Li. Digital Communications and Networks (SCIE), 2020, No. 4, pp. 433-443 (11 pages).
The Unmanned Aerial Vehicle (UAV) has emerged as a promising technology for supporting human activities such as target tracking, disaster rescue, and surveillance. However, these tasks require a large computation load of image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, in this work, we propose an intelligent Task Offloading Algorithm (iTOA) for the UAV edge computing network. Compared with existing methods, iTOA is able to perceive the network's environment intelligently and to decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates offloading decision trajectories to acquire the best decision by maximizing the reward, such as the lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply the prior probability for MCTS. The sDNN is trained in a self-supervised manner, with the training data set obtained from iTOA itself as its own teacher. Compared with game theory and greedy search-based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively. (A toy, single-step sketch of MCTS-guided offloading follows the keywords below.)
Keywords: unmanned aerial vehicles (UAVs); mobile edge computing (MEC); intelligent task offloading algorithm (iTOA); Monte Carlo tree search (MCTS); deep reinforcement learning; splitting deep neural network (sDNN)
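As a rough illustration of MCTS-driven offloading, the sketch below collapses the search tree to a single decision depth (a bandit-style simplification) and replaces the paper's splitting DNN with a uniform stand-in prior. The action set, latency model, reward, and all identifiers are invented for the demo and are not taken from the paper.

```python
# Toy single-depth MCTS for one offloading decision (illustrative only).
import math
import random

# Candidate offloading actions for one UAV task; invented for this demo.
ACTIONS = ["local", "uav_edge_server", "ground_edge_server"]

def stand_in_prior(state):
    """Uniform prior over actions, standing in for the paper's sDNN output."""
    return {a: 1.0 / len(ACTIONS) for a in ACTIONS}

def simulated_latency_ms(state, action):
    """Toy environment: sample an execution latency for the chosen action."""
    base = {"local": 120.0, "uav_edge_server": 60.0, "ground_edge_server": 80.0}[action]
    return random.gauss(base * state["network_load"], 10.0)

def mcts_offload(state, n_sims=500, c_puct=1.5):
    """PUCT-style search over a single decision: return the most-visited action."""
    prior = stand_in_prior(state)
    visits = {a: 0 for a in ACTIONS}
    value = {a: 0.0 for a in ACTIONS}          # running mean reward per action
    for _ in range(n_sims):
        total = sum(visits.values())
        # PUCT score: exploit the mean reward, explore where the prior is high
        # and the action has few visits.
        a = max(ACTIONS, key=lambda x: value[x]
                + c_puct * prior[x] * math.sqrt(total + 1) / (1 + visits[x]))
        reward = -simulated_latency_ms(state, a)   # lower latency -> higher reward
        visits[a] += 1
        value[a] += (reward - value[a]) / visits[a]
    return max(ACTIONS, key=lambda a: visits[a])

print(mcts_offload({"network_load": 1.0}))       # typically prefers the UAV edge server
```

A full iTOA-style search would expand multi-step trajectories and feed real network observations to the sDNN; this sketch only shows how a prior can bias the simulation budget toward promising actions.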
3. Mobile Edge Communications, Computing, and Caching (MEC3) Technology in the Maritime Communication Network (Cited 18 times)
Authors: Jie Zeng, Jiaying Sun, Binwei Wu, Xin Su. China Communications (SCIE, CSCD), 2020, No. 5, pp. 223-234 (12 pages).
With increasing maritime activity and a rapidly developing maritime economy, the fifth-generation (5G) mobile communication system is expected to be deployed over the ocean. New technologies need to be explored to meet the requirements of ultra-reliable and low latency communications (URLLC) in the maritime communication network (MCN). Mobile edge computing (MEC) can achieve high energy efficiency in the MCN, at the cost of high control plane latency and low reliability. To address this issue, the mobile edge communications, computing, and caching (MEC3) technology is proposed to sink mobile computing, network control, and storage to the edge of the network. New methods that enable resource-efficient configurations and reduce redundant data transmissions can enable the reliable implementation of computation-intensive and latency-sensitive applications. The key technologies of MEC3 to enable URLLC are analyzed and optimized in the MCN. The best response-based offloading algorithm (BROA) is adopted to optimize task offloading. The simulation results show that task latency can be decreased by 26.5 ms, and the energy consumption of terminal users can be reduced to 66.6%. (A toy best-response iteration in the spirit of BROA follows the keywords below.)
Keywords: best response-based offloading algorithm (BROA); energy consumption; mobile edge computing (MEC); mobile edge communications, computing, and caching (MEC3); task offloading
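The abstract names BROA, a best-response procedure in the game-theoretic sense. The sketch below runs a toy best-response iteration over binary offloading decisions (0 = compute locally, 1 = offload to the MEC server) until no user wants to change its choice. The cost model, with a fixed local cost and an offloading cost that grows with edge congestion, is an assumption made for illustration and is not the paper's formulation.

```python
# Toy best-response iteration for binary offloading decisions (illustrative only).
def best_response_offloading(n_users, local_cost, offload_base_cost,
                             congestion_cost, max_rounds=100):
    decisions = [0] * n_users                    # start with everyone computing locally
    for _ in range(max_rounds):
        changed = False
        for i in range(n_users):
            others = sum(decisions) - decisions[i]
            # Offloading cost grows with how many other users share the edge server.
            cost_offload = offload_base_cost[i] + congestion_cost * others
            best = 1 if cost_offload < local_cost[i] else 0
            if best != decisions[i]:
                decisions[i] = best
                changed = True
        if not changed:                          # no user wants to deviate: equilibrium
            break
    return decisions

# Example: 4 users with heterogeneous local costs and identical offloading costs.
print(best_response_offloading(4, [10, 6, 12, 5], [4, 4, 4, 4], 2.0))
```

With the example costs, users 0 and 2 (the ones with the highest local costs) offload while the others stay local; this is the fixed point at which no single user benefits from changing its decision.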