Future components to enhance the basic, native security of 5G networks are either complex mechanisms whose impact on the demanding 5G communications is not considered, or lightweight solutions adapted to ultra-reliable low-latency communications (URLLC) but whose security properties remain under discussion. Although different 5G network slices may have different requirements, in general both visions seem to fall short at provisioning secure URLLC in the future. In this work we address this challenge by introducing cost-security functions as a method to evaluate the performance and adequacy of the most developed and employed non-native enhanced security mechanisms in 5G networks. We categorize those new security components into different groups according to their purpose and deployment scope, and propose to analyze them in the context of existing 5G architectures using two different approaches. First, using model-checking techniques, we evaluate the probability of an attacker being successful against each security solution. Second, using analytical models, we analyze the impact of these security mechanisms in terms of delay, throughput consumption, and reliability. Finally, we combine both approaches using stochastic cost-security functions and the PRISM model checker to create a global picture. Our results provide first evidence that a 5G network which covers and strengthens all security areas through enhanced, dedicated non-native mechanisms could only guarantee secure URLLC with a probability of ~55%.
By enabling IT and cloud computation capacities at the Radio Access Network (RAN), Mobile Edge Computing (MEC) makes it possible to deploy and provide services locally. MEC is therefore a promising technology to satisfy the requirements of 5G networks to a certain extent, thanks to its functions of service localization, local breakout, caching, computation offloading, network context information exposure, etc. In particular, MEC can decrease the end-to-end latency dramatically through service localization and caching, which is a key requirement of the 5G low-latency scenario. However, the performance of MEC still needs to be evaluated and verified before future deployment. Thus, in this paper the concept of MEC is first introduced into the 5G architecture and analyzed for different 5G scenarios. Second, the performance of MEC is evaluated and analyzed in detail, especially regarding network end-to-end latency. In addition, some challenges of MEC for future deployment are also discussed.
Latency-sensitive services have attracted much attention lately and imposed stringent requirements on access network design. Passive optical networks (PONs) provide a potential long-term solution for the underlying transport network supporting these services. This paper discusses latency limitations in PON and recent progress in PON standardization to improve latency. Experimental results of a low-latency PON system are presented as a proof of concept.
Many energy-efficient asynchronous duty-cycle MAC (media access control) protocols have been proposed in recent years. However, in these protocols, wireless sensor nodes mostly choose their wakeup time randomly during the operational cycle, which significantly increases packet delivery latency along multi-hop paths. To reduce packet delivery latency on multi-hop paths and the energy wasted by the sender's idle listening, a new low-latency routing-enhanced asynchronous duty-cycle MAC protocol, called REA-MAC, is presented. In REA-MAC, each sensor node decides when to wake up to send its beacon based on cross-layer routing information. Furthermore, the sender wakes up adaptively based on the relationship between the transmission request time and the wakeup time of its next-hop node. Simulation results show that REA-MAC reduces delivery latency by 60% compared to RI-MAC and reduces power consumption by 8.77% on average. Under heavy traffic, REA-MAC's throughput is 1.48 times that of RI-MAC.
Systolic implementation of multiplication over GF(2^m) is usually very efficient in area-time complexity, but its latency is usually very large. Thus, two low-latency systolic multipliers over GF(2^m), based on general irreducible polynomials and on irreducible pentanomials, are presented. First, a signal flow graph (SFG) is used to represent the algorithm for multiplication over GF(2^m). Then, the two low-latency systolic structures are derived from the SFG by suitable cut-set retiming. Analysis indicates that the two proposed low-latency designs involve at least one-third less area-delay product when compared with existing designs. To the authors' knowledge, the time complexity of these structures is the lowest found in the literature for systolic GF(2^m) multipliers based on general irreducible polynomials and pentanomials. The proposed low-latency designs are regular and modular, and are therefore suitable for many time-critical applications.
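The arithmetic such a systolic array implements can be sketched in software. Below is a minimal bit-level GF(2^m) multiplier for m = 8, assuming the widely used pentanomial x^8 + x^4 + x^3 + x + 1 (the paper treats general pentanomials; this particular choice and the function name are ours for illustration):

```python
M = 8
POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1, an irreducible pentanomial

def gf_mul(a: int, b: int) -> int:
    """Carry-free (XOR) multiply of two GF(2^8) elements,
    followed by reduction modulo the pentanomial."""
    result = 0
    for i in range(M):
        if (b >> i) & 1:
            result ^= a << i          # addition in GF(2) is XOR
    # Reduce the degree-(2m-2) product modulo the pentanomial
    for bit in range(2 * M - 2, M - 1, -1):
        if (result >> bit) & 1:
            result ^= POLY << (bit - M)
    return result
```

The systolic designs in the abstract map exactly this shift-XOR-reduce dataflow onto a pipelined cell array; the software loop is only a functional reference.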
Puncturing has been recognized as a promising technology to cope with the coexistence problem of enhanced mobile broadband (eMBB) and ultra-reliable low-latency communications (URLLC) traffic. However, maintaining steady performance of eMBB traffic while meeting the requirements of URLLC traffic under puncturing is a major challenge in some realistic scenarios. In this paper, we focus on timely and energy-efficient processing of eMBB traffic in the industrial Internet of Things (IIoT), where mobile edge computing (MEC) is employed for data processing. Specifically, the performance of eMBB traffic and URLLC traffic in a MEC-based IIoT system is ensured by setting thresholds on tolerable delay and outage probability, respectively. Furthermore, considering the limited energy supply, an energy minimization problem for the eMBB device is formulated under the above constraints, jointly optimizing the resource blocks (RBs) punctured by URLLC traffic, the data offloading, and the transmit power of the eMBB device. With Markov's inequality, the problem is reformulated by transforming the probabilistic outage constraint into a deterministic constraint. An iterative energy minimization algorithm (IEMA) is then proposed. Simulation results demonstrate that our algorithm achieves a significant reduction in the energy consumption of the eMBB device and a better overall effect compared to several benchmarks.
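The Markov-inequality step above can be made concrete: for a nonnegative delay D, Markov's inequality gives P(D ≥ d_max) ≤ E[D]/d_max, so the probabilistic outage constraint P(D ≥ d_max) ≤ ε is implied by the deterministic constraint E[D] ≤ ε·d_max. A quick numerical check of the bound (the exponential delay model here is our illustrative assumption, not the paper's):

```python
import random

def outage_and_markov_bound(samples, d_max):
    """Return the empirical outage probability P(D >= d_max) and the
    Markov upper bound E[D]/d_max for nonnegative delay samples."""
    mean = sum(samples) / len(samples)
    outage = sum(1 for d in samples if d >= d_max) / len(samples)
    return outage, mean / d_max

random.seed(0)
delays = [random.expovariate(1.0) for _ in range(100_000)]  # mean delay 1
outage, bound = outage_and_markov_bound(delays, d_max=5.0)
# Exact values: P(D >= 5) = e^-5 (about 0.0067); Markov bound = 1/5 = 0.2
```

The bound is loose, which is the price paid for turning a chance constraint into a deterministic one that the optimizer can handle.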
The rapid expansion of the Internet of Things (IoT) has driven the need for advanced computational frameworks capable of handling the complex data processing and security challenges that modern IoT applications demand. However, traditional cloud computing frameworks face significant latency, scalability, and security issues. Quantum-Edge Cloud Computing (QECC) offers an innovative solution by integrating the computational power of quantum computing with the low-latency advantages of edge computing and the scalability of cloud computing resources. This study is grounded in an extensive literature review, performance improvements, and metrics data from Bangladesh, focusing on smart city infrastructure, healthcare monitoring, and the industrial IoT sector. The discussion covers vital elements, including integrating quantum cryptography to enhance data security, the critical role of edge computing in reducing response times, and cloud computing’s ability to support large-scale IoT networks with its extensive resources. Through case studies such as the application of quantum sensors in autonomous vehicles, the practical impact of QECC is demonstrated. Additionally, the paper outlines future research opportunities, including developing quantum-resistant encryption techniques and optimizing quantum algorithms for edge computing. The convergence of these technologies in QECC has the potential to overcome the current limitations of IoT frameworks, setting a new standard for future IoT applications.
With the vigorous development of the automobile industry, in-vehicle networks are constantly being upgraded to meet the data transmission requirements of emerging applications. The main transmission requirements are low latency and determinism, especially for autonomous driving. Time-sensitive networking (TSN), based on Ethernet, offers a possible solution to these requirements. Previous surveys have usually investigated TSN from a general perspective, covering various application fields. In this paper, we focus on the application of TSN to in-vehicle networks. We discuss all related TSN standards specified by the IEEE 802.1 working group to date. We further survey and analyze recent literature on various aspects of TSN for automotive applications, including synchronization, resource reservation, scheduling, determinism, software, and hardware. Application scenarios of TSN for in-vehicle networks are analyzed one by one. Since TSN for in-vehicle networks is still at an early stage, this paper also gives insights on open issues, future research directions, and possible solutions.
The numbers of beam positions (BPs) and time slots for beam hopping (BH) dominate the latency of LEO satellite communications. Aiming at minimizing the number of BPs subject to a predefined requirement on the BP radius, a low-complexity user-density-based BP design scheme is proposed, where the original problem is decomposed into two subproblems: the first finds the sparsest user, and the second determines the corresponding best BP. In particular, for the second subproblem, a user selection and smallest-BP-radius algorithm is proposed, where nearby users are sequentially selected until the constraint on the given BP radius is no longer satisfied. These two subproblems are solved iteratively until all users are selected. To further reduce the BP radius, a duplicated-user removal algorithm is proposed to decrease the number of users covered by two or more BPs. Aiming at minimizing the number of time slots subject to the no co-channel interference (CCI) constraint and the traffic demand constraint, a low-complexity CCI-free BH design scheme is proposed, where the BPs having difficulty in satisfying the constraints are illuminated with priority. Simulation results verify the effectiveness of the proposed schemes.
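The iterative two-subproblem structure (seed at the sparsest user, then cover within the radius constraint) can be sketched as a toy greedy routine. This is a simplified reading of the scheme under our own assumptions: the BP is centered on the seed user, and the smallest-radius refinement and duplicated-user removal steps are omitted.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_bp_design(users, r_max):
    """Repeatedly seed a beam position at the sparsest uncovered user
    (fewest uncovered neighbours within r_max), then cover every
    uncovered user within r_max of that seed."""
    uncovered = set(range(len(users)))
    bps = []
    while uncovered:
        seed = min(uncovered,
                   key=lambda i: sum(1 for j in uncovered
                                     if dist(users[i], users[j]) <= r_max))
        members = [j for j in uncovered
                   if dist(users[seed], users[j]) <= r_max]
        bps.append((users[seed], members))
        uncovered -= set(members)
    return bps
```

Seeding at sparse users first avoids spending large beams on isolated terminals late in the process, which is the intuition behind the density-based ordering.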
Sparse vector coding (SVC) is emerging as a potential technology for short-packet communications. To further improve the block error rate (BLER) performance, a uniquely decomposable constellation group-based SVC (UDCG-SVC) is proposed in this article. Additionally, to achieve optimal BLER performance of UDCG-SVC, a problem of optimizing the coding gain of the UDCG-based superimposed constellation is formulated. Given the energy of the rotation constellations in the UDCG, this problem is solved by converting it into finding the maximum minimum Euclidean distance of the superimposed constellation. Simulation results demonstrate the validity of our derivation. We also find that the proposed UDCG-SVC has better BLER performance compared to other SVC schemes, especially under high-order modulation scenarios.
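The max-min Euclidean distance criterion can be illustrated with a brute-force sweep: superimpose a unit-energy QPSK layer with a half-amplitude rotated QPSK layer (our illustrative stand-in for a UDCG constellation pair, not the paper's construction) and search the rotation angle that maximizes the minimum pairwise distance of the superimposed points:

```python
import cmath
import itertools
import math

def min_distance(points):
    """Minimum pairwise Euclidean distance of a constellation."""
    return min(abs(p - q) for p, q in itertools.combinations(points, 2))

def best_rotation(base, steps=360):
    """Sweep the rotation angle of a half-amplitude second layer and
    return (angle, distance) maximizing the min distance of the sum
    constellation.  QPSK symmetry limits the search to [0, pi/2)."""
    best = (0.0, -1.0)
    for k in range(steps):
        theta = k * (math.pi / 2) / steps
        rotated = [0.5 * p * cmath.exp(1j * theta) for p in base]
        superimposed = [a + b for a in base for b in rotated]
        d = min_distance(superimposed)
        if d > best[1]:
            best = (theta, d)
    return best

qpsk = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]
theta, d = best_rotation(qpsk)
```

The paper solves this optimization analytically for UDCG structures; the exhaustive sweep is only meant to show what quantity is being maximized.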
Key challenges for 5G and Beyond networks relate to the requirements for exceptionally low latency, high reliability, and extremely high data rates. The Ultra-Reliable Low Latency Communication (URLLC) use case is the trickiest to support, and current research is focused on physical- or MAC-layer solutions, while proposals focused on the network layer using Machine Learning (ML) and Artificial Intelligence (AI) algorithms running on base stations and User Equipment (UE) or Internet of Things (IoT) devices are at early stages. In this paper, we describe the operational rationale of the most recent relevant ML algorithms and techniques, and we propose and validate ML algorithms running on both cells (base stations/gNBs) and UEs or IoT devices to handle URLLC service control. One ML algorithm runs on base stations to evaluate latency demands and offload traffic in case of need, while another lightweight algorithm runs on UEs and IoT devices to rank cells with the best URLLC service in real time, indicating the best cell for a UE or IoT device to camp on. We show that the interplay of these algorithms leads to good service control and eventually optimal load allocation under slow load mobility.
Ultra-reliable and low-latency communications (URLLC) has become a fundamental focus of future industrial wireless sensor networks (IWSNs). With the evolution of automation and process control in industrial environments, the need for increased reliability and reduced latency in wireless communications is even more pronounced. Furthermore, 5G systems specifically target URLLC in selected areas, and industrial automation might become a suitable venue for future IWSNs, running 5G as a high-speed inter-process linking technology. In this paper, a hybrid multi-channel scheme for performance and throughput enhancement of IWSNs is proposed. The scheme utilizes multiple frequency channels to increase the overall throughput of the system along with the increase in reliability. A special-purpose frequency channel is defined, which facilitates failed communications by retransmissions, where the retransmission slots are allocated according to the priority level of the failed communications of different nodes. A scheduler is used to formulate priority-based scheduling for retransmission in the TDMA-based communication slots of this channel. Furthermore, in carrier-sense multiple access with collision avoidance (CSMA/CA) based slots, frequency polling is introduced to limit collisions. Mathematical modelling of the performance metrics is also presented. The performance of the proposed scheme is compared with that of IEEE 802.15.4e, evaluated on the basis of throughput, reliability, and the number of nodes accommodated in a cluster. The proposed scheme offers a notable increase in reliability and throughput over the existing IEEE 802.15.4e Low Latency Deterministic Networks (LLDN) standard.
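The priority-based allocation of retransmission slots described above can be sketched with a simple priority queue; the tuple format and tie-breaking here are our own assumptions, not the paper's exact scheduler:

```python
import heapq

def allocate_retransmission_slots(failed, n_slots):
    """Serve failed transmissions into the available TDMA slots of the
    special-purpose channel, highest priority first.  'failed' is a
    list of (priority, node_id) pairs; smaller number = higher priority.
    Returns a list of (slot_index, node_id) assignments."""
    heap = list(failed)
    heapq.heapify(heap)
    schedule = []
    for slot in range(n_slots):
        if not heap:
            break
        _prio, node = heapq.heappop(heap)
        schedule.append((slot, node))
    return schedule
```

Nodes whose priority leaves them unscheduled in this round would contend again in the next cycle, which is the qualitative behaviour the abstract's reliability gain rests on.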
Fog Radio Access Networks (F-RANs) have been considered a groundbreaking technique to support Internet of Things services by leveraging edge caching and edge computing. However, current contributions in computation offloading and resource allocation are inefficient; moreover, they merely consider the static communication mode, and the increasing demand for low-latency services and high throughput poses tremendous challenges in F-RANs. A joint problem of mode selection, resource allocation, and power allocation is formulated to minimize latency under various constraints. We propose a Deep Reinforcement Learning (DRL) based joint computation offloading and resource allocation scheme that achieves a suboptimal solution in F-RANs. The core idea of the proposal is that the DRL controller intelligently decides whether to process a generated computation task locally at the device level or offload it to a fog access point or cloud server, and allocates an optimal amount of computation and power resources on the basis of the serving tier. Simulation results show that the proposed approach significantly reduces latency and increases throughput in the system.
The European Organization for Nuclear Research (CERN) is planning a high-performance particle collider by 2050, which will succeed the currently used Large Hadron Collider (LHC). The design of the new experimental facility includes the definition of a suitable communication infrastructure to support the future needs of scientists. The huge amount of data collected by the measurement devices calls for a data rate of at least 1 Gb/s per node, while the need for timely control of instruments requires a low latency of the order of 0.01 μs. Moreover, the main tunnel will be 100 km long and will need appropriate coverage for voice and data traffic, in a special underground environment also subject to strong radiation. Reliable voice, data, and video transmission in a tunnel of this length is necessary to ensure timely and localized intervention, reducing access time. In addition, using wireless communication for voice, control, and data acquisition of accelerator technical systems could lead to a significant reduction in cabling costs, installation times, and maintenance efforts. The communication infrastructure of the Future Circular Collider (FCC) tunnel must be able to circumvent the problems of radioactivity, omnipresent in the tunnel. Transceivers based on current technologies cannot transmit in such a severely radioactive environment, owing to the rapid destruction of any active or passive equipment by radiation. The scope of this paper is to determine the feasibility of robust wireless transmission in an underground radioactive tunnel environment. The network infrastructure design to meet the demand is introduced, and the performance of different wireless technologies is evaluated.
Delay and throughput are the two network indicators users care most about. Traditional congestion control methods try to occupy the buffer aggressively until packet loss is detected, causing high delay and high delay variation. Using AQM and ECN can greatly reduce packet drop rate and delay, but they may also lead to low utilization. Properly managing router queue size matters greatly for congestion control. Keeping the traffic volume varying around the bottleneck bandwidth creates some degree of persistent queue in the router, which inevitably introduces additional delay into the network, but cooperation between sender and router can keep it under control. A proper persistent queue not only keeps routers fully utilized at all times, but also lowers the variation of throughput and delay, achieving a balance between delay and utilization. In this paper, we present BCTCP (Buffer Controllable TCP), a congestion control protocol based on explicit feedback from routers. It requires senders, receivers, and routers to cooperate with each other: senders adjust their sending rate according to the multi-bit load factor information from routers. It keeps the queue length of the bottleneck under control, leading to very good delay and utilization results and making it more applicable to complex network environments.
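A load-factor-driven rate update in the spirit of BCTCP might look as follows. This is a hypothetical rule of our own, not the paper's protocol: the router feeds back a quantized load factor (arriving traffic divided by capacity), and the sender steers its rate toward a target slightly below 1 so that a small, controlled persistent queue remains.

```python
def adjust_rate(rate, load_factor, target=0.95, gain=0.5):
    """Hypothetical multiplicative rate update: increase the sending
    rate when the bottleneck is underloaded (load_factor < target),
    decrease it when overloaded.  'gain' damps oscillation; the floor
    of 1.0 keeps the flow from starving entirely."""
    error = (target - load_factor) / max(load_factor, 1e-9)
    return max(rate * (1.0 + gain * error), 1.0)
```

Because the feedback is multi-bit rather than the single ECN mark, the sender can converge toward the target load in few round trips instead of probing by loss.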
A wireless body area network (WBAN) allows integration of low-power, invasive or non-invasive miniaturized sensors around a human body. WBAN is expected to become a basic infrastructure element for human health monitoring. Task Group 6 of IEEE 802.15 was formed to address the specific needs of body area networks. It defines a medium access control layer that supports various physical layers. In this work, we analyze the efficiency of the simple slotted ALOHA scheme, and then propose a novel allocation scheme that controls the random access period and packet transmission probability to optimize channel efficiency. NS-2 simulations have been carried out to evaluate its performance. The simulation results demonstrate significant performance improvements in latency and throughput using the proposed MAC algorithm.
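The slotted-ALOHA efficiency analysis that motivates the proposed allocation scheme rests on the classical throughput formula S = G·e^(-G), where G is the mean number of transmission attempts per slot; the optimum G = 1 yields the well-known 1/e ≈ 36.8% ceiling that tuning the transmission probability tries to approach:

```python
import math

def slotted_aloha_throughput(g: float) -> float:
    """Classical slotted-ALOHA throughput S = G * e^-G:
    the probability that exactly one attempt lands in a slot,
    under a Poisson attempt process with mean G per slot."""
    return g * math.exp(-g)

# Coarse sweep of G to locate the maximizing load
best_g = max((k / 100 for k in range(1, 300)),
             key=slotted_aloha_throughput)
```

Any MAC that adapts the per-node transmission probability p so the aggregate attempt rate stays near G = 1 operates at this ceiling, which is exactly the knob the abstract's allocation scheme controls.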
IPsec has become an important supplement to IP to provide security protection. However, the heavyweight IPsec has high transmission overhead and latency, and it cannot provide address accountability. We propose the self-trustworthy and secure Internet protocol (T-IP) for authenticated and encrypted network-layer communications. T-IP has the following advantages: (1) self-trustworthy IP addresses; (2) low connection latency and transmission overhead; (3) preservation of the important merit of IP being stateless; (4) compatibility with the existing TCP/IP architecture. We theoretically prove the security of our shared secret key in T-IP and the resistance of our security-enhanced shared-secret-key calculation to the known-session-key attack. Moreover, we analyse the applicability of T-IP, including its resilience against man-in-the-middle and DoS attacks. The evaluation shows that T-IP has much lower transmission overhead and connection latency than IPsec.
Deep learning is now widely used in intelligent apps on mobile devices. In pursuit of ultra-low power and latency, integrating neural network accelerators (NNAs) into mobile phones has become a trend. However, conventional deep learning programming frameworks are not well developed to support such devices, leading to low computing efficiency and high memory occupation. To address this problem, a two-stage pipeline is proposed for optimizing deep learning model inference on mobile devices with NNAs, in terms of both speed and memory footprint. The first stage reduces computation workload via graph optimization, including splitting and merging nodes. The second stage goes further by optimizing at the compilation level, including kernel fusion and in-advance compilation. The proposed optimizations are evaluated on a commercial mobile phone with an NNA. The experimental results show that the proposed approaches achieve 2.8x to 26x speedup and reduce the memory footprint by up to 75%.
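The kernel-fusion idea can be illustrated in miniature: a chain of elementwise operations is merged into a single traversal, removing the intermediate buffers between nodes. This toy (a plain Python sketch under our own naming; the real pipeline fuses hardware kernels at compile time) shows the transformation, not the paper's implementation:

```python
def fuse_elementwise(ops):
    """Merge a chain of elementwise ops into one function that makes a
    single pass over the data, instead of one pass (and one intermediate
    buffer) per op -- the essence of kernel fusion."""
    def fused(xs):
        out = []
        for x in xs:                 # one traversal for the whole chain
            for op in ops:
                x = op(x)
            out.append(x)
        return out
    return fused

# Example chain: ReLU followed by a scale-by-2, fused into one kernel
relu_then_scale = fuse_elementwise([lambda x: max(x, 0.0),
                                    lambda x: 2.0 * x])
```

On an NNA the same rewrite saves a round trip to memory per fused node, which is where most of the reported speedup in memory-bound layers comes from.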
This paper proposes a hardware-efficient implementation of division, which is useful for image processing in WSN edge devices. For error-resilient applications such as image processing, accurate calculations can be unnecessary overhead, and approximate computing, which obtains circuit benefits from inaccurate calculations, is effective. Since studies have shown sufficient performance with low-bit-width operations, this paper proposes a combinational arithmetic circuit design of 16 bits or less. The proposed design is an approximate restoring division circuit implemented with a 2-dimensional array of 1-bit subtractor cells. The main drawback of such a design is the long “borrow chain” that traverses all of the rows of the 2-dimensional subtractor array before a final stable quotient can be produced, resulting in long delay and excessive power dissipation. This paper proposes two approximate subtractor cell designs, named ABSC and ADSC, that break this borrow chain: the first in the vertical direction and the second in the horizontal direction. The proposed approximate divider designs are compared with an accurate design and previous state-of-the-art designs in terms of accuracy and hardware overhead. The proposed designs have accuracy levels close to the best achieved by previous state-of-the-art approximate dividers. In addition, the proposed ADSC design had the lowest delay, area, and power characteristics. Finally, the implementation of both proposed designs for two practical applications showed that both provide sufficient division accuracy.
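For reference, here is the exact restoring-division algorithm that the 2-D subtractor-cell array implements: each iteration mirrors one row of the array (shift in a dividend bit, trial-subtract the divisor, restore on a negative result, with the borrow deciding the quotient bit). The ABSC/ADSC cells in the paper approximate this by cutting the borrow chain; the version below keeps it exact.

```python
def restoring_divide(dividend: int, divisor: int, n_bits: int = 16):
    """Bit-serial restoring division of two unsigned integers.
    Returns (quotient, remainder)."""
    assert divisor != 0
    remainder, quotient = 0, 0
    for i in range(n_bits - 1, -1, -1):
        # Shift in the next dividend bit (one row of the cell array)
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        if remainder >= divisor:        # trial subtraction succeeds
            remainder -= divisor
            quotient |= 1 << i
        # else: "restore" -- keep the remainder, quotient bit stays 0
    return quotient, remainder
```

In hardware, the `remainder >= divisor` test is the borrow out of a row of 1-bit subtractors; that borrow must ripple across the full row and down all rows, which is the long critical path the approximate cells attack.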
In this paper, we co-design the transmission power and the offloading strategy for job offloading to a mobile edge computing (MEC) server at Terahertz (THz) frequencies. The goal is to minimize communication energy consumption while providing ultra-reliable, low end-to-end latency (URLLC) services. To that end, we first establish a novel reliability framework, where the end-to-end (E2E) delay equals a weighted sum of the local computing delay, the communication delay, and the edge computing delay, and the reliability is defined as the probability that the E2E delay remains below a certain predefined threshold. This reliability gives a full view of the statistics of the E2E delay, thus constituting an advance over prior works that have considered only average delays. Based on this framework, we formulate the communication energy consumption minimization problem under URLLC constraints. This optimization problem is non-convex. To handle this issue, we first consider the special single-user case, where we derive the optimal solution by analyzing the structure of the optimization problem. Then, based on the analytical result for the single-user case, we decouple the optimization problem for multi-user scenarios into several sub-optimization problems and propose a sub-optimal algorithm to solve them. Numerical results verify the performance of the proposed algorithm.
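The reliability notion used above (the probability that the weighted E2E delay stays below a threshold) is easy to estimate by Monte Carlo. The exponential component delays, their rates, and the unit weights below are our illustrative assumptions, not the paper's system model:

```python
import random

def reliability(delay_sampler, threshold, n=100_000, weights=(1.0, 1.0, 1.0)):
    """Monte-Carlo estimate of P(weighted E2E delay <= threshold),
    where the E2E delay is a weighted sum of the local computing,
    communication, and edge computing delays."""
    hits = 0
    for _ in range(n):
        local, comm, edge = delay_sampler()
        e2e = weights[0] * local + weights[1] * comm + weights[2] * edge
        hits += e2e <= threshold
    return hits / n

random.seed(1)
sampler = lambda: (random.expovariate(2.0),   # local computing delay
                   random.expovariate(4.0),   # communication delay
                   random.expovariate(2.0))   # edge computing delay
rel = reliability(sampler, threshold=3.0)
```

Unlike an average-delay constraint, this statistic captures the tail of the E2E delay, which is the quantity URLLC guarantees are written against.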
Funding: This publication was produced within the framework of Ramon Alcarria and Borja Bordel's research projects on the occasion of their stay at Argonne Labs (Jose Castillejo 2021 grant), supported by the Ministry of Science, Innovation and Universities through the COGNOS project.
Funding: Supported by the National High Technology Research and Development Program (863) of China (No. 2015AA01A701).
Funding: Projects (61103011, 61170261) supported by the National Natural Science Foundation of China.
Funding: Project (61174132) supported by the National Natural Science Foundation of China; Project (09JJ6098) supported by the Natural Science Foundation of Hunan Province, China
Abstract: Systolic implementations of multiplication over GF(2^m) are usually very efficient in area-time complexity, but their latency is usually very large. Thus, two low-latency systolic multipliers over GF(2^m), based on general irreducible polynomials and irreducible pentanomials, are presented. First, a signal flow graph (SFG) is used to represent the multiplication algorithm over GF(2^m). Then, the two low-latency systolic structures are derived from the SFG by suitable cut-set retiming. Analysis indicates that the two proposed low-latency designs involve at least one-third less area-delay product when compared with existing designs. To the authors' knowledge, the time complexity of the structures is the lowest found in the literature for systolic GF(2^m) multipliers based on general irreducible polynomials and pentanomials. The proposed low-latency designs are regular and modular, and therefore suitable for many time-critical applications.
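For readers unfamiliar with the underlying arithmetic, the operation these systolic arrays implement is polynomial multiplication modulo an irreducible polynomial. A minimal bit-serial software reference (not the paper's hardware structure) is:

```python
def gf2m_mult(a: int, b: int, m: int, poly: int) -> int:
    """Multiply a and b in GF(2^m) defined by the irreducible polynomial
    `poly` (bit i = coefficient of x^i; bit m is the leading term).
    Bit-serial shift-and-add: one partial product per iteration, reducing
    modulo the field polynomial whenever the degree reaches m."""
    result = 0
    for _ in range(m):
        if b & 1:
            result ^= a          # accumulate the partial product (XOR = GF(2) add)
        b >>= 1
        a <<= 1
        if a & (1 << m):         # degree overflowed: reduce modulo poly
            a ^= poly
    return result

# Example field: GF(2^8) with the pentanomial x^8 + x^4 + x^3 + x + 1 (0x11B)
print(hex(gf2m_mult(0x53, 0xCA, 8, 0x11B)))  # 0x1 (0x53 and 0xCA are inverses in this field)
```

A systolic design unrolls exactly this iteration in space, one row of cells per loop step, which is why latency grows with m unless retiming (as proposed in the paper) shortens the critical path.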
Funding: Supported by the Natural Science Foundation of China (No. 62171051)
Abstract: Puncturing has been recognized as a promising technology to cope with the coexistence problem of enhanced mobile broadband (eMBB) and ultra-reliable low latency communications (URLLC) traffic. However, maintaining steady eMBB performance while meeting URLLC requirements through puncturing is a major challenge in some realistic scenarios. In this paper, we focus on timely and energy-efficient processing of eMBB traffic in the industrial Internet of Things (IIoT), where mobile edge computing (MEC) is employed for data processing. Specifically, the performance of eMBB and URLLC traffic in a MEC-based IIoT system is ensured by setting thresholds on tolerable delay and outage probability, respectively. Furthermore, considering the limited energy supply, an energy minimization problem for the eMBB device is formulated under the above constraints, jointly optimizing the resource blocks (RBs) punctured by URLLC traffic, data offloading, and the transmit power of the eMBB device. With Markov's inequality, the problem is reformulated by transforming the probabilistic outage constraint into a deterministic constraint, and an iterative energy minimization algorithm (IEMA) is proposed. Simulation results demonstrate that our algorithm significantly reduces the energy consumption of the eMBB device and achieves a better overall effect than several benchmarks.
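The key reformulation step is standard: for a nonnegative delay D, Markov's inequality gives P(D >= d_max) <= E[D]/d_max, so the probabilistic constraint P(D >= d_max) <= eps can be replaced by the deterministic (and more conservative) constraint E[D] <= eps * d_max. A quick numerical sanity check, with an assumed exponential delay distribution purely for illustration:

```python
import random

def markov_outage_bound(mean_delay: float, d_max: float) -> float:
    """Markov's inequality: P(D >= d_max) <= E[D] / d_max for nonnegative D."""
    return mean_delay / d_max

random.seed(0)
# Assumed exponential delays with mean 2 ms; tolerable-delay threshold 20 ms
samples = [random.expovariate(1 / 2.0) for _ in range(100_000)]
empirical = sum(d >= 20.0 for d in samples) / len(samples)
bound = markov_outage_bound(2.0, 20.0)
assert empirical <= bound  # the deterministic surrogate always upper-bounds the true outage
print(f"empirical outage {empirical:.5f} <= Markov bound {bound:.2f}")
```

The bound is loose (here 0.10 versus an empirical outage near zero), which is the usual price of turning a chance constraint into a tractable deterministic one.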
Abstract: The rapid expansion of the Internet of Things (IoT) has driven the need for advanced computational frameworks capable of handling the complex data processing and security challenges that modern IoT applications demand. However, traditional cloud computing frameworks face significant latency, scalability, and security issues. Quantum-Edge Cloud Computing (QECC) offers an innovative solution by integrating the computational power of quantum computing with the low-latency advantages of edge computing and the scalability of cloud computing resources. This study is grounded in an extensive literature review, performance improvements, and metrics data from Bangladesh, focusing on smart city infrastructure, healthcare monitoring, and the industrial IoT sector. The discussion covers vital elements, including the integration of quantum cryptography to enhance data security, the critical role of edge computing in reducing response times, and cloud computing's ability to support large-scale IoT networks with its extensive resources. Through case studies such as the application of quantum sensors in autonomous vehicles, the practical impact of QECC is demonstrated. Additionally, the paper outlines future research opportunities, including developing quantum-resistant encryption techniques and optimizing quantum algorithms for edge computing. The convergence of these technologies in QECC has the potential to overcome the current limitations of IoT frameworks, setting a new standard for future IoT applications.
Abstract: With the vigorous development of the automobile industry, in-vehicle networks are constantly upgraded to meet the data transmission requirements of emerging applications. The main requirements are low latency and determinism, especially for autonomous driving. Time-sensitive networking (TSN), based on Ethernet, offers a possible solution to these requirements. Previous surveys have usually investigated TSN from a general perspective, covering various application fields. In this paper, we focus on the application of TSN to in-vehicle networks. We discuss all related TSN standards specified by the IEEE 802.1 working group to date, and we overview and analyze recent literature on various aspects of TSN for automotive applications, including synchronization, resource reservation, scheduling, determinism, software, and hardware. Application scenarios of TSN for in-vehicle networks are analyzed one by one. Since TSN for in-vehicle networks is still at a very early stage, this paper also offers insights into open issues, future research directions, and possible solutions.
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2021YFB2900404
Abstract: The numbers of beam positions (BPs) and time slots for beam hopping (BH) dominate the latency of LEO satellite communications. Aiming at minimizing the number of BPs subject to a predefined requirement on the BP radius, a low-complexity user-density-based BP design scheme is proposed, where the original problem is decomposed into two subproblems: the first finds the sparsest user, and the second determines the corresponding best BP. In particular, for the second subproblem, a user selection and smallest-BP-radius algorithm is proposed, where nearby users are sequentially selected until the constraint on the given BP radius is no longer satisfied. These two subproblems are solved iteratively until all users are selected. To further reduce the BP radius, a duplicated-user removal algorithm is proposed to decrease the number of users covered by two or more BPs. Aiming at minimizing the number of time slots subject to the no co-channel interference (CCI) constraint and the traffic demand constraint, a low-complexity CCI-free BH design scheme is proposed, where the BPs that have difficulty satisfying the constraints are illuminated with priority. Simulation results verify the effectiveness of the proposed schemes.
Funding: Supported by the National Natural Science Foundation of China (NSFC) under grant 62001423, the Henan Provincial Key Research, Development and Promotion Project under grant 212102210175, and the Henan Provincial Key Scientific Research Project for Colleges and Universities under grant 21A510011
Abstract: Sparse vector coding (SVC) is emerging as a potential technology for short-packet communications. To further improve block error rate (BLER) performance, a uniquely decomposable constellation group-based SVC (UDCG-SVC) is proposed in this article. Additionally, to achieve optimal BLER performance of UDCG-SVC, a problem is formulated to optimize the coding gain of the UDCG-based superimposed constellation. Given the energy of the rotation constellations in the UDCG, this problem is solved by converting it into finding the maximized minimum Euclidean distance of the superimposed constellation. Simulation results demonstrate the validity of our derivation. We also find that the proposed UDCG-SVC has better BLER performance than other SVC schemes, especially under high-order modulation scenarios.
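The objective being maximized — the minimum Euclidean distance of a superimposed, per-layer-rotated constellation — is easy to compute by brute force for small cases. The following sketch uses two superimposed QPSK layers and an arbitrary rotation angle purely for illustration; the paper's UDCG construction and optimal angles are not reproduced here:

```python
import cmath
import itertools

def min_distance(points):
    """Minimum Euclidean distance over all distinct pairs of constellation points."""
    return min(abs(p - q) for p, q in itertools.combinations(points, 2))

def superimpose(base, rotations):
    """Superimposed constellation: one rotated copy of `base` per layer,
    taking every possible sum of one point from each layer."""
    layers = [[p * cmath.exp(1j * th) for p in base] for th in rotations]
    return [sum(combo) for combo in itertools.product(*layers)]

qpsk = [cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2)) for k in range(4)]
# Without rotation, two QPSK layers collide (zero minimum distance);
# rotating the second layer separates the superimposed points.
d_unrotated = min_distance(superimpose(qpsk, [0.0, 0.0]))
d_rotated = min_distance(superimpose(qpsk, [0.0, cmath.pi / 8]))
print(d_rotated > d_unrotated)  # True
```

This is exactly why the rotation angles matter: they determine the coding gain through the worst-case pairwise distance of the superimposed points.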
Abstract: Key challenges for 5G and beyond networks relate to the requirements for exceptionally low latency, high reliability, and extremely high data rates. The Ultra-Reliable Low Latency Communication (URLLC) use case is the trickiest to support: current research is focused on physical- or MAC-layer solutions, while proposals focused on the network layer, using Machine Learning (ML) and Artificial Intelligence (AI) algorithms running on base stations and User Equipment (UE) or Internet of Things (IoT) devices, are in early stages. In this paper, we describe the operational rationale of the most relevant recent ML algorithms and techniques, and we propose and validate ML algorithms running on both cells (base stations/gNBs) and UEs or IoT devices to handle URLLC service control. One ML algorithm runs on base stations to evaluate latency demands and offload traffic in case of need, while another lightweight algorithm runs on UEs and IoT devices to rank cells by URLLC service quality in real time, indicating the best cell for a UE or IoT device to camp on. We show that the interplay of these algorithms leads to good service control and eventually optimal load allocation under slow load mobility.
Abstract: Ultra-reliable and low-latency communications (URLLC) has become a fundamental focus of future industrial wireless sensor networks (IWSNs). With the evolution of automation and process control in industrial environments, the need for increased reliability and reduced latency in wireless communications is even more pronounced. Furthermore, 5G systems specifically target URLLC in selected areas, and industrial automation might become a suitable venue for future IWSNs, running 5G as a high-speed inter-process linking technology. In this paper, a hybrid multi-channel scheme for performance and throughput enhancement of IWSNs is proposed. The scheme utilizes multiple frequency channels to increase the overall throughput of the system along with an increase in reliability. A special-purpose frequency channel is defined, which facilitates failed communications through retransmissions, where retransmission slots are allocated according to the priority level of the failed communications of different nodes. A scheduler formulates priority-based scheduling for retransmissions in the TDMA-based communication slots of this channel. Furthermore, in the carrier-sense multiple access with collision avoidance (CSMA/CA) based slots, frequency polling is introduced to limit collisions. Mathematical modelling of the performance metrics is also presented. The performance of the proposed scheme is compared with that of IEEE 802.15.4e, evaluated on the basis of throughput, reliability, and the number of nodes accommodated in a cluster. The proposed scheme offers a notable increase in reliability and throughput over the existing IEEE 802.15.4e Low Latency Deterministic Networks (LLDN) standard.
Abstract: Fog Radio Access Networks (F-RANs) have been considered a groundbreaking technique to support Internet of Things services by leveraging edge caching and edge computing. However, current contributions to computation offloading and resource allocation are inefficient; moreover, they merely consider the static communication mode, and the increasing demand for low-latency services and high throughput poses tremendous challenges in F-RANs. A joint problem of mode selection, resource allocation, and power allocation is formulated to minimize latency under various constraints. We propose a Deep Reinforcement Learning (DRL) based joint computation offloading and resource allocation scheme that achieves a suboptimal solution in F-RANs. The core idea of the proposal is that the DRL controller intelligently decides whether to process a generated computation task locally at the device level or offload it to a fog access point or cloud server, and allocates an optimal amount of computation and power resources on the basis of the serving tier. Simulation results show that the proposed approach significantly reduces latency and increases throughput in the system.
Abstract: The European Organization for Nuclear Research (CERN) is planning a high-performance particle collider by 2050, which will update the currently used Large Hadron Collider (LHC). The design of the new experimental facility includes the definition of a suitable communication infrastructure to support the future needs of scientists. The huge amount of data collected by the measurement devices calls for a data rate of at least 1 Gb/s per node, while the need for timely control of instruments requires a low latency on the order of 0.01 μs. Moreover, the main tunnel will be 100 km long and will need appropriate coverage for voice and data traffic, in a special underground environment that is also subject to strong radiation. Reliable voice, data, and video transmission in a tunnel of this length is necessary to ensure timely and localized intervention, reducing access time. In addition, using wireless communication for voice, control, and data acquisition of accelerator technical systems could lead to a significant reduction in cabling costs, installation times, and maintenance efforts. The communication infrastructure of the Future Circular Collider (FCC) tunnel must be able to circumvent the problem of radioactivity, omnipresent in the tunnel. Current transceiver technologies cannot transmit in such a severely radioactive environment, due to the rapid destruction of any active or passive equipment by radiation. The scope of this paper is to determine the feasibility of robust wireless transmission in an underground radioactive tunnel environment. The network infrastructure designed to meet the demand is introduced, and the performance of different wireless technologies is evaluated.
基金supported in part by the National Key R&D Program of China(2018YFB1800602)the Ministry of Education-China Mobile Research Fund Project(MCM20180506)the CERNET Innovation Project(NGIICS20190101)and(NGII20170406)。
Abstract: Delay and throughput are the two network indicators users care about most. Traditional congestion control methods aggressively occupy buffer space until packet loss is detected, causing high delay and high delay variation. Using AQM and ECN can greatly reduce the packet drop rate and delay, but may also lead to low utilization. Properly managing router queue size is therefore essential to congestion control. Keeping the traffic rate varying around the bottleneck bandwidth creates some degree of persistent queue in the router, which inadvertently adds delay to the network, but cooperation between sender and router can keep it under control. A proper persistent queue not only keeps routers fully utilized at all times, but also lowers the variation of throughput and delay, achieving a balance between delay and utilization. In this paper, we present BCTCP (Buffer Controllable TCP), a congestion control protocol based on explicit feedback from routers. It requires the sender, receiver, and routers to cooperate: senders adjust their sending rate according to multi-bit load factor information from routers. BCTCP keeps the bottleneck queue length under control, yielding very good delay and utilization results and making it more applicable to complex network environments.
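To make the multi-bit feedback idea concrete, here is a toy control loop in the spirit of explicit-feedback schemes such as XCP/RCP. It is not BCTCP's actual update rule — the proportional form, the 0.95 utilization target, and the gain are all assumed for illustration — but it shows how a router-reported load factor lets a sender converge to just below full utilization without filling the buffer:

```python
def adjust_rate(rate, load_factor, target=0.95, gain=0.4):
    """Illustrative multi-bit feedback rule (assumed, not BCTCP's):
    the router reports load_factor = arrival_rate / capacity, and the
    sender nudges its rate multiplicatively toward a utilization target
    just below 1.0, keeping only a small persistent queue."""
    return rate * (1 + gain * (target - load_factor))

rate, capacity = 10.0, 100.0
for _ in range(50):
    load_factor = rate / capacity      # feedback echoed back by the router
    rate = adjust_rate(rate, load_factor)
print(round(rate / capacity, 2))       # converges near the 0.95 utilization target
```

With loss-based control the sender would instead push utilization past 1.0 until the buffer overflows; the explicit load factor removes that probing entirely, which is the source of the delay improvement claimed above.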
Funding: Project (2010-0020163) supported by Inha University Research, and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Korea
Abstract: A wireless body area network (WBAN) allows the integration of low-power, invasive or noninvasive miniaturized sensors around a human body. WBANs are expected to become a basic infrastructure element for human health monitoring. Task Group 6 of IEEE 802.15 was formed to address the specific needs of body area networks. It defines a medium access control layer that supports various physical layers. In this work, we analyze the efficiency of the simple slotted ALOHA scheme, and then propose a novel allocation scheme that controls the random access period and packet transmission probability to optimize channel efficiency. NS-2 simulations have been carried out to evaluate its performance. The simulation results demonstrate significant improvements in latency and throughput using the proposed MAC algorithm.
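The slotted-ALOHA efficiency analysis underlying such schemes is standard: with n contending nodes each transmitting in a slot with probability p, a slot succeeds only if exactly one node transmits, giving per-slot throughput n·p·(1-p)^(n-1), maximized at p = 1/n. The brute-force search below (node count chosen arbitrarily) confirms this:

```python
def slotted_aloha_throughput(n: int, p: float) -> float:
    """Probability that a slot carries exactly one transmission
    when n nodes each transmit independently with probability p."""
    return n * p * (1 - p) ** (n - 1)

n = 20
best_p = max((i / 1000 for i in range(1, 1000)),
             key=lambda p: slotted_aloha_throughput(n, p))
print(best_p)                                          # 0.05, i.e. 1/n
print(round(slotted_aloha_throughput(n, best_p), 3))   # 0.377, approaching 1/e for large n
```

This is why controlling the per-node transmission probability as the population changes (as the proposed allocation scheme does) matters: a fixed p is only optimal for one specific number of active sensors.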
Funding: Supported by the National Key Research and Development Program under grant 2017YFB0802301, and the Guangxi Cloud Computing and Big Data Collaborative Innovation Center Project
Abstract: IPsec has become an important supplement to IP for providing security protection. However, the heavyweight IPsec has high transmission overhead and latency, and it cannot provide address accountability. We propose the self-trustworthy and secure Internet protocol (T-IP) for authenticated and encrypted network-layer communications. T-IP has the following advantages: (1) self-trustworthy IP addresses; (2) low connection latency and transmission overhead; (3) preservation of the important merit of IP being stateless; (4) compatibility with the existing TCP/IP architecture. We theoretically prove the security of our shared secret key in T-IP and the resistance of our security-enhanced shared secret key calculation to the known-session-key attack. Moreover, we analyse the applicability of T-IP, including its resilience against man-in-the-middle attacks and DoS attacks. The evaluation shows that T-IP has much lower transmission overhead and connection latency than IPsec.
Funding: Supported by the National Key Research and Development Program of China (No. 2017YFB1003101, 2018AAA0103300, 2017YFA0700900), the National Natural Science Foundation of China (No. 61702478, 61732007, 61906179), the Beijing Natural Science Foundation (No. JQ18013), the National Science and Technology Major Project (No. 2018ZX01031102), and the Beijing Academy of Artificial Intelligence
Abstract: Deep learning is now widely used in the intelligent apps of mobile devices. In pursuit of ultra-low power and latency, integrating neural network accelerators (NNAs) into mobile phones has become a trend. However, conventional deep learning programming frameworks are not well developed to support such devices, leading to low computing efficiency and high memory occupation. To address this problem, a two-stage pipeline is proposed for optimizing deep learning model inference on mobile devices with NNAs, in terms of both speed and memory footprint. The first stage reduces the computation workload via graph optimization, including splitting and merging nodes. The second stage goes further by optimizing at the compilation level, including kernel fusion and ahead-of-time compilation. The proposed optimizations are evaluated on a commercial mobile phone with an NNA. The experimental results show that the proposed approaches achieve 2.8x to 26x speedup and reduce the memory footprint by up to 75%.
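The node-merging/kernel-fusion idea can be illustrated with a toy graph pass. This is a deliberately simplified sketch — the operator names, the string-based graph representation, and the fusion rule (merge maximal chains of elementwise ops into one node) are assumptions for illustration, not the paper's implementation:

```python
def fuse_elementwise(graph):
    """Toy graph-optimization pass: collapse consecutive elementwise ops
    into a single fused node, so the accelerator launches one kernel per
    chain instead of one per op and skips the intermediate buffers."""
    ELEMENTWISE = {"add_bias", "relu", "scale"}  # assumed fusible op set
    fused, chain = [], []
    for op in graph:
        if op in ELEMENTWISE:
            chain.append(op)            # extend the current fusible chain
        else:
            if chain:                   # flush the chain before a heavy op
                fused.append("fused(" + "+".join(chain) + ")")
                chain = []
            fused.append(op)
    if chain:
        fused.append("fused(" + "+".join(chain) + ")")
    return fused

print(fuse_elementwise(["conv", "add_bias", "relu", "conv", "scale"]))
# ['conv', 'fused(add_bias+relu)', 'conv', 'fused(scale)']
```

Fewer nodes means fewer kernel launches and fewer intermediate tensors held in memory, which is where both the speedup and the memory-footprint reduction reported above come from.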
Abstract: This paper proposes a hardware-efficient implementation of division, which is useful for image processing in WSN edge devices. For error-resilient applications such as image processing, accurate calculation can be unnecessary overhead, and approximate computing, which obtains circuit benefits from inaccurate calculation, is effective. Since existing studies show sufficient performance with few-bit operations, this paper proposes a combinational arithmetic circuit design of 16 bits or less. The proposed design is an approximate restoring division circuit implemented as a 2-dimensional array of 1-bit subtractor cells. The main drawback of such a design is the long "borrow chain" that traverses all of the rows of the 2-dimensional subtractor array before a final stable quotient can be produced, resulting in long delay and excessive power dissipation. This paper proposes two approximate subtractor cell designs, named ABSC and ADSC, that break this borrow chain: the first in the vertical direction and the second in the horizontal direction, respectively. The proposed approximate divider designs are compared with an accurate design and previous state-of-the-art designs in terms of accuracy and hardware overhead. The proposed designs have accuracy levels close to the best achieved by previous state-of-the-art approximate dividers. In addition, the proposed ADSC design has the lowest delay, area, and power characteristics. Finally, the implementation of both proposed designs for two practical applications shows that both provide sufficient division accuracy.
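For reference, the exact restoring-division algorithm that the subtractor array implements in hardware is shown below as a bit-serial software model (the accurate baseline, not the approximate ABSC/ADSC variants). Each loop iteration corresponds to one row of the array: shift in a dividend bit, tentatively subtract the divisor, and restore on a negative result:

```python
def restoring_divide(dividend: int, divisor: int, bits: int = 16):
    """Bit-serial restoring division for unsigned `bits`-wide operands.
    Each iteration models one row of the 2-D subtractor array: the
    tentative subtraction is where the row's borrow chain propagates."""
    remainder, quotient = 0, 0
    for i in range(bits - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor            # tentative subtraction (the borrow chain)
        if remainder < 0:
            remainder += divisor        # restore; this quotient bit is 0
        else:
            quotient |= 1 << i          # subtraction stands; this quotient bit is 1
    return quotient, remainder

print(restoring_divide(50000, 123))  # (406, 62), matching divmod(50000, 123)
```

The approximate cell designs trade exactness in this per-row subtraction for a shorter critical path, which is acceptable precisely because image-processing outputs tolerate small quotient errors.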
Abstract: In this paper, we co-design the transmission power and the offloading strategy for job offloading to a mobile edge computing (MEC) server at Terahertz (THz) frequencies. The goal is to minimize communication energy consumption while providing ultra-reliable low-latency communication (URLLC) services. To that end, we first establish a novel reliability framework, where the end-to-end (E2E) delay equals a weighted sum of the local computing delay, the communication delay, and the edge computing delay, and reliability is defined as the probability that the E2E delay remains below a pre-defined threshold. This reliability gives a full view of the statistics of the E2E delay, thus constituting an advance over prior works that considered only average delays. Based on this framework, we formulate the communication energy consumption minimization problem under URLLC constraints. This optimization problem is non-convex. To handle this, we first consider the special single-user case, where we derive the optimal solution by analyzing the structure of the optimization problem. Then, based on the analytical result for the single-user case, we decouple the optimization problem for multi-user scenarios into several sub-problems and propose a sub-optimal algorithm to solve them. Numerical results verify the performance of the proposed algorithm.
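The reliability metric defined above — the probability that a weighted sum of the three delay components stays below a threshold — is easy to estimate numerically. The sketch below uses Monte Carlo with assumed exponential component delays and arbitrary weights purely for illustration; the paper derives this quantity analytically for its actual delay distributions:

```python
import random

def e2e_reliability(w, delay_sampler, d_max, n=100_000, seed=1):
    """Reliability = P(w0*local + w1*comm + w2*edge <= d_max),
    estimated by Monte Carlo over `n` draws of the component delays."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        local, comm, edge = delay_sampler(rng)
        hits += (w[0] * local + w[1] * comm + w[2] * edge) <= d_max
    return hits / n

# Assumed exponential component delays (means 0.5, 1.0, 0.8 ms) for illustration
sampler = lambda rng: (rng.expovariate(1 / 0.5),
                       rng.expovariate(1 / 1.0),
                       rng.expovariate(1 / 0.8))
rel = e2e_reliability((1.0, 1.0, 1.0), sampler, d_max=10.0)
print(rel > 0.99)  # True: a 10 ms budget is loose for these means
```

Unlike an average-delay constraint, this captures the tail of the E2E delay distribution, which is exactly what a URLLC guarantee is about.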