In Beyond the Fifth Generation (B5G) heterogeneous edge networks, numerous users are multiplexed on a channel or served on the same frequency resource block, in which case the transmitter applies coding and the receiver uses interference cancellation. Unfortunately, uncoordinated radio resource allocation can reduce system throughput and lead to user inequity. For this reason, this paper formulates channel allocation and power allocation problems to maximize the system sum rate and the minimum user achievable rate. Since the resulting model is non-convex and the decision variables are high-dimensional, a distributed Deep Reinforcement Learning (DRL) framework based on distributed Proximal Policy Optimization (PPO) is proposed to allocate resources. Specifically, several simulated agents are trained in a heterogeneous environment to find robust behaviors that perform well in channel assignment and power allocation. However, agents that slow down during the collection stage hinder the learning of other agents. Therefore, a preemption strategy is further proposed to optimize distributed PPO, forming DP-PPO and successfully mitigating the straggler problem. The experimental results show that the proposed DP-PPO mechanism outperforms other DRL methods.
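As an illustration of the PPO family that DP-PPO builds on, the clipped surrogate objective that bounds each policy update can be sketched as follows (a generic sketch, not the paper's DP-PPO implementation; the function name and `eps` default are illustrative):

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO.

    ratio:     pi_new(a|s) / pi_old(a|s) for a batch of sampled actions.
    advantage: estimated advantages for those actions.
    eps:       clipping range (0.2 is a common, illustrative default).
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the element-wise minimum bounds how far one update can move
    # the policy, which is what makes PPO updates stable.
    return float(np.minimum(unclipped, clipped).mean())
```

With `eps=0.2`, a ratio of 1.5 and a positive advantage contribute only the clipped value 1.2, so no single batch can push the policy arbitrarily far.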
Massive content delivery will become one of the most prominent tasks of future B5G/6G communication. However, various multimedia applications differ greatly in their target objects (i.e., machines or users) and the corresponding quality evaluation metrics, which significantly impacts the design of encoding and decoding within a content delivery strategy. To overcome this dilemma, we first integrate a digital twin into the edge networks to accurately and timely capture Quality-of-Decision (QoD) or Quality-of-Experience (QoE) as guidance for content delivery. Then, for machine-centric communication, a QoD-driven compression mechanism is designed for video analytics via temporally lightweight frame classification and spatially uneven quality assignment, which achieves a balance among decision-making, delivered content, and encoding latency. Finally, for user-centric communication, by fully leveraging the haptic physical properties and semantic correlations of heterogeneous streams, we develop a QoE-driven video enhancement scheme to supply high data fidelity. Numerical results demonstrate a remarkable performance improvement in massive content delivery.
The emerging mobile edge networks with content caching capability allow end users to receive information directly from adjacent edge servers instead of a centralized data warehouse, so the network transmission delay and system throughput can be improved significantly. Since duplicate content transmissions between the edge network and the remote cloud are reduced, an appropriate caching strategy can also greatly improve the system energy efficiency of mobile edge networks. This paper focuses on improving network energy efficiency and proposes an intelligent caching strategy based on a cached content distribution model for mobile edge networks, built on a promising deep reinforcement learning algorithm. A deep neural network (DNN) and the Q-learning algorithm are combined into a deep reinforcement learning framework named the deep-Q neural network (DQN), in which the DNN approximates the action-state value function of the Q-learning solution. The parameter iteration strategy in the proposed DQN algorithm is improved through stochastic gradient descent, so the DQN algorithm converges to the optimal solution quickly and the network performance of the content caching policy can be optimized. The simulation results show that the proposed DQN-based content caching strategy, given enough training steps, significantly improves the energy efficiency of mobile edge networks.
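The DNN in a DQN is trained to regress toward the standard Bellman target of Q-learning; a minimal sketch of that target computation (illustrative, not the paper's exact training code; the discount factor is an assumed value):

```python
import numpy as np

def q_learning_target(reward, next_q_values, gamma=0.9, done=False):
    """Bellman target y = r + gamma * max_a' Q(s', a').

    In a DQN, the DNN approximating the action-state value function is
    trained via stochastic gradient descent to regress toward this target.
    """
    if done:
        # Terminal transition: no future value to bootstrap from.
        return float(reward)
    return float(reward + gamma * np.max(next_q_values))
```

For example, a reward of 1.0 and next-state Q-values `[0.5, 2.0]` with `gamma=0.9` yield the target 1.0 + 0.9 * 2.0 = 2.8.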
The vehicle edge network (VEN) has become a new research hotspot in the Internet of Things (IoT). However, many new delays arise while a vehicle offloads a task to an edge server, which greatly reduces the quality of service (QoS) provided by the vehicle edge network. To solve this problem, this paper proposes an evolutionary algorithm (EA) based task offloading and resource allocation scheme. First, the delay of offloading a task to an edge server is formally defined; then the mathematical model of the problem is given. Finally, the objective function is optimized by an evolutionary algorithm, and the optimal solution is obtained by iteration and averaging. Contrast experiments are conducted to verify the performance of this method. The experimental results show that the proposed method reduces delay and improves QoS, and that it is superior to other schemes.
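An evolutionary loop of the kind such a scheme relies on, mutate, evaluate, keep the best, can be sketched as follows (the operators, population size, and [0, 1] bounds are illustrative assumptions, not the paper's exact algorithm):

```python
import random

def evolve(fitness, dim, pop_size=20, generations=50, sigma=0.1, seed=0):
    """Minimal (mu + lambda) evolutionary loop: mutate every individual,
    then keep the pop_size best of parents and children combined.
    'fitness' is the delay model to minimize over [0, 1]^dim."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Gaussian mutation, clipped back into the feasible box.
        children = [[min(1.0, max(0.0, x + rng.gauss(0.0, sigma)))
                     for x in ind] for ind in pop]
        # Elitist survivor selection: best pop_size of parents + children.
        pop = sorted(pop + children, key=fitness)[:pop_size]
    return pop[0]  # best offloading/allocation vector found
```

Plugging in a toy delay model with a known minimum (e.g., squared distance from an optimal allocation) shows the loop converging toward it within a few dozen generations.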
With the rapid spread of smart sensors, data collection is becoming more and more important in Mobile Edge Networks (MENs). The collected data can be used in many applications based on the results of cloud-computing analysis. Data collection schemes have been widely studied, but most existing work considers the amount of collected data without addressing the privacy leakage of the collected data. In this paper, we propose an energy-efficient and anonymous data collection scheme for MENs that keeps a balance between energy consumption and data privacy, in which the private information of sensors is hidden during data communication. In addition, the residual energy of nodes is taken into consideration, in particular when selecting the relay node. The security analysis shows that no private information of the source node or relay node is leaked to attackers. Moreover, the simulation results demonstrate that the proposed scheme outperforms other schemes in terms of lifetime and energy consumption. At the end of the simulation section, we present a qualitative analysis of the proposed scheme and some conventional protocols, which confirms that the proposed scheme outperforms the existing protocols in terms of the above indicators.
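The energy-aware relay selection described above can be illustrated with a minimal sketch that simply prefers the candidate neighbor with the most residual energy, so low-energy nodes are spared and the network lifetime extended (the node representation and the plain max-energy rule are simplified assumptions, not the paper's full anonymity-preserving protocol):

```python
def select_relay(neighbors):
    """Energy-aware relay choice: among candidate neighbors, pick the
    one with the most residual energy. A stand-in for the paper's rule,
    which additionally hides node identities during communication."""
    return max(neighbors, key=lambda n: n["residual_energy"])
```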
Network fault diagnosis methods play a vital role in maintaining network service quality and enhancing user experience as an integral component of intelligent network management. Considering the unique characteristics of edge networks, such as limited resources, complex network faults, and the need for high real-time performance, enhancing and optimizing existing network fault diagnosis methods is necessary. Therefore, this paper proposes a lightweight edge-side fault diagnosis approach based on a spiking neural network (LSNN). Firstly, we replace the Leaky Integrate-and-Fire (LIF) neuron model in the LSNN with the Izhikevich neuron model. Izhikevich neurons inherit the simplicity of LIF neurons but possess richer behavioral characteristics and the flexibility to handle diverse data inputs. Inspired by Fast Spiking Interneurons (FSIs) with their high-frequency firing pattern, we adopt the parameters of FSIs. Secondly, inspired by the spiking-dynamics-based connection mode in the basal ganglia (BG) area of the brain, we propose a pruning approach based on the FSIs of the BG in the LSNN to improve computational efficiency and reduce the demand for computing resources and energy consumption. Furthermore, we propose a multiple-iteration Dynamic Spike Timing Dependent Plasticity (DSTDP) algorithm to enhance the accuracy of the LSNN model. Experiments on two server fault datasets demonstrate significant precision, recall, and F1 improvements across three diagnosis dimensions, while lightweight indicators such as Params and FLOPs are significantly reduced, showcasing the LSNN's performance and model efficiency. In conclusion, the results on both datasets indicate that the LSNN model surpasses traditional models and achieves cutting-edge outcomes in network fault diagnosis tasks.
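The Izhikevich model adopted above is a well-known two-variable neuron model; a minimal Euler-integration sketch with fast-spiking-style parameters (a = 0.1) is shown below. The time step, step count, and input current are illustrative choices, not the paper's configuration:

```python
def izhikevich_spikes(current, a=0.1, b=0.2, c=-65.0, d=2.0, dt=0.5, steps=400):
    """Euler integration of the Izhikevich neuron
        v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    with reset v <- c, u <- u + d when v >= 30 mV. a = 0.1 gives the
    high-frequency fast-spiking behavior mentioned above. Returns the
    spike count over the simulated window for constant input current I."""
    v, u, spikes = -65.0, b * -65.0, 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:  # spike: reset membrane potential, bump recovery
            v, u, spikes = c, u + d, spikes + 1
    return spikes
```

With no input the neuron settles at its resting fixed point, while a sustained suprathreshold current produces repetitive firing.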
Mobile Edge Computing (MEC) is a technology designed for the on-demand provisioning of computing and storage services, strategically positioned close to users. In the MEC environment, frequently accessed content can be deployed and cached on edge servers to optimize the efficiency of content delivery, ultimately enhancing the quality of the user experience. However, because edge devices and nodes are typically placed at the network's periphery, these components may face various fault tolerance challenges, including network instability, device failures, and resource constraints. Considering the dynamic nature of MEC, making high-quality content caching decisions for real-time mobile applications, especially latency-sensitive ones, by effectively utilizing mobility information continues to be a significant challenge. In response, this paper introduces FT-MAACC, a mobility-aware caching solution grounded in multi-agent deep reinforcement learning and equipped with fault tolerance mechanisms. This approach integrates content adaptivity algorithms to evaluate the priority of highly user-adaptive cached content. Furthermore, it relies on collaborative caching strategies based on multi-agent deep reinforcement learning models and establishes a fault-tolerance model to ensure the system's reliability, availability, and persistence. Empirical results unequivocally demonstrate that FT-MAACC outperforms its peer methods in cache hit rate and transmission latency.
Multiple complex networks, each with different properties and mutually fused, pose the problems that the evolving process is time-varying and non-equilibrium, network structures are layered and interlacing, and evolving characteristics are difficult to measure. On that account, a dynamic evolving model of complex networks with fusion nodes and overlap edges (CNFNOEs) is proposed. Firstly, we define some related concepts of CNFNOEs and analyze the conversion process of the fusion relationship and hierarchy relationship. According to the property differences of various nodes and edges, fusion nodes and overlap edges are subsequently split, and the CNFNOEs are transformed into interlacing layered complex networks (ILCN). Secondly, node degree saturation and attraction factors are defined. On that basis, the evolution algorithm and the local-world evolution model for ILCN are put forward. Moreover, four typical situations of node evolution are discussed, and the degree distribution law during evolution is analyzed by means of the mean-field method. Numerical simulation results show that nodes that have not reached degree saturation follow the exponential distribution with an error of no more than 6%; nodes that have reached degree saturation follow the distribution of their connection capacities with an error of no more than 3%; and network weaving coefficients are positively correlated with the highest probability of a new node and the initial number of connected edges. The results verify the feasibility and effectiveness of the model, which provides a new idea and method for exploring the evolving process and laws of CNFNOEs. The model also has good application prospects in research on the structure and dynamics of transportation networks, communication networks, social contact networks, etc.
False data injection attack (FDIA) is an attack that affects the stability of a grid cyber-physical system (GCPS) by evading the bad-data detection mechanism. Existing FDIA detection methods usually employ complex neural network models to detect FDIA attacks. However, they overlook the fact that FDIA attack samples at public-private network edges are extremely sparse, making it difficult for neural network models to obtain sufficient samples to construct a robust detection model. To address this problem, this paper designs an efficient sample generative adversarial model of FDIA attacks at the public-private network edge, which can effectively bypass the detection model to threaten the power grid system. A generative adversarial network (GAN) framework is first constructed by combining residual networks (ResNet) with fully connected networks (FCN). Then, a sparse adversarial learning model is built by integrating the time-aligned data and normal data, which learns the distribution characteristics of normal data and attack data through iterative confrontation. Furthermore, we introduce a Gaussian hybrid distribution matrix by aggregating the network structures of attack-data and normal-data characteristics, which can connect and calculate FDIA data with normal characteristics. Finally, efficient FDIA attack samples can be sequentially generated through interactive adversarial learning. Extensive simulation experiments are conducted with IEEE 14-bus and IEEE 118-bus system data, and the results demonstrate that the attack samples generated by the proposed model outperform those of state-of-the-art models in terms of attack strength, robustness, and covert capability.
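Why stealthy FDIA samples matter can be seen from the classic result that an attack vector lying in the column space of the measurement matrix H leaves the state-estimation residual unchanged, so residual-based bad-data detection cannot flag it. A small numerical check (the matrix and measurement values below are arbitrary illustrative choices, not grid data from the paper):

```python
import numpy as np

def residual_norm(z, H):
    """Least-squares state estimate and the norm of the measurement
    residual, the quantity classic bad-data detectors threshold."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return float(np.linalg.norm(z - H @ x_hat))

# Illustrative 3-measurement, 2-state system.
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
z = np.array([1.0, 2.0, 2.5])
c = np.array([0.3, -0.2])   # attacker's chosen state perturbation
a = H @ c                   # structured attack vector a = H c

r_clean = residual_norm(z, H)
r_attacked = residual_norm(z + a, H)
# r_attacked equals r_clean: the attack shifts the estimate by c but
# leaves the residual untouched, so it is invisible to the detector.
```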
The detection and correction of errors is an important area of mathematics that is widely applied in all communication systems. Furthermore, combinatorial design theory has several applications in detecting or correcting errors in communication systems. Network (graph) designs (GDs) are introduced as a generalization of the symmetric balanced incomplete block designs (BIBDs) that are utilized directly in the above-mentioned application. The networks (graphs) are represented by vectors whose entries are the labels of the vertices related to the lengths of the edges linked to them. Here, a general method is proposed and applied to construct new network designs; this representation of networks simplifies the construction of the designs. In this paper, the novel representation is used as a technique for constructing the group-generated network designs of the complete bipartite networks and certain circulants. A technique for constructing the group-generated network designs of the circulants is given, together with group-generated graph designs (GDs) of certain circulants. In addition, the GDs are transformed into incidence matrices, whose rows and columns can both be viewed as a binary nonlinear code. A novel coding error detection and correction application is proposed and examined.
The convergence of computation and communication at network edges plays a significant role in coping with computation-intensive and delay-critical tasks. During the stage of network planning, the resource provisioning problem for edge nodes has to be investigated to provide prior information for future system configurations. This work focuses on how to quantify the computation capabilities of access points at network edges when provisioning computation and communication resources in multi-cell wireless networks. The problem is formulated as a discrete, non-convex minimization problem in which practical constraints, including delay requirements, inter-cell interference, and resource allocation strategies, are considered. An iterative algorithm is developed based on decomposition theory and fractional programming to solve this problem. The analysis shows that the computation capability needed for a certain delay guarantee depends on the resource allocation strategy for delay-critical tasks; for delay-tolerant tasks, it can be approximately estimated by a derived lower bound that ignores the scheduling strategy. The efficiency of the proposed algorithm is demonstrated using numerical results.
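Fractional programs of the kind handled by such iterative algorithms are often solved with Dinkelbach's method, which turns a ratio objective into a sequence of parametrized subproblems; a minimal sketch over a finite candidate set (the example functions and candidates are illustrative, not the paper's formulation):

```python
def dinkelbach(f, g, solve_sub, tol=1e-9, max_iter=100):
    """Dinkelbach's method for min_x f(x)/g(x) with g(x) > 0:
    repeatedly set lam = f(x)/g(x) and re-solve the subproblem
    x = argmin_x [f(x) - lam * g(x)] until lam stops changing."""
    lam = 0.0
    x = solve_sub(lam)
    for _ in range(max_iter):
        new_lam = f(x) / g(x)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
        x = solve_sub(lam)
    return x, lam

# Toy instance: minimize (x^2 + 1) / x over a small discrete set.
candidates = [1, 2, 3, 4]
f = lambda x: x * x + 1
g = lambda x: x
solve_sub = lambda lam: min(candidates, key=lambda x: f(x) - lam * g(x))
x_star, ratio = dinkelbach(f, g, solve_sub)
```

Here the method converges in two iterations to x = 1 with ratio 2, the true minimizer of (x^2 + 1)/x over the candidate set.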
The explosive increase of smart devices and mobile traffic results in a heavy burden on the backhaul and core network, intolerable network latency, and degraded service for end users. As a complement to the core network, the edge network helps relieve the network burden and improve the user experience. To investigate the problem of optimizing the total consumption in an edge-core network, the system consumption minimization problem is formulated, considering both energy consumption and delay. Given that the formulated problem is a mixed nonlinear integer program (MNIP), a low-complexity workload allocation algorithm based on the interior-point method is proposed. The proposed algorithm has an extremely short running time in practice. Finally, simulation results show that the edge network can significantly complement the core network with much reduced backhaul energy consumption and delay.
Memristive technology has been widely explored due to its distinctive properties, such as nonvolatility, high density, versatility, and CMOS compatibility. For memristive devices, a general compact model is highly favorable for the realization of circuits and applications. In this paper, we propose a novel memristive model of TiOx-based devices that considers the negative differential resistance (NDR) behavior. The model is physics-oriented and passes Linn's criteria. It not only exhibits sufficient accuracy (I-V characteristics within 1.5% RMS), lower latency (below half that of the VTEAM model), and preferable generality compared to previous models, but also yields more precise predictions of long-term potentiation/depression (LTP/LTD). Finally, novel methods based on memristive models are proposed for gray sketching and edge detection applications. These methods avoid the complex nonlinear functions required by their original counterparts. When the proposed model is utilized in these methods, they achieve an increased contrast ratio and accuracy (for gray sketching and edge detection, respectively) compared to the Simmons model. Our results suggest a memristor-based network is a promising candidate to tackle the existing inefficiencies in traditional image processing methods.
This paper proposes a novel scheme based on minimum delay at the edges (MDE) for optical burst switching (OBS) networks. The scheme is designed to overcome the long delay at the edge nodes of OBS networks. The MDE scheme features simultaneous burst assembly, channel scheduling, and pre-transmission of the control packet, as well as an estimated setup and explicit release (ESXR) signaling protocol. The MDE scheme can minimize the delay of data packets at the edge nodes and improve the end-to-end latency performance of OBS networks. In addition, the performance of the MDE scheme is analyzed in comparison with the conventional scheme.
To achieve lower assembly delay at optical burst switching edge nodes, this paper proposes an approach called current weight length prediction (CWLP) to improve the existing estimation mechanism in burst assembly. The CWLP method adequately accounts for the traffic that has arrived within the prediction time. A 'weight' parameter is introduced to make a dynamic tradeoff between current and past traffic under different offset times. Simulation results show that CWLP achieves a significant improvement in traffic estimation across various offset times and offered loads.
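The weighted blend of current and past traffic described above can be sketched as a one-line estimator (the blending rule and default weight are illustrative assumptions, not the exact CWLP formula):

```python
def predict_length(current_arrived, past_lengths, weight=0.7):
    """Estimate the final burst length by blending the traffic already
    arrived in the current assembly window with the average of past
    bursts; 'weight' trades current against past traffic, and would be
    tuned per offset time in a CWLP-style scheme."""
    past_avg = sum(past_lengths) / len(past_lengths)
    return weight * current_arrived + (1.0 - weight) * past_avg
```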
A scheduling algorithm for the edge nodes of optical burst switching (OBS) networks is proposed to guarantee the delay requirements of services with different Classes of Service (CoS) while providing a lower burst loss ratio. The performance of edge nodes based on the proposed algorithm is presented.
The emergence of smart edge-network content hotspots, which are equipped with huge storage space (e.g., several GBs), opens up the opportunity to study the delivery of videos at the edge network. Different from both the conventional content delivery network (CDN) and the peer-to-peer (P2P) scheme, this new delivery paradigm, namely edge video CDN, requires up to millions of edge hotspots located in users' homes and offices to be coordinately managed to serve mobile video content. Specifically, building an edge video CDN involves two challenges: how edge content hotspots should be organized to serve users, and how content should be replicated to them at different locations. To address these challenges, we propose a data-driven design. First, we formulate an edge region partition problem to jointly maximize the quality experienced by users and minimize the replication cost, which is NP-hard in nature, and we design a Voronoi-like partition algorithm to generate optimal service cells. Second, to replicate content to edge-network content hotspots, we propose an edge-request-prediction-based replication strategy, which carries out the replication in a server peak offloading manner. We implement our design and use trace-driven experiments to verify its effectiveness. Compared with a conventional centralized CDN and popularity-based replication, our design can significantly improve users' quality of experience, in terms of perceived bandwidth and latency, by up to 40%.
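The core step of a Voronoi-like partition, assigning each user to the nearest hotspot, can be sketched as follows (the plain nearest-neighbor rule is a simplification: the paper's partition additionally weighs experienced quality against replication cost):

```python
def partition(users, hotspots):
    """Voronoi-style service cells: each user (x, y) is assigned to the
    hotspot with the smallest squared Euclidean distance. Returns a dict
    mapping hotspot index -> list of users in its cell."""
    cells = {i: [] for i in range(len(hotspots))}
    for ux, uy in users:
        nearest = min(range(len(hotspots)),
                      key=lambda i: (ux - hotspots[i][0]) ** 2
                                    + (uy - hotspots[i][1]) ** 2)
        cells[nearest].append((ux, uy))
    return cells
```

With two hotspots at (0, 0) and (10, 10), users near the origin fall into the first cell and users near (10, 10) into the second, which is exactly the seed structure the quality/cost-aware algorithm refines.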
Funding: supported by the Key Research and Development Program of China (No. 2022YFC3005401), the Key Research and Development Program of Yunnan Province, China (Nos. 202203AA080009 and 202202AF080003), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX21_0482).
Funding: partly supported by the National Natural Science Foundation of China (Grants No. 62231017 and No. 62071254) and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Funding: This work was supported by the National Natural Science Foundation of China (61871058, WYF, http://www.nsfc.gov.cn/).
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61602252, 61802197, and 61972207), the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20160967), and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions.
Funding: This work is supported by the National Key R&D Program of China under Grant No. 2018YFB0505000, the National Natural Science Foundation of China under Grants No. U1836115, No. 61922045, and No. 61672295, the Natural Science Foundation of Jiangsu Province under Grant No. BK20181408, the State Key Laboratory of Cryptology Foundation, the Guangxi Key Laboratory of Cryptography and Information Security under Grant No. GCIS201715, the CICAEET fund, and the PAPD fund.
Funding: This work is supported by the National Key R&D Program of China (2019YFB2103202).
Abstract: Network fault diagnosis methods play a vital role in maintaining network service quality and enhancing user experience, and are an integral component of intelligent network management. Given the characteristics of edge networks, such as limited resources, complex network faults, and high real-time requirements, existing network fault diagnosis methods need to be enhanced and optimized. This paper therefore proposes a lightweight edge-side fault diagnosis approach based on a spiking neural network (LSNN). Firstly, we replace the Leaky Integrate-and-Fire (LIF) neuron model in the LSNN with the Izhikevich neuron model. Izhikevich neurons retain the simplicity of LIF neurons while offering richer behavioral characteristics and the flexibility to handle diverse data inputs; inspired by the high-frequency firing pattern of Fast Spiking Interneurons (FSIs), we adopt FSI parameters. Secondly, inspired by the spiking-dynamics-based connectivity of the basal ganglia (BG) area of the brain, we propose a pruning approach based on the FSIs of the BG to improve computational efficiency and reduce the demand for computing resources and energy. Furthermore, we propose a multiple-iteration Dynamic Spike Timing Dependent Plasticity (DSTDP) algorithm to enhance the accuracy of the LSNN model. Experiments on two server fault datasets demonstrate significant improvements in precision, recall, and F1 score across three diagnosis dimensions, while lightweight indicators such as Params and FLOPs are significantly reduced. Overall, the LSNN model surpasses traditional models and achieves state-of-the-art results in network fault diagnosis tasks.
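The Izhikevich neuron used in place of LIF can be reproduced in a few lines. The fast-spiking parameter set (a=0.1, b=0.2, c=-65, d=2) is Izhikevich's standard FS regime; the constant input current and Euler step size below are illustrative choices, not values from the paper.

```python
# Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
# with the reset v <- c, u <- u + d whenever v reaches 30 mV.
A, B, C, D = 0.1, 0.2, -65.0, 2.0   # fast-spiking (FSI) parameters
DT, T_MS, I_IN = 0.25, 500.0, 10.0  # Euler step (ms), duration, input (assumed)

def simulate():
    v, u, spikes = C, B * C, 0
    for _ in range(int(T_MS / DT)):
        v += DT * (0.04 * v * v + 5 * v + 140 - u + I_IN)
        u += DT * A * (B * v - u)
        if v >= 30.0:       # spike: count it and apply the after-spike reset
            spikes += 1
            v, u = C, u + D
    return spikes

n_spikes = simulate()
```

With these parameters the neuron fires tonically at high frequency, which is exactly the FSI behavior the abstract exploits; switching (a, b, c, d) reproduces other firing patterns without changing the two update equations.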
Funding: This work is supported by the Innovation Fund Project of Jiangxi Normal University (YJS2022065) and the Domestic Visiting Program of Jiangxi Normal University.
Abstract: Mobile Edge Computing (MEC) is a technology designed for the on-demand provisioning of computing and storage services, strategically positioned close to users. In the MEC environment, frequently accessed content can be deployed and cached on edge servers to optimize the efficiency of content delivery and ultimately enhance the quality of the user experience. However, because edge devices and nodes are typically placed at the network's periphery, they face various potential fault tolerance challenges, including network instability, device failures, and resource constraints. Given the dynamic nature of MEC, making high-quality content caching decisions for real-time, latency-sensitive mobile applications by effectively exploiting mobility information remains a significant challenge. In response, this paper introduces FT-MAACC, a mobility-aware caching solution grounded in multi-agent deep reinforcement learning and equipped with fault tolerance mechanisms. The approach integrates content adaptivity algorithms to evaluate the priority of highly user-adaptive cached content. Furthermore, it relies on collaborative caching strategies based on multi-agent deep reinforcement learning models and establishes a fault-tolerance model to ensure the system's reliability, availability, and persistence. Empirical results demonstrate that FT-MAACC outperforms its peer methods in cache hit rate and transmission latency.
基金supported by the National Natural Science Foundation of China(615730176140149961174162)
Abstract: Multiple complex networks, each with different properties and mutually fused, pose several difficulties: the evolving process is time-varying and non-equilibrium, network structures are layered and interlacing, and evolving characteristics are hard to measure. To address this, a dynamic evolving model of complex networks with fusion nodes and overlap edges (CNFNOEs) is proposed. Firstly, we define the related concepts of CNFNOEs and analyze the conversion between fusion relationships and hierarchy relationships. According to the property differences among nodes and edges, fusion nodes and overlap edges are split, transforming the CNFNOEs into interlacing layered complex networks (ILCN). Secondly, node degree saturation and attraction factors are defined, and on that basis the evolution algorithm and the local-world evolution model for ILCN are put forward. Moreover, four typical situations of node evolution are discussed, and the degree distribution during evolution is analyzed by means of the mean field method. Numerical simulation results show that nodes which have not reached degree saturation follow an exponential distribution with an error of no more than 6%; nodes that have reached degree saturation follow the distribution of their connection capacities with an error of no more than 3%; and network weaving coefficients are positively correlated with the highest probability of a new node and the initial number of connected edges. The results verify the feasibility and effectiveness of the model, providing a new idea and method for exploring the evolving process and laws of CNFNOEs. The model also has good application prospects for the structure and dynamics of transportation networks, communication networks, social contact networks, etc.
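The degree-saturation idea can be sketched with a minimal growth simulation: preferential attachment in which nodes that have reached their saturation degree no longer accept new edges. The saturation value, seed network, and growth parameters below are illustrative assumptions and much simpler than the paper's ILCN model.

```python
import random

random.seed(1)
SATURATION = 10   # assumed per-node degree cap
M_EDGES = 2       # edges each newborn node tries to attach
N_NODES = 200

def grow():
    degree = {0: 1, 1: 1}                        # seed network: a single edge
    for new in range(2, N_NODES):
        # preferential attachment restricted to unsaturated nodes
        candidates = [n for n in degree if degree[n] < SATURATION]
        targets = set()
        while len(targets) < min(M_EDGES, len(candidates)):
            # degree-proportional sampling: repeat each node by its degree
            pool = [n for n in candidates for _ in range(degree[n])]
            targets.add(random.choice(pool))
        degree[new] = len(targets)
        for t in targets:
            degree[t] += 1
    return degree

degree = grow()
```

Because only unsaturated nodes are candidates and each target gains at most one edge per step, no node can ever exceed the cap, which is what makes the saturated and unsaturated populations follow the two different distributions reported in the abstract.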
Funding: This work is supported in part by the Natural Science Foundation of Shanghai (20ZR1421600) and the Research Fund of the Guangxi Key Lab of Multi-Source Information Mining & Security (MIMS21-M-02).
Abstract: False data injection attack (FDIA) is an attack that affects the stability of a grid cyber-physical system (GCPS) by evading bad-data detection mechanisms. Existing FDIA detection methods usually employ complex neural network models to detect FDIA attacks. However, they overlook the fact that FDIA attack samples at public-private network edges are extremely sparse, making it difficult for neural network models to obtain sufficient samples for building a robust detection model. To address this problem, this paper designs an efficient generative adversarial model for FDIA attack samples at the public-private network edge, which can effectively bypass the detection model and threaten the power grid system. A generative adversarial network (GAN) framework is first constructed by combining residual networks (ResNet) with fully connected networks (FCN). Then, a sparse adversarial learning model is built by integrating time-aligned data and normal data, and is used to learn the distribution characteristics of normal and attack data through iterative confrontation. Furthermore, we introduce a Gaussian hybrid distribution matrix by aggregating the network structures of attack-data and normal-data characteristics, which can relate FDIA data to normal characteristics. Finally, efficient FDIA attack samples are generated sequentially through interactive adversarial learning. Extensive simulation experiments on IEEE 14-bus and IEEE 118-bus system data demonstrate that the attack samples generated by the proposed model outperform those of state-of-the-art models in terms of attack strength, robustness, and covert capability.
Funding: The authors acknowledge support from Taif University Researchers Supporting Project Number (TURSP-2020/031), Taif University, Taif, Saudi Arabia.
Abstract: Error detection and correction is an important area of mathematics that underpins virtually all communication systems, and combinatorial design theory has several applications in detecting and correcting errors in such systems. Network (graph) designs (GDs) are introduced as a generalization of the symmetric balanced incomplete block designs (BIBDs) that are used directly in the above application. Networks (graphs) are represented by vectors whose entries are the labels of the vertices, related to the lengths of the edges linked to them. A general method based on this representation is proposed and applied to construct new network designs, which simplifies their construction. In this paper, this novel representation of networks is used as a technique for constructing group-generated network designs of complete bipartite networks and of certain circulants. In addition, the GDs are transformed into incidence matrices, whose rows and columns can both be viewed as a binary nonlinear code. A novel error detection and correction coding application is proposed and examined.
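The incidence-matrix-to-code construction can be illustrated on a small graph: rows of the vertex-edge incidence matrix are treated as binary codewords, and their minimum Hamming distance determines the error-detection capability. The 4-cycle below is a toy example, not one of the paper's designs.

```python
from itertools import combinations

# 4-cycle C4: vertices 0..3, edges (0,1), (1,2), (2,3), (3,0).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
# Row v of the incidence matrix marks the edges incident to vertex v.
incidence = [[1 if v in e else 0 for e in edges] for v in range(4)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

d_min = min(hamming(a, b) for a, b in combinations(incidence, 2))
# A code with minimum distance d detects up to d - 1 errors.
detectable = d_min - 1
```

Each row has exactly two 1s (every vertex of C4 has degree 2), adjacent vertices share one edge, so the minimum distance is 2 and the code detects any single bit error: this is the kind of structural guarantee the designs in the paper provide at larger scale.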
Funding: This work is supported by the Shanghai Sailing Program (No. 18YF1427900), the National Natural Science Foundation of China (No. 61471347), and the Shanghai Pujiang Program (No. 2020PJD081).
Abstract: The convergence of computation and communication at network edges plays a significant role in coping with computation-intensive and delay-critical tasks. During network planning, the resource provisioning problem for edge nodes has to be investigated to provide prior information for future system configurations. This work focuses on quantifying the computation capabilities required of access points at network edges when provisioning computation and communication resources in multi-cell wireless networks. The problem is formulated as a discrete, non-convex minimization problem that accounts for practical constraints including delay requirements, inter-cell interference, and resource allocation strategies. An iterative algorithm is developed based on decomposition theory and fractional programming to solve this problem. The analysis shows that the computation capability needed for a given delay guarantee depends on the resource allocation strategy for delay-critical tasks; for delay-tolerant tasks, it can be approximately estimated by a derived lower bound that ignores the scheduling strategy. The efficiency of the proposed algorithm is demonstrated with numerical results.
Funding: This work is supported by the National Science and Technology Major Project (2018ZX03001016), the China Ministry of Education-CMCC Research Fund (MCM20160104), and the Beijing Municipal Science and Technology Commission Research Fund (Z171100005217001).
Abstract: The explosive increase of smart devices and mobile traffic results in a heavy burden on the backhaul and core network, intolerable network latency, and degraded service to end-users. As a complement to the core network, the edge network contributes to relieving the network burden and improving user experience. To optimize the total consumption in an edge-core network, the system consumption minimization problem is formulated, taking both energy consumption and delay into account. Since the formulated problem is a mixed nonlinear integer program (MNIP), a low-complexity workload allocation algorithm is proposed based on the interior-point method. The proposed algorithm has an extremely short running time in practice. Finally, simulation results show that the edge network can significantly complement the core network, with much-reduced backhaul energy consumption and delay.
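The edge-core trade-off can be illustrated by a one-dimensional search: a fraction f of the workload is served at the edge and the rest traverses the backhaul to the core, with the objective combining energy and delay. All cost coefficients below are illustrative assumptions, and the paper's interior-point method replaces this brute force for the full MNIP.

```python
# Assumed toy cost model: the edge is energy-hungry but low-delay; the core
# is energy-efficient per unit but adds backhaul energy and higher delay.
W = 100.0                                     # total workload (arbitrary units)
E_EDGE, E_CORE, E_BACKHAUL = 0.8, 0.3, 0.4    # energy per unit workload
D_EDGE, D_CORE = 0.2, 1.0                     # delay per unit workload
LAM = 0.5                                     # energy/delay trade-off weight

def cost(f):
    edge, core = f * W, (1 - f) * W
    energy = E_EDGE * edge + (E_CORE + E_BACKHAUL) * core
    delay = D_EDGE * edge + D_CORE * core
    return energy + LAM * delay

# Brute-force search over the edge fraction in 1% steps.
best_f = min((i / 100 for i in range(101)), key=cost)
```

With these particular coefficients the delay savings at the edge outweigh its extra energy, so the minimizer pushes all workload to the edge; changing LAM or the backhaul cost shifts the optimum, which is exactly the provisioning question the paper studies.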
Funding: This work is supported by the National Natural Science Foundation of China (Grant Nos. 61332003 and 61303068) and the Natural Science Foundation of Hunan Province, China (Grant No. 2015JJ3024).
Abstract: Memristive technology has been widely explored due to its distinctive properties, such as nonvolatility, high density, versatility, and CMOS compatibility. For memristive devices, a general compact model is highly desirable for the realization of circuits and applications. In this paper, we propose a novel memristive model of TiOx-based devices that captures the negative differential resistance (NDR) behavior. The model is physics-oriented and passes Linn's criteria. It not only exhibits sufficient accuracy (I-V characteristics within 1.5% RMS), lower latency (below half that of the VTEAM model), and better generality than previous models, but also yields more precise predictions of long-term potentiation/depression (LTP/LTD). Finally, novel memristive-model-based methods are proposed for gray sketching and edge detection applications. These methods avoid the complex nonlinear functions required by their original counterparts. When the proposed model is used, they achieve an increased contrast ratio and accuracy (for gray sketching and edge detection, respectively) compared with the Simmons model. Our results suggest that a memristor-based network is a promising candidate for tackling the inefficiencies of traditional image processing methods.
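How such compact models are simulated can be shown with a generic linear-drift memristor (the classic HP model with a Joglekar window), which is NOT the paper's TiOx NDR model: the internal state w drifts with the current, and a sinusoidal drive sweeps the resistance between its limits.

```python
import math

R_ON, R_OFF, D = 100.0, 16e3, 10e-9   # on/off resistance (ohm), thickness (m)
MU = 1e-14                            # dopant mobility (m^2 s^-1 V^-1)
P = 2                                 # Joglekar window exponent

def simulate(v_amp=1.0, f=1.0, t_end=2.0, dt=1e-5):
    w = 0.5 * D                       # initial doped-region width
    resistances = []
    for k in range(int(t_end / dt)):
        t = k * dt
        # Device resistance is a convex mix of R_ON and R_OFF via w/D.
        r = R_ON * (w / D) + R_OFF * (1 - w / D)
        i = v_amp * math.sin(2 * math.pi * f * t) / r
        window = 1 - (2 * w / D - 1) ** (2 * P)   # Joglekar boundary window
        w += dt * MU * (R_ON / D) * i * window    # linear state drift
        w = min(max(w, 0.0), D)                   # clamp state to [0, D]
        resistances.append(r)
    return resistances

rs = simulate()
```

The window function suppresses drift near the device boundaries, so the state and resistance stay physical; the paper's model replaces this drift law with one that additionally reproduces the NDR branch.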
Abstract: This paper proposes a novel scheme based on minimum delay at the edges (MDE) for optical burst switching (OBS) networks, designed to overcome the long delay at the edge nodes of OBS networks. The MDE scheme features simultaneous burst assembly, channel scheduling, and pre-transmission of the control packet, together with an estimated setup and explicit release (ESXR) signaling protocol. The MDE scheme can minimize the delay experienced by data packets at the edge nodes and improve the end-to-end latency performance of OBS networks. In addition, the performance of the MDE scheme is analyzed in comparison with the conventional scheme.
Funding: This work was jointly supported by the National Natural Science Foundation of China (No. 69990540) and the Optical Technology Plan of Shanghai.
Abstract: To achieve lower assembly delay at optical burst switching edge nodes, this paper proposes an approach called current weight length prediction (CWLP) that improves the existing estimation mechanisms in burst assembly. The CWLP method adequately accounts for the traffic that has already arrived within the prediction window. A 'weight' parameter is introduced to make a dynamic tradeoff between current and past traffic under different offset times. Simulation results show that CWLP achieves a significant improvement in traffic estimation across various offset times and offered loads.
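The weighted tradeoff in CWLP can be sketched as follows: the predicted burst length blends an extrapolation of the currently arriving traffic with a history-based estimate, with the 'weight' w steering between them. The exact estimator in the paper differs; this is a minimal illustration under assumed units.

```python
def cwlp_predict(current_rate, offset_time, history_avg, w):
    """Blend extrapolated current traffic with the historical average.

    current_rate : arrival rate observed for the burst being assembled (bytes/ms)
    offset_time  : remaining assembly/offset window (ms)
    history_avg  : average length of previously assembled bursts (bytes)
    w            : weight in [0, 1]; w = 1 trusts only the current traffic
    """
    current_estimate = current_rate * offset_time
    return w * current_estimate + (1 - w) * history_avg

# With w = 1 the prediction follows the live traffic only;
# with w = 0 it falls back entirely on the history.
p_hi = cwlp_predict(current_rate=200.0, offset_time=5.0, history_avg=800.0, w=1.0)
p_lo = cwlp_predict(current_rate=200.0, offset_time=5.0, history_avg=800.0, w=0.0)
```

Intermediate weights interpolate between the two extremes, which is how the scheme adapts the estimate to different offset times and loads.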
Abstract: A scheduling algorithm for the edge nodes of optical burst switching (OBS) networks is proposed to guarantee the delay requirements of services with different classes of service (CoS) while providing a lower burst loss ratio. The performance of edge nodes based on the proposed algorithm is presented.
Funding: This work was supported by the National Basic Research 973 Program of China under Grant No. 2015CB352300, the National Natural Science Foundation of China under Grant Nos. 61402247, 61272231, and 61133008, and the Beijing Key Laboratory of Networked Multimedia.
Abstract: The emergence of smart edge-network content hotspots, which are equipped with large storage space (e.g., several GBs), opens up the opportunity to deliver videos at the edge network. Different from both the conventional content delivery network (CDN) and the peer-to-peer (P2P) scheme, this new delivery paradigm, namely edge video CDN, requires up to millions of edge hotspots located in users' homes and offices to be coordinately managed to serve mobile video content. Specifically, building an edge video CDN involves two challenges: how edge content hotspots should be organized to serve users, and how content should be replicated to them at different locations. To address these challenges, we propose a data-driven design. First, we formulate an edge region partition problem that jointly maximizes the quality experienced by users and minimizes the replication cost; the problem is NP-hard in nature, and we design a Voronoi-like partition algorithm to generate optimal service cells. Second, to replicate content to edge-network content hotspots, we propose a replication strategy based on edge request prediction, which carries out the replication in a server peak-offloading manner. We implement our design and use trace-driven experiments to verify its effectiveness. Compared with a conventional centralized CDN and popularity-based replication, our design significantly improves users' quality of experience, in terms of perceived bandwidth and latency, by up to 40%.
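The core step of the Voronoi-like partition can be illustrated directly: assigning each user to the nearest edge hotspot induces the service cells. The paper's algorithm additionally balances the replication cost across cells, which is omitted in this sketch, and the coordinates below are invented for illustration.

```python
import math

def nearest_hotspot(user, hotspots):
    # Voronoi assignment: the hotspot whose site is closest to the user.
    return min(hotspots, key=lambda h: math.dist(user, hotspots[h]))

def partition(users, hotspots):
    cells = {h: [] for h in hotspots}
    for u in users:
        cells[nearest_hotspot(u, hotspots)].append(u)
    return cells

hotspots = {"h1": (0.0, 0.0), "h2": (10.0, 0.0)}
users = [(1.0, 1.0), (2.0, -1.0), (9.0, 0.5), (6.0, 0.0)]
cells = partition(users, hotspots)
```

Each resulting cell is a set of users served by one hotspot; the full algorithm then perturbs these cell boundaries so that quality of experience and replication cost are jointly optimized rather than distance alone.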