Funding: Supported by the National Science Foundation of China under Grants 62271062 and 62071063, and by the Zhijiang Laboratory Open Project Fund 2020LCOAB01.
Abstract: With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources because of the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm, the Computing Power Network (CPN), has been proposed. A computing power network connects ubiquitous and heterogeneous computing power resources through networking to enable flexible computing power scheduling. In this survey, we make an exhaustive review of state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues concerning computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. Applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented as well.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. U20A20227, 62076208, and 62076207), the Chongqing Talent Plan "Contract System" Project (Grant No. CQYC20210302257), the National Key Laboratory of Smart Vehicle Safety Technology Open Fund Project (Grant No. IVSTSKL-202309), the Chongqing Technology Innovation and Application Development Special Major Project (Grant No. CSTB2023TIAD-STX0020), the College of Artificial Intelligence, Southwest University, and the State Key Laboratory of Intelligent Vehicle Safety Technology.
Abstract: Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors for pulse voltages, allowing time-series prediction analysis. Our experiment uses a double stochastic nanowire network architecture for processing multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real time-series datasets, making neuromorphic nanowire networks promising for the physical implementation of reservoir computing.
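The reservoir computing scheme described above can be illustrated with a conventional software reservoir standing in for the physical nanowire network. The sketch below is a minimal echo-state-style example, assuming a fixed random reservoir driven by a pulse-encoded input and a ridge-regression readout; the network size, leak rate, pulse encoding, and one-step-ahead prediction task are illustrative assumptions, not the authors' double stochastic nanowire architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pulse-encoded input: a sine wave quantized into discrete voltage levels (illustrative).
T = 1000
signal = np.sin(0.1 * np.arange(T + 1))
u = np.round(2 * signal[:-1]) / 2                 # crude "voltage pulse" encoding
target = signal[1:]                               # one-step-ahead prediction target

# Fixed random reservoir (software stand-in for the nanowire network layer).
N = 100                                           # reservoir nodes
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(0, 1, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

leak = 0.3
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = (1 - leak) * x + leak * np.tanh(W_in * u[t] + W @ x)
    states[t] = x

# Ridge-regression readout trained on part of the run, after a washout period.
train = slice(200, 800)
lam = 1e-6
A = states[train]
w_out = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ target[train])

pred = states[800:] @ w_out
nrmse = np.sqrt(np.mean((pred - target[800:]) ** 2)) / np.std(target[800:])
print(f"test NRMSE: {nrmse:.3f}")
```

Only the linear readout is trained; the reservoir itself stays fixed, which is the property that makes a physical nanowire network usable in this role.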
Funding: Supported by the National Natural Science Foundation of China (Nos. 61974164, 62074166, 62004219, 62004220, and 62104256).
Abstract: Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive computing power. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results and are continuously producing significant results. Although there is already a large body of literature on SNNs, there is still a lack of a comprehensive review of SNNs from the perspective of improving performance and practicality while incorporating the latest research results. Starting from this issue, this paper elaborates on SNNs along the complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. The connotation and development status of SNN computing are therefore reviewed systematically and comprehensively from four aspects: composition structure, data sets, learning algorithms, and software/hardware development platforms. The development characteristics of SNNs in intelligent computing are then summarized, the current challenges of SNNs are discussed, and future development directions are outlined. Our research shows that in the fields of machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatio-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for using SNNs. SNNs show broad development prospects for brain-like computing.
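As a concrete example of the spiking neuron models that SNNs are built from, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron driven by a constant input current; the membrane parameters and input value are illustrative assumptions rather than values taken from the surveyed works.

```python
import numpy as np

# Leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau_m
dt, t_max = 0.1, 100.0                          # time step and duration (ms)
tau_m, R = 10.0, 1.0                            # membrane time constant (ms), resistance
v_rest, v_reset, v_th = -65.0, -65.0, -50.0     # resting, reset, and threshold potentials (mV)
I = 20.0                                        # constant input current (illustrative)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    v += dt * (-(v - v_rest) + R * I) / tau_m   # Euler integration of the membrane potential
    if v >= v_th:                               # threshold crossing emits a spike
        spike_times.append(step * dt)
        v = v_reset                             # hard reset after the spike

rate = len(spike_times) / (t_max / 1000.0)
print(f"{len(spike_times)} spikes, mean firing rate about {rate:.1f} Hz")
```

Information in an SNN is carried by the timing of such spikes rather than by continuous activations, which is the source of the energy-efficiency and temporal-processing advantages discussed above.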
Funding: Supported by the National Key Research and Development Program of China (2018YFC1504502).
Abstract: Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on the device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
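To make the bandit-style offloading decision concrete, the sketch below implements a plain UCB1 rule for choosing among candidate offloading targets with unknown delay statistics; the reward definition (one minus the observed delay), the candidate set, and the simulated delay distributions are illustrative assumptions, not the device-cooperation-aided variant proposed in the paper.

```python
import math
import random

random.seed(1)

# Candidate offloading targets with unknown mean task delays (simulated environment).
true_mean_delay = {"local": 0.9, "edge": 0.4, "satellite": 0.6}
arms = list(true_mean_delay)

counts = {a: 0 for a in arms}
reward_sum = {a: 0.0 for a in arms}

def ucb_choice(t):
    # Play each arm once, then pick the arm with the highest UCB index.
    for a in arms:
        if counts[a] == 0:
            return a
    return max(arms, key=lambda a: reward_sum[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 2001):
    a = ucb_choice(t)
    delay = random.gauss(true_mean_delay[a], 0.1)   # observed task completion delay
    reward = 1.0 - delay                            # shorter delay -> higher reward
    counts[a] += 1
    reward_sum[a] += reward

print({a: counts[a] for a in arms})                 # most pulls go to the lowest-delay target
```

The confidence term shrinks as an arm is sampled more often, which is how the learner balances exploring uncertain offloading targets against exploiting the one that currently looks best.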
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grants 62371082 and 62001076, in part by the National Key R&D Program of China under Grant 2021YFB1714100, and in part by the Natural Science Foundation of Chongqing under Grants CSTB2023NSCQ-MSX0726 and cstc2020jcyjmsxmX0878.
Abstract: Fog computing is considered a solution to accommodate the booming requirements of a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of the fog nodes (FN), which is used to select blockchain nodes (BN) from the FNs to participate in the consensus process. With the Rivest-Shamir-Adleman (RSA) encryption algorithm applied in the blockchain system, FNs can verify the identity of a node through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithms and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithm efficiently reduces the network overhead and obtains a considerable performance improvement compared to related algorithms in the previous literature.
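The reputation-driven selection of blockchain nodes can be sketched as follows: each fog node's credibility is updated from its recent consensus behavior, and only the highest-reputation nodes join the consensus committee. The exponential update rule, thresholds, and committee size below are illustrative assumptions, not the exact reputation model or R-DBFT procedure from the paper.

```python
import random

random.seed(7)

# Initial credibility of fog nodes (FN); values in [0, 1].
reputation = {f"FN{i}": 0.5 for i in range(10)}
ALPHA = 0.2            # weight of the newest observation (assumption)
COMMITTEE_SIZE = 4     # number of blockchain nodes (BN) drawn from the FNs

def update_reputation(node, behaved_correctly):
    """Exponentially weighted update of a fog node's credibility."""
    observation = 1.0 if behaved_correctly else 0.0
    reputation[node] = (1 - ALPHA) * reputation[node] + ALPHA * observation

def select_blockchain_nodes():
    """Pick the most credible fog nodes to participate in consensus."""
    ranked = sorted(reputation, key=reputation.get, reverse=True)
    return ranked[:COMMITTEE_SIZE]

# Simulate a few consensus rounds with one occasionally faulty node.
for _ in range(50):
    for node in reputation:
        faulty = node == "FN3" and random.random() < 0.6
        update_reputation(node, not faulty)

print(select_blockchain_nodes())   # the frequently faulty node is unlikely to be selected
```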
Funding: Supported by the National Natural Science Foundation of China (No. 62001045), the Beijing Municipal Natural Science Foundation (No. 4214059), the Fund of the State Key Laboratory of IPOC (BUPT) (No. IPOC2021ZT17), and the Fundamental Research Funds for the Central Universities (No. 2022RC09).
Abstract: Inter-datacenter elastic optical networks (EON) need to serve cloud computing requests that require not only connectivity and computing resources but also network survivability. In this paper, to realize joint allocation of computing and connectivity resources in survivable inter-datacenter EONs, a survivable routing, modulation level, spectrum, and computing resource allocation (SRMLSCRA) algorithm and three datacenter selection strategies, i.e., Computing Resource First (CRF), Shortest Path First (SPF), and Random Destination (RD), are proposed for different scenarios. Unicast and manycast are applied to the communication of computing requests, and the routing strategies are calculated respectively. Simulation results show that SRMLSCRA-CRF can serve the largest number of protected computing tasks, and the requested computation blocking probability is reduced by 29.2%, 28.3%, and 30.5% compared with SRMLSCRA-SPF, SRMLSCRA-RD, and the benchmark EPS-RMSA algorithm, respectively. Therefore, it is more applicable to networks with heavy computing loads. Besides, SRMLSCRA-SPF consumes the least spectrum, thereby exhibiting its suitability for scenarios where the amount of computation is small and communication resources are scarce. The results demonstrate that the proposed methods realize joint allocation of computing and connectivity resources, provide efficient protection for services under single-link failure, and occupy less spectrum.
Funding: Supported by the National Natural Science Foundation of China under Grant 62272391, and in part by the Key Industry Innovation Chain of Shaanxi under Grant 2021ZDLGY05-08.
Abstract: As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for achieving efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. In contrast, Digital Twins (DT) can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting smooth operation of WCPN services. In this paper, we propose a DT architecture for blockchain-empowered WCPN that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated annealing-based near-optimal placement algorithm (ISAPA) to achieve minimum average DT synchronization latency under the constraint of DT error. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
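The near-optimal placement idea behind ISAPA can be illustrated with a generic simulated annealing loop that assigns digital twins to hosting nodes so as to minimize average synchronization latency; the latency matrix, neighbor move, and cooling schedule below are illustrative assumptions rather than the paper's exact formulation or DT-error constraint.

```python
import math
import random

random.seed(3)

N_DT, N_HOSTS = 8, 4
# latency[d][h]: synchronization latency if digital twin d is placed on host h (illustrative).
latency = [[random.uniform(1.0, 10.0) for _ in range(N_HOSTS)] for _ in range(N_DT)]

def avg_latency(placement):
    return sum(latency[d][h] for d, h in enumerate(placement)) / N_DT

# Start from a random placement and anneal.
current = [random.randrange(N_HOSTS) for _ in range(N_DT)]
best = current[:]
temp = 5.0
while temp > 1e-3:
    candidate = current[:]
    candidate[random.randrange(N_DT)] = random.randrange(N_HOSTS)   # move one DT to another host
    delta = avg_latency(candidate) - avg_latency(current)
    if delta < 0 or random.random() < math.exp(-delta / temp):      # occasionally accept worse moves
        current = candidate
        if avg_latency(current) < avg_latency(best):
            best = current[:]
    temp *= 0.995                                                    # geometric cooling

print(f"best average DT synchronization latency: {avg_latency(best):.2f}")
```

Accepting occasional uphill moves at high temperature lets the search escape local optima that a pure greedy placement would get stuck in, which is the motivation for preferring annealing over enumeration at larger problem sizes.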
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFB2401204).
Abstract: As edge computing services soar, resource fragmentation is greatly worsened in elastic optical networks (EON). To solve this problem, this article proposes a fragmentation prediction model that makes full use of the gated recurrent unit (GRU) algorithm. Based on the fragmentation prediction model, a virtual optical network mapping scheme is presented for edge-computing-driven EON. Virtual network mapping can then be conducted successively while minimizing the fragmentation degree over the whole EON. Test results show that the proposed approach can reduce the blocking rate, and the ability to support virtual optical network services is greatly improved.
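A minimal version of a GRU-based fragmentation predictor is sketched below using PyTorch: a single-layer GRU reads a sliding window of past fragmentation-degree samples and a linear head predicts the next value. The window length, layer sizes, and synthetic training series are illustrative assumptions, not the paper's actual model or measured data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class FragmentationPredictor(nn.Module):
    """GRU over a window of past fragmentation-degree values -> next value."""
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        _, h = self.gru(x)                # h: (1, batch, hidden)
        return self.head(h[-1])           # (batch, 1)

# Synthetic fragmentation-degree series (illustrative stand-in for measured data).
t = torch.arange(0, 200, dtype=torch.float32)
series = 0.5 + 0.3 * torch.sin(0.2 * t) + 0.05 * torch.randn_like(t)

window = 16
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = FragmentationPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4f}")
```

The predicted fragmentation degree can then be used as the cost that the virtual network mapping scheme tries to minimize when choosing substrate resources.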
Abstract: Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Therefore, there is a need to implement proper security measures to overcome these issues. Using emerging trends, like machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and support a secure and safe future for cloud computing. It was concluded that the security implications of edge computing can readily be addressed with the help of new technologies and techniques.
Funding: Supported by the National Key Research and Development Program of China under Grants 2021YFB2900504, 2020YFB1807900, and 2020YFB1807903, and by the National Science Foundation of China under Grants 62271062 and 62071063.
Abstract: As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the enhanced, powerful file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based cache and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file-set parameters, we propose a near-optimal partition-based cache and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The complex nonlinear programming problem is settled by jointly utilizing the standard particle swarm optimization (PSO) method and a greedy multiple knapsack choice problem (MKCP) optimization method. Numerical results show that, compared with the terrestrial-only cache strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
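The greedy knapsack-style step of such a cache placement can be sketched as follows: a node fills its limited cache with the file partitions that contribute the largest estimated gain in successful transmission probability per unit of cache space. The benefit values, partition sizes, and cache capacity below are illustrative assumptions, not the paper's derived stochastic-geometry expressions.

```python
# Greedy knapsack-style cache filling for one node (illustrative).
# Each candidate partition has an estimated benefit (gain in successful
# transmission probability) and a size; the cache capacity is limited.
partitions = [
    {"name": "fileA_part1", "benefit": 0.30, "size": 4},
    {"name": "fileA_part2", "benefit": 0.10, "size": 4},
    {"name": "fileB_part1", "benefit": 0.25, "size": 6},
    {"name": "fileC_part1", "benefit": 0.15, "size": 2},
    {"name": "fileD_part1", "benefit": 0.08, "size": 5},
]
CACHE_CAPACITY = 12

def greedy_cache(items, capacity):
    """Pick items by benefit density until the cache is full."""
    chosen, used = [], 0
    for item in sorted(items, key=lambda p: p["benefit"] / p["size"], reverse=True):
        if used + item["size"] <= capacity:
            chosen.append(item["name"])
            used += item["size"]
    return chosen, used

cached, used = greedy_cache(partitions, CACHE_CAPACITY)
print(cached, f"({used}/{CACHE_CAPACITY} units used)")
```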
Abstract: Nowadays, when the lifespan of sensor nodes is threatened by the shortage of energy available for communication, sink mobility is an excellent technique for increasing network lifetime. When communicating via a WSN, the use of nodes as a transmission method eliminates the need for a physical medium. Sink mobility in a dynamic network topology presents a problem for sensor nodes that have reserved resources. Unless the route is revised and changed to reflect the location of the mobile sink, it will be inefficient for delivering data effectively. In the clustering strategy, nodes are grouped together to improve communication, and the cluster head receives data from compatible nodes. The sink receives the aggregated data from the head. The cluster head is the central node in the conventional technique. A single node uses more energy than a node that is routed to a dead node, and increasing the number of nodes using a route shortens its lifespan. The proposed work demonstrates how sensor node paths can be modified effectively at a lower cost by utilising a virtual grid. The best routes are maintained mostly through sink node communication on routes based on virtual grid-based dynamic route adjustment (VGDRA). Only specific nodes are required to re-align data delivery to the mobile sink in accordance with the new route-reconstruction paradigms. According to the results, VGDRA schemes have a longer lifespan because of the reduced number of loops.
Funding: Supported by the Shaanxi Key R&D Program Project (2021GY-100).
Abstract: With the rapid development of the Industrial Internet of Things (IIoT), the traditional centralized cloud processing model has encountered the challenges of high communication latency and high energy consumption in handling industrial big data tasks. This paper proposes a low-latency and low-energy path computing scheme for the above problems. The scheme is based on a cloud-fog network architecture. The computing resources of fog network devices in the fog computing layer are used to complete task processing step by step during the data interaction from industrial field devices to the cloud center. A collaborative scheduling strategy based on the particle diversity discrete binary particle swarm optimization (PDBPSO) algorithm is proposed to deploy manufacturing tasks to the fog computing layer reasonably. A task in the form of a directed acyclic graph (DAG) is mapped to a factory fog network in the form of an undirected graph (UG) to find an appropriate computing path for the task, significantly reducing the task processing latency under energy consumption constraints. Simulation experiments show that this scheme's latency performance outperforms both the strategy in which tasks are wholly offloaded to the cloud and the strategy in which tasks are entirely offloaded to edge equipment.
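The discrete binary PSO at the heart of such a scheduling strategy can be illustrated with a textbook binary PSO that decides, for each subtask, whether it runs in the fog layer (1) or stays on the cloud path (0); the sigmoid transfer function, the toy latency/energy model, and the swarm parameters are illustrative assumptions, not the paper's PDBPSO diversity mechanism or DAG-to-UG mapping.

```python
import numpy as np

rng = np.random.default_rng(42)

N_TASKS, N_PARTICLES, ITERS = 12, 20, 100
fog_latency = rng.uniform(1, 3, N_TASKS)      # latency if a task runs in the fog layer
cloud_latency = rng.uniform(4, 6, N_TASKS)    # latency if a task goes to the cloud
fog_energy = rng.uniform(2, 4, N_TASKS)
ENERGY_BUDGET = 25.0

def cost(bits):
    """Total latency, with a penalty if the fog energy budget is exceeded."""
    total_latency = np.where(bits == 1, fog_latency, cloud_latency).sum()
    energy = (bits * fog_energy).sum()
    return total_latency + 100.0 * max(0.0, energy - ENERGY_BUDGET)

pos = rng.integers(0, 2, size=(N_PARTICLES, N_TASKS))
vel = rng.normal(0, 1, size=(N_PARTICLES, N_TASKS))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid transfer to a bit probability
    pos = (rng.random(pos.shape) < prob).astype(int)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("offloading decision:", gbest, "cost:", round(cost(gbest), 2))
```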
Abstract: For intelligent surveillance videos, anomaly detection is extremely important. Deep learning algorithms have been popular for evaluating real-time surveillance recordings, like traffic accidents and criminal or unlawful incidents such as suicide attempts. Nevertheless, deep learning methods for classification, like convolutional neural networks, necessitate a lot of computing power. Quantum computing is a branch of technology that solves abnormal and complex problems using quantum mechanics. As a result, the focus of this research is on developing a hybrid quantum computing model based on deep learning. This research develops a Quantum Computing-based Convolutional Neural Network (QC-CNN) to extract features and classify anomalies from surveillance footage. A quantum-based circuit, such as the real amplitude circuit, is utilized to improve the performance of the model. To the best of the author's knowledge, this is the first work to employ quantum deep learning techniques to classify anomalous events in video surveillance applications. Thirteen anomaly classes from the UCF-Crime dataset are classified. Based on experimental results, the proposed model is capable of efficiently classifying data in terms of the confusion matrix, Receiver Operating Characteristic (ROC), accuracy, Area Under the Curve (AUC), precision, recall, and F1-score. The proposed QC-CNN has attained the best accuracy of 95.65 percent, which is 5.37% greater than other existing models. To measure the efficiency of the proposed work, QC-CNN is also evaluated against classical and quantum models.
Funding: Supported by the Postdoctoral Science Foundation of China (No. 2021M702441) and the National Natural Science Foundation of China (No. 61871283).
Abstract: To efficiently complete a complex computation task, the task should be decomposed into sub-computation tasks that run in parallel in edge computing. A Wireless Sensor Network (WSN) is a typical application of parallel computation. To achieve highly reliable parallel computation for a wireless sensor network, the network's lifetime needs to be extended. Therefore, a proper task allocation strategy is needed to reduce energy consumption and balance the load of the network. This paper proposes a task model and a cluster-based WSN model in edge computing. In our model, different tasks require different types of resources and different sensors provide different types of resources, so the model is heterogeneous, which makes it more practical. We then propose a task allocation algorithm that combines the Genetic Algorithm (GA) and the Ant Colony Optimization (ACO) algorithm. The algorithm concentrates on energy conservation and load balancing so that the lifetime of the network can be extended. The experimental results show the algorithm's effectiveness and advantages in energy conservation and load balancing.
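A stripped-down version of the evolutionary part of such a task-allocation scheme is sketched below: a genetic algorithm searches assignments of subtasks to sensor nodes with a fitness that rewards low total energy and balanced load. The energy model, population settings, and the absence of the ACO refinement stage are illustrative simplifications of the paper's combined GA/ACO approach.

```python
import random

random.seed(5)

N_TASKS, N_NODES = 10, 5
energy_cost = [[random.uniform(1, 5) for _ in range(N_NODES)] for _ in range(N_TASKS)]

def fitness(assign):
    """Lower is better: total energy plus a load-imbalance penalty."""
    load = [0.0] * N_NODES
    total = 0.0
    for task, node in enumerate(assign):
        total += energy_cost[task][node]
        load[node] += energy_cost[task][node]
    imbalance = max(load) - min(load)
    return total + 2.0 * imbalance

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)
    return a[:cut] + b[cut:]

def mutate(assign, rate=0.1):
    return [random.randrange(N_NODES) if random.random() < rate else n for n in assign]

population = [[random.randrange(N_NODES) for _ in range(N_TASKS)] for _ in range(40)]
for _ in range(200):
    population.sort(key=fitness)
    parents = population[:10]                       # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(30)]
    population = parents + children

best = min(population, key=fitness)
print("best assignment:", best, "fitness:", round(fitness(best), 2))
```

In the full scheme, a pheromone-based ACO stage would further refine such assignments; here the GA alone illustrates how energy and load balance enter the objective.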
Abstract: Quantum computing is a field with increasing relevance as quantum hardware improves and more applications of quantum computing are discovered. In this paper, we demonstrate the feasibility of modeling Ising Model Hamiltonians on the IBM quantum computer. We developed quantum circuits to simulate these systems more efficiently for both closed- and open-boundary Ising models, with and without perturbations. We tested these various geometries of systems in both 1-D and 2-D space to mimic two real systems: magnetic materials and biological neural networks (BNNs). Our quantum model is more efficient than classical computers, which can struggle to simulate large, complex systems of particles.
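As a classical reference point for the circuits described above, the sketch below builds the dense Hamiltonian of a small 1-D transverse-field Ising chain with either open or closed (periodic) boundary conditions and computes its ground-state energy by exact diagonalization; the coupling and field values are illustrative assumptions, and this brute-force construction is exactly the kind of exponentially scaling classical computation that quantum circuits aim to sidestep.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def ising_hamiltonian(n, J=1.0, h=0.5, periodic=False):
    """H = -J * sum_i Z_i Z_{i+1} - h * sum_i X_i on an n-site chain."""
    dim = 2 ** n
    H = np.zeros((dim, dim))
    bonds = [(i, i + 1) for i in range(n - 1)]
    if periodic:
        bonds.append((n - 1, 0))          # closed-boundary coupling
    for i, j in bonds:
        ops = [Z if k in (i, j) else I2 for k in range(n)]
        H -= J * kron_chain(ops)
    for i in range(n):                    # transverse-field (perturbation) terms
        ops = [X if k == i else I2 for k in range(n)]
        H -= h * kron_chain(ops)
    return H

for periodic in (False, True):
    H = ising_hamiltonian(6, periodic=periodic)
    e0 = np.linalg.eigvalsh(H)[0]
    print(f"{'closed' if periodic else 'open'} boundary, ground-state energy: {e0:.4f}")
```

The matrix dimension doubles with every added spin, which is why such exact classical treatment is limited to small chains and motivates the quantum-circuit approach.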
Funding: Project supported in part by the National Key Research and Development Program of China (Grant No. 2021YFA0716400), the National Natural Science Foundation of China (Grant Nos. 62225405, 62150027, 61974080, 61991443, 61975093, 61927811, 61875104, 62175126, and 62235011), the Ministry of Science and Technology of China (Grant Nos. 2021ZD0109900 and 2021ZD0109903), the Collaborative Innovation Center of Solid-State Lighting and Energy-Saving Electronics, and the Tsinghua University Initiative Scientific Research Program.
Abstract: AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are thirsty for computing power, which is barely satisfied by conventional computing hardware. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand for AI computing power. To address this issue, technical approaches such as neuromorphic computing attract great attention because they break with the von Neumann architecture and handle AI algorithms with much greater parallelism and energy efficiency. Inspired by the architecture of the human neural network, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in neuromorphic architectures such as spiking neural networks (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Funding: Supported by the National Natural Science Foundation of China (62101088, 61801076, 61971336), the Natural Science Foundation of Liaoning Province (2022-MS-157, 2023-MS-108), the Key Laboratory of Big Data Intelligent Computing Funds for Chongqing University of Posts and Telecommunications (BDIC-2023-A-003), and the Fundamental Research Funds for the Central Universities (3132022230).
Abstract: The interconnection of all things challenges traditional communication methods, and Semantic Communication and Computing (SCC) will become a new solution. It is a challenging task to accurately detect, extract, and represent semantic information in research on SCC-based networks. In previous research, researchers usually use convolution to extract the feature information of a graph and perform the corresponding node classification task. However, the content of semantic information is quite complex. Although graph convolutional neural networks provide an effective solution for node classification tasks, the extracted feature information suffers varying degrees of loss because of their limitations in representing multiple relational patterns and their inability to recognize and analyze higher-order local structures. Therefore, this paper extends from a single-layer topology network to a multi-layer heterogeneous topology network. Bidirectional Encoder Representations from Transformers (BERT) word vectors are introduced to extract the semantic features in the network, and the existing graph neural network is improved by combining it with a higher-order local feature module of the network representation model. A multi-layer network embedding algorithm on SCC-based networks with motifs is proposed to complete the task of end-to-end node classification. We verify the effectiveness of the algorithm on a real multi-layer heterogeneous network.
Funding: Supported by the National Key Research and Development Plan (No. 2022YFB2902701) and the Key Natural Science Foundation of Shenzhen (No. JCYJ20220818102209020).
Abstract: Satellite-terrestrial networks possess the ability to transcend the geographical constraints inherent in traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which is an important development direction for future communications. In this paper, we consider a multi-scenario network model under the coverage of a low earth orbit (LEO) satellite, which can provide computing resources to users in remote areas to improve task processing efficiency. However, LEO satellites experience limitations in computing and communication resources, and the channels are time-varying and complex, which makes the extraction of state information a daunting task. Therefore, we explore the dynamic resource management issue of joint computing, communication resource allocation, and power control for multi-access edge computing (MEC). To tackle this formidable issue, we transform it into a Markov decision process (MDP) problem and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state-information features to enhance the training process. Simulation results show that the proposed algorithm is capable of effectively reducing the long-term average delay and energy consumption of the tasks.
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFB4701200) and the National Natural Science Foundation of China (NSFC) (Grant Nos. T2121003 and 52205004).
Abstract: Lower limb exoskeletons are used to assist wearers in various scenarios such as medical and industrial settings. Complex modeling errors of the exoskeleton in different application scenarios pose challenges to the robustness and stability of its control algorithm. The Radial Basis Function (RBF) neural network is widely used to compensate for modeling errors. To solve the problem that current RBF neural network controllers cannot guarantee asymptotic stability, a neural network robust control algorithm based on the computed torque method is proposed in this paper, focusing on trajectory tracking. It innovatively incorporates a robust adaptive term while introducing the RBF neural network term, improving the ability to compensate for modeling errors. The stability of the algorithm is proved by the Lyapunov method, and the effectiveness of the robust adaptive term is verified by simulation. Experiments wearing the exoskeleton under different walking speeds and scenarios were carried out, and the results show that the absolute values of the tracking errors of the hip and knee joints of the exoskeleton are consistently less than 1.5° and 2.5°, respectively. The proposed control algorithm effectively compensates for modeling errors and exhibits high robustness.
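The structure of such a controller can be sketched for a single joint: a computed-torque term built from the nominal model, an RBF network term that estimates the modeling error from the tracking state, and a robust term that bounds the residual. The one-degree-of-freedom dynamics, gains, RBF centers, and adaptation law below are illustrative assumptions, not the exoskeleton's actual multi-joint model or the paper's stability-proof parameters.

```python
import numpy as np

# One-joint toy plant: m*ddq + c*dq + g*sin(q) = tau + d(t), with d(t) an unmodeled disturbance.
m, c, g = 1.0, 0.5, 2.0
m_hat, c_hat, g_hat = 0.8, 0.4, 1.6          # imperfect nominal model used by the controller

Kp, Kd, Krob = 100.0, 20.0, 1.0              # PD and robust gains (assumptions)
centers = np.linspace(-2.0, 2.0, 9)          # RBF centers over the combined error state
width = 0.5
weights = np.zeros(len(centers))             # RBF output weights, adapted online
gamma = 5.0                                   # adaptation rate

def rbf(s):
    return np.exp(-((s - centers) ** 2) / (2 * width ** 2))

dt, q, dq = 0.001, 0.0, 0.0
for step in range(int(5.0 / dt)):
    t = step * dt
    qd, dqd, ddqd = np.sin(t), np.cos(t), -np.sin(t)   # desired joint trajectory
    e, de = qd - q, dqd - dq
    s = de + 5.0 * e                                   # combined tracking error

    phi = rbf(s)
    tau = (m_hat * (ddqd + Kd * de + Kp * e)           # computed-torque term from the nominal model
           + c_hat * dq + g_hat * np.sin(q)
           + weights @ phi                             # RBF compensation of the modeling error
           + Krob * np.sign(s))                        # robust term against the residual error
    weights += gamma * phi * s * dt                    # adaptation law for the RBF weights

    disturbance = 0.3 * np.sin(2 * t)
    ddq = (tau + disturbance - c * dq - g * np.sin(q)) / m
    dq += ddq * dt
    q += dq * dt

print(f"final tracking error: {abs(np.sin(5.0) - q):.4f} rad")
```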
Funding: Supported in part by the Open Research Fund of the Joint Laboratory on Cyberspace Security, China Southern Power Grid (Grant No. CSS2022KF03), the Science and Technology Planning Project of Guangzhou, China (Grant No. 202201010388), and the Fundamental Research Funds for the Central Universities.
Abstract: The blockchain-empowered Internet of Vehicles (IoV) enables various services and achieves data security and privacy, significantly advancing modern vehicle systems. However, the increased frequency of data transmission and complex network connections among nodes also make them more susceptible to adversarial attacks. As a result, an efficient intrusion detection system (IDS) becomes crucial for securing the IoV environment. Existing IDSs based on convolutional neural networks (CNN) often suffer from high training time and storage requirements. In this paper, we propose a lightweight IDS solution to protect IoV against both intra-vehicle and external threats. Our approach achieves superior performance, as demonstrated by key metrics such as accuracy and precision. Specifically, our method achieves accuracy rates ranging from 99.08% to 100% on the Car-Hacking dataset, with a remarkably short training time.