The utilization of processing capabilities within the detector holds significant promise for addressing energy consumption and latency challenges, especially in dynamic motion recognition tasks, where the generation of extensive information and the need for frame-by-frame analysis necessitate substantial data transfers. Herein, we present a novel approach for dynamic motion recognition that leverages a spatial-temporal in-sensor computing system rooted in multiframe integration within the photodetector. Our approach introduces a retinomorphic MoS₂ photodetector device for motion detection and analysis. The device generates informative final states that nonlinearly embed both past and present frames. Subsequent multiply-accumulate (MAC) calculations are then performed efficiently as the classifier. When evaluating our devices for target detection and direction classification, we achieved a recognition accuracy of 93.5%. By eliminating the need for frame-by-frame analysis, our system not only achieves high precision but also enables energy-efficient in-sensor computing.
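As a rough illustration of this readout scheme, the following sketch treats the device's final state as a nonlinear embedding of two consecutive frames and trains a single MAC (linear) layer as the classifier. The frame generator, decay constant, and noise level are hypothetical, not the paper's device parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_state(prev_frame, curr_frame, decay=0.6):
    """Toy stand-in for the retinomorphic pixel response: the stored state
    nonlinearly embeds the decayed past frame and the present frame."""
    return np.tanh(decay * prev_frame + curr_frame)

def moving_frames(direction):
    """Hypothetical 8x8 frames of a blob moving right (+1) or left (-1)."""
    f0 = np.zeros((8, 8)); f0[3:5, 3:5] = 1.0
    f1 = np.roll(f0, direction, axis=1)
    return f0, f1

# Build a small training set of flattened final states.
X, y = [], []
for d in (-1, 1):
    for _ in range(50):
        f0, f1 = moving_frames(d)
        noisy = final_state(f0 + 0.05 * rng.standard_normal(f0.shape), f1)
        X.append(noisy.ravel()); y.append(max(d, 0))
X, y = np.array(X), np.array(y)

# The classifier is one multiply-accumulate (MAC) layer: a least-squares
# linear readout over final states; no frame-by-frame processing needed.
A = np.c_[X, np.ones(len(X))]
W, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = (A @ W) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())
```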
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend toward ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources because of the computing-power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm is proposed: the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to enable flexible scheduling of computing power. In this survey, we provide an exhaustive review of state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues concerning computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
AI development has brought great success in advancing the information age. At the same time, the large-scale artificial neural networks used to build AI systems demand computing power that conventional computing hardware can barely supply. In the post-Moore era, the computing-power gains brought by CMOS size reduction in very-large-scale integrated circuits (VLSI) struggle to meet the growing demand of AI workloads. To address this issue, technical approaches such as neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of the human neural network, neuromorphic computing hardware is brought to life through novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture such as a spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
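For readers unfamiliar with the SNN primitives such reviews cover, a minimal sketch of a textbook leaky integrate-and-fire (LIF) neuron is given below; the time constant, threshold, and input current are illustrative values, not parameters of any specific device discussed in the paper.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Textbook leaky integrate-and-fire (LIF) dynamics:
    tau * dV/dt = -V + I. Emits a spike and resets when V crosses threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt / tau * (-v + i)
        if v >= v_thresh:
            spikes.append(1); v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold current produces a regular spike train whose
# rate encodes the input amplitude -- the basic code used by many SNNs.
print(lif_neuron(np.full(100, 1.5)).sum(), "spikes in 100 steps")
```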
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Therefore, proper security measures need to be implemented to overcome these issues. Emerging trends, such as machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and contribute to a secure and safe future for cloud computing. It is concluded that the security implications of edge computing can readily be addressed with the help of new technologies and techniques.
Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors in response to pulse voltages, allowing time series prediction analysis. Our experiment uses a double stochastic nanowire network architecture for processing multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real time-series datasets, making neuromorphic nanowire networks promising for the physical implementation of reservoir computing.
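To make the reservoir computing scheme concrete, here is a minimal echo-state-style sketch in which a fixed random recurrent network stands in for the nanowire dynamics and only a ridge-regression readout is trained. The reservoir size, spectral radius, and toy signal are assumptions for illustration, not the paper's physical system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random reservoir standing in for the nanowire network dynamics;
# only the linear readout is trained, as in standard reservoir computing.
N = 100
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a toy series (sum of sines).
t = np.arange(1200)
u = np.sin(0.05 * t) + 0.5 * np.sin(0.13 * t)
S = run_reservoir(u[:-1])
X_train, y_train = S[200:1000], u[201:1001]   # discard washout
ridge = 1e-6
W_out = np.linalg.solve(X_train.T @ X_train + ridge * np.eye(N),
                        X_train.T @ y_train)
y_pred = S[1000:] @ W_out
print("test RMSE:", np.sqrt(np.mean((y_pred - u[1001:]) ** 2)))
```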
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive demands for computing power. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, their poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results, and are continuously producing significant results. Although there is already a large body of literature on SNNs, a comprehensive review of SNNs from the perspective of improving performance and practicality, incorporating the latest research results, is still lacking. Starting from this issue, this paper elaborates on SNNs along their complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. The connotation and development status of SNN computing are therefore reviewed systematically and comprehensively from four aspects: composition structure, datasets, learning algorithms, and software/hardware development platforms. Then the development characteristics of SNNs in intelligent computing are summarized, the current challenges of SNNs are discussed, and future development directions are prospected. Our research shows that in the fields of machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been exploited more fully, and the development of programming and deployment tools has lowered the threshold for the use of SNNs. SNNs show broad development prospects for brain-like computing.
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decisions and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme for offloading decisions and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
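A minimal sketch of the bandit flavor of such an offloading decision appears below: a UCB-style rule, adapted to delay minimization by selecting the smallest lower confidence bound, learns the unknown mean completion delays of candidate targets online. The delay values and noise model are invented for illustration and are not the paper's system parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown true mean completion delays (seconds) of K candidate offloading
# targets -- hypothetical values for illustration only.
true_delay = np.array([0.8, 0.5, 1.1, 0.65])
K, T = len(true_delay), 2000

counts = np.zeros(K)
mean_est = np.zeros(K)
total = 0.0
for t in range(1, T + 1):
    if t <= K:                      # play each arm once to initialize
        a = t - 1
    else:                           # UCB1 adapted to minimization: pick the
        bonus = np.sqrt(2 * np.log(t) / counts)   # smallest lower bound
        a = int(np.argmin(mean_est - bonus))
    d = true_delay[a] + 0.1 * rng.standard_normal()   # noisy observed delay
    counts[a] += 1
    mean_est[a] += (d - mean_est[a]) / counts[a]      # running mean update
    total += d

print("avg delay:", total / T, "best possible:", true_delay.min())
```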
Fog computing is considered a solution to accommodate the booming requirements emerging from a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper, we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of the fog nodes (FNs), which is used to select blockchain nodes (BNs) from the FNs to participate in the consensus process. Through the Rivest-Shamir-Adleman (RSA) encryption algorithm applied in the blockchain system, FNs can verify a node's identity through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithm and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithm efficiently reduces the network overhead and obtains a considerable performance improvement compared to related algorithms in the previous literature.
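The identity check rests on standard RSA signature verification; the sketch below uses textbook RSA with toy primes purely to show the public-key verification step. Real deployments would use 2048-bit keys and a padded signature scheme such as RSASSA-PSS, and the message string here is hypothetical.

```python
import hashlib

# Textbook RSA with tiny primes, purely to illustrate how a fog node can
# check another node's identity from its public key.
p, q = 61, 53                  # toy primes; n = p*q = 3233
n, phi = p * q, (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)        # the signer uses the private key

def verify(message: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h # anyone can check with the public key (e, n)

msg = b"block proposal from FN-7"
s = sign(msg)
print(verify(msg, s), verify(b"tampered", s))
```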
Inter-datacenter elastic optical networks (EONs) need to serve cloud computing requests that require not only connectivity and computing resources but also network survivability. In this paper, to realize the joint allocation of computing and connectivity resources in survivable inter-datacenter EONs, a survivable routing, modulation level, spectrum, and computing resource allocation (SRMLSCRA) algorithm and three datacenter selection strategies, i.e., Computing Resource First (CRF), Shortest Path First (SPF), and Random Destination (RD), are proposed for different scenarios. Unicast and manycast are applied to the communication of computing requests, and the routing strategies are calculated respectively. Simulation results show that SRMLSCRA-CRF can serve the largest number of protected computing tasks, with its computing-request blocking probability reduced by 29.2%, 28.3%, and 30.5% compared with SRMLSCRA-SPF, SRMLSCRA-RD, and the benchmark EPS-RMSA algorithm respectively. It is therefore better suited to networks with heavy computing loads. Besides, SRMLSCRA-SPF consumes the least spectrum, making it suitable for scenarios where the amount of computation is small and communication resources are scarce. The results demonstrate that the proposed methods realize the joint allocation of computing and connectivity resources, provide efficient protection for services under single-link failure, and occupy less spectrum.
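The three selection strategies are easy to prototype; the sketch below implements CRF, SPF, and RD over a hypothetical four-node topology, using Dijkstra distances as the path metric. The adjacency weights and free-capacity figures are invented, and real EON provisioning would additionally check spectrum continuity and modulation constraints.

```python
import heapq, random

# Toy topology: adjacency with link weights, plus free computing units per DC.
graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1, "D": 3},
         "C": {"A": 4, "B": 1, "D": 1}, "D": {"B": 3, "C": 1}}
free_cpu = {"B": 10, "C": 40, "D": 25}          # candidate datacenters

def dijkstra(src):
    """Standard shortest-path distances from src to every node."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def select_dc(src, strategy):
    dist = dijkstra(src)
    if strategy == "CRF":    # Computing Resource First: most free capacity
        return max(free_cpu, key=free_cpu.get)
    if strategy == "SPF":    # Shortest Path First: closest datacenter
        return min(free_cpu, key=dist.get)
    return random.choice(list(free_cpu))        # RD: Random Destination

for s in ("CRF", "SPF", "RD"):
    print(s, "->", select_dc("A", s))
```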
Unmanned aerial vehicle (UAV)-enabled edge computing is emerging as a potential enabler for the Artificial Intelligence of Things (AIoT) in the forthcoming sixth-generation (6G) communication networks. With the use of flexible UAVs, massive sensing data is gathered and processed promptly regardless of geographical location. Deep neural networks (DNNs) are becoming a driving force for extracting valuable information from sensing data. However, the lightweight servers installed on UAVs cannot meet the extremely high requirements of inference tasks because of the UAVs' limited battery capacities. In this work, we investigate a DNN model placement problem for AIoT applications, where trained DNN models are selected and placed on UAVs to execute inference tasks locally. Since it is impractical to obtain future DNN model request profiles and system operation states in UAV-enabled edge computing, the Lyapunov optimization technique is leveraged for the proposed DNN model placement problem. Based on the observed system state, an advanced online placement (AOP) algorithm is developed to solve the transformed problem in each time slot, which reduces DNN model transmission delay and disk I/O energy cost simultaneously while keeping the input data queues stable. Finally, extensive simulations are provided to demonstrate the effectiveness of the AOP algorithm. The numerical results show that the AOP algorithm reduces the model placement cost by 18.14% and the input data queue backlog by 29.89% on average compared with benchmark algorithms.
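The core of such online placement is the Lyapunov drift-plus-penalty rule: in each slot, choose the action minimizing V·cost plus the queue-weighted drift. The sketch below shows this structure on a made-up single-queue system; the action set, costs, service rates, and the trade-off knob V are all assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Drift-plus-penalty sketch: each slot, pick the action minimizing
#   V * cost(action) - Q * service(action),
# trading placement cost against input-queue stability.
V = 50.0                         # cost/stability trade-off knob
actions = {                      # action -> (placement cost, service rate)
    "keep_cached": (0.0, 2.0),
    "fetch_small": (1.0, 4.0),
    "fetch_large": (3.0, 7.0),
}

Q, total_cost = 0.0, 0.0
T = 10_000
for t in range(T):
    arrivals = rng.poisson(3.0)  # inference requests arriving this slot
    best = min(actions, key=lambda a: V * actions[a][0] - Q * actions[a][1])
    cost, mu = actions[best]
    total_cost += cost
    Q = max(Q + arrivals - mu, 0.0)   # queue backlog update

print(f"avg cost {total_cost / T:.3f}, final backlog {Q:.1f}")
```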
As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for achieving efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. Digital Twins (DT), by contrast, can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting the smooth operation of WCPN services. In this paper, we propose a DT for blockchain-empowered WCPN architecture that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated annealing-based near-optimal placement algorithm (ISAPA) to achieve the minimum average DT synchronization latency under the constraint of DT error. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
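As a rough sketch of the annealing half of this pair, the code below assigns digital twins to hosts by classic simulated annealing over a random latency matrix. The problem sizes, cooling schedule, and latencies are invented, and the paper's ISAPA adds refinements not modeled here.

```python
import math, random

random.seed(4)

# Near-optimal DT placement by simulated annealing: assign each of N digital
# twins to one of M hosts so that average synchronization latency is
# minimized. The latency matrix is random, standing in for measured delays.
N, M = 12, 4
lat = [[random.uniform(1, 10) for _ in range(M)] for _ in range(N)]

def avg_latency(assign):
    return sum(lat[i][assign[i]] for i in range(N)) / N

assign = [random.randrange(M) for _ in range(N)]
best_cost = avg_latency(assign)
T = 5.0
while T > 1e-3:
    i, h = random.randrange(N), random.randrange(M)   # random single move
    cand = assign[:]; cand[i] = h
    delta = avg_latency(cand) - avg_latency(assign)
    if delta < 0 or random.random() < math.exp(-delta / T):
        assign = cand                                 # accept (maybe uphill)
        best_cost = min(best_cost, avg_latency(assign))
    T *= 0.995                                        # geometric cooling

print("near-optimal avg DT sync latency:", round(best_cost, 3))
```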
As edge computing services soar, the problem of resource fragmentation is greatly worsened in elastic optical networks (EONs). To solve this problem, this article proposes a fragmentation prediction model that makes full use of the gated recurrent unit (GRU) algorithm. Based on the fragmentation prediction model, a virtual optical network mapping scheme is presented for edge-computing-driven EONs. Virtual network mapping can then be conducted successively while minimizing the fragmentation degree over the whole EON. Test results show that the proposed approach can reduce the blocking rate and greatly improve the ability to support virtual optical network services.
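A minimal GRU-based predictor of this kind is sketched below in PyTorch: it regresses the next fragmentation-degree value from a sliding window of past values. The synthetic sine-based series, window length, and hidden size are placeholders for real fragmentation statistics collected from the EON.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.arange(0, 60, 0.1)
series = 0.5 + 0.3 * torch.sin(0.4 * t)          # toy fragmentation degree

win = 20
X = torch.stack([series[i:i + win] for i in range(len(series) - win)])
y = series[win:]
X = X.unsqueeze(-1)                               # (batch, seq, feature)

class FragPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.gru(x)
        return self.head(out[:, -1]).squeeze(-1)  # last hidden state -> value

model = FragPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):                          # full-batch training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```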
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demands for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The demonstration shows that the technical solution of the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and can adapt to the integration needs of computing power and network in various scenarios, such as user-oriented services, government and enterprise services, and open computing power.
The computation resources at a single node in Edge Computing (EC) are commonly limited and cannot execute large-scale computation tasks. To meet this challenge, an Offloading scheme leveraging on NEighboring node Resources (ONER) for EC over Fiber-Wireless (FiWi) access networks is proposed in this paper. In the ONER scheme, the FiWi network connects edge computing nodes with fiber and converges wireless and fiber connections seamlessly, so that it can support offloading transmission with low delay and wide bandwidth. Based on the ONER scheme supported by FiWi networks, computation tasks can be offloaded to edge computing nodes over a wider area without increasing the number of wireless hops (e.g., just one wireless hop), which achieves low delay. Additionally, an efficient Computation Resource Scheduling (CRS) algorithm based on the ONER scheme is proposed to make offloading decisions. The results show that more offloading requests can be satisfied and the average completion time of computation tasks decreases significantly with the ONER scheme and the CRS algorithm. Therefore, the ONER scheme and the CRS algorithm can schedule computation resources at neighboring edge computing nodes for offloading, meeting the challenge of large-scale computation tasks.
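In the spirit of such scheduling, though not the paper's actual CRS algorithm, the greedy sketch below assigns each task to the one-hop neighbor that yields the earliest completion time; the node capacities, transmission delays, and task sizes are all made-up numbers.

```python
# Greedy neighbor-offloading sketch: send each task to the one-wireless-hop
# neighbor (reachable over the FiWi segment) finishing it earliest.
nodes = {"local": 2.0, "edge1": 6.0, "edge2": 4.0}     # CPU cycles/s free
tx_delay = {"local": 0.0, "edge1": 0.3, "edge2": 0.2}  # one-hop offload cost
busy_until = {n: 0.0 for n in nodes}

tasks = [5.0, 3.0, 8.0, 2.0, 6.0]                      # required CPU cycles
for cycles in tasks:
    # completion time = queueing + transmission + execution
    finish = {n: busy_until[n] + tx_delay[n] + cycles / nodes[n]
              for n in nodes}
    target = min(finish, key=finish.get)
    busy_until[target] = finish[target]
    print(f"task({cycles} cyc) -> {target}, done at {finish[target]:.2f}s")
```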
Passive acoustic monitoring is emerging as a promising solution to the urgent, global need for new biodiversity assessment methods. The ecological relevance of the soundscape is increasingly recognised, and the affordability of robust hardware for remote audio recording is stimulating international interest in the potential of acoustic methods for biodiversity monitoring. The scale of the data involved requires automated methods; however, the development of acoustic sensor networks capable of sampling the soundscape across time and space and relaying the data to an accessible storage location remains a significant technical challenge, with power management at its core. Recording and transmitting large quantities of audio data is power intensive, hampering long-term deployment in remote, off-grid locations of key ecological interest. Rather than transmitting heavy audio data, in this paper we propose a low-cost and energy-efficient wireless acoustic sensor network integrated with an edge computing structure for remote acoustic monitoring and in situ analysis. Recording and computation of acoustic indices are carried out directly on edge devices built from low-noise Primo condenser microphones and Teensy microcontrollers, using internal FFT hardware support. The resultant indices are transmitted over a ZigBee-based wireless mesh network to a destination server. Benchmark tests of audio quality, index computation, and power consumption demonstrate acoustic equivalence and significant power savings over current solutions.
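As an example of the kind of index reduction that makes this bandwidth saving possible, the sketch below computes the Acoustic Complexity Index (ACI) from plain FFT frames, so only a handful of numbers, rather than raw audio, need be transmitted. The sample rate, frame length, and synthetic test signal are assumptions; the paper's devices compute comparable indices on-microcontroller.

```python
import numpy as np

# Sketch of on-device index computation: the Acoustic Complexity Index (ACI)
# from a spectrogram built with plain FFTs.
fs = 16_000
t = np.arange(fs * 5) / fs
audio = np.sin(2 * np.pi * 2000 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
audio += 0.05 * np.random.default_rng(5).standard_normal(audio.size)

frame = 512
n_frames = audio.size // frame
frames = audio[: n_frames * frame].reshape(n_frames, frame) * np.hanning(frame)
spec = np.abs(np.fft.rfft(frames, axis=1)).T          # (freq bins, time)

# ACI: per frequency bin, sum of |frame-to-frame amplitude change| divided
# by total amplitude in that bin, then summed over bins.
aci = np.sum(np.abs(np.diff(spec, axis=1)).sum(axis=1)
             / (spec.sum(axis=1) + 1e-12))
print("ACI over 5 s clip:", round(float(aci), 2))
```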
Benefiting from enhanced onboard processing capacities and high-speed satellite-terrestrial links, satellite edge computing has been regarded as a promising technique to facilitate the execution of computation-intensive applications in satellite communication networks (SCNs). By deploying edge computing servers in satellites and gateway stations, SCNs can achieve significant gains in computing capacity at the expense of extending the dimensions and complexity of resource management. Therefore, in this paper, we investigate the joint computing and communication resource management problem for SCNs to minimize the execution latency of computation-intensive applications, while two different satellite edge computing scenarios and local execution are considered. Furthermore, the joint computing and communication resource allocation problem for computation-intensive services is formulated as a mixed-integer programming problem. A game-theoretic and many-to-one matching theory-based scheme (JCCRA-GM) is proposed to achieve an approximately optimal solution. Numerical results show that the proposed low-complexity method achieves almost the same weighted-sum latency as the brute-force method.
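Many-to-one matching of this sort is commonly built on deferred acceptance; the sketch below shows the textbook hospitals/residents variant with tasks proposing to capacity-limited servers. The preference lists and capacities are toy data, not the utilities defined in the paper.

```python
# Many-to-one deferred acceptance: tasks propose to edge servers in
# preference order; each server keeps its best tasks up to capacity.
task_pref = {"t1": ["s1", "s2"], "t2": ["s1", "s2"],
             "t3": ["s2", "s1"], "t4": ["s1", "s2"]}
server_rank = {"s1": ["t3", "t1", "t2", "t4"],    # lower index = preferred
               "s2": ["t1", "t4", "t3", "t2"]}
capacity = {"s1": 2, "s2": 2}

free, matched = list(task_pref), {s: [] for s in capacity}
next_choice = {t: 0 for t in task_pref}
while free:
    t = free.pop(0)
    s = task_pref[t][next_choice[t]]              # propose to next choice
    next_choice[t] += 1
    matched[s].append(t)
    if len(matched[s]) > capacity[s]:             # evict least-preferred task
        matched[s].sort(key=server_rank[s].index)
        loser = matched[s].pop()
        if next_choice[loser] < len(task_pref[loser]):
            free.append(loser)                    # evicted task tries again
print(matched)
```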
Memtransistors, in which the source-drain channel conductance can be nonvolatilely manipulated through gate signals, have emerged as promising components for implementing neuromorphic computing. On the other hand, complementary metal-oxide-semiconductor (CMOS) field-effect transistors have played a fundamental role in modern integrated circuit technology. Will complementary memtransistors (CMT), therefore, play such a role in future neuromorphic circuits and chips? In this review, the various types of materials and physical mechanisms for constructing CMT (how) are inspected, and their merits and outstanding challenges are discussed. Then the unique properties (what) and potential applications of CMT in different learning algorithms/scenarios of spiking neural networks (why) are reviewed, including supervised rules, reinforcement learning, dynamic vision with in-sensor computing, etc. By exploiting the novel functions related to the complementary structure, significant reductions in hardware consumption, improvements in the energy/efficiency ratio, and other advantages have been gained, illustrating the alluring prospect of design-technology co-optimization (DTCO) of CMT toward neuromorphic computing.
In MEC-enabled vehicular networks with limited wireless and computation resources, stringent delay and high-reliability requirements are challenging issues. To reduce the total delay in the network while ensuring the reliability of Vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X link clustering and MEC computation resource scheduling. In the V2X link clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of the SINR into the calculation of a matrix's maximum row sum, the accumulated interference of VUE can be constrained and the complexity of the SINR calculation can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and the optimal MEC computation resource allocation can be obtained. Simulations further demonstrate that the JAWC algorithm can significantly reduce the total delay while ensuring the communication reliability of VUE in the MEC-enabled vehicular network.
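The row-sum trick relies on a standard bound: for a nonnegative matrix, the spectral radius never exceeds the maximum row sum (the induced infinity norm). The sketch below checks this numerically on a random toy interference matrix; the matrix values are arbitrary and unrelated to any real channel model.

```python
import numpy as np

rng = np.random.default_rng(6)

# A cheap row-sum check can stand in for an eigenvalue computation when
# constraining accumulated interference, since rho(H) <= max row sum.
H = rng.uniform(0, 0.2, (6, 6))     # normalized cross-link interference
np.fill_diagonal(H, 0.0)

max_row_sum = H.sum(axis=1).max()   # O(n^2), no eigen-decomposition needed
spectral_radius = max(abs(np.linalg.eigvals(H)))

print(f"max row sum      = {max_row_sum:.4f}")
print(f"spectral radius  = {spectral_radius:.4f}  (always <= max row sum)")
assert spectral_radius <= max_row_sum + 1e-9
```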
With the rapid development of wireless networks, Ad Hoc networks are widely used in many fields, but current network security solutions for Ad Hoc networks are not competitive enough. The critical technology for Ad Hoc network applications is therefore how to implement a security scheme. The discussion here focuses on a specific solution to the security threats that Ad Hoc networks will face, the methodology of a management model that uses trusted computing technology to solve Ad Hoc network security problems, and the analysis and verification of this model's security.
The development of communication technologies that support traffic-intensive applications presents new challenges in designing a real-time traffic analysis architecture and an accurate method suitable for a wide variety of traffic types. Current traffic analysis methods are executed on the cloud, which requires uploading the traffic data. Fog computing is a more promising way to save bandwidth resources by offloading these tasks to fog nodes. However, traffic analysis models based on traditional machine learning must be retrained on all traffic data whenever the model is updated, which is unsuitable for fog computing because of the fog nodes' limited computing power. In this study, we design a novel fog-computing-based traffic analysis system using broad learning. For one thing, fog computing provides a distributed architecture that saves bandwidth resources. For another, broad learning trains the traffic data incrementally, which is better suited to fog computing because it supports incremental model updates without retraining on all data. We implement our system on a Raspberry Pi, and experimental results show that it identifies the traffic data accurately with 98% probability. Moreover, our method trains faster than a Convolutional Neural Network (CNN).
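To illustrate the incremental-update property that makes broad-learning-style models attractive on weak hardware, the sketch below pairs a fixed random feature expansion with a recursive-least-squares readout, so each new traffic sample refines the classifier without revisiting old data. The feature dimensions, toy traffic stream, and RLS initialization are assumptions, not the system's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Broad-learning-style sketch: fixed random feature nodes plus a linear
# readout updated incrementally by recursive least squares (RLS).
d_in, d_feat, n_cls = 20, 200, 2
W_f = rng.standard_normal((d_in, d_feat)) / np.sqrt(d_in)
b_f = rng.standard_normal(d_feat)

def features(x):
    return np.tanh(x @ W_f + b_f)        # fixed "enhancement" nodes

W_out = np.zeros((d_feat, n_cls))
P = np.eye(d_feat) * 1e3                 # inverse correlation matrix

def rls_update(x, y_onehot):
    """One incremental step: no access to past samples required."""
    global W_out, P
    z = features(x)
    k = P @ z / (1.0 + z @ P @ z)        # RLS gain vector
    W_out += np.outer(k, y_onehot - z @ W_out)
    P -= np.outer(k, z @ P)

# Toy stream: class 0 and class 1 traffic feature vectors, predicted
# before each update (prequential evaluation).
acc = 0
for i in range(2000):
    c = i % 2
    x = rng.standard_normal(d_in) + (2.0 if c else -2.0)
    acc += int(np.argmax(features(x) @ W_out) == c)
    rls_update(x, np.eye(n_cls)[c])
print("online accuracy:", acc / 2000)
```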