Journal Articles
21,775 articles found.
1. Exploring reservoir computing: Implementation via double stochastic nanowire networks
Authors: 唐健峰, 夏磊, 李广隶, 付军, 段书凯, 王丽丹. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 572-582 (11 pages)
Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors for pulse voltages, allowing time series prediction analysis. Our experiment uses a double stochastic nanowire network architecture for processing multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real-time series datasets, making neuromorphic nanowire networks promising for physical implementation of reservoir computing.
Keywords: double-layer stochastic (DS) nanowire network architecture; neuromorphic computation; nanowire network; reservoir computing; time series prediction
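The reservoir computing scheme described above trains only a linear readout on top of a fixed dynamical system. A minimal echo-state-style sketch in Python/NumPy, where a random recurrent matrix stands in for the physical nanowire reservoir (all sizes and the ridge parameter are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, ridge = 1, 100, 1e-6          # illustrative sizes, not from the paper

# Fixed random reservoir (stand-in for the physical nanowire network)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)    # nonlinear state update
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction on a toy sine series
t = np.linspace(0, 60, 1500)
series = np.sin(t).reshape(-1, 1)
X = run_reservoir(series[:-1])
Y = series[1:]

# Train only the linear readout with ridge regression
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)
pred = X @ W_out
print("train MSE:", np.mean((pred - Y) ** 2))
```

Only `W_out` is learned; the reservoir weights stay fixed, which is what makes a physical substrate such as a nanowire network usable as the recurrent layer.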
2. A Review of Computing with Spiking Neural Networks
Authors: Jiadong Wu, Yinan Wang, Zhiwei Li, Lun Lu, Qingjiang Li. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 2909-2939 (31 pages)
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive computing power demands. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, their poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results, and are continuously producing significant results. Although there is already a large literature on SNNs, a comprehensive review is still lacking that addresses SNNs from the perspective of improving performance and practicality while incorporating the latest research results. Starting from this issue, this paper elaborates on SNNs along their complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. The connotation and development status of SNN computing are therefore reviewed systematically and comprehensively from four aspects: composition structure, datasets, learning algorithms, and software/hardware development platforms. Then the development characteristics of SNNs in intelligent computing are summarized, the current challenges of SNNs are discussed, and future development directions are prospected. Our research shows that, in the fields of machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for the use of SNNs. SNNs show broad development prospects for brain-like computing.
Keywords: spiking neural networks; neural networks; brain-like computing; artificial intelligence; learning algorithm
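SNNs of the kind surveyed above are commonly built from leaky integrate-and-fire (LIF) neurons that communicate through discrete spikes rather than continuous activations. A minimal LIF simulation in NumPy (the time constants, threshold, and input current are illustrative assumptions):

```python
import numpy as np

dt, T = 1e-3, 0.2                 # 1 ms step, 200 ms of simulated time
tau, v_rest, v_thresh, v_reset = 20e-3, 0.0, 1.0, 0.0   # illustrative LIF constants

def simulate_lif(input_current):
    """Return the membrane trace and spike times for a current sequence."""
    v, spikes, trace = v_rest, [], []
    for step, i_t in enumerate(input_current):
        v += dt / tau * (v_rest - v + i_t)   # leaky integration
        if v >= v_thresh:                    # fire and reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

steps = int(T / dt)
current = np.full(steps, 1.2)     # constant supra-threshold drive
_, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes, first at {spike_times[0] * 1000:.0f} ms")
```

The sparse, event-driven spikes this produces are the source of the energy-efficiency advantage the review attributes to SNN hardware.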
3. Online Learning-Based Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks
Authors: Tong Minglei, Li Song, Han Wanjiang, Wang Xiaoxiang. China Communications (SCIE, CSCD), 2024, No. 3, pp. 230-246 (17 pages)
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on the device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
Keywords: computing resource allocation; mobile edge computing; satellite-terrestrial networks; task offloading decision
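The offloading decision above is driven by an upper confidence bound policy, which balances trying poorly explored servers against exploiting ones with low observed delay. A generic UCB1 sketch in Python (reward is negative completion delay; the delay values are illustrative assumptions, and the paper's device-cooperation variant is not reproduced here):

```python
import math, random

random.seed(1)
true_mean_delay = [0.8, 0.5, 1.1]          # hidden per-server delays (illustrative)

counts = [0] * len(true_mean_delay)        # times each server was chosen
avg_reward = [0.0] * len(true_mean_delay)  # running mean of -delay

for t in range(1, 2001):
    # Choose the server maximizing mean reward plus an exploration bonus
    ucb = [
        avg_reward[k] + math.sqrt(2 * math.log(t) / counts[k]) if counts[k] else float("inf")
        for k in range(len(counts))
    ]
    k = ucb.index(max(ucb))
    delay = random.gauss(true_mean_delay[k], 0.1)   # observe a noisy delay
    counts[k] += 1
    avg_reward[k] += (-delay - avg_reward[k]) / counts[k]

print("pulls per server:", counts)         # the lowest-delay server dominates
```

The exploration bonus shrinks as a server accumulates observations, which is what lets the policy cope with the unknown network state the abstract describes.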
4. Joint Optimization of Energy Consumption and Network Latency in Blockchain-Enabled Fog Computing Networks
Authors: Huang Xiaoge, Yin Hongbo, Cao Bin, Wang Yongsheng, Chen Qianbin, Zhang Jie. China Communications (SCIE, CSCD), 2024, No. 4, pp. 104-119 (16 pages)
Fog computing is considered a solution to accommodate the booming requirements emerging from a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper, we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of the fog nodes (FNs), which is used to select blockchain nodes (BNs) from FNs to participate in the consensus process. Using the Rivest-Shamir-Adleman (RSA) encryption algorithm applied to the blockchain system, FNs can verify the identity of a node through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithms and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithm efficiently reduces the network overhead and obtains a considerable performance improvement compared to the related algorithms in the previous literature.
Keywords: blockchain; energy consumption; fog computing network; Internet of Things; latency
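In the scheme above, fog nodes authenticate one another by verifying signatures against a node's RSA public key before admitting it to consensus. A minimal sketch using the Python cryptography package (the key size, padding choice, and message are illustrative assumptions; the paper does not specify these details):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# A fog node's identity key pair (2048-bit is an illustrative choice)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"consensus-round-42:block-hash"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# The node signs a consensus message with its private key
signature = private_key.sign(message, pss, hashes.SHA256())

# Any peer holding the public key can verify the sender's identity
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("identity verified")
except InvalidSignature:
    print("rejected: possible malicious node")
```

A node that cannot produce a valid signature for its claimed public key is excluded, which is the attack-avoidance mechanism the abstract refers to.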
5. Edge computing oriented virtual optical network mapping scheme based on fragmentation prediction
Authors: 何烁, BAI Huifeng, HUO Chao, ZHANG Ganghong. High Technology Letters (EI, CAS), 2024, No. 2, pp. 158-163 (6 pages)
As edge computing services soar, the problem of resource fragmentation grows much worse in elastic optical networks (EONs). To solve this problem, this article proposes a fragmentation prediction model that makes full use of the gated recurrent unit (GRU) algorithm. Based on the fragmentation prediction model, a virtual optical network mapping scheme is presented for edge-computing-driven EONs. With the fragmentation degree minimized over the whole EON, virtual network mapping can be conducted successively. Test results show that the proposed approach reduces the blocking rate and greatly improves the supporting ability for virtual optical network services.
Keywords: elastic optical networks; virtual optical network; fragmentation self-awareness; edge computing
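The prediction model above feeds a history of fragmentation measurements through a GRU to forecast the next fragmentation degree. A minimal PyTorch sketch (layer sizes, window length, and the synthetic data are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class FragmentationPredictor(nn.Module):
    """GRU over a window of past fragmentation degrees -> next value."""
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, window, 1)
        _, h = self.gru(x)           # h: (1, batch, hidden)
        return self.head(h[-1])      # (batch, 1)

# Toy data: fragmentation degree oscillating with load (stand-in for EON traces)
t = torch.linspace(0, 20, 400)
series = (0.5 + 0.4 * torch.sin(t)).unsqueeze(-1)
window = 16
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = FragmentationPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.5f}")
```

The mapping scheme would then prefer embeddings whose predicted fragmentation degree is lowest, rather than reacting only to the current state.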
6. Security Implications of Edge Computing in Cloud Networks
Authors: Sina Ahmadi. Journal of Computer and Communications, 2024, No. 2, pp. 26-46 (21 pages)
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Therefore, proper security measures must be implemented to overcome these issues. Emerging trends, like machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and foster a secure and safe future in cloud computing. It was concluded that the security implications of edge computing can be covered with the help of new technologies and techniques.
Keywords: edge computing; cloud networks; artificial intelligence; machine learning; cloud security
7. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 1-23 (23 pages)
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are thirsty for computing power, which is barely satisfied by conventional computing hardware. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very-large-scale integrated circuits (VLSICs) struggles to meet the growing demand for AI computing power. To address the issue, technical approaches like neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms much more parallelly and energy-efficiently. Inspired by the human neural network architecture, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture like a spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures could reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
8. Lightweight Intrusion Detection Using Reservoir Computing
Authors: Jiarui Deng, Wuqiang Shen, Yihua Feng, Guosheng Lu, Guiquan Shen, Lei Cui, Shanxiang Lyu. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1345-1361 (17 pages)
The blockchain-empowered Internet of Vehicles (IoV) enables various services and achieves data security and privacy, significantly advancing modern vehicle systems. However, the increased frequency of data transmission and complex network connections among nodes also make them more susceptible to adversarial attacks. As a result, an efficient intrusion detection system (IDS) becomes crucial for securing the IoV environment. Existing IDSs based on convolutional neural networks (CNNs) often suffer from high training time and storage requirements. In this paper, we propose a lightweight IDS solution to protect IoV against both intra-vehicle and external threats. Our approach achieves superior performance, as demonstrated by key metrics such as accuracy and precision. Specifically, our method achieves accuracy rates ranging from 99.08% to 100% on the Car-Hacking dataset, with a remarkably short training time.
Keywords: echo state network; intrusion detection system; Internet of Vehicles; reservoir computing
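The lightweight property claimed above comes from training only a linear readout over fixed echo-state reservoir states, instead of backpropagating through a deep CNN. A sketch of such a classifier head, reusing the reservoir idea from entry 1 (feature sizes and the toy labels are illustrative assumptions, not the Car-Hacking dataset):

```python
import numpy as np

rng = np.random.default_rng(7)
n_feat, n_res = 8, 64                       # illustrative sizes

W_in = rng.uniform(-0.5, 0.5, (n_res, n_feat))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius below 1

def reservoir_embedding(seq):
    """Average reservoir state over a window of traffic feature vectors."""
    x, acc = np.zeros(n_res), np.zeros(n_res)
    for u in seq:
        x = np.tanh(W_in @ u + W @ x)
        acc += x
    return acc / len(seq)

# Toy dataset: 'attack' windows have a shifted feature distribution
def make_window(attack):
    base = rng.normal(0.6 if attack else 0.0, 0.3, (20, n_feat))
    return reservoir_embedding(base)

X = np.array([make_window(i % 2 == 1) for i in range(400)])
y = np.array([1.0 if i % 2 == 1 else -1.0 for i in range(400)])

# Train only the readout: closed-form ridge regression onto +/-1 labels
W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ y)
acc = np.mean(np.sign(X @ W_out) == y)
print(f"training accuracy: {acc:.2%}")
```

Because training reduces to one linear solve, both training time and storage stay small, which matches the motivation the abstract gives for avoiding CNN-based IDSs.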
9. Analysis and Optimization on Partition-Based Caching and Delivery in Satellite-Terrestrial Edge Computing Networks
Authors: Peng Wang, Xing Zhang, Jiaxin Zhang, Shuang Zheng, Wenhao Liu. China Communications (SCIE, CSCD), 2023, No. 3, pp. 252-285 (34 pages)
As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the enhanced file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based caching and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file set parameters, we propose a near-optimal partition-based caching and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The complex nonlinear programming problem is settled by jointly utilizing the standard particle swarm optimization (PSO) method and a greedy-based multiple knapsack choice problem (MKCP) optimization method. Numerical results show that, compared with the terrestrial-only caching strategy, the ground popular strategy, the satellite popular strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
Keywords: edge computing; satellite-terrestrial networks; caching deployment; stochastic geometry; 6G networks
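The caching side of the scheme above ultimately places file partitions into nodes with limited cache space, a multiple knapsack choice problem that the authors attack with a greedy heuristic. A generic greedy sketch (popularity values, sizes, and capacities are illustrative assumptions, not the paper's data):

```python
# Greedy multiple-knapsack cache placement: assign file partitions to nodes,
# highest popularity-per-unit-size first, wherever capacity remains.
files = [  # (name, size_GB, popularity)
    ("f1", 4, 0.30), ("f2", 2, 0.25), ("f3", 6, 0.20),
    ("f4", 1, 0.15), ("f5", 3, 0.10),
]
capacity = {"satellite": 6, "ground_a": 5, "ground_b": 4}   # illustrative cache sizes

placement = {node: [] for node in capacity}
for name, size, pop in sorted(files, key=lambda f: f[2] / f[1], reverse=True):
    # Pick the node with the most free space that still fits this partition
    candidates = [n for n in capacity if capacity[n] >= size]
    if candidates:
        node = max(candidates, key=lambda n: capacity[n])
        capacity[node] -= size
        placement[node].append(name)

print(placement)
```

In the paper's full pipeline, PSO searches over partition ratios while a greedy pass like this settles the discrete placement; here only the greedy half is sketched.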
10. Improved Network Validity Using Various Soft Computing Techniques
Authors: M. Yuvaraju, R. Elakkiyavendan. Intelligent Automation & Soft Computing (SCIE), 2023, No. 5, pp. 1465-1477 (13 pages)
Nowadays, when the lifespan of sensor nodes is threatened by the shortage of energy available for communication, sink mobility is an excellent technique for extending it. When communicating via a WSN, the use of nodes as a transmission method eliminates the need for a physical medium. Sink mobility in a dynamic network topology presents a problem for sensor nodes that have reserved resources: unless the route is revised to reflect the mobile sink's location, it will be inefficient for delivering data effectively. In the clustering strategy, nodes are grouped together to improve communication, and the cluster head receives data from compatible nodes; the sink then receives the aggregated data from the head. In the conventional technique the cluster head is the central node, so a single node consumes disproportionate energy and routes can fail at dead nodes, and the more nodes that share a route, the shorter its lifespan. The proposed work demonstrates how sensor node paths can be modified at lower cost by utilising a virtual grid. The best routes are maintained mostly through sink node communication on routes based on virtual grid dynamic route adjustment (VGDRA), and only specific nodes are required to re-align data delivery to the mobile sink under the new route reconstruction paradigm. According to the results, VGDRA schemes have a longer lifespan because of the reduced number of loops.
Keywords: soft computing; intelligent systems; wireless networks; sensor
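The VGDRA idea above partitions the field into a virtual grid of cells, elects one cell-header per cell, and lets headers readjust routes locally as the sink moves. A minimal geometric sketch (grid size, node placement, and the nearest-to-centre election rule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
field, k = 100.0, 4                       # 100 m x 100 m field, 4x4 virtual grid
cell = field / k
nodes = rng.uniform(0, field, (200, 2))   # random sensor positions

# Elect one cell-header per cell: the node closest to the cell centre
headers = {}
for i in range(k):
    for j in range(k):
        centre = np.array([(i + 0.5) * cell, (j + 0.5) * cell])
        in_cell = nodes[(nodes[:, 0] // cell == i) & (nodes[:, 1] // cell == j)]
        if len(in_cell):
            headers[(i, j)] = in_cell[np.argmin(np.linalg.norm(in_cell - centre, axis=1))]

def next_hop(c, sink_cell):
    """Greedy grid routing: each header steps one cell toward the sink's cell."""
    return (c[0] + int(np.sign(sink_cell[0] - c[0])),
            c[1] + int(np.sign(sink_cell[1] - c[1])))

# When the mobile sink moves, only its cell index is re-broadcast; headers
# recompute their next hop locally instead of rebuilding all routes.
sink = (85.0, 10.0)
sink_cell = (int(sink[0] // cell), int(sink[1] // cell))
routes = {c: next_hop(c, sink_cell) for c in headers if c != sink_cell}
sample = (0, 3) if (0, 3) in routes else next(iter(routes))
print(f"{len(headers)} cell-headers; header {sample} forwards to {routes[sample]}")
```

Because route repair touches only the cell-to-sink mapping, the energy cost of tracking a mobile sink stays low, which is the lifespan argument the abstract makes.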
11. Path Computing Scheme with Low-Latency and Low-Power in Hybrid Cloud-Fog Network for IIoT
Authors: Jijun Ren, Peng Zhu, Zhiyuan Ren. China Communications (SCIE, CSCD), 2023, No. 8, pp. 1-16 (16 pages)
With the rapid development of the Industrial Internet of Things (IIoT), the traditional centralized cloud processing model has encountered the challenges of high communication latency and high energy consumption in handling industrial big data tasks. This paper proposes a low-latency and low-energy path computing scheme for these problems. The scheme is based on the cloud-fog network architecture: the computing resources of fog network devices in the fog computing layer are used to complete task processing step by step during the data interaction from industrial field devices to the cloud center. A collaborative scheduling strategy based on the particle diversity discrete binary particle swarm optimization (PDBPSO) algorithm is proposed to deploy manufacturing tasks to the fog computing layer reasonably. A task in the form of a directed acyclic graph (DAG) is mapped to a factory fog network in the form of an undirected graph (UG) to find an appropriate computing path for the task, significantly reducing task processing latency under energy consumption constraints. Simulation experiments show that this scheme's latency performance outperforms both the strategy where tasks are wholly offloaded to the cloud and the strategy where tasks are entirely offloaded to edge equipment.
Keywords: collaborative offloading strategy; cloud-fog network architecture; Industrial Internet of Things; path computing; PDBPSO
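The scheduling core above is a discrete binary PSO: each particle is a 0/1 vector assigning subtasks to fog (1) or cloud (0), velocities are updated as usual, and a sigmoid turns them into bit probabilities. A generic sketch (the cost model and all constants are illustrative assumptions; the paper's particle-diversity mechanism is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
n_tasks, n_particles, iters = 10, 20, 100
fog_delay = rng.uniform(1, 3, n_tasks)     # per-task delay if run on fog (ms)
cloud_delay = rng.uniform(4, 6, n_tasks)   # cloud is farther away
fog_capacity = 5                           # at most 5 tasks fit on fog nodes

def cost(x):
    """Total delay; infeasible fog overload gets a large penalty."""
    d = np.where(x == 1, fog_delay, cloud_delay).sum()
    return d + 100.0 * max(0, x.sum() - fog_capacity)

X = rng.integers(0, 2, (n_particles, n_tasks))      # positions: offload bits
V = rng.uniform(-1, 1, (n_particles, n_tasks))      # velocities
pbest, pbest_cost = X.copy(), np.array([cost(x) for x in X])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(int)  # sigmoid sampling
    costs = np.array([cost(x) for x in X])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = X[better], costs[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best assignment:", gbest, "delay:", round(cost(gbest), 2))
```

The paper additionally maintains particle diversity to avoid premature convergence and evaluates candidate paths over the DAG-to-UG mapping; this sketch keeps only the binary PSO skeleton.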
12. Quantum Computing Based Neural Networks for Anomaly Classification in Real-Time Surveillance Videos
Authors: MD. Yasar Arafath, A. Niranjil Kumar. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 2489-2508 (20 pages)
For intelligent surveillance videos, anomaly detection is extremely important. Deep learning algorithms have been popular for evaluating real-time surveillance recordings, like traffic accidents and criminal or unlawful incidents such as suicide attempts. Nevertheless, deep learning methods for classification, like convolutional neural networks, necessitate a lot of computing power. Quantum computing is a branch of technology that solves abnormal and complex problems using quantum mechanics. As a result, the focus of this research is on developing a hybrid quantum computing model based on deep learning. This research develops a Quantum Computing-based Convolutional Neural Network (QC-CNN) to extract features and classify anomalies from surveillance footage. A quantum circuit, such as the real-amplitude circuit, is utilized to improve the performance of the model. To the best of the authors' knowledge, this is the first work to employ quantum deep learning techniques to classify anomalous events in video surveillance applications. Thirteen anomaly classes from the UCF-Crime dataset are considered. Experimental results show that the proposed model efficiently classifies data with respect to the confusion matrix, receiver operating characteristic (ROC), accuracy, area under the curve (AUC), precision, recall, and F1-score. The proposed QC-CNN attains a best accuracy of 95.65 percent, which is 5.37% greater than other existing models. To measure its efficiency, QC-CNN is also evaluated against classical and quantum models.
Keywords: deep learning; video surveillance; quantum computing; anomaly detection; convolutional neural network
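The quantum layer above uses a real-amplitude variational circuit. A minimal Qiskit sketch of such a block: classical features are angle-encoded, passed through a RealAmplitudes ansatz, and read out as expectation values (the qubit count, depth, and random parameters are illustrative assumptions, not the paper's trained model):

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import RealAmplitudes
from qiskit.quantum_info import Statevector

n_qubits = 4
features = np.array([0.3, 1.1, 0.7, 2.0])        # toy classical feature vector

# Angle-encode features, then apply the trainable real-amplitude ansatz
qc = QuantumCircuit(n_qubits)
for q, f in enumerate(features):
    qc.ry(f, q)
ansatz = RealAmplitudes(num_qubits=n_qubits, reps=2, entanglement="linear")
rng = np.random.default_rng(0)
params = rng.uniform(-np.pi, np.pi, ansatz.num_parameters)  # stand-in for trained weights
qc.compose(ansatz.assign_parameters(params), inplace=True)

# Read out <Z> on each qubit; these would feed the classical classifier head
probs = Statevector(qc).probabilities_dict()
z_expect = [
    sum(p * (1 if bit[-(q + 1)] == "0" else -1) for bit, p in probs.items())
    for q in range(n_qubits)
]
print("features for classical head:", np.round(z_expect, 3))
```

RealAmplitudes uses only RY rotations and CNOTs, so the state amplitudes stay real, which is the property the "real amplitude circuit" in the abstract refers to.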
13. Energy-efficient task allocation for reliable parallel computation of cluster-based wireless sensor network in edge computing
Authors: Jiabao Wen, Jiachen Yang, Tianying Wang, Yang Li, Zhihan Lv. Digital Communications and Networks (SCIE, CSCD), 2023, No. 2, pp. 473-482 (10 pages)
To efficiently complete a complex computation task, it should be decomposed into sub-computation tasks that run in parallel in edge computing. A wireless sensor network (WSN) is a typical application of parallel computation. To achieve highly reliable parallel computation in a WSN, the network's lifetime needs to be extended, so a proper task allocation strategy is needed to reduce the energy consumption and balance the load of the network. This paper proposes a task model and a cluster-based WSN model in edge computing. In our model, different tasks require different types of resources and different sensors provide different types of resources, so the model is heterogeneous, which makes it more practical. We then propose a task allocation algorithm that combines the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm. The algorithm concentrates on energy conservation and load balancing so that the lifetime of the network can be extended. Experimental results show the algorithm's effectiveness and its advantages in energy conservation and load balancing.
Keywords: wireless sensor network; parallel computation; task allocation; genetic algorithm; ant colony optimization algorithm; energy efficiency; load balancing
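The hybrid above couples a GA's global search with ACO's pheromone-guided refinement. A sketch of just the GA half for energy-balanced task allocation, where a chromosome records which sensor runs each subtask (energy figures and GA constants are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_tasks, n_sensors = 12, 4
energy_cost = rng.uniform(1, 5, (n_tasks, n_sensors))  # J per task per sensor

def fitness(chrom):
    """Penalize total energy plus imbalance across sensors (load balancing)."""
    per_task = np.array([energy_cost[t, s] for t, s in enumerate(chrom)])
    loads = np.bincount(chrom, weights=per_task, minlength=n_sensors)
    return -(loads.sum() + 5.0 * loads.std())

pop = rng.integers(0, n_sensors, (40, n_tasks))
for gen in range(150):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]             # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, n_tasks)
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        mutate = rng.random(n_tasks) < 0.05             # per-gene mutation
        child[mutate] = rng.integers(0, n_sensors, mutate.sum())
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("allocation:", best, "fitness:", round(fitness(best), 2))
```

In the paper's hybrid, solutions like `best` would seed ACO's pheromone trails for local refinement; the standard-deviation term in the fitness is one simple way to express the load-balancing goal.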
14. Leveraging Quantum Computing for the Ising Model to Simulate Two Real Systems: Magnetic Materials and Biological Neural Networks (BNNs)
Authors: David L. Cao, Khoi Dinh. Journal of Quantum Information Science, 2023, No. 3, pp. 138-155 (18 pages)
Quantum computing is a field with increasing relevance as quantum hardware improves and more applications of quantum computing are discovered. In this paper, we demonstrate the feasibility of modeling Ising model Hamiltonians on the IBM quantum computer. We developed quantum circuits to simulate these systems more efficiently for both closed- and open-boundary Ising models, with and without perturbations. We tested various geometries of systems in both 1-D and 2-D space to mimic two real systems: magnetic materials and biological neural networks (BNNs). Our quantum model is more efficient than classical computers, which can struggle to simulate large, complex systems of particles.
Keywords: Ising model; magnetic material; biological neural network; quantum computing; International Business Machines (IBM)
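Circuits like those described above typically Trotterize a transverse-field Ising Hamiltonian, H = -J Σ Z_i Z_{i+1} - h Σ X_i, into alternating two-qubit ZZ rotations and single-qubit X rotations. A minimal Qiskit sketch of such an evolution on a 1-D open chain (the coupling, field, step size, and initial state are illustrative assumptions, not the paper's circuits):

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n, J, h, dt = 4, 1.0, 0.5, 0.1   # 4 spins, coupling, transverse field, Trotter step

def trotter_step(qc):
    """One first-order Trotter step of exp(-i H dt) for the open-chain Ising model."""
    for i in range(n - 1):
        qc.rzz(-2 * J * dt, i, i + 1)   # ZZ-coupling term
    for i in range(n):
        qc.rx(-2 * h * dt, i)           # transverse-field term

qc = QuantumCircuit(n)
qc.h(range(n))                          # start in a uniform superposition
for _ in range(10):                     # evolve for 10 steps (t = 1.0)
    trotter_step(qc)

# Average magnetization <Z> as a simple observable of the evolved state
probs = Statevector(qc).probabilities_dict()
mag = sum(
    p * np.mean([1 if b == "0" else -1 for b in bits]) for bits, p in probs.items()
)
print(f"average magnetization after t=1.0: {mag:+.3f}")
```

A closed-boundary (ring) variant would simply add one more `rzz` between the last and first qubits, and perturbations can be added as extra single-qubit rotations per step.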
15. Task Offloading and Resource Allocation in IoT Based Mobile Edge Computing Using Deep Learning (Cited: 1)
Authors: Ilyas Abdullaev, Natalia Prodanova, K. Aruna Bhaskar, E. Laxmi Lydia, Seifedine Kadry, Jungeun Kim. Computers, Materials & Continua (SCIE, EI), 2023, No. 8, pp. 1463-1477 (15 pages)
Recently, computation offloading has become an effective method for overcoming the constraints of a mobile device (MD) by offloading computation-intensive and delay-sensitive application tasks to a remote cloud-based data center. Smart cities benefit from offloading to edge points. Consider a mobile edge computing (MEC) network spanning multiple regions, comprising N MDs and many access points, in which every MD has M independent real-time tasks. This study designs a new Task Offloading and Resource Allocation in IoT-based MEC using Deep Learning with Seagull Optimization (TORA-DLSGO) algorithm. The proposed TORA-DLSGO technique addresses the resource management issue in the MEC server, enabling an optimum offloading decision to minimize the system cost. In addition, an objective function is derived based on minimizing energy consumption subject to latency requirements and restricted resources. The TORA-DLSGO technique uses the deep belief network (DBN) model for optimum offloading decision-making, and the SGO algorithm is used for the parameter tuning of the DBN model. The simulation results exemplify that the TORA-DLSGO technique outperforms existing models in reducing client overhead in MEC systems, with a maximum reward of 0.8967.
Keywords: mobile edge computing; seagull optimization; deep belief network; resource management; parameter tuning
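The pipeline above ultimately scores candidate offloading decisions against an energy objective with latency constraints. A simplified sketch of such an objective, with plain random search standing in for the paper's DBN decision model and seagull optimizer (the cost model and every constant are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_dev = 6
local_energy = rng.uniform(2, 5, n_dev)     # J to run each task locally
tx_energy = rng.uniform(0.5, 1.5, n_dev)    # J to transmit the task to the MEC
local_delay = rng.uniform(80, 120, n_dev)   # ms locally
edge_delay = rng.uniform(20, 60, n_dev)     # ms offloaded (tx + queue + compute)
deadline = 100.0                            # ms latency requirement

def system_cost(x):
    """Energy objective with a penalty for violating the latency requirement."""
    energy = np.where(x == 1, tx_energy, local_energy).sum()
    delay = np.where(x == 1, edge_delay, local_delay)
    return energy + 50.0 * np.sum(delay > deadline)

# Random search over binary offloading decisions (stand-in for DBN + SGO)
best_x, best_cost = None, float("inf")
for _ in range(500):
    x = rng.integers(0, 2, n_dev)
    c = system_cost(x)
    if c < best_cost:
        best_x, best_cost = x, c

print("offload bits:", best_x, "cost:", round(best_cost, 2))
```

The paper replaces the brute search with a learned DBN policy whose hyperparameters are tuned by SGO; the constrained-energy objective is the part this sketch illustrates.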
16. Computing Power Network: The Architecture of Convergence of Computing and Networking towards 6G Requirement (Cited: 21)
Authors: Xiongyan Tang, Chang Cao, Youxiang Wang, Shuai Zhang, Ying Liu, Mingxuan Li, Tao He. China Communications (SCIE, CSCD), 2021, No. 2, pp. 175-185 (11 pages)
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with a strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The demonstration shows that the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and can adapt to the integration of computing power and network in various scenarios, such as user-oriented, government- and enterprise-oriented, and open computing power scenarios.
Keywords: 6G; edge computing; cloud computing; convergence of cloud and network; computing power network
17. An Offloading Scheme Leveraging on Neighboring Node Resources for Edge Computing over Fiber-Wireless (FiWi) Access Networks (Cited: 3)
Authors: Wei Chang, Yihong Hu, Guochu Shou, Yaqiong Liu, Zhigang Guo. China Communications (SCIE, CSCD), 2019, No. 11, pp. 107-119 (13 pages)
The computation resources at a single node in edge computing (EC) are commonly limited and cannot execute large-scale computation tasks. To meet this challenge, an Offloading scheme leveraging on NEighboring node Resources (ONER) for EC over fiber-wireless (FiWi) access networks is proposed in this paper. In the ONER scheme, the FiWi network connects edge computing nodes with fiber and converges wireless and fiber connections seamlessly, so that it can support offloading transmission with low delay and wide bandwidth. Based on the ONER scheme supported by FiWi networks, computation tasks can be offloaded to edge computing nodes in a wider area without increasing wireless hops (e.g., just one wireless hop), which achieves low delay. Additionally, an efficient Computation Resource Scheduling (CRS) algorithm based on the ONER scheme is proposed to make offloading decisions. The results show that more offloading requests can be satisfied and the average completion time of computation tasks decreases significantly with the ONER scheme and the CRS algorithm. Together they can schedule computation resources at neighboring edge computing nodes to meet the challenge of large-scale computation tasks.
Keywords: edge computing; offloading; fiber-wireless access networks; delay
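The CRS idea above schedules each offloading request onto a neighboring edge node reachable in one wireless hop, preferring whichever node would finish the task soonest. A greedy sketch (node capacities, task sizes, and the completion-time model are illustrative assumptions, not the paper's algorithm):

```python
# Greedy neighbor-resource scheduling: send each task to the one-hop neighbor
# (or the local node) with the earliest estimated completion time.
nodes = {                    # CPU rate (Mcycles/s) and current queue backlog (s)
    "local":     {"rate": 500, "queue": 0.0},
    "neighbor1": {"rate": 800, "queue": 0.0},
    "neighbor2": {"rate": 600, "queue": 0.0},
}
HOP_DELAY = 0.02             # one wireless hop over the FiWi segment (s), assumed

tasks = [300, 450, 120, 600, 250]   # task sizes in Mcycles (illustrative)

def completion(name, size):
    """Transfer delay (zero locally) + queue backlog + compute time."""
    info = nodes[name]
    transfer = 0.0 if name == "local" else HOP_DELAY
    return transfer + info["queue"] + size / info["rate"]

for size in tasks:
    target = min(nodes, key=lambda n: completion(n, size))
    finish = completion(target, size)
    nodes[target]["queue"] += size / nodes[target]["rate"]
    print(f"task {size} Mcycles -> {target}, est. finish {finish:.3f}s")
```

Because the fiber backhaul keeps the extra hop cheap, spilling load onto neighbors beats queueing everything locally, which is the effect the abstract reports for ONER.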
18. Wireless Acoustic Sensor Networks and Edge Computing for Rapid Acoustic Monitoring (Cited: 6)
Authors: Zhengguo Sheng, Saskia Pfersich, Alice Eldridge, Jianshan Zhou, Daxin Tian, Victor C. M. Leung. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, No. 1, pp. 64-74 (11 pages)
Passive acoustic monitoring is emerging as a promising solution to the urgent, global need for new biodiversity assessment methods. The ecological relevance of the soundscape is increasingly recognised, and the affordability of robust hardware for remote audio recording is stimulating international interest in the potential for acoustic methods for biodiversity monitoring. The scale of the data involved requires automated methods; however, the development of acoustic sensor networks capable of sampling the soundscape across time and space and relaying the data to an accessible storage location remains a significant technical challenge, with power management at its core. Recording and transmitting large quantities of audio data is power intensive, hampering long-term deployment in remote, off-grid locations of key ecological interest. Rather than transmitting heavy audio data, in this paper we propose a low-cost and energy-efficient wireless acoustic sensor network integrated with an edge computing structure for remote acoustic monitoring and in situ analysis. Recording and computation of acoustic indices are carried out directly on edge devices built from low-noise Primo condenser microphones and Teensy microcontrollers, using internal FFT hardware support. Resultant indices are transmitted over a ZigBee-based wireless mesh network to a destination server. Benchmark tests of audio quality, index computation and power consumption demonstrate acoustic equivalence and significant power savings over current solutions.
Keywords: acoustic sensor networks; edge computing; energy efficiency
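Computing indices on the device, as above, means reducing each audio frame to a handful of numbers before transmission. A sketch of one common soundscape index, spectral entropy, via NumPy's FFT (the choice of index, frame length, and toy signal are illustrative assumptions; the paper computes its indices on Teensy FFT hardware):

```python
import numpy as np

fs = 16_000                                  # sample rate (Hz), illustrative
t = np.arange(fs) / fs                       # one second of audio
# Toy recording: two tones buried in noise (stand-in for a field recording)
audio = (np.sin(2 * np.pi * 2000 * t) + 0.5 * np.sin(2 * np.pi * 3500 * t)
         + 0.2 * np.random.default_rng(0).normal(size=fs))

def spectral_entropy(frame):
    """Shannon entropy of the normalized power spectrum, scaled to [0, 1]."""
    psd = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(len(psd)))

# Per-frame indices are tiny compared to raw audio: this is what gets transmitted
frames = audio[: 5 * 1024].reshape(5, 1024)
print([round(spectral_entropy(f), 3) for f in frames])
```

Sending a few floats per frame instead of kilobytes of samples is the source of the power savings the benchmarks report.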
19. Joint Computing and Communication Resource Allocation for Satellite Communication Networks with Edge Computing (Cited: 9)
Authors: Shanghong Zhang, Gaofeng Cui, Yating Long, Weidong Wang. China Communications (SCIE, CSCD), 2021, No. 7, pp. 236-252 (17 pages)
Benefiting from enhanced onboard processing capacities and high-speed satellite-terrestrial links, satellite edge computing has been regarded as a promising technique to facilitate the execution of computation-intensive applications in satellite communication networks (SCNs). By deploying edge computing servers in satellites and gateway stations, SCNs can achieve significant gains in computing capacity at the expense of extending the dimensions and complexity of resource management. Therefore, in this paper, we investigate the joint computing and communication resource management problem for SCNs to minimize the execution latency of computation-intensive applications, while two different satellite edge computing scenarios and local execution are considered. Furthermore, the joint computing and communication resource allocation problem for computation-intensive services is formulated as a mixed-integer programming problem, and a game-theoretic and many-to-one matching theory-based scheme (JCCRA-GM) is proposed to achieve an approximately optimal solution. Numerical results show that the proposed method, with low complexity, can achieve almost the same weighted-sum latency as the brute-force method.
Keywords: satellite communication networks; edge computing; resource allocation; matching theory
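The matching half of JCCRA-GM pairs tasks with edge servers under server quotas. A generic many-to-one deferred-acceptance sketch, with preference lists derived from illustrative latency numbers (the paper's actual utilities and player sets differ):

```python
# Many-to-one deferred acceptance: tasks propose to servers in order of
# preference; servers keep their best proposals up to a quota.
latency = {            # latency[task][server], ms (illustrative)
    "t1": {"sat": 30, "gw1": 20, "gw2": 25},
    "t2": {"sat": 35, "gw1": 15, "gw2": 40},
    "t3": {"sat": 10, "gw1": 50, "gw2": 20},
    "t4": {"sat": 28, "gw1": 22, "gw2": 24},
}
quota = {"sat": 1, "gw1": 2, "gw2": 1}

prefs = {t: sorted(s, key=s.get) for t, s in latency.items()}  # low latency first
unmatched, accepted = list(latency), {s: [] for s in quota}

while unmatched:
    t = unmatched.pop(0)
    if not prefs[t]:
        continue                     # exhausted all servers
    s = prefs[t].pop(0)              # propose to the next-best server
    accepted[s].append(t)
    accepted[s].sort(key=lambda x: latency[x][s])
    if len(accepted[s]) > quota[s]:  # over quota: bump the worst proposal
        unmatched.append(accepted[s].pop())

print(accepted)   # e.g. {'sat': ['t3'], 'gw1': ['t2', 't1'], 'gw2': ['t4']}
```

Deferred acceptance terminates at a stable matching, which is why matching-based schemes can approach brute-force quality at far lower complexity.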
20. Unleashing the Power of Moiré Materials in Neuromorphic Computing
Authors: John Paul Strachan. Chinese Physics Letters (SCIE, EI, CAS, CSCD), 2023, No. 12, pp. 131-132 (2 pages)
Reservoir computing has been an intriguing paradigm in the field of artificial intelligence and machine learning that draws inspiration from the complex dynamics of recurrent neural networks found in biological systems. Unlike traditional neural networks, reservoir computing separates the training of a fixed, randomly connected 'reservoir' layer from a simpler 'readout' layer. This distinctive architecture allows the reservoir to process information in a highly dynamic and nonlinear manner.
Keywords: networks; neural; computing