Journal Articles
22,255 articles found
1. Computing Power Network: A Survey
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. China Communications (SCIE, CSCD), 2024, Issue 9, pp. 109–145 (37 pages).
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm is proposed, i.e., the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of the state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues on computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
2. Exploring Reservoir Computing: Implementation via Double Stochastic Nanowire Networks
Authors: 唐健峰, 夏磊, 李广隶, 付军, 段书凯, 王丽丹. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 572–582 (11 pages).
Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors for pulse voltages, allowing time series prediction analysis. Our experiment uses a double stochastic nanowire network architecture for processing multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real time series datasets, making neuromorphic nanowire networks promising for the physical implementation of reservoir computing.
Keywords: double-layer stochastic (DS) nanowire network architecture; neuromorphic computation; nanowire network; reservoir computing; time series prediction
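As a companion to this entry, here is a minimal echo-state-style reservoir computing sketch in Python (NumPy): a fixed random recurrent network stands in for the physical nanowire reservoir, and only a linear readout is trained. All sizes, scalings, and the toy series are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (a software stand-in for the nanowire network).
n_res = 100
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence; collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a toy sine series.
series = np.sin(np.linspace(0, 8 * np.pi, 400))
X, y = run_reservoir(series[:-1]), series[1:]

# Only the linear readout is trained (ridge regression).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```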
3. A Review of Computing with Spiking Neural Networks
Authors: Jiadong Wu, Yinan Wang, Zhiwei Li, Lun Lu, Qingjiang Li. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 2909–2939 (31 pages).
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive computing power demands. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results, and are continuously producing significant results. Although there is already much literature on SNNs, there is still a lack of a comprehensive review of SNNs from the perspective of improving performance and practicality while incorporating the latest research results. Starting from this issue, this paper elaborates on SNNs along their complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. The connotation and development status of SNN computing are therefore reviewed systematically and comprehensively from four aspects: composition structure, data sets, learning algorithms, and software/hardware development platforms. Then the development characteristics of SNNs in intelligent computing are summarized, the current challenges of SNNs are discussed, and future development directions are prospected. Our research shows that in the fields of machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for the use of SNNs. SNNs show broad development prospects for brain-like computing.
Keywords: spiking neural networks; neural networks; brain-like computing; artificial intelligence; learning algorithm
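As a concrete illustration of the spiking dynamics this review surveys, below is a minimal leaky integrate-and-fire (LIF) neuron simulation in Python; the time constants, threshold, and input are illustrative assumptions rather than values from any reviewed work.

```python
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate input current, emit a spike
    and reset whenever the membrane potential crosses the threshold."""
    v = v_rest
    spikes, trace = [], []
    for i_t in current:
        # Euler step of dv/dt = (-(v - v_rest) + i_t) / tau
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# A constant suprathreshold input produces a regular spike train.
spikes, _ = simulate_lif(np.full(500, 1.5))
print("spike count over 0.5 s:", spikes.sum())
```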
4. Online Learning-Based Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks
Authors: Tong Minglei, Li Song, Han Wanjiang, Wang Xiaoxiang. China Communications (SCIE, CSCD), 2024, Issue 3, pp. 230–246 (17 pages).
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decisions and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
Keywords: computing resource allocation; mobile edge computing; satellite-terrestrial networks; task offloading decision
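To make the UCB-style online offloading decision in the abstract above concrete, here is a hedged Python sketch of the classic UCB1 bandit rule, where each "arm" stands for a candidate offloading target and the reward stands for negative task delay; the reward model is a made-up stand-in, not the paper's cooperation-aided system model.

```python
import math
import random

random.seed(1)

# Each arm = one candidate offloading target (e.g., local, edge, satellite).
# True mean rewards are hidden from the learner (illustrative values).
true_means = [0.4, 0.6, 0.55]
counts = [0] * len(true_means)
sums = [0.0] * len(true_means)

def ucb1_select(t):
    """Pick the arm maximizing empirical mean + exploration bonus."""
    for a in range(len(counts)):
        if counts[a] == 0:          # play every arm once first
            return a
    return max(range(len(counts)),
               key=lambda a: sums[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 5001):
    a = ucb1_select(t)
    reward = true_means[a] + random.gauss(0, 0.1)   # noisy feedback
    counts[a] += 1
    sums[a] += reward

print("plays per arm:", counts)  # the best arm (index 1) dominates
```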
5. Joint Optimization of Energy Consumption and Network Latency in Blockchain-Enabled Fog Computing Networks
Authors: Huang Xiaoge, Yin Hongbo, Cao Bin, Wang Yongsheng, Chen Qianbin, Zhang Jie. China Communications (SCIE, CSCD), 2024, Issue 4, pp. 104–119 (16 pages).
Fog computing is considered a solution to accommodate the booming requirements emerging from a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of the fog nodes (FN), which is used to select blockchain nodes (BN) from FNs to participate in the consensus process. Using the Rivest-Shamir-Adleman (RSA) encryption algorithm applied to the blockchain system, FNs can verify the identity of a node through its public key to avoid malicious attacks. Additionally, to reduce the computation complexity of the consensus algorithms and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithm can efficiently reduce the network overhead and obtain a considerable performance improvement compared to related algorithms in the previous literature.
Keywords: blockchain; energy consumption; fog computing network; Internet of Things; latency
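The abstract does not give the reputation update rule, so below is only a generic exponential-moving-average sketch in Python of how a fog node's credibility score might be updated and used to pick consensus nodes; the update formula and threshold are entirely assumed for illustration and are not the paper's R-DBFT rules.

```python
# Hypothetical reputation bookkeeping for fog nodes (FNs).
ALPHA = 0.2          # assumed smoothing factor
THRESHOLD = 0.7      # assumed credibility cutoff for consensus duty

reputation = {"fn1": 0.5, "fn2": 0.5, "fn3": 0.5}

def update(node: str, behaved_well: bool) -> None:
    """Blend the latest observed behavior (1 = honest, 0 = faulty)
    into the node's running credibility score."""
    obs = 1.0 if behaved_well else 0.0
    reputation[node] = (1 - ALPHA) * reputation[node] + ALPHA * obs

def select_blockchain_nodes() -> list[str]:
    """Only sufficiently credible FNs join the consensus round."""
    return [n for n, r in reputation.items() if r >= THRESHOLD]

for _ in range(10):
    update("fn1", True)
    update("fn2", False)
    update("fn3", True)

print(select_blockchain_nodes())  # fn1 and fn3 pass the cutoff
```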
6. Joint Allocation of Computing and Connectivity Resources in Survivable Inter-Datacenter Elastic Optical Networks
Authors: Yang Tao, Li Yang, Chen Xue. China Communications (SCIE, CSCD), 2024, Issue 8, pp. 172–181 (10 pages).
Inter-datacenter elastic optical networks (EONs) need to serve cloud computing requests that require not only connectivity and computing resources but also network survivability. In this paper, to realize the joint allocation of computing and connectivity resources in survivable inter-datacenter EONs, a survivable routing, modulation level, spectrum, and computing resource allocation (SRMLSCRA) algorithm and three datacenter selection strategies, i.e., Computing Resource First (CRF), Shortest Path First (SPF), and Random Destination (RD), are proposed for different scenarios. Unicast and manycast are applied to the communication of computing requests, and the routing strategies are calculated respectively. Simulation results show that SRMLSCRA-CRF can serve the largest number of protected computing tasks, and the computing-request blocking probability is reduced by 29.2%, 28.3%, and 30.5% compared with SRMLSCRA-SPF, SRMLSCRA-RD, and the benchmark EPS-RMSA algorithm, respectively. Therefore, it is more applicable to networks with heavy computing loads. Besides, SRMLSCRA-SPF consumes the least spectrum, exhibiting its suitability for scenarios where the amount of computation is small and communication resources are scarce. The results demonstrate that the proposed methods realize the joint allocation of computing and connectivity resources, and can provide efficient protection for services under single-link failure while occupying less spectrum.
Keywords: computing and connectivity; inter-datacenter networks; joint resource allocation; service protection
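A hedged Python sketch of how the three datacenter selection strategies might rank candidate datacenters, using networkx shortest paths on a toy topology; the graph, the free-CPU table, and the tie-breaking are invented for illustration and do not reflect the paper's implementation.

```python
import random
import networkx as nx

random.seed(0)

# Toy topology: nodes 0-5 on a ring, datacenters at 2, 4, 5 (illustrative).
G = nx.cycle_graph(6)
free_cpu = {2: 80, 4: 30, 5: 60}   # assumed free computing units

def select_datacenter(src: int, strategy: str) -> int:
    dcs = list(free_cpu)
    if strategy == "CRF":    # Computing Resource First
        return max(dcs, key=lambda d: free_cpu[d])
    if strategy == "SPF":    # Shortest Path First
        return min(dcs, key=lambda d: nx.shortest_path_length(G, src, d))
    return random.choice(dcs)  # RD: Random Destination

for s in ("CRF", "SPF", "RD"):
    print(s, "->", select_datacenter(0, s))
```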
7. Efficient Digital Twin Placement for Blockchain-Empowered Wireless Computing Power Network
Authors: Wei Wu, Liang Yu, Liping Yang, Yadong Zhang, Peng Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 587–603 (17 pages).
As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for achieving efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, the monitoring of communication states, and the dynamic nature of the network, whereas Digital Twins (DT) can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting smooth operation of WCPN services. In this paper, we propose a DT for blockchain-empowered WCPN architecture that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated-annealing-based near-optimal placement algorithm (ISAPA) to achieve minimum average DT synchronization latency under the constraint of DT error. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
Keywords: wireless computing power network; blockchain; digital twin placement; minimum synchronization latency
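Below is a generic simulated-annealing placement sketch in Python in the spirit of ISAPA: digital twins are assigned to hosts so as to reduce an average synchronization-latency objective. The latency matrix, cooling schedule, and move rule are assumptions for illustration, not the paper's ISAPA.

```python
import math
import random

random.seed(42)

N_TWINS, N_HOSTS = 8, 3
# Assumed per-(twin, host) synchronization latency in ms.
latency = [[random.uniform(1, 10) for _ in range(N_HOSTS)]
           for _ in range(N_TWINS)]

def avg_latency(placement):
    return sum(latency[t][h] for t, h in enumerate(placement)) / N_TWINS

# Start from a random placement and anneal.
cur = [random.randrange(N_HOSTS) for _ in range(N_TWINS)]
best, best_cost = cur[:], avg_latency(cur)
temp = 5.0
while temp > 1e-3:
    cand = cur[:]
    cand[random.randrange(N_TWINS)] = random.randrange(N_HOSTS)  # move one twin
    delta = avg_latency(cand) - avg_latency(cur)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        cur = cand                      # accept downhill, or uphill w.p. e^(-delta/T)
        if avg_latency(cur) < best_cost:
            best, best_cost = cur[:], avg_latency(cur)
    temp *= 0.995                       # geometric cooling

print("best average sync latency (ms): %.2f" % best_cost)
```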
8. Edge computing oriented virtual optical network mapping scheme based on fragmentation prediction
Authors: 何烁, BAI Huifeng, HUO Chao, ZHANG Ganghong. High Technology Letters (EI, CAS), 2024, Issue 2, pp. 158–163 (6 pages).
As edge computing services soar, the problem of resource fragmentation is greatly worsened in elastic optical networks (EONs). To solve this problem, this article proposes a fragmentation prediction model that makes full use of the gated recurrent unit (GRU) algorithm. Based on the fragmentation prediction model, a virtual optical network mapping scheme is presented for edge-computing-driven EONs. With the fragmentation degree minimized over the whole EON, virtual network mapping can be conducted successively. Test results show that the proposed approach reduces the blocking rate and greatly improves the ability to support virtual optical network services.
Keywords: elastic optical networks; virtual optical network; fragmentation self-awareness; edge computing
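A minimal PyTorch sketch of a GRU-based one-step predictor for a fragmentation-degree time series, in the spirit of the prediction model described; the network size, the toy data, and the training loop are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy fragmentation-degree series in [0, 1] (assumed, for illustration).
t = torch.linspace(0, 20, 300)
series = 0.5 + 0.4 * torch.sin(t)

# Windows of 10 past values -> next value.
X = torch.stack([series[i:i + 10] for i in range(len(series) - 10)])
y = series[10:]

class GRUPredictor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                   # x: (batch, 10)
        out, _ = self.gru(x.unsqueeze(-1))  # (batch, 10, hidden)
        return self.head(out[:, -1]).squeeze(-1)

model = GRUPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```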
9. Security Implications of Edge Computing in Cloud Networks
Author: Sina Ahmadi. Journal of Computer and Communications, 2024, Issue 2, pp. 26–46 (21 pages).
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Therefore, there is a need to implement proper security measures to overcome these issues. Emerging trends, such as machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and foster a secure and safe future for cloud computing. It is concluded that the security implications of edge computing can be addressed with the help of new technologies and techniques.
Keywords: edge computing; cloud networks; artificial intelligence; machine learning; cloud security
10. Analysis and Optimization on Partition-Based Caching and Delivery in Satellite-Terrestrial Edge Computing Networks (Cited by 2)
Authors: Peng Wang, Xing Zhang, Jiaxin Zhang, Shuang Zheng, Wenhao Liu. China Communications (SCIE, CSCD), 2023, Issue 3, pp. 252–285 (34 pages).
As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the enhanced, powerful file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based caching and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from varied nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file set parameters, we propose a near-optimal partition-based caching and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The complex nonlinear programming problem is settled by jointly utilizing the standard particle swarm optimization (PSO) method and a greedy-based multiple knapsack choice problem (MKCP) optimization method. Numerical results show that, compared with the terrestrial-only caching strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
Keywords: edge computing; satellite-terrestrial networks; caching deployment; stochastic geometry; 6G networks
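To illustrate the greedy knapsack flavor of the MKCP step, here is a hedged Python sketch that fills a node's cache with file partitions in decreasing value-density order (popularity per unit size); the partition list, sizes, and the value model are invented for illustration, not the paper's formulation.

```python
# Hypothetical file partitions: (name, size in GB, popularity score).
partitions = [("f1_p1", 4, 0.30), ("f1_p2", 4, 0.10),
              ("f2_p1", 2, 0.25), ("f3_p1", 6, 0.20),
              ("f4_p1", 3, 0.15)]
CACHE_GB = 10  # assumed cache budget of one edge node

def greedy_cache(items, budget):
    """Fill the cache in decreasing popularity-per-GB order."""
    chosen, used = [], 0
    for name, size, pop in sorted(items, key=lambda x: x[2] / x[1],
                                  reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen, used

chosen, used = greedy_cache(partitions, CACHE_GB)
print(chosen, f"({used} GB used)")
```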
11. Improved Network Validity Using Various Soft Computing Techniques
Authors: M. Yuvaraju, R. Elakkiyavendan. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 5, pp. 1465–1477 (13 pages).
Nowadays, when the lifespan of sensor nodes is threatened by the shortage of energy available for communication, sink mobility is an excellent technique for increasing it. When communicating via a WSN, the use of nodes as a transmission method eliminates the need for a physical medium. Sink mobility in a dynamic network topology presents a problem for sensor nodes that have reserved resources: unless the route is revised to reflect the location of the mobile sink, it will be inefficient for delivering data effectively. In the clustering strategy, nodes are grouped together to improve communication; the cluster head receives data from compatible nodes, and the sink receives the aggregated data from the head. The cluster head is the central node in the conventional technique, and a single node uses more energy than a node that routes around a dead node; increasing the number of users of a route also shortens its lifespan. The proposed work demonstrates how sensor node paths can be modified effectively at lower cost by utilizing a virtual grid. The best routes are maintained mostly through sink node communication on routes based on virtual-grid-based dynamic route adjustment (VGDRA). Only specific nodes are engaged to re-align data delivery to the mobile sink in accordance with the new route-reconstruction paradigm. According to the results, VGDRA schemes have a longer lifespan because of the reduced number of loops.
Keywords: soft computing; intelligent systems; wireless networks; sensor
12. Path Computing Scheme with Low Latency and Low Power in Hybrid Cloud-Fog Network for IIoT
Authors: Jijun Ren, Peng Zhu, Zhiyuan Ren. China Communications (SCIE, CSCD), 2023, Issue 8, pp. 1–16 (16 pages).
With the rapid development of the Industrial Internet of Things (IIoT), the traditional centralized cloud processing model has encountered the challenges of high communication latency and high energy consumption in handling industrial big-data tasks. This paper proposes a low-latency and low-energy path computing scheme for these problems. The scheme is based on a cloud-fog network architecture: the computing resources of fog network devices in the fog computing layer are used to complete task processing step by step during the data interaction from industrial field devices to the cloud center. A collaborative scheduling strategy based on the particle diversity discrete binary particle swarm optimization (PDBPSO) algorithm is proposed to deploy manufacturing tasks to the fog computing layer reasonably. A task in the form of a directed acyclic graph (DAG) is mapped to a factory fog network in the form of an undirected graph (UG) to find an appropriate computing path for the task, significantly reducing task processing latency under energy consumption constraints. Simulation experiments show that this scheme's latency performance outperforms both the strategy in which tasks are wholly offloaded to the cloud and the strategy in which tasks are entirely offloaded to edge equipment.
Keywords: collaborative offloading strategy; cloud-fog network architecture; Industrial Internet of Things; path computing; PDBPSO
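As background for the PDBPSO algorithm named above, here is a hedged sketch of plain discrete binary PSO (Kennedy-Eberhart style) minimizing a toy offloading cost in Python; the sigmoid bit-update and the cost function are standard textbook choices, not the paper's particle-diversity variant.

```python
import math
import random

random.seed(7)

N_TASKS, N_PARTICLES, ITERS, V_MAX = 10, 20, 100, 6.0
# Assumed toy costs of running task i in fog (bit = 1) vs. cloud (bit = 0).
fog_cost = [random.uniform(1, 5) for _ in range(N_TASKS)]
cloud_cost = [random.uniform(2, 6) for _ in range(N_TASKS)]

def cost(bits):
    return sum(fog_cost[i] if b else cloud_cost[i] for i, b in enumerate(bits))

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

pos = [[random.randint(0, 1) for _ in range(N_TASKS)] for _ in range(N_PARTICLES)]
vel = [[0.0] * N_TASKS for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)

for _ in range(ITERS):
    for p in range(N_PARTICLES):
        for d in range(N_TASKS):
            r1, r2 = random.random(), random.random()
            v = vel[p][d] + 2 * r1 * (pbest[p][d] - pos[p][d]) \
                          + 2 * r2 * (gbest[d] - pos[p][d])
            vel[p][d] = max(-V_MAX, min(V_MAX, v))   # clamp the velocity
            # Binary PSO: velocity sets the probability that the bit is 1.
            pos[p][d] = 1 if random.random() < sigmoid(vel[p][d]) else 0
        if cost(pos[p]) < cost(pbest[p]):
            pbest[p] = pos[p][:]
    gbest = min(pbest, key=cost)

print("best offloading vector:", gbest, "cost: %.2f" % cost(gbest))
```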
13. Quantum Computing Based Neural Networks for Anomaly Classification in Real-Time Surveillance Videos
Authors: MD. Yasar Arafath, A. Niranjil Kumar. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 8, pp. 2489–2508 (20 pages).
For intelligent surveillance videos, anomaly detection is extremely important. Deep learning algorithms have become popular for evaluating real-time surveillance recordings, covering events such as traffic accidents and criminal or unlawful incidents, including suicide attempts. Nevertheless, deep learning methods for classification, like convolutional neural networks, require a lot of computing power. Quantum computing is a branch of technology that solves abnormal and complex problems using quantum mechanics. As a result, the focus of this research is on developing a hybrid quantum computing model based on deep learning. This research develops a Quantum Computing-based Convolutional Neural Network (QC-CNN) to extract features and classify anomalies from surveillance footage. A quantum circuit, such as the real-amplitude circuit, is utilized to improve the performance of the model. To the best of our knowledge, this is the first work to employ quantum deep learning techniques to classify anomalous events in video surveillance applications. Thirteen anomaly classes from the UCF-Crime dataset are classified. Based on experimental results, the proposed model efficiently classifies data, as measured by the confusion matrix, Receiver Operating Characteristic (ROC), accuracy, Area Under Curve (AUC), precision, recall, and F1-score. The proposed QC-CNN attains a best accuracy of 95.65%, which is 5.37% greater than that of other existing models. To measure the efficiency of the proposed work, QC-CNN is also evaluated against classical and quantum models.
Keywords: deep learning; video surveillance; quantum computing; anomaly detection; convolutional neural network
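The "real-amplitude circuit" mentioned above corresponds to the RealAmplitudes ansatz in Qiskit's circuit library; the short sketch below only shows how such an ansatz is instantiated and parameterized, and makes no claim about the paper's full QC-CNN pipeline. The qubit count, depth, and parameter values are illustrative.

```python
from qiskit.circuit.library import RealAmplitudes

# Hardware-efficient ansatz producing only real amplitudes: layers of RY
# rotations interleaved with CX entanglers (illustrative size).
ansatz = RealAmplitudes(num_qubits=4, reps=2)
print("trainable parameters:", ansatz.num_parameters)

# Bind illustrative parameter values to obtain a concrete circuit.
bound = ansatz.assign_parameters([0.1] * ansatz.num_parameters)
print(bound.decompose())
```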
14. Energy-efficient task allocation for reliable parallel computation of cluster-based wireless sensor network in edge computing
Authors: Jiabao Wen, Jiachen Yang, Tianying Wang, Yang Li, Zhihan Lv. Digital Communications and Networks (SCIE, CSCD), 2023, Issue 2, pp. 473–482 (10 pages).
To efficiently complete a complex computation task, the task should be decomposed into sub-computation tasks that run in parallel in edge computing. A wireless sensor network (WSN) is a typical application of parallel computation. To achieve highly reliable parallel computation in a wireless sensor network, the network's lifetime needs to be extended; therefore, a proper task allocation strategy is needed to reduce energy consumption and balance the load of the network. This paper proposes a task model and a cluster-based WSN model in edge computing. In our model, different tasks require different types of resources and different sensors provide different types of resources, so the model is heterogeneous, which makes it more practical. We then propose a task allocation algorithm that combines the genetic algorithm (GA) and the ant colony optimization (ACO) algorithm. The algorithm concentrates on energy conservation and load balancing so that the lifetime of the network can be extended. The experimental results show the algorithm's effectiveness and advantages in energy conservation and load balancing.
Keywords: wireless sensor network; parallel computation; task allocation; genetic algorithm; ant colony optimization algorithm; energy-efficient; load balancing
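To ground the GA half of the hybrid GA-ACO allocator, below is a hedged Python sketch of a genetic algorithm assigning tasks to sensor clusters while penalizing load imbalance; the encoding, fitness function, and operators are generic textbook choices, not the paper's exact hybrid.

```python
import random

random.seed(3)

N_TASKS, N_CLUSTERS = 12, 4
task_load = [random.uniform(1, 5) for _ in range(N_TASKS)]

def fitness(chrom):
    """Lower is better: total energy proxy + load-imbalance penalty."""
    loads = [0.0] * N_CLUSTERS
    for task, cluster in enumerate(chrom):
        loads[cluster] += task_load[task]
    return sum(loads) + 5 * (max(loads) - min(loads))

def crossover(a, b):
    cut = random.randrange(1, N_TASKS)      # one-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(N_CLUSTERS) if random.random() < rate else g
            for g in chrom]

pop = [[random.randrange(N_CLUSTERS) for _ in range(N_TASKS)]
       for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness)
    elite = pop[:10]                        # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(20)]

best = min(pop, key=fitness)
print("best assignment:", best, "fitness: %.2f" % fitness(best))
```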
15. Leveraging Quantum Computing for the Ising Model to Simulate Two Real Systems: Magnetic Materials and Biological Neural Networks (BNNs)
Authors: David L. Cao, Khoi Dinh. Journal of Quantum Information Science, 2023, Issue 3, pp. 138–155 (18 pages).
Quantum computing is a field of increasing relevance as quantum hardware improves and more applications of quantum computing are discovered. In this paper, we demonstrate the feasibility of modeling Ising-model Hamiltonians on the IBM quantum computer. We developed quantum circuits to simulate these systems more efficiently for both closed- and open-boundary Ising models, with and without perturbations. We tested various geometries of systems in both 1-D and 2-D space to mimic two real systems: magnetic materials and biological neural networks (BNNs). Our quantum model is more efficient than classical computers, which can struggle to simulate large, complex systems of particles.
Keywords: Ising model; magnetic material; biological neural network; quantum computing; International Business Machines (IBM)
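For reference, the classical 1-D Ising energy underlying such Hamiltonians is E(s) = -J Σ_i s_i s_{i+1} - h Σ_i s_i. The short Python sketch below evaluates it under closed (periodic) and open boundaries, so the two cases discussed in the abstract can be compared on a classical machine by brute force; the coupling J and field h are illustrative.

```python
from itertools import product

J, H = 1.0, 0.5   # illustrative coupling and external field

def ising_energy(spins, periodic=True):
    """E = -J * sum_i s_i s_{i+1} - h * sum_i s_i (1-D chain)."""
    n = len(spins)
    pairs = range(n) if periodic else range(n - 1)
    interaction = sum(spins[i] * spins[(i + 1) % n] for i in pairs)
    return -J * interaction - H * sum(spins)

# Exhaustive ground-state search over a 6-spin chain.
for bc in (True, False):
    ground = min(product([-1, 1], repeat=6),
                 key=lambda s: ising_energy(s, periodic=bc))
    label = "closed" if bc else "open"
    print(label, "boundary ground state:", ground,
          "E =", ising_energy(ground, periodic=bc))
```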
16. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 1–23 (23 pages).
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used for building AI systems are thirsty for computing power, which is barely satisfied by conventional computing hardware. In the post-Moore era, the increase in computing power brought about by CMOS size reduction in very-large-scale integrated circuits (VLSI) struggles to meet the growing demand for AI computing power. To address the issue, technical approaches like neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms much more parallelly and energy-efficiently. Inspired by the architecture of human neural networks, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture like a spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
17. Multi-layer network embedding on SCC-based network with motif
Authors: Lu Sun, Xiaona Li, Mingyue Zhang, Liangtian Wan, Yun Lin, Xianpeng Wang, Gang Xu. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 3, pp. 546–556 (11 pages).
The interconnection of all things challenges traditional communication methods, and Semantic Communication and Computing (SCC) will become a new solution. It is a challenging task to accurately detect, extract, and represent semantic information in the research of SCC-based networks. In previous research, researchers usually use convolution to extract the feature information of a graph and perform the corresponding node classification task. However, the content of semantic information is quite complex. Although graph convolutional neural networks provide an effective solution for node classification tasks, the extracted feature information suffers varying degrees of loss because of their limitations in representing multiple relational patterns and in recognizing and analyzing higher-order local structures. Therefore, this paper extends from a single-layer topology network to a multi-layer heterogeneous topology network. Word vectors trained with Bidirectional Encoder Representations from Transformers (BERT) are introduced to extract the semantic features in the network, and the existing graph neural network is improved by combining it with a higher-order local feature module of the network representation model. A multi-layer network embedding algorithm on SCC-based networks with motifs is proposed to complete the end-to-end node classification task. We verify the effectiveness of the algorithm on a real multi-layer heterogeneous network.
Keywords: semantic communication and computing; multi-layer network; graph neural network; motif
18. A Self-Attention Based Dynamic Resource Management for Satellite-Terrestrial Networks
Authors: Lin Tianhao, Luo Zhiyong. China Communications (SCIE, CSCD), 2024, Issue 4, pp. 136–150 (15 pages).
Satellite-terrestrial networks possess the ability to transcend the geographical constraints inherent in traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which is an important development direction for future communications. In this paper, we consider a multi-scenario network model under the coverage of a low earth orbit (LEO) satellite, which can provide computing resources to users in faraway areas to improve task processing efficiency. However, LEO satellites experience limitations in computing and communication resources, and the channels are time-varying and complex, which makes the extraction of state information a daunting task. Therefore, we explore the dynamic resource management issue of joint computing, communication resource allocation, and power control for multi-access edge computing (MEC). To tackle this formidable issue, we transform it into a Markov decision process (MDP) problem and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state-information features to enhance the training process. Simulation results show that the proposed algorithm effectively reduces the long-term average delay and energy consumption of the tasks.
Keywords: mobile edge computing; resource management; satellite-terrestrial networks; self-attention
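The self-attention block at the heart of such an algorithm is scaled dot-product attention over the state features; a minimal NumPy sketch is shown below, with token count and dimensions chosen arbitrarily for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

# 6 state-feature tokens of dimension 8 (e.g., per-user channel/queue stats).
X = rng.normal(size=(6, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)  # (6, 8): one attended feature vector per token
```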
19. Neural Network Robust Control Based on Computed Torque for Lower Limb Exoskeleton
Authors: Yibo Han, Hongtao Ma, Yapeng Wang, Di Shi, Yanggang Feng, Xianzhong Li, Yanjun Shi, Xilun Ding, Wuxiang Zhang. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 83–99 (17 pages).
Lower limb exoskeletons are used to assist wearers in various scenarios, such as medical and industrial settings. Complex modeling errors of the exoskeleton in different application scenarios pose challenges to the robustness and stability of its control algorithm. The radial basis function (RBF) neural network is widely used to compensate for modeling errors. To solve the problem that current RBF neural network controllers cannot guarantee asymptotic stability, a neural network robust control algorithm based on the computed torque method is proposed in this paper, focusing on trajectory tracking. It innovatively incorporates a robust adaptive term while introducing the RBF neural network term, improving the compensation ability for modeling errors. The stability of the algorithm is proved by the Lyapunov method, and the effectiveness of the robust adaptive term is verified by simulation. Experiments wearing the exoskeleton under different walking speeds and scenarios were carried out, and the results show that the absolute values of the tracking errors of the hip and knee joints are consistently less than 1.5° and 2.5°, respectively. The proposed control algorithm effectively compensates for modeling errors and exhibits high robustness.
Keywords: lower limb exoskeleton; model compensation; RBF neural network; computed torque method
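As a reference point for the computed torque method named here, below is a hedged Python sketch of the standard control law tau = M(q)(ddq_d + Kd*de + Kp*e) + gravity compensation on a one-link arm; the dynamics and gains are illustrative and unrelated to the paper's exoskeleton model or its RBF/robust-adaptive terms.

```python
import math

# One-link arm parameters (illustrative, not the exoskeleton's).
M, L, G = 1.0, 0.5, 9.81           # mass, length, gravity
I = M * L * L                      # inertia about the joint
KP, KD = 100.0, 20.0               # PD gains on the tracking error

def computed_torque(q, dq, q_d, dq_d, ddq_d):
    """tau = I*(ddq_d + Kd*de + Kp*e) + gravity compensation."""
    e, de = q_d - q, dq_d - dq
    return I * (ddq_d + KD * de + KP * e) + M * G * L * math.cos(q)

# Simulate tracking of q_d(t) = sin(t) with forward Euler.
q, dq, dt = 0.0, 0.0, 1e-3
for step in range(5000):
    t = step * dt
    tau = computed_torque(q, dq, math.sin(t), math.cos(t), -math.sin(t))
    ddq = (tau - M * G * L * math.cos(q)) / I   # plant dynamics
    dq += ddq * dt
    q += dq * dt

print("tracking error at t = 5 s: %.4f rad" % (math.sin(5.0) - q))
```

With an exact model, this cancellation yields linear error dynamics e'' + KD*e' + KP*e = 0; the paper's RBF and robust adaptive terms address the case where the model terms are not known exactly.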
20. Lightweight Intrusion Detection Using Reservoir Computing
Authors: Jiarui Deng, Wuqiang Shen, Yihua Feng, Guosheng Lu, Guiquan Shen, Lei Cui, Shanxiang Lyu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1345–1361 (17 pages).
The blockchain-empowered Internet of Vehicles (IoV) enables various services and achieves data security and privacy, significantly advancing modern vehicle systems. However, the increased frequency of data transmission and the complex network connections among nodes also make them more susceptible to adversarial attacks. As a result, an efficient intrusion detection system (IDS) becomes crucial for securing the IoV environment. Existing IDSs based on convolutional neural networks (CNN) often suffer from long training times and high storage requirements. In this paper, we propose a lightweight IDS solution to protect the IoV against both intra-vehicle and external threats. Our approach achieves superior performance, as demonstrated by key metrics such as accuracy and precision. Specifically, our method achieves accuracy rates ranging from 99.08% to 100% on the Car-Hacking dataset, with a remarkably short training time.
Keywords: echo state network; intrusion detection system; Internet of Vehicles; reservoir computing