Journal Articles: 25 results found
1. Computing Power Network: A Survey  (Cited by: 1)
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. 《China Communications》 (SCIE, CSCD), 2024, No. 9, pp. 109-145 (37 pages).
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend toward ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources because of the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm is proposed, i.e., the Computing Power Network (CPN). A computing power network connects ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues concerning computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. Applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented as well.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
2. Joint Optimization of Energy Consumption and Network Latency in Blockchain-Enabled Fog Computing Networks
Authors: Huang Xiaoge, Yin Hongbo, Cao Bin, Wang Yongsheng, Chen Qianbin, Zhang Jie. 《China Communications》 (SCIE, CSCD), 2024, No. 4, pp. 104-119 (16 pages).
Fog computing is considered a solution to accommodate the booming requirements of a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of the fog nodes (FNs), which is used to select blockchain nodes (BNs) from the FNs to participate in the consensus process. Using the Rivest-Shamir-Adleman (RSA) encryption algorithm applied in the blockchain system, FNs can verify the identity of a node through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithm and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithm efficiently reduces the network overhead and achieves a considerable performance improvement over related algorithms in the previous literature.
Keywords: blockchain; energy consumption; fog computing network; Internet of Things; latency
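As a rough illustration of the reputation-driven node selection idea described in the abstract above (not the paper's R-DBFT or DORA algorithms), the following Python sketch updates each fog node's credibility from the outcome of its last interaction and picks the highest-reputation nodes as blockchain nodes. The decay factor, reward/penalty values, and committee size are assumptions.

```python
# Illustrative sketch only: reputation-weighted selection of blockchain nodes
# from fog nodes. The decay, reward, penalty, and committee size are assumptions.

def update_reputation(reputation, behaved_well, decay=0.9, reward=1.0, penalty=2.0):
    """Exponentially discount past reputation and blend in the latest outcome."""
    delta = reward if behaved_well else -penalty
    return max(0.0, decay * reputation + (1 - decay) * delta)

def select_blockchain_nodes(reputations, committee_size):
    """Pick the top-`committee_size` fog nodes by reputation as consensus nodes."""
    ranked = sorted(reputations, key=reputations.get, reverse=True)
    return ranked[:committee_size]

if __name__ == "__main__":
    reps = {"FN1": 0.6, "FN2": 0.4, "FN3": 0.8, "FN4": 0.5}
    reps["FN1"] = update_reputation(reps["FN1"], behaved_well=True)
    reps["FN2"] = update_reputation(reps["FN2"], behaved_well=False)
    print(select_blockchain_nodes(reps, committee_size=2))
```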
3. Efficient Digital Twin Placement for Blockchain-Empowered Wireless Computing Power Network
Authors: Wei Wu, Liang Yu, Liping Yang, Yadong Zhang, Peng Wang. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 7, pp. 587-603 (17 pages).
As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for achieving efficient and secure resource management, because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. Digital Twins (DT), by contrast, can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting the smooth operation of WCPN services. In this paper, we propose a DT architecture for blockchain-empowered WCPN that guarantees real-time data transmission between physical entities and their digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated annealing-based near-optimal placement algorithm (ISAPA) to achieve the minimum average DT synchronization latency under a DT error constraint. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
Keywords: wireless computing power network; blockchain; digital twin placement; minimum synchronization latency
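The abstract names an improved simulated annealing placement algorithm (ISAPA); the sketch below is only a generic simulated annealing loop for assigning digital twins to hosts so as to minimize average synchronization latency. The latency matrix, cooling schedule, and neighbor move are illustrative assumptions, not the paper's ISAPA.

```python
import math
import random

# Generic simulated annealing for a placement problem (illustrative only).
# latency[d][h] = synchronization latency if digital twin d is placed on host h.

def average_latency(placement, latency):
    return sum(latency[d][h] for d, h in enumerate(placement)) / len(placement)

def anneal_placement(latency, t0=1.0, t_min=1e-3, alpha=0.95, steps=200, seed=0):
    rng = random.Random(seed)
    n_dt, n_host = len(latency), len(latency[0])
    placement = [rng.randrange(n_host) for _ in range(n_dt)]
    best = list(placement)
    t = t0
    while t > t_min:
        for _ in range(steps):
            cand = list(placement)
            cand[rng.randrange(n_dt)] = rng.randrange(n_host)  # move one digital twin
            delta = average_latency(cand, latency) - average_latency(placement, latency)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                placement = cand
                if average_latency(placement, latency) < average_latency(best, latency):
                    best = list(placement)
        t *= alpha  # cool down
    return best, average_latency(best, latency)

if __name__ == "__main__":
    latency = [[5, 2, 8], [3, 7, 1], [6, 4, 2]]  # 3 twins, 3 candidate hosts (toy values)
    print(anneal_placement(latency))
```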
4. Computing Power Network: The Architecture of Convergence of Computing and Networking towards 6G Requirement  (Cited by: 32)
Authors: Xiongyan Tang, Chang Cao, Youxiang Wang, Shuai Zhang, Ying Liu, Mingxuan Li, Tao He. 《China Communications》 (SCIE, CSCD), 2021, No. 2, pp. 175-185 (11 pages).
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with a strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The analysis shows that the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and can adapt to the integration needs of computing power and network in various scenarios, such as user-oriented services, government and enterprise services, and open computing power.
Keywords: 6G; edge computing; cloud computing; convergence of cloud and network; computing power network
5. Joint Resource Allocation Using Evolutionary Algorithms in Heterogeneous Mobile Cloud Computing Networks  (Cited by: 10)
Authors: Weiwei Xia, Lianfeng Shen. 《China Communications》 (SCIE, CSCD), 2018, No. 8, pp. 189-204 (16 pages).
The problem of joint radio and cloud resource allocation is studied for heterogeneous mobile cloud computing networks. The objective of the proposed joint resource allocation schemes is to maximize the total utility of users while satisfying the required quality of service (QoS), such as the end-to-end response latency experienced by each user. We formulate the joint resource allocation problem as a combinatorial optimization problem. Three evolutionary approaches are considered to solve it: the genetic algorithm (GA), ant colony optimization with genetic algorithm (ACO-GA), and the quantum genetic algorithm (QGA). To decrease the time complexity, we propose a mapping process between the resource allocation matrix and the chromosomes of GA, ACO-GA, and QGA, search the available radio and cloud resource pairs based on the resource availability matrices for ACO-GA, and encode the difference between the allocated resources and the minimum resource requirement for QGA. Extensive simulation results show that the proposed methods greatly outperform existing algorithms in terms of running time, accuracy of the final results, total utility, resource utilization, and end-to-end response latency guarantees.
Keywords: heterogeneous mobile cloud computing networks; resource allocation; genetic algorithm; ant colony optimization; quantum genetic algorithm
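To make the chromosome mapping mentioned in the abstract concrete, here is a small hedged sketch of one plausible encoding: each gene holds the index of a (radio, cloud) resource pair assigned to a user, so a chromosome is simply a flattened resource allocation matrix. The encoding and the toy fitness function are illustrative assumptions, not the paper's GA/ACO-GA/QGA design.

```python
import random

# Illustrative GA encoding: chromosome[i] = index of the (radio, cloud) resource
# pair allocated to user i. The pair list and utility values are toy assumptions.

RESOURCE_PAIRS = [(r, c) for r in range(3) for c in range(2)]  # 3 radio x 2 cloud resources

def decode(chromosome):
    """Map a chromosome back to an explicit user -> (radio, cloud) allocation."""
    return {user: RESOURCE_PAIRS[gene] for user, gene in enumerate(chromosome)}

def fitness(chromosome, utility):
    """Total user utility, penalizing two users that share the same resource pair."""
    allocation = decode(chromosome)
    penalty = len(chromosome) - len(set(allocation.values()))
    return sum(utility[u][g] for u, g in enumerate(chromosome)) - 10.0 * penalty

def random_population(n_users, pop_size, rng):
    return [[rng.randrange(len(RESOURCE_PAIRS)) for _ in range(n_users)]
            for _ in range(pop_size)]

if __name__ == "__main__":
    rng = random.Random(1)
    n_users = 4
    utility = [[rng.uniform(0, 1) for _ in RESOURCE_PAIRS] for _ in range(n_users)]
    population = random_population(n_users, pop_size=8, rng=rng)
    best = max(population, key=lambda c: fitness(c, utility))
    print(decode(best), round(fitness(best, utility), 3))
```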
6. Numerical simulation of neuronal spike patterns in a retinal network model  (Cited by: 1)
Authors: Lei Wang, Shenquan Liu, Shanxing Ou. 《Neural Regeneration Research》 (SCIE, CAS, CSCD), 2011, No. 16, pp. 1254-1260 (7 pages).
This study utilized a neuronal compartment model and the NEURON software to study the effects of external light stimulation on retinal photoreceptors and the spike patterns of neurons in a retinal network. Following light stimulation of different shapes and sizes, changes in the spike features of ganglion cells indicated that different shapes of light stimulation elicit different retinal responses. By manipulating the shape of the light stimulus, we investigated the effects of the large number of electrical synapses existing between retinal neurons. Model simulation and analysis suggested that interplexiform cells play an important role in visual signal processing in the retina, and the findings indicated that the constructed retinal network model is reliable and feasible. In addition, the simulation results demonstrated that ganglion cells exhibit a variety of spike patterns under different stimulation sizes and shapes, reflecting the functions of the retina in signal transmission and processing.
Keywords: computational network model; retina; light stimulation; ganglion cell; spike pattern
7. AN OBJECT-ORIENTED DISTRIBUTED MULTIMEDIA EDITOR
Authors: Xu Dan (Computer Science Department, Yunnan University, Kunming, Yunnan Province, 650091, P.R. China); Pan Zhigeng, Shi Jiaoying (State Key Lab. of CAD and CG, Department of Applied Mathematics, Zhejiang University, Hangzhou, Zhejiang Province, 310027, P.R. China). 《Computer Aided Drafting, Design and Manufacturing》, 1996, No. 2, pp. 35-42.
Open Editor is an object-oriented multimedia editor that runs in a distributed network environment. To add audio media to multimedia applications, an audio server based on the Client/Server paradigm is designed. In this paper, we first give an overview of Open Editor, and then present an in-depth discussion of the implementation techniques of the audio functions.
Keywords: multimedia; window system; audio server; networked computing; object-oriented
8. Efficient Broadcast Retransmission Based on Network Coding for InterPlaNetary Internet  (Cited by: 1)
Authors: 苟亮, 边东明, 张更新, 徐志平, 申振. 《China Communications》 (SCIE, CSCD), 2013, No. 8, pp. 111-124 (14 pages).
In traditional wireless broadcast networks, a corrupted packet must be retransmitted even if it has been lost by only one receiver. Obviously, this is not bandwidth-efficient for the receivers that already hold the retransmitted packet. Therefore, it is important to develop a method that realizes efficient broadcast retransmission. Network coding is a promising technique in this scenario. However, none of the schemes proposed so far achieves both high transmission efficiency and low computational complexity simultaneously. To address this problem, a novel Efficient Opportunistic Network Coding Retransmission (EONCR) scheme is proposed in this paper. The scheme employs a new packet scheduling algorithm that uses a Packet Distribution Matrix (PDM) directly to select the coded packets. The analysis and simulation results indicate that the transmission efficiency of EONCR exceeds that of previously proposed schemes by more than 0.1 under some simulation conditions, while the computational overhead is reduced substantially. Hence, it has great application prospects in wireless broadcast networks, especially energy- and bandwidth-limited systems such as satellite broadcast systems and Planetary Networks (PNs).
Keywords: wireless broadcast retransmission; opportunistic network coding; packet scheduling; transmission efficiency; computational complexity; PN
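As a hedged illustration of the idea behind coded retransmission (not the paper's EONCR/PDM algorithm), the sketch below greedily XOR-combines lost packets so that no receiver misses more than one packet in the coded combination, which is the condition for every receiver to decode that combination directly from the packets it already holds.

```python
# Illustrative only: greedy selection of packets to XOR into one coded
# retransmission. loss[r][p] is True when receiver r lost packet p.
# A receiver can decode an XOR of packets if it misses at most one of them.

def select_coded_packets(loss):
    n_receivers = len(loss)
    n_packets = len(loss[0]) if loss else 0
    lost_somewhere = [p for p in range(n_packets) if any(row[p] for row in loss)]
    coded, missing_count = [], [0] * n_receivers
    for p in lost_somewhere:
        # Tentatively add packet p; keep it only if every receiver still misses
        # at most one packet of the combination.
        trial = [missing_count[r] + (1 if loss[r][p] else 0) for r in range(n_receivers)]
        if all(m <= 1 for m in trial):
            coded.append(p)
            missing_count = trial
    return coded

if __name__ == "__main__":
    # 3 receivers, 4 packets; True marks a lost packet.
    loss = [[True, False, False, False],
            [False, True, False, True],
            [False, False, True, False]]
    print(select_coded_packets(loss))  # indices of packets that can be XOR-ed together
```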
9. A Novel Stateful PCE-Cloud Based Control Architecture of Optical Networks for Cloud Services  (Cited by: 1)
Authors: QIN Panke, CHEN Xue, WANG Lei, WANG Liqian. 《China Communications》 (SCIE, CSCD), 2015, No. 10, pp. 117-127 (11 pages).
The next-generation optical network is a service-oriented network that can be delivered by using a generalized multiprotocol label switching (GMPLS) based control plane to realize many intelligent features, such as rapid provisioning, automated protection and restoration (P&R), efficient resource allocation, and support for different quality of service (QoS) requirements. In this paper, we propose a novel stateful PCE-cloud (SPC) based architecture of GMPLS optical networks for cloud services. Cloud computing technologies (e.g., virtualization and parallel computing) are applied to the construction of the SPC to improve reliability and maximize resource utilization. The functions of the SPC and the GMPLS-based control plane are extended according to the features of cloud services with different QoS requirements. The architecture and a detailed description of the components of the SPC are provided. Different potential cooperation relationships between the public stateful PCE cloud (PSPC) and the region stateful PCE cloud (RSPC) are investigated. Moreover, we present a policy-enabled and constraint-based routing scheme based on the cooperation of the PSPC and RSPC. Simulation results verifying the routing performance and control plane reliability are analyzed.
Keywords: optical networks; control plane; GMPLS; stateful PCE; cloud computing; QoS
10. Process of Petri Nets Extension
Authors: ZHOU Guofu, HE Yanxiang, DU Zhuomin. 《Wuhan University Journal of Natural Sciences》 (EI, CAS), 2006, No. 2, pp. 351-354 (4 pages).
To describe the dynamic semantics of network computing, the concept of process is presented, based on a semantic model with variables, resources, and relations. Accordingly, the formal definition of process and the mapping rules from the Petri nets extension specification to process are discussed in detail. Based on the collective concepts of process, the specification of dynamic semantics is also constructed as a net system. Finally, to illustrate process intuitively, a complete example is specified.
Keywords: network computing; computing model; process; Petri nets
11. Federated learning based QoS-aware caching decisions in fog-enabled internet of things networks
Authors: Xiaoge Huang, Zhi Chen, Qianbin Chen, Jie Zhang. 《Digital Communications and Networks》 (SCIE, CSCD), 2023, No. 2, pp. 580-589 (10 pages).
Quality of Service (QoS) in the 6G application scenario is an important issue under the premise of massive data transmission. Edge caching based on the fog computing network is considered a potential solution to effectively reduce the content fetch delay for latency-sensitive services of Internet of Things (IoT) devices. In time-varying scenarios, machine learning techniques can further reduce the content fetch delay by optimizing the caching decisions. In this paper, to minimize the content fetch delay and ensure the QoS of the network, a Device-to-Device (D2D) assisted fog computing network architecture is introduced, which supports federated learning and QoS-aware caching decisions based on time-varying user preferences. To relieve network congestion and reduce the risk of user privacy leakage, federated learning is enabled in the D2D-assisted fog computing network. Specifically, it has been observed that federated learning yields suboptimal results when local user data are Non-Independent and Identically Distributed (Non-IID). To address this issue, a distributed cluster-based user preference estimation algorithm is proposed to optimize content caching placement and improve the cache hit rate, the content fetch delay, and the convergence rate; clustering effectively mitigates the impact of Non-IID data sets. The simulation results show that the proposed algorithm provides a considerable performance improvement with better learning results compared with existing algorithms.
Keywords: Fog computing network; IoT; D2D communication; Deep neural network; Federated learning
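The cluster-based user preference estimation described in the abstract can be pictured with a small sketch like the one below, which groups users by the Euclidean distance between their locally estimated content-preference vectors so that aggregation happens among users with similar, roughly IID preferences. The k-means choice, the cluster count, and the toy data are assumptions, not the paper's algorithm.

```python
import numpy as np

# Illustrative only: cluster users by their local content-preference vectors so
# that aggregation (e.g., federated averaging) happens within clusters of users
# with similar preferences. k and the toy data are assumptions.

def kmeans(prefs, k=2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = prefs[rng.choice(len(prefs), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(prefs[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # assign each user to nearest center
        for c in range(k):
            if np.any(labels == c):
                centers[c] = prefs[labels == c].mean(axis=0)  # recompute centers
    return labels

if __name__ == "__main__":
    # Rows = users, columns = estimated request probability per content item.
    prefs = np.array([[0.7, 0.2, 0.1],
                      [0.6, 0.3, 0.1],
                      [0.1, 0.2, 0.7],
                      [0.2, 0.1, 0.7]])
    print(kmeans(prefs))  # users with similar preferences share a cluster label
```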
12. Architecture and Key Technology of Distributed Intelligent Open Systems
Authors: Xiaoyu Tong, Yunyong Zhang, Bingyi Fang. 《ZTE Communications》, 2011, No. 2, pp. 53-57 (5 pages).
High-speed, large-bandwidth networks and the growth of rich internet applications have brought unprecedented pressure to bear on telecom operators. Consequently, operators need to play to the advantages of their networks, make good use of their large customer bases, and expand their business resources in service, platform, and interface. Network and customer resources should be integrated in order to create new business ecosystems. This paper describes new threats and challenges facing telecom operators and analyzes how leading operators are handling transformation in terms of operations and business model. A new concept called the distributed intelligent open system (DIOS), a public computing communication network, is proposed, and the architecture and key technologies of DIOS are discussed in detail.
Keywords: DIOS; public computing communication network (PCCN); cloud computing
13. Reputation-based joint optimization of user satisfaction and resource utilization in a computing force network
Authors: Yuexia FU, Jing WANG, Lu LU, Qinqin TANG, Sheng ZHANG. 《Frontiers of Information Technology & Electronic Engineering》 (SCIE, EI, CSCD), 2024, No. 5, pp. 685-700 (16 pages).
With the development of computing and network convergence, treating the computing and network resources of multiple providers as a whole in a computing force network (CFN) has gradually become a new trend. However, since each computing and network resource provider (CNRP) considers only its own interest and competes with other CNRPs, introducing multiple CNRPs results in a lack of trust and difficulty in unified scheduling. In addition, concurrent users have different requirements, so there is an urgent need to study how to optimally match users and CNRPs on a many-to-many basis, to improve user satisfaction and ensure the utilization of limited resources. In this paper, we adopt a reputation model based on the beta distribution function to measure the credibility of CNRPs and propose a performance-based reputation update model. Then, we formalize the problem as a constrained multi-objective optimization problem and find feasible solutions using a modified fast and elitist non-dominated sorting genetic algorithm (NSGA-II). We conduct extensive simulations to evaluate the proposed algorithm. Simulation results demonstrate that the proposed model and problem formulation are valid, and that NSGA-II is effective and can find the Pareto set of the CFN, which increases user satisfaction and resource utilization. Moreover, the solutions in the Pareto set offer more choices for the many-to-many matching of users and CNRPs according to the actual situation.
Keywords: Computing force network; Resource scheduling; Performance-based reputation; User satisfaction
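The beta-distribution reputation model mentioned in the abstract can be sketched as follows: a provider's reputation is the expected value of a Beta(α, β) distribution, where α and β accumulate positive and negative interaction outcomes. This is a generic beta-reputation sketch under assumed parameters, not the paper's performance-based update rule.

```python
# Generic beta-reputation sketch (illustrative; the forgetting factor is an
# assumption). Reputation = E[Beta(alpha, beta)] = alpha / (alpha + beta).

class BetaReputation:
    def __init__(self, alpha=1.0, beta=1.0, forgetting=0.95):
        self.alpha, self.beta, self.forgetting = alpha, beta, forgetting

    def record(self, success: bool):
        """Discount old evidence, then add one positive or negative observation."""
        self.alpha *= self.forgetting
        self.beta *= self.forgetting
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def score(self) -> float:
        return self.alpha / (self.alpha + self.beta)

if __name__ == "__main__":
    cnrp = BetaReputation()
    for outcome in [True, True, False, True]:  # observed service outcomes
        cnrp.record(outcome)
    print(round(cnrp.score, 3))
```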
14. Combining graph neural network with deep reinforcement learning for resource allocation in computing force networks
Authors: Xueying HAN, Mingxi XIE, Ke YU, Xiaohong HUANG, Zongpeng DU, Huijuan YAO. 《Frontiers of Information Technology & Electronic Engineering》 (SCIE, EI, CSCD), 2024, No. 5, pp. 701-712 (12 pages).
Fueled by the explosive growth of ultra-low-latency and real-time applications with specific computing and network performance requirements, the computing force network (CFN) has become a hot research subject. The primary CFN challenge is to jointly leverage network resources and computing resources. Although recent advances in deep reinforcement learning (DRL) have brought significant improvements in network optimization, these methods still suffer from topology changes and fail to generalize to topologies not seen in training. This paper proposes a graph neural network (GNN) based DRL framework to accommodate network traffic and computing resources jointly and efficiently. By taking advantage of the generalization capability of GNNs, the proposed method can operate over variable topologies and obtain higher performance than other DRL methods.
Keywords: Computing force network; Routing optimization; Deep learning; Graph neural network; Resource allocation
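As a toy picture of why a GNN helps here (not the paper's framework), the sketch below performs one round of mean-neighbor message passing over a network graph whose node features mix link and compute attributes; because the same weight matrices are shared by every node, the learned mapping is not tied to one fixed topology. Shapes, features, and weights are assumptions.

```python
import numpy as np

# One round of mean-aggregation message passing (toy sketch, assumed shapes).
# adj: adjacency matrix of the network topology; x: per-node features, e.g.
# [residual bandwidth, queue length, available compute].

def message_passing(adj, x, w_self, w_neigh):
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                      # avoid division by zero for isolated nodes
    neigh_mean = adj @ x / deg               # average of neighbor features
    return np.tanh(x @ w_self + neigh_mean @ w_neigh)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    x = rng.random((4, 3))                   # 4 nodes, 3 features each
    w_self, w_neigh = rng.random((3, 8)), rng.random((3, 8))
    embeddings = message_passing(adj, x, w_self, w_neigh)
    print(embeddings.shape)                  # (4, 8) node embeddings for a DRL policy
```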
15. Communication efficiency optimization of federated learning for computing and network convergence of 6G networks
Authors: Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG. 《Frontiers of Information Technology & Electronic Engineering》 (SCIE, EI, CSCD), 2024, No. 5, pp. 713-727 (15 pages).
Federated learning effectively addresses issues such as data privacy by collaborating across participating devices to train global models. However, factors such as network topology and the computing power of devices can affect its training or communication process in complex network environments. Computing and network convergence (CNC) of sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency by guiding the training of participating devices based on business requirements, resource load, network conditions, and device computing power. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods of federated learning for the CNC of 6G networks that make decisions on the training process according to different network conditions and the computing power of participating devices. The simulations address the two architectures that exist for devices in federated learning and arrange devices to participate in training based on their computing power, while optimizing communication efficiency in the process of transferring model parameters. The results show that the proposed methods cope well with complex network situations, effectively balance the delay distribution of participating devices for local training, improve communication efficiency during the transfer of model parameters, and improve resource utilization in the network.
Keywords: Computing and network convergence; Communication efficiency; Federated learning; Two architectures
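One way to picture the scheduling decision described above (a hedged sketch, not the paper's method) is to estimate each device's local round time from its workload, computing power, and uplink rate, and then admit only devices whose estimated time fits a round deadline, which keeps the per-round delay distribution balanced. The time model and all numbers are assumptions.

```python
# Illustrative sketch: choose which devices join a federated-learning round so
# that every selected device can finish local training and upload before the
# deadline. The cost model and parameter values are assumptions.

def estimate_round_time(device, model_size_mb, cost_per_sample):
    train_s = device["samples"] * cost_per_sample / device["flops"]   # local training time
    upload_s = model_size_mb * 8 / device["uplink_mbps"]              # parameter upload time
    return train_s + upload_s

def select_devices(devices, deadline_s, model_size_mb=10.0, cost_per_sample=1e6):
    selected = []
    for name, dev in devices.items():
        t = estimate_round_time(dev, model_size_mb, cost_per_sample)
        if t <= deadline_s:
            selected.append((name, round(t, 2)))
    return selected

if __name__ == "__main__":
    devices = {
        "edge_gpu":  {"samples": 5000, "flops": 5e9, "uplink_mbps": 100},
        "phone":     {"samples": 2000, "flops": 1e9, "uplink_mbps": 20},
        "iot_board": {"samples": 1000, "flops": 2e8, "uplink_mbps": 5},
    }
    print(select_devices(devices, deadline_s=10.0))
```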
16. Credibility protection for resource sharing and collaborating in non-center network computing environments
Authors: XU Xiao-long, TU Qun, WANG Xin-heng, WANG Ru-chuan. 《The Journal of China Universities of Posts and Telecommunications》 (EI, CSCD), 2014, No. 3, pp. 46-55 (10 pages).
Non-center network computing environments have some unique characteristics, such as instability, heterogeneity, autonomy, distribution, and openness, which raise serious issues of security and reliability. This article proposes a new credibility protection mechanism for resource sharing and collaboration in non-center network computing environments. First, the three-dimensional hierarchical classified topology (3DHCT) is proposed, which provides a basic framework for realizing identity credibility, behavior credibility, and capability credibility. Next, agent technology is utilized to construct the credibility protection model. The article also proposes a comprehensive credibility evaluation algorithm that is simple, efficient, and quantitative, and that meets the requirements of both behavior credibility evaluation and capability credibility evaluation. The Dempster-Shafer theory of evidence and its combination rule are used to evaluate capability credibility. Behavior credibility is evaluated from the current and historical performance of provider and consumer nodes to realize more accurate prediction. Simulations were conducted on a non-center network computing simulation test platform to test the performance and validity of the proposed algorithms. Experiments and analysis show that the proposed algorithms are suitable for large-scale, dynamic network computing environments and can maintain credibility without relying on a central node, allowing a non-center network to evolve efficiently into an orderly, stable, and reliable computing environment.
Keywords: non-center network computing; agent; credibility; evidence
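The abstract relies on the Dempster-Shafer theory of evidence; the short sketch below implements the standard Dempster rule of combination for two mass functions over a small frame of discernment. This is the textbook formulation with toy masses, not the paper's full capability-credibility algorithm.

```python
from itertools import product

# Standard Dempster rule of combination for two basic probability assignments.
# Focal elements are frozensets over a frame of discernment; an empty
# intersection contributes to the conflict mass. The toy masses are assumptions.

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

if __name__ == "__main__":
    T, U = frozenset({"trustworthy"}), frozenset({"untrustworthy"})
    theta = T | U  # mass on the whole frame expresses ignorance
    m1 = {T: 0.6, theta: 0.4}
    m2 = {T: 0.5, U: 0.3, theta: 0.2}
    print(combine(m1, m2))
```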
17. Prediction based dynamic resource allocation method for edge computing first networking
Authors: Zhang Luying, Liu Xiaokai, Li Zhao, Xu Fangmin, Zhao Chenglin. 《The Journal of China Universities of Posts and Telecommunications》 (EI, CSCD), 2023, No. 3, pp. 78-87 (10 pages).
Aiming at highly complex, multi-terminal factories in the industrial Internet of Things (IIoT), a hierarchical edge networking collaboration (HENC) framework based on cloud-edge collaboration and computing first networking (CFN) is proposed to effectively improve task-processing capability with fixed computing resources at the edge. To optimize the delay and energy consumption in HENC, a multi-objective optimization (MOO) problem is formulated. Furthermore, to improve the efficiency and reliability of the system, a resource prediction model based on ridge regression (RR) is proposed to forecast the task size of the next time slot, and an emergency-aware (EA) computing resource allocation algorithm is proposed to reallocate tasks in the edge CFN. The simulation results show that the EA algorithm is superior to greedy resource allocation in terms of time delay, energy consumption, and quality of service (QoS), especially with limited computing resources.
Keywords: cloud-edge collaboration; computing first networking (CFN); computing resource allocation; multi-objective optimization (MOO)
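The ridge-regression prediction step can be illustrated with a minimal closed-form sketch: regress the next-slot task size on the last few observed slots and use the fitted coefficients to forecast one step ahead. The window length, regularization strength, and toy workload trace are assumptions, not the paper's model.

```python
import numpy as np

# Minimal ridge-regression forecaster (illustrative assumptions only):
# predict the next slot's task size from the previous `window` slots.

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge solution: w = (X^T X + lam*I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def forecast_next(history, window=3, lam=1.0):
    # Build lagged feature rows from the history and fit the ridge model.
    X = np.array([history[i:i + window] for i in range(len(history) - window)])
    y = np.array(history[window:])
    w = ridge_fit(X, y, lam)
    return float(np.array(history[-window:]) @ w)

if __name__ == "__main__":
    task_sizes = [12, 14, 13, 15, 16, 15, 17, 18]  # toy per-slot workload trace
    print(round(forecast_next(task_sizes), 2))     # predicted size of the next slot
```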
18. Special Section of Tsinghua Science and Technology on Wireless Computing and Networking
《Tsinghua Science and Technology》 (SCIE, EI, CAS), 2013, No. 1, p. 108 (1 page).
The publication of Tsinghua Science and Technology started in 1996. Since then, it has been an international academic journal sponsored by Tsinghua University and published bimonthly. The journal aims at presenting state-of-the-art scientific achievements in computer science and other IT fields, and is currently indexed by Ei and other abstracting indices. From 2013, the journal is available for open access through the IEEE Xplore Digital Library. This year's special section of Tsinghua Science and Technology on Wireless Computing and Networking is devoted to gathering and presenting new research that addresses challenges in the broad areas of wireless networks, sensor networks, wireless computing, and communication. While wireless networks have great potential to provide heterogeneous access and services for ubiquitous users, the demanding communication environment of wireless networks poses challenges for many interesting research topics, such as channel estimation, communication protocol design, resource management, and system design. In wireless network research, it is unavoidable to wrestle with unique problems such as non-uniform spectrum allocation, various radio resource management policies, economic concerns, the scarcity of radio resources, and user mobility. This special section therefore aims to publish high-quality, original, unpublished research papers in the broad area of wireless computing and networking, and thus presents a platform for scientists and scholars to share their observations and research results in the field. Specific topics for this special section include but are not limited to:
Keywords: Special Section of Tsinghua Science and Technology on Wireless Computing and Networking
19. Neural circuit and its functional roles in cerebellar cortex  (Cited by: 1)
Authors: 汪雷, 刘深泉. 《Neuroscience Bulletin》 (SCIE, CAS, CSCD), 2011, No. 3, pp. 173-184 (12 pages).
Objective: To investigate the spike activities of cerebellar cortical cells in a computational network model constructed based on the anatomical structure of the cerebellar cortex. Methods and Results: Multicompartment neuron models and the NEURON software were used to study external influences on cerebellar cortical cells, and various potential spike patterns in these cells were obtained. By analyzing the impact of different incoming stimuli on the potential spikes of Purkinje cells, the temporal focusing exerted on Purkinje cells by the granule cell-Golgi cell feedback inhibitory loop and the spatial focusing exerted on Purkinje cells by the parallel fiber-basket/stellate cell local inhibitory loop are discussed. Finally, the motor learning process of the rabbit eye-blink conditioned reflex was demonstrated in this model. The simulation results showed that, when the afferent from the climbing fiber was present, rabbit adaptation to eye blinking gradually became stable under the spike timing-dependent plasticity (STDP) learning rule. Conclusion: The constructed cerebellar cortex network is a reliable and feasible model. The simulation results confirmed the stability of the cerebellar cortex output signal after STDP learning, and the network can perform the functions of spatial and temporal focusing.
Keywords: computational network model; cerebellar cortex; temporal focusing; spatial focusing; spike timing-dependent plasticity; eye blink conditioned reflex
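The STDP learning rule mentioned in the abstract has a standard pair-based form in which a synapse is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise. The sketch below implements that textbook rule with assumed amplitudes and time constants, not the parameters used in the cerebellar model.

```python
import math

# Pair-based STDP (textbook form, assumed parameters): the weight change depends
# on the time difference dt = t_post - t_pre between paired spikes.

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    if dt_ms > 0:    # pre before post -> long-term potentiation
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:    # post before pre -> long-term depression
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0

if __name__ == "__main__":
    w = 0.5
    for dt in [+5.0, +15.0, -5.0, -30.0]:        # spike-time differences in ms
        w = min(1.0, max(0.0, w + stdp_delta_w(dt)))  # clip weight to [0, 1]
    print(round(w, 4))
```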
20. Optimum Tactics of Parallel Multi-Grid Algorithm with Virtual Boundary Forecast Method Running on a Local Network with the PVM Platform  (Cited by: 2)
Authors: 郭庆平, 章社生, 卫加宁. 《Journal of Computer Science & Technology》 (SCIE, EI, CSCD), 2000, No. 4, pp. 355-359 (5 pages).
In this paper, an optimum tactic for a parallel multi-grid algorithm with the virtual boundary forecast method is discussed, and a two-stage implementation is presented. The numerical results of solving a non-linear heat transfer equation show that the optimum implementation is much better than the non-optimum one.
Keywords: algorithm; parallel multi-grid; virtual boundary forecast (VBF); speedup; network computing; PVM