Journal Articles
29 articles found (page 1 of 2, 20 per page)
1. Computing Power Network: A Survey (Cited by 1)
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. China Communications (SCIE, CSCD), 2024, Issue 9, pp. 109-145 (37 pages).
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources are trending toward ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources because of the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm has been proposed: the Computing Power Network (CPN). A computing power network connects ubiquitous and heterogeneous computing power resources through networking to enable flexible scheduling of computing power. In this survey, we present an exhaustive review of state-of-the-art research on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, we elaborate on computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform. A computing power network testbed is built and evaluated, and applications and use cases are discussed. We then introduce the key enabling technologies for computing power networks. Finally, open challenges and future research directions are presented.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
2. Joint Optimization of Energy Consumption and Network Latency in Blockchain-Enabled Fog Computing Networks
Authors: Huang Xiaoge, Yin Hongbo, Cao Bin, Wang Yongsheng, Chen Qianbin, Zhang Jie. China Communications (SCIE, CSCD), 2024, Issue 4, pp. 104-119 (16 pages).
Fog computing is considered a solution to accommodate the booming requirements of a large variety of resource-limited Internet of Things (IoT) devices. To ensure the security of private data, in this paper we introduce a blockchain-enabled three-layer device-fog-cloud heterogeneous network. A reputation model is proposed to update the credibility of fog nodes (FNs), which is used to select blockchain nodes (BNs) from FNs to participate in the consensus process. Because the blockchain system applies the Rivest-Shamir-Adleman (RSA) encryption algorithm, FNs can verify a node's identity through its public key to avoid malicious attacks. Additionally, to reduce the computational complexity of the consensus algorithms and the network overhead, we propose a dynamic offloading and resource allocation (DORA) algorithm and a reputation-based democratic Byzantine fault tolerant (R-DBFT) algorithm to optimize the offloading decisions and decrease the number of BNs in the consensus algorithm while ensuring network security. Simulation results demonstrate that the proposed algorithms efficiently reduce the network overhead and obtain a considerable performance improvement over related algorithms in the previous literature.
Keywords: blockchain; energy consumption; fog computing network; Internet of Things; latency
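Editor's note: the public-key identity check described above can be illustrated with a short sketch. Below is a minimal example of RSA signature verification with Python's cryptography library; the RSA-PSS padding, key size, and message are our illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: a fog node signs a message, any peer verifies it
# with the sender's public key, rejecting forged identities.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# The fog node generates a key pair; the public key is shared on chain.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"consensus-round-42:block-hash"   # illustrative payload
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

# Any other fog node can check the sender's identity via the public key.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("identity verified")
except InvalidSignature:
    print("rejected: possible malicious node")
```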
3. Efficient Digital Twin Placement for Blockchain-Empowered Wireless Computing Power Network
Authors: Wei Wu, Liang Yu, Liping Yang, Yadong Zhang, Peng Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 587-603 (17 pages).
As an open network architecture, Wireless Computing Power Networks (WCPN) pose new challenges for efficient and secure resource management because of issues such as insecure communication channels and untrusted device terminals. Blockchain, as a shared, immutable distributed ledger, provides a secure resource-management solution for WCPN. However, integrating blockchain into WCPN faces challenges such as device heterogeneity, monitoring of communication states, and the dynamic nature of the network. Digital Twins (DT), by contrast, can accurately maintain digital models of physical entities through real-time data updates and self-learning, enabling continuous optimization of WCPN, improving synchronization performance, ensuring real-time accuracy, and supporting smooth operation of WCPN services. In this paper, we propose a DT architecture for blockchain-empowered WCPN that guarantees real-time data transmission between physical entities and digital models. We adopt an enumeration-based optimal placement algorithm (EOPA) and an improved simulated-annealing-based near-optimal placement algorithm (ISAPA) to achieve minimum average DT synchronization latency under a DT error constraint. Numerical results show that the proposed solution outperforms benchmarks in terms of average synchronization latency.
Keywords: wireless computing power network; blockchain; digital twin placement; minimum synchronization latency
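Editor's note: to make the ISAPA idea concrete, here is a minimal simulated-annealing placement sketch that minimizes average synchronization latency under a per-server DT error bound. The latency matrix, error figures, and cooling schedule are synthetic assumptions, not the paper's experimental setup.

```python
# Minimal sketch: anneal over assignments of device twins to servers,
# restricted to servers that satisfy the DT error constraint.
import math, random

random.seed(0)
N_DEV, N_SRV = 20, 5
latency = [[random.uniform(1, 10) for _ in range(N_SRV)] for _ in range(N_DEV)]
dt_error = [0.05, 0.12, 0.18, 0.09, 0.16]   # DT error if hosted on server j (assumed)
ERR_MAX = 0.15                              # DT error constraint
ok_srv = [j for j in range(N_SRV) if dt_error[j] <= ERR_MAX]

def avg_latency(place):        # place[i] = server hosting device i's twin
    return sum(latency[i][place[i]] for i in range(N_DEV)) / N_DEV

place = [random.choice(ok_srv) for _ in range(N_DEV)]
best, T = place[:], 5.0
while T > 1e-3:
    cand = place[:]
    cand[random.randrange(N_DEV)] = random.choice(ok_srv)  # feasible move
    d = avg_latency(cand) - avg_latency(place)
    if d < 0 or random.random() < math.exp(-d / T):        # Metropolis rule
        place = cand
        if avg_latency(place) < avg_latency(best):
            best = place[:]
    T *= 0.995                 # geometric cooling
print(round(avg_latency(best), 3))
```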
4. Computing Power Network: The Architecture of Convergence of Computing and Networking towards 6G Requirement (Cited by 32)
Authors: Xiongyan Tang, Chang Cao, Youxiang Wang, Shuai Zhang, Ying Liu, Mingxuan Li, Tao He. China Communications (SCIE, CSCD), 2021, Issue 2, pp. 175-185 (11 pages).
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with a strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The demonstration shows that the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and can adapt to the integration of computing power and network in various scenarios, such as user-oriented services, government and enterprise services, and open computing power.
Keywords: 6G; edge computing; cloud computing; convergence of cloud and network; computing power network
5. Joint Resource Allocation Using Evolutionary Algorithms in Heterogeneous Mobile Cloud Computing Networks (Cited by 10)
Authors: Weiwei Xia, Lianfeng Shen. China Communications (SCIE, CSCD), 2018, Issue 8, pp. 189-204 (16 pages).
The problem of jointly allocating radio and cloud resources is studied for heterogeneous mobile cloud computing networks. The objective of the proposed joint resource allocation schemes is to maximize the total utility of users while satisfying required quality of service (QoS) guarantees, such as the end-to-end response latency experienced by each user. We formulate joint resource allocation as a combinatorial optimization problem and consider three evolutionary approaches to solve it: a genetic algorithm (GA), ant colony optimization with a genetic algorithm (ACO-GA), and a quantum genetic algorithm (QGA). To decrease the time complexity, we propose a mapping between the resource allocation matrix and the chromosome of GA, ACO-GA, and QGA; search the available radio and cloud resource pairs based on resource availability matrixes for ACO-GA; and encode the difference between the allocated resources and the minimum resource requirement for QGA. Extensive simulation results show that our proposed methods greatly outperform existing algorithms in terms of running time, accuracy of final results, total utility, resource utilization, and end-to-end response latency guarantees.
Keywords: heterogeneous mobile cloud computing networks; resource allocation; genetic algorithm; ant colony optimization; quantum genetic algorithm
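Editor's note: the chromosome-to-allocation-matrix mapping can be sketched as follows: each gene assigns one user a (radio, cloud) pair, so a flat chromosome decodes into a 0/1 allocation matrix. The problem sizes and the dummy utility below are illustrative assumptions; the paper's actual utility encodes QoS constraints.

```python
# Minimal sketch: flat chromosome <-> user x (radio, cloud)-pair matrix,
# driven by a plain generational GA (select, crossover, mutate).
import random

random.seed(1)
N_USERS, N_RADIO, N_CLOUD = 6, 3, 2
PAIRS = [(r, c) for r in range(N_RADIO) for c in range(N_CLOUD)]

def decode(chrom):
    """Chromosome (one pair index per user) -> user x pair 0/1 matrix."""
    m = [[0] * len(PAIRS) for _ in range(N_USERS)]
    for u, g in enumerate(chrom):
        m[u][g] = 1
    return m

def utility(chrom):                 # placeholder for the QoS-aware utility
    return -sum(chrom)              # dummy: prefer low-index resource pairs

pop = [[random.randrange(len(PAIRS)) for _ in range(N_USERS)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=utility, reverse=True)
    parents, children = pop[:10], []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_USERS)      # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:               # mutation
            child[random.randrange(N_USERS)] = random.randrange(len(PAIRS))
        children.append(child)
    pop = parents + children
print(decode(pop[0]))               # best allocation matrix found
```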
6. Numerical simulation of neuronal spike patterns in a retinal network model (Cited by 1)
Authors: Lei Wang, Shenquan Liu, Shanxing Ou. Neural Regeneration Research (SCIE, CAS, CSCD), 2011, Issue 16, pp. 1254-1260 (7 pages).
This study used a neuronal compartment model and the NEURON software to study the effects of external light stimulation on retinal photoreceptors and on the spike patterns of neurons in a retinal network. Following light stimulation of different shapes and sizes, changes in the spike features of ganglion cells indicated that different shapes of light stimulation elicit different retinal responses. By manipulating the shape of the light stimulus, we investigated the effects of the large number of electrical synapses between retinal neurons. Model simulation and analysis suggested that interplexiform cells play an important role in visual signal processing in the retina, and the findings indicated that our retinal network model is reliable and feasible. In addition, the simulation results demonstrated that ganglion cells exhibit a variety of spike patterns under different sizes and shapes of light stimulation, reflecting the functions of the retina in signal transmission and processing.
Keywords: computational network model; retina; light stimulation; ganglion cell; spike pattern
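Editor's note: for readers unfamiliar with NEURON, the following minimal Python sketch shows the style of compartment modeling the study builds on: a single Hodgkin-Huxley soma driven by a current step standing in for a light-evoked input. It is a toy illustration, not the paper's retinal network model.

```python
# Minimal NEURON sketch: one HH compartment, a current-step stimulus,
# and a recording of the resulting spike train.
from neuron import h
h.load_file("stdrun.hoc")

soma = h.Section(name="soma")
soma.L = soma.diam = 20          # um
soma.insert("hh")                # Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))       # stand-in for a light-driven current
stim.delay, stim.dur, stim.amp = 5, 50, 0.1   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)       # membrane potential
t = h.Vector().record(h._ref_t)
h.finitialize(-65)
h.continuerun(100)               # ms
print(v.max())                   # peak of the evoked spikes
```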
7. A Novel Stateful PCE-Cloud Based Control Architecture of Optical Networks for Cloud Services (Cited by 1)
Authors: QIN Panke, CHEN Xue, WANG Lei, WANG Liqian. China Communications (SCIE, CSCD), 2015, Issue 10, pp. 117-127 (11 pages).
The next-generation optical network is a service-oriented network, which can be delivered by utilizing a generalized multiprotocol label switching (GMPLS) based control plane to realize intelligent features such as rapid provisioning, automated protection and restoration (P&R), efficient resource allocation, and support for different quality of service (QoS) requirements. In this paper, we propose a novel stateful PCE-cloud (SPC) based architecture of GMPLS optical networks for cloud services. Cloud computing technologies (e.g., virtualization and parallel computing) are applied to the construction of the SPC to improve reliability and maximize resource utilization. The functions of the SPC and the GMPLS-based control plane are extended according to the features of cloud services with different QoS requirements. The architecture and a detailed description of the components of the SPC are provided, and different potential cooperation relationships between the public stateful PCE cloud (PSPC) and the region stateful PCE cloud (RSPC) are investigated. Moreover, we present a policy-enabled and constraint-based routing scheme based on the cooperation of the PSPC and RSPC. Simulation results verifying the routing performance and control-plane reliability are analyzed.
Keywords: optical networks; control plane; GMPLS; stateful PCE; cloud computing; QoS
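Editor's note: the flavour of policy-enabled, constraint-based routing that a stateful PCE computes can be sketched in a few lines: prune links that violate a bandwidth constraint, then run a delay-weighted shortest path on what remains. The topology and numbers below are illustrative assumptions.

```python
# Minimal sketch: constraint pruning followed by shortest-path computation,
# the core of a constraint-based path computation element (PCE).
import networkx as nx

G = nx.Graph()
G.add_edge("A", "B", delay=2, bw=10)
G.add_edge("B", "C", delay=2, bw=2)    # violates the 5-unit request below
G.add_edge("A", "D", delay=3, bw=10)
G.add_edge("D", "C", delay=3, bw=10)

def pce_route(g, src, dst, bw_req):
    usable = nx.Graph()                # residual topology after policy pruning
    usable.add_edges_from((u, v, d) for u, v, d in g.edges(data=True)
                          if d["bw"] >= bw_req)
    return nx.shortest_path(usable, src, dst, weight="delay")

print(pce_route(G, "A", "C", bw_req=5))   # ['A', 'D', 'C']
```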
8. Efficient Broadcast Retransmission Based on Network Coding for InterPlaNetary Internet (Cited by 1)
Authors: 苟亮, 边东明, 张更新, 徐志平, 申振. China Communications (SCIE, CSCD), 2013, Issue 8, pp. 111-124 (14 pages).
In traditional wireless broadcast networks, a corrupted packet must be retransmitted even if it has been lost by only one receiver. Obviously, this is not bandwidth-efficient for receivers that already hold the retransmitted packet, so it is important to develop more efficient broadcast transmission methods. Network coding is a promising technique in this scenario; however, none of the schemes proposed so far achieves both high transmission efficiency and low computational complexity simultaneously. To address this problem, a novel Efficient Opportunistic Network Coding Retransmission (EONCR) scheme is proposed in this paper. The scheme employs a new packet scheduling algorithm that uses a Packet Distribution Matrix (PDM) directly to select the coded packets. The analysis and simulation results indicate that the transmission efficiency of EONCR exceeds that of previously proposed schemes by over 0.1 under some simulation conditions, while the computational overhead is reduced substantially. Hence, it has great application prospects in wireless broadcast networks, especially energy- and bandwidth-limited systems such as satellite broadcast systems and Planetary Networks (PNs).
Keywords: wireless broadcast retransmission; opportunistic network coding; packet scheduling; transmission efficiency; computational complexity; PN
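Editor's note: the packet-distribution-matrix idea admits a compact sketch: XOR together lost packets that were each lost by a different receiver, so one coded retransmission repairs several receivers at once. The greedy selection below is an illustrative assumption, not EONCR's exact scheduling algorithm.

```python
# Minimal sketch: pick XOR-compatible lost packets from a PDM, code them
# into one retransmission, and let each receiver decode its missing packet.
from functools import reduce

packets = [b"p0", b"p1", b"p2"]
# pdm[r][p] = 1 if receiver r already holds packet p
pdm = [
    [0, 1, 1],   # receiver 0 lost p0
    [1, 0, 1],   # receiver 1 lost p1
    [1, 1, 0],   # receiver 2 lost p2
]

def xor(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Greedily pick packets whose loser sets are pairwise disjoint.
chosen, covered = [], set()
for p in range(len(packets)):
    losers = {r for r in range(len(pdm)) if pdm[r][p] == 0}
    if losers and not losers & covered:
        chosen.append(p)
        covered |= losers

coded = xor([packets[p] for p in chosen])   # one retransmission for all
# Receiver 0 decodes p0 by XOR-ing out the chosen packets it already holds:
decoded = xor([coded] + [packets[p] for p in chosen if pdm[0][p] == 1])
assert decoded == packets[0]
```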
9. Federated learning based QoS-aware caching decisions in fog-enabled internet of things networks
Authors: Xiaoge Huang, Zhi Chen, Qianbin Chen, Jie Zhang. Digital Communications and Networks (SCIE, CSCD), 2023, Issue 2, pp. 580-589 (10 pages).
Quality of Service (QoS) is an important issue in 6G application scenarios, given the premise of massive data transmission. Edge caching based on fog computing networks is considered a potential solution for effectively reducing the content fetch delay of latency-sensitive services on Internet of Things (IoT) devices. In time-varying scenarios, machine learning techniques can further reduce the content fetch delay by optimizing caching decisions. In this paper, to minimize the content fetch delay and ensure the QoS of the network, a Device-to-Device (D2D) assisted fog computing network architecture is introduced, which supports federated learning and QoS-aware caching decisions based on time-varying user preferences. To relieve network congestion and reduce the risk of user privacy leakage, federated learning is enabled in the D2D-assisted fog computing network. It has been observed, however, that federated learning yields suboptimal results when local user data are non-independent and identically distributed (Non-IID). To address this issue, a distributed cluster-based user preference estimation algorithm is proposed to optimize content caching placement and improve the cache hit rate, the content fetch delay, and the convergence rate; clustering effectively mitigates the impact of Non-IID data sets. The simulation results show that the proposed algorithm provides a considerable performance improvement, with better learning results than existing algorithms.
Keywords: fog computing network; IoT; D2D communication; deep neural network; federated learning
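Editor's note: a minimal sketch of the cluster-then-estimate idea follows: group users by the similarity of their locally estimated preference vectors and aggregate within clusters, so Non-IID users do not pollute a single global model. The synthetic data, plain k-means, and cluster count are our assumptions.

```python
# Minimal sketch: Non-IID user preferences, k-means clustering, and
# per-cluster content rankings instead of one global ranking.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_contents, k = 30, 8, 3
# Each user is a noisy copy of one of k latent taste profiles (Non-IID).
bases = rng.dirichlet(np.ones(n_contents), size=k)
labels_true = rng.integers(0, k, n_users)
prefs = np.clip(bases[labels_true]
                + 0.05 * rng.normal(size=(n_users, n_contents)), 0, None)

# Plain k-means (Lloyd's algorithm) on the preference vectors.
centers = prefs[rng.choice(n_users, k, replace=False)]
for _ in range(20):
    d = np.linalg.norm(prefs[:, None, :] - centers[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    for c in range(k):
        if (assign == c).any():
            centers[c] = prefs[assign == c].mean(axis=0)

# Each cluster caches its own top contents.
for c in range(k):
    print(c, np.argsort(centers[c])[::-1][:3])
```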
10. Going beyond Computation and Its Limits: Injecting Cognition into Computing
Authors: Rao Mikkilineni. Applied Mathematics, 2012, Issue 11, pp. 1826-1835 (10 pages).
Cognition is the ability to process information, apply knowledge, and change the circumstance. Cognition is associated with intent and its accomplishment through various processes that monitor and control a system and its environment, and with a sense of "self" (the observer) and the systems with which it interacts (the environment or the "observed"). Cognition extensively uses time and history in executing and regulating the tasks that constitute a cognitive process. Whether cognition is computation in the strict sense of adhering to the Turing-Church thesis or needs additional constructs is a very relevant question for the design of self-managing (autonomous) distributed computing systems. In this paper we argue that cognition requires more than the mere book-keeping provided by Turing machines, and that certain aspects of cognition such as self-identity, self-description, self-monitoring, and self-management can be implemented using parallel extensions to current serial von Neumann stored-program-control (SPC) Turing machine implementations. We argue that the new DIME (Distributed Intelligent Computing Element) computing model, recently introduced as the building block of the DIME network architecture, is an analogue of Turing's O-machine and extends it to implement a recursively managed distributed computing network, which can be viewed as an interconnected group of such specialized Oracle machines, referred to as a DIME network. The DIME network architecture provides the architectural resiliency often associated with cellular organisms through auto-failover, auto-scaling, live-migration, and end-to-end transaction security assurance in a distributed system. We argue that the self-identity and self-management processes of a DIME network inject elements of cognition into Turing-machine-based computing, as demonstrated by two prototypes that eliminate the complexity introduced by hypervisors, virtual machines, and other layers of ad hoc management software in today's distributed computing environments.
Keywords: cognition; cognitive process; computationalism; Turing machine; Turing O-machine; DIME; DIME network architecture
11. 计算主义下虚拟网络复杂性探究 (Exploring the Complexity of Virtual Cyberspace under Computationalism) (Cited by 1)
Authors: 景卉, 周维刚. 《系统科学学报》, 2008, Issue 1, pp. 31-34, 40 (5 pages).
This paper briefly reviews the three stages that computationalism has passed through as a new ontological philosophy, and makes a preliminary study of the complexity characteristics exhibited by virtual cyberspace from a computationalist perspective. It points out that virtual cyberspace has complexity characteristics such as self-evolution, self-organization, emergence, and self-similarity, and attempts to show the profound influence of computationalism on the development of contemporary philosophy, science, and technology.
Keywords: computationalism; virtual cyberspace; complexity; ontology
12. Computational Approaches for Prioritizing Candidate Disease Genes Based on PPI Networks (Cited by 4)
Authors: Wei Lan, Jianxin Wang, Min Li, Wei Peng, Fangxiang Wu. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2015, Issue 5, pp. 500-512 (13 pages).
With the continuing development and improvement of genome-wide techniques, a great number of candidate genes have been discovered. How to identify the most likely disease genes among a large number of candidates has become a fundamental challenge in human health. A common view is that genes related to a specific or similar disease tend to reside in the same neighbourhood of biomolecular networks. Based on such observations, many methods have recently been developed to tackle this challenge. In this review, we first introduce the concept of disease genes, their properties, and the data available for identifying them. We then review recent computational approaches for prioritizing candidate disease genes based on Protein-Protein Interaction (PPI) networks and examine their advantages and disadvantages. Furthermore, existing software and network resources are summarized. Finally, we discuss key issues in prioritizing candidate disease genes and point out future research directions.
Keywords: candidate disease-gene prioritization; protein-protein interaction network; human disease; computational tools
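Editor's note: the "same neighbourhood" intuition underlying these methods can be sketched with a simple guilt-by-association score: rank each candidate by the fraction of its PPI neighbours that are known disease genes. Real prioritization methods (e.g., random walk with restart) go further; the toy network below is illustrative.

```python
# Minimal sketch: neighbourhood-based scoring of candidate genes
# against a seed set of known disease genes in a PPI network.
import networkx as nx

ppi = nx.Graph([("g1", "g2"), ("g2", "g3"), ("g3", "g4"),
                ("g4", "g6"), ("g1", "g5"), ("g5", "g6")])
known_disease = {"g1", "g3"}
candidates = ["g2", "g4", "g5", "g6"]

def score(g):
    nbrs = set(ppi.neighbors(g))
    return len(nbrs & known_disease) / len(nbrs)

for g in sorted(candidates, key=score, reverse=True):
    print(g, round(score(g), 2))   # g2, adjacent to both seeds, ranks first
```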
13. Process of Petri Nets Extension
Authors: ZHOU Guofu, HE Yanxiang, DU Zhuomin. Wuhan University Journal of Natural Sciences (EI, CAS), 2006, Issue 2, pp. 351-354 (4 pages).
To describe the dynamic semantics of network computing, the concept of a process is presented, based on a semantic model with variables, resources, and relations. Accordingly, the formal definition of a process and the mapping rules from the specification of the Petri nets extension to processes are discussed in detail. Based on the collective concepts of processes, the specification of the dynamic semantics is also constructed as a net system. Finally, to illustrate the notion of a process intuitively, a complete example is specified.
Keywords: network computing; computing model; process; Petri nets
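Editor's note: the dynamic semantics being formalized rests on the standard Petri-net firing rule: a transition is enabled when every input place holds enough tokens, and firing moves tokens from input to output places. The sketch below implements that rule on a toy net; the net itself is our illustration, not the paper's example.

```python
# Minimal sketch: marking, enabledness, and firing for a place/transition net.
marking = {"ready": 1, "resource": 1, "running": 0, "done": 0}
transitions = {
    "start":  ({"ready": 1, "resource": 1}, {"running": 1}),
    "finish": ({"running": 1}, {"done": 1, "resource": 1}),
}

def enabled(t):
    pre, _ = transitions[t]
    return all(marking[p] >= n for p, n in pre.items())

def fire(t):
    assert enabled(t), t
    pre, post = transitions[t]
    for p, n in pre.items():
        marking[p] -= n          # consume input tokens
    for p, n in post.items():
        marking[p] += n          # produce output tokens

fire("start"); fire("finish")
print(marking)   # {'ready': 0, 'resource': 1, 'running': 0, 'done': 1}
```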
14. Architecture and Key Technology of Distributed Intelligent Open Systems
Authors: Xiaoyu Tong, Yunyong Zhang, Bingyi Fang. ZTE Communications, 2011, Issue 2, pp. 53-57 (5 pages).
High-speed, large-bandwidth networks and the growth of rich internet applications have brought unprecedented pressure to bear on telecom operators. Consequently, operators need to play to the advantages of their networks, make good use of their large customer bases, and expand their business resources in service, platform, and interface. Network and customer resources should be integrated to create new business ecosystems. This paper describes the new threats and challenges facing telecom operators and analyzes how leading operators are handling the transformation of their operations and business models. A new concept called the distributed intelligent open system (DIOS), a public computing communication network, is proposed, and its architecture and key technologies are discussed in detail.
Keywords: DIOS; public computing communication network (PCCN); cloud computing
15. 全局与局部模型交替辅助的差分进化算法 (A Differential Evolution Algorithm Alternately Assisted by Global and Local Surrogate Models) (Cited by 4)
Authors: 于成龙, 付国霞, 孙超利, 张国晨. 《计算机工程》 (Computer Engineering; CAS, CSCD, PKU Core), 2022, Issue 3, pp. 115-123 (9 pages).
To solve the high-dimensional, computationally expensive optimization problems that arise in complex engineering applications, a differential evolution algorithm alternately assisted by global and local surrogate models is proposed. Historical samples are used to train global and local surrogate models, and the algorithm alternately searches the two models for model optima, which are then evaluated with the true objective function. This balances exploration and exploitation, reduces the number of true objective evaluations, and, by selecting individuals for true evaluation in a targeted way, helps the algorithm quickly find good solutions. Experimental results on 15 low-dimensional and 14 high-dimensional test problems show that, under a limited computational budget, the proposed algorithm performs better on 12 of the low-dimensional problems than surrogate-assisted optimizers such as the optimal-restart-strategy surrogate-assisted social learning particle swarm optimizer and the active-learning surrogate-assisted particle swarm optimizer, and on 7 of the high-dimensional problems it finds better solutions than the Gaussian-process-assisted evolutionary algorithm, the surrogate-assisted hierarchical particle swarm optimizer, and the surrogate-assisted multi-swarm optimizer for high-dimensional expensive problems.
Keywords: global surrogate model; local surrogate model; differential evolution algorithm; computationally expensive optimization problem; radial basis function network
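Editor's note: a simplified sketch of surrogate-assisted differential evolution is given below: an RBF model fitted to all evaluated samples screens DE trial vectors, and only the screened winner receives a true (expensive) evaluation. This collapses the paper's alternating global/local scheme into a single global model and omits crossover; the test function and budget are our assumptions.

```python
# Minimal sketch: DE/rand/1 trial generation with RBF surrogate screening.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_f(x):                      # stand-in for the costly objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, n_pop = 5, 12
X = rng.uniform(-5, 5, (n_pop, dim))     # archive of truly evaluated samples
y = np.array([expensive_f(x) for x in X])

for _ in range(40):                      # budget of 40 further true evaluations
    surrogate = RBFInterpolator(X, y)    # global model over the archive
    idx = np.array([rng.choice(len(X), 3, replace=False) for _ in range(n_pop)])
    trials = X[idx[:, 0]] + 0.5 * (X[idx[:, 1]] - X[idx[:, 2]])  # DE/rand/1
    trials = np.clip(trials, -5, 5)
    best_trial = trials[np.argmin(surrogate(trials))]   # surrogate screening
    X = np.vstack([X, best_trial])       # only the winner is truly evaluated
    y = np.append(y, expensive_f(best_trial))

print(round(float(y.min()), 4))
```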
16. Reputation-based joint optimization of user satisfaction and resource utilization in a computing force network
Authors: Yuexia FU, Jing WANG, Lu LU, Qinqin TANG, Sheng ZHANG. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, Issue 5, pp. 685-700 (16 pages).
With the development of computing and network convergence, treating the computing and network resources of multiple providers as a whole in a computing force network (CFN) has gradually become a new trend. However, since each computing and network resource provider (CNRP) considers only its own interest and competes with other CNRPs, introducing multiple CNRPs results in a lack of trust and difficulty in unified scheduling. In addition, concurrent users have different requirements, so there is an urgent need to study how to optimally match users and CNRPs on a many-to-many basis, improving user satisfaction and ensuring the utilization of limited resources. In this paper, we adopt a reputation model based on the beta distribution function to measure the credibility of CNRPs and propose a performance-based reputation update model. We then formalize the problem as a constrained multi-objective optimization problem and find feasible solutions using a modified fast and elitist non-dominated sorting genetic algorithm (NSGA-II). We conduct extensive simulations to evaluate the proposed algorithm. Simulation results demonstrate that the proposed model and problem formulation are valid and that NSGA-II is effective and can find the Pareto set of the CFN, increasing user satisfaction and resource utilization. Moreover, the Pareto set offers a range of many-to-many matchings of users and CNRPs that can be chosen according to the actual situation.
Keywords: computing force network; resource scheduling; performance-based reputation; user satisfaction
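Editor's note: a beta-distribution reputation model of the kind adopted here can be sketched in a few lines: keep counts of positive and negative outcomes per provider and take the Beta posterior mean (a+1)/(a+b+2) as the reputation score. The update weighting and figures below are illustrative assumptions.

```python
# Minimal sketch: beta-reputation bookkeeping for one CNRP.
class Reputation:
    def __init__(self):
        self.a = 0.0   # positive outcomes (e.g., SLA met)
        self.b = 0.0   # negative outcomes (e.g., SLA missed)

    def update(self, success: bool, weight: float = 1.0):
        if success:
            self.a += weight
        else:
            self.b += weight

    @property
    def score(self) -> float:
        return (self.a + 1.0) / (self.a + self.b + 2.0)  # Beta posterior mean

cnrp = Reputation()
for outcome in [True, True, False, True]:   # observed task results
    cnrp.update(outcome)
print(round(cnrp.score, 3))   # 0.667 -> fed into the user-CNRP matching
```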
17. Communication efficiency optimization of federated learning for computing and network convergence of 6G networks
Authors: Yizhuo CAI, Bo LEI, Qianying ZHAO, Jing PENG, Min WEI, Yushun ZHANG, Xing ZHANG. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, Issue 5, pp. 713-727 (15 pages).
Federated learning effectively addresses issues such as data privacy by collaborating across participating devices to train global models. However, factors such as network topology and the computing power of devices can affect its training or communication process in complex network environments. Computing and network convergence (CNC) for sixth-generation (6G) networks, a new network architecture and paradigm with computing-measurable, perceptible, distributable, dispatchable, and manageable capabilities, can effectively support federated learning training and improve its communication efficiency by guiding the participating devices' training based on business requirements, resource load, network conditions, and device computing power. In this paper, to improve the communication efficiency of federated learning in complex networks, we study communication efficiency optimization methods for federated learning in the CNC of 6G networks, which make decisions on the training process for different network conditions and computing power of participating devices. The simulations address the two architectures that exist for devices in federated learning and arrange devices to participate in training based on computing power, while optimizing communication efficiency during the transfer of model parameters. The results show that the proposed methods cope well with complex network situations, effectively balance the delay distribution of participating devices during local training, improve communication efficiency during the transfer of model parameters, and improve resource utilization in the network.
Keywords: computing and network convergence; communication efficiency; federated learning; two architectures
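Editor's note: one way to picture the scheduling decision is a deadline test on each device's estimated round time (local compute plus model upload), so slow devices do not stall synchronous aggregation. The device figures and the specific selection rule below are our assumptions, not the paper's method.

```python
# Minimal sketch: admit a device into the training round only if its
# estimated compute + upload time fits the round deadline.
workload_flops = 2e9          # per local training round (assumed)
model_mbits = 40.0            # model size to upload (assumed)
deadline_s = 3.0

devices = [  # (name, compute GFLOPS, uplink Mbit/s) - synthetic
    ("d1", 8.0, 50.0), ("d2", 1.0, 20.0), ("d3", 4.0, 5.0), ("d4", 6.0, 40.0),
]

def round_time(gflops, mbps):
    return workload_flops / (gflops * 1e9) + model_mbits / mbps

selected = [(n, round(round_time(g, m), 2)) for n, g, m in devices
            if round_time(g, m) <= deadline_s]
print(selected)   # d2 (slow CPU) and d3 (slow uplink) are left out
```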
18. Combining graph neural network with deep reinforcement learning for resource allocation in computing force networks
Authors: Xueying HAN, Mingxi XIE, Ke YU, Xiaohong HUANG, Zongpeng DU, Huijuan YAO. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, Issue 5, pp. 701-712 (12 pages).
Fueled by the explosive growth of ultra-low-latency and real-time applications with specific computing and network performance requirements, the computing force network (CFN) has become a hot research subject. The primary CFN challenge is to leverage network resources and computing resources together. Although recent advances in deep reinforcement learning (DRL) have brought significant improvements in network optimization, these methods still suffer under topology changes and fail to generalize to topologies not seen in training. This paper proposes a graph neural network (GNN) based DRL framework to accommodate network traffic and computing resources jointly and efficiently. By taking advantage of the generalization capability of GNNs, the proposed method can operate over variable topologies and obtain higher performance than other DRL methods.
Keywords: computing force network; routing optimization; deep learning; graph neural network; resource allocation
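Editor's note: why a GNN generalizes across topologies can be seen from a single message-passing layer, H' = ReLU(D^-1 Â H W): the learned weight matrix W is independent of the graph size, so the same parameters apply to any topology. The numpy sketch below uses random weights as stand-ins for what DRL training would learn.

```python
# Minimal sketch: one mean-aggregation GNN layer applied to two graphs
# of different sizes with the same shared weights.
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(A, H, W):
    A_hat = A + np.eye(len(A))                # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # mean aggregation
    return np.maximum(0.0, D_inv @ A_hat @ H @ W)

W = rng.normal(size=(3, 3))                   # shared, topology-independent
for n in (4, 7):                              # two different topologies
    A = (rng.random((n, n)) < 0.4).astype(float)
    A = np.triu(A, 1); A = A + A.T            # undirected, no self-loops
    H = rng.normal(size=(n, 3))               # per-node features
    print(n, gnn_layer(A, H, W).shape)        # same W works for both sizes
```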
19. An Object-Oriented Distributed Multimedia Editor
Authors: Xu Dan (Computer Science Department, Yunnan University, Kunming, Yunnan Province, 650091, P.R. China), Pan Zhigeng, Shi Jiaoying (State Key Lab. of CAD and CG, Department of Applied Mathematics, Zhejiang University, Hangzhou, Zhejiang Province, 310027, P.R. China). Computer Aided Drafting, Design and Manufacturing, 1996, Issue 2, pp. 35-42.
Open Editor is an object-oriented multimedia editor that runs in a distributed network environment. To add audio media to multimedia applications, an audio server based on the client/server paradigm was designed. In this paper, we first give an overview of Open Editor and then present an in-depth discussion of the implementation techniques of its audio functions.
Keywords: multimedia; window system; audio server; networked computing; object-oriented
20. Optimum Tactics of Parallel Multi-Grid Algorithm with Virtual Boundary Forecast Method Running on a Local Network with the PVM Platform (Cited by 2)
Authors: 郭庆平, 章社生, 卫加宁. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2000, Issue 4, pp. 355-359 (5 pages).
In this paper, an optimum tactic for a parallel multi-grid algorithm with the virtual boundary forecast method is discussed, and a two-stage implementation is presented. The numerical results of solving a non-linear heat transfer equation show that the optimum implementation performs much better than the non-optimum one.
Keywords: algorithm; parallel multi-grid; virtual boundary forecast (VBF); speedup; network computing; PVM
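Editor's note: as we read it, the virtual boundary forecast lets each subdomain relax for several sweeps against a forecast (here, a linear extrapolation) of the interface value, exchanging true values only occasionally and thus saving communication rounds. The sketch below applies this to a 1-D Laplace problem with Jacobi sweeps; the PDE, forecast rule, and schedule are all our assumptions, not the paper's scheme.

```python
# Heavily hedged sketch: two subdomains, forecast ghost values, periodic
# exchange of true interface values.
import numpy as np

n, sweeps, exchanges = 21, 5, 60
u = np.zeros(n); u[-1] = 1.0                 # Dirichlet values u(0)=0, u(1)=1
mid = n // 2
histL = [u[mid + 1], u[mid + 1]]             # left subdomain's ghost history
histR = [u[mid - 1], u[mid - 1]]             # right subdomain's ghost history

for _ in range(exchanges):
    gL = 2 * histL[-1] - histL[-2]           # forecast: linear extrapolation
    gR = 2 * histR[-1] - histR[-2]
    left = u[:mid + 2].copy();  left[-1] = gL
    right = u[mid - 1:].copy(); right[0] = gR
    for _ in range(sweeps):                  # independent Jacobi sweeps
        left[1:-1] = 0.5 * (left[:-2] + left[2:])
        right[1:-1] = 0.5 * (right[:-2] + right[2:])
    u = np.concatenate([left[:mid + 1], right[2:]])   # exchange true values
    histL.append(u[mid + 1]); histR.append(u[mid - 1])

print(np.abs(u - np.linspace(0, 1, n)).max())  # error vs the exact linear profile
```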