Journal Articles
7 articles found
1. Adaptive Load Balancing for Parameter Servers in Distributed Machine Learning over Heterogeneous Networks (cited 1 time)
Authors: CAI Weibo, YANG Shulin, SUN Gang, ZHANG Qiming, YU Hongfang. ZTE Communications, 2023, No. 1, pp. 72–80.
In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load distribution across PSs leads to a significant slowdown of model synchronization in heterogeneous networks due to low bandwidth utilization. To address this problem, a network-aware adaptive PS load distribution scheme is proposed, which accelerates model synchronization by proactively adjusting the communication load on PSs according to network states. We evaluate the proposed scheme on MXNet, a real-world distributed training platform, and results show that our scheme achieves up to a 2.68× speed-up of model training in dynamic and heterogeneous network environments.
Keywords: distributed machine learning; network awareness; parameter server; load distribution; heterogeneous network
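The core idea of the abstract, shifting communication load toward parameter servers with better network conditions, can be sketched as a proportional shard re-assignment. The function name and the bandwidth-proportional rule below are illustrative assumptions, not the paper's exact algorithm:

```python
def rebalance_shards(total_params, bandwidths):
    """Assign parameter-shard sizes proportional to each server's
    measured bandwidth, so faster links carry more of the traffic.

    total_params: total number of parameters to distribute
    bandwidths:   dict mapping server name -> measured bandwidth
    """
    total_bw = sum(bandwidths.values())
    shares = {ps: int(total_params * bw / total_bw)
              for ps, bw in bandwidths.items()}
    # Hand the rounding remainder to the fastest server.
    remainder = total_params - sum(shares.values())
    fastest = max(bandwidths, key=bandwidths.get)
    shares[fastest] += remainder
    return shares
```

In a network-aware scheme such re-assignment would be re-run whenever monitored link bandwidths change, so the shard layout tracks the current network state.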
2. DRPS: Efficient Disk-Resident Parameter Servers for Distributed Machine Learning
Authors: Zhen Song, Yu Gu, Zhigang Wang, Ge Yu. Frontiers of Computer Science, 2022, No. 4, pp. 79–90.
The parameter server (PS), as the state-of-the-art distributed framework for large-scale iterative machine learning tasks, has been extensively studied. However, existing PS-based systems usually rely on in-memory implementations. Under memory constraints, machine learning (ML) developers cannot train large-scale ML models on their rather small local clusters, and renting large-scale cloud servers is often economically infeasible for research teams and small companies. In this paper, we propose a disk-resident parameter server system named DRPS, which reduces the hardware requirements of large-scale machine learning tasks by storing high-dimensional models on disk. To further improve the performance of DRPS, we build an efficient index structure for parameters to reduce disk I/O cost. Based on this index structure, we propose a novel multi-objective partitioning algorithm for the parameters. Finally, a flexible worker-selection parallel model of computation (WSP) is proposed to strike the right balance between inconsistent parameter versions (staleness) and inconsistent execution progress (stragglers). Extensive experiments on many typical machine learning applications with real and synthetic datasets validate the effectiveness of DRPS.
Keywords: parameter server; machine learning; disk-resident; parallel model
3. A Web-Based Management Information System for 3D Standard Mold Bases
Authors: 周茂军, 丁金华, 杨林, 张川. 《大连工业大学学报》 (Journal of Dalian Polytechnic University), CAS, 2008, No. 1, pp. 91–93.
Standard mold bases are widely used in mold design, but most of them follow enterprise standards, so drafting involves heavy repetitive work and mold-base data is not updated promptly. This paper describes a method for developing a 3D standard mold-base library using SolidWorks as the supporting software, C# as the development language, and SQL Server 2000 as the database. A system architecture combining the B/S and C/S modes enables networked management, querying, and maintenance of mold-base data, ensures that standard mold-base data stays accurate and up to date, and supports client-side parameterization of 3D mold-base assemblies, significantly improving design efficiency.
Keywords: mold base; B/S architecture; C/S architecture; parameterization
4. Application of I-DEAS in Gear Modeling (cited 2 times)
Authors: 赵玉伦, 王永刚, 陈小安. 《机械工程师》 (Mechanical Engineer), PKU Core, 2000, No. 2, pp. 49–50.
This paper briefly describes the modeling technology of the I-DEAS software, and discusses the process of building a parametric gear-modeling program through secondary development of I-DEAS, as well as the way the application program interacts with I-DEAS.
Keywords: parametric modeling; client/server; I-DEAS; gear
5. Dynamic and Interactive Data Queries with ASP (cited 5 times)
Authors: 刘海清, 张永林. 《计算机应用研究》 (Application Research of Computers), CSCD, PKU Core, 2001, No. 8, pp. 68–70.
This paper describes in detail an interactive, parameterized data-query method that generates SQL statements dynamically, and offers useful reference value.
Keywords: parameterized query; dynamic SQL statements; database; ASP; interactive data query
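The paper's implementation uses ASP; as a language-neutral illustration of the same idea, a WHERE clause assembled dynamically from user-chosen filters while the values travel as bound parameters, here is a Python/sqlite3 sketch (function name and schema are hypothetical):

```python
import sqlite3

def dynamic_query(conn, table, filters):
    """Build a parameterized SQL statement from user-chosen filters.
    The WHERE clause is assembled dynamically, but every value is passed
    as a bound parameter, never concatenated into the SQL string."""
    clauses, params = [], []
    for column, value in filters.items():
        clauses.append(f"{column} = ?")
        params.append(value)
    sql = f"SELECT * FROM {table}"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()
```

Note that only the *values* are parameterized; table and column names cannot be bound, so in practice they must be validated against a whitelist before being interpolated.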
6. Research and Development of a Parameterizable Networked Part Library (cited 1 time)
Authors: 遇金伟, 陈东. 《矿业工程》 (Mining Engineering), CAS, 2007, No. 6, pp. 64–66.
A parameterizable 3D part-library system was built for a networked environment based on the C/S architecture, with libraries for both standard and non-standard parts. Taking an assembly of a coupling, key, and shaft as an example, the paper introduces component parameterization techniques and studies the construction of the part library in an Internet environment. The library is extensible: it can cover all standard parts in the mechanical field as well as the non-standard parts an enterprise frequently uses or machines, making it a practical enterprise application.
Keywords: parameterization; C/S; part library; SolidWorks; data management; SQL Server
7. Impact of Data Set Noise on Distributed Deep Learning
Authors: Guo Qinghao, Shuai Liguo, Hu Sunying. The Journal of China Universities of Posts and Telecommunications, EI, CSCD, 2020, No. 2, pp. 37–45.
Training efficiency and test accuracy are important factors in judging the scalability of distributed deep learning. This paper explores the impact of noise introduced into the Modified National Institute of Standards and Technology database (MNIST) and CIFAR-10 datasets, which are selected as benchmarks in distributed deep learning. The noise in the training set is manually divided into cross-noise and random noise, each present at a different ratio in the dataset. To minimize the influence of parameter interactions in distributed deep learning, we choose a compressed model (SqueezeNet) together with the proposed flexible communication method, which reduces communication frequency, and we evaluate the influence of noise on distributed training under both synchronous and asynchronous stochastic gradient descent algorithms. On the experimental platform TensorFlowOnSpark, we obtain the training accuracy at different noise ratios and the training time for different numbers of nodes. Cross-noise in the training set not only decreases test accuracy but also increases distributed training time; such noise actively degrades the scalability of distributed deep learning.
Keywords: distributed deep learning; stochastic gradient descent; parameter server (PS); dataset noise
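The paper's two noise types can be mimicked with a small label-corruption routine. The concrete corruption rules below (next-class swap for cross-noise, uniformly random wrong class for random noise) are illustrative assumptions, not the paper's exact protocol:

```python
import random

def inject_label_noise(labels, num_classes, cross_ratio, random_ratio, seed=0):
    """Corrupt a fraction of training labels.
    'Cross-noise' swaps a label for the next class, modeling a
    systematic confusion; 'random noise' replaces it with a uniformly
    random *wrong* class. The two corrupted subsets are disjoint."""
    rng = random.Random(seed)
    noisy = list(labels)
    idx = list(range(len(labels)))
    rng.shuffle(idx)
    n_cross = int(len(labels) * cross_ratio)
    n_rand = int(len(labels) * random_ratio)
    for i in idx[:n_cross]:                        # systematic confusion
        noisy[i] = (noisy[i] + 1) % num_classes
    for i in idx[n_cross:n_cross + n_rand]:        # uniform wrong label
        wrong = rng.randrange(num_classes - 1)
        noisy[i] = wrong if wrong < noisy[i] else wrong + 1
    return noisy
```

Applying this at several (cross_ratio, random_ratio) settings before training would reproduce the kind of controlled noise sweep the experiments describe.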