Journal Articles
2 articles found
1. Mental Models: The Cornerstone of Language Competence (心智模式:语言能力的基石)
Authors: 孙仕光 (Sun Shiguang), 刘睿 (Liu Rui). 《通化师范学院学报》 (Journal of Tonghua Normal University), 2022, Issue 7, pp. 35-40 (6 pages).
Abstract: The capacity for mental models to emerge is the cornerstone of human language knowledge, or language competence. Language knowledge consists of language patterns, and a language pattern is a kind of mental model that emerges as humans perceive the world while simultaneously receiving speech input. Language competence is the ability to use these patterns to produce and recognize speech. Language patterns form gradually: from the concrete to the abstract, from coarse and incomplete to fine-grained and complete, and from unstable to stable. In the early stages of pattern formation, children are prone to recognition errors and overextension errors. Language patterns are productive, that is, they generalize; once a pattern matures, pattern recognition becomes fairly robust.
Keywords: mental model; language pattern; emergence; statistical learning
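The abstract's central claim, that language patterns emerge gradually from statistical learning over speech input, can be made concrete with a toy simulation. The sketch below is my own illustration, not code from the article: it segments a continuous syllable stream using only transitional probabilities between adjacent syllables, in the spirit of classic statistical-learning experiments. The three invented "words", the syllable inventory, and the 0.5 boundary threshold are all assumptions of the demo.

```python
# Minimal illustrative sketch (mine, not from the article): "statistical
# learning" modeled as segmenting a continuous syllable stream by the
# transitional probabilities between syllables. The mini-lexicon and the
# 0.5 boundary threshold are hypothetical choices for the demo.
import random
from collections import Counter

random.seed(0)
words = ["bida", "kupa", "tigo"]            # hypothetical "words"
stream = words * 100                        # repeated exposure
random.shuffle(stream)
syllables = [s for w in stream for s in (w[:2], w[2:])]

# Estimate transitional probabilities P(next | current) from co-occurrence
# counts: high within a word, low across word boundaries.
bigrams = Counter(zip(syllables, syllables[1:]))
unigrams = Counter(syllables[:-1])

def tp(a, b):
    return bigrams[(a, b)] / unigrams[a]

# Posit a word boundary wherever the transitional probability dips below
# the (assumed) threshold; stable syllable pairs cohere into emergent units.
segmented, current = [], syllables[0]
for a, b in zip(syllables, syllables[1:]):
    if tp(a, b) < 0.5:
        segmented.append(current)
        current = b
    else:
        current += b
segmented.append(current)

print(sorted(set(segmented)))  # expected: ['bida', 'kupa', 'tigo']
```

With enough exposure the within-word transitions approach probability 1 while cross-word transitions hover near 1/3, so the word-like units emerge from the statistics alone; early in exposure, before the counts stabilize, the same procedure mis-segments the stream, which loosely parallels the recognition errors the abstract attributes to children.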
2. Strategies and Principles of Distributed Machine Learning on Big Data (Cited by: 17)
Authors: Eric P. Xing, Qirong Ho, Dai Wei, Pengtao Xie. Engineering (SCIE, EI), 2016, Issue 2, pp. 179-195 (17 pages).
Abstract: The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required, and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that "big" ML systems can benefit greatly from ML-rooted statistical and algorithmic insights, and that ML researchers should therefore not shy away from such systems design, we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area that lies between ML and systems.
Keywords: machine learning; artificial intelligence; big data; big model; distributed systems; principles; theory; data-parallelism; model-parallelism
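Of the four questions the abstract raises, the first ("How can an ML program be distributed over a cluster?") has two canonical answers, data-parallelism and model-parallelism, both of which appear among the keywords. The sketch below is my own single-process simulation of the data-parallel case, not the authors' software: the number of workers, the least-squares model, and the learning rate are all invented for the demo.

```python
# Minimal sketch of data-parallelism (my simulation, not the paper's system):
# P workers each hold a shard of the data, all compute gradients against the
# same parameter copy, and a synchronous reduction averages the gradients
# each iteration. Model-parallelism would instead partition the *parameters*
# across workers.
import numpy as np

rng = np.random.default_rng(0)
P, n, d = 4, 400, 5                         # workers, examples, features
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

shards = np.array_split(np.arange(n), P)    # each worker owns one data shard
w = np.zeros(d)                             # globally shared parameters
lr = 0.1

def local_gradient(w, idx):
    """Least-squares gradient computed from one worker's shard only."""
    Xi, yi = X[idx], y[idx]
    return 2.0 * Xi.T @ (Xi @ w - yi) / len(idx)

for step in range(200):
    # Bulk-synchronous step: every worker computes on the same w, then the
    # per-worker gradients are aggregated (an all-reduce in a real cluster).
    grads = [local_gradient(w, idx) for idx in shards]
    w -= lr * np.mean(grads, axis=0)

print(np.linalg.norm(w - w_true))  # small residual: the shards jointly fit w
```

In a real cluster the np.mean over worker gradients would be an all-reduce or a parameter-server update, and the paper's remaining three questions are, roughly, about when and how that synchronization step can be relaxed, for example how much staleness the statistical behavior of the algorithm can tolerate.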