
The Model of Familiarity Reduction in Human Language Acquisition——From the Viewpoint of ChatGPT's Model of Verbal Reduction

Cited by: 3
Abstract: Despite the significant progress made in ChatGPT, a large language model of artificial intelligence, the philosophical debate between Turing and Searle is still going on. However, ChatGPT is able to generate new sentences that conform to grammar, and must therefore have reduced out language units (tokens) and rules, thus solving a long-standing problem of natural language understanding in artificial intelligence. This is an important turning point. The learning model of ChatGPT relies on the strong computing power and massive storage capacity of computers, which can be collectively referred to as a strong power of storing and computing. In contrast, the human brain has only a weak power of storing and computing. It is precisely because of this limitation that human language learning cannot completely follow the language learning model of ChatGPT. The human brain reduces out a limited set of units and rules through activities of familiarity (acquaintance) based on experience, thereby generating new sentences. ChatGPT currently adopts a text-based (verbal) learning model rather than an experience-based learning model of familiarity. Future large language models may expand to include a learning model of familiarity, truly simulating the model of familiarity reduction in human language acquisition. Only then may it be said that robots truly understand natural language, and the philosophical dispute between Turing and Searle may be resolved.
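The abstract's central claim — that a model which generates novel grammatical sentences must have reduced its input to units (tokens) and combination rules — can be illustrated with a toy sketch. The bigram model below is only a minimal stand-in for illustration (ChatGPT itself uses transformer networks, not bigram counts): it "reduces" a tiny corpus to tokens and transition rules, then generates a sentence that need not appear verbatim in the training data.

```python
from collections import defaultdict
import random

def train_bigram(corpus):
    """Reduce a corpus to units (tokens) and rules (bigram successor lists)."""
    rules = defaultdict(list)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            rules[a].append(b)  # record that token b may follow token a
    return rules

def generate(rules, seed=0):
    """Generate a sentence by repeatedly applying the learned rules."""
    rng = random.Random(seed)
    token, out = "<s>", []
    while True:
        token = rng.choice(rules[token])
        if token == "</s>":
            return " ".join(out)
        out.append(token)

corpus = ["the cat sees the dog", "the dog sees a bird"]
rules = train_bigram(corpus)
print(generate(rules, seed=1))  # a sentence built from learned units and rules
```

Even this toy model can emit strings such as "the cat sees a bird" that occur nowhere in the corpus, which is the sense in which reduction to units and rules enables novel sentence generation.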
Authors: Chen Baoya; Chen Yue (Center for Chinese Linguistics, Department of Chinese Language and Literature, Peking University, Beijing 100871, China; Department of Mathematics, the University of Georgia, USA)
Source: Journal of Peking University (Philosophy and Social Sciences), CSSCI, PKU Core, 2024, No. 2, pp. 167-174.
Funding: Major Project of the National Social Science Fund of China, "Research on the Integration and Co-evolution of China's Ethnic Music Culture and Language Data" (22&ZD218).
Keywords: artificial intelligence; Turing test; Chinese room; natural language understanding; thinking

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部