Abstract
Natural Language Processing (NLP) is an interdisciplinary field that uses computers to study and process natural language. In recent years it has developed rapidly and attracted great attention from the linguistics community. This paper discusses four types of neural network models in natural language processing: the Feed-forward Neural Network model (FNN), the Convolutional Neural Network model (CNN), the Recurrent Neural Network model (RNN), and the Pre-Training model (PT), covering the principles, structures, algorithms, and mechanisms of each model and highlighting their applications in NLP. The paper points out that although neural network models have become the mainstream of NLP, they still lack interpretability and will need the support of rule-based and statistics-based language models in the future.
Authors
FENG Zhiwei; DING Xiaomei
Source
Contemporary Foreign Language Studies (《当代外语研究》), CSSCI, 2022, No. 4, pp. 98-110, 153 (14 pages)
Funding
A staged result of the National Social Science Fund of China project "Research on the Compilation of a Russian-Chinese Dictionary of Linguistic Terms Based on Parallel Corpora" (No. 17BYY220).
Keywords
Natural Language Processing
Neural Network model
Feed-forward Neural Network model
Convolutional Neural Network model
Recurrent Neural Network model
Pre-Training model