Funding: Supported by the National Natural Science Foundation of China (Nos. U22B2059, 62176079), the Natural Science Foundation of Heilongjiang Province, China (No. YQ2022F005), and the Industry-University-Research Innovation Foundation of Chinese Universities (No. 2021ITA05009).
Abstract: With the emergence of pre-trained models, current neural networks are able to achieve task performance comparable to humans. However, we know little about the fundamental working mechanism of pre-trained models: we do not know how they reach such performance or how the model solves the task. For example, given a task, humans learn from easy to hard, whereas a model learns in random order. Undeniably, difficulty-insensitive learning has led to great success in natural language processing (NLP), but little attention has been paid to the effect of text difficulty in NLP. We propose a human learning matching index (HLM Index) to investigate the effect of text difficulty. Experimental results show: 1) LSTM exhibits more human-like learning behavior than BERT. Additionally, UID-SuperLinear gives the best evaluation of text difficulty among four text difficulty criteria. Among nine tasks, performance on some tasks is related to text difficulty, whereas on others it is not. 2) A model trained on easy data performs best on both easy and medium test data, whereas a model trained on hard data performs well only on hard test data. 3) Training the model from easy to hard leads to quicker convergence.
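Finding 3 above, training from easy to hard, is the idea behind curriculum learning. A minimal sketch of such a schedule, assuming a per-example difficulty score is available (the function names and the length-based difficulty proxy here are illustrative, not taken from the paper):

```python
def curriculum_batches(examples, difficulty, batch_size):
    """Yield mini-batches of examples ordered from easy to hard.

    examples:   list of training items
    difficulty: function mapping one example to a numeric difficulty score
    """
    ordered = sorted(examples, key=difficulty)  # easy examples first
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]

# Example: treat sentence length (word count) as a crude difficulty proxy.
sentences = ["a b c d e f", "a", "a b c", "a b"]
batches = list(curriculum_batches(sentences, lambda s: len(s.split()), 2))
# batches[0] holds the two shortest sentences, batches[1] the two longest.
```

In practice the difficulty score would come from one of the paper's criteria (e.g., a UID-based measure) rather than raw length, and batches would be fed to the trainer in this order.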
Abstract: To address the inability of traditional truth-discovery algorithms to extract the key semantic information of text data, a truth-discovery algorithm for text data based on a capsule network (Truth Discovery of Text Data Based on Capsule Network, Caps-Truth) is proposed. It improves on the traditional convolutional neural network (CNN) by constructing a semantic capsule layer in the network to replace the CNN pooling layer for representing textual semantic information. First, the CNN convolutional layer extracts global features of the text data; a primary capsule layer then vectorizes the feature information, and the semantic capsule layer represents fine-grained semantic information of the text; finally, the feature vectors are fed into a fully connected network to mine the credibility of the text data and obtain reliable answers. This algorithm introduces capsule networks into truth discovery and uses the dynamic routing algorithm to integrate scattered semantics, effectively improving the performance of truth discovery on text data. Experimental results show that Caps-Truth outperforms the comparison algorithms.
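The dynamic routing step that integrates scattered semantics between capsule layers can be sketched as below. This is a generic routing-by-agreement implementation in the style of the original capsule-network formulation, not the paper's exact code; the capsule counts and dimensions are illustrative:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Capsule nonlinearity: squash vector length into [0, 1), keep direction."""
    norm2 = np.sum(v * v, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    """Route prediction vectors u_hat[i, j, :] from input capsule i to output capsule j.

    u_hat: array of shape (num_in, num_out, dim).
    Returns output capsule vectors of shape (num_out, dim).
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                           # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over output capsules
        s = np.einsum("ij,ijd->jd", c, u_hat)                 # coupling-weighted sum
        v = squash(s)                                         # candidate output capsules
        b = b + np.einsum("ijd,jd->ij", u_hat, v)             # agreement update
    return v

# 8 primary capsules routing to 4 semantic capsules of dimension 16.
rng = np.random.default_rng(0)
out = dynamic_routing(rng.normal(size=(8, 4, 16)))
```

The agreement update reinforces couplings whose predictions align with the current output, which is how scattered per-region semantics get merged into a small number of coherent semantic capsules.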