With the emergence of pre-trained models, current neural networks achieve task performance comparable to humans. However, we know little about the fundamental working mechanism of pre-trained models: we do not know how they reach such performance or how they actually solve the task. For example, given a task, humans learn from easy to hard, whereas models learn in a random order. Undeniably, difficulty-insensitive learning has led to great success in natural language processing (NLP), but little attention has been paid to the effect of text difficulty in NLP. We propose a human learning matching index (HLM Index) to investigate the effect of text difficulty. Experimental results show that: 1) LSTM exhibits more human-like learning behavior than BERT, UID-SuperLinear gives the best evaluation of text difficulty among the four text difficulty criteria considered, and among nine tasks, some tasks' performance is related to text difficulty whereas others' is not; 2) a model trained on easy data performs best on both easy and medium test data, whereas a model trained on hard data performs well only on hard test data; 3) training the model from easy to hard leads to quicker convergence.
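Finding 3 describes a simple curriculum recipe: score each training example for difficulty, then present examples from easy to hard. The sketch below illustrates that recipe with a UID-style super-linear score; the unigram surprisal model, the exponent k = 1.25, and the assumption that higher scores mean harder text are illustrative stand-ins, since the abstract does not define the four difficulty criteria or the exact form of UID-SuperLinear.

```python
# Minimal sketch of easy-to-hard (curriculum) ordering based on a
# UID-style super-linear surprisal score. The unigram "language model"
# is a stand-in; a real setup would use surprisals from a trained LM.

import math
from collections import Counter

def unigram_surprisals(sentence, counts, total):
    # Per-token surprisal -log2 p(w) under an add-one-smoothed unigram model.
    return [-math.log2((counts[w] + 1) / (total + len(counts) + 1))
            for w in sentence.split()]

def uid_superlinear(sentence, counts, total, k=1.25):
    # Super-linear UID score: mean of per-token surprisal raised to k > 1.
    # Assumption: a larger score means less uniform, harder text.
    s = unigram_surprisals(sentence, counts, total)
    return sum(x ** k for x in s) / max(len(s), 1)

corpus = [
    "the cat sat on the mat",
    "colorless green ideas sleep furiously",
    "the dog ran",
]

counts = Counter(w for sent in corpus for w in sent.split())
total = sum(counts.values())

# Curriculum: sort training sentences from easy to hard, then feed them
# to the trainer in that order (finding 3: quicker convergence).
curriculum = sorted(corpus, key=lambda s: uid_superlinear(s, counts, total))
for sent in curriculum:
    print(f"{uid_superlinear(sent, counts, total):6.2f}  {sent}")
```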
Funding: This work was supported by the National Natural Science Foundation of China (Nos. U22B2059 and 62176079), the Natural Science Foundation of Heilongjiang Province, China (No. YQ2022F005), and the Industry-University-Research Innovation Foundation of China University (No. 2021ITA05009).