Abstract
How knowledge of nonlocal dependencies is implicitly learned remains an open question. Using the same materials and procedures as the human experiments, this study examined whether a Simple Recurrent Network (SRN) could learn two nonlocal rules governing Chinese tones: inversion and retrograde. The results showed that (1) across a wide range of parameter settings, the SRN learned both the inversion and retrograde rules, indicating that the model's memory buffer can simulate human implicit learning of nonlocal dependencies; and (2) the SRN learned inversions better than retrogrades, suggesting that implicit learning of nonlocal dependencies may functionally rely on a first-in-first-out memory buffer and its associated mode of information processing. The study provides new evidence and a new perspective for exploring the mechanisms of implicit learning of nonlocal dependencies.
In the implicit learning literature, a basic question about how knowledge of structures and regularities is acquired is whether the learning mechanism uses a temporary storage buffer and, if so, what the nature of that buffer is. Recently, Li et al. (2013) found that people acquired unconscious structural knowledge of both Chinese tonal retrogrades and inversions. Moreover, inversions were implicitly learned more easily than retrogrades, a pattern predicted if implicit learning relies on a first-in-first-out buffer rather than a last-in-first-out buffer. However, because Chinese Tang poetry uses an inversion structure, common knowledge to which participants were likely exposed as children, it was unclear whether prior expectations about structures instantiating inversions could override the effect of the type of buffer the system uses. Networks have no such prior knowledge. Accordingly, the present study investigated whether the Simple Recurrent Network (SRN), which uses a buffer that allows learning of nonlocal dependencies, could learn tonal inversions and retrogrades and replicate the advantage of inversions over retrogrades. The SRN was tested on the same materials and procedures as Li et al. (2013). The networks were assigned to the four cells of a 2 (training: trained vs. untrained) × 2 (rule: inversion vs. retrograde) design. The simulations were carried out using all possible permutations of the parameter values, yielding 150 different models per group. The materials were strings of tonal syllables. Each string consisted of 10 different tonal syllables, in which the tone types (pings and zes) of the first five syllables predicted the tone types of the following five by forming an inversion or a retrograde. In the training phase, 144 grammatical strings were presented to the two trained groups.
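The two rules can be made concrete with a short sketch. The abstract does not spell out the exact stimulus construction, so the code below assumes the operationalization implied by the FIFO/LIFO predictions: under an inversion, position i of the second half is the ping/ze complement of position i of the first half (same serial order, hence first-in-first-out), while under a retrograde, the second half repeats the first half's tone categories in reverse order (last-in-first-out). The function names are illustrative, not from the original study.

```python
import random

PING, ZE = "P", "Z"  # the two Chinese tone categories (ping vs. ze)

def complement(tone):
    """Flip the tone category: ping <-> ze."""
    return ZE if tone == PING else PING

def make_string(rule, n=5, rng=random):
    """Generate a 2n-element tone-category string whose second half is
    fully determined by the first half under the given rule (assumed form)."""
    first = [rng.choice([PING, ZE]) for _ in range(n)]
    if rule == "inversion":
        # same serial order, each category flipped: position i -> position i
        second = [complement(t) for t in first]
    elif rule == "retrograde":
        # reversed serial order, categories kept: position i -> position n-1-i
        second = list(reversed(first))
    else:
        raise ValueError(f"unknown rule: {rule}")
    return first + second
```

Under these definitions the element needed to predict second-half position i is the one stored earliest for an inversion (favoring a FIFO buffer) and the one stored latest for a retrograde (favoring a LIFO buffer), matching the logic described above.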
In the test phase, the four groups of networks were presented with 48 test sequences (half grammatical and half ungrammatical), and their ability to predict the next tone across the predictable second five elements served as the index of performance. T-tests (with Bonferroni correction) showed that trained networks performed significantly better than untrained networks for both the inversion and retrograde groups, suggesting that the networks learned the two rules. Moreover, for both the trained and untrained conditions, the inversion group performed significantly better than the retrograde group, and the inversion-retrograde difference was greater for trained than for untrained networks, indicating that inversions were implicitly learned more easily than retrogrades. Further, learning effects were computed by subtracting the z-scores of the untrained networks/participants from those of the trained networks/participants. A substantial number of the SRNs fell within the area covered by the human data (M ± 1 SE) (15/150 for inversion, 38/150 for retrograde), suggesting that the SRN could match the characteristic performance of human participants. To conclude, consistent with the results of the human experiments, the present simulations showed that the SRN could learn the two nonlocal dependencies and that tonal inversions were implicitly learned more easily than retrogrades, tentatively suggesting that, functionally, a first-in-first-out memory buffer is more likely to be involved in implicit learning of nonlocal dependencies. The present study thus provides new evidence and a new perspective for exploring the implicit learning mechanism of nonlocal dependencies.
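To make the modelling approach concrete, here is a minimal Elman-style SRN sketch. The architecture, learning rate, and stimulus coding are illustrative assumptions, not the parameter settings swept in the reported simulations. It trains with Elman's original one-step scheme (the copied-back context layer is treated as a fixed extra input, with no backpropagation through time) and scores next-tone prediction accuracy on the predictable second half of each string, the performance index described above. The inversion generator assumes the pointwise ping/ze-complement form of the rule.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def inversion_string(rng, n=5):
    """Tone-category string (0 = ping, 1 = ze) whose second half is the
    pointwise complement of the first half (assumed form of the rule)."""
    first = rng.integers(0, 2, n)
    return np.concatenate([first, 1 - first])

class SRN:
    """Minimal Elman simple recurrent network. The context (copy-back)
    layer is the temporary memory buffer that makes nonlocal
    dependencies learnable in principle."""

    def __init__(self, n_in=2, n_hidden=10, n_out=2, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.Wxh = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.Whh = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.Who = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr
        self.nh = n_hidden

    def train_sequence(self, seq):
        """seq: array of one-hot tone vectors. Predict element t+1 from
        elements up to t; return prediction accuracy on the second,
        rule-determined half of the string."""
        h = np.zeros(self.nh)
        correct, total = 0, 0
        n = len(seq)
        for t in range(n - 1):
            x, target = seq[t], seq[t + 1]
            h_prev = h
            h = np.tanh(x @ self.Wxh + h_prev @ self.Whh)
            p = softmax(h @ self.Who)
            # score only the predictable second half
            if t + 1 >= n // 2:
                correct += int(p.argmax() == target.argmax())
                total += 1
            # one-step backprop: context h_prev treated as a constant input
            d_o = p - target                       # softmax/cross-entropy grad
            d_h = (1.0 - h ** 2) * (d_o @ self.Who.T)
            self.Who -= self.lr * np.outer(h, d_o)
            self.Wxh -= self.lr * np.outer(x, d_h)
            self.Whh -= self.lr * np.outer(h_prev, d_h)
        return correct / total
```

A full replication would sweep hidden-layer size, learning rate, and the other parameters over all their permutations (150 models per cell), train on the 144 grammatical strings, and compare trained against untrained networks on the 48 test sequences, as in the study.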
Authors
Li Feifei; Liu Baogen (Hangzhou College of Preschool Teacher Education, Zhejiang Normal University, Hangzhou, 311215)
Source
Journal of Psychological Science (《心理科学》), 2018, No. 4, pp. 796-802 (7 pages)
Indexed in: CSSCI, CSCD, Peking University Core Journals
Funding
Supported by the Humanities and Social Sciences Research Youth Fund of the Ministry of Education (17YJC880050) and the Zhejiang Provincial Natural Science Foundation (LY18C090006).
Keywords
nonlocal dependencies; implicit learning; memory buffer; neural network simulations