Abstract
The integrated processing of facial and vocal emotional information is an important skill for social interaction and has increasingly attracted attention from psychology and neuroscience in recent years. Existing research has systematically examined the behavioral manifestations of bimodal emotion integration and the factors that influence it, and has largely answered two questions of interest to cognitive neuroscience: "when" and "where" integration occurs. However, two key questions still lack systematic study: can facial and vocal emotional information be integrated into a coherent emotional object, and how does the brain merge the bimodal emotional information into one? Therefore, this project plans to systematically manipulate the emotional salience of facial and vocal stimuli and the task demands, to introduce dynamic face-voice stimuli to increase external validity, and to combine behavioral and electrophysiological techniques, mining the data from multiple angles and in particular introducing neural-oscillation (time-frequency and coherence) analyses. The aim is to systematically examine whether dynamic facial and vocal emotional information can be integrated into a coherent emotional object, and to clarify, at the level of neural oscillations, the mechanism by which bimodal emotional information is integrated.
The integration of facial-vocal emotion is an important factor in successful communication that has intrigued psychologists and neuroscientists in recent years. Previous studies have elaborated on the behavioral performance of, and the influencing factors for, facial-vocal emotion integration, as well as on "when" and "where" information from the two modalities is integrated. However, it remains an open question whether the integration of facial-vocal emotion follows the principles of multisensory integration (e.g., the principle of inverse effectiveness), and how the bimodal emotional information merges into a coherent emotional object. Therefore, taking "whether facial-vocal emotion integration obeys the principle of inverse effectiveness" as the main line, we designed six experiments that systematically manipulated the emotional salience of dynamic facial-vocal emotional stimuli and the task demands. Moreover, using multi-dimensional analyses of behavioral and EEG data, especially time-frequency and coherence analyses of the EEG data, we aimed to answer the two proposed questions and to further reveal the neurophysiological mechanism of facial-vocal emotion integration.
Source
《心理科学进展》
CSSCI
CSCD
Peking University Core Journal
2015, No. 7, pp. 1109-1117 (9 pages)
Advances in Psychological Science
Funding
National Natural Science Foundation of China (31300835)
Humanities and Social Sciences Fund of the Ministry of Education (12XJC190003)
Fundamental Research Funds for the Central Universities (14SZYB07)