Abstract: This is a study of the feasibility and effects of teaching listening and speaking with videos. The research is part of the English Learning Program sponsored by the Academic Affairs Office of Huazhong Agricultural University. The author describes a pilot study, running from September 2002 to the present, in which videos were used to teach listening and speaking. The statistical results of the experiment suggest that this method has some advantages over teaching listening and speaking with the pre-recorded materials that accompany the textbook.
Funding: This work was supported by the President Foundation of the Institute of Acoustics, Chinese Academy of Sciences (No. 98-02) and the "863" High-Tech R&D Project of China (No. 863-306-ZD-11-1).
Abstract: The perception of human language is inherently a multi-modal process, in which visual information can compensate for audio information and improve recognition performance. This phenomenon has been studied for English, German, Spanish, and other languages, but it has not yet been reported for Chinese. In our experiment, 14 syllables (/ba, bi, bian, biao, bin, de, di, dian, duo, dong, gai, gan, gen, gu/), extracted from the Chinese audiovisual bimodal speech database CAVSR-1.0, were pronounced by 10 subjects. The audio-only, audiovisual, and visual-only stimuli were recognized by 20 observers. The audio-only and audiovisual stimuli were each presented under 5 conditions: no noise and SNRs of 0 dB, -8 dB, -12 dB, and -16 dB. From the experimental results, the following conclusions are drawn for Chinese speech: humans can recognize visual-only stimuli rather well; the place of articulation determines the visual distinctiveness; and in noisy environments, visual information can remarkably compensate for audio information, greatly improving recognition performance.
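Note: the abstract does not describe how the noisy audio stimuli were prepared. Purely as an illustrative sketch, and not the authors' procedure, the Python snippet below shows one standard way to mix a speech signal with noise at a target SNR such as the 0 dB, -8 dB, -12 dB, and -16 dB levels listed above. The function name mix_at_snr, the use of white noise, and the 16000-sample placeholder signals are assumptions introduced here for illustration only.

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db`,
    then add it to `speech`. Assumes 1-D float arrays of equal length."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Target noise power for the requested SNR: P_noise = P_speech / 10**(SNR/10)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + scaled_noise

# Example: degrade a placeholder syllable recording at the SNR levels above.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)   # stand-in for a recorded syllable
noise = rng.standard_normal(16000)    # stand-in for the masking noise
for snr in (0, -8, -12, -16):
    noisy = mix_at_snr(speech, noise, snr)

In this formulation the noise is rescaled so that 10*log10(P_speech / P_noise) equals the requested SNR before being added to the speech; any actual mixing procedure used in the experiment may differ.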