Journal Articles
23 articles found
1. Integrating Audio-Visual Features and Text Information for Story Segmentation of News Video (Cited: 1)
Authors: Liu Hua-yong, Zhou Dong-ru (School of Computer, Wuhan University, Wuhan 430072, Hubei, China)
Wuhan University Journal of Natural Sciences, CAS, 2003, No. 04A, pp. 1070-1074 (5 pages)
Video data are composed of multimodal information streams, including visual, auditory, and textual streams, so an approach to story segmentation for news video using multimodal analysis is described in this paper. The proposed approach detects topic-caption frames and integrates them with silence-clip detection results, as well as shot segmentation results, to locate news story boundaries. The integration of audio-visual features and text information overcomes the weakness of approaches using only image analysis techniques. On test data with 135,400 frames, an accuracy rate of 85.8% and a recall rate of 97.5% are obtained when detecting the boundaries between news stories. The experimental results show the approach is valid and robust.
Keywords: news video; story segmentation; audio-visual feature analysis; text detection
2. A Review on Audio-visual Translation Studies
Author: 李瑶
《语言与文化研究》 2008, No. 1, pp. 146-150 (5 pages)
This paper is dedicated to a thorough review of audio-visual translation studies at home and abroad. Reviewing foreign achievements in this specific field of translation studies can shed some light on our national audio-visual practice and research. The review of Chinese scholars' audio-visual translation studies is intended to offer potential developing directions and guidelines for the field, as well as for aspects so far neglected. Based on the summary of relevant studies, possible topics for further study are proposed.
Keywords: audio-visual translation; subtitling; dubbing
3. Audio-visual Emotion Recognition with Multilayer Boosted HMM
Authors: 吕坤, 贾云得, 张欣
Journal of Beijing Institute of Technology, EI CAS, 2013, No. 1, pp. 89-93 (5 pages)
Emotion recognition has become an important task in modern human-computer interaction. A multilayer boosted HMM (MBHMM) classifier for automatic audio-visual emotion recognition is presented in this paper. A modified Baum-Welch algorithm is proposed for component HMM learning, and adaptive boosting (AdaBoost) is used to train ensemble classifiers for the different layers (cues). Except for the first layer, the initial weights of training samples in the current layer are decided by the recognition results of the ensemble classifier in the upper layer; thus the training procedure for the current cue can focus more on the samples the previous cue found difficult. Our MBHMM classifier combines these ensemble classifiers and takes advantage of the complementary information from multiple cues and modalities. Experimental results on audio-visual emotion data collected in Wizard of Oz scenarios and labeled under two types of emotion category sets demonstrate that our approach is effective and promising.
Keywords: emotion recognition; audio-visual fusion; Baum-Welch algorithm; multilayer boosted HMM; Wizard of Oz scenario
4. The Audio-Visual Performance Highlighted Craze in Chicago During Chinese New Year
China & The World Cultural Exchange, 2019, No. 2, pp. 38-39 (2 pages)
On February 10, 2019 (US Central Time), the China National Peking Opera Company (CNPOC) and the Hubei Chime Bells National Chinese Orchestra presented a fantastic audio-visual performance of Chinese Peking Opera and Chinese chime bells for the American audience at the world's top-level Buntrock Hall at Symphony Center.
Keywords: audio-visual performance; Chicago; Chinese New Year
5. Research on National Identity Based on National Audio-Visual Works: Taking Inner Mongolia as an Example
Authors: LIU Haitao, ZHANG Pei
Cultural and Religious Studies, 2021, No. 8, pp. 391-396 (6 pages)
Mongolian audio-visual works are an important carrier for exploring the true significance of this national culture. This paper holds that the Mongolian people in Inner Mongolia constantly enhance their individual sense of identity with the overall ethnic group through the influence of film, television, and music, and on this basis continually evolve a new culture in line with modern and contemporary life, further enhancing their sense of belonging to the ethnic nation.
Keywords: Mongolian audio-visual works; national identity
6. Application of the Task-based Teaching Method to College Audio-visual English Teaching
Author: Liguo Shi
International Journal of Technology Management, 2015, No. 9, pp. 65-67 (3 pages)
Based on the current situation of college audio-visual English teaching in China, this article points out that avoidance in class is a serious phenomenon in the college audio-visual English teaching process. After further analysis, and in combination with the characteristics of college English audio-visual teaching in China, it puts forward the application of the task-based teaching method to college audio-visual English teaching and its steps, attempting to alleviate students' avoidance through the task-based teaching method.
Keywords: task-based teaching method; college English; audio-visual English teaching
7. Prioritized MPEG-4 Audio-Visual Objects Streaming over the DiffServ
Authors: 黄天云, 郑婵
Journal of Electronic Science and Technology of China, 2005, No. 4, pp. 314-320 (7 pages)
The object-based scalable coding in MPEG-4 is investigated, and a prioritized transmission scheme for MPEG-4 audio-visual objects (AVOs) over the DiffServ network with QoS guarantees is proposed. MPEG-4 AVOs are extracted and classified into different groups according to their priority values and scalable layers (visual importance). These priority values are mapped to the IP DiffServ per-hop behaviors (PHBs). The scheme can selectively discard packets of low importance in order to avoid network congestion. Simulation results show that the quality of the received video gracefully adapts to the network state, as compared with the best-effort manner. Also, by allowing the content provider to define the prioritization of each audio-visual object, the adaptive transmission of object-based scalable video can be customized based on the content.
Keywords: video streaming; quality of service (QoS); MPEG-4 audio-visual objects (AVOs); DiffServ; prioritization
8. Self-supervised Learning for Speech Emotion Recognition Task Using Audio-visual Features and Distil HuBERT Model on BAVED and RAVDESS Databases
Authors: Karim Dabbabi, Abdelkarim Mars
Journal of Systems Science and Systems Engineering, SCIE EI CSCD, 2024, No. 5, pp. 576-606 (31 pages)
Existing pre-trained models like Distil HuBERT excel at uncovering hidden patterns and facilitating accurate recognition across diverse data types, such as audio and visual information. We harnessed this capability to develop a deep learning model that utilizes Distil HuBERT for jointly learning these combined features in speech emotion recognition (SER). Our experiments highlight its distinct advantages: it significantly outperforms Wav2vec 2.0 in both offline and real-time accuracy on the RAVDESS and BAVED datasets. Although slightly trailing HuBERT's offline accuracy, Distil HuBERT shines with comparable performance at a fraction of the model size, making it an ideal choice for resource-constrained environments like mobile devices. This smaller size does come with a slight trade-off: Distil HuBERT achieved notable accuracy in offline evaluation, with 96.33% on the BAVED database and 87.01% on the RAVDESS database. In real-time evaluation, the accuracy decreased to 79.3% on BAVED and 77.87% on RAVDESS. This decrease is likely a result of the challenges associated with real-time processing, including latency and noise, but still demonstrates strong performance in practical scenarios. Therefore, Distil HuBERT emerges as a compelling choice for SER, especially when prioritizing accuracy over real-time processing. Its compact size further enhances its potential in resource-limited settings, making it a versatile tool for a wide range of applications.
Keywords: Wav2vec 2.0; Distil HuBERT; HuBERT; SER; audio and audio-visual features
9. Cogeneration of Innovative Audio-visual Content: A New Challenge for Computing Art
Authors: Mengting Liu, Ying Zhou, Yuwei Wu, Feng Gao
Machine Intelligence Research, EI CSCD, 2024, No. 1, pp. 4-28 (25 pages)
In recent years, computing art has developed rapidly with the in-depth cross study of artificial intelligence generated content (AIGC) and the main features of artworks. Audio-visual content generation has gradually been applied to various practical tasks, including video or game scoring, assisting artists in creation, art education, and other aspects, demonstrating broad application prospects. In this paper, we introduce innovative achievements in audio-visual content generation from the perspectives of visual art generation and auditory art generation based on artificial intelligence (AI). We outline the development tendency of image and music datasets, visual and auditory content modelling, and related automatic generation systems. The objective and subjective evaluation of generated samples plays an important role in the measurement of algorithm performance. We provide a cogeneration mechanism for audio-visual content in multimodal tasks from image to music and present the construction of specific stylized datasets. There are still many new opportunities and challenges in the field of audio-visual synesthesia generation, and we provide a comprehensive discussion of them.
Keywords: artificial intelligence (AI) art; audio-visual; artificial intelligence generated content (AIGC); multimodal; artistic evaluation
10. AV-FDTI: Audio-visual Fusion for Drone Threat Identification
Authors: Yizhuo Yang, Shenghai Yuan, Jianfei Yang, Thien Hoang Nguyen, Muqing Cao, Thien-Minh Nguyen, Han Wang, Lihua Xie
Journal of Automation and Intelligence, 2024, No. 3, pp. 144-151 (8 pages)
In response to the evolving challenges posed by small unmanned aerial vehicles (UAVs), which have the potential to transport harmful payloads or cause significant damage, we present AV-FDTI, an innovative audio-visual fusion system designed for drone threat identification. AV-FDTI leverages the fusion of audio and omnidirectional camera feature inputs, providing a comprehensive solution to enhance the precision and resilience of drone classification and 3D localization. Specifically, AV-FDTI employs a CRNN network to capture vital temporal dynamics within the audio domain and utilizes a pretrained ResNet50 model for image feature extraction. Furthermore, we adopt a visual-information-entropy and cross-attention-based mechanism to enhance the fusion of visual and audio data. Notably, our system is trained on automated Leica tracking annotations, offering accurate ground truth data with millimeter-level accuracy. Comprehensive comparative evaluations demonstrate the superiority of our solution over existing systems. In our commitment to advancing this field, we will release this work as open-source code along with the wearable AV-FDTI design, contributing valuable resources to the research community.
Keywords: audio-visual fusion; anti-UAV; multi-modal fusion; classification; 3D localization; self-attention
11. How to Teach Spoken English in College
Author: 曾雪梅
《海外英语》 2011, No. 3X, pp. 34-34, 47 (2 pages)
Language is considered a tool of communication in the world, and spoken English is very important in English learning and teaching. As English teachers, we should speak English more and foster students' ability to speak. Through more practice, students can speak fluent English and express themselves freely.
Keywords: spoken English; communication; oral tasks; audio-visual aids
12. The Advantages of Movies as Listening Material in College English Teaching
Author: 唐丽丽
《海外英语》 2013, No. 9X, pp. 107-108 (2 pages)
In a multimedia environment, many teachers try to use new means and methods to teach listening; among these, English movies, with their great advantages, have become increasingly popular listening material that is easily accepted by students in college English classes.
Keywords: English movies; college English teaching; audio-visual
13. How Do Patients Prefer to Receive Patient Education Material about Treatment, Diagnosis and Procedures? A Survey Study of Patients' Preferences Regarding Forms of Patient Education Materials: Leaflets, Podcasts, and Video
Author: Anna Krontoft
Open Journal of Nursing, 2021, No. 10, pp. 809-827 (19 pages)
Aim: The aim of this study was to explore patients' preferences for forms of patient education material, including leaflets, podcasts, and videos; that is, to determine what forms of information, besides that provided verbally by healthcare personnel, patients prefer following visits to hospital. Methods: The study was a mixed-methods study, using a survey design with primarily quantitative items but with a qualitative component. A survey was distributed to patients over 18 years between May and July 2020, and 480 patients chose to respond. Results: Text-based patient education material (leaflets) is the form that patients have the most experience with and was preferred by 86.46% of respondents; however, 50.21% and 31.67% of respondents would also like to receive patient education material in video and podcast formats, respectively. Furthermore, several respondents wrote about the need for different forms of patient education material, depending on the subject of the supplementary information. Conclusion: This study provides an overview of patient preferences regarding forms of patient education material. The results show that the majority of respondents prefer to use combinations of written, audio, and video material, thus applying and co-constructing a multimodal communication system from which they select and apply different modes of communication from different sources simultaneously.
Keywords: audio information; audio-visual information; text-based information; health literacy; patient education material; nursing
14. AB005. The effect of audio quality on eye movements in a video chat
Authors: Sophie Hallot, Aaron Johnson
Annals of Eye Science, 2019, No. 1, p. 180 (1 page)
Background: Difficulty in hearing can occur for numerous reasons across a variety of ages in humans. To overcome this, humans can employ a number of techniques to help improve their understanding of sound in other ways. One is to use vision and attempt to lip-read in order to understand someone else in a face-to-face conversation. Audio-visual integration has a long history in perception (e.g., the McGurk effect), and researchers have shown that older adults will look at the mouth region for additional information in noisy situations. However, this concept has not been explored in the context of social media. A common way to communicate virtually that simulates a live conversation is video chatting or conferencing. It is used for a variety of reasons, including work and maintaining social interactions, and has started to be used in clinical settings. However, video chat session quality is often sub-optimal and may contain degraded audio and/or decoupled audio and video. The goal of this study is to determine whether humans use the same visual compensation mechanism, lip reading, in a digital setting as they would in a face-to-face conversation. Methods: The participants (n=116, age 18 to 41) answered a demographics questionnaire including questions about their use of video chatting software. Then the participants viewed two videos of a video call: one with synchronized audio and video, and the other dyssynchronous (1 second delay). The order of the videos was randomized across participants. Binocular eye movements were monitored at 60 Hz using a Mirametrix S2 eye tracker connected to Ogama 5.0 (http://www.ogama.net/). After each video, the participants answered questions about the call quality and the content of the video. Results: There was no significant difference in the total dwell time at the eyes and the mouth of the speaker, t(116) = −1.574, P = 0.059, d = −0.147, BF10 = 0.643. However, using the heat maps generated by Ogama, we observed that when viewing the poor-quality video, the participants looked more towards the mouth than the eyes of the speaker. As call quality decreased, the number of fixations increased from n = 79.87 in the synchronous condition to n = 113.4 in the asynchronous condition, and the median duration of each fixation decreased from 218.3 ms in the synchronous condition to 205 ms in the asynchronous condition. Conclusions: The above results may indicate that humans employ similar compensation mechanisms in response to a decrease in auditory comprehension, given the tendency of participants to look towards the mouth of the speaker more. However, more study is needed because of the inconsistency in the results.
Keywords: video chat; audio-visual integration; social media; visual compensation
15. The New Development for Teaching Foreign Language: Combination of Traditional Teaching with Modern Multi-Media Methods
Author: 李丰芮
《海外英语》 2013, No. 4X, pp. 96-97 (2 pages)
With the development of society and the economy, more and more capable persons are badly needed in the world. Under the influence of the traditional English teaching mode, most English learners can only read and write; they are usually called "deaf-mutes". Therefore, the traditional English teaching mode no longer satisfies teachers and has been greatly challenged. Applying multimedia to English teaching can create a more authentic language environment for learners, enabling them to communicate in English in real-life situations. At present, the multimedia approach is the most popular language teaching method in the world, and the most effective way to develop teaching is to combine multimedia with traditional methods. This is of special significance to English teaching and achieves the best teaching effect.
Keywords: traditional teaching; modern multi-media; audio-visual
16. What Adults Can Learn from Kids: A Literature Review
Author: 庄宏维
《海外英语》 2018, No. 4, pp. 245-246 (2 pages)
Many adults, especially business people, need to learn English for their work. Yet a lot of them have problems with different language skills. For example, across the U.S.A., business English teachers encounter Chinese-speaking students who have problems writing proper English business messages (Beamer, 1994). Although a lot of educators have been trying creative approaches to teaching children, adult classrooms are relatively more traditional. This paper aims at reviewing some prospective problems and sharing with practitioners some approaches to language instruction.
Keywords: activities approach; adult learners; read aloud; animation; audio-visual
17. The Subtitle Translation of Movie from the Perspective of Multimodal Discourse Analysis
Authors: 赵劲洁, 沈莹
《海外英语》 2020, No. 11, pp. 264-266 (3 pages)
With the development of science and technology, especially digital technology, mankind has entered the age of multimedia, and the modes of human life and communication have undergone profound changes. As a single communicative mode, language has gradually been replaced by a complex communicative mode composed of language, image, and sound. Multimodal discourse analysis provides a new perspective for the analysis of discourse composed of a variety of symbols, helping readers understand how symbols such as images and music work together to form meanings. Film analysis is often conducted from the perspective of psychology, aesthetics, and other macro aspects, but seldom from the perspective of linguistics. This paper analyzes how the theory of multimodal discourse analysis affects film translation by discussing the interaction between film translation and multimodal modes in the film Pride and Prejudice.
Keywords: multimodal discourse analysis; subtitle translation; audio-visual product
18. Deep Audio-visual Learning: A Survey (Cited: 3)
Authors: Hao Zhu, Man-Di Luo, Rui Wang, Ai-Hua Zheng, Ran He
International Journal of Automation and Computing, EI CSCD, 2021, No. 3, pp. 351-376 (26 pages)
Audio-visual learning, aimed at exploiting the relationship between audio and visual modalities, has drawn considerable attention since deep learning started to be used successfully. Researchers tend to leverage these two modalities to improve the performance of previously considered single-modality tasks or to address new challenging problems. In this paper, we provide a comprehensive survey of recent audio-visual learning developments. We divide current audio-visual learning tasks into four subfields: audio-visual separation and localization, audio-visual correspondence learning, audio-visual generation, and audio-visual representation learning. State-of-the-art methods, as well as the remaining challenges of each subfield, are further discussed. Finally, we summarize the commonly used datasets and challenges.
Keywords: deep audio-visual learning; audio-visual separation and localization; correspondence learning; generative models; representation learning
19. Neural correlates of audio-visual modal interference inhibition investigated in children by ERP (Cited: 2)
Authors: WANG YiWen, LIN ChongDe, LIANG Jing, WANG Yu, ZHANG WenXin
Science China (Life Sciences), SCIE CAS, 2011, No. 2, pp. 194-200 (7 pages)
In order to detect cross-sectional age characteristics of the cognitive neural mechanisms of audio-visual modal interference inhibition, event-related potentials (ERPs) of fourteen 10-year-old children were recorded while they performed a word-interference task. In incongruent conditions, the participants were required to inhibit audio interference words of the same category. The present findings provide preliminary evidence of the brain mechanism underlying children's inhibition development in this specific childhood stage.
Keywords: audio-visual modal interference inhibition; event-related potentials (ERP)
20. Stream Weight Training Based on MCE for Audio-Visual LVCSR (Cited: 1)
Authors: 刘鹏, 王作英
Tsinghua Science and Technology, SCIE EI CAS, 2005, No. 2, pp. 141-144 (4 pages)
In this paper we address the problem of audio-visual speech recognition in the framework of the multi-stream hidden Markov model. Stream weight training based on the minimum classification error criterion is discussed for use in large vocabulary continuous speech recognition (LVCSR). We present lattice re-scoring and Viterbi approaches for calculating the loss function of continuous speech. The experimental results show that in the case of clean audio, system performance can be improved by 36.1% in relative word error rate reduction when using state-based stream weights trained by a Viterbi approach, compared to an audio-only speech recognition system. Further experimental results demonstrate that our audio-visual LVCSR system provides a significant enhancement of robustness in noisy environments.
Keywords: audio-visual speech recognition (AVSR); large vocabulary continuous speech recognition (LVCSR); discriminative training; minimum classification error (MCE)