Journal articles: 3 results found
1. Triple Multimodal Cyclic Fusion and Self-Adaptive Balancing for Video Q&A Systems
Authors: Xiliang Zhang, Jin Liu, Yue Li, Zhongdai Wu, Y.Ken Wang
Journal: Computers, Materials & Continua (SCIE, EI), 2022, Issue 12, pp. 6407-6424 (18 pages)
Abstract: The performance of Video Question and Answer (VQA) systems relies on capturing key information from both visual images and natural language in the context to generate relevant answers to questions. However, traditional linear combinations of multimodal features focus only on shallow feature interactions and fall far short of the need for deep feature fusion. Attention mechanisms have been used to perform deep fusion, but most of them can only handle weight assignment for single-modal information, leading to attention imbalance across modalities. To address these problems, we propose a novel VQA model based on Triple Multimodal feature Cyclic Fusion (TMCF) and a Self-Adaptive Multimodal Balancing Mechanism (SAMB). Our model is designed to enhance complex feature interactions among multimodal features with cross-modal information balancing. In addition, TMCF and SAMB can be used as an extensible plug-in for exploring new feature combinations in the visual image domain. Extensive experiments were conducted on the MSVD-QA and MSRVTT-QA datasets. The results confirm the advantages of our approach in handling multimodal tasks. We also provide ablation studies to verify the effectiveness of each proposed component.
Keywords: video question and answer systems; feature fusion; scaling matrix; attention mechanism
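The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the two ideas it names: cyclic pairwise fusion of three modality feature vectors and a learned, softmax-normalized balancing weight over the fused pairs. All class, variable and dimension choices are assumptions made for illustration, not the paper's actual TMCF/SAMB design.

```python
# Illustrative sketch only: cyclic fusion of three modality features with a
# learned balancing weight per fused pair. Names and shapes are hypothetical.
import torch
import torch.nn as nn


class TripleCyclicFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # One projection per cyclic pair: (a, b), (b, c), (c, a).
        self.pair_proj = nn.ModuleList([nn.Linear(2 * dim, dim) for _ in range(3)])
        # Self-adaptive balancing: one learned scalar per pair, softmax-normalized
        # so the pairs compete for attention instead of one modality dominating.
        self.balance = nn.Parameter(torch.zeros(3))
        self.out = nn.Linear(dim, dim)

    def forward(self, a, b, c):  # each tensor: (batch, dim)
        pairs = [(a, b), (b, c), (c, a)]  # cyclic order
        fused = [proj(torch.cat(p, dim=-1)) for proj, p in zip(self.pair_proj, pairs)]
        w = torch.softmax(self.balance, dim=0)  # cross-modal balancing weights
        mixed = sum(wi * fi for wi, fi in zip(w, fused))
        return self.out(torch.relu(mixed))


# Example: appearance, motion and question features, each 512-dimensional.
if __name__ == "__main__":
    fuse = TripleCyclicFusion(dim=512)
    a, b, c = (torch.randn(4, 512) for _ in range(3))
    print(fuse(a, b, c).shape)  # torch.Size([4, 512])
```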
2. A Multi-Level Circulant Cross-Modal Transformer for Multimodal Speech Emotion Recognition (cited 1 time)
Authors: Peizhu Gong, Jin Liu, Zhongdai Wu, Bing Han, Y.Ken Wang, Huihua He
Journal: Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 4203-4220 (18 pages)
Abstract: Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, because it involves the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to represent features effectively and to capture cross-modal correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction and fusion. Self-supervised embedding models are introduced for feature extraction, giving a more powerful representation of the original data than spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes: a bidirectional Long Short-Term Memory (Bi-LSTM) network with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of the proposed model, comprehensive experiments are conducted on three widely used benchmark datasets: IEMOCAP, MELD and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
Keywords: speech emotion recognition; self-supervised embedding model; cross-modal transformer; self-attention
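As a rough illustration of the "two-stream residual cross-modal Transformer block" the abstract mentions, the sketch below lets audio features attend over text features and vice versa, each stream keeping a residual path. The class name, dimensions and layer choices are assumptions, not the paper's MLCCT implementation; stacking several such blocks at different feature levels would loosely mirror the multi-level structure described.

```python
# Hedged sketch: two-stream residual cross-modal attention between audio and
# text sequences. Hyperparameters and naming are illustrative assumptions.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.audio_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, audio, text):  # each: (batch, seq_len, dim)
        # Each stream queries the other modality; residual connections keep
        # the original unimodal information.
        a_ctx, _ = self.audio_to_text(audio, text, text)
        t_ctx, _ = self.text_to_audio(text, audio, audio)
        return self.norm_a(audio + a_ctx), self.norm_t(text + t_ctx)


# Example: 50 audio frames and 20 token embeddings, both projected to 256-d.
audio = torch.randn(2, 50, 256)
text = torch.randn(2, 20, 256)
audio_out, text_out = CrossModalBlock()(audio, text)
```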
3. Cyclic Autoencoder for Multimodal Data Alignment Using Custom Datasets
Authors: Zhenyu Tang, Jin Liu, Chao Yu, Y.Ken Wang
Journal: Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 10, pp. 37-54 (18 pages)
Abstract: The subtitle recognition under multimodal data fusion in this paper aims to recognize text lines from image and audio data. Most existing multimodal fusion methods rely on pre-fusion or post-fusion, which is neither well motivated nor easy to interpret. We believe that fusing images and audio before the decision layer, i.e., intermediate fusion, to take advantage of the complementary multimodal data will benefit text line recognition. To this end, we propose: (i) a novel cyclic autoencoder based on a convolutional neural network, in which the feature dimensions of the two modalities are aligned while keeping the compressed image features stable, so that the high-dimensional features of different modalities are fused at a shallow level of the model; (ii) a residual attention mechanism that improves recognition performance by enhancing regions of interest in the image and suppressing regions of disinterest, so that features of text regions can be extracted without further increasing the depth of the model; and (iii) a fully convolutional network for video subtitle recognition, with DenseNet-121 as the backbone for feature extraction, which effectively enables the recognition of video subtitles against complex backgrounds. Experiments are performed on our custom datasets, and both automatic and manual evaluation results show that our method reaches the state of the art.
Keywords: deep learning; convolutional neural network; multimodal; text recognition
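The residual attention idea in item (ii) of the abstract can be pictured as a learned spatial mask applied on top of backbone feature maps. The sketch below is only an assumption-laden illustration (module name, mask network and channel counts are invented here); it is not the paper's exact architecture.

```python
# Hedged sketch of a residual spatial attention module: regions of interest
# are boosted, the rest attenuated, and a residual path preserves the input.
import torch
import torch.nn as nn


class ResidualSpatialAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce a per-pixel attention mask in [0, 1].
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (batch, channels, h, w), e.g. backbone feature maps
        attn = self.mask(x)        # (batch, 1, h, w) spatial weights
        return x * (1.0 + attn)    # residual form: x + x * attn


# Example with a DenseNet-121-sized final feature map (1024 channels, 7x7).
feats = torch.randn(2, 1024, 7, 7)
out = ResidualSpatialAttention(1024)(feats)
print(out.shape)  # torch.Size([2, 1024, 7, 7])
```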