Journal Articles
744 articles found
1. Cross-Modal Consistency with Aesthetic Similarity for Multimodal False Information Detection
Authors: Weijian Fan, Ziwei Shi. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2723-2741 (19 pages).
With the explosive growth of false information on social media platforms, the automatic detection of multimodal false information has received increasing attention. Recent research has contributed significantly to multimodal information exchange and fusion, with many methods attempting to integrate unimodal features to generate multimodal news representations. However, they have yet to fully explore the hierarchical and complex semantic correlations between contents of different modalities, which severely limits their performance in detecting multimodal false information. This work proposes a two-stage detection framework for multimodal false information detection, called ASMFD, which uses image aesthetic similarity as a segmentation criterion to explore the consistency and inconsistency features of images and texts. Specifically, we first use the Contrastive Language-Image Pre-training (CLIP) model to learn the relationship between text and images through label awareness, and train an image aesthetic attribute scorer using an aesthetic attribute dataset. Then, we calculate the aesthetic similarity between the image and related images and use this similarity as a threshold to divide the multimodal correlation matrix into consistency and inconsistency matrices. Finally, a fusion module is designed to identify the essential features for detecting multimodal false information. In extensive experiments on four datasets, the performance of ASMFD is superior to state-of-the-art baseline methods.
Keywords: social media; false information detection; image aesthetic assessment; cross-modal consistency
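For readers wanting a concrete picture of the thresholding step described in the ASMFD abstract above, the following minimal sketch splits a cross-modal correlation matrix into consistency and inconsistency matrices using an aesthetic-similarity score as the threshold. The function name, matrix sizes, and the element-wise thresholding rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def split_correlation(corr, aesthetic_similarity):
    """Split a cross-modal correlation matrix into consistency and
    inconsistency parts, using an aesthetic-similarity score as the
    threshold (a simplified reading of the ASMFD description)."""
    consistency = np.where(corr >= aesthetic_similarity, corr, 0.0)
    inconsistency = np.where(corr < aesthetic_similarity, corr, 0.0)
    return consistency, inconsistency

# Toy example: a 4x4 text-image correlation matrix and a similarity of 0.6.
corr = np.random.rand(4, 4)
cons, incons = split_correlation(corr, aesthetic_similarity=0.6)
print(cons.shape, incons.shape)
```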
2. Multimodal Sentiment Analysis Based on a Cross-Modal Multihead Attention Mechanism
Authors: Lujuan Deng, Boyi Liu, Zuhe Li. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1157-1170 (14 pages).
Multimodal sentiment analysis aims to understand people's emotions and opinions from diverse data. Concatenating or multiplying the various modalities is the traditional multimodal fusion method, but it does not exploit the correlation information between modalities. To solve this problem, this paper proposes a model based on a multi-head attention mechanism. First, the original data are preprocessed, the feature representation is converted into a sequence of word vectors, and positional encoding is introduced to better capture the semantic and sequential information in the input sequence. Next, the encoded input sequence is fed into the Transformer model for further processing and learning. At the Transformer layer, a cross-modal attention module consisting of a pair of multi-head attention blocks is employed to reflect the correlation between modalities. Finally, the processed results are fed into a feedforward neural network to obtain the emotional output through the classification layer. Through this processing flow, the model can capture semantic information and contextual relationships and achieve good results on various natural language processing tasks. Our model was tested on the CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) and Multimodal EmotionLines Dataset (MELD) benchmarks, achieving an accuracy of 82.04% and an F1 score of 80.59% on the former dataset.
Keywords: emotion analysis; deep learning; cross-modal attention mechanism
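The cross-modal attention described in the entry above can be sketched with standard PyTorch modules. The snippet below pairs two multi-head attention blocks so that each modality attends to the other; the modality names, dimensions, and layer configuration are assumptions for illustration rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """A pair of multi-head attention blocks in which each modality
    attends to the other, as commonly used for cross-modal fusion.
    Dimensions and layer choices here are illustrative assumptions."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, audio):
        # Each modality uses the other as keys/values.
        t, _ = self.text_to_audio(text, audio, audio)
        a, _ = self.audio_to_text(audio, text, text)
        return t, a

text = torch.randn(2, 20, 256)   # (batch, seq_len, dim)
audio = torch.randn(2, 50, 256)
t_out, a_out = CrossModalAttention()(text, audio)
print(t_out.shape, a_out.shape)
```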
3. A Multi-Level Circulant Cross-Modal Transformer for Multimodal Speech Emotion Recognition (Citations: 1)
Authors: Peizhu Gong, Jin Liu, Zhongdai Wu, Bing Han, YKenWang, Huihua He. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 4203-4220 (18 pages).
Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, since the signal carries the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to represent features effectively and capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, giving a more powerful representation of the original data than spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes: a bidirectional Long Short-Term Memory (Bi-LSTM) network with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of the proposed model, comprehensive experiments are conducted on three widely used benchmark datasets, including IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
Keywords: speech emotion recognition; self-supervised embedding model; cross-modal Transformer; self-attention
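The "self-supervised embedding models" mentioned in the MLCCT abstract can be illustrated with a pretrained wav2vec 2.0 model from torchaudio, which yields frame-level speech embeddings in place of spectrograms or MFCCs. The specific checkpoint used here is an assumption; the paper does not necessarily use this model.

```python
import torch
import torchaudio

# Load a pretrained self-supervised speech model (wav2vec 2.0 base); this
# stands in for the "self-supervised embedding models" the abstract
# mentions, not necessarily the exact model the authors used.
bundle = torchaudio.pipelines.WAV2VEC2_BASE
model = bundle.get_model().eval()

waveform = torch.randn(1, bundle.sample_rate)  # one second of dummy audio
with torch.no_grad():
    features, _ = model.extract_features(waveform)
print(len(features), features[-1].shape)  # per-layer frame-level embeddings
```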
4. TECMH: Transformer-Based Cross-Modal Hashing for Fine-Grained Image-Text Retrieval
Authors: Qiqi Li, Longfei Ma, Zheng Jiang, Mingyong Li, Bo Jin. Computers, Materials & Continua (SCIE, EI), 2023, No. 5, pp. 3713-3728 (16 pages).
In recent years, cross-modal hash retrieval has become a popular research field because of its high efficiency and low storage cost. Cross-modal retrieval technology can be applied to search engines, cross-modal medical processing, and other areas. The existing mainstream approach uses a multi-label matching paradigm to perform retrieval. However, such methods do not use the fine-grained information in the multimodal data, which may lead to suboptimal results. To prevent cross-modal matching from degenerating into label matching, this paper proposes an end-to-end fine-grained cross-modal hash retrieval method that focuses on the fine-grained semantic information of multimodal data. First, the method refines the image features and no longer uses multiple labels to represent text features, instead processing text with BERT. Second, the method uses the inference capabilities of the Transformer encoder to generate global fine-grained features. Finally, in order to better judge the effect of the fine-grained model, this paper uses datasets from the image-text matching field instead of the traditional label-matching datasets. We experiment on the Microsoft COCO (MS-COCO) and Flickr30K datasets and compare the method with previous classical methods. The experimental results show that this method obtains more advanced results in the cross-modal hash retrieval field.
Keywords: deep learning; cross-modal retrieval; hash learning; Transformer
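As a minimal illustration of the hash-retrieval step common to methods like TECMH, the sketch below binarizes real-valued embeddings with a sign function and ranks database items by Hamming distance. The code length and the random embeddings are placeholders; the paper learns the embeddings end to end with Transformer and BERT encoders.

```python
import numpy as np

def to_hash_codes(embeddings):
    """Binarize real-valued embeddings into +/-1 hash codes via sign()."""
    return np.where(embeddings >= 0, 1, -1)

def hamming_rank(query_code, database_codes):
    """Rank database items by Hamming distance to the query code."""
    dist = (query_code.shape[0] - database_codes @ query_code) // 2
    return np.argsort(dist)

# Toy example with 64-bit codes; real systems learn the embeddings end to end.
img_emb = np.random.randn(100, 64)      # image embeddings (database)
txt_emb = np.random.randn(64)           # one text query embedding
ranking = hamming_rank(to_hash_codes(txt_emb), to_hash_codes(img_emb))
print(ranking[:5])
```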
5. ViT2CMH: Vision Transformer Cross-Modal Hashing for Fine-Grained Vision-Text Retrieval
Authors: Mingyong Li, Qiqi Li, Zheng Jiang, Yan Ma. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8, pp. 1401-1414 (14 pages).
In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images and texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the Transformer model, we propose a framework called ViT2CMH, built mainly on the Vision Transformer rather than CNNs or RNNs, to handle deep cross-modal hashing tasks. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baselines from both hashing and image-text matching methods, and show that our method achieves better performance.
Keywords: hash learning; cross-modal retrieval; fine-grained matching; Transformer
6. Adequate alignment and interaction for cross-modal retrieval
Authors: Mingkang WANG, Min MENG, Jigang LIU, Jigang WU. Virtual Reality & Intelligent Hardware (EI), 2023, No. 6, pp. 509-522 (14 pages).
Background: Cross-modal retrieval has attracted widespread attention in many cross-media similarity search applications, particularly image-text retrieval in the fields of computer vision and natural language processing. Recently, visual and semantic embedding (VSE) learning has shown promising improvements in image-text retrieval tasks. Most existing VSE models employ two unrelated encoders to extract features and then use complex methods to contextualize and aggregate these features into holistic embeddings. Despite recent advances, existing approaches still suffer from two limitations: (1) without considering intermediate interactions and adequate alignment between different modalities, these models cannot guarantee the discriminative ability of the representations; and (2) existing feature aggregators are susceptible to certain noisy regions, which may lead to unreasonable pooling coefficients and affect the quality of the final aggregated features. Methods: To address these challenges, we propose a novel cross-modal retrieval model containing a well-designed alignment module and a novel multimodal fusion encoder that aims to learn the adequate alignment and interaction of aggregated features to effectively bridge the modality gap. Results: Experiments on the Microsoft COCO and Flickr30K datasets demonstrated the superiority of our model over state-of-the-art methods.
Keywords: cross-modal retrieval; visual semantic embedding; feature aggregation; Transformer
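A common way to implement the kind of image-text alignment discussed in the entry above is a hinge-based ranking loss with hardest negatives over a cosine-similarity matrix (the VSE++ formulation). The sketch below shows that loss; it is a generic baseline objective, not necessarily the alignment module proposed in the paper.

```python
import torch

def contrastive_loss(img_emb, txt_emb, margin=0.2):
    """Hinge-based ranking loss with hardest negatives over a cosine
    similarity matrix, in the spirit of VSE-style alignment objectives."""
    img = torch.nn.functional.normalize(img_emb, dim=1)
    txt = torch.nn.functional.normalize(txt_emb, dim=1)
    sim = img @ txt.t()                         # (B, B) similarity matrix
    pos = sim.diag().view(-1, 1)
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    cost_txt = (margin + sim - pos).clamp(min=0).masked_fill(mask, 0)
    cost_img = (margin + sim - pos.t()).clamp(min=0).masked_fill(mask, 0)
    # Keep only the hardest negative in each row/column.
    return cost_txt.max(dim=1)[0].mean() + cost_img.max(dim=0)[0].mean()

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```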
7. Review of Visible-Infrared Cross-Modality Person Re-Identification
Author: Yinyin Zhang. Journal of New Media, 2023, No. 1, pp. 23-31 (9 pages).
Person re-identification (ReID) is a sub-problem of image retrieval. It is a technology that uses computer vision to identify a specific pedestrian in a collection of pictures or videos, where the pedestrian images are captured across different surveillance devices. At present, most ReID methods deal with matching between visible and visible images, but with the continuous improvement of security monitoring systems, more and more infrared cameras are used for monitoring at night or in dim light. Because of the imaging differences between infrared and RGB cameras, there is a huge visual gap between cross-modality images, so traditional ReID methods are difficult to apply in this scene. In view of this situation, studying pedestrian matching between the visible and infrared modalities is particularly crucial. Visible-infrared person re-identification (VI-ReID) was first proposed in 2017; it has since attracted more and more attention, and many advanced methods have emerged.
Keywords: person re-identification; cross-modality
8. Cross-Modal Hashing Retrieval Based on Deep Residual Network
Authors: Zhiyi Li, Xiaomian Xu, Du Zhang, Peng Zhang. Computer Systems Science & Engineering (SCIE, EI), 2021, No. 2, pp. 383-405 (23 pages).
In the era of big data, rich in We-Media content, single-modality retrieval systems can no longer meet people's demand for information retrieval. This paper proposes a new solution to the problem of feature extraction and unified mapping of different modalities: a Cross-Modal Hashing retrieval algorithm based on a Deep Residual Network (CMHR-DRN). The model construction is divided into two stages. The first stage is feature extraction from the different modal data: a Deep Residual Network (DRN) extracts the image features, while TF-IDF combined with a fully connected network extracts the text features; the obtained image and text features are used as the input of the second stage. In the second stage, the image and text features are mapped by supervised learning through hash functions into a common binary Hamming space. During mapping, the distance measure in the original feature space and that in the common feature space are kept as consistent as possible to improve the accuracy of cross-modal retrieval. In training the model, adaptive moment estimation (Adam) is used to compute an adaptive learning rate for each parameter, and stochastic gradient descent (SGD) is used to minimize the loss function. The whole training process is completed on the Caffe deep learning framework. Experiments show that the proposed CMHR-DRN algorithm achieves better retrieval performance and stronger advantages than other cross-modal algorithms such as CMFH, CMDN, and CMSSH.
Keywords: deep residual network; cross-modal retrieval; hashing; cross-modal hashing retrieval based on deep residual network
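The two feature-extraction branches described in the CMHR-DRN abstract (a deep residual network for images, TF-IDF plus a fully connected layer for text) can be approximated with off-the-shelf tools. The sketch below uses torchvision and scikit-learn as stand-ins; the paper itself works in Caffe, and the final projection into the common Hamming space is omitted here.

```python
import torch
from torchvision import models, transforms
from sklearn.feature_extraction.text import TfidfVectorizer
from PIL import Image

# Image branch: a ResNet with its classification head removed, so the
# pooled 2048-d activation serves as the image feature.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Text branch: TF-IDF vectors, which a fully connected layer would then
# project into the common space (the projection is omitted here).
texts = ["a dog playing on the beach", "city skyline at night"]
tfidf = TfidfVectorizer().fit_transform(texts).toarray()

img = preprocess(Image.new("RGB", (640, 480))).unsqueeze(0)  # dummy image
with torch.no_grad():
    img_feat = resnet(img)
print(img_feat.shape, tfidf.shape)
```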
9. Mechanism of Cross-modal Information Influencing Taste (Citations: 1)
Authors: Pei LIANG, Jia-yu JIANG, Qiang LIU, Su-lin ZHANG, Hua-jing YANG. Current Medical Science (SCIE, CAS), 2020, No. 3, pp. 474-479 (6 pages).
Studies on the integration of cross-modal information with taste perception have mostly been limited to the uni-modal level. Cross-modal sensory interactions and the neural networks underlying information processing and its control have not been fully explored, and the mechanisms remain poorly understood. This mini-review examines the impact of uni-modal and multi-modal information on taste perception from the perspective of cognitive status, such as emotion, expectation, and attention, and discusses the hypothesis that cognitive status is the key step by which the visual sense exerts its influence on taste. This work may help researchers better understand the mechanism of cross-modal information processing and further develop neurally based artificial intelligence (AI) systems.
Keywords: cross-modal information integration; cognitive status; taste perception
10. CSMCCVA: Framework of cross-modal semantic mapping based on cognitive computing of visual and auditory sensations (Citations: 1)
Authors: Liu Yang, Zheng Fengbin, Zuo Xianyu. High Technology Letters (EI, CAS), 2016, No. 1, pp. 90-98 (9 pages).
Cross-modal semantic mapping and cross-media retrieval are key problems for multimedia search engines. This study analyzes the hierarchy, functionality, and structure of the visual and auditory sensations in the cognitive system, and establishes a brain-like cross-modal semantic mapping framework based on cognitive computing of visual and auditory sensations. The mechanisms of visual-auditory multisensory integration, selective attention in the thalamo-cortical pathway, emotional control in the limbic system, and memory enhancement in the hippocampus are considered in the framework. The algorithms for cross-modal semantic mapping are then given. Experimental results show that the framework can be effectively applied to cross-modal semantic mapping, and it also carries important significance for brain-like computing with non-von Neumann structures.
Keywords: multimedia neural cognitive computing (MNCC); brain-like computing; cross-modal semantic mapping (CSM); selective attention; limbic system; multisensory integration; memory-enhancing mechanism
11. Use of sensory substitution devices as a model system for investigating cross-modal neuroplasticity in humans (Citations: 1)
Authors: Amy C. Nau, Matthew C. Murphy, Kevin C. Chan. Neural Regeneration Research (SCIE, CAS, CSCD), 2015, No. 11, pp. 1717-1719 (3 pages).
Blindness provides an unparalleled opportunity to study plasticity of the nervous system in humans. Seminal work in this area examined the often dramatic modifications to the visual cortex that result when visual input is completely absent from birth or very early in life (Kupers and Ptito, 2014). More recent studies have explored what happens to the visual pathways in the context of acquired blindness. This is particularly relevant, as the majority of diseases that cause vision loss occur in the elderly.
Keywords: use of sensory substitution devices as a model system for investigating cross-modal neuroplasticity in humans; BOLD
12. Cross-modal learning using privileged information for long-tailed image classification
Authors: Xiangxian Li, Yuze Zheng, Haokai Ma, Zhuang Qi, Xiangxu Meng, Lei Meng. Computational Visual Media (SCIE, EI, CSCD), 2024, No. 5, pp. 981-992 (12 pages).
The prevalence of long-tailed distributions in real-world data often results in classification models favoring the dominant classes and neglecting the less frequent ones. Current approaches address the issues in long-tailed image classification by rebalancing data, optimizing weights, and augmenting information. However, these methods often struggle to balance the performance between dominant and minority classes because of inadequate representation learning of the latter. To address these problems, we introduce descriptional words into images as cross-modal privileged information and propose a cross-modal enhanced method for long-tailed image classification, referred to as CMLTNet. CMLTNet improves the learning of intra-class similarity of tail-class representations by cross-modal alignment and captures the difference between the head and tail classes in semantic space by cross-modal inference. After fusing the above information, CMLTNet achieved overall performance better than that of benchmark long-tailed and cross-modal learning methods on the long-tailed cross-modal datasets NUS-WIDE and VireoFood-172. The effectiveness of the proposed modules was further studied through ablation experiments. In a case study of feature distribution, the proposed model was better at learning representations of tail classes, and in the experiments on model attention, CMLTNet showed the potential to help learn some rare concepts in the tail class through mapping to the semantic space.
Keywords: long-tailed classification; cross-modal learning; representation learning; privileged information
13. Robust cross-modal retrieval with alignment refurbishment
Authors: Jinyi GUO, Jieyu DING. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, No. 10, pp. 1403-1415 (13 pages).
Cross-modal retrieval tries to achieve mutual retrieval between modalities by establishing consistent alignment for different modal data. Currently, many cross-modal retrieval methods have been proposed and have achieved excellent results; however, these are trained with clean cross-modal pairs, which are semantically matched but costly, compared with easily available data with noisy alignment (i.e., paired but mismatched in semantics). When these methods are trained with noise-aligned data, their performance degrades dramatically. Therefore, we propose robust cross-modal retrieval with alignment refurbishment (RCAR), which significantly reduces the impact of noise on the model. Specifically, RCAR first conducts multi-task learning to slow down overfitting to the noise and make the data separable. Then, RCAR uses a two-component beta-mixture model to divide the pairs into clean and noisy alignments and refurbishes the labels according to the posterior probability of the noise-alignment component. In addition, we define partial and complete noise in the noise-alignment paradigm. Experimental results show that, compared with popular cross-modal retrieval methods, RCAR achieves more robust performance under both types of noise.
Keywords: cross-modal retrieval; robust learning; alignment correction; beta-mixture model
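The alignment-refurbishment step in RCAR relies on the posterior probability of the noise component under a two-component beta-mixture model fitted to per-pair losses. The sketch below computes that posterior with scipy, assuming the mixture parameters have already been fitted; the parameter values shown are illustrative only.

```python
import numpy as np
from scipy.stats import beta

def noise_posterior(losses, clean_params, noise_params, pi_noise=0.5):
    """Posterior probability that a pair is noise-aligned, given per-pair
    losses normalized to (0, 1) and the two fitted Beta components.
    Parameters here are illustrative; RCAR fits them (e.g., via EM)."""
    p_clean = beta.pdf(losses, *clean_params) * (1 - pi_noise)
    p_noise = beta.pdf(losses, *noise_params) * pi_noise
    return p_noise / (p_clean + p_noise)

losses = np.clip(np.random.rand(10), 1e-4, 1 - 1e-4)
post = noise_posterior(losses, clean_params=(2, 8), noise_params=(8, 2))
print(np.round(post, 3))  # high posterior -> treat the pair as mismatched
```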
14. Nondestructive perception of potato quality in actual online production based on cross-modal technology
Authors: Qiquan Wei, Yurui Zheng, Zhaoqing Chen, Yun Huang, Changqing Chen, Zhenbo Wei, Shuiqin Zhou, Hongwei Sun, Fengnong Chen. International Journal of Agricultural and Biological Engineering (SCIE), 2023, No. 6, pp. 280-290 (11 pages).
Nowadays, China stands as the global leader in terms of potato planting area and total potato production. The rapid and nondestructive detection of potato quality before processing is of great significance in promoting rural revitalization and augmenting farmers' income. However, existing potato quality sorting methods are primarily confined to theoretical research, and the market lacks an integrated intelligent detection system. Therefore, there is an urgent need for a post-harvest potato detection method adapted to actual production needs. This study proposes a potato quality sorting method based on cross-modal technology. First, an industrial camera obtains image information for external quality detection, and a model based on the YOLOv5s algorithm detects external defects such as green skin, germination, rot, and mechanical damage. VIS/NIR spectroscopy is then used to obtain spectral information for internal quality detection, where a convolutional neural network (CNN) algorithm detects internal blackheart disease defects. The mean average precision (mAP) of the external detection model is 0.892 at an intersection over union (IoU) of 0.5, and the accuracy of the internal detection model is 98.2%. The real-time dynamic defect detection rate of the final online detection system is 91.3%, with an average detection time of 350 ms per potato. In contrast to samples collected in an ideal laboratory setting, the dynamic detection results of this study are more applicable because they are based on a real-time online working environment. The work also provides a valuable reference for subsequent online quality testing of similar agricultural products.
Keywords: cross-modal technology; potato quality; YOLOv5s; VIS/NIR spectroscopy; online nondestructive detection
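The external-defect branch of the potato sorting system builds on YOLOv5s. The snippet below shows the generic way to load a YOLOv5s checkpoint through torch.hub and run inference; the image filename is hypothetical, and the public checkpoint does not contain the paper's potato-defect classes, which would require training on the authors' own data.

```python
import torch

# Load the general-purpose YOLOv5s checkpoint from the Ultralytics hub;
# the paper trains its own YOLOv5s on potato defect classes (green skin,
# germination, rot, mechanical damage), which are not in this checkpoint.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("potato_batch.jpg")   # hypothetical image from the sorting line
results.print()                       # class, confidence, and box per detection
boxes = results.xyxy[0]               # tensor: (x1, y1, x2, y2, conf, class)
print(boxes.shape)
```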
15. Cross-Modal Entity Resolution for Image and Text Integrating Global and Fine-Grained Joint Attention Mechanism
Authors: Zeng Zhixian, Cao Jianjun, Weng Nianfeng, Yuan Zhen, Yu Xu. Journal of Shanghai Jiaotong University (Science) (EI), 2023, No. 6, pp. 728-737 (10 pages).
To solve the problem that existing cross-modal entity resolution methods easily ignore the high-level semantic correlations between cross-modal data, we propose a novel cross-modal entity resolution method for image and text that integrates a global and fine-grained joint attention mechanism. First, we map the cross-modal data to a common embedding space using a feature extraction network. Then, we integrate a global joint attention mechanism and a fine-grained joint attention mechanism, giving the model the ability to learn both the global semantic characteristics and the local fine-grained semantic characteristics of the cross-modal data; this is used to fully exploit the cross-modal semantic correlation and boost the performance of cross-modal entity resolution. Moreover, experiments on the Flickr-30K and MS-COCO datasets show that the overall R@sum performance exceeds five state-of-the-art methods by 4.30% and 4.54%, respectively, which fully demonstrates the superiority of the proposed method.
Keywords: cross-modal entity resolution; joint attention mechanism; deep learning; feature extraction; semantic correlation
16. TACFN: Transformer-Based Adaptive Cross-Modal Fusion Network for Multimodal Emotion Recognition
Authors: Feng Liu, Ziwang Fu, Yunlong Wang, Qijian Zheng. CAAI Artificial Intelligence Research, 2023, No. 1, pp. 75-82 (8 pages).
The fusion technique is the key to the multimodal emotion recognition task. Recently, cross-modal attention-based fusion methods have demonstrated high performance and strong robustness. However, cross-modal attention suffers from redundant features and does not capture complementary features well. We find that it is not necessary to use the entire information of one modality to reinforce the other during cross-modal interaction; the features that can reinforce a modality may contain only a part of it. To this end, we design an innovative Transformer-based Adaptive Cross-modal Fusion Network (TACFN). Specifically, to handle the redundant features, we make one modality perform intra-modal feature selection through a self-attention mechanism, so that the selected features can adaptively and efficiently interact with the other modality. To better capture the complementary information between the modalities, we obtain a fused weight vector by splicing the features and use this weight vector to achieve feature reinforcement of the modalities. We apply TACFN to the RAVDESS and IEMOCAP datasets. For a fair comparison, we use the same unimodal representations to validate the effectiveness of the proposed fusion method. The experimental results show that TACFN brings a significant performance improvement compared to other methods and reaches state-of-the-art performance. All code and models can be accessed at https://github.com/shuzihuaiyu/TACFN.
Keywords: multimodal emotion recognition; multimodal fusion; adaptive cross-modal blocks; Transformer; computational perception
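One plausible reading of TACFN's adaptive cross-modal block is sketched below: each modality first filters itself with self-attention, and a weight vector computed from the spliced features then gates the fusion. The dimensions, pooling, and gating form are assumptions for illustration; the released code at the link above is the authoritative reference.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Illustrative adaptive fusion block: intra-modal feature selection by
    self-attention, then a weight vector from the spliced (concatenated)
    features gates the fused output. Not the authors' exact architecture."""
    def __init__(self, dim=128):
        super().__init__()
        self.select_a = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.select_v = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, audio, video):
        a, _ = self.select_a(audio, audio, audio)   # intra-modal selection
        v, _ = self.select_v(video, video, video)
        a, v = a.mean(dim=1), v.mean(dim=1)         # pool over time
        w = self.gate(torch.cat([a, v], dim=-1))    # fused weight vector
        return w * a + (1 - w) * v                  # weighted reinforcement

fused = AdaptiveFusion()(torch.randn(2, 30, 128), torch.randn(2, 16, 128))
print(fused.shape)
```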
17. Cross-Modal Complementary Network with Hierarchical Fusion for Multimodal Sentiment Classification (Citations: 4)
Authors: Cheng Peng, Chunxia Zhang, Xiaojun Xue, Jiameng Gao, Hongjian Liang, Zhengdong Niu. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2022, No. 4, pp. 664-679 (16 pages).
Multimodal Sentiment Classification (MSC) uses multimodal data, such as images and texts, to identify users' sentiment polarities from the information they post on the Internet. MSC has attracted considerable attention because of its wide applications in social computing and opinion mining. However, improper correlation strategies can cause erroneous fusion, as texts and images that are unrelated to each other may be integrated. Moreover, simply concatenating them modality by modality, even with true correlation, cannot fully capture the features within and between modalities. To solve these problems, this paper proposes a Cross-Modal Complementary Network (CMCN) with hierarchical fusion for MSC. The CMCN is designed as a hierarchical structure with three key modules: a feature extraction module to extract features from texts and images, a feature attention module to learn both text and image attention features generated by an image-text correlation generator, and a cross-modal hierarchical fusion module to fuse features within and between modalities. Such a CMCN provides a hierarchical fusion framework that can fully integrate different modal features and helps reduce the risk of integrating unrelated modal features. Extensive experimental results on three public datasets show that the proposed approach significantly outperforms state-of-the-art methods.
Keywords: multimodal sentiment analysis; multimodal fusion; Cross-Modal Complementary Network (CMCN); hierarchical fusion; joint optimization
18. Mismatch negativity of ERP in cross-modal attention (Citations: 1)
Authors: Luo Yuejia, Wei Jinghan. Science China (Life Sciences) (SCIE, CAS), 1997, No. 6, pp. 604-612 (9 pages).
Event-related potentials were measured in 12 healthy young subjects aged 19-22 using the "cross-modal and delayed response" paradigm, which improves unattended purity and avoids the effect of the task target on the deviant components of the ERP. The experiment included two conditions: (i) attend the visual modality, ignore the auditory modality; (ii) attend the auditory modality, ignore the visual modality. The stimuli under the two conditions were the same. The difference wave was obtained by subtracting the ERPs of the standard stimuli from those of the deviant stimuli. The results showed that mismatch negativity (MMN), N2b, and P3 components can be produced in the auditory and visual modalities under the attention condition; however, only MMN was observed in the two modalities under the inattention condition. Auditory and visual MMN have some features in common: their largest MMN peaks were distributed over their respective primary sensory projection areas of the scalp under the attention condition, but over the fronto-central scalp under the inattention condition. There was no significant difference between the amplitudes of visual and auditory MMN. Their amplitudes and scalp distributions were unaffected by attention, suggesting that MMN amplitude is an important index reflecting automatic processing in the brain. However, the latencies of the auditory and visual MMN were affected by attention, showing that MMN not only reflects automatic processing but probably also relates to controlled processing.
Keywords: event-related potentials (ERPs); mismatch negativity (MMN); selective attention; cross-modal and delayed response
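The difference-wave computation mentioned in the abstract above is simple arithmetic over averaged epochs. The sketch below illustrates it with randomly generated data and made-up shapes; real ERP pipelines would use dedicated tools such as MNE.

```python
import numpy as np

# Minimal sketch of the MMN difference wave: average the epochs for each
# stimulus type, then subtract the standard ERP from the deviant ERP.
# Shapes are invented (trials x channels x time samples).
deviant_epochs = np.random.randn(120, 32, 500)
standard_epochs = np.random.randn(560, 32, 500)

erp_deviant = deviant_epochs.mean(axis=0)
erp_standard = standard_epochs.mean(axis=0)
difference_wave = erp_deviant - erp_standard   # MMN appears as a negativity

print(difference_wave.shape)  # (channels, time samples)
```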
19. Event-related potentials study on cross-modal discrimination of Chinese characters (Citations: 1)
Authors: Luo Yuejia, Wei Jinghan. Science China (Life Sciences) (SCIE, CAS), 1999, No. 2, pp. 113-121 (9 pages).
Event-related potentials (ERPs) were measured in 15 normal young subjects (18-22 years old) using the cross-modal and delayed response paradigm, which improves inattention purity. The stimuli consisted of written and spoken single Chinese characters. The presentation probability of the standard stimuli was 82.5% and that of the deviant stimuli was 17.5%. The attention components were obtained by subtracting the ERPs of the inattention condition from those of the attention condition. The results of the N1 scalp distribution demonstrated a cross-modal difference. This result contrasts with studies using non-verbal as well as English verbal stimuli, and probably reflects a feature of the brain mechanisms of Chinese language processing. The processing location of attention varied with verbal/non-verbal stimuli, auditory/visual modalities, and standard/deviant stimuli, and thus shows plasticity. The early attention effects occurred before the exogenous components, providing evidence supporting the early selection theory of attention. Based on the relationship between N1 and Nd1, the present results support the viewpoint that the N1 enhancement is caused by endogenous components overlapping with the exogenous ones rather than by a pure exogenous component.
Keywords: event-related potential (ERP); early negative difference wave (Nd1); selective attention; Chinese character; cross-modal and delayed response paradigm
20. A Computational Model of Concept Generalization in Cross-Modal Reference (Citations: 1)
Authors: Patrick McCrae, Wolfgang Menzel, Maosong SUN. Tsinghua Science and Technology (SCIE, EI, CAS), 2011, No. 2, pp. 113-120 (8 pages).
Cross-modal interactions between visual understanding and linguistic processing contribute substantially to the remarkable robustness of human language processing. We argue that the formation of cross-modal referential links is a prerequisite for the occurrence of cross-modal interactions between vision and language. In this paper we examine a computational model of cross-modal reference formation with respect to its robustness against conceptual underspecification in the visual modality. This investigation is motivated by the fact that natural systems are well capable of establishing cross-modal reference between modalities with different degrees of conceptual specification. In the investigated model, conceptually underspecified context information continues to drive the syntactic disambiguation of verb-centered syntactic ambiguities as long as the visual context contains the situation arity information of the visual scene.
Keywords: vision-language interaction; cross-modal reference; syntactic disambiguation