Background: Cross-modal retrieval has attracted widespread attention in many cross-media similarity search applications, particularly image-text retrieval in the fields of computer vision and natural language processing. Recently, visual and semantic embedding (VSE) learning has shown promising improvements in image-text retrieval tasks. Most existing VSE models employ two unrelated encoders to extract features and then use complex methods to contextualize and aggregate these features into holistic embeddings. Despite recent advances, existing approaches still suffer from two limitations: (1) without considering intermediate interactions and adequate alignment between different modalities, these models cannot guarantee the discriminative ability of the representations; and (2) existing feature aggregators are susceptible to certain noisy regions, which may lead to unreasonable pooling coefficients and affect the quality of the final aggregated features. Methods: To address these challenges, we propose a novel cross-modal retrieval model containing a well-designed alignment module and a novel multimodal fusion encoder that aims to learn adequate alignment and interaction of the aggregated features to effectively bridge the modality gap. Results: Experiments on the Microsoft COCO and Flickr30k datasets demonstrated the superiority of our model over state-of-the-art methods.
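The VSE-style objective behind this kind of image-text retrieval can be illustrated with a minimal sketch: two independent encoders map region features and captions into a shared embedding space, and a hinge-based triplet ranking loss with in-batch hardest negatives pulls matching pairs together. The encoders, dimensions, average-pooling aggregator, and loss below are generic illustrative choices, not the alignment module or multimodal fusion encoder proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Projects pre-extracted region features into the joint embedding space."""
    def __init__(self, feat_dim=2048, embed_dim=512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, embed_dim)

    def forward(self, regions):             # regions: (batch, n_regions, feat_dim)
        emb = self.fc(regions).mean(dim=1)  # simple average pooling over regions
        return F.normalize(emb, dim=-1)

class TextEncoder(nn.Module):
    """Encodes token ids with a GRU and takes the final hidden state."""
    def __init__(self, vocab_size=10000, embed_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, embed_dim, batch_first=True)

    def forward(self, tokens):              # tokens: (batch, seq_len)
        _, h = self.gru(self.embed(tokens))
        return F.normalize(h[-1], dim=-1)

def triplet_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Hinge loss over the hardest in-batch negatives (VSE++-style)."""
    scores = img_emb @ txt_emb.t()                       # cosine similarities
    pos = scores.diag().view(-1, 1)
    cost_txt = (margin + scores - pos).clamp(min=0)      # image vs. wrong caption
    cost_img = (margin + scores - pos.t()).clamp(min=0)  # caption vs. wrong image
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    cost_txt = cost_txt.masked_fill(mask, 0)
    cost_img = cost_img.masked_fill(mask, 0)
    return cost_txt.max(1)[0].mean() + cost_img.max(0)[0].mean()

# Toy forward/backward pass with random data.
img_enc, txt_enc = ImageEncoder(), TextEncoder()
regions = torch.randn(8, 36, 2048)          # 8 images, 36 region features each
tokens = torch.randint(0, 10000, (8, 20))   # 8 paired captions
loss = triplet_ranking_loss(img_enc(regions), txt_enc(tokens))
loss.backward()
print(loss.item())
```

The hardest-negative form of the loss makes the most confusing mismatched pairs, rather than easy ones, dominate the gradient, which is one common way such embeddings are sharpened.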
Cross-modal semantic mapping and cross-media retrieval are key problems for multimedia search engines. This study analyzes the hierarchy, functionality, and structure of the visual and auditory sensations of the cognitive system, and establishes a brain-like cross-modal semantic mapping framework based on cognitive computing of visual and auditory sensations. The framework takes into account the mechanisms of visual-auditory multisensory integration, selective attention in the thalamo-cortical system, emotional control in the limbic system, and memory enhancement in the hippocampus. Algorithms for cross-modal semantic mapping are then given. Experimental results show that the framework can be effectively applied to cross-modal semantic mapping and is also of significance for brain-like computing on non-von Neumann architectures.
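As a much simpler statistical stand-in for the cognitive-computing framework described above, cross-modal semantic mapping can be sketched with canonical correlation analysis (CCA), which learns projections of visual and auditory feature vectors into a shared semantic space. The toy data, dimensions, and retrieval check below are illustrative assumptions only.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Toy paired observations: 200 samples described by both modalities.
rng = np.random.default_rng(0)
shared = rng.normal(size=(200, 5))                       # latent semantic factors
visual = shared @ rng.normal(size=(5, 64)) + 0.1 * rng.normal(size=(200, 64))
audio  = shared @ rng.normal(size=(5, 32)) + 0.1 * rng.normal(size=(200, 32))

# Learn projections that map both modalities into a common 5-D semantic space.
cca = CCA(n_components=5)
vis_sem, aud_sem = cca.fit_transform(visual, audio)

def cosine(a, b):
    """Pairwise cosine similarities between the rows of a and b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Cross-modal retrieval in the shared space: for each visual query, check
# whether its true audio counterpart is ranked first.
ranks = (-cosine(vis_sem, aud_sem)).argsort(axis=1)
top1 = np.mean(ranks[:, 0] == np.arange(200))
print(f"visual->audio top-1 accuracy on toy data: {top1:.2f}")
```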
Because nearly everyone participates in social networks, these networks are full of massive multimedia data, and events are released and disseminated through them in the form of multi-modal and multi-attribute heterogeneous data. There has been extensive research on social network search. Considering the spatio-temporal features of messages and the social relationships among users, we summarize an overall social network search framework from a semantic perspective, based on existing research. For social network search, the acquisition and representation of spatio-temporal data is the basis, the semantic analysis and modeling of social-network cross-media big data is an important component, deep semantic learning of social networks is the key research field, and the indexing and ranking mechanism is the indispensable part. This paper reviews current studies in these fields and then presents the main challenges of social network search. Finally, we give an outlook on the prospects and future work of social network search.
As the popularity of digital images rapidly increases on the Internet, technologies for semantic image classification have become an important research topic. However, well-known content-based image classification methods do not overcome the so-called semantic gap problem, in which low-level visual features cannot represent the high-level semantic content of images. Image classification using visual and textual information often performs poorly, since the extracted textual features are often too limited to accurately represent the images. In this paper, we propose a semantic image classification approach using multi-context analysis. For a given image, we model the relevant textual information as its multi-modal context, and regard the related images connected by hyperlinks as its link context. Two kinds of context analysis models, i.e., cross-modal correlation analysis and a link-based correlation model, are used to capture the correlation among different modalities of features and the topical dependency among images induced by the link structure. We propose a new collective classification model called the relational support vector classifier (RSVC), based on the well-known Support Vector Machines (SVMs) and the link-based correlation model. Experiments showed that the proposed approach significantly improved classification accuracy over that of SVM classifiers using visual and/or textual features.
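A minimal sketch of the two ingredients combined in RSVC-style collective classification is given below: a content-only SVM over concatenated visual and textual features, followed by a crude link-based smoothing step that blends each image's score with those of its hyperlinked neighbours. The synthetic data, the fixed 0.7/0.3 blending weights, and the smoothing rule are illustrative stand-ins, not the actual RSVC formulation.

```python
import numpy as np
from sklearn.svm import SVC

# Toy setup: 100 images, concatenated visual+textual features, binary labels,
# and a symmetric hyperlink adjacency matrix between images.
rng = np.random.default_rng(1)
features = rng.normal(size=(100, 40))        # visual + textual feature vector
labels = (features[:, :5].sum(axis=1) > 0).astype(int)
links = (rng.random((100, 100)) < 0.05).astype(float)
links = np.maximum(links, links.T)           # undirected hyperlink graph
np.fill_diagonal(links, 0)

train = np.arange(70)
test = np.arange(70, 100)

# Step 1: content-only SVM on the visual/textual features.
svm = SVC(probability=True).fit(features[train], labels[train])
scores = svm.predict_proba(features)[:, 1]

# Step 2: link-based smoothing, blending each image's score with the mean
# score of its hyperlinked neighbours (a crude stand-in for the link-based
# correlation model used by RSVC).
deg = links.sum(axis=1, keepdims=True)
neighbour_mean = np.divide(links @ scores[:, None], deg,
                           out=scores[:, None].copy(), where=deg > 0).ravel()
combined = 0.7 * scores + 0.3 * neighbour_mean

print("content-only accuracy:", np.mean((scores[test] > 0.5) == labels[test]))
print("with link context:    ", np.mean((combined[test] > 0.5) == labels[test]))
```

On this random graph the smoothing has no real signal to exploit; the point of the sketch is only to show where the link context enters relative to the content classifier.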
In air traffic control communications (ATCC), misunderstandings between pilots and controllers could result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunications and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances because speech sequences are much longer than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and strengthen its capabilities in modeling auditory long-distance context dependencies. In addition, a two-stage training strategy is elaborately devised to derive semantics-aware acoustic representations effectively. The first stage focuses on pre-training the speech-text multimodal encoding module to enhance inter-modal semantic alignment and aural long-distance context dependencies. The second stage fine-tunes the entire network to bridge the input modality variation gap between the training and inference phases and boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, and exhibits substantial performance gains of 28.76% and 23.82% compared with the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinctive semantics. The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
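The dual-tower idea, separate speech and text encoders whose representations interact through cross-modal attention during encoding, can be sketched as follows. The layer sizes, the single cross-attention block, and the per-frame output head are simplifications assumed for illustration; they do not reproduce the paper's architecture or its two-stage training strategy.

```python
import torch
import torch.nn as nn

class SpeechTextDualTower(nn.Module):
    """Toy dual-tower encoder: separate speech and text towers whose outputs
    interact through a single cross-attention layer before per-frame decoding."""
    def __init__(self, n_mels=80, vocab_size=5000, d_model=256):
        super().__init__()
        self.speech_proj = nn.Linear(n_mels, d_model)
        self.speech_tower = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.text_embed = nn.Embedding(vocab_size, d_model)
        self.text_tower = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        # Cross-modal interaction: speech frames attend to text tokens.
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, mels, tokens):
        speech = self.speech_tower(self.speech_proj(mels))   # (B, T_speech, d)
        text = self.text_tower(self.text_embed(tokens))      # (B, T_text, d)
        fused, _ = self.cross_attn(speech, text, text)        # speech queries text
        return self.out(speech + fused)                       # per-frame token logits

model = SpeechTextDualTower()
mels = torch.randn(2, 300, 80)                # 2 utterances, 300 mel frames each
tokens = torch.randint(0, 5000, (2, 25))      # paired transcripts
logits = model(mels, tokens)
print(logits.shape)                           # torch.Size([2, 300, 5000])
```

Because the text tower is only available when transcripts exist, architectures of this kind typically need a strategy, such as the paper's two-stage training, to cope with the missing text branch at inference time.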
Developments in Information Technology (IT) have changed the use of healthcare terminologies from paper-based mortality statistics with the WHO international classifications of diseases (ICD) to IT-based morbidity implementations, for instance for Casemix-based healthcare funding and management systems. This higher level of granularity has spread worldwide under the umbrella of several national modifications named ICD10 XM. These developments have met the increased use of the international clinical reference terminology named SNOMED. When the update of WHO ICD10 to WHO ICD11 was decided, a merger was envisaged, and a joint WHO-SNOMED CT effort proposed a methodology to create a common formal ontology between the 11th version of the WHO International Classification of Diseases and Health Problems (ICD) and the world's most widely used clinical terminology, the Systematized Nomenclature of Human and Veterinary Medicine - Clinical Terms (SCT). The present work follows this unfinished effort and aims to develop a SNOMED-based formal ontology for ICD11 chapter 1, using the textual definitions of ICD11 codes, a completely new feature of ICD, together with the ontology tools provided by SCT in the publicly available SNOMED Browser. There are two key results: the lexical alignment is complete, while the ontology alignment with the validated SNOMED concept model is incomplete but can be completed with not-yet-validated attributes and values of the SNOMED Compositional Grammar. The work opens a new era for the seamless use of both international terminologies for morbidity, for instance for DRG/Casemix and clinical management use. The main limitation is that the work is restricted to 1 out of 26 chapters of ICD11.
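The lexical-alignment step, matching the textual definitions of ICD11 codes against SNOMED CT descriptions, can be sketched with a simple token-overlap heuristic. The two tiny dictionaries and the Jaccard scoring below are made-up stand-ins for real ICD11 and SNOMED CT content and for the actual alignment methodology.

```python
# A minimal sketch of lexical alignment between ICD-11 textual definitions and
# SNOMED CT descriptions using token overlap. The entries below are invented
# examples, not authoritative ICD-11 or SNOMED CT data.

def tokens(text: str) -> set[str]:
    """Lower-cased content tokens of a definition, with punctuation stripped."""
    return {w.strip(".,;()").lower() for w in text.split() if len(w) > 2}

icd11_entities = {
    "1A00": "Cholera: an acute diarrhoeal infection caused by Vibrio cholerae",
    "1B10": "Whooping cough: respiratory infection caused by Bordetella pertussis",
}
snomed_concepts = {
    "63650001": "Cholera (disorder) - infection due to Vibrio cholerae",
    "27836007": "Pertussis (disorder) - whooping cough",
}

def best_lexical_match(icd_text: str) -> tuple[str, float]:
    """Return the SNOMED concept id with the highest Jaccard token overlap."""
    icd_tok = tokens(icd_text)
    def jaccard(sct_text):
        sct_tok = tokens(sct_text)
        return len(icd_tok & sct_tok) / len(icd_tok | sct_tok)
    best = max(snomed_concepts, key=lambda cid: jaccard(snomed_concepts[cid]))
    return best, jaccard(snomed_concepts[best])

for code, definition in icd11_entities.items():
    sct_id, score = best_lexical_match(definition)
    print(f"ICD-11 {code} -> SNOMED {sct_id} (overlap {score:.2f})")
```

The ontology alignment proper would go further, expressing each ICD11 entity with SNOMED concept-model attributes and values rather than surface text, which is where the reported incompleteness arises.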
Funding (cross-modal retrieval study): Supported by the National Natural Science Foundation of China (62172109, 62072118), the National Science Foundation of Guangdong Province (2022A1515010322), the Guangdong Basic and Applied Basic Research Foundation (2021B1515120010), and the Huangpu International Sci&Tech Cooperation Foundation of Guangzhou (2021GH12).
Funding (cross-modal semantic mapping study): Supported by the National Natural Science Foundation of China (Nos. 61305042, 61202098), Projects of the Center for Remote Sensing Mission Study of the China National Space Administration (No. 2012A03A0939), and Science and Technological Research of Key Projects of the Education Department of Henan Province of China (No. 13A520071).
Funding (semantic image classification study): Project supported by the Hi-Tech Research and Development Program (863) of China (No. 2003AA119010) and the China-American Digital Academic Library (CADAL) Project (No. CADAL2004002).
Funding (ATCC speech recognition study): This research was funded by the Shenzhen Science and Technology Program (Grant No. RCBS20221008093121051), the General Higher Education Project of Guangdong Provincial Education Department (Grant No. 2020ZDZX3085), the China Postdoctoral Science Foundation (Grant No. 2021M703371), and the Post-Doctoral Foundation Project of Shenzhen Polytechnic (Grant No. 6021330002K).