Funding: Supported by the National Key R&D Program of China (2018YFB2100601) and the National Natural Science Foundation of China (61872024).
Abstract: Background Mixed reality (MR) video fusion systems merge video imagery with 3D scenes to make the scene more realistic, help users understand the video content and the temporal–spatial correlation between the two, and reduce the user's cognitive load. MR video fusion is used in various applications; however, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computationally intensive. Moreover, heavy bandwidth usage is another critical factor that affects the scalability of video fusion systems. Methods Our framework provides a fusion method that dynamically projects video images into 3D models as textures. Results Several experiments on different metrics demonstrate the effectiveness of the proposed framework. Conclusions The framework proposed in this study can overcome client limitations by utilizing remote rendering. Furthermore, the framework is browser based, so users can try the MR video fusion system on a laptop or tablet without installing any additional plug-ins or applications.
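The abstract does not detail how frames are mapped onto the geometry; as a minimal illustration of the projective texturing idea it describes, the NumPy sketch below computes per-vertex UV coordinates by projecting mesh vertices through a hypothetical camera (the intrinsics K and pose R, t are assumed inputs, not values from the paper), so each incoming video frame can be applied to the 3D model as a texture.

```python
# Illustrative sketch (not the paper's implementation): projecting a video
# frame onto 3D geometry as a texture. Texture coordinates are obtained by
# projecting each mesh vertex through the (hypothetical) camera that captured
# the frame, then normalizing to [0, 1] UV space.
import numpy as np

def video_projection_uvs(vertices, K, R, t, frame_width, frame_height):
    """Compute UV coordinates for projective texture mapping.

    vertices: (N, 3) mesh vertices in world coordinates.
    K: (3, 3) camera intrinsics; R, t: world-to-camera rotation/translation.
    Returns (N, 2) UVs in [0, 1] and a mask of vertices that are in front of
    the camera and fall inside the video frame.
    """
    cam = R @ vertices.T + t.reshape(3, 1)          # world -> camera space
    in_front = cam[2] > 1e-6                         # keep points before the lens
    pix = K @ cam                                    # camera -> pixel space
    pix = pix[:2] / np.clip(pix[2], 1e-6, None)      # perspective divide
    u = pix[0] / frame_width
    v = 1.0 - pix[1] / frame_height                  # flip v for texture space
    inside = (u >= 0) & (u <= 1) & (v >= 0) & (v <= 1)
    return np.stack([u, v], axis=1), in_front & inside
```

In a remote-rendering, browser-based setup such as the one described, coordinates of this kind could be recomputed (or evaluated in a shader) whenever the camera pose changes, so the model is re-textured with each new frame without re-uploading geometry.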
Funding: Supported by the National Key Research and Development Plan (2016YFB1001200), the Natural Science Foundation of China (U1435220, 61232013), and the Natural Science Research Projects of Universities in Jiangsu Province (16KJA520003).
Abstract: Biography videos based on the life and achievements of prominent historical figures aim to describe the lives of great people. In this paper, an interactive video summarization method for biography videos based on multimodal fusion is proposed; it is a novel approach that visualizes the specific features of a biography video and lets users interact with the video content by taking advantage of multimodality. In general, a film's story progresses through character dialogue, and the subtitles are produced from that dialogue, so they contain the information related to the video. In this paper, JGibbsLDA is applied to extract keywords from the subtitles, because a biography video covers the different aspects that depict a character's whole life. To fuse keywords and key frames, affinity propagation is adopted to calculate the similarity between each key-frame cluster and the keywords. Through this method, a video summarization based on multimodal fusion is presented that describes the video content more completely. To reduce the time spent searching for video content of interest and to reveal the relationships between the main characters, a map-style visualization is adopted to present the video content and allow interaction with the summarization. An experiment is conducted to evaluate the video summarization, and the results demonstrate that the system facilitates the exploration of video content while improving interaction and finding events of interest efficiently.
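The paper names JGibbsLDA (a Java Gibbs-sampling LDA implementation) and affinity propagation, but the abstract gives no code; the sketch below is a rough stand-in that uses scikit-learn's LatentDirichletAllocation for subtitle keyword extraction and AffinityPropagation for key-frame clustering. The library choice, function names, and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: scikit-learn's LDA stands in for JGibbsLDA, and
# AffinityPropagation groups key-frame features, mirroring the two components
# described in the abstract.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import AffinityPropagation

def subtitle_keywords(subtitle_lines, n_topics=5, n_words=8):
    """Extract topic keywords from subtitle text (stand-in for JGibbsLDA)."""
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(subtitle_lines)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = np.array(vec.get_feature_names_out())
    # the top words of each topic form the keyword set for that aspect of the life story
    return [vocab[np.argsort(topic)[::-1][:n_words]].tolist()
            for topic in lda.components_]

def cluster_keyframes(frame_features):
    """Group key frames with affinity propagation (no preset cluster count)."""
    ap = AffinityPropagation(random_state=0).fit(frame_features)
    return ap.labels_, ap.cluster_centers_indices_
```

The resulting keyword sets and key-frame clusters could then be matched by a similarity measure to assemble the multimodal summary, as the abstract outlines.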
Funding: Supported by the National Natural Science Foundation of China (Nos. 60634030 and 60372085).
Abstract: This paper presents a video context enhancement method for night surveillance. The basic idea is to extract and fuse the meaningful information from video sequences captured by a fixed camera under different illuminations. A unique characteristic of the algorithm is that it separates the image context into two classes and estimates them in different ways. One class contains the basic surrounding-scene information and the scene model, which are obtained via background modeling and object tracking in the daytime video sequence. The other class is extracted from the nighttime video and includes frequently moving regions, high-illumination regions, and high-gradient regions. The scene model and a pixel-wise difference method are used to segment these three regions. A shift-invariant discrete-wavelet-based image fusion technique is used to integrate all of this context information into the final result. Experimental results demonstrate that the proposed approach provides many more details and much more meaningful information for nighttime video.
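As a hedged illustration of the shift-invariant wavelet fusion step (not the authors' code), the sketch below fuses a daytime reference image with a nighttime frame using PyWavelets' stationary wavelet transform: approximation bands are averaged and detail bands follow a max-absolute-coefficient rule, a common choice for preserving edges from both sources. The wavelet, decomposition level, and fusion rules are assumptions.

```python
# Illustrative sketch: shift-invariant (stationary) wavelet fusion of a
# daytime background image with a nighttime frame. Approximation bands are
# averaged; each detail coefficient keeps whichever source has the larger
# magnitude, so salient edges from both images survive.
import numpy as np
import pywt

def swt_fuse(day_img, night_img, wavelet="db2", level=2):
    """Both inputs: float grayscale arrays with sides divisible by 2**level."""
    cd = pywt.swt2(day_img, wavelet, level=level)
    cn = pywt.swt2(night_img, wavelet, level=level)
    fused = []
    for (a_day, d_day), (a_night, d_night) in zip(cd, cn):
        approx = 0.5 * (a_day + a_night)               # average low-frequency content
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                        for x, y in zip(d_day, d_night))  # max-abs rule for details
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)
```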
Funding: This work was supported by the National Natural Science Foundation of China (No. 61872231), the National Key Research and Development Program of China (No. 2021YFC2801000), and the Major Research Plan of the National Social Science Foundation of China (No. 20&ZD130).
Abstract: The performance of Video Question Answering (VQA) systems relies on capturing the key information of both visual images and natural language in context to generate relevant answers to questions. However, traditional linear combinations of multimodal features capture only shallow feature interactions and fall far short of the needs of deep feature fusion. Attention mechanisms have been used to perform deep fusion, but most of them can only handle weight assignment for single-modal information, leading to attention imbalance across modalities. To address these problems, we propose a novel VQA model based on Triple Multimodal feature Cyclic Fusion (TMCF) and a Self-Adaptive Multimodal Balancing Mechanism (SAMB). Our model is designed to enhance complex feature interactions among multimodal features while balancing cross-modal information. In addition, TMCF and SAMB can be used as an extensible plug-in for exploring new feature combinations in the visual image domain. Extensive experiments were conducted on the MSVD-QA and MSRVTT-QA datasets. The results confirm the advantages of our approach in handling multimodal tasks. We also provide ablation studies to verify the effectiveness of each proposed component.
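The abstract does not give the TMCF or SAMB equations, so the PyTorch sketch below only illustrates the general idea it describes: three modality features are fused pairwise in a cycle, and a learned softmax gate re-balances the three fused vectors so no single modality dominates. All module names, dimensions, and the gating form are hypothetical and are not the published implementation.

```python
# Hypothetical sketch of cyclic trimodal fusion with an adaptive balancing
# gate. Three feature vectors (e.g. appearance, motion, question text) are
# fused in ordered pairs around a cycle, then re-weighted by a softmax gate.
import torch
import torch.nn as nn

class CyclicFusionSketch(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.pair = nn.Linear(2 * dim, dim)   # fuses one ordered pair of modalities
        self.gate = nn.Linear(3 * dim, 3)     # balancing weights over the three fusions
        self.act = nn.ReLU()

    def forward(self, a, m, q):               # each input: (batch, dim)
        # cycle a->m, m->q, q->a so every modality interacts with every other
        f1 = self.act(self.pair(torch.cat([a, m], dim=-1)))
        f2 = self.act(self.pair(torch.cat([m, q], dim=-1)))
        f3 = self.act(self.pair(torch.cat([q, a], dim=-1)))
        w = torch.softmax(self.gate(torch.cat([f1, f2, f3], dim=-1)), dim=-1)
        # weighted sum keeps the modalities balanced in the joint representation
        return w[:, 0:1] * f1 + w[:, 1:2] * f2 + w[:, 2:3] * f3
```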