Funding: Funded by the National Natural Science Foundation of China, grant number 61302188.
Abstract: Multimodal medical image fusion can help physicians devise more accurate treatment plans for patients, because unimodal images provide only limited valid information. To address the limited ability of traditional medical image fusion methods to preserve image details and salient information, a new multimodal medical image fusion method (NSST-PAPCNN-LatLRR) is proposed in this paper. First, the high- and low-frequency sub-band coefficients are obtained by decomposing the source images with the non-subsampled shearlet transform (NSST). The latent low-rank representation (LatLRR) algorithm is then used to fuse the low-frequency sub-band coefficients, and an improved parameter-adaptive pulse-coupled neural network (PAPCNN) algorithm is proposed to fuse the high-frequency sub-band coefficients. The improved PAPCNN model builds on automatic parameter setting and adds an optimized configuration of the time decay factor αe. Experimental results show that, compared with five mainstream fusion algorithms, the new algorithm markedly improves visual quality, better characterizes the important information in the images, and better preserves detail; it ranks first in at least four of six objective indexes.
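The pipeline described above can be summarized in a short sketch. The snippet below is only an illustrative, runnable outline: a Gaussian low-pass split stands in for the NSST decomposition, simple averaging stands in for the LatLRR low-frequency rule, and a choose-max rule stands in for the improved PAPCNN high-frequency rule; none of the function names or parameters come from the paper itself.

```python
# Minimal sketch of the decompose -> fuse-low -> fuse-high -> reconstruct pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_band_decompose(img, sigma=2.0):
    """Split an image into a low-frequency base and a high-frequency detail band."""
    low = gaussian_filter(img.astype(np.float64), sigma)
    return low, img - low

def fuse_low(low_a, low_b):
    """Stand-in for the LatLRR-based low-frequency rule: simple averaging."""
    return 0.5 * (low_a + low_b)

def fuse_high(high_a, high_b):
    """Stand-in for the PAPCNN-based high-frequency rule: choose-max by |coefficient|."""
    return np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)

def fuse_images(img_a, img_b):
    low_a, high_a = two_band_decompose(img_a)
    low_b, high_b = two_band_decompose(img_b)
    return fuse_low(low_a, low_b) + fuse_high(high_a, high_b)

if __name__ == "__main__":
    a = np.random.rand(64, 64)   # e.g. a CT slice (toy data)
    b = np.random.rand(64, 64)   # e.g. the registered MR slice (toy data)
    print(fuse_images(a, b).shape)
```

In the actual method, the NSST decomposition, the LatLRR low-frequency fusion, and the PAPCNN high-frequency fusion would replace the three stand-in functions.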
Abstract: Medical image fusion has developed into an efficient assistive technology in clinical applications such as diagnosis and treatment planning. To address the insufficient preservation of image contours and detail by traditional image fusion methods, a new multimodal medical image fusion method is proposed. The method first decomposes the source images with the non-subsampled shearlet transform to obtain high- and low-frequency subband coefficients, then fuses the low-frequency subband coefficients with the latent low-rank representation algorithm and the high-frequency subband coefficients with an improved PAPCNN algorithm. Finally, building on automatic parameter setting, an optimized configuration of the time decay factor αe is carried out. Experimental results show that the proposed method overcomes the difficult parameter setting and weak detail preservation of traditional PCNN-based fusion, while achieving clear improvements in visual quality and objective evaluation indicators.
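Since both of these abstracts revolve around a parameter-adaptive PCNN with a time decay factor αe, a simplified firing loop may help fix ideas. The sketch below is a generic pulse-coupled neural network, not the authors' PAPCNN: the parameter-adaptive formulas are omitted, αe (alpha_e) is left as a free argument, and coefficients are selected from the source whose neurons fire more often.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(band, alpha_f=0.1, alpha_e=1.0, beta=0.2,
                     v_l=1.0, v_e=20.0, iterations=110):
    """Simplified PCNN: returns how often each neuron fired for one subband."""
    s = np.abs(band)
    s = s / (s.max() + 1e-12)                 # normalized stimulus
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])           # 3x3 linking kernel
    u = np.zeros_like(s); e = np.ones_like(s); y = np.zeros_like(s)
    fires = np.zeros_like(s)
    for _ in range(iterations):
        link = v_l * convolve(y, w, mode="nearest")
        u = np.exp(-alpha_f) * u + s * (1.0 + beta * link)   # internal activity
        y = (u > e).astype(float)                            # firing map
        e = np.exp(-alpha_e) * e + v_e * y                   # alpha_e sets threshold decay
        fires += y
    return fires

def fuse_high_pcnn(high_a, high_b, **kwargs):
    """Pick each high-frequency coefficient from the source whose neuron fired more."""
    fa, fb = pcnn_fire_counts(high_a, **kwargs), pcnn_fire_counts(high_b, **kwargs)
    return np.where(fa >= fb, high_a, high_b)

rng = np.random.default_rng(0)
h1, h2 = rng.standard_normal((32, 32)), rng.standard_normal((32, 32))
print(fuse_high_pcnn(h1, h2).shape)
```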
Abstract: Medical image fusion is a synthesis technology that combines multi-modal medical information using mathematical procedures to produce better visualization of image content and high-quality image output. It plays an indispensable role in solving complicated medical problems, yet while recent research has placed a stronger emphasis on preserving medical image details, color distortion and halo artifacts remain unaddressed. This paper proposes a novel method for fusing Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images using a hybrid model of the Non-Subsampled Contourlet Transform (NSCT) and Joint Sparse Representation (JSR). The model meets the need for precise integration of medical images of different modalities, an essential requirement for diagnosis and for treating patients accordingly. In the proposed model, the medical image is decomposed using NSCT, an efficient shift-invariant decomposition transform, and JSR is used to extract the common features of the medical images for the fusion process. Performance analysis shows that the proposed fusion technique is more efficient, provides better results, and achieves a high level of distinctness by integrating the advantages of complementary images. Comparative analysis shows that the proposed technique outperforms existing medical image fusion practices.
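To make the JSR idea concrete, the following hedged sketch codes a pair of corresponding CT/MRI patches over a stacked dictionary [D D 0; D 0 D], so the recovered coefficients split into a common part and two innovation parts. The dictionary here is random and the patch data are toys; the paper itself would apply this to NSCT subbands with properly learned dictionaries.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
patch_dim, n_atoms, k = 64, 128, 8          # 8x8 patches, illustrative sizes

D = rng.standard_normal((patch_dim, n_atoms))
D /= np.linalg.norm(D, axis=0, keepdims=True)   # shared dictionary (illustrative, not learned)

# JSR dictionary: [D D 0; D 0 D] -> common + per-source innovation coefficients.
Z = np.zeros_like(D)
A = np.block([[D, D, Z],
              [D, Z, D]])

def fuse_patch_jsr(p_ct, p_mri):
    """Jointly code a CT/MRI patch pair and reconstruct a fused patch."""
    y = np.concatenate([p_ct, p_mri])
    x = orthogonal_mp(A, y, n_nonzero_coefs=k)
    xc, x1, x2 = x[:n_atoms], x[n_atoms:2 * n_atoms], x[2 * n_atoms:]
    x_inn = x1 if np.abs(x1).sum() >= np.abs(x2).sum() else x2   # keep stronger innovation
    return D @ (xc + x_inn)

p_ct = rng.standard_normal(patch_dim)    # vectorized 8x8 CT patch (toy data)
p_mri = rng.standard_normal(patch_dim)   # corresponding MRI patch (toy data)
print(fuse_patch_jsr(p_ct, p_mri).shape)
```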
Funding: Supported in part by the National Natural Science Foundation of China under Grant 41505017.
Abstract: Multi-source information can be obtained by fusing infrared and visible-light images, whose information is complementary. However, existing fusion methods suffer from blurred edges, low contrast, and loss of detail. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high-frequency and low-frequency subbands with the non-subsampled shearlet transform (NSST). The low-frequency subbands are fused by convolutional sparse representation (CSR), and the high-frequency subbands are fused by an improved pulse-coupled neural network (IPCNN) algorithm, which effectively addresses the difficulty of setting parameters in the traditional PCNN algorithm and improves the performance of sparse representation with detail injection. The results show that the proposed method outperforms existing mainstream fusion algorithms in both visual effects and objective indicators.
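The CSR fusion rule can be illustrated independently of any particular solver. The sketch below assumes the convolutional sparse coefficient maps have already been produced by an external CSR solver (not included here) and fuses them by local L1 activity; the random arrays merely stand in for real coefficient maps, and the reconstruction of the fused band from the fused maps is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def csr_activity(coef_maps, window=5):
    """Window-averaged L1 activity of convolutional sparse coefficient maps.
    coef_maps: array of shape (n_filters, H, W) from a CSR solver (external)."""
    l1 = np.abs(coef_maps).sum(axis=0)
    return uniform_filter(l1, size=window)

def fuse_csr(coefs_a, coefs_b, window=5):
    """Choose-max fusion of two sets of CSR coefficient maps by local activity."""
    act_a, act_b = csr_activity(coefs_a, window), csr_activity(coefs_b, window)
    mask = act_a >= act_b                       # per-pixel source selection
    return np.where(mask[None, :, :], coefs_a, coefs_b)

# Toy sparse coefficient maps standing in for solver output; shape (n_filters, H, W).
rng = np.random.default_rng(1)
coefs_ir = rng.standard_normal((16, 64, 64)) * (rng.random((16, 64, 64)) < 0.05)
coefs_vis = rng.standard_normal((16, 64, 64)) * (rng.random((16, 64, 64)) < 0.05)
print(fuse_csr(coefs_ir, coefs_vis).shape)
```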
Funding: Major Science and Technology Project of Anhui Province (Grant Number: 201903a05020011), Talents Research Fund Project of Hefei University (Grant Number: 20RC14), the Natural Science Research Project of Anhui Universities (Grant Number: KJ2021A0995), Graduate Student Quality Engineering Project of Hefei University (Grant Number: 2021Yjyxm09), and the Enterprise Research Project: Research on Robot Intelligent Magnetic Force Recognition and Diagnosis Technology Based on DT and Deep Learning Optimization.
Abstract: The combination of the Industrial Internet of Things (IIoT) and digital twin (DT) technology makes it possible for a DT model to perceive equipment status and performance dynamically. However, conventional digital modeling is weak at fusing and adjusting virtual and real information, and experience-based performance prediction greatly reduces the inclusiveness and accuracy of the model. In this paper, a DT-IIoT optimization model is proposed to improve the real-time representation and prediction of the key equipment state. Firstly, a global real-time feedback and dynamic adjustment mechanism is established by combining DT-IIoT with algorithm optimization. Secondly, a strong-screening dual-model optimization (SSDO) prediction method based on Stacking integration and fusion is proposed within the dynamic regulation mechanism; lightweight screening and multi-round optimization are used to improve the prediction accuracy of the evolution model. Finally, taking the boiler performance of a power plant in Shanxi as an example, accurate representation and evolution prediction of boiler steam quantity is realized. The results show that real-time state representation and life-cycle performance prediction of large key equipment are optimized by these methods. The self-lifting ability of the Stacking-based SSDO prediction method is 15.85% on average, with an optimal self-lifting ability of 18.16%. The optimization model reduces the MSE loss from an initial 0.318 to an optimal 0.1074 and increases R2 from an initial 0.731 to an optimal 0.9092. The adaptability and reliability of the model are comprehensively improved, and better prediction and analysis results are achieved. This ensures the stable operation of core equipment and is of great significance for comprehensively understanding equipment status and performance.
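A minimal example of the Stacking idea behind the SSDO predictor can be written with scikit-learn; the data below are synthetic stand-ins for boiler operating variables and steam quantity, and the base learners are illustrative choices rather than the screened models used in the paper.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Toy stand-in for boiler operating data -> steam quantity.
X, y = make_regression(n_samples=2000, n_features=20, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Base learners feeding a meta-learner, in the spirit of Stacking integration and fusion.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("gbr", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(alpha=1.0),
    cv=5,
)
stack.fit(X_train, y_train)
pred = stack.predict(X_test)
print("MSE:", mean_squared_error(y_test, pred))
print("R2 :", r2_score(y_test, pred))
```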
Funding: Supported by the Natural Science Foundation of Liaoning Province (Grant No. 2023-MSBA-070) and the National Natural Science Foundation of China (Grant No. 62302086).
Abstract: Multi-modal fusion technology has gradually become a fundamental task in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction, and it is rapidly becoming a dominant research direction thanks to its powerful perception and judgment capabilities. In complex scenes, multi-modal fusion technology exploits the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment performance limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and provides a detailed and in-depth analysis. According to the data fusion stage, multi-modal fusion has four primary methods: early fusion, deep fusion, late fusion, and hybrid fusion. The paper surveys the three major multi-modal fusion technologies that can significantly enhance the effect of data fusion and further explores the applications of multi-modal fusion technology in various fields. Finally, it discusses the challenges and explores potential research opportunities. Multi-modal tasks still need intensive study because of data heterogeneity and quality; preserving complementary information while eliminating redundant information between modalities is critical, and invalid data fusion methods may introduce extra noise and lead to worse results. This paper provides a comprehensive and detailed summary in response to these challenges.
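The difference between early and late fusion, two of the fusion stages listed above, can be shown in a few lines; the two "modalities" here are just halves of a synthetic feature matrix, so this is an illustration of the two strategies rather than any surveyed system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy two-modality data: split one feature matrix into "modality A" and "modality B".
X, y = make_classification(n_samples=1500, n_features=40, n_informative=20, random_state=0)
Xa, Xb = X[:, :20], X[:, 20:]
Xa_tr, Xa_te, Xb_tr, Xb_te, y_tr, y_te = train_test_split(Xa, Xb, y, test_size=0.3, random_state=0)

# Early fusion: concatenate modality features before a single classifier.
early = LogisticRegression(max_iter=1000).fit(np.hstack([Xa_tr, Xb_tr]), y_tr)
acc_early = accuracy_score(y_te, early.predict(np.hstack([Xa_te, Xb_te])))

# Late fusion: train one classifier per modality and average their probabilities.
clf_a = LogisticRegression(max_iter=1000).fit(Xa_tr, y_tr)
clf_b = LogisticRegression(max_iter=1000).fit(Xb_tr, y_tr)
proba = 0.5 * clf_a.predict_proba(Xa_te) + 0.5 * clf_b.predict_proba(Xb_te)
acc_late = accuracy_score(y_te, proba.argmax(axis=1))

print(f"early fusion acc = {acc_early:.3f}, late fusion acc = {acc_late:.3f}")
```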
Funding: Supported by the National Natural Science Foundation of China (No. 62302540) with author Fangfang Shan; for more information, please visit https://www.nsfc.gov.cn/ (accessed on 31/05/2024). Additionally funded by the Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness (No. HNTS2022020), where Fangfang Shan is an author; further details can be found at http://xt.hnkjt.gov.cn/data/pingtai/ (accessed on 31/05/2024). Also supported by the Natural Science Foundation of Henan Province Youth Science Fund Project (No. 232300420422); for more information, visit https://kjt.henan.gov.cn/2022/09-02/2599082.html (accessed on 31/05/2024).
Abstract: Social media has become increasingly significant in modern society, but it has also turned into a breeding ground for the propagation of misleading information, potentially causing a detrimental impact on public opinion and daily life. Compared with pure text content, multimodal content significantly increases the visibility and shareability of posts, which has made the search for efficient modality representations and cross-modal information interaction methods a key focus in multimodal fake news detection. To address the critical challenge of accurately detecting fake news on social media, this paper proposes a fake news detection model based on cross-modal message aggregation and a gated fusion network (MAGF). MAGF first uses BERT to extract cumulative textual feature representations and word-level features, applies the Faster Region-based Convolutional Neural Network (Faster R-CNN) to obtain image objects, and leverages ResNet-50 and Visual Geometry Group-19 (VGG-19) to obtain image region features and global features. The image region features and word-level text features are then projected into a low-dimensional space to compute a text-image affinity matrix for cross-modal message aggregation. A gated fusion network combines text and image region features to obtain adaptively aggregated features. An interaction matrix is derived through an attention mechanism and further integrated with global image features using a co-attention mechanism to produce multimodal representations. Finally, the fused features are fed into a classifier for news categorization. Experiments on two public datasets, Twitter and Weibo, show that the proposed model achieves accuracies of 91.8% and 88.7%, respectively, significantly outperforming traditional unimodal and existing multimodal models.
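Two of the building blocks mentioned above, the gated fusion of text and image features and the text-image affinity matrix, can be sketched generically in PyTorch; the dimensions and random tensors below are placeholders, not the MAGF configuration.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Generic gated fusion of a text feature and an image-region feature.
    The gate decides, per dimension, how much of each modality to keep."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text_feat, image_feat):
        g = torch.sigmoid(self.gate(torch.cat([text_feat, image_feat], dim=-1)))
        return g * text_feat + (1.0 - g) * image_feat

# Toy features standing in for aggregated BERT text features and projected
# Faster R-CNN / ResNet-50 region features of the same dimension.
text_feat = torch.randn(8, 256)
image_feat = torch.randn(8, 256)
fused = GatedFusion(256)(text_feat, image_feat)

# Text-image affinity matrix between word-level and region-level features,
# used here to let each word gather visual messages (cross-modal aggregation).
words = torch.randn(8, 20, 256)     # batch x words x dim
regions = torch.randn(8, 36, 256)   # batch x regions x dim
affinity = torch.softmax(words @ regions.transpose(1, 2) / 256 ** 0.5, dim=-1)
aggregated = affinity @ regions
print(fused.shape, aggregated.shape)
```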
Funding: Sponsored by the Fundamental Research Funds for the Central Universities of China (Grant No. PA2023IISL0098), the Hefei Municipal Natural Science Foundation (Grant No. 202201), the National Natural Science Foundation of China (Grant No. 62071164), and the Open Fund of the Information Materials and Intelligent Sensing Laboratory of Anhui Province (Anhui University) (Grant Nos. IMIS202214 and IMIS202102).
Abstract: This article proposes a VGG network with histogram of oriented gradients (HOG) feature fusion (HOG-VGG) for polarimetric synthetic aperture radar (PolSAR) image terrain classification. VGG-Net has a strong capacity for deep feature extraction and can fully capture the global deep features of different terrains in PolSAR images, so it is widely used in PolSAR terrain classification. However, VGG-Net ignores local edge and shape features, which leads to incomplete feature representation of the PolSAR terrains and, consequently, to unsatisfactory classification accuracy, even though edge and shape features play an important role in PolSAR terrain classification. To solve this problem, a new VGG network with HOG feature fusion is proposed for high-precision PolSAR terrain classification. HOG-VGG extracts both the global deep semantic features and the local edge and shape features of the PolSAR terrains, greatly improving the completeness of the terrain feature representation, and optimally fuses the two to achieve the best classification results. The superiority of HOG-VGG is verified on the Flevoland, San Francisco, and Oberpfaffenhofen datasets; experiments show that the proposed HOG-VGG achieves much better PolSAR terrain classification performance, with overall accuracies of 97.54%, 94.63%, and 96.07%, respectively.
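The feature-fusion idea, global deep features concatenated with local HOG descriptors, can be sketched as a post-hoc feature combination; HOG-VGG itself fuses the two inside the network, so the snippet below (random patches, random "deep" features, a linear SVM) only approximates the concept.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(img):
    """Local edge/shape descriptor for one channel of an image patch."""
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def fuse_features(deep_feat, img):
    """Concatenate global deep features with local HOG features (fusion by concatenation)."""
    return np.concatenate([deep_feat, hog_features(img)])

# Toy data: random 64x64 patches and random 512-d "deep" features standing in
# for VGG activations; a linear SVM consumes the fused representation.
rng = np.random.default_rng(0)
patches = rng.random((40, 64, 64))
deep = rng.standard_normal((40, 512))
labels = rng.integers(0, 3, size=40)
X = np.stack([fuse_features(d, p) for d, p in zip(deep, patches)])
clf = SVC(kernel="linear").fit(X, labels)
print(X.shape, clf.score(X, labels))
```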
Funding: The National Natural Science Foundation of China (61671383) and the Shaanxi Key Industry Innovation Chain Project (2018ZDCXL-G-12-2, 2019ZDLGY14-02-02, 2019ZDLGY14-02-03).
Abstract: Image fusion based on sparse representation (SR) has become a primary research direction among transform-domain methods. However, SR-based image fusion algorithms have high computational complexity and neglect the local features of an image, resulting in limited detail retention and high sensitivity to registration misalignment. To overcome these shortcomings, as well as the noise introduced during the fusion process, this paper proposes a new signal decomposition model: a multi-source image fusion algorithm based on gradient-regularized convolutional sparse representation (CSR). The main innovation of this work is the use of a sparse optimization function to perform a two-scale decomposition of the source image into high-frequency and low-frequency components. The sparse coefficients are obtained with the gradient-regularized CSR model, and the coefficient with the maximum value is selected to obtain the optimal high-frequency component of the fused image. The best low-frequency component is obtained with an extreme-value or averaging fusion strategy, and the final fused image is obtained by adding the two optimal components. Experimental results demonstrate that this method greatly improves the ability to preserve image details and reduces sensitivity to image registration.
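A gradient-regularized two-scale split can be written in closed form with the FFT, which gives a concrete, runnable stand-in for the sparse-optimization decomposition described above; the choose-max rule below is applied directly to the detail layers rather than to CSR coefficients, so this illustrates the structure of the method, not the paper's algorithm.

```python
import numpy as np

def gradient_regularized_base(img, lam=5.0):
    """Two-scale split via a gradient-regularized least-squares problem:
        base = argmin_B ||B - img||^2 + lam * ||grad B||^2,
    solved in closed form with the FFT (periodic boundary assumption)."""
    h, w = img.shape
    dx = np.zeros((h, w)); dx[0, 0], dx[0, -1] = 1.0, -1.0   # horizontal difference kernel
    dy = np.zeros((h, w)); dy[0, 0], dy[-1, 0] = 1.0, -1.0   # vertical difference kernel
    denom = 1.0 + lam * (np.abs(np.fft.fft2(dx)) ** 2 + np.abs(np.fft.fft2(dy)) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(img) / denom))

def fuse_two_scale(img_a, img_b, lam=5.0):
    base_a, base_b = gradient_regularized_base(img_a, lam), gradient_regularized_base(img_b, lam)
    detail_a, detail_b = img_a - base_a, img_b - base_b
    fused_base = 0.5 * (base_a + base_b)                      # "average" low-frequency rule
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)  # choose-max
    return fused_base + fused_detail

a, b = np.random.rand(128, 128), np.random.rand(128, 128)
print(fuse_two_scale(a, b).shape)
```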
Funding: This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61771299, No. 61771322, No. 61375015, and No. 61301027.
Abstract: Recently, sparse representation classification (SRC) and Fisher discrimination dictionary learning (FDDL) have emerged as important methods for vehicle classification. In this paper, inspired by recent breakthroughs in discriminative dictionary learning and multi-task joint covariate selection, we address vehicle classification in real-world applications by formulating it as a multi-task joint sparse representation model based on Fisher discrimination dictionary learning, which merges the strength of multiple features across multiple sensors. To improve classification accuracy in complex scenes, we develop a new method, multi-task joint sparse representation classification based on Fisher discrimination dictionary learning, for vehicle classification. In the proposed method, acoustic and seismic sensor data sets capturing the same physical event are measured simultaneously by multiple heterogeneous sensors, and multi-dimensional frequency-spectrum features are extracted from the sensor data using Mel-frequency cepstral coefficients (MFCC). Moreover, we extend the model to handle sparse environmental noise. We experimentally demonstrate the benefits of joint information fusion based on Fisher discrimination dictionary learning from different sensors in vehicle classification tasks.
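The two ingredients, MFCC features per sensor and a joint sparsity pattern across sensors, can be approximated with standard libraries. The sketch below assumes a single shared dictionary and uses MultiTaskLasso's l2/l1 penalty to enforce joint atom selection across the two modalities, whereas the paper learns class-wise FDDL dictionaries; the waveforms and dictionary here are random toys.

```python
import numpy as np
import librosa
from sklearn.linear_model import MultiTaskLasso

def mfcc_vector(signal, sr=4096, n_mfcc=13):
    """Mean MFCC vector of a 1-D sensor signal (acoustic or seismic)."""
    m = librosa.feature.mfcc(y=signal.astype(float), sr=sr, n_mfcc=n_mfcc)
    return m.mean(axis=1)

rng = np.random.default_rng(0)
acoustic = rng.standard_normal(4096)     # toy acoustic waveform
seismic = rng.standard_normal(4096)      # toy seismic waveform for the same event
y = np.stack([mfcc_vector(acoustic), mfcc_vector(seismic)], axis=1)   # (13, 2)

# Shared toy dictionary whose columns stand in for training MFCC vectors;
# the real method learns class-wise dictionaries with FDDL.
D = rng.standard_normal((13, 40))
D /= np.linalg.norm(D, axis=0)

# The l2/l1 penalty forces both modalities (tasks) to select the same atoms,
# i.e. a joint sparsity pattern across sensors.
coder = MultiTaskLasso(alpha=0.05, max_iter=5000).fit(D, y)
W = coder.coef_                          # (2 tasks, 40 atoms), shared support
print(W.shape, int((np.abs(W).sum(axis=0) > 1e-8).sum()), "atoms jointly selected")
```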
Abstract: Multimodal data processing is an important research area: combining text, images, and other information can improve model performance. However, because of the heterogeneity between modalities and the challenges of information fusion, designing effective multimodal classification models remains a challenging problem. This paper proposes a new multimodal classification model, MCM-ICE, which addresses the challenges of feature representation and feature fusion through a joint strategy of independent encoding and collaborative encoding. MCM-ICE was evaluated on the Fashion-Gen and Hateful Memes Challenge datasets, and the results show that it outperforms existing state-of-the-art methods on both tasks. The paper also examines how different choices of vectors from the output layer of the collaborative-encoding Transformer affect the results, showing that combining the [CLS] vector with the mean-pooled vector of the remaining (non-[CLS]) tokens yields the best results. Ablation studies and exploratory analyses support the effectiveness of MCM-ICE for multimodal classification tasks.
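The pooling comparison reported above is easy to reproduce on any BERT-style encoder output; the sketch below assumes position 0 holds the [CLS] token and simply computes both vectors from a generic hidden-state tensor, independently of the MCM-ICE architecture.

```python
import torch

def cls_and_mean_pool(hidden_states, attention_mask):
    """Given Transformer outputs (batch, seq, dim) and a 0/1 attention mask,
    return the [CLS] vector and the mean-pooled vector of the remaining tokens
    (position 0 is assumed to be [CLS], as in BERT-style encoders)."""
    cls_vec = hidden_states[:, 0]                          # [CLS] representation
    mask = attention_mask[:, 1:].unsqueeze(-1).float()     # drop [CLS] from the pool
    tokens = hidden_states[:, 1:] * mask
    mean_vec = tokens.sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
    return cls_vec, mean_vec

hidden = torch.randn(4, 16, 768)                 # toy encoder output
mask = torch.ones(4, 16, dtype=torch.long)       # all tokens valid in this toy batch
cls_vec, mean_vec = cls_and_mean_pool(hidden, mask)
fused = torch.cat([cls_vec, mean_vec], dim=-1)   # combined representation fed downstream
print(fused.shape)
```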