Funding: the National Natural Science Foundation of China (No. 61976080), the Academic Degrees & Graduate Education Reform Project of Henan Province (No. 2021SJGLX195Y), the Teaching Reform Research and Practice Project of Henan Undergraduate Universities (No. 2022SYJXLX008), and the Key Project on Research and Practice of Henan University Graduate Education and Teaching Reform (No. YJSJG2023XJ006).
Abstract: Unsupervised multi-modal image translation is an emerging area of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, the advanced approaches available employ a multi-generator mechanism to model the different domain mappings, which results in inefficient network training and mode collapse, and thus in poor diversity of the generated images. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, this paper first introduces a domain code to explicitly control the different generation tasks. Second, it brings in a squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. This paper performs qualitative and quantitative experiments on multiple unpaired benchmark image translation datasets, demonstrating the benefits of the proposed method over existing techniques. Overall, the experimental results show that the proposed method is versatile and scalable.
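As a minimal illustrative sketch (not the authors' released code), a squeeze-and-excitation block of the kind named above can be written in PyTorch as follows; the reduction ratio r=16 follows the original SE-Net formulation, and how the block is wired into the translation generator is an assumption:

```python
# Minimal squeeze-and-excitation (SE) block sketch in PyTorch.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pool
        self.fc = nn.Sequential(                 # excitation: bottleneck MLP
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # channel-wise recalibration

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```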
Funding: This work was supported in part by the National Natural Science Foundation of China under Grant 61772561 (author J.Q), http://www.nsfc.gov.cn/; in part by the Science Research Projects of Hunan Provincial Education Department under Grants 18A174 (author X.X) and 19B584 (author Y.T), http://kxjsc.gov.hnedu.cn/; in part by the Natural Science Foundation of Hunan Province under Grants 2020JJ4140 (author Y.T) and 2020JJ4141 (author X.X), http://kjt.hunan.gov.cn/; in part by the Key Research and Development Plan of Hunan Province under Grants 2019SK2022 (author Y.T) and CX20200730 (author G.H), http://kjt.hunan.gov.cn/; and in part by the Graduate Science and Technology Innovation Fund Project of Central South University of Forestry and Technology under Grant CX20202038 (author G.H), http://jwc.csuft.edu.cn/.
Abstract: Medical image segmentation is an important application of computer vision in medical image processing. Because different organs in medical images lie close together and are highly similar, current segmentation algorithms suffer from mis-segmentation and poor edge segmentation. To address these challenges, we propose a medical image segmentation network (AF-Net) based on an attention mechanism and feature fusion, which can effectively capture global information while focusing the network on the object area. In this approach, we first add dual attention blocks (DA-block), comprising parallel channel and spatial attention branches, to the backbone network to adaptively calibrate and weight features. Second, a multi-scale feature fusion block (MFF-block) is proposed to obtain feature maps with different receptive fields and gather multi-scale information at low computational cost. Finally, to restore the locations and shapes of organs, we adopt global feature fusion blocks (GFF-block) to fuse high-level and low-level information, yielding accurate pixel positioning. We evaluate our method on multiple datasets (the aorta and lung datasets), achieving 94.0% mIoU and 96.3% DICE, which shows that our approach performs better than U-Net and other state-of-the-art methods.
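A hedged sketch of what a dual attention block with parallel channel and spatial branches can look like in PyTorch (CBAM-style; the paper's exact DA-block layout, channel counts, and fusion rule are assumptions):

```python
# Hypothetical dual attention block: parallel channel and spatial branches.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels: int, r: int = 8):
        super().__init__()
        # Channel branch: global pooling -> bottleneck -> per-channel weights.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // r, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial branch: channel statistics -> 7x7 conv -> per-pixel weights.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = x * self.channel(x)
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.max(1, keepdim=True).values], dim=1)
        sa = x * self.spatial(stats)
        return ca + sa  # fuse the two parallel branches (assumed rule)

print(DualAttention(64)(torch.randn(1, 64, 48, 48)).shape)
```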
Funding: supported by the project of CSG Electric Power Research Institute (Grant No. SEPRI-K22B100).
Abstract: Current fusion methods for infrared and visible images tend to extract features at a single scale, which results in insufficient detail and incomplete feature preservation. To address these issues, we propose an infrared and visible image fusion network based on multiscale feature learning and an attention mechanism (MsAFusion). A multiscale dilated convolution framework is employed to capture image features across various scales and broaden the perceptual scope. Furthermore, an attention network is introduced to enhance the focus on salient targets in infrared images and detailed textures in visible images. To compensate for information loss during convolution, skip connections are used during the image reconstruction phase. The fusion process uses a combined loss function, consisting of a pixel loss and a gradient loss, for unsupervised fusion of infrared and visible images. Extensive experiments on a dataset of electricity facilities demonstrate that our proposed method outperforms nine state-of-the-art methods in terms of visual perception and four objective evaluation metrics.
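For illustration, a multiscale dilated-convolution block along the lines described above can be sketched as follows; the dilation rates (1, 2, 4) and the 1x1 fusion layer are assumptions, not the MsAFusion specification:

```python
# Parallel 3x3 convolutions with increasing dilation rates widen the
# receptive field without pooling; a 1x1 conv merges the scales.
import torch
import torch.nn as nn

class MultiScaleDilation(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

print(MultiScaleDilation(1, 32)(torch.randn(1, 1, 64, 64)).shape)
```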
Funding: The support of this research by the Hubei Provincial Natural Science Foundation (2022CFB449) and the Science Research Foundation of the Education Department of Hubei Province (B2020061) is gratefully acknowledged.
Abstract: Food image recognition, a nuanced subset of fine-grained image recognition, grapples with substantial intra-class variation and minimal inter-class differences. These challenges are compounded by the irregular and multi-scale nature of food images. Addressing these complexities, our study introduces an advanced model that leverages multiple attention mechanisms and multi-stage local fusion, grounded in the ConvNeXt architecture. Our model employs hybrid attention (HA) mechanisms to pinpoint critical discriminative regions within images, substantially mitigating the influence of background noise. It also introduces a multi-stage local fusion (MSLF) module, fostering long-distance dependencies between feature maps at different stages. This approach facilitates the assimilation of complementary features across scales, significantly bolstering the model's capacity for feature extraction. In addition, we constructed a dataset named Roushi60, which consists of 60 categories of common meat dishes. Empirical evaluation on the ETH Food-101, ChineseFoodNet, and Roushi60 datasets reveals that our model achieves recognition accuracies of 91.12%, 82.86%, and 92.50%, respectively. These figures not only mark improvements of 1.04%, 3.42%, and 1.36% over the foundational ConvNeXt network but also surpass the performance of most contemporary food image recognition methods. Such advancements underscore the efficacy of our proposed model in navigating the intricate landscape of food image recognition, setting a new benchmark for the field.
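A rough sketch of the general idea behind fusing feature maps from different backbone stages: later-stage maps are upsampled to the earliest resolution, concatenated, and mixed with a 1x1 convolution. The real MSLF module is more elaborate, and the ConvNeXt-like stage channel sizes here are illustrative only:

```python
# Cross-stage feature fusion sketch (not the paper's MSLF implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageFusion(nn.Module):
    def __init__(self, stage_channels=(96, 192, 384), out_ch: int = 96):
        super().__init__()
        self.fuse = nn.Conv2d(sum(stage_channels), out_ch, 1)

    def forward(self, feats):
        target = feats[0].shape[-2:]  # spatial size of the earliest stage
        up = [F.interpolate(f, size=target, mode="bilinear",
                            align_corners=False) for f in feats]
        return self.fuse(torch.cat(up, dim=1))

feats = [torch.randn(1, 96, 56, 56), torch.randn(1, 192, 28, 28),
         torch.randn(1, 384, 14, 14)]
print(StageFusion()(feats).shape)  # torch.Size([1, 96, 56, 56])
```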
Funding: This work was supported by the Sichuan Science and Technology Program (2021YFQ0003).
Abstract: Visual question answering (VQA) has attracted more and more attention in computer vision and natural language processing. Scholars are committed to studying how to better integrate image features and text features to achieve better results in VQA tasks. Analyzing all features may cause information redundancy and a heavy computational burden, and an attention mechanism is a sensible way to address this problem. However, using a single attention mechanism may leave some features insufficiently attended. This paper improves on the attention mechanism approach and proposes a hybrid attention mechanism that combines a spatial attention mechanism with a channel attention mechanism. Because the attention mechanism can cause loss of the original features, a small portion of the image features is added back as compensation. For the attention over text features, a self-attention mechanism is introduced, strengthening the internal structural features of sentences to improve the overall model. The results show that the attention mechanism and feature compensation add 6.1% accuracy to the multimodal low-rank bilinear pooling network.
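The feature-compensation idea can be sketched as follows: the attended features are summed with a small scaled copy of the original features, so that information suppressed by attention is not lost entirely. The simple gate standing in for the hybrid attention and the weight alpha=0.1 are assumptions:

```python
# Feature compensation sketch: attended features + alpha * raw features.
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    """Placeholder gate standing in for the paper's hybrid attention."""
    def __init__(self, ch: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

class CompensatedAttention(nn.Module):
    def __init__(self, ch: int, alpha: float = 0.1):
        super().__init__()
        self.attention = GatedAttention(ch)
        self.alpha = alpha  # fraction of raw features added back (assumed)

    def forward(self, x):
        return self.attention(x) + self.alpha * x

print(CompensatedAttention(64)(torch.randn(1, 64, 14, 14)).shape)
```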
Funding: Supported by the Future Network Scientific Research Fund Project of Jiangsu Province (No. FNSRFP2021YB26), the Jiangsu Key R&D Fund on Social Development (No. BE2022789), and the Science Foundation of Nanjing Institute of Technology (No. ZKJ202003).
Abstract: Facial expression recognition (FER) in video has attracted increasing interest, and many approaches have been proposed. In classifying a given video sequence into one of several basic emotions, the crucial problem is how to fuse the facial features of individual frames. In this paper, a frame-level attention module is integrated into an improved VGG-based framework, and a lightweight facial expression recognition method is proposed. The proposed network takes a sub-video clipped from an experimental video sequence as its input and generates a fixed-dimension representation. The VGG-based network with an enhanced branch embeds face images into feature vectors. The frame-level attention module learns weights that are used to adaptively aggregate the feature vectors into a single discriminative video representation. Finally, a regression module outputs the classification results. Experimental results on the CK+ and AFEW databases show that the recognition rates of the proposed method achieve state-of-the-art performance.
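A minimal sketch of frame-level attention aggregation, assuming 512-dimensional frame embeddings: each frame receives a scalar score, softmax turns the scores into weights, and the weighted sum yields one fixed-dimension video representation:

```python
# Frame-level attention pooling sketch (feature size 512 is assumed).
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one attention logit per frame

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, dim) embeddings from the CNN backbone
        w = torch.softmax(self.score(frames), dim=1)  # (batch, T, 1)
        return (w * frames).sum(dim=1)                # (batch, dim)

video = torch.randn(4, 16, 512)  # 4 clips, 16 frames each
print(FrameAttention()(video).shape)  # torch.Size([4, 512])
```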
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 61803061 and 61906026; the Innovation Research Group of Universities in Chongqing; the Chongqing Natural Science Foundation under Grants cstc2020jcyj-msxmX0577 and cstc2020jcyj-msxmX0634; the "Chengdu-Chongqing Economic Circle" innovation funding of the Chongqing Municipal Education Commission, KJCXZD2020028; the Science and Technology Research Program of the Chongqing Municipal Education Commission, Grant KJQN202000602; the Ministry of Education China Mobile Research Fund (MCM 20180404); and the special key project of Chongqing technology innovation and application development, cstc2019jscxzdztzx0068.
Abstract: Haze degrades the visual quality of images and makes it difficult to carry out advanced vision tasks, so dehazing is an important step before such tasks are executed. Traditional dehazing algorithms work by improving image brightness and contrast or by constructing artificial priors, such as the color attenuation prior and the dark channel prior, but their effect is unstable in complex scenes. Among convolutional neural network methods, encoder-decoder dehazing networks do not consider the difference between the image before and after dehazing, and spatial information is lost in the encoding stage. To overcome these problems, this paper proposes a novel end-to-end two-stream convolutional neural network for single-image dehazing. The network model is composed of a spatial information feature stream and a high-level semantic feature stream: the spatial information feature stream retains the detailed information of the dehazed image, and the high-level semantic feature stream extracts its multi-scale structural features. A spatial information auxiliary module is designed and placed between the feature streams. This module uses the attention mechanism to construct a unified representation of the different types of information and realizes the gradual restoration of the clear image, with the semantic information assisting the spatial information in the dehazing network. A parallel residual twicing module is also proposed, which performs dehazing on the difference information of features at different stages to improve the model's ability to discriminate hazy images. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to quantitatively evaluate the similarity between each algorithm's dehazing results and the original image. The SSIM and PSNR of our method reach 0.852 and 17.557 dB on the HazeRD dataset, higher than the existing comparison algorithms, and 0.955 and 27.348 dB on the SOTS dataset, which are sub-optimal results. In experiments with real haze images, the method also achieves excellent visual restoration. The experimental results show that the proposed model can restore the desired haze-free visual effect and generalizes well to real haze scenes.
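For reference, the two reported metrics are typically computed as below (standard definitions, not the authors' evaluation script); PSNR follows from the mean squared error, and SSIM is available in scikit-image (the channel_axis argument requires version 0.19 or later):

```python
# PSNR from MSE, SSIM via scikit-image; inputs are 8-bit RGB arrays.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference: np.ndarray, restored: np.ndarray,
         peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64)
                   - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
out = np.clip(ref + np.random.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)
print(psnr(ref, out), structural_similarity(ref, out, channel_axis=2))
```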
Abstract: With their high-resolution spatial and spectral information, hyperspectral images have important applications in military, aerospace, and civilian remote sensing, and are thus of great research significance. Deep learning, with its strong learning ability, broad coverage, and good portability, has become a hotspot in research on high-accuracy hyperspectral image classification. Among deep learning methods, convolutional neural networks (CNNs) are widely used for hyperspectral image classification because of their powerful feature extraction ability and have produced effective results. However, such methods are usually based on either 2D-CNNs or 3D-CNNs alone and target only a single kind of hyperspectral feature: first, they cannot make full use of the complete feature information of the hyperspectral data itself; second, although the corresponding extraction networks optimize local features well, their overall generalization ability is insufficient, which limits the deep mining of the spatial and spectral information of HSI. In view of this, a hybrid convolutional neural network model based on an attention mechanism (HybridSN_AM) is proposed. Principal component analysis (PCA) is used to reduce the dimensionality of the hyperspectral image, a convolutional neural network serves as the backbone of the classification model, and an attention mechanism selects the more discriminative features, so that the model can extract more accurate and more essential spatial-spectral information and achieve high-accuracy classification of hyperspectral images. Experiments on the Indian Pines (IP), University of Pavia (UP), and Salinas (SA) datasets show that the overall accuracy, average accuracy, and Kappa coefficient of the model exceed 98.14%, 97.17%, and 97.87%, respectively. Compared with the conventional HybridSN model, HybridSN_AM improves classification accuracy on the three datasets by 0.89%, 0.07%, and 0.73%, respectively. The model effectively addresses the difficulty of extracting and fusing the spatial-spectral features of hyperspectral images, improves the accuracy of HSI classification, and shows strong generalization ability, fully verifying the effectiveness and feasibility of combining an attention mechanism with a hybrid convolutional neural network for hyperspectral image classification. This is of significant scientific value for the development and application of hyperspectral image classification technology.
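A small sketch of the PCA preprocessing step described above, assuming an (H, W, B) data cube and scikit-learn; the number of retained components k=30 is an assumed value:

```python
# Reduce the spectral dimension of an HSI cube with PCA: flatten the
# (H, W, B) cube to one spectrum per pixel, project to k components,
# and reshape back to a (H, W, k) cube for the CNN classifier.
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(cube: np.ndarray, k: int = 30) -> np.ndarray:
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)                 # (H*W, B) spectra
    reduced = PCA(n_components=k).fit_transform(flat)
    return reduced.reshape(h, w, k)

hsi = np.random.rand(145, 145, 200)            # Indian Pines-like dimensions
print(reduce_bands(hsi).shape)                 # (145, 145, 30)
```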