Journal Articles
Found 40 articles
A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
1
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, Issue 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, yielding a lightweight and computationally inexpensive network; the depthwise dilated convolution in the DDSC layer also effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that processes the input feature map through a parallel multi-resolution branch architecture to extract multi-scale feature information. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining performance relative to the MobileNetV1 baseline.
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
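The 65.02% parameter reduction reported above follows from how depthwise separable convolutions factor a standard convolution into a per-channel spatial filter plus a 1x1 channel mixer. A back-of-envelope sketch (not the paper's code; the layer sizes below are invented for illustration) compares the parameter counts; note that dilation enlarges the receptive field without adding any parameters:

```python
def conv2d_params(in_ch, out_ch, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return in_ch * out_ch * k * k

def depthwise_separable_params(in_ch, out_ch, k):
    """Depthwise k x k conv followed by a 1x1 pointwise conv (bias omitted).
    Dilating the depthwise kernel widens its field of view at zero extra cost."""
    depthwise = in_ch * k * k   # one k x k filter per input channel
    pointwise = in_ch * out_ch  # 1x1 conv mixing channels
    return depthwise + pointwise

# Example: a 3x3 layer mapping 128 -> 256 channels
std = conv2d_params(128, 256, 3)                # 294912
sep = depthwise_separable_params(128, 256, 3)   # 1152 + 32768 = 33920
print(std, sep, f"{100 * (1 - sep / std):.1f}% fewer parameters")
```

For this hypothetical layer the separable form needs roughly a ninth of the standard parameters; the paper's network-level 65.02% figure also reflects layers that are not replaced.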
Underwater Image Enhancement Based on Multi-scale Adversarial Network
2
Authors: ZENG Jun-yang, SI Zhan-jun. 《印刷与数字媒体技术研究》 (CAS, PKU Core), 2024, Issue 5, pp. 70-77 (8 pages)
In this study, an underwater image enhancement method based on a multi-scale adversarial network is proposed to address detail blur and color distortion in underwater images. First, the proposed residual dense block enhances the local features of each layer into global features, ensuring that the generated images retain more detail. Second, a multi-scale structure extracts multi-scale semantic features from the original images. Finally, the features obtained from the dual channels are fused by an adaptive fusion module for further optimization. The discriminator network adopts the structure of a Markovian discriminator. In addition, mean squared error, structural similarity, and perceptual color loss functions keep the generated image consistent with the reference image in structure, color, and content. Experimental results show that the proposed algorithm deblurs underwater images well and effectively mitigates underwater color bias; in both subjective and objective evaluation indexes, it outperforms the comparison algorithms.
Keywords: underwater image enhancement; generative adversarial network; multi-scale feature extraction; residual dense block
MSD-Net: Pneumonia Classification Model Based on Multi-Scale Directional Feature Enhancement
3
Authors: Tao Zhou, Yujie Guo, Caiyue Peng, Yuxia Niu, Yunfeng Pan, Huiling Lu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4863-4882 (20 pages)
Computer-aided diagnosis of pneumonia based on deep learning is a research hotspot. However, existing methods do not sufficiently capture features of different sizes and different directions when extracting features from lung X-ray images. A pneumonia classification model based on multi-scale directional feature enhancement, MSD-Net, is proposed in this paper. The main innovations are as follows. First, the Multi-scale Residual Feature Extraction Module (MRFEM) is designed to extract multi-scale features effectively, using dilated convolutions with different expansion rates to enlarge the receptive field. Second, the Multi-scale Directional Feature Perception Module (MDFPM) is designed, which uses a three-branch structure with convolutions of different sizes to transmit directional features layer by layer, focusing on the target region to enhance feature information. Third, the Axial Compression Former Module (ACFM) performs global computation to strengthen the perception of global features in different directions. To verify the effectiveness of MSD-Net, comparative and ablation experiments are carried out. On the COVID-19 RADIOGRAPHY DATABASE, the accuracy, recall, precision, F1 score, and specificity of MSD-Net are 97.76%, 95.57%, 95.52%, 95.52%, and 98.51%, respectively; on the chest X-ray dataset, they are 97.78%, 95.22%, 96.49%, 95.58%, and 98.11%. The model effectively improves the accuracy of lung image recognition and provides an important clinical reference for pneumonia computer-aided diagnosis.
Keywords: pneumonia; X-ray image; ResNet; multi-scale feature; directional feature; Transformer
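MRFEM's dilated convolutions with different expansion rates enlarge the receptive field without extra parameters. A small sketch (illustrative only; the kernel and dilation settings below are assumptions, not taken from the paper) shows how the receptive field of a stack of convolutions can be computed:

```python
def receptive_field(layers):
    """Receptive field of stacked conv layers.
    layers: list of (kernel_size, stride, dilation) tuples, applied in order."""
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump  # dilation widens the effective kernel
        jump *= s                 # stride widens the step between samples
    return rf

# Three 3x3 convs, stride 1: plain vs. dilation rates 1, 2, 4
plain   = receptive_field([(3, 1, 1)] * 3)                  # 7
dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])  # 15
print(plain, dilated)
```

With the same parameter budget, the dilated stack more than doubles the receptive field, which is the mechanism the abstract appeals to.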
Integrating Transformer and Bidirectional Long Short-Term Memory for Intelligent Breast Cancer Detection from Histopathology Biopsy Images
4
Authors: Prasanalakshmi Balaji, Omar Alqahtani, Sangita Babu, Mousmi Ajay Chaurasia, Shanmugapriya Prakasam. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 443-458 (16 pages)
Breast cancer is a significant threat to the global population, affecting not only women but the population as a whole. With recent advancements in digital pathology, hematoxylin and eosin images provide enhanced clarity in examining the microscopic features of breast tissues based on their staining properties. Early cancer detection accelerates the therapeutic process, thereby increasing survival rates. Analysis by medical professionals, especially pathologists, is time-consuming and challenging, so automated breast cancer detection systems are needed. Emerging artificial intelligence platforms, especially deep learning models, play an important role in image diagnosis and prediction. Initially, histopathology biopsy images are taken from standard data sources. The gathered images are fed into a Multi-Scale Dilated Vision Transformer, which acquires the essential features. The features are then passed to a Bidirectional Long Short-Term Memory (Bi-LSTM) network for classifying the breast cancer disorder. The efficacy of the model is evaluated using divergent metrics; compared with other methods, the proposed work delivers impressive detection results.
Keywords: bidirectional long short-term memory; breast cancer detection; feature extraction; histopathology biopsy images; multi-scale dilated vision transformer
Multi-Scale Feature Extraction for Joint Classification of Hyperspectral and LiDAR Data
5
Authors: Yongqiang Xi, Zhen Ye. Journal of Beijing Institute of Technology (EI, CAS), 2023, Issue 1, pp. 13-22 (10 pages)
With the development of sensors, the application of multi-source remote sensing data has drawn wide attention. Since hyperspectral images (HSI) contain rich spectral information while light detection and ranging (LiDAR) data contain elevation information, their joint use for ground-object classification can yield positive results, especially with deep networks. Fortunately, multi-scale deep networks make it possible to expand the receptive fields of convolutions without the computational and training problems caused by simply adding more network layers. In this work, a multi-scale feature fusion network is proposed for the joint classification of HSI and LiDAR data. First, a multi-scale spatial feature extraction module with cross-channel connections is designed, through which the spatial information of the HSI data and the elevation information of the LiDAR data are extracted and fused. In addition, a multi-scale spectral feature extraction module extracts multi-scale spectral features of the HSI data. Finally, joint multi-scale features are obtained by weighting and concatenation operations and fed into the classifier. To verify the effectiveness of the proposed network, experiments are carried out on the MUUFL Gulfport and Trento datasets. The results demonstrate that the classification performance of the proposed method is superior to that of other state-of-the-art methods.
Keywords: hyperspectral image (HSI); light detection and ranging (LiDAR); multi-scale feature; classification
Bidirectional Parallel Multi-Branch Convolution Feature Pyramid Network for Target Detection in Aerial Images of Swarm UAVs (Cited by 3)
6
Authors: Lei Fu, Wen-bin Gu, Wei Li, Liang Chen, Yong-bao Ai, Hua-lei Wang. Defence Technology (SCIE, EI, CAS, CSCD), 2021, Issue 4, pp. 1531-1541 (11 pages)
In this paper, based on a bidirectional parallel multi-branch feature pyramid network (BPMFPN), a novel one-stage object detector called BPMFPN Det is proposed for real-time detection of ground multi-scale targets by swarm unmanned aerial vehicles (UAVs). First, bidirectional parallel multi-branch convolution modules are used to construct the feature pyramid, enhancing the feature expression abilities of feature layers at different scales. Next, the feature pyramid is integrated into the single-stage object detection framework to ensure real-time performance. Experiments on four datasets validate the effectiveness of the proposed algorithm: it achieves a mean average precision (mAP) of 85.4 on the PASCAL VOC 2007 test set; 73.9 mAP on the DIOR (detection in optical remote sensing) dataset; 97.4 mAP for small land vehicle (slv) targets on the VEDAI (vehicle detection in aerial imagery) dataset; and 48.75 mAP on the UAVDT (unmanned aerial vehicle detection and tracking) dataset. Compared with previous state-of-the-art methods, the proposed algorithm obtains more competitive results and can effectively solve the problem of real-time detection of ground multi-scale targets in aerial images of swarm UAVs.
Keywords: aerial images; object detection; feature pyramid networks; multi-scale feature fusion; swarm UAVs
IMTNet:Improved Multi-Task Copy-Move Forgery Detection Network with Feature Decoupling and Multi-Feature Pyramid
7
Authors: Huan Wang, Hong Wang, Zhongyuan Jiang, Qing Qian, Yong Long. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 4603-4620 (18 pages)
Copy-Move Forgery Detection (CMFD) is a technique designed to identify image tampering and locate suspicious areas. However, the practicality of CMFD is impeded by the scarcity of datasets, their inadequate quality and quantity, and a narrow range of applicable tasks, which significantly restrict its capacity and applicability. To overcome these limitations, a novel solution called IMTNet is proposed for CMFD using a feature-decoupling approach. First, this study formulates the objective task and the network relationship as an optimization problem using transfer learning, and thoroughly analyzes the relationship between CMFD and deep network architecture using ResNet-50 during the optimization phase. Second, a quantitative comparison between fine-tuning and feature decoupling is conducted with the enhanced ResNet-50 to evaluate the degree of similarity between the image classification and CMFD domains. Finally, suspicious regions are localized using a feature pyramid network with bottom-up path augmentation. Experimental results demonstrate that IMTNet achieves faster convergence, shorter training times, and favorable generalization performance compared with existing methods, and significantly outperforms fine-tuning-based approaches in accuracy and F1 score.
Keywords: image copy-move detection; feature decoupling; multi-scale feature pyramids; passive forensics
Fusion of Infrared Images and 3D Point Clouds Based on a "Cross" Marker
8
Authors: 郑叶龙, 李长勇, 夏宁宁, 李玲一, 张国民, 赵美蓉. 《天津大学学报(自然科学与工程技术版)》 (Journal of Tianjin University (Science and Technology); EI, CAS, CSCD, PKU Core), 2024, Issue 10, pp. 1090-1099 (10 pages)
Infrared thermal imaging is widely used in many fields, and building a 3D temperature field model that combines spatial and temperature information would extend the technology to further applications. This paper proposes a heterogeneous spatial data fusion method that fuses infrared images with 3D point clouds to obtain a 3D temperature field model. Because the imaging principles of infrared and visible-light cameras differ, intrinsic calibration with a conventional calibration board is difficult; a hollowed circular-hole calibration board was therefore designed and fabricated based on the imaging characteristics of infrared cameras, yielding an average intrinsic reprojection error of 0.03 pixels. To address the differing imaging principles of infrared and structured-light cameras, and the complexity and low extrinsic accuracy of existing markers, a "cross" marker exploiting the radiance difference between materials was designed and used for joint calibration. To overcome the difficulty of identifying corresponding feature points, dedicated extraction methods were designed for the infrared images and the 3D point clouds and used together with the "cross" marker; their detection repeatability reached 75% and 92%, respectively, both improvements over conventional methods. The method was used to build 3D temperature field models of a paper cup, a workpiece, and a human face. Experimental results show that the hollowed circular-hole board enables intrinsic calibration of the infrared camera, and that the corresponding-feature-point extraction applied to the "cross" marker completes the joint calibration of the infrared and structured-light cameras. The final 3D temperature field model has an average reprojection error of 1.70 pixels, an improvement in accuracy over existing methods.
Keywords: infrared image; 3D point cloud; marker; corresponding feature points; system calibration; heterogeneous spatial data fusion
Structural Information Extracted by Spatial Heterogeneity Operations to Assist Remote Sensing Image Classification
9
Authors: 裴晨阳, 张廷龙, 高焕霖, 张青峰. 《西北林学院学报》 (Journal of Northwest Forestry University; CSCD, PKU Core), 2024, Issue 3, pp. 171-178
Using Landsat-8 and Gaofen-1 (GF-1) data as examples, spatial-spectral information was extracted either from spectral features alone or from spectral features assisted by three kinds of texture features (probability statistics, gray-level co-occurrence matrix, and spatial heterogeneity operations), and pixel-based land-cover classification was performed with a support vector machine classifier. The results show that: 1) classification assisted by texture features is clearly superior to classification using spectral features only, improving accuracy by 8.62% to 24.36%; 2) compared with the probability-statistics and gray-level co-occurrence matrix results, the spatial-heterogeneity results improve classification accuracy by 13.31% and 2.03% on GF-1 imagery, and by 11.62% and 7.79% on Landsat-8 imagery; 3) for linear ground objects, the corresponding improvements are 29.31% and 0.80% on GF-1 data and 11.90% and 6.64% on Landsat-8 data, effectively reducing classification error. Spatial structural information extracted by spatial heterogeneity operations, used to assist spectral features, therefore markedly improves remote sensing classification accuracy and offers a new approach to structure-assisted land-cover classification and the extraction of linear ground objects.
Keywords: linear ground objects; spatial heterogeneity operation; texture features; land-cover classification; remote sensing imagery
An Image Recognition Model Based on Heterogeneous Ensemble Deep Learning over Multiple Features and Its Application (Cited by 2)
10
Authors: 汤健, 田昊, 夏恒, 王子轩, 徐喆, 韩红桂. 《北京工业大学学报》 (Journal of Beijing University of Technology; CAS, CSCD, PKU Core), 2024, Issue 1, pp. 27-37
With the continued development of urban mineral-resource recycling, the recovery of used mobile phones has become a research hotspot. Constrained by limited computing and data resources, the recognition accuracy of current offline intelligent recycling equipment falls short of practical application. To address this, an image recognition model based on heterogeneous ensemble deep learning over multiple features is proposed. First, the character region awareness for text detection (CRAFT) algorithm extracts the character regions on the back of the phone, and an ImageNet-pretrained VGG19 model serves as the image feature embedding network; following the transfer-learning paradigm, local character features and global image features of the phone to be recycled are extracted. Then the local features are used to build a neural-network-style optical character recognition (OCR) model, while the global and local features together build a non-neural deep forest classification (DFC) model. Finally, the outputs of the heterogeneous OCR and DFC models are combined into a vector and fed to Softmax for ensembling, and the final recognition result is obtained by the maximum weighted-score criterion. The effectiveness of the method is verified on real images from used-phone recycling equipment.
Keywords: used mobile phones; image recognition; transfer learning; multiple features; OCR; deep forest; heterogeneous ensemble
A Stereoscopic Image Retargeting Method Based on Salient-Feature Classification
11
Authors: 黄悦铭, 唐振华. 《无线电工程》 (Radio Engineering), 2024, Issue 2, pp. 267-275
Existing stereoscopic image retargeting methods apply the same strategy to images with different characteristics, so some retargeted stereo images suffer information loss, shape distortion, or depth change. The quality of stereoscopic retargeting results mainly depends on changes in the shape of salient regions and in the perceived depth. To address these problems, a stereoscopic image retargeting method based on salient-feature classification is proposed: images are divided into two classes, with and without salient regions, and different retargeting strategies combining stereo intelligent cropping with stereo non-uniform mapping are applied to each class to reduce information loss and geometric distortion. Exploiting the difference in depth information between salient and non-salient regions better preserves the depth perception of salient images. Experimental results show that the proposed method outperforms other algorithms in both subjective comparison and objective metrics.
Keywords: stereoscopic image retargeting; salient-feature classification; stereo intelligent cropping; stereo non-uniform mapping; depth information difference
Preliminary Clinical Application of T2 Mapping Radiomics Features for Differentiating Benign and Malignant Breast Lesions (Cited by 3)
12
Authors: 黄文平, 王芬, 刘鸿利, 余雅丽, 娄鉴娟, 邹启桂, 王思奇, 蒋燕妮. 《磁共振成像》 (Chinese Journal of Magnetic Resonance Imaging; CAS, CSCD, PKU Core), 2023, Issue 2, pp. 50-55
Objective: To explore the value of radiomics features based on MRI T2 mapping for predicting whether breast lesions are benign or malignant. Materials and Methods: Breast T2 mapping images of 113 pathologically confirmed patients (51 benign, 62 malignant) were analyzed retrospectively. Lesion regions of interest were delineated manually in ITK-SNAP, and radiomics features were extracted with A.K. software (AnalysisKit, GE Healthcare). Patients were divided into two groups according to pathology, and consistency was tested with the intraclass correlation coefficient. The benign and malignant groups were each randomly split into training and test sets at a 7:3 ratio. Feature dimensionality reduction and selection on the training set used Z-score standardization, the Pearson correlation coefficient method, and recursive feature elimination; a logistic regression classifier was used for modeling with 5-fold cross-validation. Receiver operating characteristic (ROC) curves were plotted on the training and test sets to assess the model's diagnostic performance, and decision curve analysis (DCA) evaluated its clinical utility. Results: Of 107 quantitative features extracted, six remained after dimensionality reduction and selection: original_shape_Sphericity, original_glcm_InverseVariance, original_glrlm_GrayLevelNonUniformityNormalized, original_glrlm_ShortRunEmphasis, original_glszm_GrayLevelNonUniformityNormalized, and original_ngtdm_Coarseness. On the test set, the area under the ROC curve was 0.895 (95% CI: 0.768-0.990), with sensitivity 94.7%, specificity 80.0%, and accuracy 88.2%. Conclusion: Radiomics features based on MRI T2 mapping can predict the benign or malignant nature of breast lesions preoperatively with high accuracy.
Keywords: breast; T2 mapping; magnetic resonance imaging; radiomics; texture features; heterogeneity
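The reported sensitivity (94.7%), specificity (80.0%), and accuracy (88.2%) are simple functions of the test-set confusion matrix. The sketch below recomputes them from hypothetical counts (a 34-case test split of 19 malignant and 15 benign, consistent with a 7:3 split of 113 patients but not stated in the abstract):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts.
    tp/fn: malignant cases detected/missed; tn/fp: benign cases cleared/flagged."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 18 of 19 malignant detected, 12 of 15 benign cleared
sens, spec, acc = diagnostic_metrics(tp=18, fn=1, tn=12, fp=3)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```

With these assumed counts the three values reproduce the abstract's figures, which illustrates how the metrics relate; the paper's actual confusion matrix is not given here.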
Value of MRI Texture Analysis in Evaluating the Pathological Heterogeneity of Cervical Cancer
13
Authors: 林宇宁, 李华灿, 唐劲松, 张玉琴, 郑春红, 李辉. 《医疗卫生装备》 (Chinese Medical Equipment Journal; CAS), 2023, Issue 5, pp. 54-58
Objective: To explore the value of MRI texture analysis in evaluating the pathological heterogeneity of cervical cancer. Methods: Pre-treatment MRI data of 85 cervical cancer patients confirmed by surgical or biopsy pathology between January 2019 and March 2022 were analyzed retrospectively. Patients were grouped by pathological characteristics: International Federation of Gynecology and Obstetrics (FIGO) stage, histological type, degree of differentiation, presence of lymphovascular/perineural invasion, and Ki-67 expression. First-order texture parameters of T2WI and apparent diffusion coefficient (ADC) maps were compared between groups using the t-test or Mann-Whitney U test, and ROC curve analysis assessed the ability of the texture parameters to discriminate the pathological characteristics of cervical cancer. Results: Apart from the T2WI and ADC texture parameters showing no significant difference between the groups with and without lymphovascular/perineural invasion (P > 0.05), some texture parameters of both map types differed significantly across the other pathological groupings (P < 0.05). ROC analysis showed larger AUC values for the first-order texture parameters of ADC maps than for T2WI. Conclusion: First-order texture features of both T2WI and ADC maps reflect the pathological heterogeneity of cervical cancer, and ADC-map texture parameters discriminate its pathological characteristics more accurately than those of T2WI.
Keywords: cervical cancer; texture features; pathological characteristics; magnetic resonance imaging; diffusion-weighted imaging; texture analysis; tumor heterogeneity
Multi-source Remote Sensing Image Registration Based on Contourlet Transform and Multiple Feature Fusion (Cited by 6)
14
Authors: Huan Liu, Gen-Fu Xiao, Yun-Lan Tan, Chun-Juan Ouyang. International Journal of Automation and Computing (EI, CSCD), 2019, Issue 5, pp. 575-588 (14 pages)
Image registration is an indispensable component of multi-source remote sensing image processing. In this paper, we put forward a remote sensing image registration method that includes an improved multi-scale, multi-direction Harris algorithm and a novel compound feature. Multi-scale circle Gaussian combined invariant moments and a multi-direction gray-level co-occurrence matrix are extracted as features for image matching. The proposed algorithm is evaluated on numerous multi-source remote sensor images with noise and illumination changes. Extensive experimental studies show that the method yields stable, evenly distributed key points and robust, accurate correspondence matches, making it a promising scheme for multi-source remote sensing image registration.
Keywords: feature fusion; multi-scale circle Gaussian combined invariant moment; multi-direction gray-level co-occurrence matrix; multi-source remote sensing image registration; Contourlet transform
Cross-Modality Person Re-Identification via Joint Optimization of Images and Features
15
Authors: 张辉, 刘世洪, 钟武. 《荆楚理工学院学报》 (Journal of Jingchu University of Technology), 2023, Issue 2, pp. 9-17
Visible-infrared person re-identification (VI-ReID) aims to match pedestrian images captured by visible-light and infrared cameras, which is highly challenging. To reduce the modality discrepancy between visible and infrared images, this paper proposes a heterogeneous image augmentation method and a cross-modality feature alignment method to optimize the VI-ReID network. A lightweight heterogeneous image convolutional generator augments the visible images, color jitter augments the infrared images, and positive samples are used to optimize the lightweight generator and constrain its loss. On this basis, two modality classifiers and a cross-modality feature alignment loss serve as guidance for continually learning modality-shared features. Extensive experiments on two datasets show that the method performs excellently, reaching rank-1/mAP of 57.82%/54.35% on SYSU-MM01 and 80.39%/75.05% on RegDB.
Keywords: cross-modality person re-identification; modality discrepancy; heterogeneous image augmentation; cross-modality feature alignment
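Rank-1 and mAP, the figures quoted above, are standard retrieval metrics: rank-1 asks whether the top gallery match shares the query identity, and mAP averages per-query average precision over all queries. A minimal sketch of both (toy data, not the paper's evaluation code):

```python
def average_precision(ranked_ids, query_id):
    """AP for one query: ranked_ids is the gallery's identity list,
    sorted by descending similarity to the query."""
    hits, precisions = 0, []
    for rank, gid in enumerate(ranked_ids, start=1):
        if gid == query_id:
            hits += 1
            precisions.append(hits / rank)  # precision at each true match
    return sum(precisions) / hits if hits else 0.0

def rank1(ranked_ids, query_id):
    """1.0 if the top-ranked gallery item has the query identity, else 0.0."""
    return 1.0 if ranked_ids[0] == query_id else 0.0

# Toy ranking for a query of identity 7: matches at ranks 1, 3, and 5
ranking = [7, 3, 7, 5, 7]
print(rank1(ranking, 7), round(average_precision(ranking, 7), 4))
```

Averaging these two quantities over every query in the test set gives the rank-1 accuracy and mAP reported in the abstract.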
Enhanced Canonical Correlation Analysis and Its Application to Feature Fusion in Face Recognition (Cited by 16)
16
Authors: 赵松, 张志坚, 张培仁. 《计算机辅助设计与图形学学报》 (Journal of Computer-Aided Design & Computer Graphics; EI, CSCD, PKU Core), 2009, Issue 3, pp. 394-399
Building on classical canonical correlation analysis (CCA), this paper defines class correlation and proposes enhanced canonical correlation analysis (ECCA). Given two observation spaces of one pattern space (two observation vectors for any pattern), ECCA finds correlated subspaces of the two observation spaces that are more meaningful for classification, while preserving the uncorrelatedness of the projected components. Experimental results show that ECCA outperforms the CCA and GCCA fusion methods.
Keywords: enhanced canonical correlation analysis; face recognition; feature fusion; heterogeneous image fusion
Heterogeneous Image Fusion: Current Status and Trends (Cited by 3)
17
Authors: 石强, 张斌, 陈喆, 时公涛, 陈东, 秦前清. 《自动化学报》 (Acta Automatica Sinica; EI, CSCD, PKU Core), 2014, Issue 3, pp. 385-396
Differences in imaging mechanisms create essential differences between heterogeneous image data, making pixel-level fusion very difficult; heterogeneous image fusion therefore concentrates on the feature and decision levels. Starting from the basic principles of information fusion, this paper reviews the state of research on heterogeneous image fusion architectures and on feature-level and decision-level fusion algorithms. It further analyzes open problems in heterogeneous image fusion and points out future directions.
Keywords: image fusion; heterogeneous images; feature-level fusion; decision-level fusion
Visible and Infrared Image Matching Based on CycleGAN-SIFT (Cited by 9)
18
Authors: 郝帅, 吴瑛琦, 马旭, 何田, 文虎, 王峰. 《光学精密工程》 (Optics and Precision Engineering; EI, CAS, CSCD, PKU Core), 2022, Issue 5, pp. 602-614
Because infrared and visible-light images are formed by different imaging mechanisms, traditional matching algorithms suffer low matching accuracy and poor robustness on such image pairs. A visible-infrared image matching algorithm based on CycleGAN-SIFT is proposed. To reduce the impact of the feature discrepancy between visible and infrared images on the matching result, a CycleGAN with weights shared via transfer learning generates pseudo-infrared images from the visible and infrared inputs; the SIFT algorithm then extracts and matches feature points between the pseudo-infrared and infrared images. To lower the false-match rate, RANSAC removes mismatched point pairs. Finally, the feature points on the pseudo-infrared image are mapped back to the visible image, completing the visible-infrared match. To validate the proposed algorithm, four groups of heterogeneous images from the OTCBVS and TNO Image Fusion datasets were compared against four classical algorithms (SIFT, Canny-SIFT, SURF, and CMM-Net) under three conditions: noise-free, noisy, and with angular distortion. The results show a matching accuracy above 95% without angular distortion or noise, and the accuracy remains above 95% under angular distortion and noise, demonstrating high matching precision and strong robustness.
Keywords: image matching; heterogeneous images; cycle generative adversarial network; scale-invariant features; random sample consensus (RANSAC)
An Image Hashing Algorithm Based on Joint Features and Central Direction Information (Cited by 9)
19
Authors: 王彦超. 《西南大学学报(自然科学版)》 (Journal of Southwest University (Natural Science Edition); CAS, CSCD, PKU Core), 2018, Issue 2, pp. 113-124
To improve the ability of hashing techniques to recognize rotation operations, an image hashing authentication technique coupling global-local joint features with central direction estimation is proposed. First, 2D linear interpolation preprocesses the input image so that the hash sequence has a fixed length under arbitrary scaling. The preprocessed image is converted to the HSV color space; a two-dimensional discrete wavelet transform (DWT) processes the V component, and its low-frequency coefficients form a secondary image. Singular value decomposition (SVD) then processes the secondary image to extract global features as the first intermediate hash sequence. Based on a Fourier mechanism, a spectral-residual method locates the salient region of the image and obtains local position and texture features as the second intermediate hash sequence. A Radon transform then computes the image's central direction information, which is combined with the two intermediate hash sequences into a transitional hash array. Using a Logistic map, dynamic engine parameters are defined to design a piecewise heterogeneous diffusion scheme that encrypts the transitional array and outputs the final hash sequence. Finally, authentication is completed by computing the Hamming distance between the original and test hash sequences and comparing it with a user threshold. Experimental results show that, compared with current image hashing techniques, the proposed algorithm offers higher robustness and security and better recognizes rotation attacks.
Keywords: image hashing; HSV color space; global-local joint features; spectral residual; salient region; central direction information; piecewise heterogeneous diffusion
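The final authentication step described above, comparing the Hamming distance between hash sequences against a user threshold, can be sketched in a few lines (the hash strings and threshold below are invented for illustration, not taken from the paper):

```python
def hamming_distance(h1, h2):
    """Bit-level Hamming distance between two equal-length hash strings."""
    if len(h1) != len(h2):
        raise ValueError("hash sequences must have equal length")
    return sum(a != b for a, b in zip(h1, h2))

def authenticate(h_orig, h_test, threshold):
    """Accept the image as authentic if the distance stays within threshold."""
    return hamming_distance(h_orig, h_test) <= threshold

h_orig = "1011010011100101"
h_rot  = "1011010011100111"  # e.g. after a mild content-preserving operation
h_fake = "0100101100011010"  # unrelated image
print(authenticate(h_orig, h_rot, threshold=3))   # True
print(authenticate(h_orig, h_fake, threshold=3))  # False
```

Robustness then amounts to content-preserving operations (scaling, mild rotation) flipping few hash bits, while distinct images produce distances well above the threshold.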
Hyperspectral Image Classification Combining a Parallel CNN with an Extreme Learning Machine (Cited by 4)
20
Authors: 任彦, 高晓文, 杨静, 叶玉伟, 王佳鑫. 《遥感信息》 (Remote Sensing Information; CSCD, PKU Core), 2022, Issue 3, pp. 34-41
Traditional methods cannot fully exploit the spatial and spectral information of hyperspectral images and thus cannot further improve classification accuracy. To address this, a classification model combining a spatial-spectral convolutional neural network with a parallel heterogeneous extreme learning machine (SSCNN-PELM) is proposed. The convolutional neural network (CNN) consists of parallel two-dimensional (2D-CNN) and one-dimensional (1D-CNN) convolutions: the 2D-CNN extracts spatial information and part of the spectral information, while the 1D-CNN compensates for the lost spectral information. The parallel heterogeneous extreme learning machine (PELM) maps the input-layer data to the hidden layer in parallel and simultaneously solves the connection weights of the parallel hidden layers, accomplishing feature fusion and classification. Experimental results show that SSCNN-PELM achieves overall accuracies of 99.07% and 99.51% on the Indian Pines and Pavia University datasets; compared with support vector machine (SVM), CNN, and other classification methods, SSCNN-PELM improves accuracy while maintaining classification speed.
Keywords: hyperspectral image classification; convolutional neural network; feature extraction; parallel heterogeneous extreme learning machine; spatial-spectral joint features