Funding: This work was supported by the National Natural Science Foundation of China (Nos. 62006225, 61906199, and 62071468) and the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS), China (No. XDA 27040700), and sponsored by the Beijing Nova Program, China (Nos. Z201100006820050 and Z211100002121010).
Abstract: In the daily operation of an iris-recognition-at-a-distance (IAAD) system, many low-quality ocular images are acquired. Because the iris region of these images often fails to meet recognition requirements, the more accessible periocular regions are a useful complement for recognition. To further boost the performance of IAAD systems, a novel end-to-end framework for multi-modal ocular recognition is proposed. The framework mainly consists of iris/periocular feature extraction and matching, unsupervised iris quality assessment, and a score-level adaptive weighted fusion strategy. First, ocular feature reconstruction (OFR) is proposed to sparsely reconstruct each probe image from high-quality gallery images based on appropriate feature maps. Next, a new unsupervised iris quality assessment method based on random multiscale embedding robustness is proposed; unlike existing iris quality assessment methods, it measures the quality of an iris image by its robustness in the embedding space. Finally, the fusion strategy uses the iris quality score as the fusion weight to combine the complementary information from the iris and periocular regions. Extensive experimental results on ocular datasets show that the proposed method clearly outperforms unimodal biometrics and that the fusion strategy significantly improves recognition performance.
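The score-level fusion described above can be sketched in a few lines: the iris quality score serves as the fusion weight, with the periocular score receiving the complementary weight. The function name `fuse_scores` and the assumption that the quality score lies in [0, 1] are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def fuse_scores(iris_score, periocular_score, iris_quality):
    """Score-level adaptive weighted fusion: the iris quality score
    (assumed in [0, 1]) weights the iris match score; the periocular
    score receives the complementary weight (1 - quality)."""
    q = float(np.clip(iris_quality, 0.0, 1.0))
    return q * iris_score + (1.0 - q) * periocular_score

# With a high-quality iris (q = 0.8), fusion leans on the iris score;
# with a degraded iris (q = 0.1), it falls back to the periocular score.
high_q = fuse_scores(0.9, 0.5, 0.8)
low_q = fuse_scores(0.9, 0.5, 0.1)
```

Because the weight adapts per probe image, a degraded iris no longer drags down a good periocular match, which is the intuition behind the reported gain over unimodal matching.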
Funding: This work was funded by the Major Scientific and Technological Innovation Project of Shandong Province, Grant No. 2022CXGC010609.
Abstract: Semantic segmentation of remote sensing images is one of the core tasks of remote sensing image interpretation. With the continuous development of artificial intelligence, the use of deep learning methods for interpreting remote sensing images has matured. However, existing neural networks disregard the spatial relationships between targets in remote sensing images, and semantic segmentation models that combine convolutional neural networks (CNNs) with graph convolutional networks (GCNs) suffer from imprecise feature boundaries, which leads to unsatisfactory segmentation of target boundaries. In this paper, we propose a new semantic segmentation model for remote sensing images (hereinafter called DGCN), which combines deep semantic segmentation networks (DSSN) and GCNs. In the GCN module, a loss function for boundary information is employed to optimize the learning of spatial relationship features between targets. A hierarchical fusion method is used for feature fusion and classification to optimize the spatial relationship information in the original features. Extensive experiments on the ISPRS 2D and DeepGlobe semantic segmentation datasets show that, compared with existing semantic segmentation models for remote sensing images, DGCN significantly improves the segmentation of feature boundaries, effectively reduces noise in the segmentation results, and improves segmentation accuracy, demonstrating the advances of our model.
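One common way to realize a boundary-oriented loss of the kind the abstract mentions is to derive a boundary map from the label mask and up-weight the pixel-wise cross-entropy there. The sketch below is a minimal NumPy illustration of that idea; the helper names (`boundary_mask`, `boundary_weighted_ce`) and the simple 4-neighbourhood boundary definition are assumptions for illustration, not the DGCN paper's exact formulation.

```python
import numpy as np

def boundary_mask(labels):
    """Mark pixels whose 4-neighbourhood contains a different class:
    a crude boundary map derived directly from the label mask."""
    b = np.zeros_like(labels, dtype=bool)
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]   # down
    b[1:, :]  |= labels[1:, :]  != labels[:-1, :]  # up
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # right
    b[:, 1:]  |= labels[:, 1:]  != labels[:, :-1]  # left
    return b

def boundary_weighted_ce(probs, labels, w_boundary=2.0):
    """Pixel-wise cross-entropy with extra weight on boundary pixels.
    probs: (H, W, C) class probabilities; labels: (H, W) int mask."""
    h, w = labels.shape
    # Probability assigned to the true class at each pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    weights = np.where(boundary_mask(labels), w_boundary, 1.0)
    return float(np.mean(-weights * np.log(p_true + 1e-12)))
```

Penalizing boundary pixels more heavily pushes the network toward sharper class transitions, which is the effect the abstract attributes to the boundary loss in the GCN module.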
Abstract: Because low-illumination images suffer from low contrast, severe loss of detail, and heavy noise, existing object detection algorithms perform poorly on them. This paper proposes a low-illumination object detection method combining a spatial-aware attention mechanism and multi-scale feature fusion (SAM-MSFF). The method first fuses multi-scale features through a multi-scale interactive memory pyramid to enhance the effective information in low-illumination image features, and uses memory vectors to store sample features and capture latent correlations between samples. Then, a spatial-aware attention mechanism is introduced to obtain long-range contextual information and local information of features in the spatial domain, thereby enhancing target features in low-illumination images and suppressing interference from background information and noise. Finally, a multi-receptive-field enhancement module expands the receptive field of the features, and features with different receptive fields are grouped and reweighted so that the detection network adaptively adjusts its receptive field according to the input multi-scale information. Experiments on the ExDark dataset show that the proposed method achieves a mean Average Precision (mAP) of 77.04%, which is 2.6% to 14.34% higher than existing mainstream object detection methods.
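The grouped reweighting over receptive fields can be illustrated with a toy NumPy sketch: each "branch" stands in for a feature map produced under a different receptive field, and a softmax gate over per-branch global descriptors decides how much each branch contributes. The function name `reweight_branches` and the choice of a global-average descriptor are illustrative assumptions, not the SAM-MSFF implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reweight_branches(branches):
    """Toy multi-receptive-field reweighting: each branch is a (C, H, W)
    feature map from a different receptive field. A softmax over each
    branch's global-average response yields per-branch weights, and the
    output is the weighted sum of the branches."""
    responses = np.array([b.mean() for b in branches])  # one scalar per branch
    weights = softmax(responses)
    fused = sum(w * b for w, b in zip(weights, branches))
    return fused, weights

# Branches with stronger average activation receive larger weights,
# so the fused map adapts to whichever receptive field responds most.
small_rf = np.ones((2, 3, 3))
large_rf = np.full((2, 3, 3), 5.0)
fused, weights = reweight_branches([small_rf, large_rf])
```

This kind of input-dependent gating is one standard way to let a detector adjust its effective receptive field per image, matching the adaptive behaviour the abstract describes.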