Journal Articles
423 articles found
1. CMMCAN: Lightweight Feature Extraction and Matching Network for Endoscopic Images Based on Adaptive Attention
Authors: Nannan Chong, Fan Yang — Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2761-2783 (23 pages)
In minimally invasive surgery, endoscopes or laparoscopes equipped with miniature cameras and tools are used to enter the human body for therapeutic purposes through small incisions or natural cavities. However, in clinical operating environments, endoscopic images often suffer from challenges such as low texture, uneven illumination, and non-rigid structures, which affect feature observation and extraction. This can severely impact surgical navigation or clinical diagnosis due to missing feature points in endoscopic images, leading to treatment and postoperative recovery issues for patients. To address these challenges, this paper introduces, for the first time, a Cross-Channel Multi-Modal Adaptive Spatial Feature Fusion (ASFF) module based on the lightweight architecture of EfficientViT. Additionally, a novel lightweight feature extraction and matching network based on an attention mechanism is proposed. This network dynamically adjusts attention weights for cross-modal information from grayscale images and optical-flow images through a dual-branch Siamese network. It extracts static and dynamic information features ranging from low-level to high-level, and from local to global, ensuring robust feature extraction across different widths, noise levels, and blur scenarios. Global and local matching are performed through a multi-level cascaded attention mechanism, with cross-channel attention introduced to simultaneously extract low-level and high-level features. Extensive ablation experiments and comparative studies are conducted on the HyperKvasir, EAD, M2caiSeg, CVC-ClinicDB, and UCL synthetic datasets. Experimental results demonstrate that the proposed network improves upon the baseline EfficientViT-B3 model by 75.4% in accuracy (Acc), while also enhancing runtime performance and storage efficiency. When compared with the complex DenseDescriptor feature extraction network, the difference in Acc is less than 7.22%, and IoU calculation results on specific datasets outperform complex dense models. Furthermore, this method increases the F1 score by 33.2% and accelerates runtime by 70.2%. It is noteworthy that the speed of CMMCAN surpasses that of comparative lightweight models, with feature extraction and matching performance comparable to existing complex models but with faster speed and higher cost-effectiveness.
Keywords: feature extraction and matching; lightweight network; medical images; endoscopic; attention
2. Image Feature Extraction and Matching of Augmented Solar Images in Space Weather
Authors: WANG Rui, BAO Lili, CAI Yanxia — Chinese Journal of Space Science (空间科学学报) (CAS, CSCD, PKU Core), 2023, No. 5, pp. 840-852 (13 pages)
Augmented solar images were used to research the adaptability of four representative image extraction and matching algorithms in the space weather domain: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the binary robust invariant scalable keypoints (BRISK) algorithm, and the oriented FAST and rotated BRIEF (ORB) algorithm. The performance of these algorithms was estimated in terms of matching accuracy, feature point richness, and running time. The experiments showed that no algorithm achieved high accuracy while keeping low running time, so none on its own is suitable for feature extraction and matching of augmented solar images. To solve this problem, an improved method was proposed that uses two-frame matching to combine the accuracy advantage of SIFT with the speed advantage of ORB. Both our method and the four representative algorithms were then applied to augmented solar images. The application experiments proved that our method achieved a recognition rate similar to SIFT, significantly higher than the other algorithms, and a running time similar to ORB, significantly lower than the other algorithms.
Keywords: augmented reality; augmented image; image feature point extraction and matching; space weather; solar image
3. A Template Matching Based Feature Extraction for Activity Recognition
Authors: Muhammad Hameed Siddiqi, Helal Alshammari, Amjad Ali, Madallah Alruwaili, Yousef Alhwaiti, Saad Alanazi, M. M. Kamruzzaman — Computers, Materials & Continua (SCIE, EI), 2022, No. 7, pp. 611-634 (24 pages)
Human activity recognition (HAR) can play a vital role in the monitoring of human activities, particularly for healthcare-conscious individuals. The accuracy of HAR systems is completely reliant on the extraction of prominent features. Existing methods find it very challenging to extract optimal features due to the dynamic nature of activities, thereby reducing recognition performance. In this paper, we propose a robust feature extraction method for HAR systems based on template matching. Essentially, we associate a template of an activity frame or sub-frame comprising the corresponding silhouette. The template is placed over the frame pixels to count the number of template pixels that correspond to those in the frame. This process is replicated for the whole frame, and the search is directed to the optimum match: the best count is estimated at the pixel where the silhouette (provided via the template) appears inside the frame. In this way, the feature vector is generated. After feature vector generation, a hidden Markov model (HMM) is utilized to label the incoming activity. We utilized different publicly available standard datasets for experiments. The proposed method achieved the best accuracy against existing state-of-the-art systems.
Keywords: activity recognition; feature extraction; template matching; video surveillance
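The sliding-silhouette pixel count described in the abstract above can be sketched in a few lines. This is a toy illustration, not the paper's code: the scoring rule (count of agreeing pixels) and all names are assumptions.

```python
import numpy as np

def best_template_match(frame, template):
    """Slide a binary silhouette template over a binary frame and
    return the top-left offset with the highest pixel-match count.
    Illustrative sketch; the paper's sub-frame handling may differ."""
    fh, fw = frame.shape
    th, tw = template.shape
    best_score, best_pos = -1, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            # count pixels where the template and the frame patch agree
            score = int(np.sum(frame[y:y+th, x:x+tw] == template))
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# toy example: a 2x2 silhouette embedded in a 5x5 frame
frame = np.zeros((5, 5), dtype=np.uint8)
frame[2:4, 1:3] = 1
template = np.ones((2, 2), dtype=np.uint8)
pos, score = best_template_match(frame, template)
print(pos, score)  # (2, 1) 4 — a perfect 4-pixel match at the silhouette
```

A real HAR pipeline would repeat this per frame and feed the resulting feature vectors to the HMM mentioned above.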
4. Modified SIFT descriptor and key-point matching for fast and robust image mosaic (Cited: 2)
Authors: HE Yuqing, WANG Xue, WANG Siyuan, LIU Mingqi, ZHU Jiadan, JIN Weiqi — Journal of Beijing Institute of Technology (EI, CAS), 2016, No. 4, pp. 562-570 (9 pages)
To improve the performance of the scale-invariant feature transform (SIFT), a modified SIFT (M-SIFT) descriptor is proposed to realize fast and robust key-point extraction and matching. In descriptor generation, 3 rotation-invariant concentric-ring grids around the key-point location are used instead of the 16 square grids used in the original SIFT. Then, 10 orientations are accumulated for each grid, which results in a 30-dimension descriptor. In descriptor matching, a rough rejection of mismatches is proposed based on the difference of grey information between matching points. The performance of the proposed method is tested for image mosaic on simulated and real-world images. Experimental results show that the M-SIFT descriptor inherits SIFT's invariance to image scale and rotation, illumination change, and affine distortion. Besides, the time cost of feature extraction is reduced by 50% compared with the original SIFT, and the rough rejection can reject at least 70% of mismatches. The results also demonstrate that the proposed M-SIFT method is superior to other improved SIFT methods in speed and robustness.
Keywords: modified scale-invariant feature transform (SIFT); image mosaic; feature extraction; key-point matching
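The 3-ring, 10-orientation layout behind the 30-dimension M-SIFT descriptor can be illustrated roughly as follows. The ring radii, box weighting, and function names here are assumptions for the sketch; the original's exact weighting and interpolation are omitted.

```python
import numpy as np

def ring_descriptor(patch, center, radii=(4, 8, 12), n_orient=10):
    """Build a 30-dim descriptor: for each of 3 concentric rings
    around the keypoint, accumulate a 10-bin histogram of gradient
    orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)          # [0, 2*pi)
    ys, xs = np.indices(patch.shape)
    dist = np.hypot(ys - center[0], xs - center[1])
    desc = np.zeros(len(radii) * n_orient)
    inner = 0.0
    for i, outer in enumerate(radii):
        ring = (dist >= inner) & (dist < outer)          # annular grid
        bins = np.clip((ang[ring] * n_orient / (2 * np.pi)).astype(int),
                       0, n_orient - 1)
        np.add.at(desc, i * n_orient + bins, mag[ring])  # weighted votes
        inner = outer
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc

patch = np.random.default_rng(0).random((25, 25))
d = ring_descriptor(patch, center=(12, 12))
print(d.shape)  # (30,)
```

Because each ring is rotationally symmetric, rotating the patch only cyclically shifts orientation bins, which is what gives the descriptor its rotation tolerance.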
5. Content-Based Lace Image Retrieval System Using a Hierarchical Multifeature Scheme
Authors: CAO Xia, LI Yueyang, LUO Haichi, JIANG Gaoming, CONG Honglian — Journal of Donghua University (English Edition) (EI, CAS), 2016, No. 4, pp. 562-565, 568 (5 pages)
An Android-based lace image retrieval system built on the content-based image retrieval (CBIR) technique is presented. This paper applies shape and texture features of lace images and proposes a hierarchical multifeature scheme to facilitate coarse-to-fine matching for efficient lace image retrieval in a large database. Experimental results demonstrate the feasibility and effectiveness of the proposed system, which meets real-time requirements.
Keywords: retrieval; matching; hierarchical registration; texture; CBIR; preprocessing
6. Video learning based image classification method for object recognition
Authors: LEE Hong-ro, SHIN Yong-ju — Journal of Central South University (SCIE, EI, CAS), 2013, No. 9, pp. 2399-2406 (8 pages)
Automatic image classification is the first step toward semantic understanding of an object in the computer vision area. The key challenge for accurate object recognition is the ability to extract robust features from various viewpoint images and to rapidly calculate the similarity between features in the image database or video stream. To solve these problems, an effective and rapid image classification method for object recognition based on video learning is presented. The optical-flow and RANSAC algorithms were used to acquire scene images from each video sequence. After the selection of scene images, local maximum points on object corners in the local area were found using the Harris corner detection algorithm, and several attributes of the local block around each feature point were calculated using the scale-invariant feature transform (SIFT) to extract a local descriptor. Finally, the extracted local descriptors were learned with a three-dimensional pyramid match kernel. Experimental results show that our method can extract features from various multi-viewpoint images of a query video and calculate the similarity between a query image and the images in the database.
Keywords: image classification; multi-viewpoint image; feature extraction; video learning
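The Harris corner detection step mentioned in the abstract above computes, at every pixel, the response R = det(M) - k * trace(M)^2 from a smoothed structure tensor M of gradient products. A minimal sketch (box smoothing instead of a Gaussian window, which is an assumption; a real pipeline adds non-maximum suppression and thresholding):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response at every pixel of a grayscale image."""
    gy, gx = np.gradient(img.astype(float))
    def smooth(a):
        # simple 3x3 box filter over the structure-tensor entries
        pad = np.pad(a, 1, mode="edge")
        return sum(pad[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    ixx, iyy, ixy = smooth(gx * gx), smooth(gy * gy), smooth(gx * gy)
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace ** 2

# a white square on black: corners yield the strongest positive response
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
r = harris_response(img)
print(r.shape, r.max() > 0)
```

Edges give det(M) near zero (hence negative R), flat regions give R near zero, and only corners, where both eigenvalues of M are large, give strongly positive R.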
7. Precise vehicle ego-localization using feature matching of pavement images
Authors: Zijun Jiang, Zhigang Xu, Yunchao Li, Haigen Min, Jingmei Zhou — Journal of Intelligent and Connected Vehicles, 2020, No. 2, pp. 37-47 (11 pages)
Purpose – Precise vehicle localization is a basic and critical technique for various intelligent transportation system (ITS) applications, and it needs to adapt to complex road environments in real time. The global positioning system and the strap-down inertial navigation system are two common techniques in the field of vehicle localization. However, their localization accuracy, reliability, and real-time performance cannot satisfy the requirements of critical ITS applications such as collision avoidance, vision enhancement, and automatic parking. Aiming at these problems, this paper proposes a precise vehicle ego-localization method based on image matching. Design/methodology/approach – The study comprises three steps. Step 1, feature point extraction: local features in the pavement images are extracted using an improved speeded-up robust features (SURF) algorithm. Step 2, mismatch elimination: a random sample consensus (RANSAC) algorithm eliminates mismatched points in the road images and makes the matched point pairs more robust. Step 3, feature point matching and trajectory generation. Findings – Through matching and validation of the extracted local feature points, the relative translation and rotation offsets between two consecutive pavement images are calculated, and the trajectory of the vehicle is generated. Originality/value – The experimental results show that the studied algorithm has decimeter-level accuracy and fully meets the demand of lane-level positioning in critical ITS applications.
Keywords: feature extraction; image matching; intelligent transportation systems; intelligent vehicles; position measurement
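The RANSAC mismatch-elimination step in the abstract above can be sketched for the simplest motion model. The paper recovers translation and rotation between consecutive pavement images; this toy version assumes pure translation (a 1-point model), with synthetic data and names that are illustrative only.

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iter=200, tol=2.0, seed=0):
    """Estimate t in pts_b ≈ pts_a + t while rejecting mismatches:
    sample one correspondence, count inliers, keep the best model,
    then refine t as the mean offset over the inlier set."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, 0
    for _ in range(n_iter):
        i = rng.integers(len(pts_a))          # 1-point model suffices here
        t = pts_b[i] - pts_a[i]
        resid = np.linalg.norm(pts_b - (pts_a + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers:
            best_inliers = int(inliers.sum())
            best_t = (pts_b[inliers] - pts_a[inliers]).mean(axis=0)
    return best_t, best_inliers

rng = np.random.default_rng(1)
a = rng.random((50, 2)) * 100
b = a + np.array([3.0, -1.5])                 # true inter-frame shift
b[:5] = rng.random((5, 2)) * 100              # 10% gross mismatches
t, n = ransac_translation(a, b)
print(np.round(t, 2), n)  # recovers approximately [3.0, -1.5]
```

Chaining such per-pair offsets over consecutive images is what yields the vehicle trajectory described in the Findings.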
8. Video expression recognition based on frame-level attention mechanism
Authors: CHEN Rui, TONG Ying, ZHANG Yiye, XU Bo — High Technology Letters (EI, CAS), 2023, No. 2, pp. 130-139 (10 pages)
Facial expression recognition (FER) in video has attracted increasing interest, and many approaches have been proposed. The crucial problem in classifying a given video sequence into several basic emotions is how to fuse the facial features of individual frames. In this paper, a frame-level attention module is integrated into an improved VGG-based framework and a lightweight facial expression recognition method is proposed. The proposed network takes a sub-video cut from an experimental video sequence as its input and generates a fixed-dimension representation. The VGG-based network with an enhanced branch embeds face images into feature vectors. The frame-level attention module learns weights that are used to adaptively aggregate the feature vectors into a single discriminative video representation. Finally, a regression module outputs the classification results. Experimental results on the CK+ and AFEW databases show that the recognition rates of the proposed method achieve state-of-the-art performance.
Keywords: facial expression recognition (FER); video sequence; attention mechanism; feature extraction; enhanced feature; VGG network; image classification; neural network
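The frame-level aggregation described above, scoring each frame's feature vector, normalizing the scores, and taking a weighted sum, can be sketched as follows. In the paper the scoring weights are learned; here a fixed random projection stands in for them, which is an assumption of the sketch.

```python
import numpy as np

def attention_pool(frame_feats, w):
    """Fuse per-frame feature vectors (T, D) into one video
    descriptor (D,): score each frame with w, softmax over frames,
    then take the attention-weighted average."""
    scores = frame_feats @ w                  # (T,) one scalar per frame
    alpha = np.exp(scores - scores.max())     # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ frame_feats                # convex combination of frames

T, D = 8, 16
rng = np.random.default_rng(0)
feats = rng.standard_normal((T, D))           # stand-ins for VGG embeddings
video_vec = attention_pool(feats, rng.standard_normal(D))
print(video_vec.shape)  # (16,)
```

Because the weights form a convex combination, frames the scorer deems uninformative (e.g., neutral or blurred) contribute little to the final video representation.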
9. A Video-Image-Driven Driver Attention Estimation Method
Authors: ZHAO Shuanfeng, LI Xiaoyu, LUO Zhijian, TANG Zenghui, WANG Mengwei, WANG Li — Modern Electronics Technique (现代电子技术) (PKU Core), 2024, No. 22, pp. 179-186 (8 pages)
In-depth study of drivers' visual attention is important for predicting unsafe driving behavior and understanding driving behavior. To this end, a video-image-driven driver attention estimation method is proposed to estimate the various objects, such as pedestrians or vehicles, that a driver notices within the field of view while driving. The method uses a deep neural network to learn the mapping between traffic-scene videos and driver-attention features, and incorporates a guided-learning module to extract the features most relevant to driver attention. Considering the dynamic nature of driving, dynamic traffic-scene videos are used as model input and a spatio-temporal feature extraction module is designed. In common traffic scenes such as sparse, dense, and low-illumination conditions, the estimated driver-attention model is compared with collected driver-attention data points. Experimental results show that the proposed method can accurately estimate a driver's attention during driving, and has important theoretical and practical value for predicting unsafe driving behavior and promoting a better understanding of driving behavior.
Keywords: driver attention estimation; deep learning; video-image driven; guided learning; dynamic traffic scenes; spatio-temporal feature extraction
10. A Composite HOG Feature Clustering Method for Extracting Key Frames from Multi-Scene Videos
Authors: WEI Yingzi, YIN Suyu, ZHANG Yuheng — Software Guide (软件导刊), 2024, No. 9, pp. 187-192 (6 pages)
Directly using frame-difference data to extract key frames from dynamic multi-scene videos often produces too many redundant frames, while histogram-of-oriented-gradients (HOG) features are relatively stable under changes in image brightness and scene. A composite HOG feature clustering method is therefore proposed to improve the efficiency of key-frame extraction from multi-scene videos. First, the HOG features of video frames are extracted and image information entropy is introduced to form a composite feature vector, preserving the correlation of data features. Second, inter-frame difference statistics on the composite feature vectors determine the shot boundaries and the number of key frames to extract. Third, considering both the frame set within each shot and the full video frame set, frames with larger information entropy are selected without repetition as initial cluster centers to guide the search direction of the clustering algorithm, and key frames are extracted by K-means clustering. Compared with the traditional K-means method, the proposed algorithm reduces redundancy by 0.003-0.015, improves precision by 0.14-0.21, and lowers clustering time, achieving better accuracy and efficiency.
Keywords: key-frame extraction; video segmentation; HOG features; composite feature vector; K-means clustering; image entropy
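The image information entropy used above, both as a component of the composite feature vector and as the criterion for picking initial cluster centers, is the Shannon entropy of the intensity histogram. A minimal sketch (bin count and names are illustrative assumptions):

```python
import numpy as np

def image_entropy(img, n_bins=256):
    """Shannon entropy (bits) of a grayscale image's intensity
    histogram: high for frames with rich intensity variation,
    zero for a uniformly gray frame."""
    hist, _ = np.histogram(img, bins=n_bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                              # ignore empty bins
    return -np.sum(p * np.log2(p))

flat = np.full((8, 8), 128)                   # uniform gray frame
varied = np.arange(64).reshape(8, 8) * 4      # 64 distinct intensities
print(image_entropy(varied))  # 6.0 (64 equally likely intensity values)
```

Appending this scalar to each frame's HOG vector, and seeding K-means with the highest-entropy frames, is the guidance step the abstract describes.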
11. UAV Image Matching Using an Improved ORB Algorithm (Cited: 2)
Authors: LI Bing, LAI Zulong, SUN Jie, DING Kaihua — Bulletin of Surveying and Mapping (测绘通报) (CSCD, PKU Core), 2024, No. 1, pp. 126-130, 149 (6 pages)
To address the problems that the ORB algorithm extracts an unstable number of feature points under illumination changes and that its feature localization accuracy is only pixel-level, this paper designs an adaptive threshold method based on foreground-background contrast and improves ORB by combining it with an existing sub-pixel localization method. To avoid the errors caused by the manually set threshold required by RANSAC, the MAGSAC++ algorithm is introduced into the feature matching process for mismatch elimination. Experimental results show that the improved algorithm obtains a larger number of matches, is more robust to illumination changes, and improves matching accuracy by more than 7%.
Keywords: ORB algorithm; feature extraction; adaptive threshold; image matching; mismatch elimination
12. An Image-Text Matching Algorithm Based on Multi-Level Semantic Alignment
Authors: LI Yiru, YAO Tao, ZHANG Linliang, SUN Yujuan, FU Haiyan — Journal of Beijing University of Aeronautics and Astronautics (北京航空航天大学学报) (EI, CAS, CSCD, PKU Core), 2024, No. 2, pp. 551-558 (8 pages)
Region features in an image focus mostly on foreground information while background information is often ignored, and how to effectively combine local and global features has not been fully studied. To solve this problem and strengthen the association between global and local concepts so as to obtain more accurate visual features, an image-text matching algorithm based on multi-level semantic alignment is proposed. Local image features are extracted to capture fine-grained information in the image; global image features are extracted to bring environmental information into the network's learning, yielding different levels of visual relations and providing more information for the joint visual features; the global and local image features are then combined, and the joint visual features are aligned with text features at both global and local levels to obtain a more accurate similarity representation. Extensive experiments and analyses show that the proposed algorithm is effective on two public datasets.
Keywords: image-text matching; cross-modal information processing; feature extraction; neural networks; feature fusion
13. A Human-Eye-Inspired Feature Point Extraction and Matching Method for Binocular Images (Cited: 1)
Authors: XIANG Haoming, XIA Xiaohua, GE Zhaokai, CAO Yusong — Journal of Harbin Institute of Technology (哈尔滨工业大学学报) (EI, CAS, CSCD, PKU Core), 2024, No. 4, pp. 92-100 (9 pages)
Imitating the visual characteristics of the human eye has become a research focus and challenge on the path toward intelligent machine perception and cognition. Image edges carry rich information, so the human eye is more sensitive to object edges in a scene. To realize this visual characteristic on a machine, a human-eye-inspired feature point extraction and matching method for binocular images is proposed. First, the SUSAN (smallest univalue segment assimilating nucleus) operator, which excels at edge feature extraction, is chosen as the feature detector. Then the sampling neighborhood of the scale-invariant feature transform (SIFT) descriptor is modified to reduce the matching errors that viewpoint and view-angle differences cause in gradient information far from the feature point, while preserving the main gradient information near the feature point. A multi-scale structure is then built for the input images, and the main gradient information of the same feature is computed at different scales. Finally, the square-root kernel is used to compare the similarity of gradient information and generate a multi-scale descriptor, enhancing the distinctiveness of the description vector. Experiments evaluate the proposed multi-scale descriptor and the overall algorithm with multiple metrics, comparing against classic algorithms such as SIFT, SURF (speeded-up robust features), and Root-SIFT, as well as recent algorithms such as BEBLID (boosted efficient binary local image descriptor), SuperGlue, and DFM. The results show that the proposed multi-scale descriptor improves the matching accuracy of edge feature points, adapts better to illumination changes, and exhibits better matching stability; compared with the other algorithms, the proposed algorithm achieves higher matching accuracy.
Keywords: feature point extraction; feature point matching; human-eye-inspired binocular images; multi-scale structure; feature descriptor
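The square-root kernel comparison mentioned above is the same Hellinger-kernel trick popularized by Root-SIFT: L1-normalize a histogram descriptor, then take element-wise square roots, after which ordinary Euclidean distance behaves like the Hellinger distance. A minimal sketch (names are illustrative; this is the general trick, not the paper's exact implementation):

```python
import numpy as np

def rootsift(desc):
    """Map a non-negative histogram descriptor through the
    square-root (Hellinger) kernel: L1-normalize, then sqrt.
    The result of a non-negative input is always L2-unit."""
    d = desc / (np.abs(desc).sum() + 1e-12)   # L1 normalization
    return np.sqrt(d)

a = np.array([4.0, 1.0, 0.0, 3.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
# Euclidean distance on the mapped vectors = Hellinger-style distance
print(np.linalg.norm(rootsift(a) - rootsift(b)))
```

The appeal of the kernel is that it damps the dominance of large histogram bins, which tends to improve matching under illumination change, consistent with the stability the abstract reports.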
14. Research on Infrared and Visible Image Registration Based on a Composite 2S Network
Authors: ZHENG Bowen, WANG Zhuo, CAO Xinyu — Science Technology and Engineering (科学技术与工程) (PKU Core), 2024, No. 16, pp. 6783-6791 (9 pages)
Traditional image registration methods perform poorly when registering infrared images with visible-light images. A feature matching method based on a composite Superpoint + Superglue (2S) network is therefore proposed for infrared-visible registration. The method first uses Superpoint's distinctive feature extraction to fully extract features common to the infrared and visible images. It then exploits the matching constraints and attention mechanism of the Superglue feature matcher, leveraging the strengths of neural networks to improve matching efficiency. A self-built dataset is used in the training stage to improve the generalization and accuracy of the network. Results show that, on three groups of test images, the traditional registration method achieves feature-point extraction (repeatability, accuracy) scores of (0.0067, 0.0061), (0.0010, 0.0008), and (0, 0), with 7, 1, and 0 correctly matched pairs, on average below the minimum of 4 pairs needed to estimate the transformation matrix. The Superpoint + Superglue registration method scores (0.2402, 0.2625), (0.1939, 0.1722), and (0.2630, 0.2644), with 252, 165, and 252 correctly matched pairs; both the extraction metrics and the number of correct matches are greatly improved over the traditional method, so the registration task can be completed well.
Keywords: image registration; convolutional neural network (CNN); feature extraction; feature matching
15. Research on Face Matching Technology Based on the PCA Algorithm
Authors: FENG Wei, YANG Chunli, LIU Feng, LIU Guangyu, CHENG Yuan, ZHOU Bao, ZHAO Enming, ZHOU Weiyun, ZHAO Jiqiang — Journal of Luohe Vocational Technology College (漯河职业技术学院学报), 2024, No. 2, pp. 23-27 (5 pages)
PCA-based face matching is a common computer vision technique, mainly applied to the classification and matching of face images. PCA is used to extract features from face images and reduce their dimensionality; the faces are divided into a test set and a training set, the Euclidean distance between a selected test image and every training image is computed, and the training image with the shortest distance is taken as the match. In experiments, 200 face images with different shooting angles and expressions were collected, and varying levels of noise were added to the images to be matched. Results show that the PCA-based face matching technique completes a match in 1.2494 s on average, with a matching accuracy of 97.25%.
Keywords: face matching; PCA algorithm; feature extraction; Euclidean distance; image noise
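The pipeline in the abstract above, PCA projection followed by Euclidean nearest neighbor, fits in a short sketch. The data, the number of components k, and all names are toy assumptions; real use would fit PCA on flattened face images.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vector samples X: return the mean and the
    top-k principal axes ("eigenfaces" when X holds face images)."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of vt are principal directions
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def match(query, train, mu, axes):
    """Project query and training images into PCA space and return
    the index of the nearest training image by Euclidean distance."""
    q = (query - mu) @ axes.T
    t = (train - mu) @ axes.T
    return int(np.argmin(np.linalg.norm(t - q, axis=1)))

rng = np.random.default_rng(0)
train = rng.random((20, 64))                  # 20 toy "faces", 8x8 flattened
mu, axes = pca_fit(train, k=5)
noisy_query = train[7] + 0.05 * rng.standard_normal(64)
print(match(noisy_query, train, mu, axes))    # nearest stored face index
```

Because small additive noise moves the projected query only slightly, the nearest neighbor in PCA space usually remains the correct identity, which is the robustness the abstract's noise experiments measure.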
16. Wind Power Output Data Acquisition and Characteristic Analysis Based on Video Segmentation
Authors: ZHANG Erhui, XU Xingchao, LI Pengfei, ZUO Peng — Process Automation Instrumentation (自动化仪表) (CAS), 2024, No. 7, pp. 16-20 (5 pages)
Wind power output scene analysis struggles to capture continuous scenes. A video-segmentation-based method for wind power output data acquisition and characteristic analysis is therefore proposed. First, a wind power output scene acquisition platform is built to collect video data of wind power output scenes. The video is then segmented into shots, and consistency features of backward shots are computed from the shot segmentation results. Next, key frames of the wind power output scenes are extracted with an improved key-frame extraction algorithm based on image feature changes, and the output scenes are analyzed. Finally, performance is tested on a concrete case. Test results show that the method can obtain the daily and monthly output scene characteristics of the studied region for that year's summer and winter, and that the method performs well.
Keywords: video segmentation; key-frame extraction; wind power generation; scene analysis; scene segmentation; image features; data acquisition platform
17. A UAV Image Target Matching Algorithm Based on the SE-Hardnet Network
Authors: SU Wenbo, FANG Qunzhong, XU Baoshu, ZHANG Chengshuo — Journal of Shenyang University of Technology (沈阳工业大学学报) (CAS, PKU Core), 2024, No. 5, pp. 693-701 (9 pages)
During UAV target matching and localization, image rotation and small viewing sizes make image feature extraction difficult. A UAV target image matching algorithm that combines candidate-region detection with an SE-Hardnet feature extraction network is therefore proposed. Candidate regions are detected with the Edge Boxes algorithm and features are extracted with the SE-Hardnet network, achieving accurate matching of target images. Experimental results show that the proposed algorithm attains higher matching accuracy and robustness under changes in angle and size, with a matching accuracy 8%-11% higher than current image matching algorithms on close-range image datasets, providing a feasible and effective means for UAV target localization.
Keywords: image matching; candidate-region detection; Edge Boxes algorithm; feature extraction; attention mechanism; SE-Hardnet network; similarity measurement; UAV target localization
18. A Multi-Angle Facial Expression Recognition Method Based on the Gabor Transform
Authors: WANG Kangyi, SHAO Sujie — Computer Simulation (计算机仿真), 2024, No. 4, pp. 233-236, 526 (5 pages)
Because the shape of the human face is not fixed, facial changes can produce many expressions, and face images differ greatly across observation angles. Under the influence of illumination changes, facial pose, occlusion, and other factors, facial expression features are hard to extract accurately, leading to low recognition accuracy. A multi-angle facial expression recognition method based on the Gabor transform is therefore proposed. Eye localization is used for geometric preprocessing of multi-angle facial expression images to improve recognition precision. Gabor-transform features are then extracted from the images. An elastic template matching method performs elastic grid matching on the feature key points and computes the image cost function. Finally, a K-nearest-neighbor classification strategy matches and evaluates the multi-angle expression images to complete recognition. Experimental results show that the method's recognition time is within 2 s and its accuracy approaches 100%, outperforming existing methods and verifying the method's effectiveness and precision.
Keywords: image feature extraction; image preprocessing; elastic template matching; nearest-neighbor classification strategy
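The Gabor features referred to above come from convolving the face image with a bank of Gabor filters: a Gaussian envelope modulated by a sinusoid at a chosen orientation and wavelength. A sketch of one such kernel (the parameter values and names are illustrative assumptions):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0):
    """Real part of a 2-D Gabor filter: Gaussian envelope times a
    cosine wave of wavelength lam at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return g * np.cos(2 * np.pi * xr / lam + psi)

# a small filter bank over 4 orientations, as a feature extractor would use
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(bank[0].shape)  # (15, 15)
```

Responses across several orientations and wavelengths are concatenated per pixel or per grid node, and it is these response vectors that the elastic grid matching step compares.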
19. An Animation Video Generation Method Based on Image Stitching Technology
Authors: MA Jingwen, WU Ying — Journal of Changchun Institute of Technology (Natural Sciences Edition) (长春工程学院学报(自然科学版)), 2024, No. 1, pp. 96-101 (6 pages)
To address the heavy computation, slow speed, low feature-point extraction precision, high dimensionality, and poor stitching quality that arise when stitching animation images, an animation video generation method based on image stitching technology is studied. Animation images are preprocessed with mean filtering to remove noise and interference; the speeded-up robust features (SURF) algorithm then extracts feature points from the filtered images; after the feature points are matched by edge-similarity matching, the matched images are fused with a weighted smoothing algorithm to obtain a panorama, which is imported into Maya software to generate the animation video. Experiments show that the method extracts feature points with high precision, low dimensionality, low computation, and fast speed; the stitched images are clear, tightly connected, and naturally transitioned without color differences, and the generated animation video achieves a more refined and realistic visual effect.
Keywords: image stitching technology; animation video; animation production; feature point extraction; image matching; image fusion
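The weighted smoothing fusion step above, the standard way to hide the seam between two aligned images, can be sketched with a linear weight ramp across the overlap band. A toy 1-band example with illustrative names; real stitching fuses color images after geometric alignment.

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Fuse two horizontally overlapping images: in the overlap band
    the weight ramps linearly from 1 (left image) to 0 (right image),
    producing a smooth transition instead of a hard seam."""
    w = np.linspace(1.0, 0.0, overlap)[None, :]   # left-image weight ramp
    band = w * left[:, -overlap:] + (1 - w) * right[:, :overlap]
    return np.hstack([left[:, :-overlap], band, right[:, overlap:]])

left = np.full((4, 6), 100.0)                     # darker image
right = np.full((4, 6), 200.0)                    # brighter image
pano = blend_overlap(left, right, overlap=3)
print(pano.shape)  # (4, 9): 3 + 3 blended + 3 columns
```

In the blended band the intensities step smoothly from 100 to 200, which is why the abstract can report transitions "without color differences" at the seams.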
20. An Image Matching Algorithm Based on OFAST and BRISK Features Coupled with a Triple Filtering Strategy
Authors: LIU Shuang, XU Changbo, YU Qingfeng — Industrial Control Computer (工业控制计算机), 2024, No. 2, pp. 99-100, 103 (3 pages)
To increase the detection speed of image matching without sacrificing performance, a combined image feature point matching algorithm coupling OFAST and BRISK with a triple filtering strategy is proposed. Feature points are first extracted with the OFAST algorithm and descriptors are computed with the BRISK feature description algorithm; brute-force matching then computes Hamming distances, and candidate matches are pre-screened with minimum-distance filtering; before the PROSAC algorithm is applied, mismatched feature points are eliminated via the cosine similarity of vectors, optimizing the results to achieve accurate image matching. Repeated experiments show that the algorithm adapts well to image rotation, blur, and scale changes, keeps the computational cost of matching under control, offers good real-time performance and accuracy, and resolves the problems of high mismatch rates and poor robustness.
Keywords: image matching; feature extraction; feature description; triple filtering strategy; mismatch elimination
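The first two filters above, brute-force Hamming matching of binary descriptors followed by minimum-distance pre-screening, can be sketched as follows. The 8-bit toy descriptors and the factor-of-two threshold are illustrative assumptions; the PROSAC and cosine-similarity stages are omitted.

```python
import numpy as np

def hamming_matches(desc_a, desc_b, factor=2.0):
    """Brute-force match packed-bit descriptors by Hamming distance,
    then keep only matches whose distance is at most
    factor * (best distance found), a minimum-distance filter."""
    # Hamming distance = popcount of the XOR of two bit patterns
    dists = np.array([[bin(int(a) ^ int(b)).count("1") for b in desc_b]
                      for a in desc_a])
    nearest = dists.argmin(axis=1)
    best = dists[np.arange(len(desc_a)), nearest]
    keep = best <= max(factor * best.min(), 1)   # guard when min dist is 0
    return [(i, int(j)) for i, j in enumerate(nearest) if keep[i]]

a = np.array([0b10110010, 0b00001111], dtype=np.uint64)
b = np.array([0b10110011, 0b11110000, 0b00001111], dtype=np.uint64)
print(hamming_matches(a, b))  # [(0, 0), (1, 2)]
```

Hamming distance on binary descriptors reduces to XOR plus popcount, which is exactly why BRISK-style pipelines are fast enough for the real-time constraint the abstract targets.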