融合视觉特征增强机制的机器人弱光环境抓取检测

Robotic grasp detection in low-light environment by incorporating visual feature enhancement mechanism
Abstract: Existing robotic grasping operations are usually performed under well-lit conditions, where object details are clear and regional contrast is high; in low-light environments such as nighttime or occluded scenes, however, the visual features of objects are weak, and the detection accuracy of existing grasp detection models drops sharply. To improve the representation of the sparse, weak grasp features found in low-light scenes, a grasp detection model incorporating a visual feature enhancement mechanism was proposed, in which a visual enhancement sub-task imposes feature enhancement constraints on grasp detection. In the grasp detection module, a U-Net-like encoder-decoder structure was adopted to fuse features efficiently; in the low-light enhancement module, texture information was extracted at the local level and color information at the global level, so that the enhancement preserves object details while maintaining overall visual quality. In addition, two new low-light grasp benchmarks, the low-light Cornell dataset and the low-light Jacquard dataset, were constructed, and comparative experiments were conducted on them. Experimental results show that the proposed model reaches accuracies of 95.5% and 87.4% on the two benchmarks, respectively. Compared with existing grasp detection models such as the Generative Grasping Convolutional Neural Network (GG-CNN) and the Generative Residual Convolutional Neural Network (GR-ConvNet), its accuracy is 11.1 and 1.2 percentage points higher on the low-light Cornell dataset and 5.5 and 5.0 percentage points higher on the low-light Jacquard dataset, demonstrating good grasp detection performance.
Authors: LI Gan; NIU Mingdi; CHEN Lu; YANG Jing; YAN Tao; CHEN Bin (School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi 030006, China; Institute of Big Data Science and Industry, Shanxi University, Taiyuan, Shanxi 030006, China; Technology Department, Taiyuan Satellite Launch Center, Taiyuan, Shanxi 030027, China; School of Automation and Software Engineering, Shanxi University, Taiyuan, Shanxi 030031, China; Chongqing Research Institute, Harbin Institute of Technology, Chongqing 401151, China; International Institute of Artificial Intelligence, Harbin Institute of Technology (Shenzhen), Shenzhen, Guangdong 518055, China)
Source: Journal of Computer Applications (《计算机应用》), CSCD-indexed, Peking University Core Journal, 2023, No. 8, pp. 2564-2571 (8 pages)
Funding: National Natural Science Foundation of China (62003200, 62006146); Fundamental Research Program of Shanxi Province (202203021222010); Science and Technology Major Special Project of Shanxi Province (202201020101006)
Keywords: robot; grasp detection; low-light imaging; deep neural network; visual enhancement
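
The abstract names two components but gives no layer-level detail, so the following is only a minimal PyTorch sketch of the two ideas as described: a low-light enhancement branch that combines a local texture path with global color statistics, and a U-Net-like encoder-decoder that regresses generative grasp maps (quality, angle encoded as cos/sin, gripper width) in the style of GG-CNN and GR-ConvNet. All class names, channel widths, and the output parameterization are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class EnhanceNet(nn.Module):
    """Hypothetical low-light enhancement branch: a local path restores
    texture with shallow convolutions, while a global path estimates
    per-channel color gains from pooled image statistics."""

    def __init__(self, ch=3, feat=16):
        super().__init__()
        self.local = nn.Sequential(            # local texture residual
            nn.Conv2d(ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, ch, 3, padding=1),
        )
        self.global_gain = nn.Sequential(      # global color statistics
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return torch.clamp(x * self.global_gain(x) + self.local(x), 0.0, 1.0)


class ConvBlock(nn.Module):
    """Two 3x3 conv + BN + ReLU layers, the usual U-Net building block."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class GraspUNet(nn.Module):
    """U-Net-like encoder-decoder with skip connections that regresses
    pixel-wise grasp maps: quality, cos(2θ), sin(2θ), gripper width."""

    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1 = ConvBlock(in_ch, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.mid = ConvBlock(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = ConvBlock(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = ConvBlock(base * 2, base)
        self.heads = nn.ModuleList([nn.Conv2d(base, 1, 1) for _ in range(4)])

    def forward(self, x):
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        m = self.mid(self.pool(e2))            # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return [h(d1) for h in self.heads]     # quality, cos, sin, width


# Usage sketch: enhance a low-light frame, then predict the grasp maps
# (input height/width must be divisible by 4 for the two pooling stages).
img = torch.rand(1, 3, 224, 224)
quality, cos2t, sin2t, width = GraspUNet()(EnhanceNet()(img))
```

Encoding the grasp angle as cos(2θ)/sin(2θ) is the convention GG-CNN and GR-ConvNet use; it avoids the wrap-around discontinuity that regressing θ directly would introduce.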