Abstract
To address the poor recognition of similar targets caused by the single viewing angle of the visual sensor and complex lighting conditions in space manipulator operations, a CNN-GRU-based visual-tactile fusion target recognition system is proposed. The system consists of a manipulator, a dexterous hand, and a visual sensor, and autonomously samples both visual and tactile information from the target object. A CNN-GRU network extracts the spatial features of the visual information and the temporal features of the tactile information, making effective use of the multimodal data to improve recognition accuracy. Experimental results show an accuracy of 97.8% on a 14-class object classification task, 16.3% and 15.8% higher than the vision-only CNN-V and tactile-only GRU-T networks, respectively. CNN-GRU also clearly outperforms the traditional k-nearest-neighbor and support-vector-machine algorithms in both accuracy and prediction speed.
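The fusion architecture described in the abstract (a CNN branch for spatial features of the image, a GRU branch for temporal features of the tactile sequence, concatenated and classified over 14 classes) can be illustrated with a minimal NumPy sketch. All layer sizes, weight shapes, and the GRU gate convention below are chosen here for illustration and are not taken from the paper; the weights are random, so the output is only a well-formed probability vector, not a trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_valid(img, kernel):
    """Single-channel 'valid' 2-D cross-correlation."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def gru_step(x, h, p):
    """One GRU step: update gate z, reset gate r, candidate state n."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)
    n = np.tanh(p["Wn"] @ x + p["Un"] @ (r * h))
    return (1 - z) * n + z * h

def forward(image, tactile_seq, p, n_classes=14):
    # Visual branch: conv -> ReLU -> global average pool per kernel.
    feat_v = np.array([relu(conv2d_valid(image, k)).mean()
                       for k in p["kernels"]])
    # Tactile branch: run the GRU over time, keep the last hidden state.
    h = np.zeros(p["Uz"].shape[0])
    for x in tactile_seq:
        h = gru_step(x, h, p)
    # Fusion: concatenate both feature vectors, linear layer, softmax.
    fused = np.concatenate([feat_v, h])
    logits = p["Wc"] @ fused + p["bc"]
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Random weights just to exercise the shapes (hypothetical sizes).
hid, tdim, nker = 8, 6, 4
params = {
    "kernels": [rng.standard_normal((3, 3)) for _ in range(nker)],
    "Wz": rng.standard_normal((hid, tdim)), "Uz": rng.standard_normal((hid, hid)),
    "Wr": rng.standard_normal((hid, tdim)), "Ur": rng.standard_normal((hid, hid)),
    "Wn": rng.standard_normal((hid, tdim)), "Un": rng.standard_normal((hid, hid)),
    "Wc": rng.standard_normal((14, nker + hid)), "bc": np.zeros(14),
}
# One 16x16 "image" and a 20-step, 6-channel tactile sequence.
probs = forward(rng.standard_normal((16, 16)),
                rng.standard_normal((20, tdim)), params)
```

The design point being illustrated: the image branch collapses spatial structure into a fixed-length vector, while the GRU compresses a variable-length tactile time series into its final hidden state, so the two modalities can be fused by simple concatenation before the classifier.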
Authors
SHEN Shuxin, SONG Aiguo, YANG Yuyan, NI Jiangsheng (School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China)
Source
Manned Spaceflight (《载人航天》), 2022, No. 2, pp. 213-222 (10 pages); indexed in CSCD and the Peking University Core Journals list.
Funding
National Key R&D Program of China (2019YFC0119304).
Keywords
space manipulator
visual-tactile fusion
neural network
target recognition
dexterous hand