Abstract
This paper proposes an indoor environment target sensing and speech cooperative control method for patients with serious diseases who urgently need innovative assistive technologies during postoperative recovery. The method combines visual perception with a robotic-arm trajectory control algorithm to recognize and grasp everyday objects. First, to accurately recognize the pose of target objects, an RGB-D depth camera is combined with an improved YOLOv5 algorithm into which a Ghost lightweight module and an RFA convolution module are integrated. Second, the grasp positions and angles of target objects are obtained with an improved GGCNN network that integrates Inception and SE modules. Finally, kinematic solution and motion planning are carried out with MoveIt in the ROS operating system. The improved recognition algorithm is verified by physical grasping experiments, achieving a recognition rate of 90%. In addition, the success rate of speech-commanded grasping of target objects exceeds 80%, and the system efficiently performs basic actions such as grasping objects and pouring water, demonstrating good performance and practicability.
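The grasp-selection step described above can be illustrated with a minimal sketch: a GGCNN-style network outputs per-pixel quality, angle, and width maps, and the grasp is taken at the pixel of highest quality. All function names and the toy 3×3 maps below are hypothetical illustrations, not the authors' code.

```python
# Hypothetical sketch: selecting a grasp from GGCNN-style per-pixel outputs.
# quality[r][c] is the predicted grasp quality at a pixel; angle and width
# give the gripper rotation (radians) and opening (metres) at that pixel.

def select_grasp(quality, angle, width):
    """Return (row, col, angle, width) at the highest-quality pixel."""
    best = None
    for r, row in enumerate(quality):
        for c, q in enumerate(row):
            if best is None or q > best[0]:
                best = (q, r, c)
    _, r, c = best
    return r, c, angle[r][c], width[r][c]

# Toy maps: the centre pixel carries the best grasp.
quality = [[0.1, 0.2, 0.1],
           [0.3, 0.9, 0.4],
           [0.2, 0.1, 0.2]]
angle = [[0.0] * 3, [0.0, 1.57, 0.0], [0.0] * 3]   # radians
width = [[0.0] * 3, [0.0, 0.05, 0.0], [0.0] * 3]   # metres

print(select_grasp(quality, angle, width))  # (1, 1, 1.57, 0.05)
```

In the full system, the selected pixel would then be back-projected through the RGB-D camera intrinsics to a 3-D pose and passed to MoveIt for motion planning.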
Authors
HUA Jin
ZHANG Wenyue
JI Xinlong
WANG Li
WANG Kuang
HUA Jin; ZHANG Wenyue; JI Xinlong; WANG Li; WANG Kuang (School of Electronic Information Engineering, Xi’an Technological University, Xi’an 710021, China; National Research Center for Rehabilitation Technical Aids, Beijing 102600, China)
Source
《西安工业大学学报》
CAS
2024, No. 5, pp. 669-678 (10 pages)
Journal of Xi’an Technological University
Funding
National Natural Science Foundation of China (62303368)
Shaanxi Provincial Department of Science and Technology Project (2022GY-242).