Funding: the National Natural Science Foundation of China (No. 31760345).
Abstract: As agricultural internet of things (IoT) technology has evolved, smart agricultural robots need both flexibility and adaptability when moving in complex field environments. In this paper, we propose the concept of a vision-based navigation system for the agricultural IoT and a binocular vision navigation algorithm for smart agricultural robots, which fuses the edge contours and height information of crop rows in images to extract the navigation parameters. First, the speeded-up robust feature (SURF) extraction and matching algorithm is used to obtain feature point pairs from the green crop row images observed by the binocular parallel vision system. Then a confidence density image is constructed by integrating the enhanced elevation image and the corresponding binarized crop row image, in which the edge contour and height information of the crop rows are fused to extract the navigation parameters (θ, d) based on the model of a smart agricultural robot. Finally, five navigation network instruction sets are designed based on the navigation angle θ and the lateral distance d, which represent the basic movements of a certain type of smart agricultural robot working in a field. Simulated experiments in the laboratory show that the proposed algorithm is effective, with small turning errors and low standard deviations, and can provide a valuable reference for the further practical application of binocular vision navigation systems in smart agricultural robots within the agricultural IoT.
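The first step of the pipeline, SURF extraction and matching on the stereo pair, can be illustrated with a short OpenCV sketch. This is a minimal, hypothetical reconstruction rather than the paper's implementation: SURF lives in the opencv-contrib "nonfree" module, and the file names, Hessian threshold, focal length f, and baseline B below are assumed values. For a rectified parallel rig, depth follows from disparity as Z = f·B/(x_l − x_r), which supplies the height cues that the confidence density image later fuses with the edge contours.

import cv2
import numpy as np

# Assumed input images of a green crop row from the left/right cameras.
left = cv2.imread("crop_row_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("crop_row_right.png", cv2.IMREAD_GRAYSCALE)

# SURF detector (requires opencv-contrib built with OPENCV_ENABLE_NONFREE).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_l, des_l = surf.detectAndCompute(left, None)
kp_r, des_r = surf.detectAndCompute(right, None)

# Brute-force matching with Lowe's ratio test to keep reliable pairs.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_l, des_r, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Depth from disparity for a rectified parallel rig: Z = f * B / disparity.
f, B = 700.0, 0.12  # focal length (px) and baseline (m) -- assumed values
points = []
for m in good:
    (xl, yl), (xr, _) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
    disparity = xl - xr
    if disparity > 1.0:  # drop near-zero disparities (points too far away)
        points.append((xl, yl, f * B / disparity))
points = np.asarray(points)  # (u, v, depth) samples for the elevation image
print(f"{len(points)} matched crop-row points with depth estimates")

The resulting depth samples would feed the enhanced elevation image; the fusion with the binarized crop row image and the extraction of (θ, d) are specific to the paper and are not reproduced here.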
Funding: supported by the Thailand Research Fund and Solimac Automation Co., Ltd. under the Research and Researchers for Industry (RRI) Program, Grant No. MSD56I0098, and by the Office of the Higher Education Commission under the National Research University Project of Thailand.
Abstract: In this paper, we present a robot-vision-based system for coordinate measurement of feature points on large-scale automobile parts. The system consists of an industrial 6-DOF robot with a CCD camera mounted on it and a PC. The system moves the robot into the area of the feature points, and the camera acquires images of the feature points to be measured. The 3D positions of the feature points are obtained by model-based pose estimation applied to these images. The measured positions of all feature points are then transformed into the reference coordinate frame of the feature points, whose positions are obtained from a coordinate measuring machine (CMM). Finally, the point-to-point distances between the measured feature points and the reference feature points are calculated and reported. The results show that the root mean square error (RMSE) of the values measured by our system is less than 0.5 mm. Our system is adequate for automobile assembly and performs faster than conventional methods.
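The measurement chain described here, model-based pose estimation followed by a frame transform and an RMSE report, can be sketched in a few lines of Python with OpenCV. All numeric values below (model points, image detections, intrinsics, the camera-to-reference transform, and the synthetic CMM references) are illustrative placeholders, not data from the paper.

import numpy as np
import cv2

# 3D model points of one feature in its object frame (mm) -- illustrative.
model_pts = np.array([[0.0, 0.0, 0.0], [40.0, 0.0, 0.0],
                      [40.0, 30.0, 0.0], [0.0, 30.0, 0.0]])
# Their detected image locations (px) -- illustrative.
image_pts = np.array([[312.4, 240.1], [398.7, 242.9],
                      [396.2, 305.5], [310.8, 303.0]])
# Pinhole intrinsics from a prior camera calibration -- assumed.
K = np.array([[1200.0, 0.0, 320.0],
              [0.0, 1200.0, 240.0],
              [0.0, 0.0, 1.0]])

# Model-based pose estimation: recover the feature's pose in the camera frame.
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)
pts_cam = (R @ model_pts.T).T + tvec.ravel()  # measured 3D points, camera frame

# Transform into the CMM reference frame. In practice T_ref_cam would come from
# the robot's forward kinematics plus a hand-eye calibration; identity stands in.
T_ref_cam = np.eye(4)
pts_ref = (T_ref_cam[:3, :3] @ pts_cam.T).T + T_ref_cam[:3, 3]

# Point-to-point distances against the CMM reference points, then the RMSE.
rng = np.random.default_rng(0)
cmm_pts = pts_ref + rng.normal(0.0, 0.2, pts_ref.shape)  # synthetic stand-in
dists = np.linalg.norm(pts_ref - cmm_pts, axis=1)
rmse = float(np.sqrt(np.mean(dists ** 2)))
print(f"per-point errors (mm): {np.round(dists, 3)}, RMSE = {rmse:.3f} mm")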
Abstract: AIM: To evaluate the effect of a 3D surgical video system in vitrectomy for proliferative diabetic retinopathy (PDR) combined with tractional retinal detachment (TRD). METHODS: The clinical data of 32 patients (38 eyes) with PDR and localized TRD (without tractional retinal breaks) who underwent 25G minimally invasive vitrectomy at our hospital between August 2018 and March 2019 were retrospectively analyzed. Patients were grouped by the intraoperative viewing system: the trial group (16 patients, 19 eyes) was operated on with the 3D surgical video system, and the control group (16 patients, 19 eyes) with a conventional operating microscope. Operation time, intraoperative iatrogenic retinal breaks, and silicone oil tamponade were recorded for both groups. Patients were followed up for at least 6 months, and best-corrected visual acuity (BCVA) and postoperative complications were observed. RESULTS: In the trial group, an iatrogenic retinal break occurred in 1 eye and silicone oil was injected in 1 eye during surgery; the retina was completely reattached in all eyes; vitreous hemorrhage occurred in 4 eyes on postoperative day 1 and resolved spontaneously within 2-4 weeks; elevated intraocular pressure occurred in 6 eyes within 2 weeks and was controlled with medication in all cases; recurrent vitreous hemorrhage occurred in 2 eyes after 6 weeks; and 15 eyes had a BCVA better than 0.3 at 6 months. In the control group, iatrogenic retinal breaks occurred in 4 eyes and silicone oil was injected in 5 eyes; the retina was completely reattached in all eyes; vitreous hemorrhage occurred in 6 eyes on postoperative day 1 and resolved spontaneously within 2-4 weeks; elevated intraocular pressure occurred in 5 eyes within 2 weeks and was controlled with medication in all cases; recurrent vitreous hemorrhage occurred in 3 eyes after 6 weeks; and 14 eyes had a BCVA better than 0.3 at 6 months. All operations were completed successfully, with no serious complications such as endophthalmitis in either group, but the operation time of the trial group was significantly shorter than that of the control group (37.3±4.8 min vs 41.2±5.1 min, P=0.020). CONCLUSION: Applying the 3D surgical video system in vitrectomy for PDR combined with TRD can shorten operation time and improve surgical efficiency.
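The reported surgical-time difference can be sanity-checked from the summary statistics alone. The abstract does not state which test was used; the sketch below assumes an unpaired, equal-variance two-sample t-test on the 19 eyes per group, which reproduces P ≈ 0.020.

from scipy.stats import ttest_ind_from_stats

# Operation time: trial 37.3 +/- 4.8 min, control 41.2 +/- 5.1 min, 19 eyes each.
t, p = ttest_ind_from_stats(mean1=37.3, std1=4.8, nobs1=19,
                            mean2=41.2, std2=5.1, nobs2=19)
print(f"t = {t:.3f}, p = {p:.3f}")  # t ≈ -2.43, p ≈ 0.020, matching the abstract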