Abstract: The performance of deep learning (DL) networks has been improved by elaborating network structures. However, DL networks have many parameters, which strongly influence network performance. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rate, reducing both the error rate and the network training time for object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method achieves strong performance on the robot object recognition and grasping tasks.
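The GA-based hyperparameter search described above can be sketched as follows. The search ranges, population size, genetic operators, and the stand-in fitness function are all illustrative assumptions; the abstract does not specify them, and in the paper the fitness would be the validation error of a DBNN trained with the candidate hyperparameters.

```python
import random

# Hypothetical search ranges for the three DBNN hyperparameters the
# method optimizes; the actual ranges are not given in the abstract.
RANGES = {
    "hidden_units": (50, 500),      # integer
    "epochs": (10, 200),            # integer
    "learning_rate": (1e-4, 1e-1),  # float
}

def random_chromosome(rng):
    """One candidate = one setting of the three hyperparameters."""
    return {
        "hidden_units": rng.randint(*RANGES["hidden_units"]),
        "epochs": rng.randint(*RANGES["epochs"]),
        "learning_rate": rng.uniform(*RANGES["learning_rate"]),
    }

def crossover(a, b, rng):
    """Uniform crossover: each gene is taken from either parent."""
    return {k: (a if rng.random() < 0.5 else b)[k] for k in a}

def mutate(c, rng, p=0.2):
    """With probability p per gene, resample that gene from its range."""
    fresh = random_chromosome(rng)
    return {k: (fresh[k] if rng.random() < p else v) for k, v in c.items()}

def ga_search(fitness, pop_size=10, generations=5, seed=0):
    """Minimize `fitness` (a stand-in for the trained DBNN's error rate)."""
    rng = random.Random(seed)
    pop = [random_chromosome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]  # truncation selection keeps the best half
        children = [
            mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return min(pop, key=fitness)

# Toy fitness: pretend error is minimized at learning rate ~ 0.01.
best = ga_search(lambda c: abs(c["learning_rate"] - 0.01))
```

Because the elite half is carried over unchanged, the best candidate never worsens between generations; only the fitness function would need replacing to optimize a real DBNN.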
Funding: This work was supported by the National Natural Science Foundation of China (61873077, 61806062), the Zhejiang Provincial Major Research and Development Project of China (2020C01110), and the Zhejiang Provincial Key Laboratory of Equipment Electronics.
Abstract: Directly grasping tightly stacked objects may cause collisions and result in failures, degrading the functionality of robotic arms. Inspired by the observation that first pushing objects to a state of mutual separation and then grasping them individually can effectively increase the success rate, we devise a novel deep Q-learning framework to achieve collaborative pushing and grasping. Specifically, an efficient non-maximum suppression policy (PolicyNMS) is proposed to dynamically evaluate pushing and grasping actions by enforcing a suppression constraint on unreasonable actions. Moreover, a novel data-driven pushing reward network called PR-Net is designed to effectively assess the degree of separation or aggregation between objects. To benchmark the proposed method, we establish a common household items dataset (CHID) in both simulated and real scenarios. Although trained using simulation data only, experimental results validate that our method generalizes well to real scenarios and achieves a 97% grasp success rate at a fast speed for object separation in a real-world environment.
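The core idea of suppressing unreasonable actions before action selection can be sketched as below. This is a hedged illustration only: the names and the feasibility check are assumptions, and in the paper the constraint operates on pixel-wise Q-maps rather than a small discrete action list.

```python
# Before arg-maxing over Q-values, invalidate actions that fail a
# feasibility check (e.g. a grasp centred on free space, or a push
# with no object in front of the gripper).

NEG_INF = float("-inf")

def masked_argmax(q_values, is_reasonable):
    """Pick the best action among those passing the constraint.

    q_values      : list of floats, one per discrete action
    is_reasonable : predicate on the action index
    Returns the chosen index, or None if every action was suppressed.
    """
    masked = [q if is_reasonable(i) else NEG_INF
              for i, q in enumerate(q_values)]
    best = max(range(len(masked)), key=masked.__getitem__)
    if masked[best] == NEG_INF:
        return None
    return best

# Toy scene: five candidate actions; suppose only odd-indexed actions
# pass the feasibility check.
q = [0.9, 0.2, 0.8, 0.5, 0.7]
choice = masked_argmax(q, is_reasonable=lambda i: i % 2 == 1)
# the highest *unmasked* Q-value is 0.5, at index 3
```

Without the mask, the raw argmax would pick index 0, i.e. a high-scoring but infeasible action; the suppression constraint removes exactly that failure mode.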
Abstract: Background: Robot grasping encompasses a wide range of research areas; however, most studies have focused on grasping stationary objects in a scene, and only a few have addressed grasping objects from a user's hand. In this paper, a robot grasping algorithm based on deep reinforcement learning (RGRL) is proposed. Methods: The RGRL takes the relative positions of the robot and the object in a user's hand as input and outputs the best action of the robot in the current state. Thus, the proposed algorithm realizes autonomous path planning and safe grasping of objects from the hands of users, exploring a new method for improving the safety of human-robot cooperation. To address the low sample-utilization rate and slow convergence of reinforcement learning algorithms, the RGRL is first trained in a simulated scene, and the model parameters are then applied to a real scene. To reduce the difference between the simulated and real scenes, domain randomization is applied to randomly change the positions and angles of objects in the simulated scenes at regular intervals, thereby improving the diversity of the training samples and the robustness of the algorithm. Results: The RGRL's effectiveness and accuracy are verified by evaluating it on both simulated and real scenes, and the results show that it achieves an accuracy of more than 80% in both cases. Conclusions: The RGRL employs domain randomization and deep reinforcement learning for effective grasping in simulated and real scenes. However, it lacks flexibility in adapting to different grasping poses, prompting future research on safe grasping for diverse user postures.
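The domain-randomization step described above can be sketched as a training loop that re-randomizes object poses at a fixed interval. All names, pose ranges, and the interval are illustrative assumptions; the rollout itself is replaced by a stand-in, since the abstract gives no environment details.

```python
import random

def randomize_scene(objects, rng, x_range=(-0.3, 0.3),
                    y_range=(-0.3, 0.3), yaw_range=(0.0, 360.0)):
    """Assign each object a new random (x, y, yaw_degrees) pose."""
    return {
        name: (rng.uniform(*x_range),
               rng.uniform(*y_range),
               rng.uniform(*yaw_range))
        for name in objects
    }

def train(num_episodes, randomize_every, rng):
    """Skeleton training loop: re-randomize poses every few episodes."""
    objects = ["cup", "box"]  # placeholder object set
    poses = randomize_scene(objects, rng)
    history = []
    for episode in range(num_episodes):
        if episode % randomize_every == 0:
            poses = randomize_scene(objects, rng)
        # ... an RL rollout against `poses` would run here ...
        history.append(dict(poses))  # record the scene each episode saw
    return history

history = train(num_episodes=6, randomize_every=3, rng=random.Random(1))
# episodes 0-2 share one scene layout, episodes 3-5 share another
```

The point of the interval is that each sampled layout is seen for several episodes (enough to learn something from it) before the scene changes, while the overall training distribution still covers many layouts.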
Abstract: The application of deep learning to robotics over the past decade has led to a wave of research into deep artificial neural networks and to very specific problems and questions that are not usually addressed by the computer vision and machine learning communities. Robots have always faced many unique challenges as robotic platforms move from the lab to the real world. In particular, the sheer diversity encountered in real-world environments is a huge challenge for today's robotic control algorithms, and this necessitates machine learning algorithms that can learn control policies from data. Deep learning algorithms are general non-linear models capable of learning features directly from data, making them an excellent choice for such robotic applications. Indeed, robotics and artificial intelligence (AI) are increasing and amplifying human potential, enhancing productivity, and moving from simple thinking towards human-like cognitive abilities. In this paper, the learning, thinking, and embodiment challenges of deep learning robots are discussed. The problem addressed is robotic grasping and tracking motion planning, which is the most fundamental and formidable challenge in designing autonomous robots. This paper aims to provide the reader an overview of DL and robotic grasping, as well as the problems of tracking and motion planning. The system is tested on simulated data and in real experiments with success.
Abstract: This research characterizes grasping by multifingered robot hands through decomposition of the space of contact forces into four subspaces; a method is developed to determine the dimensions of the subspaces with respect to the connectivity of the object. The relationship reveals the differences between the three types of grasps classified and indicates how the contact force can be decomposed corresponding to each type of grasp. The subspaces and the determination of their dimensions are illustrated by examples.
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 91748204, 51905183, 91948301) and the China Postdoctoral Science Foundation (Grant No. 2018M642820).
Abstract: Visual tracking and grasping of moving objects is a challenging task in the field of robotic manipulation, with great potential in applications such as human-robot collaboration. Based on the particle filtering framework and position-based visual servoing, this paper proposes a new method for visual tracking and grasping of randomly moving objects. A geometric particle filter tracker is established for visual tracking. To improve the efficiency of the particle filter, edge detection and morphological dilation are employed to reduce the computational burden of geometric particle filtering. Meanwhile, HSV image features are employed instead of grayscale features to improve the tracking algorithm's robustness to illumination changes. A grasping strategy combining tracking and interception is adopted, along with the position-based visual servoing (PBVS) method, to achieve a stable grasp of the target. Comprehensive comparisons on an open-source dataset and a large number of experiments on a real robot system demonstrate that the proposed method achieves competitive performance in tracking and grasping randomly moving objects.
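The efficiency trick described above, restricting the particle filter to a dilated edge region, can be illustrated in miniature. This is a sketch under assumptions: the edge detector is replaced by a given binary edge mask, the structuring element is a square, and particles are simple (x, y) pixel coordinates.

```python
def dilate(mask, radius=1):
    """Binary dilation of a 2-D 0/1 grid with a square structuring element."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                # set every cell within `radius` of a foreground cell
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            out[ny][nx] = 1
    return out

def keep_particles(particles, region):
    """Discard particles that fall outside the dilated edge region."""
    return [(x, y) for (x, y) in particles if region[y][x]]

# Toy 4x4 edge map with a single edge pixel at (row 1, col 1).
edges = [[0, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
region = dilate(edges, radius=1)
survivors = keep_particles([(0, 0), (2, 2), (3, 3)], region)
# (0,0) and (2,2) lie within one pixel of the edge; (3,3) does not
```

Only the surviving particles need their (comparatively expensive) geometric likelihood evaluated, which is the source of the speed-up; a real implementation would use a vision library's edge detector and dilation rather than this nested-loop version.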
Funding: Supported by the National Natural Science Foundation of China (No. 92048205) and the China Scholarship Council (No. 202008310014).
Abstract: To balance the inference speed and detection accuracy of a grasp detection algorithm, both of which are important for robot grasping tasks, we propose an encoder-decoder structured pixel-level grasp detection neural network named the attention-based efficient robot grasp detection network (AE-GDN). Three spatial attention modules are introduced in the encoder stages to enhance detailed information, and three channel attention modules are introduced in the decoder stages to extract more semantic information. Several lightweight and efficient DenseBlocks are used to connect the encoder and decoder paths to improve the feature modeling capability of AE-GDN. A high intersection over union (IoU) value between the predicted grasp rectangle and the ground truth does not necessarily mean a high-quality grasp configuration, and might even correspond to a collision, because traditional IoU loss calculations treat the center part of the predicted rectangle as being as important as the area around the grippers. We design a new IoU loss calculation method based on an hourglass box matching mechanism, which creates good correspondence between high IoUs and high-quality grasp configurations. AE-GDN achieves accuracies of 98.9% and 96.6% on the Cornell and Jacquard datasets, respectively. The inference speed reaches 43.5 frames per second with only about 1.2×10^6 parameters. The proposed AE-GDN has also been deployed on a practical robotic arm grasping system and performs grasping well. Code is available at https://github.com/robvincen/robot_gradet.
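For reference, the standard axis-aligned IoU that the hourglass-based loss refines can be computed as below. This sketch deliberately shows only the baseline: it weights every pixel of the rectangles equally, which is exactly the limitation the abstract describes; the hourglass matching mechanism itself is not reproduced here.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap extents, clamped at zero for disjoint boxes
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

# identical boxes overlap fully; disjoint boxes not at all
full = iou((0, 0, 2, 2), (0, 0, 2, 2))     # 1.0
none = iou((0, 0, 1, 1), (2, 2, 3, 3))     # 0.0
partial = iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1 / 7
```

Two predicted rectangles with the same IoU against the ground truth can differ greatly in where the overlap falls relative to the gripper plates, which is why a per-region weighting such as the hourglass scheme is needed on top of this.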
基金supported by the National Key Research and Development Program of China under Grant No.2018AAA010-3002the National Natural Science Foundation of China under Grant Nos.62172392,61702482 and 61972379.
Abstract: Grasp detection is a visual recognition task in which a robot uses its sensors to detect graspable objects in its environment. Despite steady progress in robotic grasping, achieving both real-time and highly accurate grasp detection remains difficult. In this paper, we propose a real-time robotic grasp detection method that accurately predicts potential grasps for parallel-plate robotic grippers from RGB images. Our work employs an end-to-end convolutional neural network consisting of a feature descriptor and a grasp detector, and, for the first time, adds an attention mechanism to the grasp detection task, enabling the network to focus on grasp regions rather than the background. Specifically, we present an angular label smoothing strategy in our grasp detection method to enhance the fault tolerance of the network. We quantitatively and qualitatively evaluate our method from different perspectives on the public Cornell and Jacquard datasets. Extensive experiments demonstrate that our grasp detection method achieves performance superior to state-of-the-art methods; in particular, it ranked first on both the Cornell and Jacquard datasets, with accuracies of 98.9% and 95.6%, respectively, at real-time calculation speed.
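One plausible reading of the angular label smoothing strategy is sketched below: instead of a one-hot target over discretized grasp angles, a little probability mass is shared with the two neighbouring angle bins, so a near-miss in angle is penalized less than a gross error. The bin count, the smoothing weight, and the neighbours-only scheme are assumptions, not the paper's exact formulation.

```python
def smooth_angle_target(true_bin, num_bins=18, eps=0.1):
    """Soft target over angle bins: 1 - eps on the true bin, eps split
    between its two neighbours.

    Grasp angles wrap around (a 180-degree grasp equals a 0-degree
    grasp for a parallel-plate gripper), hence the modular indices.
    """
    target = [0.0] * num_bins
    target[true_bin] = 1.0 - eps
    target[(true_bin - 1) % num_bins] = eps / 2
    target[(true_bin + 1) % num_bins] = eps / 2
    return target

# 18 bins of 10 degrees each; true angle falls in bin 0, so the last
# bin (its wrap-around neighbour) also receives mass.
t = smooth_angle_target(0, num_bins=18, eps=0.1)
```

Trained against such targets with a cross-entropy loss, the network is not forced to drive neighbouring-bin probabilities to zero, which is where the claimed fault tolerance to small angular errors would come from.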
基金the National Key Research and Development Program of China(No.2020YFB1313100)。
Abstract: Recently, soft grippers have garnered considerable interest in various fields, such as medical rehabilitation, due to their high compliance. However, the traditional PneuNet only reliably grasps medium and large objects via enveloping grasping (EG) and cannot realize pinching grasping (PG) to stably grasp small and thin objects, as EG requires a large bending angle whereas PG requires a much smaller one. Therefore, we propose a multi-structure soft gripper (MSSG) with only one vent per finger, which combines the PneuNet in the proximal segment with a normal soft pneumatic actuator (NSPA) in the distal segment, allowing PG to be realized without a loss in EG and enhancing the robustness of PG owing to the height difference between the distal and proximal segments. Grasping was characterized in terms of stability (described by the finger bending angle) and robustness (described by the pull-out force), and the bending angle and pull-out force of the MSSG were analyzed using the finite element method. Furthermore, the grasping performance was validated experimentally, and the results demonstrated that the MSSG with one vent per finger is able to realize PG without a loss in EG and effectively enhances PG robustness.