Humans can perceive our complex world through multi-sensory fusion. Under limited visual conditions, people can sense a variety of tactile signals to identify objects accurately and rapidly. However, replicating this unique capability in robots remains a significant challenge. Here, we present a new form of ultralight, multifunctional tactile nano-layered carbon aerogel sensor that provides pressure, temperature, material-recognition and 3D-location capabilities and is combined with multimodal supervised learning algorithms for object recognition. The sensor exhibits human-like pressure (0.04–100 kPa) and temperature (21.5–66.2 °C) detection, millisecond response times (11 ms), a pressure sensitivity of 92.22 kPa⁻¹ and triboelectric durability over 6000 cycles. The devised algorithm is general and can accommodate a range of application scenarios. The tactile system identifies common foods in a kitchen scene with 94.63% accuracy and explores the topographic and geomorphic features of a Mars scene with 100% accuracy. This sensing approach empowers robots with versatile tactile perception, advancing future society toward heightened sensing, recognition and intelligence.
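The abstract does not describe how the multimodal supervised learning step is implemented, so the following is only a minimal sketch of the general idea: features extracted from the pressure, temperature and triboelectric (material-recognition) channels are concatenated into one vector and passed to an off-the-shelf supervised classifier. The feature dimensions, the synthetic data and the classifier choice (a scikit-learn random forest) are illustrative assumptions, not the authors' method.

```python
# Sketch of multimodal supervised object recognition from tactile signals.
# Assumption: each touch sample yields fixed-length feature vectors per channel;
# the fused feature is their concatenation. All shapes and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def fuse_modalities(pressure_feat, temperature_feat, tribo_feat):
    """Concatenate per-modality feature vectors into one multimodal vector."""
    return np.concatenate([pressure_feat, temperature_feat, tribo_feat], axis=-1)

# Hypothetical dataset: N touch samples with three modality feature vectors
# and an object label (e.g., foods in a kitchen scene).
N = 500
rng = np.random.default_rng(0)
pressure = rng.normal(size=(N, 16))      # e.g., pressure-waveform statistics
temperature = rng.normal(size=(N, 4))    # e.g., contact-temperature features
tribo = rng.normal(size=(N, 8))          # e.g., triboelectric output features
labels = rng.integers(0, 10, size=N)     # 10 hypothetical object classes

X = fuse_modalities(pressure, temperature, tribo)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("recognition accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real sensor features in place of the random arrays, the same fit/predict pattern would yield the per-class recognition accuracies reported in the abstract.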
Generating ground-truth data for developing object detection algorithms for intelligent surveillance systems is an important yet time-consuming task; a user-friendly tool to annotate videos efficiently and accurately is therefore required. This paper describes the development of a semi-automatic video annotation tool. For efficiency, the tool can automatically generate initial annotation data for the input videos using automatic object detection modules, which are developed independently and registered in the tool. To guarantee the accuracy of the ground-truth data, the system also provides several user-friendly functions that help users check and edit the initial annotation data generated by the automatic object detection modules. According to the experimental results, the developed annotation tool substantially reduces annotation effort; compared with manual annotation, using the tool reduced annotation time by a factor of up to 2.3.
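As a rough illustration of the workflow described above (registered detection modules pre-annotate every frame, after which a user reviews and edits the results), here is a minimal sketch; all class, function and module names are hypothetical and do not come from the actual tool.

```python
# Sketch of a semi-automatic annotation flow: registered detector modules
# produce initial boxes, then a human-correction pass refines them.
# Interfaces are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Box:
    label: str
    x: float
    y: float
    w: float
    h: float

# A detector module maps a frame (image array or path) to a list of boxes.
Detector = Callable[[object], List[Box]]

@dataclass
class AnnotationTool:
    detectors: Dict[str, Detector] = field(default_factory=dict)

    def register(self, name: str, detector: Detector) -> None:
        """Register an independently developed detection module."""
        self.detectors[name] = detector

    def pre_annotate(self, frames: List[object], detector_name: str) -> List[List[Box]]:
        """Automatically generate initial annotations for every frame."""
        detect = self.detectors[detector_name]
        return [detect(frame) for frame in frames]

    def review(self, annotations: List[List[Box]],
               edit_fn: Callable[[List[Box]], List[Box]]) -> List[List[Box]]:
        """Apply user corrections; edit_fn stands in for the GUI editing step."""
        return [edit_fn(boxes) for boxes in annotations]

# Usage with a dummy detector standing in for a real detection module.
tool = AnnotationTool()
tool.register("dummy", lambda frame: [Box("person", 10, 20, 50, 80)])
initial = tool.pre_annotate(frames=["frame_0.jpg", "frame_1.jpg"], detector_name="dummy")
final = tool.review(initial, edit_fn=lambda boxes: boxes)  # user edits would go here
print(final)
```

The time saving reported in the paper comes from the pre-annotation step: the user only corrects detector output instead of drawing every box from scratch.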
Funding: the National Natural Science Foundation of China (Grant No. 52072041), the Beijing Natural Science Foundation (Grant No. JQ21007), the University of Chinese Academy of Sciences (Grant No. Y8540XX2D2), the Robotics Rhino-Bird Focused Research Project (No. 2020-01-002), and the Tencent Robotics X Laboratory.