Funding: Supported by the International Cooperation Program of the National Natural Science Foundation of China (Grant No. 52261135542), the Zhejiang Provincial Natural Science Foundation of China (Grant No. LD22E050002), and the Zhejiang University Global Partnership Fund. The authors are also grateful to the Russian Science Foundation (Grant No. 23-43-00057) for financial support.
Abstract: Real-time proprioception presents a significant challenge for soft robots due to their infinite degrees of freedom and intrinsic compliance. Previous studies mostly focused on specific sensors and actuators. There is still a lack of generalizable technologies for integrating soft sensing elements into soft actuators and for mapping sensor signals to proprioception parameters. To tackle this problem, we employed multi-material 3D printing technology to fabricate sensorized soft-bending actuators (SBAs) using plain and conductive thermoplastic polyurethane (TPU) filaments. We designed various geometric shapes for the sensors and investigated their strain-resistive performance during deformation. To address the nonlinear, time-variant behavior of the sensors during dynamic modeling, we adopted a data-driven approach using different deep neural networks to learn the relationship between sensor signals and system states. A series of experiments in various actuation scenarios was conducted, and the results demonstrated the effectiveness of this approach. The sensing and shape-prediction steps can run in real time at a frequency of 50 Hz on a consumer-level computer. Additionally, a method is proposed to enhance the robustness of the learning models using data augmentation to handle unexpected sensor failures. All the methods are effective not only for in-plane 2D shape estimation but also for out-of-plane 3D shape estimation. The aim of this study is to introduce a methodology for the proprioception of soft pneumatic actuators, including manufacturing and sensing modeling, that can be generalized to other soft robots.
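The robustness step described above, augmenting training data to tolerate sensor failures, can be sketched as follows. This is a minimal illustration only: the function name, failure rate, and the choice of zeroing whole channels (mimicking an open-circuit fault) are assumptions, not the paper's exact augmentation scheme.

```python
import numpy as np

def augment_sensor_failures(signals, failure_rate=0.1, rng=None):
    """Return a copy of a sensor batch with random channel dropouts.

    signals: (n_samples, n_sensors) array of strain-resistive readings.
    Each reading is independently zeroed with probability `failure_rate`,
    simulating a sensor that suddenly stops responding.
    """
    rng = np.random.default_rng(rng)
    augmented = signals.copy()
    mask = rng.random(signals.shape) < failure_rate  # True = failed reading
    augmented[mask] = 0.0
    return augmented

# Toy batch: 4 samples from 3 hypothetical strain-resistive channels
batch = np.ones((4, 3))
aug = augment_sensor_failures(batch, failure_rate=0.5, rng=0)
```

Training the shape-prediction network on a mix of clean and augmented batches exposes it to failure patterns before deployment, which is one plausible way to realize the robustness claim.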
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51875507 and 52005439) and the Key Research and Development Program of Zhejiang Province (Grant No. 2021C01018).
Abstract: Automated waste sorting can dramatically increase waste-sorting efficiency and reduce its regulation cost. Most current methods use only a single modality, such as image data or acoustic data, for waste classification, which makes it difficult to classify mixed and confusable wastes. In these complex situations, using multiple modalities becomes necessary to achieve high classification accuracy. Traditionally, the fusion of multiple modalities has been limited by fixed handcrafted features. In this study, a deep-learning approach was applied to multimodal fusion at the feature level for municipal solid-waste sorting. More specifically, a pre-trained VGG16 and one-dimensional convolutional neural networks (1D CNNs) were utilized to extract features from visual data and acoustic data, respectively. These deeply learned features were then fused in the fully connected layers for classification. The results of comparative experiments proved that the proposed method was superior to the single-modality methods. Additionally, the feature-based fusion strategy performed better than the decision-based strategy with deeply learned features.
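The feature-level fusion idea above, concatenating per-modality feature vectors before a shared fully connected classifier, can be sketched as below. The dimensions, weights, and function name are illustrative placeholders, not the paper's VGG16/1D-CNN architecture; in practice the two feature vectors would come from the respective pre-trained branches.

```python
import numpy as np

def fuse_and_classify(visual_feat, acoustic_feat, w, b):
    """Feature-level fusion: concatenate the two modalities' feature
    vectors, then apply one fully connected layer with a softmax."""
    fused = np.concatenate([visual_feat, acoustic_feat], axis=1)
    logits = fused @ w + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

# Toy example: 2 samples, 4-dim "visual" and 3-dim "acoustic" features,
# classified into 5 hypothetical waste categories
rng = np.random.default_rng(0)
vis, ac = rng.random((2, 4)), rng.random((2, 3))
w, b = rng.random((7, 5)), np.zeros(5)  # 7 = 4 + 3 fused dimensions
probs = fuse_and_classify(vis, ac, w, b)
```

The contrast with decision-based fusion is that here a single classifier sees the joint feature space, rather than each modality voting with its own separately trained classifier.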