Journal articles: 2 articles found
1. Shape estimation for a TPU-based multi-material 3D printed soft pneumatic actuator using deep learning models
Authors: HU Yu, TANG Wei, QU Yang, XU HuXiu, KRAMARENKO Yu.Elena, ZOU Jun. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 1470-1481 (12 pages).
Abstract: Real-time proprioception presents a significant challenge for soft robots due to their infinite degrees of freedom and intrinsic compliance. Previous studies mostly focused on specific sensors and actuators. There is still a lack of generalizable technologies for integrating soft sensing elements into soft actuators and mapping sensor signals to proprioception parameters. To tackle this problem, we employed multi-material 3D printing technology to fabricate sensorized soft-bending actuators (SBAs) using plain and conductive thermoplastic polyurethane (TPU) filaments. We designed various geometric shapes for the sensors and investigated their strain-resistive performance during deformation. To address the nonlinear time-variant behavior of the sensors during dynamic modeling, we adopted a data-driven approach using different deep neural networks to learn the relationship between sensor signals and system states. A series of experiments in various actuation scenarios were conducted, and the results demonstrated the effectiveness of this approach. The sensing and shape prediction steps can run in real time at a frequency of 50 Hz on a consumer-level computer. Additionally, a method is proposed to enhance the robustness of the learning models using data augmentation to handle unexpected sensor failures. All the methods are efficient, not only for in-plane 2D shape estimation but also for out-of-plane 3D shape estimation. The aim of this study is to introduce a methodology for the proprioception of soft pneumatic actuators, including manufacturing and sensing modeling, that can be generalized to other soft robots.
Keywords: shape estimation; soft sensors and actuators; 3D printing; deep learning in robotics
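The first abstract describes a data-driven mapping from printed strain-resistive sensor signals to shape parameters, plus data augmentation against unexpected sensor failure. As a rough illustration only (not the authors' published code), the sketch below shows one plausible form of such a model in PyTorch; the channel count, window length, output dimension, and the channel-dropout augmentation are all assumptions.

```python
# Illustrative sketch only, not the paper's implementation.
# Assumed: 4 strain-resistive channels, a 25-sample history window
# (~0.5 s at the reported 50 Hz), and 3 shape parameters as output.
import torch
import torch.nn as nn

N_SENSORS = 4        # assumed number of conductive-TPU sensing channels
WINDOW = 25          # assumed signal-history length fed to the model
N_SHAPE_PARAMS = 3   # assumed shape descriptors (e.g., bending angles)

class ShapeEstimator(nn.Module):
    """Maps a short history of sensor readings to current shape parameters."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=N_SENSORS, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, N_SHAPE_PARAMS)

    def forward(self, x):              # x: (batch, WINDOW, N_SENSORS)
        out, _ = self.rnn(x)           # out: (batch, WINDOW, 64)
        return self.head(out[:, -1])   # predict from the last time step

def drop_random_channel(batch, p=0.3):
    """Augmentation in the spirit of the sensor-failure robustness idea:
    occasionally zero one channel so the model tolerates a dead sensor."""
    if torch.rand(()).item() < p:
        ch = torch.randint(0, N_SENSORS, (1,)).item()
        batch = batch.clone()
        batch[..., ch] = 0.0
    return batch

model = ShapeEstimator()
dummy = torch.randn(8, WINDOW, N_SENSORS)     # fake sensor histories
pred = model(drop_random_channel(dummy))      # (8, N_SHAPE_PARAMS)
```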
2. Deep multimodal learning for municipal solid waste sorting (cited by 2)
Authors: LU Gang, WANG YuanBin, XU HuXiu, YANG HuaYong, ZOU Jun. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2022, Issue 2, pp. 324-335 (12 pages).
Abstract: Automated waste sorting can dramatically increase waste sorting efficiency and reduce its regulation cost. Most of the current methods only use a single modality such as image data or acoustic data for waste classification, which makes it difficult to classify mixed and confusable wastes. In these complex situations, using multiple modalities becomes necessary to achieve a high classification accuracy. Traditionally, the fusion of multiple modalities has been limited by fixed handcrafted features. In this study, the deep-learning approach was applied to multimodal fusion at the feature level for municipal solid-waste sorting. More specifically, the pre-trained VGG16 and one-dimensional convolutional neural networks (1D CNNs) were utilized to extract features from visual data and acoustic data, respectively. These deeply learned features were then fused in the fully connected layers for classification. The results of comparative experiments proved that the proposed method was superior to the single-modality methods. Additionally, the feature-based fusion strategy performed better than the decision-based strategy with deeply learned features.
Keywords: deep multimodal learning; municipal waste sorting; multimodal fusion; convolutional neural networks
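The second abstract spells out the architecture: a pre-trained VGG16 for images, a 1D CNN for acoustic signals, and fusion in the fully connected layers. Below is a minimal sketch of that feature-level fusion, assuming ImageNet weights for VGG16; the class count, acoustic input length, and layer widths are assumptions, not values from the paper.

```python
# Illustrative sketch of feature-level multimodal fusion, not the paper's code.
# Assumed: 6 waste categories, 4000-sample acoustic clips, 224x224 RGB images.
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 6          # assumed number of waste categories
ACOUSTIC_LEN = 4000    # assumed length of each 1D acoustic signal

class MultimodalSorter(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Visual branch: pre-trained VGG16 convolutional features pooled to a 512-d vector.
        self.visual = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Acoustic branch: small 1D CNN reducing the raw signal to a 32-d vector.
        self.acoustic = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        # Fusion happens here, in the fully connected layers.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 32, 128), nn.ReLU(), nn.Linear(128, N_CLASSES))

    def forward(self, image, sound):   # image: (B,3,224,224), sound: (B,1,ACOUSTIC_LEN)
        fused = torch.cat([self.visual(image), self.acoustic(sound)], dim=1)
        return self.classifier(fused)

model = MultimodalSorter()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 1, ACOUSTIC_LEN))  # (2, N_CLASSES)
```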