
Machine Learning-Based Multi-Modal Information Perception for Soft Robotic Hands (Cited by: 5)

Abstract: This paper focuses on multi-modal Information Perception (IP) for Soft Robotic Hands (SRHs) using Machine Learning (ML) algorithms. A flexible Optical Fiber-based Curvature Sensor (OFCS) is fabricated, consisting of a Light-Emitting Diode (LED), a photosensitive detector, and an optical fiber. Bending the roughened optical fiber lowers the transmitted light intensity, which reflects the curvature of the soft finger. Together with the curvature and pressure information, multi-modal IP is performed to improve the recognition accuracy. Recognition of gesture, object shape, size, and weight is implemented with multiple ML approaches, including the Supervised Learning Algorithms (SLAs) of K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Logistic Regression (LR), and the unsupervised Learning Algorithm (un-SLA) of K-Means Clustering (KMC). Moreover, Optical Sensor Information (OSI), Pressure Sensor Information (PSI), and Double-Sensor Information (DSI) are adopted to compare the recognition accuracies. The experimental results demonstrate that the proposed sensors and recognition approaches are feasible and effective. The recognition accuracies obtained using the above ML algorithms and the three modes of sensor information are higher than 85 percent for almost all combinations. Moreover, DSI is more accurate than single-modal sensor information, and the KNN algorithm with DSI outperforms the other combinations in recognition accuracy.
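As a rough illustration of the recognition pipeline described in the abstract, the sketch below compares the named supervised learners (KNN, SVM, LR) and unsupervised K-Means clustering across the three sensor-information modes (OSI, PSI, DSI). This is a minimal sketch under stated assumptions, not the authors' implementation: the five optical and five pressure channels, the class count, and the synthetic data are placeholders, and scikit-learn is assumed for the algorithm implementations.

# Minimal sketch (hypothetical data and feature layout) of comparing the ML
# algorithms from the abstract on optical-only (OSI), pressure-only (PSI),
# and combined (DSI) sensor information.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score, adjusted_rand_score

rng = np.random.default_rng(0)
n_samples, n_classes = 600, 4            # e.g., four object shapes (hypothetical)
labels = rng.integers(0, n_classes, n_samples)
# Synthetic stand-ins: 5 curvature channels and 5 pressure channels (one per
# finger), with class-dependent means; placeholders for real sensor recordings.
optical = labels[:, None] * 0.5 + rng.normal(size=(n_samples, 5))
pressure = labels[:, None] * 0.3 + rng.normal(size=(n_samples, 5))

modes = {
    "OSI": optical,
    "PSI": pressure,
    "DSI": np.hstack([optical, pressure]),   # simple concatenation fusion
}
classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "LR": LogisticRegression(max_iter=1000),
}

for mode_name, X in modes.items():
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, random_state=0, stratify=labels)
    for clf_name, clf in classifiers.items():
        model = make_pipeline(StandardScaler(), clf).fit(X_tr, y_tr)
        acc = accuracy_score(y_te, model.predict(X_te))
        print(f"{mode_name} | {clf_name} | accuracy = {acc:.3f}")
    # Unsupervised K-Means: clusters are scored against the labels with the
    # adjusted Rand index instead of classification accuracy.
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(X)
    print(f"{mode_name} | KMC | adjusted Rand = "
          f"{adjusted_rand_score(labels, km.labels_):.3f}")

In this sketch the DSI mode simply concatenates the optical and pressure features, which is the most basic form of the double-sensor fusion that the abstract reports as the most accurate configuration.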
Source: Tsinghua Science and Technology (indexed in SCIE, EI, CAS, CSCD), 2020, No. 2, pp. 255-269 (15 pages).
Funding: Supported by the National Natural Science Foundation of China (Nos. 61803267 and 61572328), the China Postdoctoral Science Foundation (No. 2017M622757), the Beijing Science and Technology Program (No. Z171100000817007), and the National Natural Science Foundation of China (NSFC) and the German Research Foundation (DFG) within the project Cross Modal Learning (NSFC 61621136008 / DFG TRR-169).
Keywords: multi-modal sensors; optical fiber; gesture recognition; object recognition; Soft Robotic Hands (SRHs); Machine Learning (ML)
