A multidisciplinary approach to developing an intelligent multi-language sign recognition system that enhances communication for deaf-mute users is discussed and implemented. The work involves designing a low-cost glove-based sensing system, collecting large and diverse datasets, preprocessing the data, and applying efficient machine learning models. The glove is integrated with a user-friendly mobile application called "Life-sign". The main goal of this work is to minimize the processing time of machine learning classifiers while maintaining high accuracy. This is achieved through effective preprocessing algorithms that handle noisy and inconsistent data. Various classifiers were tested and iteratively refined to improve recognition accuracy. The Extra Trees (ET) classifier was identified as the best-performing algorithm, predicting gestures with an average accuracy of about 99.54%. A smart optimization feature controls the size of data transferred via Bluetooth, enabling fast recognition of consecutive gestures. Real-time performance was measured through extensive experimental testing on sequences of consecutive gestures, specifically in Arabic Sign Language (ArSL). The results demonstrate that the system achieves consecutive gesture recognition with a delay as low as 50 milliseconds.
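For orientation, the sketch below shows how an Extra Trees classifier of the kind named in the abstract can be trained on glove sensor feature vectors using scikit-learn. The feature layout (flex-sensor plus IMU readings), the number of gesture classes, and the random placeholder data are assumptions for illustration only; they are not the paper's "Life-sign" dataset or pipeline.

```python
# Minimal illustrative sketch: ExtraTreesClassifier on hypothetical glove features.
# Placeholder data stands in for real flex-sensor/IMU readings and gesture labels.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Assumed layout: 5 flex sensors + 3-axis accelerometer + 3-axis gyroscope = 11 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 11))       # placeholder sensor feature vectors
y = rng.integers(0, 10, size=1000)    # placeholder labels for 10 gesture classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Extremely randomized trees: an ensemble similar to Random Forests, but with
# fully random split thresholds, which tends to keep training and inference fast.
clf = ExtraTreesClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, pred):.4f}")
```

In a real deployment, the placeholder arrays would be replaced by preprocessed sensor windows streamed from the glove, and the trained model would run on the receiving side of the Bluetooth link to classify each incoming gesture frame.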