In the digital age, non-touch communication technologies are reshaping human-device interactions and raising security concerns. A major challenge in current technology is the misinterpretation of gestures by sensors and cameras, often caused by environmental factors. This issue has spurred the need for advanced data processing methods to achieve more accurate gesture recognition and prediction. Our study presents a novel virtual keyboard allowing character input via distinct hand gestures, focusing on two key aspects: hand gesture recognition and character input mechanisms. We developed a novel model with LSTM and fully connected layers for enhanced sequential data processing and hand gesture recognition. We also integrated CNN, max-pooling, and dropout layers for improved spatial feature extraction. This architecture processes both the temporal and spatial aspects of hand gestures, using the LSTM to extract complex patterns from frame sequences for a comprehensive understanding of the input data. Our dataset, essential for training the model, includes 1,662 landmarks from dynamic hand gestures, 33 postures, and 468 face landmarks, all captured in real time using advanced pose estimation. The model demonstrated high accuracy, achieving 98.52% in hand gesture recognition and over 97% in character input across different scenarios. Its excellent performance in real-time testing underlines its practicality and effectiveness, marking a significant advancement in human-device interaction in the digital age.
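To illustrate the sequential half of such an LSTM-plus-CNN architecture, here is a minimal single-unit LSTM cell in plain Python. The weights, the scalar per-frame feature, and the toy sequence are invented for this sketch and are not the paper's model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a single-unit LSTM cell on a scalar feature."""
    wi, wf, wo, wg = w
    i = sigmoid(wi[0] * x + wi[1] * h_prev)    # input gate
    f = sigmoid(wf[0] * x + wf[1] * h_prev)    # forget gate
    o = sigmoid(wo[0] * x + wo[1] * h_prev)    # output gate
    g = math.tanh(wg[0] * x + wg[1] * h_prev)  # candidate update
    c = f * c_prev + i * g                     # new cell state
    h = o * math.tanh(c)                       # new hidden state
    return h, c

# Invented shared gate weights and a toy per-frame feature sequence.
weights = ((0.5, 0.1), (0.5, 0.1), (0.5, 0.1), (0.5, 0.1))
h, c = 0.0, 0.0
for frame_feature in [0.2, 0.8, -0.1, 0.5]:
    h, c = lstm_step(frame_feature, h, c, weights)
print(-1.0 < h < 1.0)  # True: the hidden state stays bounded
```

Running the cell frame by frame is what lets the model accumulate temporal context before the fully connected layers classify the gesture.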
As technology advances and human requirements increase, human-computer interaction plays an important role in our daily lives. Among these interactions, gesture-based recognition offers a natural and intuitive user experience that does not require physical contact and is becoming increasingly prevalent across various fields. Gesture recognition systems based on Frequency Modulated Continuous Wave (FMCW) millimeter-wave radar are receiving widespread attention due to their ability to operate without wearable sensors, their robustness to environmental factors, and the excellent penetrative ability of radar signals. This paper first reviews the main current gesture recognition applications. Subsequently, we introduce gesture recognition systems based on FMCW radar and provide a general framework for gesture recognition, including gesture data acquisition, data preprocessing, and classification methods. We then discuss typical applications of gesture recognition systems and summarize their performance in terms of experimental environment, signal acquisition, signal processing, and classification methods. Specifically, we focus on four typical gesture recognition systems: air-writing recognition, gesture command recognition, sign language recognition, and text input recognition. Finally, this paper addresses the challenges and unresolved problems in FMCW radar-based gesture recognition and provides insights into potential future research directions.
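The radar geometry behind such FMCW systems reduces to one relation: the beat frequency of the de-chirped signal is proportional to target range, R = c · f_beat / (2 · slope). A small sketch with assumed, purely illustrative chirp parameters:

```python
C = 3e8           # speed of light, m/s
BANDWIDTH = 4e9   # assumed 4 GHz chirp sweep
T_CHIRP = 40e-6   # assumed 40 us chirp duration

def beat_to_range(f_beat):
    """Range from beat frequency: R = c * f_beat / (2 * slope)."""
    slope = BANDWIDTH / T_CHIRP  # chirp slope in Hz/s
    return C * f_beat / (2.0 * slope)

# A 100 kHz beat tone under these parameters maps to a hand at 0.15 m.
print(round(beat_to_range(100e3), 6))  # 0.15
```

In a full pipeline this range estimate comes from an FFT over each chirp; gesture features are then built from how range (and Doppler) evolve across chirps.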
With the advancement of technology and the increase in user demands, gesture recognition plays a pivotal role in the field of human-computer interaction. Among various sensing devices, Time-of-Flight (ToF) sensors are widely applied due to their low cost. This paper explores the implementation of a human hand posture recognition system using ToF sensors and residual neural networks. First, it reviews typical applications of human hand recognition. Second, it designs a hand gesture recognition system using the VL53L5 ToF sensor. Data preprocessing is then conducted, followed by training of the constructed residual neural network. Analysis of the recognition results indicates that gesture recognition based on the residual neural network achieves an accuracy of 98.5% in a 5-class classification scenario. Finally, the paper discusses existing issues and future research directions.
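The core of a residual network is the skip connection y = ReLU(x + F(x)), which lets a block fall back to (the ReLU of) the identity mapping when its weights contribute nothing. A toy plain-Python sketch; the 4-element input and zero weights are invented for illustration (the VL53L5 actually reports a multizone depth grid):

```python
def relu(v):
    return [x if x > 0 else 0.0 for x in v]

def matvec(w, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in w]

def residual_block(x, w1, w2):
    """y = ReLU(x + W2 @ ReLU(W1 @ x)): the skip path carries x through."""
    inner = relu(matvec(w1, x))
    return relu([xi + oi for xi, oi in zip(x, matvec(w2, inner))])

# With all-zero weights the block reduces to ReLU(x), i.e. the skip path.
zeros = [[0.0] * 4 for _ in range(4)]
depths = [0.5, 1.0, 2.0, 0.25]  # invented normalized zone depths
print(residual_block(depths, zeros, zeros) == depths)  # True
```

This identity fallback is what makes deep residual stacks trainable: each block only has to learn a correction to its input.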
This paper proposes a method to recognize human-object interactions by modeling the context between human actions and interacted objects. Human-object interaction recognition is a challenging task due to severe occlusion between humans and objects during the interaction process. Since human actions and interacted objects provide strong contextual information, i.e., some actions are usually related to specific objects, the recognition accuracy is significantly improved for both. In the proposed method, both global and local temporal features are extracted from skeleton sequences to model human actions, while kernel features are utilized to describe interacted objects. Finally, all candidate solutions for actions and objects are jointly optimized by modeling the context between them. Experimental results demonstrate the effectiveness of our method.
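The joint optimization step can be sketched as exhaustive scoring of (action, object) pairs, combining each unary score with a context compatibility term. The scores, labels, and compatibility table below are invented for illustration, not the paper's learned values:

```python
def best_pair(action_scores, object_scores, context):
    """Pick the (action, object) pair maximizing unary + context scores."""
    best, best_score = None, float("-inf")
    for a, sa in action_scores.items():
        for o, so in object_scores.items():
            s = sa + so + context.get((a, o), 0.0)
            if s > best_score:
                best, best_score = (a, o), s
    return best

actions = {"drink": 0.6, "call": 0.5}
objects = {"cup": 0.4, "phone": 0.45}
context = {("drink", "cup"): 0.5, ("call", "phone"): 0.4}
print(best_pair(actions, objects, context))  # ('drink', 'cup')
```

The context term is what rescues ambiguous unary detections: here "phone" narrowly beats "cup" on its own, but the action-object compatibility flips the joint decision.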
Appearance-based dynamic Hand Gesture Recognition (HGR) remains a prominent area of research in Human-Computer Interaction (HCI). Numerous environmental and computational constraints limit its real-time deployment. In addition, the performance of a model decreases as the subject's distance from the camera increases. This study proposes a 3D separable Convolutional Neural Network (CNN), considering both the model's computational complexity and its recognition accuracy. The 20BN-Jester dataset was used to train the model for six gesture classes. After achieving a best offline recognition accuracy of 94.39%, the model was deployed in real time while considering the subject's attention, the instant of performing a gesture, and the subject's distance from the camera. Despite being discussed in numerous research articles, the distance factor remains unresolved in real-time deployment, which leads to degraded recognition results. In the proposed approach, the distance calculation substantially improves classification performance by reducing the impact of the subject's distance from the camera. Additionally, the feature extraction capability, degree of relevance, and statistical significance of the proposed model against other state-of-the-art models were validated using t-distributed Stochastic Neighbor Embedding (t-SNE), the Matthews Correlation Coefficient (MCC), and the McNemar test, respectively. We observed that the proposed model exhibits state-of-the-art outcomes and a comparatively high significance level.
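One simple way to realize the kind of distance compensation described above is to rescale hand landmarks by the hand's apparent size before classification, so features become distance-invariant. This sketch is an assumption for illustration, not the paper's exact calculation:

```python
def normalize_by_distance(landmarks, ref_span=1.0):
    """Center landmarks and rescale so the hand's apparent size is fixed."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    span = max(max(xs) - min(xs), max(ys) - min(ys))  # apparent hand size
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    s = ref_span / span
    return [((x - cx) * s, (y - cy) * s) for x, y in landmarks]

near = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]  # invented points
far = [(x * 0.25, y * 0.25) for x, y in near]            # same hand, 4x away
print(normalize_by_distance(near) == normalize_by_distance(far))  # True
```

After normalization the near and far views of the same pose yield identical features, which is exactly the property that keeps recognition accuracy from degrading with distance.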
Hand gesture recognition is a popular topic in computer vision and makes human-computer interaction more flexible and convenient. The representation of hand gestures is critical for recognition. In this paper, we propose a new method to measure the similarity between hand gestures and exploit it for hand gesture recognition. Depth maps of hand gestures captured via Kinect sensors are used in our method, from which the 3D hand shapes can be segmented out of cluttered backgrounds. To extract the pattern of salient 3D shape features, we propose a new descriptor, 3D Shape Context, for 3D hand gesture representation. The 3D Shape Context information of each 3D point is computed at multiple scales, because both local shape context and global shape distribution are necessary for recognition. The descriptions of all the 3D points constitute the hand gesture representation, and hand gesture recognition is performed via the dynamic time warping algorithm. Extensive experiments are conducted on multiple benchmark datasets. The experimental results verify that the proposed method is robust to noise, articulated variations, and rigid transformations, and that it outperforms state-of-the-art methods in both accuracy and efficiency.
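The dynamic time warping step can be sketched directly. The descriptor sequences here are invented scalars standing in for the paper's 3D Shape Context features:

```python
def dtw(a, b):
    """Dynamic time warping distance with |a_i - b_j| as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

seq = [0.1, 0.5, 0.9, 0.4]                  # invented descriptor sequence
print(dtw(seq, seq))                        # 0.0: identical gestures
print(dtw(seq, [0.1, 0.1, 0.5, 0.9, 0.4])) # 0.0: the stretch is absorbed
```

DTW's tolerance to time-stretching is why it suits gesture matching: the same gesture performed at different speeds still yields a near-zero distance.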
Recently, vision-based gesture recognition (VGR) has become a hot research topic in human-computer interaction (HCI). Unlike gesture recognition methods based on data gloves or other wearable sensors, vision-based gesture recognition can lead to more natural and intuitive HCI. This paper reviews state-of-the-art vision-based gesture recognition methods across the different stages of the gesture recognition process: (1) image acquisition and pre-processing, (2) gesture segmentation, (3) gesture tracking, (4) feature extraction, and (5) gesture classification. The paper also analyzes the advantages and disadvantages of these various methods in detail. Finally, the challenges of vision-based gesture recognition in haptic rendering and future research directions are discussed.
A hand gesture recognition method based on fingertip localization is presented for human-computer interaction. First, the hand gesture is segmented from the background based on skin color characteristics. Second, feature vectors are selected at equal intervals on the boundary of the gesture, and the gesture's length is normalized. Third, the fingertip positions are determined from the feature vectors' parameters, and the angles of the feature vectors are normalized. Finally, the gestures are classified by a support vector machine. The experimental results demonstrate that the proposed method can recognize 9 gestures with an accuracy of 94.1%.
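The equal-interval boundary sampling and length normalization can be sketched as index-based resampling. This is a simplification of the paper's feature vectors; the boundary and point count are invented:

```python
def resample(boundary, n=16):
    """Pick n boundary points at equal index intervals (length normalization)."""
    step = len(boundary) / n
    return [boundary[int(i * step)] for i in range(n)]

# An invented boundary of 100 (x, y) points shrinks to 16 feature points,
# so gestures of different boundary lengths map to a fixed-size vector.
boundary = [(i, i * i) for i in range(100)]
pts = resample(boundary)
print(len(pts))  # 16
```

A fixed-length representation like this is what lets a support vector machine consume gestures whose raw boundaries differ in length.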
Background: One of the most critical issues in human-computer interaction applications is recognizing human emotions from speech. In recent years, the challenging problem of cross-corpus speech emotion recognition (SER) has generated extensive research. Nevertheless, the domain discrepancy between training data and testing data remains a major obstacle to improved system performance. Methods: This paper introduces a novel multi-scale discrepancy adversarial (MSDA) network that performs domain adaptation at multiple timescales for cross-corpus SER, i.e., it integrates domain discriminators at hierarchical levels into the emotion recognition framework to mitigate the gap between the source and target domains. Specifically, we extract two kinds of speech features, handcrafted features and deep features, at three timescales: global, local, and hybrid. At each timescale, the domain discriminator and the feature extractor compete against each other to learn features that minimize the discrepancy between the two domains by fooling the discriminator. Results: Extensive cross-corpus and cross-language SER experiments were conducted on a combination dataset comprising one Chinese dataset and two English datasets commonly used in SER. The MSDA benefits from the strong discriminative power provided by the adversarial process, in which three discriminators work in tandem with an emotion classifier. Accordingly, the MSDA achieves the best performance among all baseline methods. Conclusions: The proposed architecture was tested on a combination of one Chinese and two English datasets. The experimental results demonstrate the superiority of our discriminative model for solving cross-corpus SER.
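The multi-timescale adversarial objective can be sketched as an emotion classification loss plus weighted domain-discriminator losses, one per timescale. The probabilities, weight λ, and per-timescale losses below are invented for illustration (a chance-level binary domain loss is ln 2 ≈ 0.6931):

```python
import math

def cross_entropy(probs, true_idx):
    """Negative log-likelihood of the true class."""
    return -math.log(probs[true_idx])

def total_loss(emotion_probs, y, domain_losses, lam=0.1):
    """Emotion loss plus weighted discriminator losses (one per timescale)."""
    return cross_entropy(emotion_probs, y) + lam * sum(domain_losses)

# Invented softmax output for 3 emotions (true class 0) and three
# chance-level domain losses from global/local/hybrid discriminators.
loss = total_loss([0.7, 0.2, 0.1], 0, [0.6931, 0.6931, 0.6931])
print(round(loss, 4))  # 0.5646
```

In the adversarial setup the feature extractor is trained to push each domain loss toward this chance level (fooling the discriminator), while the discriminators themselves are trained to drive it down.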
Artificial entities, such as virtual agents, have become more pervasive. Their long-term presence among humans requires the virtual agent's ability to express appropriate emotions to elicit the necessary empathy from users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristics of such synchrony between humans and virtual agents remain unclear in empathic interactions. Our study evaluates participants' behavioral synchronization when a virtual agent exhibits an emotional expression congruent with the emotional context through facial expressions, behavioral gestures, and voice. Participants viewed an emotion-eliciting video stimulus (negative or positive) with a virtual agent and then conversed with the agent about the video, for example, how they felt about its content. The virtual agent expressed either emotions congruent with the video or a neutral emotion during the dialog. The participants' facial expressions, such as facial expressive intensity and facial muscle movement, were measured during the dialog using a camera. The results showed significant behavioral synchronization (i.e., cosine similarity ≥ .05) in both the negative and positive emotion conditions, evident in the participants' facial mimicry with the virtual agent. Additionally, the participants' facial expressions, in both movement and intensity, were significantly stronger with the emotional virtual agent than with the neutral virtual agent. In particular, we found that the facial muscle intensity of AU45 (Blink) is an effective index for assessing participant synchronization, which differs by the individual's empathic capability (low, mid, high). Based on the results, we suggest an appraisal criterion providing empirical conditions to validate empathic interaction based on the facial expression measures.
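The synchrony measure can be sketched as cosine similarity between windowed facial feature vectors of the participant and the agent. The action-unit intensity values below are invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

participant = [0.2, 0.8, 0.1, 0.5]  # invented AU intensities, one window
agent = [0.25, 0.7, 0.15, 0.45]
sync = cosine(participant, agent)
print(sync >= 0.05)  # True: would count as synchronized at this threshold
```

Cosine similarity captures whether the two expression profiles point in the same direction regardless of overall intensity, which suits mimicry detection when one partner expresses more strongly than the other.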
Gesture recognition is an important research area in the field of human-computer interaction. Hand gestures are highly variable and flexible, so gesture recognition has always been a significant challenge for researchers. In this paper, we first outline the development of gesture recognition and the different classifications of gestures based on different purposes. We then introduce common methods used in gesture segmentation, feature extraction, and recognition. Finally, gesture recognition is summarized and research prospects are given.
Surface EMG contains a wealth of physiological information reflecting the intention of human movement. Gesture recognition from surface EMG has attracted wide attention in the fields of human-computer interaction and rehabilitation. At present, most studies on surface-EMG-based gesture recognition treat gestures as discrete, separated classes, ignoring continuous natural motion. This paper proposes a surface EMG gesture recognition method based on an improved long short-term memory network. sEMG sensors are arranged according to physiological structure and muscle function. Finger curvature is used to describe the gesture state, and the gesture at each moment can be represented by the set of finger curvatures, thereby realizing continuous gesture recognition. Finally, the proposed gesture recognition model is tested on Ninapro (a large gesture recognition database). The results show that the proposed method effectively improves the representation mining ability for surface EMG signals and provides a reference for deep learning modeling of human gesture recognition.
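A common first step before feeding such a network is windowed feature extraction from the raw sEMG channel, for example root-mean-square over non-overlapping windows. The window size and samples here are invented for the sketch:

```python
import math

def rms_windows(signal, win=4):
    """Windowed root-mean-square features over a raw sEMG channel."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        feats.append(math.sqrt(sum(s * s for s in w) / win))
    return feats

emg = [0.0, 0.1, -0.1, 0.0, 0.4, -0.4, 0.4, -0.4]  # invented samples
feats = rms_windows(emg)
print(len(feats))  # 2: one RMS feature per non-overlapping window
```

RMS tracks muscle activation level per window, so the second, stronger burst here yields a larger feature, which a downstream LSTM can map to finger curvatures over time.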
Most intelligent surveillance systems in industry only address the safety of workers. It would be valuable if the camera could determine what, where, and how a worker has performed an action in real time. In this paper, we propose a lightweight and robust algorithm to meet these requirements. Using only the trajectories of the two hands, our algorithm requires no Graphics Processing Unit (GPU) acceleration and can run on low-cost devices. In the training stage, to find potential topological structures of the training trajectories, spectral clustering with the eigengap heuristic is applied to cluster trajectory points. A gradient-descent-based algorithm is proposed to find the topological structures, which reflect the main representation of each cluster. In the fine-tuning stage, a topological optimization algorithm fine-tunes the parameters of the topological structures over all training data. Finally, our method not only performs more robustly than some popular offline action detection methods, but also achieves better detection accuracy on extended action sequences.
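The eigengap heuristic mentioned above picks the number of clusters from the largest gap in the sorted graph-Laplacian eigenvalues. The eigenvalue list below is invented for illustration:

```python
def eigengap_k(eigvals):
    """Cluster count = position of the largest gap in sorted eigenvalues."""
    vals = sorted(eigvals)
    gaps = [vals[i + 1] - vals[i] for i in range(len(vals) - 1)]
    return gaps.index(max(gaps)) + 1

# Invented spectrum: three near-zero eigenvalues, then a jump -> k = 3.
print(eigengap_k([0.0, 0.01, 0.02, 0.9, 1.1, 1.2]))  # 3
```

The intuition: a graph with k well-separated clusters has k Laplacian eigenvalues near zero, so the first large gap marks the natural cluster count for spectral clustering.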
Human action recognition and posture prediction aim to recognize and predict, respectively, the actions and postures of persons in videos. Both are active research topics in the computer vision community and have attracted considerable attention from academia and industry. They are also preconditions for intelligent interaction and human-computer cooperation, helping machines perceive the external environment. In the past decade, tremendous progress has been made in the field, especially after the emergence of deep learning technologies. Hence, a comprehensive review of recent developments is necessary. In this paper, we first present the background and discuss research progress. We then introduce datasets and typical feature representation methods, and explore advanced human action recognition and posture prediction algorithms. Finally, facing the challenges in the field, we put forward future research foci and illustrate the importance of action recognition and posture prediction by taking interactive cognition in self-driving vehicles as an example.
This study aims to reduce the interference of ambient noise in mobile communication, improve the accuracy and authenticity of information transmitted by sound, and guarantee the accuracy of voice information delivered by mobile communication. First, the principles and techniques of speech enhancement are analyzed, and a fast lateral recursive least squares (FLRLS) method is adopted to process sound data. Then, a convolutional neural network (CNN)-based noise recognition algorithm (NR-CNN) and a speech enhancement model are proposed. Finally, experiments are designed to verify the performance of the proposed algorithm and model. The experimental results show that the noise classification accuracy of the NR-CNN algorithm is higher than 99.82%, and the recall rate and F1 value are also higher than 99.92. The proposed sound enhancement model can effectively enhance the original sound in the presence of noise interference. After the CNN is incorporated, the average perceptual quality evaluation score over all noisy sounds improves by over 21% compared with the traditional noise reduction method. The proposed algorithm can adapt to a variety of voice environments and can simultaneously enhance and denoise various types of voice signals, with a processing effect better than that of traditional sound enhancement models. In addition, the sound distortion index of the proposed speech enhancement model is lower than that of the control group, indicating that adding the CNN is less likely to cause sound signal distortion in various sound environments and shows superior robustness. In summary, the proposed CNN-based speech enhancement model shows significant sound enhancement effects, stable performance, and strong adaptability. This study provides a reference and basis for research applying neural networks to speech enhancement.
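As a rough stand-in for the FLRLS adaptive filter, here is a plain normalized LMS (NLMS) filter: simpler and slower to converge than RLS, but it adapts toward the same least-squares solution. The "unknown system" being identified is invented for the sketch:

```python
import math

def nlms(x, d, order=2, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt weights w so that w . u tracks the target d."""
    w = [0.0] * order
    errors = []
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]       # newest sample first
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y                           # a-priori estimation error
        norm = sum(ui * ui for ui in u) + eps  # step-size normalization
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errors.append(abs(e))
    return w, errors

# Invented unknown system: d[n] = 0.5*x[n] + 0.3*x[n-1].
x = [math.sin(0.3 * n) for n in range(200)]
d = [0.0] + [0.5 * x[n] + 0.3 * x[n - 1] for n in range(1, len(x))]
w, errors = nlms(x, d)
print(round(w[0], 2), round(w[1], 2))  # 0.5 0.3
```

Adaptive filters of this family underpin classical speech enhancement: the weights track the noise path so its estimate can be subtracted from the noisy signal.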
文摘In the digital age,non-touch communication technologies are reshaping human-device interactions and raising security concerns.A major challenge in current technology is the misinterpretation of gestures by sensors and cameras,often caused by environmental factors.This issue has spurred the need for advanced data processing methods to achieve more accurate gesture recognition and predictions.Our study presents a novel virtual keyboard allowing character input via distinct hand gestures,focusing on two key aspects:hand gesture recognition and character input mechanisms.We developed a novel model with LSTM and fully connected layers for enhanced sequential data processing and hand gesture recognition.We also integrated CNN,max-pooling,and dropout layers for improved spatial feature extraction.This model architecture processes both temporal and spatial aspects of hand gestures,using LSTM to extract complex patterns from frame sequences for a comprehensive understanding of input data.Our unique dataset,essential for training the model,includes 1,662 landmarks from dynamic hand gestures,33 postures,and 468 face landmarks,all captured in real-time using advanced pose estimation.The model demonstrated high accuracy,achieving 98.52%in hand gesture recognition and over 97%in character input across different scenarios.Its excellent performance in real-time testing underlines its practicality and effectiveness,marking a significant advancement in enhancing human-device interactions in the digital age.
文摘With technology advances and human requirements increasing, human-computer interaction plays an important role in our daily lives. Among these interactions, gesture-based recognition offers a natural and intuitive user experience that does not require physical contact and is becoming increasingly prevalent across various fields. Gesture recognition systems based on Frequency Modulated Continuous Wave (FMCW) millimeter-wave radar are receiving widespread attention due to their ability to operate without wearable sensors, their robustness to environmental factors, and the excellent penetrative ability of radar signals. This paper first reviews the current main gesture recognition applications. Subsequently, we introduce the system of gesture recognition based on FMCW radar and provide a general framework for gesture recognition, including gesture data acquisition, data preprocessing, and classification methods. We then discuss typical applications of gesture recognition systems and summarize the performance of these systems in terms of experimental environment, signal acquisition, signal processing, and classification methods. Specifically, we focus our study on four typical gesture recognition systems, including air-writing recognition, gesture command recognition, sign language recognition, and text input recognition. Finally, this paper addresses the challenges and unresolved problems in FMCW radar-based gesture recognition and provides insights into potential future research directions.
文摘With the advancement of technology and the increase in user demands, gesture recognition played a pivotal role in the field of human-computer interaction. Among various sensing devices, Time-of-Flight (ToF) sensors were widely applied due to their low cost. This paper explored the implementation of a human hand posture recognition system using ToF sensors and residual neural networks. Firstly, this paper reviewed the typical applications of human hand recognition. Secondly, this paper designed a hand gesture recognition system using a ToF sensor VL53L5. Subsequently, data preprocessing was conducted, followed by training the constructed residual neural network. Then, the recognition results were analyzed, indicating that gesture recognition based on the residual neural network achieved an accuracy of 98.5% in a 5-class classification scenario. Finally, the paper discussed existing issues and future research directions.
文摘This paper proposes a method to recognize human-object interactions by modeling context between human actions and interacted objects.Human-object interaction recognition is a challenging task due to severe occlusion between human and objects during the interacting process.Since that human actions and interacted objects provide strong context information,i.e.some actions are usually related to some specific objects,the accuracy of recognition is significantly improved for both of them.Through the proposed method,both global and local temporal features from skeleton sequences are extracted to model human actions.In the meantime,kernel features are utilized to describe interacted objects.Finally,all possible solutions from actions and objects are optimized by modeling the context between them.The results of experiments demonstrate the effectiveness of our method.
文摘Appearance-based dynamic Hand Gesture Recognition(HGR)remains a prominent area of research in Human-Computer Interaction(HCI).Numerous environmental and computational constraints limit its real-time deployment.In addition,the performance of a model decreases as the subject’s distance from the camera increases.This study proposes a 3D separable Convolutional Neural Network(CNN),considering the model’s computa-tional complexity and recognition accuracy.The 20BN-Jester dataset was used to train the model for six gesture classes.After achieving the best offline recognition accuracy of 94.39%,the model was deployed in real-time while considering the subject’s attention,the instant of performing a gesture,and the subject’s distance from the camera.Despite being discussed in numerous research articles,the distance factor remains unresolved in real-time deployment,which leads to degraded recognition results.In the proposed approach,the distance calculation substantially improves the classification performance by reducing the impact of the subject’s distance from the camera.Additionally,the capability of feature extraction,degree of relevance,and statistical significance of the proposed model against other state-of-the-art models were validated using t-distributed Stochastic Neighbor Embedding(t-SNE),Mathew’s Correlation Coefficient(MCC),and the McNemar test,respectively.We observed that the proposed model exhibits state-of-the-art outcomes and a comparatively high significance level.
基金supported by the National Natural Science Foundation of China(61773272,61976191)the Six Talent Peaks Project of Jiangsu Province,China(XYDXX-053)Suzhou Research Project of Technical Innovation,Jiangsu,China(SYG201711)。
文摘Hand gesture recognition is a popular topic in computer vision and makes human-computer interaction more flexible and convenient.The representation of hand gestures is critical for recognition.In this paper,we propose a new method to measure the similarity between hand gestures and exploit it for hand gesture recognition.The depth maps of hand gestures captured via the Kinect sensors are used in our method,where the 3D hand shapes can be segmented from the cluttered backgrounds.To extract the pattern of salient 3D shape features,we propose a new descriptor-3D Shape Context,for 3D hand gesture representation.The 3D Shape Context information of each 3D point is obtained in multiple scales because both local shape context and global shape distribution are necessary for recognition.The description of all the 3D points constructs the hand gesture representation,and hand gesture recognition is explored via dynamic time warping algorithm.Extensive experiments are conducted on multiple benchmark datasets.The experimental results verify that the proposed method is robust to noise,articulated variations,and rigid transformations.Our method outperforms state-of-the-art methods in the comparisons of accuracy and efficiency.
基金Supported by the National Natural Science Foundation of China(61773205,61773219)the Fundamental Research Funds for the Central Universities(NS2016032,NS2019018,Nanjing University of Aeronautics and Astronautics)+1 种基金the Scholarship from China Scholarship Council(201906835020)the Fundamental Research Funds for the Central Universities(the Graduate Student Innovation Base Open Fund Project of NUAA,kfjj20190307)。
文摘Recently,vision-based gesture recognition(VGR)has become a hot research spot in human-computer interaction(HCI).Unlike other gesture recognition methods with data gloves or other wearable sensors,vision-based gesture recognition could lead to more natural and intuitive HCI interactions.This paper reviews the state-of-the-art vision-based gestures recognition methods,from different stages of gesture recognition process,i.e.,(1)image acquisition and pre-processing,(2)gesture segmentation,(3)gesture tracking,(4)feature extraction,and(5)gesture classification.This paper also analyzes the advantages and disadvantages of these various methods in detail.Finally,the challenges of vision-based gesture recognition in haptic rendering and future research directions are discussed.
基金Supported by the National Natural Science Foundation of China (60873269)
文摘A hand gesture recognition method is presented for human-computer interaction, which is based on fingertip localization. First, hand gesture is segmented from the background based on skin color characteristics. Second, feature vectors are selected with equal intervals on the boundary of the gesture, and then gestures' length normalization is accomplished. Third, the fingertip positions are determined by the feature vectors' parameters, and angles of feature vectors are normalized. Finally the gestures are classified by support vector machine. The experimental results demonstrate that the proposed method can recognize 9 gestures with an accuracy of 94.1%.
基金the National Nature Science Foundation of China(U2003207,61902064)the Jiangsu Frontier Technology Basic Research Project(BK20192004).
文摘Background One of the most critical issues in human-computer interaction applications is recognizing human emotions based on speech.In recent years,the challenging problem of cross-corpus speech emotion recognition(SER)has generated extensive research.Nevertheless,the domain discrepancy between training data and testing data remains a major challenge to achieving improved system performance.Methods This paper introduces a novel multi-scale discrepancy adversarial(MSDA)network for conducting multiple timescales domain adaptation for cross-corpus SER,i.e.,integrating domain discriminators of hierarchical levels into the emotion recognition framework to mitigate the gap between the source and target domains.Specifically,we extract two kinds of speech features,i.e.,handcraft features and deep features,from three timescales of global,local,and hybrid levels.In each timescale,the domain discriminator and the feature extrator compete against each other to learn features that minimize the discrepancy between the two domains by fooling the discriminator.Results Extensive experiments on cross-corpus and cross-language SER were conducted on a combination dataset that combines one Chinese dataset and two English datasets commonly used in SER.The MSDA is affected by the strong discriminate power provided by the adversarial process,where three discriminators are working in tandem with an emotion classifier.Accordingly,the MSDA achieves the best performance over all other baseline methods.Conclusions The proposed architecture was tested on a combination of one Chinese and two English datasets.The experimental results demonstrate the superiority of our powerful discriminative model for solving cross-corpus SER.
Abstract: Artificial entities, such as virtual agents, have become more pervasive. Their long-term presence among humans requires the virtual agent's ability to express appropriate emotions to elicit the necessary empathy from users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristics of such synchrony between humans and virtual agents remain unclear in empathic interactions. Our study evaluates participants' behavioral synchronization when a virtual agent exhibits an emotional expression congruent with the emotional context through facial expressions, behavioral gestures, and voice. Participants viewed an emotion-eliciting video stimulus (negative or positive) with a virtual agent. The participants then conversed with the virtual agent about the video, e.g., how they felt about its content. During the dialog, the virtual agent expressed either emotions congruent with the video or a neutral emotion. The participants' facial expressions, such as facial expressive intensity and facial muscle movement, were measured during the dialog using a camera. The results showed significant behavioral synchronization (i.e., cosine similarity ≥ .05) in both the negative and positive emotion conditions, evident in the participants' facial mimicry with the virtual agent. Additionally, the participants' facial expressions, in both movement and intensity, were significantly stronger with the emotional virtual agent than with the neutral one. In particular, we found that the facial muscle intensity of AU45 (Blink) is an effective index for assessing a participant's synchronization, which differs by the individual's empathic capability (low, mid, high). Based on the results, we suggest an appraisal criterion that provides empirical conditions for validating empathic interaction based on facial expression measures.
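The synchronization index above is cosine similarity between expression features; a minimal sketch, where the example vectors are hypothetical per-frame action-unit intensities rather than the study's actual data:

```python
import numpy as np

def synchrony(participant, agent):
    """Cosine similarity between two facial-expression feature vectors
    (e.g., action-unit intensities for the same frame); the study treated
    values >= .05 as evidence of behavioral synchronization."""
    p = np.asarray(participant, dtype=float)
    a = np.asarray(agent, dtype=float)
    return float(p @ a / (np.linalg.norm(p) * np.linalg.norm(a)))

# hypothetical AU-intensity vectors for one frame of dialog
s = synchrony([0.2, 0.8, 0.1], [0.25, 0.7, 0.15])
print(round(s, 3))
```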
Abstract: Gesture recognition is an important research topic in the field of human-computer interaction. Hand gestures are highly variable and flexible, so gesture recognition has always been a significant challenge for researchers. In this paper, we first outline the development of gesture recognition and the different classifications of gestures according to purpose. We then introduce the common methods used in gesture segmentation, feature extraction, and recognition. Finally, we summarize the field and discuss prospects for future study.
Abstract: Surface EMG (sEMG) contains rich physiological information reflecting the intention of human movement. Gesture recognition from surface EMG has attracted wide attention in the fields of human-computer interaction and rehabilitation. At present, most studies on sEMG-based gesture recognition treat gestures as discrete, separated classes, ignoring continuous natural motion. This paper proposes an sEMG gesture recognition method based on an improved long short-term memory (LSTM) network. The sEMG sensors are rationally arranged according to physiological structure and muscle function. Finger curvature is used to describe the gesture state, so the gesture at every moment can be represented by the set of finger curvatures, realizing continuous gesture recognition. Finally, the proposed gesture recognition model is tested on Ninapro (a large gesture recognition database). The results show that the proposed method can effectively improve the representation-mining ability of surface EMG signals and provide a reference for deep learning modeling of human gesture recognition.
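For intuition about how an LSTM carries sEMG context from frame to frame, a single textbook LSTM step can be sketched in numpy. This is the standard cell, not the paper's improved variant; the channel count, hidden size, and random weights are placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step mapping an sEMG feature frame x to new hidden and
    cell states; stacking steps over a window yields the sequence
    encoding from which per-finger curvature could be regressed."""
    z = W @ x + U @ h + b                       # (4H,) gate pre-activations
    H = h.size
    i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # gated cell update
    h_new = sigmoid(o) * np.tanh(c_new)                # gated hidden output
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 8, 4                                     # 8 sEMG channels, hidden size 4
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for t in range(10):                             # a 10-frame sEMG window
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # (4,)
```

In the continuous-recognition setting described above, a regression head on `h` would output the finger-curvature set at each time step.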
Funding: Our research has been supported in part by the National Natural Science Foundation of China under Grants 61673261 and 61703273. We gratefully acknowledge the support from some companies.
Abstract: Most intelligent surveillance systems in industry are concerned only with worker safety. It would be valuable if the camera could know what action a worker has performed, where, and how, in real time. In this paper, we propose a lightweight and robust algorithm that meets these requirements. Using only the trajectories of the two hands, our algorithm requires no Graphics Processing Unit (GPU) acceleration and can therefore run on low-cost devices. In the training stage, to find potential topological structures in the training trajectories, spectral clustering with the eigengap heuristic is applied to cluster trajectory points. A gradient descent based algorithm is proposed to find the topological structures, which reflect the main representations of each cluster. In the fine-tuning stage, a topological optimization algorithm is proposed to fine-tune the parameters of the topological structures over all training data. Finally, our method not only performs more robustly than several popular offline action detection methods, but also obtains better detection accuracy on extended action sequences.
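The eigengap heuristic mentioned above picks the number of clusters at the largest gap between consecutive eigenvalues of the normalized graph Laplacian. A minimal sketch on a toy affinity matrix follows; the two-block structure is a stand-in for clustered trajectory points, not the paper's data:

```python
import numpy as np

def eigengap_k(affinity, k_max=8):
    """Choose the number of clusters via the eigengap heuristic:
    the k at which consecutive eigenvalues of the symmetric
    normalized Laplacian show the largest gap."""
    A = np.asarray(affinity, dtype=float)
    d = A.sum(axis=1)                                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt    # normalized Laplacian
    w = np.sort(np.linalg.eigvalsh(L))                  # ascending eigenvalues
    gaps = np.diff(w[:k_max])                           # gaps between neighbors
    return int(np.argmax(gaps)) + 1

# two strongly connected blocks with weak cross-links -> expect k = 2
A = np.full((6, 6), 0.01)
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
np.fill_diagonal(A, 0.0)
k = eigengap_k(A)
print(k)  # 2
```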
Funding: Supported by the National Natural Science Foundation of China (Nos. 61871038 and 61931012), the Premium Funding Project for Academic Human Resources Development of Beijing Union University (No. BPHR2020AZ02), and the Generic Pre-research Program of the Equipment Development Department of the Military Commission (No. 41412040302).
Abstract: Human action recognition and posture prediction aim to recognize and predict, respectively, the actions and postures of persons in videos. Both are active research topics in the computer vision community that have attracted considerable attention from academia and industry. They are also preconditions for intelligent interaction and human-computer cooperation, helping machines perceive the external environment. In the past decade, tremendous progress has been made in the field, especially after the emergence of deep learning technologies, so a comprehensive review of recent developments is warranted. In this paper, we first present the background and discuss research progress. Second, we introduce datasets and typical feature representation methods, and explore advanced human action recognition and posture prediction algorithms. Finally, facing the challenges in the field, we put forward future research directions and illustrate the importance of action recognition and posture prediction with the example of interactive cognition in self-driving vehicles.
Funding: Supported by the General Project of Philosophy and Social Science Research in Colleges and Universities in Jiangsu Province (2022SJYB0712), the Research Development Fund for Young Teachers of Chengxian College of Southeast University (z0037), and the Special Project of Ideological and Political Education Reform and Research Course (yjgsz2206).
Abstract: This study aims to reduce the interference of ambient noise in mobile communication, improve the accuracy and authenticity of information transmitted by sound, and guarantee the accuracy of voice information delivered by mobile communication. First, the principles and techniques of speech enhancement are analyzed, and a fast lateral recursive least squares (FLRLS) method is adopted to process the sound data. Then, a convolutional neural network (CNN)-based noise recognition algorithm (NR-CNN) and a speech enhancement model are proposed. Finally, experiments are designed to verify the performance of the proposed algorithm and model. The experimental results show that the noise classification accuracy of the NR-CNN algorithm is higher than 99.82%, and the recall rate and F1 value are also higher than 99.92. The proposed sound enhancement model can effectively enhance the original sound under noise interference. After the CNN is incorporated, the average perceptual quality evaluation score on noisy sound improves by over 21% compared with the traditional noise reduction method. The proposed algorithm can adapt to a variety of voice environments and can simultaneously perform enhancement and noise reduction on many different types of voice signals, with a processing effect better than that of traditional sound enhancement models. In addition, the sound distortion index of the proposed speech enhancement model is lower than that of the control group, indicating that adding the CNN is less likely to cause sound signal distortion in various sound environments, demonstrating superior robustness. In summary, the proposed CNN-based speech enhancement model shows significant enhancement effects, stable performance, and strong adaptability. This study provides a reference and basis for research applying neural networks to speech enhancement.
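The FLRLS variant is not specified here, but the underlying idea of recursive least squares noise cancellation can be sketched with the standard textbook RLS update. Everything below, including the signals, filter order, and forgetting factor, is an illustrative assumption rather than the paper's method:

```python
import numpy as np

def rls_cancel(d, ref, order=4, lam=0.99, delta=100.0):
    """Standard RLS adaptive noise canceller (a generic stand-in for the
    FLRLS step): predicts the interference in d[n] from a reference
    noise signal and returns the error, i.e. the enhanced estimate."""
    w = np.zeros(order)                       # adaptive filter weights
    P = np.eye(order) * delta                 # inverse correlation estimate
    out = np.zeros(len(d))
    for n in range(order, len(d)):
        u = ref[n - order:n][::-1]            # most recent reference samples
        k = P @ u / (lam + u @ P @ u)         # RLS gain vector
        e = d[n] - w @ u                      # a priori error = cleaned sample
        w = w + k * e                         # weight update
        P = (P - np.outer(k, u @ P)) / lam    # inverse-correlation update
        out[n] = e
    return out

rng = np.random.default_rng(1)
N = 4000
clean = np.sin(2 * np.pi * np.arange(N) / 50)    # stand-in for speech
ref = rng.normal(size=N)                         # reference noise pickup
# interference: the reference filtered through an unknown 2-tap channel
interference = np.convolve(ref, [0.0, 0.8, 0.4], mode="full")[:N]
noisy = clean + interference
enhanced = rls_cancel(noisy, ref, order=4)
```

After convergence the filter identifies the interference path, so `enhanced` approaches `clean`; a neural stage such as the NR-CNN would then operate on this pre-processed signal.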