Six degrees of freedom (6DoF) input interfaces are essential for manipulating virtual objects through translation or rotation in three-dimensional (3D) space. A traditional outside-in tracking controller requires the installation of expensive hardware in advance. While inside-out tracking controllers have been proposed, they often suffer from limitations such as interaction restricted to the tracking range of the sensor (e.g., a sensor on the head-mounted display (HMD)) or the need for pose-value modification to function as an input interface (e.g., a sensor on the controller). This study investigates 6DoF pose estimation methods that do not restrict the tracking range, using a smartphone as a controller in augmented reality (AR) environments. Our approach proposes methods for estimating the initial pose of the controller and correcting that pose using an inside-out tracking approach. In addition, seven pose estimation algorithms are presented as candidates, depending on the tracking range of the device sensor, the tracking method (e.g., marker recognition, visual-inertial odometry (VIO)), and whether modification of the initial pose is necessary. The performance of the algorithms was evaluated in two experiments (discrete and continuous data). The results demonstrate that correcting the initial pose improves final pose accuracy. Furthermore, they underscore the importance of selecting a tracking algorithm according to the tracking range of the devices and the actual input values of the 3D interaction.
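As a rough illustration of the correction idea, the sketch below anchors the controller's world pose once (e.g., from marker recognition) and then composes the relative motion reported by VIO onto that anchor. It is a minimal sketch under assumed conventions (4x4 homogeneous transforms, hypothetical variable names), not the paper's implementation.

```python
import numpy as np

def corrected_pose(T_world_phone0: np.ndarray,
                   T_phone0_phone_t: np.ndarray) -> np.ndarray:
    """Current world pose of the phone controller: the initial pose
    (anchored once, e.g. via marker recognition) composed with the
    relative motion reported by VIO since that anchor frame."""
    return T_world_phone0 @ T_phone0_phone_t

# Toy usage: initial pose 1 m in front of the user; VIO reports a
# 10 cm translation along the phone's x-axis since anchoring.
T0 = np.eye(4); T0[2, 3] = 1.0        # anchored initial world pose
dT = np.eye(4); dT[0, 3] = 0.1        # VIO motion delta since frame 0
print(corrected_pose(T0, dT)[:3, 3])  # -> [0.1 0.  1. ]
```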
Hand gesture recognition is a popular topic in computer vision and makes human-computer interaction more flexible and convenient. The representation of hand gestures is critical for recognition. In this paper, we propose a new method to measure the similarity between hand gestures and exploit it for hand gesture recognition. Our method uses depth maps of hand gestures captured via Kinect sensors, from which the 3D hand shapes can be segmented out of cluttered backgrounds. To extract the pattern of salient 3D shape features, we propose a new descriptor, 3D Shape Context, for 3D hand gesture representation. The 3D Shape Context information of each 3D point is obtained at multiple scales, because both the local shape context and the global shape distribution are necessary for recognition. The descriptions of all 3D points constitute the hand gesture representation, and hand gesture recognition is performed via the dynamic time warping algorithm. Extensive experiments on multiple benchmark datasets verify that the proposed method is robust to noise, articulated variations, and rigid transformations, and that it outperforms state-of-the-art methods in both accuracy and efficiency.
Funding: supported by the National Natural Science Foundation of China (61773272, 61976191), the Six Talent Peaks Project of Jiangsu Province, China (XYDXX-053), and the Suzhou Research Project of Technical Innovation, Jiangsu, China (SYG201711).
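The matching stage lends itself to a compact sketch. The snippet below implements classic dynamic time warping between two sequences of per-frame descriptors, plus nearest-neighbour classification over a labelled gallery; the 3D Shape Context extraction itself is omitted, and all names are illustrative rather than the paper's API.

```python
import numpy as np

def dtw_distance(seq_a: np.ndarray, seq_b: np.ndarray) -> float:
    """DTW distance between two descriptor sequences, each of
    shape (num_frames, descriptor_dim)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def classify(query: np.ndarray, gallery: dict) -> str:
    """Nearest-neighbour recognition under DTW distance; `gallery`
    maps class labels to lists of labelled descriptor sequences."""
    best_label, best_dist = None, np.inf
    for label, examples in gallery.items():
        for example in examples:
            d = dtw_distance(query, example)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label
```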
The field of vision-based human hand three-dimensional (3D) shape and pose estimation has recently attracted significant attention owing to its key role in applications such as natural human-computer interaction. With the availability of large-scale annotated hand datasets and the rapid development of deep neural networks (DNNs), numerous DNN-based, data-driven methods have been proposed for accurate and rapid hand shape and pose estimation. Nonetheless, complicated hand articulation, depth and scale ambiguities, occlusions, and finger similarity remain challenging. In this study, we present a comprehensive survey of state-of-the-art 3D hand shape and pose estimation approaches using RGB-D cameras. Related RGB-D cameras, hand datasets, and a performance analysis are also discussed to provide a holistic view of recent achievements. We also discuss the research potential of this rapidly growing field.
Funding: the National Key R&D Program of China (2018YFB1004600), the National Natural Science Foundation of China (61502187, 61876211), and National Science Foundation Grant CNS (1951952).
During most of the interaction process, the operator focuses on the tracked 3D hand gesture model only at the "interaction points" in the collision detection scene, such as "grasp" and "release", and on the objects in the scene, without attending to the tracked 3D hand gesture model throughout the entire procedure. In this paper, we therefore first study a visual attention distribution model of the operator during "grasp", "translation", "release", and other basic operations, and propose a 3D hand gesture tracking algorithm based on this distribution model. With this algorithm, during periods of low visual attention, a pre-stored 3D hand gesture animation is used to directly visualise the 3D hand gesture model in the interactive scene; during periods of high visual attention, an existing frame-by-frame tracking approach is adopted to obtain the 3D gesture model. The results demonstrate that the proposed method achieves real-time tracking of 3D hand gestures and effectively improves the efficiency, fluency, and availability of 3D hand gesture interaction.
Funding: supported by the National Natural Science Foundation of China (61472163), the National Key Research & Development Plan of China (2016YFB1001403), and the Science and Technology Project of Shandong Province (2015GGX101025).
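The gating strategy itself can be sketched in a few lines. Below is a minimal, hypothetical structure for switching between cheap animation playback and full per-frame tracking based on an attention score; the tracker callable, animation frames, and threshold are all assumptions, not the paper's interfaces.

```python
import itertools

class AttentionGatedHandTracker:
    """Sketch of the attention-gated strategy: run full frame-by-frame
    tracking only while the operator's estimated visual attention is
    high; otherwise, play back a pre-stored gesture animation."""

    def __init__(self, track_frame, animation_frames, threshold=0.5):
        self.track_frame = track_frame  # callable: frame -> 3D hand model
        self.animation = itertools.cycle(animation_frames)  # stored clip
        self.threshold = threshold      # attention score cut-off

    def next_hand_model(self, frame, attention_score: float):
        if attention_score >= self.threshold:
            # High attention (e.g. near a "grasp"/"release" interaction
            # point): pay the cost of accurate per-frame tracking.
            return self.track_frame(frame)
        # Low attention: cheap playback of the pre-stored animation.
        return next(self.animation)
```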
Appearance-based dynamic Hand Gesture Recognition (HGR) remains a prominent area of research in Human-Computer Interaction (HCI), but numerous environmental and computational constraints limit its real-time deployment. In addition, a model's performance decreases as the subject's distance from the camera increases. This study proposes a 3D separable Convolutional Neural Network (CNN) that balances computational complexity against recognition accuracy. The 20BN-Jester dataset was used to train the model on six gesture classes. After achieving a best offline recognition accuracy of 94.39%, the model was deployed in real time, taking into account the subject's attention, the instant at which a gesture is performed, and the subject's distance from the camera. Despite being discussed in numerous research articles, the distance factor remains unresolved in real-time deployment and leads to degraded recognition results. In the proposed approach, the distance calculation substantially improves classification performance by reducing the impact of the subject's distance from the camera. Additionally, the feature-extraction capability, degree of relevance, and statistical significance of the proposed model against other state-of-the-art models were validated using t-distributed Stochastic Neighbor Embedding (t-SNE), the Matthews Correlation Coefficient (MCC), and the McNemar test, respectively. We observed that the proposed model exhibits state-of-the-art outcomes and a comparatively high significance level.
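The core building block of such a network is straightforward to sketch. Below is a minimal 3D depthwise-separable convolution block in PyTorch: a per-channel (depthwise) Conv3d followed by a 1x1x1 pointwise Conv3d, which is what makes a 3D separable CNN cheaper than a dense Conv3d of the same shape. This is a generic sketch of the technique, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Separable3dBlock(nn.Module):
    """One 3D depthwise-separable convolution block: depthwise Conv3d
    (groups=in_channels) followed by a 1x1x1 pointwise Conv3d."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size=k,
                                   padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Usage: a clip of 16 RGB frames at 64x64 resolution.
y = Separable3dBlock(3, 32)(torch.randn(1, 3, 16, 64, 64))
print(y.shape)  # torch.Size([1, 32, 16, 64, 64])
```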