Journal Articles (4 results found)
1. Multimodal interaction design and application in augmented reality for chemical experiment (Cited by: 3)
Authors: Mengting XIAO, Zhiquan FENG, Xiaohui YANG, Tao XU, Qingbei GUO. Virtual Reality & Intelligent Hardware, 2020, Issue 4, pp. 291-304 (14 pages).
Background: Augmented reality classrooms have become an interesting research topic in the field of education, but there are some limitations. Firstly, most researchers use cards to operate experiments, and a large number of cards cause difficulty and inconvenience for users. Secondly, most users conduct experiments only in the visual modality, and such single-modal interaction greatly reduces the users' real sense of interaction. To solve these problems, we propose the Multimodal Interaction Algorithm based on Augmented Reality (ARGEV), which is based on visual and tactile feedback in augmented reality. In addition, we design a Virtual and Real Fusion Interactive Tool Suite (VRFITS) with gesture recognition and intelligent equipment. Methods: The ARGEV method fuses gestures, intelligent equipment, and virtual models. We use a gesture recognition model trained by a convolutional neural network to recognize gestures in AR, and trigger vibration feedback after recognizing a five-finger grasp gesture. We establish a coordinate mapping relationship between real hands and the virtual model to achieve the fusion of gestures and the virtual model. Results: The average accuracy rate of gesture recognition was 99.04%. We verify and apply VRFITS in the Augmented Reality Chemistry Lab (ARCL), and the overall operation load of ARCL is thus reduced by 29.42% in comparison to traditional simulation virtual experiments. Conclusions: We achieve real-time fusion of gestures, virtual models, and intelligent equipment in ARCL. Compared with the NOBOOK virtual simulation experiment, ARCL improves the users' real sense of operation and interaction efficiency.
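As an illustration of the pipeline this abstract outlines (a CNN gesture classifier triggering vibration feedback on a five-finger grasp, plus a coordinate mapping between the real hand and the virtual model), here is a minimal sketch. It is not the authors' code; the gesture label, the 4x4 transform, and the haptic-device interface are assumptions.

```python
# Illustrative sketch only: map a tracked hand point into the virtual model's
# frame and fire vibration feedback on a recognized five-finger grasp.
import numpy as np

GRASP_LABEL = "five_finger_grasp"   # assumed label emitted by the CNN classifier

def hand_to_virtual(hand_point_xyz, T_real_to_virtual):
    """Map a 3D hand keypoint from the real (camera) frame into the virtual
    model's frame using a homogeneous 4x4 transform (assumed to be calibrated)."""
    p = np.append(np.asarray(hand_point_xyz, dtype=float), 1.0)
    return (T_real_to_virtual @ p)[:3]

def on_gesture(label, confidence, haptic_device, threshold=0.9):
    """Trigger vibration feedback when the recognizer reports a grasp gesture.
    The haptic_device.vibrate() interface is hypothetical."""
    if label == GRASP_LABEL and confidence >= threshold:
        haptic_device.vibrate(duration_ms=120)
```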
Keywords: Augmented reality; Gesture recognition; Intelligent equipment; Multimodal interaction; Augmented Reality Chemistry Lab
2. MEinVR: Multimodal interaction techniques in immersive exploration
Authors: Ziyue Yuan, Shuqi He, Yu Liu, Lingyun Yu. Visual Informatics (EI), 2023, Issue 3, pp. 37-48 (12 pages).
Immersive environments have become increasingly popular for visualizing and exploring large-scale, complex scientific data because of their key features: immersion, engagement, and awareness. Virtual reality offers numerous new interaction possibilities, including tactile and tangible interactions, gestures, and voice commands. However, it is crucial to determine the most effective combination of these techniques for a more natural interaction experience. In this paper, we present MEinVR, a novel multimodal interaction technique for exploring 3D molecular data in virtual reality. MEinVR combines VR controller and voice input to provide a more intuitive way for users to manipulate data in immersive environments. By using the VR controller to select locations and regions of interest and voice commands to perform tasks, users can efficiently perform complex data exploration tasks. Our findings provide suggestions for the design of multimodal interaction techniques for 3D data exploration in virtual reality.
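A minimal sketch of the controller-plus-voice pattern this abstract describes: the VR controller supplies a region of interest and a recognized voice command selects the operation. This is illustrative only; the command vocabulary and handler names are assumptions, not MEinVR's actual API.

```python
# Illustrative sketch only: pair a controller-selected region of interest with
# a recognized voice command to dispatch a data-exploration task.
from dataclasses import dataclass

@dataclass
class RegionOfInterest:
    center: tuple        # (x, y, z) picked with the VR controller
    radius: float

def dispatch(voice_command: str, roi: RegionOfInterest):
    """Route a recognized voice command to an operation on the selected region.
    The command set below is assumed for illustration."""
    handlers = {
        "highlight": lambda r: print(f"highlight atoms within {r.radius} of {r.center}"),
        "measure":   lambda r: print(f"measure distances around {r.center}"),
        "hide":      lambda r: print(f"hide region at {r.center}"),
    }
    action = handlers.get(voice_command.lower())
    if action is None:
        print(f"unrecognized command: {voice_command}")
    else:
        action(roi)

# Example: dispatch("highlight", RegionOfInterest(center=(0.1, 0.4, 1.2), radius=0.05))
```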
Keywords: Multimodal interaction; Virtual reality; Scientific visualization
3. Architectural Design of a Cloud Robotic System for Upper-Limb Rehabilitation with Multimodal Interaction (Cited by: 2)
Authors: Hui-Jun Li, Ai-Guo Song. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, Issue 2, pp. 258-268 (11 pages).
The rise in the cases of motor impairing illnesses demands research for improvements in rehabilitation therapy. Due to the current situation that the service of professional therapists cannot meet the need of motor-impaired subjects, a cloud robotic system is proposed to provide an Internet-based process for upper-limb rehabilitation with multimodal interaction. In this system, therapists and subjects are connected through the Internet using a client/server architecture. At the client site, gradual virtual games are introduced so that the subjects can control and interact with virtual objects through interaction devices such as robot arms. Computer graphics show the geometric results, and haptic/force interaction is fed back during exercising. Both video/audio information and kinematical/physiological data are transferred to the therapist for monitoring and analysis. In this way, patients can be diagnosed and directed, and therapists can manage therapy sessions remotely. The rehabilitation process can be monitored through the Internet. Expert libraries on the central server can serve as a supervisor and give advice based on the training data and the physiological data. The proposed solution is a convenient application that has several features, taking advantage of the extensive technological utilization in the area of physical rehabilitation and multimodal interaction.
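A minimal sketch of the client-to-server data path this abstract describes, in which kinematical/physiological data are transferred to the therapist's side for monitoring. The endpoint URL, field names, and JSON schema are assumptions rather than the paper's actual protocol.

```python
# Illustrative sketch only: a rehabilitation client posting one training sample
# (kinematic + physiological data) to an assumed cloud server endpoint.
import json
import urllib.request

def send_training_sample(server_url, subject_id, joint_angles, grip_force, heart_rate):
    """POST one training sample to the therapist-side server (hypothetical API)."""
    payload = {
        "subject_id": subject_id,
        "kinematics": {"joint_angles_deg": joint_angles},
        "physiology": {"grip_force_n": grip_force, "heart_rate_bpm": heart_rate},
    }
    req = urllib.request.Request(
        server_url,                               # e.g. "http://example.org/api/samples" (hypothetical)
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:     # network call; the server is assumed to exist
        return resp.status
```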
Keywords: Cloud robot; Multimodal interaction; Motor rehabilitation; Haptic/force feedback
4. Natural multimodal interaction in immersive flow visualization (Cited by: 1)
Authors: Chengyu Su, Chao Yang, Yonghui Chen, Fupan Wang, Fang Wang, Yadong Wu, Xiaorong Zhang. Visual Informatics (EI), 2021, Issue 4, pp. 56-66 (11 pages).
In immersive flow visualization based on virtual reality, how to meet the needs of complex professional flow visualization analysis through natural human-computer interaction is a pressing problem. In order to achieve natural and efficient human-computer interaction, we analyze the interaction requirements of flow visualization and study the characteristics of four human-computer interaction channels: hand, head, eye, and voice. We offer some multimodal interaction design suggestions and then propose three multimodal interaction methods: head & hand, head & hand & eye, and head & hand & eye & voice. The freedom of gestures, the stability of the head, the convenience of the eyes, and the rapid retrieval of voice are used to improve the accuracy and efficiency of interaction. The interaction load is balanced across modalities to reduce fatigue. The evaluation shows that our multimodal interaction achieves higher accuracy, faster time efficiency, and much lower fatigue than traditional joystick interaction.
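A minimal sketch of how interaction subtasks might be routed to the four channels using the rationale this abstract states (gesture freedom, head stability, eye convenience, rapid voice retrieval). The task names and the specific assignment are assumptions, not the authors' design.

```python
# Illustrative sketch only: assign flow-visualization subtasks to interaction
# channels and fall back when a channel is absent from the active combination.
CHANNEL_ASSIGNMENT = {
    "select_seed_points":  "hand",   # free-form gestures for precise selection
    "stabilize_viewpoint": "head",   # stable head pose anchors the view
    "target_region":       "eye",    # gaze quickly indicates a region of interest
    "retrieve_dataset":    "voice",  # spoken query rapidly retrieves variables/timesteps
}

def route_task(task: str, enabled_channels: set) -> str:
    """Pick the preferred channel for a task; fall back to 'hand' if the
    preferred channel is not part of the active modality combination."""
    preferred = CHANNEL_ASSIGNMENT.get(task, "hand")
    return preferred if preferred in enabled_channels else "hand"

# Example: route_task("retrieve_dataset", {"head", "hand", "eye"}) returns "hand",
# since the voice channel is not enabled in the head & hand & eye combination.
```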
Keywords: Flow visualization; Virtual reality; Multimodal interaction; Human-computer interaction