Funding: The National Key R&D Program of China (2018YFB1004901); the Independent Innovation Team Project of Jinan City (2019GXRC013).
Abstract: Background Augmented reality classrooms have become an interesting research topic in the field of education, but there are some limitations. Firstly, most researchers use cards to operate experiments, and a large number of cards cause difficulty and inconvenience for users. Secondly, most users conduct experiments only in the visual modality, and such single-modal interaction greatly reduces the users' real sense of interaction. To solve these problems, we propose the Multimodal Interaction Algorithm based on Augmented Reality (ARGEV), which is based on visual and tactile feedback in augmented reality. In addition, we design a Virtual and Real Fusion Interactive Tool Suite (VRFITS) with gesture recognition and intelligent equipment. Methods The ARGEV method fuses gestures, intelligent equipment, and virtual models. We use a gesture recognition model trained by a convolutional neural network to recognize gestures in AR and to trigger vibration feedback after recognizing a five-finger grasp gesture. We establish a coordinate mapping relationship between the real hand and the virtual model to achieve the fusion of gestures and the virtual model. Results The average accuracy rate of gesture recognition was 99.04%. We verify and apply VRFITS in the Augmented Reality Chemistry Lab (ARCL); the overall operation load of ARCL is thus reduced by 29.42% in comparison to traditional simulated virtual experiments. Conclusions We achieve real-time fusion of gestures, virtual models, and intelligent equipment in ARCL. Compared with the NOBOOK virtual simulation experiment, ARCL improves the users' real sense of operation and interaction efficiency.
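The gesture-to-feedback pipeline this abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the ARGEV implementation: the names (`map_to_virtual`, `on_gesture`, the gesture labels, and the per-axis scale/offset calibration) are assumptions, standing in for the classifier output, haptic trigger, and hand-to-model coordinate mapping the paper mentions.

```python
# Hypothetical sketch of the described pipeline: a classifier label drives
# vibration feedback, and a real-hand point is mapped into the virtual
# model's coordinate frame. All names here are illustrative assumptions.

def map_to_virtual(hand_point, scale, offset):
    """Map a real-hand coordinate into virtual-model space with a
    per-axis linear (scale + offset) calibration."""
    return tuple(p * s + o for p, s, o in zip(hand_point, scale, offset))

def on_gesture(label, vibrate):
    """Trigger haptic feedback only for the five-finger grasp gesture."""
    if label == "five_finger_grasp":
        vibrate()
        return True
    return False

if __name__ == "__main__":
    events = []
    on_gesture("five_finger_grasp", lambda: events.append("buzz"))
    on_gesture("point", lambda: events.append("buzz"))
    print(events)  # ['buzz']
    print(map_to_virtual((1.0, 2.0, 0.5), (2.0, 2.0, 2.0), (0.0, 0.0, 1.0)))
```

The separation matters: the classifier only emits labels, so the haptic trigger and the coordinate mapping can be tested independently of the CNN.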
Funding: NSFC, PR China (62272396); XJTLU Research Development Funding, PR China (RDF-19-02-11).
Abstract: Immersive environments have become increasingly popular for visualizing and exploring large-scale, complex scientific data because of their key features: immersion, engagement, and awareness. Virtual reality offers numerous new interaction possibilities, including tactile and tangible interactions, gestures, and voice commands. However, it is crucial to determine the most effective combination of these techniques for a more natural interaction experience. In this paper, we present MEinVR, a novel multimodal interaction technique for exploring 3D molecular data in virtual reality. MEinVR combines VR-controller and voice input to provide a more intuitive way for users to manipulate data in immersive environments. By using the VR controller to select locations and regions of interest and voice commands to perform tasks, users can efficiently carry out complex data-exploration tasks. Our findings provide suggestions for the design of multimodal interaction techniques for 3D data exploration in virtual reality.
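The division of labor described here — controller input supplies the spatial selection, voice input names the task — amounts to a small state machine. The following is a minimal sketch under that assumption, not the MEinVR implementation; `make_session`, the command strings, and the region labels are all invented for illustration.

```python
# Hedged sketch of controller + voice fusion: the controller sets the
# current spatial selection, and voice commands act on it. A command
# arriving before any selection is logged as ignored.

def make_session():
    state = {"selection": None, "log": []}

    def select(region):            # controller input: pick a region of interest
        state["selection"] = region

    def voice(command):            # voice input: perform a task on the selection
        if state["selection"] is None:
            state["log"].append(("ignored", command))
        else:
            state["log"].append((command, state["selection"]))

    return select, voice, state

if __name__ == "__main__":
    select, voice, state = make_session()
    voice("measure distance")      # no selection yet -> ignored
    select("residues 10-25")
    voice("highlight")
    print(state["log"])
```

Keeping the two channels decoupled like this lets either modality be swapped out (e.g. gaze instead of controller) without touching the command logic.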
Funding: This work was supported by the National Key Research and Development Program of China under Grant No. 2016YFB1001300, the National Natural Science Foundation of China under Grant No. 61403080, and the Natural Science Foundation of Jiangsu Province Technology Support Plan under Grant No. BK20140641.
Abstract: The rise in the cases of motor-impairing illnesses demands research into improvements in rehabilitation therapy. Given that the service of professional therapists cannot meet the needs of motor-impaired subjects, a cloud robotic system is proposed to provide an Internet-based process for upper-limb rehabilitation with multimodal interaction. In this system, therapists and subjects are connected through the Internet using a client/server architecture. At the client side, gradual virtual games are introduced so that the subjects can control and interact with virtual objects through interaction devices such as robot arms. Computer graphics show the geometric results, and haptic/force interaction is fed back during exercising. Both video/audio information and kinematic/physiological data are transferred to the therapist for monitoring and analysis. In this way, patients can be diagnosed and directed, and therapists can manage therapy sessions remotely. The rehabilitation process can be monitored through the Internet. Expert libraries on the central server can serve as a supervisor and give advice based on the training data and the physiological data. The proposed solution is a convenient application with several features that take advantage of the extensive technological utilization in the areas of physical rehabilitation and multimodal interaction.
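The client-to-therapist data path described above (kinematic and physiological data sent over the network for remote monitoring) implies some wire format for session updates. A minimal sketch, assuming a JSON message with invented field names (`subject`, `kinematics`, `physiology`) — the paper does not specify its protocol:

```python
# Illustrative sketch of packaging one session update on the client and
# unpacking it on the server/therapist side. Field names are assumptions.
import json

def pack_session_update(subject_id, kinematics, physiology):
    """Bundle one sample of sensor data into a JSON message."""
    return json.dumps({"subject": subject_id,
                       "kinematics": kinematics,
                       "physiology": physiology})

def unpack_session_update(message):
    """Recover the structured update from the wire format."""
    return json.loads(message)

if __name__ == "__main__":
    msg = pack_session_update("s01", {"elbow_deg": 42.5}, {"hr_bpm": 88})
    print(unpack_session_update(msg)["kinematics"]["elbow_deg"])  # 42.5
```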
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 61872304 and 61802320), the State Key Laboratory of Aerodynamics (SKLA20200203), and the National Numerical Windtunnel Project (NNW2019ZT6-A17).
Abstract: In immersive flow visualization based on virtual reality, how to meet the needs of complex professional flow-visualization analysis through natural human–computer interaction is a pressing problem. To achieve natural and efficient human–computer interaction, we analyze the interaction requirements of flow visualization and study the characteristics of four human–computer interaction channels: hand, head, eye, and voice. We offer some multimodal interaction design suggestions and then propose three multimodal interaction methods: head & hand, head & hand & eye, and head & hand & eye & voice. The freedom of gestures, the stability of the head, the convenience of the eyes, and the rapid retrieval of voice are used to improve the accuracy and efficiency of interaction. The interaction load is balanced across modalities to reduce fatigue. The evaluation shows that our multimodal interaction achieves higher accuracy, faster time efficiency, and much lower fatigue than traditional joystick interaction.
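One way to picture the head & eye pairing this abstract leans on (head for stability, eyes for convenience) is a two-stage targeting rule: the head ray fixes a stable coarse region, and gaze refines the point of interest within it. The sketch below is an assumption about how such fusion could work, not the paper's method; `fuse_target` and its arguments are invented names.

```python
# Illustrative two-channel fusion: head gives a coarse region (x0, y0, x1, y1),
# gaze gives a fine point, and a busy hand routes commands to voice instead.
# All names and the fallback rule are assumptions for illustration.

def fuse_target(head_region, gaze_point, hand_busy):
    """Prefer the gaze point when it falls inside the head-stabilized
    region; otherwise fall back to the region centre. Report which
    channel should carry the next command."""
    x0, y0, x1, y1 = head_region
    gx, gy = gaze_point
    if x0 <= gx <= x1 and y0 <= gy <= y1:
        target = (gx, gy)
    else:
        target = ((x0 + x1) / 2, (y0 + y1) / 2)
    mode = "voice" if hand_busy else "hand"
    return target, mode

if __name__ == "__main__":
    print(fuse_target((0, 0, 10, 10), (3, 4), hand_busy=False))   # ((3, 4), 'hand')
    print(fuse_target((0, 0, 10, 10), (15, 4), hand_busy=True))   # ((5.0, 5.0), 'voice')
```

The fallback-to-region-centre rule is one simple way to keep interaction usable when gaze tracking drifts outside the head-stabilized region.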
Funding: Supported by grants from the National Natural Science Foundation of China (82301735), the University Synergy Innovation Program of Anhui Province (GXXT-2021-003), and the Basic and Clinical Collaborative Research Enhancement Programme of Anhui Medical University (2022xkjT016).
Abstract: Autism Spectrum Disorder (ASD) is a common neurodevelopmental disorder in children, characterized by difficulties in social interaction and communication as well as repetitive and stereotyped behaviors. Existing intervention methods have limitations, such as requiring long treatment periods and being inconvenient to implement. Extended Reality (XR) technology offers a virtual environment to enhance children's social, communication, and self-regulation skills. This paper compares XR theoretical models, application examples, and intervention effects. The study reveals that XR intervention therapy is mainly based on cognitive rehabilitation, teaching, and social-emotional learning theories. It utilizes algorithms, models, artificial intelligence (AI), eye tracking, and other technologies for interaction, achieving diverse intervention outcomes. Participants showed effective improvement in competency barriers using XR-based multimodal interactive platforms. However, Mixed Reality (MR) technology still requires further development. Future research should explore multimodal interaction technologies combining XR and AI, optimize models, prioritize the development of MR intervention scenarios, and sustain an optimal intervention level.