Funding: Project (No. R01-2006-000-11142-0) supported by the "Teukjung Gicho" Program of the Korea Science Foundation.
Abstract: This paper first introduces a way to improve interactivity with high polygon count virtual objects through the "mixed" use of image-based representation within one object. That is, both 3D polygonal and image-based representations are maintained for an object and switched for rendering depending on the functional requirements of the object. Furthermore, to reduce the popping effect and provide a smooth, gradual transition when the object representation is switched, the object is subdivided, with the subdivided parts possibly represented differently, i.e., using 3D models or images. For the image-based representation, the relief texture (RT) method is used. In particular, through the use of the mixed representation, a new approach called TangibleScreen is proposed to provide object tangibility by associating the image-based representation with a physical prop (onto which the RTs are projected) in a selective and flexible way. Overall, the proposed method provides a way to maintain an interactive frame rate with selective perceptual detail in a large-scale virtual environment, while allowing the user to interact with virtual objects in a tangible way.
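The abstract describes the per-part switching only at a high level, so the following is a minimal sketch of the idea of keeping both representations for each subdivided part and choosing one at render time. The names Part, needs_interaction, plan_representations, and the frame-budget heuristic are illustrative assumptions, not the paper's implementation.

    from dataclasses import dataclass

    @dataclass
    class Part:
        """One subdivided part of an object, kept in both representations."""
        mesh: object             # 3D polygonal representation
        relief_texture: object   # image-based representation (relief texture)
        needs_interaction: bool  # functional requirement of this part

    def plan_representations(parts, frame_budget_ms, est_mesh_cost_ms):
        """Choose a representation per part: meshes where interaction or full
        detail is functionally required, relief textures elsewhere, within a
        per-frame budget. Because the choice is made part by part, a switch
        changes only some parts at a time, which softens the popping effect."""
        plan, remaining = [], frame_budget_ms
        # Consider the parts that functionally need geometry first.
        for part in sorted(parts, key=lambda p: not p.needs_interaction):
            if part.needs_interaction and remaining >= est_mesh_cost_ms:
                plan.append((part, "mesh"))
                remaining -= est_mesh_cost_ms
            else:
                plan.append((part, "relief_texture"))  # cheap image-based fallback
        return plan

In the TangibleScreen setting, one could imagine the parts tagged as image-based being the ones whose RTs are projected onto the physical prop; the abstract does not spell out that coupling, so treat it as a guess rather than the authors' design.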
Funding: The Strategic Research and Consulting Project of the Chinese Academy of Engineering (2023-HY-14).
Abstract: Background: Physical entity interactions in mixed reality (MR) environments aim to harness human capabilities in manipulating physical objects, thereby enhancing the functionality of virtual environments (VEs). In MR, a common strategy is to use virtual agents as substitutes for physical entities, balancing interaction efficiency with environmental immersion. However, the impact of virtual agent size and form on interaction performance remains unclear. Methods: Two experiments were conducted to explore how virtual agent size and form affect interaction performance, immersion, and preference in MR environments. The first experiment assessed five virtual agent sizes (25%, 50%, 75%, 100%, and 125% of physical size). The second experiment tested four types of frames (no frame, consistent frame, half frame, and surrounding frame) across all agent sizes. Participants, using a head-mounted display, performed tasks involving moving cups, typing words, and using a mouse. They completed questionnaires assessing aspects such as virtual environment effects, interaction effects, collision concerns, and preferences. Results: Results from the first experiment revealed that agents matching physical object size produced the best overall performance. The second experiment demonstrated that consistent framing notably enhances interaction accuracy and speed but reduces immersion. To balance efficiency and immersion, frameless agents matching physical object sizes were deemed optimal. Conclusions: Virtual agents matching physical entity sizes enhance user experience and interaction performance. Conversely, familiar frames from 2D interfaces detrimentally affect interaction and immersion in virtual spaces. This study provides valuable insights for the future development of MR systems.
Abstract: Using the Internet to learn a language creates wide opportunities to enhance learning (Association of Teachers of English in Catalonia (APAC), 2010). Internet activities promote learners' self-monitoring ability, encourage the use of multimedia and network technology, and develop students' cooperation and participation. In recent years, there have been many changes in education as new technologies, including virtual learning environments (VLEs), have become an important part of the teaching/learning process. According to the Tech Terms Computer Dictionary (2012), a VLE is a virtual classroom where teachers and students communicate. VLEs have evolved: at an early stage they were only a means of transmitting information, with teachers uploading multimedia resources and students reading that information; at a later stage, VLEs have become interactive, meaning that students become active. We have designed a virtual environment in which students must contribute their opinions and comments every week in response to a required activity uploaded by the teacher. In this paper, we describe this weekly task and analyze students' opinions about the planned activity; the students become active subjects in the process. We show how VLEs are no longer merely a means of transmitting information but also a means of interaction and a way of motivating our students to be involved in their learning process.
Funding: Supported by the National Natural Science Foundation of China (No. 60472093).
Abstract: An intelligent virtual environment for training users in the operation of complex engineering systems is described. After analyzing the original virtual environment model, a virtual agent perception model is put forward and an information layer is inserted into the original virtual environment. The model classifies the various kinds of information in the environment, offers a way to describe virtual environment knowledge, and helps to establish a perception (feeling) model for the virtual agent within the virtual environment.
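The information layer is described only abstractly, so the following is one loosely hedged reading in which scene entities carry typed information records that an agent's perception model can query. All names (InfoRecord, InformationLayer, perceive) and the category scheme are illustrative assumptions rather than the paper's design.

    import math
    from dataclasses import dataclass

    @dataclass
    class InfoRecord:
        """A piece of classified information attached to a scene entity."""
        entity_id: str
        category: str      # e.g., "geometry", "state", "operation-hint"
        content: str
        position: tuple    # (x, y, z) of the entity in the VE

    class InformationLayer:
        """Holds classified knowledge about the VE, separate from its geometry."""
        def __init__(self):
            self.records = []

        def add(self, record):
            self.records.append(record)

        def perceive(self, agent_pos, radius, category=None):
            """Return the records an agent could 'sense' within a radius of its
            position, optionally filtered by category -- a toy perception query."""
            return [r for r in self.records
                    if math.dist(agent_pos, r.position) <= radius
                    and (category is None or r.category == category)]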
Abstract: Background: In mega-biodiverse environments, where different species are more likely to be heard than seen, species monitoring is generally performed using bioacoustic methodologies. Furthermore, since bird vocalizations are reasonable estimators of biodiversity, their monitoring is of great importance in the formulation of conservation policies. However, birdsong recognition is an arduous task that requires dedicated training to achieve mastery, which is costly in terms of time and money because relevant information is not easily accessible on field trips or even in specialized databases. Immersive technology based on virtual reality (VR) and spatial audio may improve species monitoring by enhancing information accessibility, interaction, and user engagement. Methods: This study used spatial audio, a Bluetooth controller, and a head-mounted display (HMD) to conduct an immersive training experience in VR. Participants moved inside a virtual world using a Bluetooth controller, and their task was to recognize targeted birdsongs. We measured recognition accuracy and user engagement according to the User Engagement Scale. Results: The experimental results revealed significantly higher engagement and accuracy for participants in the VR-based training system than in a traditional computer-based training system. All four dimensions of the User Engagement Scale received high ratings from the participants, suggesting that VR-based training provides a motivating and attractive environment for learning demanding tasks through appropriate design, exploitation of the sensory system, and virtual reality interactivity. Conclusions: The accuracy and engagement of the VR-based training system were significantly higher than with traditional training. Future research will focus on developing a variety of realistic ecosystems and their associated birds to increase the information on newer bird species within the training system. Finally, the proposed VR-based training system must be tested with additional participants and for a longer duration to measure information recall and recognition mastery among users.
Abstract: Background: Within a virtual environment (VE), the control of locomotion (e.g., self-travel) is critical for creating a realistic and functional experience. Usually the direction of locomotion, while using a head-mounted display (HMD), is determined by the direction the head is pointing, and the forward or backward motion is controlled with a hand-held controller. However, hand-held devices can be difficult to use while the eyes are covered by an HMD. Free-hand gestures, tracked with a camera or a hand data glove, have the advantage of eliminating the need to look at the hand controller, but the design of hand or finger gestures for this purpose has not been well developed. Methods: This study used a depth-sensing camera to track fingertip location (curling and straightening the fingers), which was converted to forward or backward self-travel in the VE. Fingertip position was converted to self-travel velocity using a mapping function with three parameters: a region of zero velocity (dead zone) around the relaxed hand position, a linear relationship of fingertip position to velocity (slope, or β) beginning at the edge of the dead zone, and an exponential rather than linear relationship mapping fingertip position to velocity (exponent). Using an HMD, participants moved forward along a virtual road and stopped at a target on the road by controlling self-travel velocity with finger flexion and extension. Each of the three mapping-function parameters was tested at three levels. Outcomes measured included usability ratings, fatigue, nausea, and time to complete the tasks. Results: Twenty subjects participated, but five did not complete the study due to nausea. The size of the dead zone had little effect on performance or usability. Subjects preferred lower β values, which were associated with better subjective ratings of control and reduced time to complete the task, especially for large targets. Exponent values of 1.0 or greater were preferred and reduced the time to complete the task, especially for small targets. Conclusions: Small finger movements can be used to control the velocity of self-travel in a VE. The functions used for converting fingertip position to movement velocity influence usability and performance.
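The abstract names the three mapping parameters but not the exact functional form, so the sketch below is one plausible composition: zero output inside the dead zone, then a β-scaled, exponent-shaped growth of velocity with displacement beyond the dead-zone edge. The clamp v_max and the default parameter values are assumptions for illustration only.

    def fingertip_to_velocity(x, dead_zone=0.01, beta=1.0, exponent=1.0, v_max=2.0):
        """Map fingertip displacement x (metres from the relaxed hand position,
        positive = extension, negative = flexion) to self-travel velocity (m/s).
        dead_zone: half-width of the zero-velocity region around rest.
        beta:      gain applied beyond the dead-zone edge (the slope).
        exponent:  1.0 gives a linear mapping; values above 1.0 give finer
                   control near rest and faster growth for large displacements.
        v_max:     assumed clamp on the output speed."""
        magnitude = abs(x)
        if magnitude <= dead_zone:
            return 0.0                              # inside the dead zone
        speed = beta * (magnitude - dead_zone) ** exponent
        direction = 1.0 if x > 0 else -1.0          # forward vs. backward travel
        return direction * min(speed, v_max)

With exponent = 1.0 this reduces to the linear β mapping described in the abstract; raising the exponent compresses velocities for small finger movements, which is one way to read the reported benefit for small targets.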
Funding: Project 75926 with the Colombian Navy and project 2020021 between Universidad de los Andes and the Military Hospital in Colombia.
Abstract: Background: This paper shows how current collaborative virtual environments (VEs), such as Mozilla Hubs and AltspaceVR, can aid in the task of requirements gathering in VR for simulation and training. Methods: We performed a qualitative study of our use of these technologies in the requirements gathering of two projects. Results: Our results show that requirements gathering in virtual reality has an impact on the process of requirements identification. We report advantages and shortcomings that will be of interest to future practitioners. For example, we found that VR sessions for requirements gathering in current VEs could benefit from better pointers and better sound quality. Conclusion: Current VEs are useful for the requirements gathering task in the development of VR simulators and VR training environments.
Funding: Supported by the Research Grants Council of the Hong Kong Special Administrative Region (Project No. CityU 11203322) and the Shenzhen Science and Technology Innovation Commission (Shenzhen-Hong Kong-Macao Science and Technology Project (Category C) No. SGDX20210823104002020), China.
Abstract: Analytics and visualization of multi-dimensional and complex geo-data, such as three-dimensional (3D) subsurface ground models, are critical for the development of underground space and the design and construction of underground structures (e.g., tunnels, dams, and slopes) in engineering practice. Although complicated 3D subsurface ground models can now be developed from site investigation data (e.g., boreholes), which are often sparse in practice, it remains a great challenge to visualize a 3D subsurface ground model with sophisticated stratigraphic variations using conventional two-dimensional (2D) geological cross-sections. Virtual reality (VR) technology, which has the attractive capability of constructing a virtual environment that links to the physical world, has recently been developed rapidly and applied to visualization in various disciplines. Leveraging this rapid development of VR, this study proposes a framework for immersive visualization of 3D subsurface ground models in geo-applications using VR technology. The 3D subsurface model is first developed from limited borehole data in a data-driven manner. Then, a VR system is developed using related software and hardware devices currently available on the market for immersive visualization of, and interaction with, the developed 3D subsurface ground model. The results demonstrate that VR visualization of the 3D subsurface ground model in an immersive environment has great potential to revolutionize geo-practice, moving from 2D cross-sections to a 3D immersive virtual environment in the digital era, particularly for emerging digital twins.
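The abstract states only that the subsurface model is built "from limited borehole data in a data-driven manner" without specifying the method, so the sketch below uses a deliberately naive nearest-borehole assignment just to show the shape of the task (sparse boreholes in, labelled voxel grid out). The data layout and function name are assumptions, not the paper's approach.

    import numpy as np

    def nearest_borehole_model(boreholes, xs, ys, zs):
        """Build a labelled voxel model of the subsurface by giving every (x, y)
        column the stratigraphy of its horizontally nearest borehole -- a
        deliberately simple stand-in for a data-driven modelling step.
        boreholes: list of dicts such as
            {"x": 10.0, "y": 5.0,
             "strata": [(0.0, -3.0, "fill"), (-3.0, -12.0, "clay"),
                        (-12.0, -30.0, "rock")]}   # (z_top, z_bottom, label)
        xs, ys, zs: 1-D arrays defining the voxel grid (z negative = depth).
        Returns an array of stratum labels ("" where no interval covers z)."""
        bh_xy = np.array([[b["x"], b["y"]] for b in boreholes])
        model = np.full((len(xs), len(ys), len(zs)), "", dtype=object)
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                # Index of the horizontally closest borehole to this column.
                k_near = int(np.argmin(np.hypot(bh_xy[:, 0] - x, bh_xy[:, 1] - y)))
                for k, z in enumerate(zs):
                    for z_top, z_bot, label in boreholes[k_near]["strata"]:
                        if z_bot <= z <= z_top:
                            model[i, j, k] = label
                            break
        return model

The resulting labelled grid is the kind of structure a VR system can then mesh and render per stratum for immersive inspection; more realistic workflows would interpolate stratigraphic surfaces or quantify uncertainty rather than copy the nearest column.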
Abstract: This paper investigates user preferences and behaviour associated with 2D and 3D modes of urban representation within a novel Topographic Immersive Virtual Environment (TopoIVE) created from official 1:10,000 mapping. Sixty participants were divided into two groups: the first were given a navigational task within a simulated city, and the second were given the freedom to explore it. A Head-Mounted Display (HMD) Virtual Reality (VR) app allowed participants to switch between 2D and 3D representations of buildings with a remote controller, and their use of these modes during the experiment was recorded. Participants performed mental rotation tests before entering the TopoIVE and were interviewed afterwards about their experiences using the app. The results indicate that participants preferred the 3D mode of representation overall, although preference for the 2D mode was slightly higher amongst those undertaking the navigational task, and reveal that different wayfinding solutions were adopted by participants according to their gender. Overall, the findings suggest that users exploit different aspects of the 2D and 3D modes of visualization in their wayfinding strategy, regardless of their task. The potential to combine the functionality of the 2D and 3D modes therefore offers substantial opportunities for the development of immersive virtual reality products derived from topographic datasets.
Abstract: To provide a simple and efficient approach to the real-time interactive motion control of a virtual human in a virtual maintenance environment (VME), a motion control method for the virtual human based on limited input information is proposed. Using a spatial position tracking system with only one sensor, together with the action sequences and motion models of the virtual human, the human motions and hand actions in the VME are driven by the sensor data in stages and in real time through transmission condition control during the maintenance operation. The input data are processed using Kalman filtering and wavelet transforms to improve the control effect. An experimental VME is also established to validate the control efficiency, and the experimental results show that the spatial motion control of the virtual human in the VME can be performed from limited information using the proposed control strategy.
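The abstract mentions Kalman filtering (alongside wavelet transforms) for conditioning the single-sensor tracking data but gives no model details, so the following is a generic 1-D constant-velocity Kalman filter sketch offered purely to illustrate that kind of smoothing. The state model, noise values, and sampling rate are assumptions, not the authors' parameters.

    import numpy as np

    def kalman_smooth_positions(measurements, dt=1.0 / 60, process_var=1e-3, meas_var=1e-2):
        """Smooth noisy 1-D tracker positions with a constant-velocity Kalman
        filter (state = [position, velocity]); textbook form, for illustration."""
        F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
        H = np.array([[1.0, 0.0]])                  # only position is observed
        Q = process_var * np.array([[dt**4 / 4, dt**3 / 2],
                                    [dt**3 / 2, dt**2]])
        R = np.array([[meas_var]])
        x = np.array([[measurements[0]], [0.0]])    # initial state estimate
        P = np.eye(2)
        smoothed = []
        for z in measurements:
            x = F @ x                               # predict
            P = F @ P @ F.T + Q
            y = np.array([[z]]) - H @ x             # update with the new sample
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
            smoothed.append(float(x[0, 0]))
        return smoothed

In a tracking pipeline like the one described, each coordinate of the single sensor would typically be filtered in this way (or with a coupled 3-D state) before the data drive the staged motion of the virtual human.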