Augmented- and mixed-reality technologies have pioneered real-time fusion and interactive projection for laparoscopic surgery. Indocyanine green fluorescence imaging has enabled anatomical, functional, and radical hepatectomy through tumor identification and localization of target hepatic segments, driving a transformative shift in the management of hepatic surgical diseases away from traditional, empirical diagnostic and treatment approaches toward digital, intelligent ones. The Hepatic Surgery Group of the Surgery Branch of the Chinese Medical Association, the Digital Medicine Branch of the Chinese Medical Association, the Digital Intelligent Surgery Committee of the Chinese Society of Research Hospitals, and the Liver Cancer Committee of the Chinese Medical Doctor Association organized relevant experts in China to formulate this consensus. The consensus comprehensively outlines the principles, advantages, processes, and key considerations for applying augmented- and mixed-reality technology combined with indocyanine green fluorescence imaging to hepatic segmental and subsegmental resection, with the aim of streamlining and standardizing the application of these technologies.
In recent years, the government has issued a series of documents to promote the construction of digital campuses. This initiative encourages the deep integration of information and intelligent technologies with education and digital reform; combining virtual reality with campus management answers the need for innovative thinking and for economic and social development, and can improve both how we learn and the environments we live in. Digital campus construction draws on virtual reality, BIM, GIS, and three-dimensional modeling technologies to provide an immersive platform for students, promote the integration of virtual reality technology with education, and help teachers, students, and parents access and exchange all kinds of educational information and resources. From the off-campus environment to school teaching equipment, from teacher and teaching-quality certification to learning and extracurricular entertainment, and from opening ceremonies to graduation parties, it brings a more efficient, convenient, and safe campus life to teachers, students, and staff, and breaks traditional information restrictions.
With its rapid development, virtual reality technology has been widely adopted in education. It can promote learning transfer, an effective means for learners to learn efficiently. This paper therefore describes how to use virtual reality technology to achieve learning transfer, in order to meet teaching goals and improve learning efficiency.
With the continuous progress of virtual simulation technology, medical surgery visualization systems have developed from two-dimensional to three-dimensional, and from digital toward networked and intelligent. Visualization systems based on mixed reality will be used at every stage of surgery, including case discussion, surgical planning, intraoperative guidance, postoperative evaluation, and rehabilitation, further promoting highly intelligent, high-precision surgery and consequently improving treatment effectiveness and the quality of medical service. This paper discusses the composition and technical characteristics of a medical operation visualization system based on mixed reality technology and introduces typical applications of mixed reality in medical operation visualization, providing a new perspective on the application of mixed reality in medical surgery.
Background This work provides an overview of the use of Mixed Reality (MR) technology for training purposes in the maritime industry. Current training procedures cover a broad range of procedural operations for Life-Saving Appliance (LSA) lifeboats; however, several gaps and limitations related to practical training have been identified that can be addressed through MR. Augmented, virtual, and mixed reality applications are already used in various fields of the maritime industry, but their full potential has not yet been exploited. The SafePASS project aims to exploit the advantages of MR in maritime training by introducing an application focused on the use and maintenance of LSA lifeboats. Methods An MR Training application is proposed that supports the training of crew members in equipment usage and operation, as well as in maintenance activities and procedures. The application consists of a training tool that trains crew members in handling lifeboats, a training evaluation tool that allows trainers to assess trainee performance, and a maintenance tool that supports crew members in performing maintenance activities and procedures on lifeboats. For each tool, an indicative session and scenario workflow are implemented, along with the main supported interactions between the trainee and the equipment. Results The application has been tested and validated both in a lab environment and with a real LSA lifeboat, resulting in an improved experience for users, who provided feedback and recommendations for further development. The application has also been demonstrated onboard a cruise ship, showcasing the supported functionalities to relevant stakeholders, who recognized its added value and suggested potential future exploitation areas. Conclusions The MR Training application has been evaluated as very promising in providing a user-friendly training environment that can support crew members in LSA lifeboat operation and maintenance, while remaining subject to improvement and further expansion.
Mixed reality (MR) is a new digital holographic imaging technology that emerged in the graphics field after virtual reality (VR) and augmented reality (AR), and it constitutes a new interdisciplinary frontier. As a new generation of technology, MR has attracted great attention from clinicians in recent years. Its emergence will bring revolutionary changes to medical education and training, medical research, medical communication, and clinical treatment. MR has already become a popular frontline information technology for medical applications, and with the popularization of digital technology in medicine, its development prospects are inestimable. The purpose of this review is to introduce the application of MR technology in the medical field and to consider its future trends.
Due to the narrowness of its space and the complexity of its structure, the assembly of the aircraft cabin has become one of the major bottlenecks in the whole manufacturing process. To solve this problem, the different stages of the aircraft lifecycle must be considered from the beginning of aircraft design, including trial manufacture, assembly, maintenance, recycling, and destruction of the product. Recently, thanks to developments in virtual reality and augmented reality, some low-cost and fast solutions have been found for product assembly. This paper presents a mixed-reality-based interactive technology for aircraft cabin assembly that can enhance assembly efficiency in a virtual environment in terms of vision, information, and operation. In the mixed-reality-based assembly environment, the physical scene is captured by a camera and reconstructed by a computer. The virtual parts, visual assembly features, navigation information, physical parts, and physical assembly environment are mixed and presented in the same assembly scene, and the mixed (augmented) information serves as a detailed assembly instruction. A constraint proxy and its match rules help to reconstruct and visualize the restriction relationships among different parts and to avoid the complex calculation of constraint matching. Finally, a desktop prototype system for virtual assembly has been built to assist assembly verification and training with a virtual hand.
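The constraint-proxy idea in the abstract above can be sketched as a simple lookup of compatible mating features: instead of solving geometry, feature-type pairs are matched against a rule table. All class, feature, and rule names below are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a constraint proxy: each part exposes named mating
# features, and match rules map feature-type pairs directly to assembly
# constraints, avoiding a full geometric constraint solve. Names and rules
# here are assumptions for illustration only.

# Rule table: (feature type A, feature type B) -> constraint to apply
MATCH_RULES = {
    frozenset(("hole", "shaft")): "coaxial",
    frozenset(("plane", "plane")): "coplanar",
    frozenset(("slot", "tab")): "aligned",
}

class ConstraintProxy:
    """Stands in for a part's full geometry during interactive assembly."""
    def __init__(self, part_name, features):
        self.part_name = part_name
        self.features = features  # e.g. {"f1": "hole", "f2": "plane"}

def match(proxy_a, feat_a, proxy_b, feat_b):
    """Return the constraint implied by a pair of mating features, or None."""
    key = frozenset((proxy_a.features[feat_a], proxy_b.features[feat_b]))
    return MATCH_RULES.get(key)

bracket = ConstraintProxy("bracket", {"f1": "hole", "f2": "plane"})
pin = ConstraintProxy("pin", {"f1": "shaft"})
print(match(bracket, "f1", pin, "f1"))  # -> coaxial
```

Because the rule table replaces geometric reasoning, adding a new part type only requires registering its feature types, which is what makes the proxy cheap enough for interactive mixed-reality assembly.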
The mixed reality conference system proposed in this paper is robust, real-time video conference application software that makes up for the simple interaction and the lack of immersion and realism of traditional video conferencing, realizing the entire holographic video conference pipeline from client to cloud and back to the client. This paper focuses on designing and implementing a video conference system based on AI segmentation technology and mixed reality. Several components of the system are discussed, including data collection, data transmission, processing, and mixed reality presentation. The data layer handles data collection, integration, and video and audio codecs. The network layer uses WebRTC for peer-to-peer data communication. The data processing layer is the core of the system, performing human video matting and human-computer interaction, which are the keys to realizing mixed reality conferences and improving the interactive experience. The presentation layer includes the login interface of the system, the presentation of real-time matting of human subjects, and the presented objects. With this system, conference participants in different places can see each other in real time in their own mixed reality scenes and share presentation content and 3D models, giving a more interactive and immersive experience.
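The matting step described above ultimately feeds standard alpha compositing: the segmentation model is assumed to output a per-pixel alpha matte, and the matted participant is blended over the virtual scene. The shapes and variable names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Minimal sketch of compositing a matted conference participant into a
# mixed reality scene. The matting network is assumed to output a
# per-pixel alpha in [0, 1]; the blend itself is plain alpha compositing.

def composite(foreground, alpha, background):
    """Blend a matted foreground over a background frame.

    foreground, background: float arrays of shape (H, W, 3)
    alpha: float array of shape (H, W, 1); 1 = person, 0 = removed pixel
    """
    return alpha * foreground + (1.0 - alpha) * background

fg = np.ones((2, 2, 3)) * 0.8          # matted person frame
bg = np.zeros((2, 2, 3))               # virtual scene behind them
a = np.array([[[1.0], [0.0]],
              [[0.5], [1.0]]])         # matte from the segmentation model
out = composite(fg, a, bg)
print(out[0, 0, 0], out[0, 1, 0], out[1, 0, 0])  # -> 0.8 0.0 0.4
```

In a real pipeline this blend runs per frame on the receiving client, after the matte and video have arrived over the WebRTC peer connection.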
A concurrency control mechanism for collaborative work is a key element in a mixed reality environment. However, conventional locking mechanisms restrict the potential tasks or support of non-owners, increasing working time because users must wait to avoid conflicts. Herein, we propose an adaptive concurrency control approach that can reduce both conflicts and working time. We classify shared-object manipulation in mixed reality into detailed goals and tasks, and then model the relationships among goal, task, and ownership. As the collaborative work progresses, the proposed system adapts its concurrency control mechanism for shared-object manipulation according to the goal-task-ownership model. With the proposed scheme, users can hold shared objects and move and rotate them together in a mixed reality environment resembling real industrial sites. The system uses a Microsoft HoloLens and Myo sensors to recognize user input and to present results in the mixed reality environment. The method is applied to installing an air conditioner as a case study. Experimental results and user studies show that, compared with the conventional approach, the proposed method reduces the number of conflicts, the waiting time, and the total working time.
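The core contrast with conventional exclusive locking can be sketched as follows: requests on a shared object are granted when the requested task is compatible with all tasks currently held, and denied (i.e., the user waits) only on a genuine conflict. The task names and compatibility table are illustrative assumptions, not the paper's exact goal-task-ownership model.

```python
# Hedged sketch of adaptive concurrency control: two manipulations of the
# same shared object may proceed concurrently when their tasks do not
# conflict (e.g. two users carrying a jointly held object), falling back
# to exclusive locking otherwise. Task names and the compatibility table
# are assumptions for illustration.

# Task pairs that may proceed together on one shared object
COMPATIBLE = {
    frozenset(["co-move"]),             # both users translating together
    frozenset(["co-move", "co-rotate"]),
    frozenset(["inspect", "co-move"]),  # viewing never blocks manipulation
}

class SharedObject:
    def __init__(self, name):
        self.name = name
        self.active = {}  # user -> task currently held

    def request(self, user, task):
        """Grant the task if it is compatible with all active tasks."""
        for other_task in self.active.values():
            if frozenset([task, other_task]) not in COMPATIBLE:
                return False  # conflict: the user must wait
        self.active[user] = task
        return True

    def release(self, user):
        self.active.pop(user, None)

unit = SharedObject("air-conditioner")
print(unit.request("alice", "co-move"))  # True
print(unit.request("bob", "co-move"))    # True, they carry it together
print(unit.request("carol", "bolt"))     # False, exclusive task conflicts
```

A pure exclusive lock would have denied Bob's request as well; allowing compatible tasks to overlap is exactly where the reported reduction in waiting time comes from.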
Background Mixed reality (MR) video fusion systems merge video imagery with 3D scenes to make the scene more realistic, helping users understand the video content and the temporal-spatial correlation between videos while reducing cognitive load. MR video fusion is used in various applications; however, such systems require powerful client machines because video streaming delivery, stitching, and rendering are computationally intensive. Moreover, heavy bandwidth usage is another critical factor affecting the scalability of video fusion systems. Methods Our framework proposes a fusion method that dynamically projects video images into 3D models as textures. Results Several experiments on different metrics demonstrate the effectiveness of the proposed framework. Conclusions The framework proposed in this study can overcome client limitations by using remote rendering. Furthermore, because the framework is browser-based, users can try the MR video fusion system on a laptop or tablet without installing any additional plug-ins or applications.
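Projecting video frames onto 3D models as textures rests on one small computation: a point on the scene model is projected through the video camera's matrix, and the resulting image coordinates become texture coordinates for sampling the frame. The pinhole intrinsics below are illustrative assumptions; the paper's system would use calibrated cameras.

```python
import numpy as np

# Sketch of the core of projective texturing: a 3D point in camera
# coordinates is projected by the intrinsics matrix K, and the pixel
# position (normalized to [0, 1]) is used as a texture coordinate for
# sampling the video frame. Intrinsics here are illustrative assumptions.

def project_to_uv(point, K, width, height):
    """Map a 3D point (camera coordinates) to normalized texture UVs."""
    x, y, z = K @ point           # pinhole projection
    u, v = x / z, y / z           # perspective divide -> pixel coordinates
    return u / width, v / height  # normalize into texture space

# Assumed intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# A point straight ahead of the camera maps to the image center
uv = project_to_uv(np.array([0.0, 0.0, 5.0]), K, 640, 480)
print(uv)  # (0.5, 0.5)
```

In the remote-rendering setup described above, this mapping runs on the server per vertex (or in a shader), so the browser client only receives the already-fused rendering.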
Background Physical-entity interactions in mixed reality (MR) environments aim to harness human capabilities in manipulating physical objects, thereby enhancing the functionality of virtual environments (VEs). In MR, a common strategy is to use virtual agents as substitutes for physical entities, balancing interaction efficiency with environmental immersion. However, the impact of virtual agent size and form on interaction performance remains unclear. Methods Two experiments explored how virtual agent size and form affect interaction performance, immersion, and preference in MR environments. The first experiment assessed five virtual agent sizes (25%, 50%, 75%, 100%, and 125% of physical size). The second experiment tested four frame types (no frame, consistent frame, half frame, and surrounding frame) across all agent sizes. Participants, using a head-mounted display, performed tasks involving moving cups, typing words, and using a mouse, and completed questionnaires assessing aspects such as virtual environment effects, interaction effects, collision concerns, and preferences. Results The first experiment revealed that agents matching the physical object's size produced the best overall performance. The second experiment demonstrated that consistent framing notably enhances interaction accuracy and speed but reduces immersion. To balance efficiency and immersion, frameless agents matching physical object sizes were deemed optimal. Conclusions Virtual agents matching the sizes of physical entities enhance user experience and interaction performance, whereas frames familiar from 2D interfaces detrimentally affect interaction and immersion in virtual spaces. This study provides valuable insights for the future development of MR systems.
Background One task assigned to space exploration satellites is detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and correspondences between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure of and changes in the space environment using augmented reality, and assist in intuitively discovering space environment events and evolutionary rules.
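Tone mapping by statistical histogram equalization, as applied to the attribute feature data above, can be sketched in a few lines: values are remapped through their empirical cumulative distribution so that the display range is used evenly even when the data distribution is highly skewed. The bin count and the synthetic data are assumptions for illustration.

```python
import numpy as np

# Sketch of tone mapping via statistical histogram equalization: each value
# is replaced by the height of the empirical CDF at its bin, spreading a
# skewed distribution across the full [0, 1] display range. Bin count and
# input data are illustrative assumptions.

def equalize(values, bins=256):
    """Map values into [0, 1] via their empirical cumulative distribution."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                       # normalize the CDF to [0, 1]
    # Look up each value's bin and return that bin's CDF height
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    return cdf[idx]

# Heavily skewed attribute data: most readings tiny, a few huge
data = np.concatenate([np.random.exponential(1.0, 10000), [50.0, 80.0]])
mapped = equalize(data)
print(mapped.min() >= 0.0 and mapped.max() <= 1.0)  # True
```

Without the equalization step, the two extreme readings would compress almost all other values into a sliver of the color scale; after it, contrast is distributed according to how much data each range actually contains.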
Traditional teaching and learning about industrial robots use abstract instructions, which are difficult for students to understand, and practical training equipment raises safety issues. To address these problems, this paper developed an instructional system for teaching about industrial robots based on mixed-reality (MR) technology. Siasun T6A-series robots were taken as a case study, and the Microsoft MR device HoloLens 2 was used as the instructional platform. First, the parameters of the robots were analyzed based on their structural drawings. Then, the robot modules were decomposed, and 1:1 three-dimensional (3D) digital reproductions were created in Maya. Next, a library of digital models of the robot components was established, and a 3D spatial operation interface for the virtual instructional system was created in Unity. Subsequently, a C# code framework was established to satisfy the requirements of interactive functions and data transmission, with the data saved in JSON format. In this way, a key technique facilitating the understanding of spatial structures, together with a variety of human-machine interactions, was realized. Finally, an instructional system based on HoloLens 2 was established for understanding the structures and principles of robots. The results showed that the instructional system developed in this study provides realistic 3D visualizations and a natural, efficient approach to human-machine interaction, and could effectively improve the efficiency of knowledge transfer and students' motivation to learn.
Telemedicine includes teleradiology, remote ultrasound diagnostics, telesurgery, telemedicine consultation, and other forms, of which telemedicine consultation is the most widely used. However, traditional telemedicine consultation suffers from a lack of communication and presentation methods, which greatly limits its wide application. Mixed reality technology cuts through the boundary between the virtual and the real, bringing a new method of remote consultation.
This paper explores the transformative impact of virtual worlds, augmented reality (AR), and the metaverse in the healthcare sector. It delves into the ways these technologies are reshaping patient care, medical education, and research, while also addressing the challenges and opportunities they present. The paper highlights the potential benefits of these technologies and emphasizes the need for comprehensive regulatory frameworks and ethical guidelines to ensure responsible integration. Finally, it discusses the challenges and opportunities these technologies present for the future of healthcare provision.
Funding (hepatic resection consensus): National Key Research and Development Program (2016YFC0106500800); National Major Scientific Instruments and Equipment Development Project of the National Natural Science Foundation of China (81627805); National Natural Science Foundation of China-Guangdong Joint Fund Key Program (U1401254); National Natural Science Foundation of China Mathematics Tianyuan Foundation (12026602); Guangdong Provincial Natural Science Foundation Team Project (6200171); Guangdong Provincial Health Appropriate Technology Promotion Project (20230319214525105, 20230322152307666).
Funding (medical surgery visualization): Supported by the Chinese Ministry of Education Online Education Research Foundation Key Project (No. 2017ZD116) and the College Teaching Guarantee Project (No. 2018-4142Z6L).
Funding (maritime MR training): Supported by the SafePASS project, which has received funding from the European Union's Horizon 2020 Research and Innovation programme (815146).
Funding (aircraft cabin assembly): Supported by the National Defence Basic Research Foundation of China (Grant No. B1420060173) and the National Hi-tech Research and Development Program of China (863 Program, Grant No. 2006AA04Z138).
Funding (mixed reality conference system): Supported in part by the Major Fundamental Research of the Natural Science Foundation of Shandong Province under Grant ZR2019ZD05; the Joint Fund for Smart Computing of the Shandong Natural Science Foundation under Grant ZR2020LZH013; the Open Project of the State Key Laboratory of Computer Architecture, CARCHA202002; and the Human Video Matting Project of Hisense Co., Ltd. under Grant QD1170020023.
Funding: Supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2021RIS-004).
Abstract: A concurrency control mechanism for collaborative work is a key element in a mixed reality environment. However, conventional locking mechanisms restrict potential tasks or the support of non-owners, thus increasing the working time because of waiting to avoid conflicts. Herein, we propose an adaptive concurrency control approach that can reduce conflicts and working time. We classify shared-object manipulation in mixed reality into detailed goals and tasks. Then, we model the relationships among goal, task, and ownership. As the collaborative work progresses, the proposed system adapts the concurrency control mechanism for shared-object manipulation according to the goal-task-ownership model. With the proposed concurrency control scheme, users can hold shared objects and move and rotate them together in a mixed reality environment similar to real industrial sites. Additionally, the system uses Microsoft HoloLens and Myo sensors to recognize user input and presents results in the mixed reality environment. The proposed method is applied to installing an air conditioner as a case study. Experimental results and user studies show that, compared with the conventional approach, the proposed method reduced the number of conflicts, the waiting time, and the total working time.
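The core contrast with conventional locking can be sketched as follows. This toy model is an assumption-laden simplification of the goal-task-ownership idea, not the paper's actual scheme: only pose-changing tasks take exclusive ownership, while other tasks on the same shared object are always granted, so non-owners are not blocked from supporting work.

```python
class SharedObject:
    """Toy model of task-aware adaptive locking (names are illustrative).
    Pose-changing tasks are exclusive; all other tasks run concurrently."""

    EXCLUSIVE = {"move", "rotate"}

    def __init__(self):
        self.owner = None   # user currently holding an exclusive task
        self.active = {}    # user -> task name

    def request(self, user, task):
        if task not in self.EXCLUSIVE:
            self.active[user] = task      # non-exclusive: always granted
            return True
        if self.owner in (None, user):    # exclusive: grant if free or re-entrant
            self.owner = user
            self.active[user] = task
            return True
        return False                      # conflict: caller must wait

    def release(self, user):
        self.active.pop(user, None)
        if self.owner == user:
            self.owner = None
```

A whole-object lock would have rejected the non-owner's "inspect" request outright; the task-aware check grants it, which is the source of the reduced waiting time reported above.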
Funding: Supported by the National Key R&D Program of China (2018YFB2100601) and the National Natural Science Foundation of China (61872024).
Abstract: Background: Mixed reality (MR) video fusion systems merge video imagery with 3D scenes to make the scene more realistic and to help users understand the video content and the temporal-spatial correlations between videos, reducing the user's cognitive load. MR video fusion is used in various applications; however, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computationally intensive. Moreover, heavy bandwidth usage is another critical factor that limits the scalability of video fusion systems. Methods: Our framework proposes a fusion method that dynamically projects video images onto 3D models as textures. Results: Several experiments on different metrics demonstrate the effectiveness of the proposed framework. Conclusions: The framework proposed in this study can overcome client limitations by utilizing remote rendering. Furthermore, the framework is browser-based, so users can try the MR video fusion system on a laptop or tablet without installing any additional plug-ins or applications.
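Projecting a video frame onto 3D geometry as a texture is commonly done with projective texture mapping: each vertex is transformed by the camera's view-projection matrix, and the resulting normalized device coordinates are remapped to texture coordinates. A minimal sketch of that mapping (the 4x4 matrix here is a stand-in; the paper's actual pipeline is not specified at this level):

```python
import numpy as np

def project_to_uv(points, view_proj):
    """Project world-space points (N, 3) through a 4x4 view-projection
    matrix and map normalized device coordinates [-1, 1] to UVs [0, 1]."""
    n = points.shape[0]
    homo = np.hstack([points, np.ones((n, 1))])  # to homogeneous coords
    clip = homo @ view_proj.T                    # clip space
    ndc = clip[:, :2] / clip[:, 3:4]             # perspective divide
    return 0.5 * ndc + 0.5                       # NDC -> UV
```

In a renderer this runs per frame in a shader, so each new video image lands on the geometry from the camera's viewpoint without re-unwrapping the model.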
Funding: Supported by the Strategic Research and Consulting Project of the Chinese Academy of Engineering (2023-HY-14).
Abstract: Background: Physical entity interactions in mixed reality (MR) environments aim to harness human capabilities in manipulating physical objects, thereby enhancing the functionality of virtual environments (VEs). In MR, a common strategy is to use virtual agents as substitutes for physical entities, balancing interaction efficiency with environmental immersion. However, the impact of virtual agent size and form on interaction performance remains unclear. Methods: Two experiments were conducted to explore how virtual agent size and form affect interaction performance, immersion, and preference in MR environments. The first experiment assessed five virtual agent sizes (25%, 50%, 75%, 100%, and 125% of physical size). The second experiment tested four frame types (no frame, consistent frame, half frame, and surrounding frame) across all agent sizes. Participants, wearing a head-mounted display, performed tasks involving moving cups, typing words, and using a mouse. They completed questionnaires assessing aspects such as virtual environment effects, interaction effects, collision concerns, and preferences. Results: The first experiment revealed that agents matching physical object size produced the best overall performance. The second experiment demonstrated that consistent framing notably enhances interaction accuracy and speed but reduces immersion. To balance efficiency and immersion, frameless agents matching physical object sizes were deemed optimal. Conclusions: Virtual agents matching physical entity sizes enhance user experience and interaction performance. Conversely, frames familiar from 2D interfaces detrimentally affect interaction and immersion in virtual spaces. This study provides valuable insights for the future development of MR systems.
Abstract: Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and correspondences between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the space environment using augmented reality, and assist in intuitively discovering space environment events and evolutionary rules.
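Histogram-equalization tone mapping, as referenced in the methods above, remaps scalar attribute values through their empirical cumulative distribution so that unevenly distributed detection data spread across the full color range. A generic sketch (the paper's exact binning and color transfer are assumptions here):

```python
import numpy as np

def equalize(values, bins=256):
    """Histogram-equalization tone mapping: remap scalar values so their
    empirical distribution becomes approximately uniform on [0, 1]."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                    # normalize the CDF to [0, 1]
    # Each value maps to the CDF of the bin it falls into.
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    return cdf[idx]
```

The remapped values can then be fed into any color map; because the transform is monotone, the relative ordering of the original attribute values is preserved.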
Abstract: Traditional teaching and learning about industrial robots relies on abstract instructions, which are difficult for students to understand, while practical training equipment raises safety issues. To address these problems, this paper developed an instructional system for industrial robots based on mixed reality (MR) technology. The Siasun T6A-series robots were taken as a case study, and the Microsoft MR device HoloLens 2 was used as the instructional platform. First, the parameters of the robots were analyzed based on their structural drawings. Then, the robot modules were decomposed, and 1:1 three-dimensional (3D) digital reproductions were created in Maya. Next, a library of digital models of the robot components was established, and a 3D spatial operation interface for the virtual instructional system was created in Unity. Subsequently, a C# code framework was established to satisfy the requirements of interactive functions and data transmission, with the data saved in JSON format. In this way, a key technique facilitating the understanding of spatial structures and a variety of human-machine interactions were realized. Finally, an instructional system based on HoloLens 2 was established for understanding the structures and principles of robots. The results showed that the instructional system developed in this study provides realistic 3D visualizations and a natural, efficient approach to human-machine interaction. This system can effectively improve the efficiency of knowledge transfer and students' motivation to learn.
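Persisting component-library state as JSON, as the abstract describes, typically means round-tripping a per-component record of model reference and transform. The record below is purely illustrative (the field names and the Python serialization are assumptions; the actual system serializes from C#), but the JSON shape carries over directly:

```python
import json

# Hypothetical record for one entry in the robot-component model library;
# field names are illustrative, not taken from the actual system.
component = {
    "id": "joint_axis_2",
    "name": "Shoulder joint",
    "model": "Models/joint_axis_2.fbx",
    "transform": {
        "position": [0.0, 0.45, 0.0],
        "rotation": [0.0, 0.0, 90.0],  # Euler angles in degrees
        "scale": [1.0, 1.0, 1.0],
    },
}

def save(path, record):
    """Write one component record to disk as pretty-printed JSON."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)

def load(path):
    """Read a component record back from its JSON file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```

Keeping the on-disk format as plain JSON lets the Unity client, the HoloLens runtime, and any authoring tools exchange component state without a shared binary schema.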
Abstract: Telemedicine includes remote teleradiology, remote ultrasound diagnostics, telesurgery, telemedicine consultation, and other forms, of which telemedicine consultation is the most widely used. However, traditional telemedicine consultation is limited by a lack of communication and presentation methods, which greatly restricts its wide application. Mixed reality technology cuts through the boundary between virtual reality and actual reality, bringing a new method for remote consultation.
Abstract: This paper explores the transformative impact of virtual worlds, augmented reality (AR), and the metaverse in the healthcare sector. It delves into the ways these technologies are reshaping patient care, medical education, and research, while also addressing the challenges and opportunities they present. The paper highlights the potential benefits of these technologies and emphasizes the need for comprehensive regulatory frameworks and ethical guidelines to ensure responsible integration. Finally, it discusses the challenges and opportunities these technologies present for the future of healthcare provision.