Due to the narrowness of space and the complexity of structure, the assembly of the aircraft cabin has become one of the major bottlenecks in the whole manufacturing process. To solve this problem, the different stages of the aircraft lifecycle must be considered at the beginning of aircraft design, including the trial manufacture, assembly, maintenance, recycling and destruction of the product. Recently, thanks to the development of virtual reality and augmented reality, some low-cost and fast solutions have been found for product assembly. This paper presents a mixed reality-based interactive technology for aircraft cabin assembly, which can enhance the efficiency of assembly in a virtual environment in terms of vision, information and operation. In the mixed reality-based assembly environment, the physical scene can be captured by a camera and then reconstructed by a computer. The virtual parts, the visual assembly features, the navigation information, the physical parts and the physical assembly environment are mixed and presented in the same assembly scene. The mixed or augmented information provides assembling guidance as a detailed assembly instruction in the mixed reality-based assembly environment. A constraint proxy and its match rules help to reconstruct and visualize the restriction relationships among different parts and to avoid the complex calculation of constraint matching. Finally, a desktop prototype system of virtual assembly has been built to assist assembly verification and training with a virtual hand.
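As a rough, illustrative sketch of the constraint-proxy idea described in the abstract above: a proxy records only the mating-relevant attributes of a part feature, and simple match rules decide whether two proxies can be constrained together without running a full geometric constraint solver. The class, rule names, and tolerances below are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class ConstraintProxy:
    """Lightweight stand-in for a part feature used in constraint matching."""
    part: str
    feature_type: str          # e.g. "hole" or "shaft"
    axis: tuple                # unit direction vector of the feature axis
    diameter: float            # nominal diameter in mm

def axes_aligned(a, b, tol_deg=5.0):
    """Match rule 1: feature axes must be (anti)parallel within a tolerance."""
    dot = abs(sum(x * y for x, y in zip(a, b)))
    return math.degrees(math.acos(min(dot, 1.0))) <= tol_deg

def coaxial_match(p, q, fit_tol=0.1):
    """Match rule 2: a hole and a shaft of compatible diameter can be mated."""
    types_ok = {p.feature_type, q.feature_type} == {"hole", "shaft"}
    size_ok = abs(p.diameter - q.diameter) <= fit_tol
    return types_ok and size_ok and axes_aligned(p.axis, q.axis)

# Hypothetical cabin bracket and fastener proxies
bracket_hole = ConstraintProxy("bracket-07", "hole", (0.0, 0.0, 1.0), 6.0)
bolt_shaft = ConstraintProxy("bolt-M6", "shaft", (0.0, 0.0, 1.0), 6.0)
print(coaxial_match(bracket_hole, bolt_shaft))   # True -> the parts can be mated
```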
BACKGROUND: As a new digital holographic imaging technology, mixed reality (MR) technology has unique advantages in determining the liver anatomy and the location of tumor lesions. With the popularization of 5G communication technology, MR shows great potential in preoperative planning and intraoperative navigation, making hepatectomy more accurate and safer. AIM: To evaluate the application value of MR technology in hepatectomy for hepatocellular carcinoma (HCC). METHODS: The clinical data of 95 patients who underwent open hepatectomy for HCC between June 2018 and October 2020 at our hospital were analyzed retrospectively. The 95 patients with HCC were selected according to the inclusion and exclusion criteria. In 38 patients, hepatectomy was assisted by MR (Group A), and the other 57 patients underwent traditional hepatectomy without MR (Group B). The perioperative outcomes of the two groups were collected and compared to evaluate the application value of MR in hepatectomy for patients with HCC. RESULTS: We summarized the technical process of MR-assisted hepatectomy in the treatment of HCC. Compared with traditional hepatectomy in Group B, MR-assisted hepatectomy in Group A yielded a shorter operation time (202.86±46.02 min vs 229.52±57.13 min, P=0.003), less bleeding (329.29±97.31 mL vs 398.23±159.61 mL, P=0.028), and a shorter occlusion time of the portal vein (17.71±4.16 min vs 21.58±5.24 min, P=0.019). Group A had lower alanine aminotransferase and higher albumin values on the third day after the operation (119.74±29.08 U/L vs 135.53±36.68 U/L, P=0.029 and 33.60±3.21 g/L vs 31.80±3.51 g/L, P=0.014, respectively). The total postoperative complications and hospitalization days in Group A were significantly lower than those in Group B [14 (37.84%) vs 35 (60.34%), P=0.032 and 12.05±4.04 d vs 13.78±4.13 d, P=0.049, respectively]. CONCLUSION: MR has application value in three-dimensional visualization of the liver, surgical planning, and intraoperative navigation during hepatectomy, and it significantly improves the perioperative outcomes of hepatectomy for HCC.
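As a minimal sketch of how the between-group comparisons reported above (e.g., operation time) can be computed, the snippet below runs an independent-samples t-test; the two arrays are hypothetical placeholders, not the study's data.

```python
# Independent-samples t-test for a two-group perioperative outcome.
# The values below are illustrative placeholders, not the study's raw data.
from scipy import stats

group_a_minutes = [185, 210, 198, 220, 175, 205, 230, 190]   # MR-assisted (illustrative)
group_b_minutes = [240, 225, 260, 215, 250, 235, 270, 228]   # traditional (illustrative)

t_stat, p_value = stats.ttest_ind(group_a_minutes, group_b_minutes)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")   # P < 0.05 -> the groups differ significantly
```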
The development of digital intelligent diagnostic and treatment technology has opened countless new opportunities for liver surgery, from the era of digital anatomy to a new era of digital diagnostics, virtual surgery simulation, and the use of the created scenarios in real-time surgery through mixed reality. In this article, we describe our experience in developing dedicated three-dimensional visualization and reconstruction software for surgeons to be used in advanced liver surgery and living donor liver transplantation. Furthermore, we share the recent developments in the field by explaining the outreach of the software from virtual reality to augmented reality and mixed reality.
In the modern era, preoperative planning is substantially facilitated by artificial reality technologies, which permit a better understanding of patient anatomy, thus increasing the safety and accuracy of surgical interventions. In the field of orthopedic surgery, the increase in safety and accuracy improves treatment quality and orthopedic patient outcomes. Artificial reality technologies, which include virtual reality (VR), augmented reality (AR), and mixed reality (MR), use digital images obtained from computed tomography or magnetic resonance imaging. VR replaces the user's physical environment with one that is computer generated. AR and MR have been defined as technologies that permit the fusing of the physical with the virtual environment, enabling the user to interact with both physical and virtual objects. MR has been defined as a technology that, in contrast to AR, enables users to visualize the depth and perspective of the virtual models. We aimed to shed light on the role that MR can play in the visualization of orthopedic surgical anatomy. The literature suggests that MR could be a valuable tool in orthopedic surgeons' hands for visualization of the anatomy. However, we remark that confusion exists in the literature concerning the characteristics of MR. Thus, a clearer description of MR is needed in orthopedic research so that the potential of this technology can be more deeply understood.
Background: This work aims to provide an overview of the use of Mixed Reality (MR) technology in the maritime industry for training purposes. Current training procedures cover a broad range of procedural operations for Life-Saving Appliance (LSA) lifeboats; however, several gaps and limitations have been identified related to practical training that can be addressed through the use of MR. Augmented, Virtual and Mixed Reality applications are already used in various fields of the maritime industry, but their full potential has not yet been exploited. The SafePASS project aims to exploit the advantages of MR in maritime training by introducing a relevant application focusing on the use and maintenance of LSA lifeboats. Methods: An MR Training application is proposed that supports the training of crew members in equipment usage and operation, as well as in maintenance activities and procedures. The application consists of the training tool, which trains crew members in handling lifeboats; the training evaluation tool, which allows trainers to assess the performance of trainees; and the maintenance tool, which supports crew members in performing maintenance activities and procedures on lifeboats. For each tool, an indicative session and scenario workflow are implemented, along with the main supported interactions of the trainee with the equipment. Results: The application has been tested and validated both in a lab environment and using a real LSA lifeboat, resulting in an improved experience for the users, who provided feedback and recommendations for further development. The application has also been demonstrated onboard a cruise ship, showcasing the supported functionalities to relevant stakeholders, who recognized the added value of the application and suggested potential future exploitation areas. Conclusions: The MR Training application has been evaluated as very promising in providing a user-friendly training environment that can support crew members in LSA lifeboat operation and maintenance, while it is still subject to improvement and further expansion.
Augmented- and mixed-reality technologies have pioneered the realization of real-time fusion and interactive projection for laparoscopic surgeries. Indocyanine green fluorescence imaging technology has enabled anatomical, functional, and radical hepatectomy through tumor identification and localization of target hepatic segments, driving a transformative shift in the management of hepatic surgical diseases, moving away from traditional, empirical diagnostic and treatment approaches toward digital, intelligent ones. The Hepatic Surgery Group of the Surgery Branch of the Chinese Medical Association, the Digital Medicine Branch of the Chinese Medical Association, the Digital Intelligent Surgery Committee of the Chinese Society of Research Hospitals, and the Liver Cancer Committee of the Chinese Medical Doctor Association organized the relevant experts in China to formulate this consensus. This consensus provides a comprehensive outline of the principles, advantages, processes, and key considerations associated with the application of augmented-reality and mixed-reality technology combined with indocyanine green fluorescence imaging technology for hepatic segmental and subsegmental resection. The purpose is to streamline and standardize the application of these technologies.
The mixed reality conference system proposed in this paper is a robust, real-time video conference application that compensates for the limited interaction and lack of immersion and realism of traditional video conferencing, and it realizes the entire process of a holographic video conference from client to cloud and back to the client. This paper focuses on designing and implementing a video conference system based on AI segmentation technology and mixed reality. Several components of the mixed reality conference system are discussed, including data collection, data transmission, processing, and mixed reality presentation. The data layer is mainly used for data collection, integration, and video and audio codecs. The network layer uses WebRTC to realize peer-to-peer data communication. The data processing layer is the core part of the system, mainly responsible for human video matting and human-computer interaction, which is the key to realizing mixed reality conferences and improving the interactive experience. The presentation layer includes the login interface of the mixed reality conference system, the presentation of real-time matting of human subjects, and the presented objects. With the mixed reality conference system, conference participants in different places can see each other in real time in their mixed reality scene and share presentation content and 3D models based on mixed reality technology for a more interactive and immersive experience.
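A schematic sketch of the four-layer flow described above (data, network, processing, presentation). The class and method names are assumptions for illustration; the real system relies on WebRTC transport and a learned human-matting model rather than these stubs.

```python
# Four-layer pipeline stubs: capture -> peer-to-peer transport -> matting -> MR composition.

class DataLayer:
    def capture(self):
        return {"video": b"raw-frame", "audio": b"raw-samples"}   # capture + encode

class NetworkLayer:
    def send_peer_to_peer(self, packet):
        return packet                                             # WebRTC-style P2P relay stub

class ProcessingLayer:
    def matte_human(self, frame):
        return {"foreground": frame, "alpha": b"mask"}            # AI segmentation / matting stub

class PresentationLayer:
    def compose(self, matte, shared_model):
        return f"MR scene with {shared_model} and the remote participant's matte"

def conference_tick(shared_model="shared 3D model"):
    packet = NetworkLayer().send_peer_to_peer(DataLayer().capture())
    matte = ProcessingLayer().matte_human(packet["video"])
    return PresentationLayer().compose(matte, shared_model)

print(conference_tick())
```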
To improve and develop education systems, communication between instructors and learners in a class during the learning process is of utmost importance. Currently, the presentation of 3D models using mixed reality (MR) technology can be used to avoid the misinterpretations associated with oral and 2D model presentations. As an independent concept with its own applications, MR combines the strengths of both virtual reality (VR) and augmented reality (AR). This work aims to present a description of MR systems, including their devices, applications, and the related literature, and proposes computer vision tracking using the AR Toolkit Tracking Library. The focus of this work is on creating 3D models and implementing them in Unity 3D using the Vuforia SDK platform to develop VR and AR applications for architectural presentations.
Traditional teaching and learning about industrial robots uses abstract instructions, which are difficult for students to understand. Meanwhile, there are safety issues associated with the use of practical training equipment. To address these problems, this paper developed an instructional system based on mixed-reality (MR) technology for teaching about industrial robots. The Siasun T6A-series robots were taken as a case study, and the Microsoft MR device HoloLens 2 was used as the instructional platform. First, the parameters of the robots were analyzed based on their structural drawings. Then, the robot modules were decomposed, and 1:1 three-dimensional (3D) digital reproductions were created in Maya. Next, a library of digital models of the robot components was established, and a 3D spatial operation interface for the virtual instructional system was created in Unity. Subsequently, a C# code framework was established to satisfy the requirements of interactive functions and data transmission, and the data were saved in JSON format. In this way, a key technique that facilitates the understanding of spatial structures and a variety of human-machine interactions were realized. Finally, an instructional system based on HoloLens 2 was established for understanding the structures and principles of robots. The results showed that the instructional system developed in this study provides realistic 3D visualizations and a natural, efficient approach to human-machine interaction. This system could effectively improve the efficiency of knowledge transfer and students' motivation to learn.
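The system above persists its interaction and model-library data as JSON from a C# framework in Unity; the sketch below shows the same round trip in Python, with field names that are assumptions rather than the authors' schema.

```python
# Save and reload a small robot component library as JSON (illustrative schema).
import json

component_library = [
    {"name": "base",     "parent": None,   "dof": 1, "axis": [0, 0, 1]},
    {"name": "shoulder", "parent": "base", "dof": 1, "axis": [0, 1, 0]},
]

with open("t6a_components.json", "w", encoding="utf-8") as f:
    json.dump(component_library, f, indent=2)

with open("t6a_components.json", encoding="utf-8") as f:
    restored = json.load(f)

print(restored[1]["name"])   # "shoulder"
```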
A concurrency control mechanism for collaborative work is a key element in a mixed reality environment. However, conventional locking mechanisms restrict the potential tasks or the support of non-owners, thus increasing the working time because of waiting to avoid conflicts. Herein, we propose an adaptive concurrency control approach that can reduce conflicts and working time. We classify shared object manipulation in mixed reality into detailed goals and tasks. Then, we model the relationships among goal, task, and ownership. As the collaborative work progresses, the proposed system adapts the different concurrency control mechanisms of shared object manipulation according to the goal-task-ownership model. With the proposed concurrency control scheme, users can hold shared objects and move and rotate them together in a mixed reality environment similar to real industrial sites. Additionally, the system uses Microsoft HoloLens and Myo sensors to recognize inputs from a user and presents the results in a mixed reality environment. The proposed method is applied to installing an air conditioner as a case study. Experimental results and user studies show that, compared with the conventional approach, the proposed method reduced the number of conflicts, the waiting time, and the total working time.
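A minimal sketch of the goal-task-ownership idea: rather than one exclusive lock, a manipulation request is granted whenever it does not conflict with the tasks other users are already performing on the object. The conflict table and task names are illustrative assumptions, not the authors' rules.

```python
# Adaptive concurrency check: grant a request unless it conflicts with an active task.
CONFLICTS = {("translate", "delete"), ("delete", "translate"),
             ("rotate", "delete"), ("delete", "rotate"),
             ("delete", "delete")}

class SharedObject:
    def __init__(self, name):
        self.name = name
        self.active = {}          # user -> task currently held on this object

    def request(self, user, task):
        if any((held, task) in CONFLICTS
               for holder, held in self.active.items() if holder != user):
            return False          # conflicting manipulation -> the requester must wait
        self.active[user] = task
        return True

    def release(self, user):
        self.active.pop(user, None)

unit = SharedObject("air-conditioner outdoor unit")
print(unit.request("worker_a", "translate"))  # True
print(unit.request("worker_b", "rotate"))     # True  -> move and rotate together
print(unit.request("worker_c", "delete"))     # False -> conflicts with the active tasks
```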
Numerous works have been proposed to merge augmented reality/mixed reality (AR/MR) and the Internet of Things (IoT) in various ways. However, they have focused on their specific target applications and have limitations in interoperability or reusability when applying them to different domains or adding other devices to the system. This paper proposes a novel architecture of a convergence platform for AR/MR and IoT systems and services. The proposed architecture adopts the oneM2M IoT standard as the basic framework that converges AR/MR and IoT systems and enables the development of application services for general-purpose environments without being tied to specific systems, domains, or device manufacturers. We implement the proposed architecture using the open-source oneM2M-based IoT server and device platforms released by the open alliance for IoT standards (OCEAN) and Microsoft HoloLens as an MR device platform. We also suggest and demonstrate practical use cases and discuss the advantages of the proposed architecture.
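As an illustrative sketch (assuming a locally running oneM2M CSE such as the OCEAN Mobius server), an MR client can push data to the platform over the standard oneM2M HTTP binding; the URL, container name, and originator below are assumptions for illustration only.

```python
# Create a oneM2M contentInstance from an MR client via the HTTP binding.
import requests

CSE_URL = "http://127.0.0.1:7579/Mobius/hololens_ae/gesture"   # <cseBase>/<AE>/<container> (assumed)
headers = {
    "X-M2M-Origin": "S_hololens_ae",          # originator (AE identifier, assumed)
    "X-M2M-RI": "req-0001",                   # request identifier
    "Content-Type": "application/json;ty=4",  # ty=4 -> contentInstance
}
payload = {"m2m:cin": {"con": "air_tap:lamp-01"}}   # content observed by the MR device

resp = requests.post(CSE_URL, json=payload, headers=headers, timeout=5)
print(resp.status_code)    # 201 Created when the contentInstance is stored
```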
Nowadays, urban design faces complex demands. It has become a necessity to negotiate between stakeholder objectives, the expectations of citizens, and the demands of planning. It is desirable to involve stakeholders and citizens from an early stage of the planning process so that their different viewpoints can be successfully expressed and comprehended. Therefore, the basic aim of the study was to determine how an MR (mixed reality) application can be designed to encourage and improve communication on urban design among stakeholders and citizens. In this paper, we discuss new approaches to visualizing urban building and environment alternatives for different stakeholders and to providing them with tools to explore different approaches to urban planning, in order to support citizens' participation in urban planning with augmented and mixed reality. The major finding of the study concerns how these participatory technologies may help build a community of practice around an urban project. Throughout the different experiences, we can learn to work toward a methodology for using mixed reality as a simulation tool to enhance collaborative interaction in a real Egyptian project. From this, we derive a number of recommendations for dealing with new participatory design tools for urban planning projects.
To study the recall accuracy of offensive and defensive situations, including the movements of elite athletes and novices themselves, a novel experimental system was developed in which defensive actions were performed by the subject against a CG (computer graphics) player who presented predetermined offensive actions. Both the CG player's movements and the subject's movements were reproduced in a video using mixed reality technology for the recall examination. The system was also designed to rearrange the natural sequence of image frames, producing a video in which the time relation of offense and defense was falsified. The timing displacement in the false video was of two kinds: delayed relative to the truth or advanced relative to the truth. Using this two-video, true/false imagery method, the subject was asked to select the true video by recall; thus it became possible to examine recall accuracy quantitatively by controlling the timing displacement. Results of the experiment using this system revealed that karate experts possessed a more developed perceptual skill than novices for recognizing the time relation between the opponent's movement and their own movement. It was further found that both experts and novices recognized delayed displacement more accurately than advanced displacement.
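A small sketch of the frame-rearrangement idea: the CG opponent's frame track is shifted by a signed number of frames relative to the subject's own track to produce a falsified video (positive shift = opponent delayed, negative = advanced). The frame lists below stand in for decoded video frames and are assumptions for illustration.

```python
# Shift the opponent's frame track relative to the subject's track.
def displace(opponent_frames, shift):
    n = len(opponent_frames)
    if shift >= 0:   # delay: hold the first frame, drop the tail
        return [opponent_frames[0]] * shift + opponent_frames[: n - shift]
    # advance: drop the head, hold the last frame
    return opponent_frames[-shift:] + [opponent_frames[-1]] * (-shift)

subject = [f"S{i}" for i in range(10)]
opponent = [f"O{i}" for i in range(10)]
false_video = list(zip(subject, displace(opponent, 3)))   # opponent delayed by 3 frames
print(false_video[:5])   # [('S0','O0'), ('S1','O0'), ('S2','O0'), ('S3','O0'), ('S4','O1')]
```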
In this study, we develop a mixed reality game system to investigate the characteristics of the judgments of individual players in an evacuation process. The characteristics of the players' judgments, inferred from their performance in the game, are then incorporated into a multi-agent simulation as rules. The behavior of evacuees is evaluated under approximations of real situations by running the agent simulation with the different judgments of the evacuees included. Using the results of the simulation, effective methods for enabling the evacuees to escape within a short time are discussed.
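A schematic sketch of feeding judgment characteristics into a multi-agent evacuation step: each agent's exit choice follows a rule inferred from game play, and the simulation then advances the agents toward their chosen exits. The corridor, exits, and rules below are illustrative assumptions, not the authors' model.

```python
# Minimal agent-based evacuation step with two judgment rules.
import random

EXITS = (0, 30)   # corridor positions of the two exits

def choose_exit(position, judgment):
    if judgment == "follow_crowd":
        return min(EXITS, key=lambda e: abs(e - position))   # nearest (often crowded) exit
    return max(EXITS, key=lambda e: abs(e - position))       # simplistic congestion avoidance

agents = []
for _ in range(50):
    pos = random.randint(5, 25)
    judgment = random.choice(["follow_crowd", "avoid_congestion"])
    agents.append({"pos": pos, "target": choose_exit(pos, judgment)})

for _ in range(40):                                          # simulation ticks
    for a in agents:
        if a["pos"] != a["target"]:
            a["pos"] += 1 if a["target"] > a["pos"] else -1  # move one cell toward the exit

print(sum(a["pos"] == a["target"] for a in agents), "of", len(agents), "agents reached their exit")
```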
Background: Mixed reality (MR) video fusion systems merge video imagery with 3D scenes to make the scene more realistic and to help users understand the video content and the temporal-spatial correlation between them, reducing the user's cognitive load. MR video fusion is used in various applications; however, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computationally intensive. Moreover, huge bandwidth usage is another critical factor that affects the scalability of video fusion systems. Methods: Our framework proposes a fusion method for dynamically projecting video images into 3D models as textures. Results: Several experiments on different metrics demonstrate the effectiveness of the proposed framework. Conclusions: The framework proposed in this study can overcome client limitations by utilizing remote rendering. Furthermore, the framework we built is browser based; therefore, users can test the MR video fusion system on a laptop or tablet without installing any additional plug-ins or application programs.
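The core of projecting video images onto 3D models as textures is projective texture mapping: a world-space vertex is transformed by the camera's view-projection matrix, and the result is remapped to [0, 1] texture coordinates in the video frame. The matrices below are illustrative placeholders, not calibrated camera data from the paper.

```python
# Projective texture mapping: world point -> clip space -> NDC -> video-frame UV.
import numpy as np

def perspective(fov_deg, aspect, near, far):
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([[f / aspect, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                     [0, 0, -1, 0]])

def video_uv(world_point, view, proj):
    clip = proj @ view @ np.append(world_point, 1.0)
    ndc = clip[:3] / clip[3]                  # perspective divide
    return (ndc[:2] + 1.0) / 2.0              # NDC [-1, 1] -> texture UV [0, 1]

view = np.eye(4)
view[2, 3] = -5.0                             # camera 5 units in front of the model
proj = perspective(60.0, 16 / 9, 0.1, 100.0)
print(video_uv(np.array([0.5, 0.2, 0.0]), view, proj))   # UV of that vertex in the video frame
```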
Precise knowledge of the intra-parenchymal vascular and biliary architecture and of the location of lesions in relation to this complex anatomy is indispensable for performing liver surgery. Therefore, virtual three-dimensional (3D) reconstruction models from computed tomography/magnetic resonance imaging scans of the liver might be helpful for visualization. Augmented reality, mixed reality and 3D navigation could transfer such 3D image data directly into the operating theater to support the surgeon. This review examines the literature on the clinical and intraoperative use of these image guidance techniques in liver surgery and provides the reader with the opportunity to learn about them. Augmented reality and mixed reality have been shown to be feasible for use in open and minimally invasive liver surgery. 3D navigation facilitated the targeting of intraparenchymal lesions. The existing data are limited to small cohorts and descriptions of technical details, e.g., the agreement between the virtual 3D model and the real liver anatomy. Randomized controlled trials regarding clinical data or oncological outcomes are not available. Up to now, there has been no intraoperative application of artificial intelligence in liver surgery. The usability of all these sophisticated image guidance tools has still not reached the level of immersion that would be necessary for widespread use in the daily surgical routine. Although there are many challenges, augmented reality, mixed reality, 3D navigation and artificial intelligence are emerging fields in hepato-biliary surgery.
Background: Physical entity interactions in mixed reality (MR) environments aim to harness human capabilities in manipulating physical objects, thereby enhancing the functionality of virtual environments (VEs). In MR, a common strategy is to use virtual agents as substitutes for physical entities, balancing interaction efficiency with environmental immersion. However, the impact of virtual agent size and form on interaction performance remains unclear. Methods: Two experiments were conducted to explore how virtual agent size and form affect interaction performance, immersion, and preference in MR environments. The first experiment assessed five virtual agent sizes (25%, 50%, 75%, 100%, and 125% of physical size). The second experiment tested four types of frames (no frame, consistent frame, half frame, and surrounding frame) across all agent sizes. Participants, using a head-mounted display, performed tasks involving moving cups, typing words, and using a mouse. They completed questionnaires assessing aspects such as the virtual environment effects, interaction effects, collision concerns, and preferences. Results: Results from the first experiment revealed that agents matching the physical object size produced the best overall performance. The second experiment demonstrated that consistent framing notably enhances interaction accuracy and speed but reduces immersion. To balance efficiency and immersion, frameless agents matching physical object sizes were deemed optimal. Conclusions: Virtual agents matching physical entity sizes enhance user experience and interaction performance. Conversely, familiar frames from 2D interfaces detrimentally affect interaction and immersion in virtual spaces. This study provides valuable insights for the future development of MR systems.
We present a mixed reality-based assistive system for shading paper sketches. Given a paper sketch made by an artist, our interface helps inexperienced users to shade it appropriately. Initially, an approximate depth map is computed using a simple Delaunay-triangulation-based inflation algorithm. The system then highlights areas (to assist shading) based on a rendering of the 2.5-dimensional inflated model of the input contour. With the help of a mixed reality system, we project the highlighted areas back to aid users. The hints given by the system are used for shading and are smudged appropriately to apply an artistic shading to the sketch. The user is given flexibility at various levels to simulate conditions such as height and light position. Experiments show that the proposed system aids novice users in creating sketches with impressive shading.
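A rough sketch of a Delaunay-triangulation-based inflation (the circular contour and square-root height profile are assumptions for illustration, not the authors' algorithm): sample the contour, triangulate the interior, and assign each vertex a height that grows with its distance from the contour, yielding an approximate 2.5D depth map.

```python
# Approximate 2.5D inflation of a closed contour via Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
contour = np.c_[np.cos(theta), np.sin(theta)]               # closed input contour (a circle here)

grid = np.array([(x, y) for x in np.linspace(-1, 1, 25)
                 for y in np.linspace(-1, 1, 25) if x * x + y * y <= 1.0])
points = np.vstack([contour, grid])
tri = Delaunay(points)                                       # 2D triangulation of the shape

# Height grows with distance to the contour, producing the inflated "pillow" look.
dist_to_contour = np.min(np.linalg.norm(points[:, None, :] - contour[None, :, :], axis=2), axis=1)
height = np.sqrt(dist_to_contour)

print(len(tri.simplices), "triangles,", f"peak height {height.max():.2f}")
```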
Over the last several years, remote collaboration has been receiving more attention in the research community because of the COVID-19 pandemic. In previous studies, researchers have investigated the effect of adding visual communication cues or shared views in collaboration, but no previous study has explored the influence between them. In this paper, we investigate the influence of view types on the use of visual communication cues. We compared the use of three visual cues (hand gesture, a pointer with hand gesture, and sketches with hand gesture) across two view types (dependent and independent views). We conducted a user study, and the results showed that the hand gesture and sketches-with-hand-gesture cues were well matched with the dependent view condition, while using a pointer with the hand gesture cue was suited to the independent view condition. With the dependent view, the hand gesture and sketch cues required less mental effort for collaborative communication, had better usability, provided better message understanding, and increased the feeling of co-presence compared with the independent view. Since the dependent view gave the remote expert and the local worker the same viewpoint, the local worker could easily understand the remote expert's hand gestures. In contrast, in the independent view case, when they had different viewpoints, it was not easy for the local worker to understand the remote expert's hand gestures. The sketch cue had the benefit of showing the final position and orientation of the manipulated objects with the dependent view, but this benefit was less obvious in the independent view case (which provided a more distant view than the dependent view) because precise drawing in the sketches was difficult from a distance. Conversely, a pointer with the hand gesture cue required less mental effort to collaborate, had better usability, provided better message understanding, and increased the feeling of co-presence in the independent view condition compared with the dependent view condition. The pointer cue could be used instead of a hand gesture in the independent view condition because the pointer could still show precise pointing information regardless of the view type.
It is possible for cost professionals to prepare an informed and compendious cost plan by identifying all the factors that cause cost overruns, variations, safety hazards and other issues, even without significant prior experience. The implementation of Extended Reality can support this. This paper aims to introduce the concept of Extended Reality in the field of quantity surveying by exploring its untapped potential, and it also seeks to identify the critical barriers to implementing this technology. A detailed literature review produced eight critical factors acting as barriers to successful implementation. With suggestions from industry professionals, the inter-relationships among these factors were established and prioritised using the Interpretative Structural Modelling (ISM) tool. Further, these factors were categorised using MICMAC (Cross-Impact Matrix Multiplication Applied to Classification) analysis. This study identifies lack of expertise and lack of suitable software as the key driving factors for successful implementation; all the remaining factors are directly or indirectly influenced by them. The sample considered in building the ISM network is limited to the Indian construction industry, and the disadvantages of Extended Reality have not been covered in the study; there may be several negative repercussions for human health from this technology. This study can be used by industry professionals to understand how advanced technologies like this can overcome many challenges pertinent to cost planning and estimation. It stands out among the few research topics that contribute to reducing the knowledge gap among cost professionals irrespective of their experience.
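A compact sketch of the ISM/MICMAC mechanics mentioned above: from a binary "factor i influences factor j" matrix, the reachability matrix is obtained by transitive closure, and its row and column sums give the driving and dependence powers used in MICMAC. The 4x4 toy matrix is illustrative, not the study's eight-factor data.

```python
# Reachability matrix by transitive closure, then MICMAC driving/dependence powers.
import numpy as np

A = np.array([[0, 1, 1, 0],       # toy contextual-relation matrix: A[i, j] = 1 if i influences j
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

R = ((A + np.eye(4, dtype=int)) > 0).astype(int)
for _ in range(4):
    R = ((R + R @ R) > 0).astype(int)   # Warshall-style transitive closure

driving = R.sum(axis=1)                 # how many factors each factor reaches
dependence = R.sum(axis=0)              # how many factors reach each factor
print("reachability matrix:\n", R)
print("driving power:", driving, "dependence power:", dependence)
```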