Augmented reality (AR) is an emerging, dynamic technology that effectively supports education across different levels, and the increased use of mobile devices has amplified its impact. As the demand for AR applications in education continues to increase, educators actively seek innovative and immersive methods to engage students in learning. However, exploring these possibilities also entails identifying and overcoming existing barriers to optimal educational integration. Concurrently, this surge in demand has highlighted specific barriers, one of which is three-dimensional (3D) modeling: creating 3D objects for AR education applications can be challenging and time-consuming for educators. To address this, we have developed a pipeline that creates realistic 3D objects from two-dimensional (2D) photographs. Augmented and virtual reality applications can then utilize these 3D objects. We evaluated the proposed pipeline based on the usability of the 3D objects and performance metrics. With 117 respondents, the co-creation team was surveyed with open-ended questions to evaluate the precision of the 3D objects created by the proposed photogrammetry pipeline. We analyzed the survey data using descriptive-analytical methods and found that the pipeline produces 3D models that closely match real-world objects, with an average mean score above 8. This study adds new knowledge on creating 3D objects for AR applications using photogrammetry; finally, it discusses potential problems and future research directions for 3D objects in the education sector.
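At the heart of any such photogrammetry pipeline is multi-view triangulation: once matched feature points and camera poses are known, each 3D surface point is recovered from its 2D observations. The following minimal numpy sketch shows linear (DLT) triangulation from two views; the camera intrinsics, poses, and pixel coordinates are synthetic toy values, not taken from the study.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                      # de-homogenize

# Toy example: two cameras 1 unit apart along x, both looking down +z.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.3, -0.2, 5.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate_point(P1, P2, x1, x2))     # ≈ [0.3, -0.2, 5.0]
```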
In lightweight augmented reality (AR) glasses, the light engine must be very compact while maintaining high optical efficiency to enable comfortable long-term wearing and a high ambient contrast ratio. “Liquid-crystal-on-silicon (LCoS) or micro-LED: who wins?” has recently become a heated debate. Conventional LCoS systems face tremendous challenges because of their bulky illumination optics, which often incorporate a large polarizing beam splitter (PBS) cube. To minimize the form factor of an LCoS system, here we demonstrate an ultracompact illumination system consisting of an in-coupling prism and a light guide plate with multiple parallelepiped extraction prisms. The overall module volume, including the illumination optics and an LCoS panel (4.4-μm pixel pitch and 1024 × 1024 resolution elements) but excluding the projection optics, is merely 0.25 cc (cm³). Yet our system exhibits excellent illuminance uniformity and an impressive optical efficiency (36%–41% for polarized input light). Such an ultracompact, high-efficiency LCoS illumination system is expected to revolutionize next-generation AR glasses.
Six degrees of freedom (6DoF) input interfaces are essential for manipulating virtual objects through translation or rotation in three-dimensional (3D) space. A traditional outside-in tracking controller requires the installation of expensive hardware in advance. While inside-out tracking controllers have been proposed, they often suffer from limitations such as interaction confined to the tracking range of the sensor (e.g., a sensor on the head-mounted display (HMD)) or the need for pose-value modification to function as an input interface (e.g., a sensor on the controller). This study investigates 6DoF pose estimation methods that do not restrict the tracking range, using a smartphone as a controller in augmented reality (AR) environments. Our approach proposes methods for estimating the initial pose of the controller and correcting the pose using an inside-out tracking approach. In addition, seven pose estimation algorithms were presented as candidates depending on the tracking range of the device sensor, the tracking method (e.g., marker recognition, visual-inertial odometry (VIO)), and whether modification of the initial pose is necessary. The performance of the algorithms was evaluated through two experiments (discrete and continuous data). The results demonstrate enhanced final pose accuracy achieved by correcting the initial pose. Furthermore, the results emphasize the importance of selecting the tracking algorithm based on the tracking range of the devices and the actual input values of the 3D interaction.
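A generic way to realize the "initial pose plus inside-out correction" idea is to chain an initial world-frame fix (e.g., obtained once from marker recognition) with the relative motion subsequently reported by the phone's VIO. The sketch below composes 4 × 4 homogeneous transforms in numpy; the frame names and numeric values are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

# Initial pose of the phone in the AR world frame, e.g., from one marker detection
# (values made up for illustration).
T_world_phone0 = make_T(rot_z(30), np.array([0.2, 0.0, 1.0]))

# Relative motion reported by the phone's VIO since that initial fix.
T_phone0_phoneNow = make_T(rot_z(10), np.array([0.05, 0.02, 0.0]))

# Corrected current pose: chain the initial fix with the tracked relative motion.
T_world_phoneNow = T_world_phone0 @ T_phone0_phoneNow
print(np.round(T_world_phoneNow, 3))
```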
Overtaking is a crucial maneuver in road transportation that requires a clear view of the road ahead. However, limited visibility past the vehicle ahead can make it challenging for drivers to assess the safety of overtaking maneuvers, leading to accidents and fatalities. In this paper, we consider atrous convolution, a powerful tool for explicitly adjusting the field of view of a filter and controlling the resolution of feature responses generated by deep convolutional neural networks in the context of semantic image segmentation. This article explores the potential of see-through vehicles as a solution to enhance overtaking safety. See-through vehicles leverage technologies such as cameras, sensors, and displays to provide drivers with a real-time view of the vehicle ahead, including areas hidden from their direct line of sight. To address the problems of safe passing and occlusion by large vehicles, we designed a see-through vehicle system in which the rear car carries a windshield display and both cars carry cameras. A server in the rear car segments the front car in the camera view, and the segmented region displays the video streamed from the front car. Our see-through system improves the driver's field of vision and helps them change lanes, pass a large vehicle that is blocking their view, and safely overtake other vehicles. Our network was trained and tested on the Cityscapes dataset using semantic segmentation. This transparent technique informs the driver about the concealed traffic situation that the front vehicle has obscured. We achieved an F1-score of 97.1%. The article also discusses the challenges and opportunities of implementing see-through vehicles in real-world scenarios, including technical, regulatory, and user-acceptance factors.
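As a minimal illustration of atrous (dilated) convolution, the PyTorch sketch below applies a 3 × 3 kernel at several dilation rates: the field of view grows with the rate while the output resolution and parameter count stay fixed. The rates and tensor sizes are arbitrary examples, not the authors' network configuration.

```python
import torch
import torch.nn as nn

# A 3x3 convolution with dilation r "sees" a (2r+1)x(2r+1) neighbourhood while
# keeping the same number of weights; padding=r preserves the spatial size.
x = torch.randn(1, 3, 64, 128)                  # dummy road-scene input
branches = []
for r in (1, 6, 12):                            # illustrative dilation rates
    conv = nn.Conv2d(3, 8, kernel_size=3, padding=r, dilation=r)
    branches.append(conv(x))
    print(f"dilation={r}: output shape {tuple(branches[-1].shape)}")

# Fusing multi-rate branches (as in ASPP-style heads) keeps resolution while
# widening the field of view for dense, per-pixel segmentation.
fused = torch.cat(branches, dim=1)              # (1, 24, 64, 128)
```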
Objective: This study aimed to explore the applications of three-dimensional (3D) technology, including virtual reality, augmented reality (AR), and 3D printing systems, in the field of medicine, particularly in renal interventions for cancer treatment. Methods: Specialized software transforms 2D medical images into precise 3D digital models, facilitating improved anatomical understanding and surgical planning. Patient-specific 3D-printed anatomical models are utilized for preoperative planning, intraoperative guidance, and surgical education. AR technology enables the overlay of digitally created content onto real-world surgical environments. Results: Patient-specific 3D-printed anatomical models have multiple applications, such as preoperative planning, intraoperative guidance, trainee education, and patient counseling. Virtual reality substitutes the real world with a computer-generated 3D environment, while AR overlays digitally created content onto the existing reality. Advances in 3D modeling technology have sparked considerable interest in their application to partial nephrectomy for renal cancer. 3D printing, also known as additive manufacturing, constructs 3D objects based on computer-aided design or digital 3D models. Utilizing 3D-printed preoperative renal models benefits surgical planning, offering a more reliable assessment of the tumor's relationship with vital anatomical structures and enabling better preparation for procedures. AR technology allows surgeons to visualize patient-specific renal anatomical structures and their spatial relationships with surrounding organs by projecting CT/MRI images onto a live laparoscopic video. Incorporating patient-specific 3D digital models into healthcare enhances best practices, resulting in improved patient care, increased patient satisfaction, and cost savings for the healthcare system.
BACKGROUND: Computer-assisted systems have attracted increasing interest in orthopaedic surgery over recent years, as they enhance precision compared to conventional hardware. The expansion of computer assistance is evolving with the employment of augmented reality. Yet the accuracy of augmented reality navigation systems has not been determined. AIM: To examine the accuracy of component alignment and restoration of the affected limb's mechanical axis in primary total knee arthroplasty (TKA) utilizing an augmented reality navigation system, and to assess whether such systems are conspicuously fruitful for an accomplished knee surgeon. METHODS: From May 2021 to December 2021, 30 patients, 25 women and five men, underwent a primary unilateral TKA. Revision cases were excluded. A preoperative radiographic procedure was performed to evaluate the limb's axial alignment. All patients were operated on by the same team, without a tourniquet, utilizing three distinct prostheses with the assistance of the Knee+™ augmented reality navigation system in every operation. Postoperatively, the same radiographic exam protocol was executed to evaluate the implants' position, orientation, and coronal plane alignment. Measurements were recorded in three stages for femoral varus and flexion and tibial varus and posterior slope: first, the expected values from the augmented reality system were documented; then the same values were calculated after each cut; and finally, the same measurements were recorded radiologically after the operations. For statistical analysis, Lin's concordance correlation coefficient was estimated, and the Wilcoxon signed-rank test was performed when needed. RESULTS: A statistically significant difference was observed between mean expected values and radiographic measurements for femoral flexion only (Z score = 2.67, P value = 0.01). Nonetheless, this difference was statistically significantly lower than 1 degree (Z score = -4.21, P value < 0.01). In terms of discrepancies between the calculated expected values and controlled measurements, a statistically significant difference in tibial varus values was detected (Z score = -2.33, P value = 0.02), which was also statistically significantly lower than 1 degree (Z score = -4.99, P value < 0.01). CONCLUSION: The results indicate satisfactory postoperative coronal alignment without outliers across all three implants utilized. Augmented reality navigation systems can bolster orthopaedic surgeons' accuracy in achieving precise axial alignment. However, further research is required to evaluate their efficacy and potential.
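For readers unfamiliar with the two statistics used, the sketch below computes Lin's concordance correlation coefficient and a Wilcoxon signed-rank test with scipy on synthetic paired measurements; the numbers are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances, as in Lin (1989)
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Synthetic paired angle values (degrees); NOT the study's data.
expected   = np.array([2.0, 3.5, 1.0, 4.0, 2.5, 3.0, 1.5, 2.0])
radiograph = np.array([2.3, 3.4, 1.2, 4.3, 2.4, 3.2, 1.8, 2.1])

print("CCC:", round(lins_ccc(expected, radiograph), 3))
stat, p = wilcoxon(expected, radiograph)   # paired, non-parametric test
print("Wilcoxon signed-rank:", stat, "p =", round(p, 3))
```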
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization of detection data of complex types, long time spans, and uneven spatial distributions were achieved. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
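Histogram-equalization tone mapping of the kind described can be sketched in a few lines of numpy: attribute values are remapped through their empirical cumulative distribution so that the mapped values spread roughly uniformly over [0, 1]. The synthetic log-normal data below only stand in for skewed detection attributes.

```python
import numpy as np

def equalize(values, bins=256):
    """Histogram-equalization tone mapping: map scalar attribute values to [0, 1]
    so that the output distribution is approximately uniform."""
    v = np.asarray(values, float).ravel()
    hist, edges = np.histogram(v, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                  # normalized cumulative distribution
    return np.interp(v, edges[1:], cdf).reshape(np.shape(values))

# Heavily skewed synthetic detection attribute (e.g., flux magnitudes).
raw = np.random.lognormal(mean=0.0, sigma=2.0, size=(128, 128))
mapped = equalize(raw)
print(mapped.min(), mapped.max())                   # ≈ 0 … 1, roughly uniform
```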
Virtual reality (VR) and augmented reality (AR) technologies have become increasingly important instruments in art education as information technology develops rapidly, transforming the conventional approach to art education. This study investigates the present situation, benefits, difficulties, and potential development trends of VR and AR technologies in art education. By means of literature analysis and case studies, this paper presents the fundamental ideas of VR and AR technologies together with their various uses in art education, namely virtual museums, interactive art production, art history instruction, and remote art collaboration. The research examines how these technologies can improve students' immersion, raise their learning motivation, and encourage innovative ideas and multidisciplinary cooperation. Practical application concerns, including technology costs, content production obstacles, user acceptance, privacy, and ethical questions, are also discussed. Finally, the article offers ideas and suggestions to help VR and AR technologies be effectively integrated into art education through teacher training, curriculum design, technology infrastructure development, and multidisciplinary cooperation. This study offers useful advice for art teachers as well as important references for legislators and technology developers working together to further the creative growth of art education.
The impact of augmented reality (AR) technology on consumer behavior has increasingly attracted academic attention. While early research has provided valuable insights, many challenges remain. This article reviews recent studies, analyzing AR's technical features, marketing concepts, and action mechanisms from a consumer perspective. By refining existing frameworks and introducing a new model based on situation awareness theory, the paper offers a deeper exploration of AR marketing. Finally, it proposes directions for future research in this emerging field.
Background: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon visualize intrahepatic structures and therefore operate precisely and improve clinical outcomes. Data Sources: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search publications in the PubMed database. The primary sources were peer-reviewed journals up to December 2016. Additional articles were identified by manually searching the references of the key articles. Results: In general, AR technology mainly comprises 3D reconstruction, display, registration, and tracking techniques, and has recently been adopted gradually for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors limiting the application of AR technology. Conclusions: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed on the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker, and a computer) was used to generate a three-dimensional overlay that was projected onto the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the positions of the patient and the surgical instrument. Thus, integral videography images of jawbones, teeth, and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Changing the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient without special glasses. The difference between the three-dimensional position of each measuring point on the solid model and in the augmented reality navigation was almost negligible (<1 mm), indicating that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site with the naked eye.
The development of digital intelligent diagnostic and treatment technology has opened countless new opportunities for liver surgery, from the era of digital anatomy to a new era of digital diagnostics, virtual surgery simulation, and the use of the created scenarios in real-time surgery using mixed reality. In this article, we describe our experience in developing dedicated three-dimensional visualization and reconstruction software for surgeons to be used in advanced liver surgery and living-donor liver transplantation. Furthermore, we share recent developments in the field by explaining the extension of the software from virtual reality to augmented reality and mixed reality.
Nonlinear errors always exist in the data obtained from the tracker in augmented reality (AR), and they seriously degrade the AR effect. This paper proposes to rectify these errors using a BP neural network. Because a BP neural network is prone to getting stuck in local extrema and converges slowly, a genetic algorithm is employed to optimize the initial weights and thresholds of the neural network. This paper discusses how to set the crucial parameters of the algorithm. Experimental results show that the method ensures that the neural network achieves global convergence quickly and correctly. The tracking precision of the AR system is improved after the tracker is rectified, and the three-dimensional effect of the AR system is enhanced.
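The idea of seeding backpropagation with genetically optimized initial weights can be sketched as follows. This toy numpy example evolves flattened weight vectors of a small one-hidden-layer network by truncation selection, uniform crossover, and sparse mutation before ordinary gradient training would take over; the network size, data, and GA settings are invented for illustration and are not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression stand-in for the tracker-error mapping (synthetic data).
X = rng.uniform(-1, 1, (200, 3))
y = np.sin(X.sum(axis=1, keepdims=True))

def unpack(w, n_in=3, n_hid=8):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = w[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = w[n_in * n_hid + n_hid:n_in * n_hid + 2 * n_hid].reshape(n_hid, 1)
    b2 = w[-1]
    return W1, b1, W2, b2

def mse(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

dim = 3 * 8 + 8 + 8 + 1
pop = rng.normal(0, 1, (40, dim))              # initial population of weight vectors

for gen in range(30):                          # genetic search for good initial weights
    fit = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]        # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(dim) < 0.5           # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(0, 0.1, dim) * (rng.random(dim) < 0.05)  # sparse mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(ind) for ind in pop])]
print("MSE of GA-selected initial weights:", round(mse(best), 4))
# `best` would then seed ordinary backpropagation (gradient descent) training.
```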
Vision 2030 requires a new generation of people with a wide variety of abilities, talents, and skills. The adoption of augmented reality (AR) and virtual reality is one possible way to align education with Vision 2030. Immersive technologies like AR are rapidly becoming powerful and versatile enough to be adopted in education to achieve this goal, and could be beneficial tools for sustaining growth in education. We reviewed the most recent studies on augmented reality to check its appropriateness for the educational goals of Vision 2030. First, the various definitions, terminologies, and technologies of AR are described briefly. Then, the specific characteristics and benefits of AR systems are identified. The pedagogical method used when adopting an AR scheme, and the fit between the equipment and the learning experiences, may be significant; therefore, three kinds of instructional methods that stress roles, location, and tasks were evaluated. The kinds of learning offered by the distinct AR approaches are elaborated upon. The technological, pedagogical, and learning problems experienced with AR are described. Potential solutions for some of the issues experienced, and topics for subsequent research, are presented in this article.
Augmented reality (AR) displays are attracting significant attention and effort. In this paper, we review the device configurations adopted for see-through displays, summarize the current development status, and highlight future challenges in micro-displays. A brief introduction to optical gratings is presented to help readers understand the challenging design of grating-based waveguides for AR displays. Finally, we discuss the most recent progress in diffraction gratings and its implications.
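The design constraint behind grating-based waveguides can be illustrated with the grating equation: the in-coupling grating must diffract light beyond the critical angle of the guide so that it propagates by total internal reflection. A small Python sketch follows, with a refractive index, grating period, and wavelength assumed only for illustration.

```python
import numpy as np

def diffraction_angle(theta_in_deg, wavelength_nm, period_nm, m=1, n_in=1.0, n_out=1.5):
    """Grating equation: n_out*sin(theta_m) = n_in*sin(theta_in) + m*lambda/period."""
    s = (n_in * np.sin(np.radians(theta_in_deg)) + m * wavelength_nm / period_nm) / n_out
    if abs(s) > 1:
        return None                               # evanescent: no propagating order
    return np.degrees(np.arcsin(s))

# Example: green light coupled into a glass guide (n ≈ 1.5) by a 380 nm pitch grating.
theta_m = diffraction_angle(0.0, 532, 380)
tir_limit = np.degrees(np.arcsin(1.0 / 1.5))      # critical angle for glass/air ≈ 41.8°
print(f"in-coupled angle {theta_m:.1f}°, TIR requires > {tir_limit:.1f}°")
```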
Product assembly simulation is considered one of the key technologies in the design and manufacturing of complex products. Virtual assembly realizes the assembly process design, verification, and optimization of complex products in a virtual environment, which plays an active and effective role in improving the assembly quality and efficiency of complex products. In recent years, augmented reality (AR) and digital twin (DT) technology have brought new opportunities and challenges to the digital assembly of complex products owing to their characteristics of virtual–real fusion and interactive control. This paper expounds the concept and connotation of AR, enumerates a typical AR assembly system structure, analyzes the key technologies and applications of AR in digital assembly, and notes that DT technology is the future development trend of intelligent assembly research.
In fields such as science and engineering, virtual environments are commonly used as replacements for practical hands-on laboratories. Sometimes these environments take the form of a remote interface to the physical laboratory apparatus, and at other times they are complete software implementations that simulate the laboratory apparatus. In this paper, we report on the use of a semi-immersive 3D mobile augmented reality (mAR) interface and limited simulations as a replacement for practical hands-on laboratories in science and engineering. The 3D-mAR-based interface implementations for three different experiments (from microelectronics, power, and communications engineering) are presented; the discovered limitations are discussed along with the results of an evaluation by science and engineering students from two different institutions, and plans for future work.
Although VSLAM/VISLAM has achieved great success, it is still difficult to quantitatively evaluate the localization results of different kinds of SLAM systems from the perspective of augmented reality due to the lack of an appropriate benchmark. For AR applications in practice, a variety of challenging situations (e.g., fast motion, strong rotation, serious motion blur, dynamic interference) may easily be encountered, since a home user may not move the AR device carefully and the real environment may be quite complex. In addition, the frequency of camera tracking loss should be minimized, and recovery from the failure state should be fast and accurate for a good AR experience. Existing SLAM datasets/benchmarks generally only provide an evaluation of pose accuracy, and their camera motions are relatively simple and do not fit well the common cases in mobile AR applications. With the above motivation, we build a new visual-inertial dataset as well as a series of evaluation criteria for AR. We also review existing monocular VSLAM/VISLAM approaches with detailed analyses and comparisons. In particular, we select eight representative monocular VSLAM/VISLAM approaches/systems and quantitatively evaluate them on our benchmark. Our dataset, sample code, and corresponding evaluation tools are available at the benchmark website http://www.zjucvg.net/eval-vislam/.
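The benchmark's own evaluation criteria are defined on its website; as a generic illustration of the standard pose-accuracy metric, the sketch below computes the absolute trajectory error (ATE RMSE) after closed-form rigid alignment of an estimated trajectory to ground truth, using synthetic trajectories rather than benchmark data.

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute trajectory error: rigidly align estimated positions to ground truth
    (closed-form Horn/Umeyama, no scale) and return the RMSE of the residuals."""
    gt, est = np.asarray(gt, float), np.asarray(est, float)
    mu_g, mu_e = gt.mean(0), est.mean(0)
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

# Synthetic spiral trajectory; the "estimate" is rotated, offset, and noisy.
s = np.linspace(0, 2 * np.pi, 200)
gt = np.c_[np.cos(s), np.sin(s), 0.1 * s]
est = gt @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + [0.5, -0.2, 0.0]
est += np.random.default_rng(1).normal(0, 0.01, est.shape)
print("ATE RMSE (m):", round(ate_rmse(gt, est), 4))
```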
Background: Mixed-reality technologies, including virtual reality (VR) and augmented reality (AR), are considered promising tools for science teaching and learning processes that could foster positive emotions, motivate autonomous learning, and improve learning outcomes. Methods: In this study, a technology-aided biological microscope learning system based on VR/AR is presented. The structure of the microscope is described by a detailed three-dimensional (3D) model; each component is represented with its topological interrelationships, and the associations among components are established. The interactive behavior of the model was specified, and a standard operating guide was compiled. The motion control of components was simulated based on collision detection. Combined with immersive VR equipment and AR technology, we developed a virtual microscope subsystem and a mobile virtual microscope guidance system. Results: The system consists of a VR subsystem and an AR subsystem. The focus of the VR subsystem is to simulate operating the microscope and the associated interactive behaviors, allowing users to observe and operate the components of the 3D microscope model by means of natural interactions in an immersive scenario. The AR subsystem allows participants to use a mobile terminal to take a picture of a microscope from a textbook and then displays the structure and functions of the instrument, as well as the relevant operating guidance. This flexibly allows students to use the system before or after class without time and space constraints. The system allows users to switch between the VR and AR subsystems. Conclusions: The system is useful for helping learners (especially K-12 students) recognize a microscope's structure and grasp the required operational skills by simulating operations through an interactive process. In the future, such technology-assisted education could be a successful learning platform in an open learning space.
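Collision-detection-based motion control of the kind described is often built on simple bounding-volume tests. The sketch below uses axis-aligned bounding boxes (AABBs) to check whether a moving microscope component would intersect another part; the component names and dimensions are hypothetical.

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box test: boxes overlap iff they overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Illustrative check: does the objective lens collide with the stage while focusing?
objective = ([-0.5, -0.5, 2.0], [0.5, 0.5, 3.0])     # (min corner, max corner), cm
stage     = ([-2.0, -2.0, 0.0], [2.0, 2.0, 1.9])
for dz in (0.0, -0.1, -0.2):                          # lowering the objective
    lo = [objective[0][0], objective[0][1], objective[0][2] + dz]
    hi = [objective[1][0], objective[1][1], objective[1][2] + dz]
    print(f"dz={dz}: collision =", aabb_overlap(lo, hi, *stage))
```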