The impact of augmented reality (AR) technology on consumer behavior has increasingly attracted academic attention. While early research has provided valuable insights, many challenges remain. This article reviews recent studies, analyzing AR's technical features, marketing concepts, and action mechanisms from a consumer perspective. By refining existing frameworks and introducing a new model based on situation awareness theory, the paper offers a deeper exploration of AR marketing. Finally, it proposes directions for future research in this emerging field.
Six degrees of freedom (6DoF) input interfaces are essential for manipulating virtual objects through translation or rotation in three-dimensional (3D) space. A traditional outside-in tracking controller requires the installation of expensive hardware in advance. While inside-out tracking controllers have been proposed, they often suffer from limitations such as interaction confined to the tracking range of the sensor (e.g., a sensor on the head-mounted display (HMD)) or the need for pose-value modification to function as an input interface (e.g., a sensor on the controller). This study investigates 6DoF pose estimation methods that do not restrict the tracking range, using a smartphone as a controller in augmented reality (AR) environments. We propose methods for estimating the initial pose of the controller and correcting the pose using an inside-out tracking approach. In addition, seven pose estimation algorithms are presented as candidates, depending on the tracking range of the device sensor, the tracking method (e.g., marker recognition, visual-inertial odometry (VIO)), and whether modification of the initial pose is necessary. The performance of the algorithms was evaluated through two experiments (discrete and continuous data). The results demonstrate enhanced final pose accuracy achieved by correcting the initial pose, and underscore the importance of selecting the tracking algorithm based on the tracking range of the devices and the actual input values of the 3D interaction.
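The pose-correction idea above — anchor the controller with a marker-derived initial pose, then propagate it with the relative motion reported by VIO — can be sketched as a composition of homogeneous transforms. This is an illustrative assumption about the pipeline, not the paper's actual algorithms; `make_pose` and `correct_pose` are hypothetical names and the toy poses are invented.

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def correct_pose(initial_pose, vio_start, vio_now):
    """Apply the relative VIO motion (vio_start -> vio_now) on top of the
    marker-derived initial pose of the smartphone controller."""
    delta = vio_now @ np.linalg.inv(vio_start)  # relative motion since initialization
    return delta @ initial_pose

# toy poses: controller starts 1 m along x; VIO then reports a 0.5 m move along y
init = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
v0 = make_pose(np.eye(3), np.zeros(3))
v1 = make_pose(np.eye(3), np.array([0.0, 0.5, 0.0]))
corrected = correct_pose(init, v0, v1)
print(corrected[:3, 3])  # [1.  0.5 0. ]
```

The same composition applies when `delta` carries rotation; only the marker-based initialization and the VIO source change between the seven candidate algorithms the abstract mentions.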
Overtaking is a crucial maneuver in road transportation that requires a clear view of the road ahead. However, limited visibility past the vehicle ahead can make it challenging for drivers to assess the safety of overtaking maneuvers, leading to accidents and fatalities. In this paper, we consider atrous convolution, a powerful tool for explicitly adjusting the field of view of a filter as well as controlling the resolution of feature responses generated by deep convolutional neural networks, in the context of semantic image segmentation. This article explores the potential of see-through vehicles as a solution to enhance overtaking safety. See-through vehicles leverage advanced technologies such as cameras, sensors, and displays to provide drivers with a real-time view of the vehicle ahead, including areas hidden from their direct line of sight. To address the problems of safe passing and occlusion by large vehicles, we designed a see-through vehicle system employing a windshield display in the rear car together with cameras in both cars. A server within the rear car segments the leading car in the camera image, and the segmented portion displays the video from the front car's camera. Our see-through system improves the driver's field of vision and helps the driver change lanes, pass a large vehicle that is blocking the view, and safely overtake other vehicles. Our network was trained and tested on the Cityscapes dataset using semantic segmentation, achieving an F1-score of 97.1%. This transparency technique informs the driver about the traffic situation concealed by the vehicle ahead. The article also discusses the challenges and opportunities of implementing see-through vehicles in real-world scenarios, including technical, regulatory, and user acceptance factors.
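Atrous (dilated) convolution, named in the abstract, enlarges a filter's field of view by spacing its taps `rate` pixels apart without adding weights. A minimal NumPy sketch of the operation itself (illustrative only; the paper's network and layer settings are not given here):

```python
import numpy as np

def atrous_conv2d(image, kernel, rate=1):
    """Valid 2D atrous (dilated) convolution: kernel taps are sampled
    `rate` pixels apart, enlarging the field of view without extra weights."""
    kh, kw = kernel.shape
    # effective kernel extent after dilation
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1
    H, W = image.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + eh:rate, j:j + ew:rate]  # dilated sampling
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3))
print(atrous_conv2d(img, k, rate=1).shape)  # (4, 4)
print(atrous_conv2d(img, k, rate=2).shape)  # (2, 2)
```

With `rate=2` the 3x3 kernel covers a 5x5 region, which is why atrous layers trade off output resolution against receptive field in segmentation backbones.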
Background A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and do not readily support researchers' visual perception of the evolution and interaction of events in the space environment. Methods A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the resulting attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results Sampling, feature extraction, and uniform visualization of detection data of complex types, long time spans, and uneven spatial distributions were achieved. The real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
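The tone-mapping step above is based on histogram equalization: remap attribute values through their empirical CDF so crowded value ranges spread out over the display range. A minimal sketch assuming scalar attribute data (the function name and bin count are illustrative, not the paper's implementation):

```python
import numpy as np

def equalize(values, bins=256):
    """Histogram-equalization tone mapping: map each value through the
    empirical CDF of its bin so the output is roughly uniform on [0, 1]."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                   # normalize to [0, 1]
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    return cdf[idx]

# synthetic attribute data with a crowded central range
data = np.random.default_rng(0).normal(0.0, 1.0, 1000)
mapped = equalize(data)
print(mapped.min() >= 0.0, mapped.max() <= 1.0)  # True True
```

The mapped values can then be fed to a color map; because the output distribution is near-uniform, subtle variations in dense ranges of the detection data become visually distinguishable.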
BACKGROUND Computer-assisted systems have attracted increased interest in orthopaedic surgery over recent years, as they enhance precision compared to conventional hardware. The expansion of computer assistance is evolving with the employment of augmented reality. Yet, the accuracy of augmented reality navigation systems has not been determined. AIM To examine the accuracy of component alignment and restoration of the affected limb's mechanical axis in primary total knee arthroplasty (TKA) utilizing an augmented reality navigation system, and to assess whether such systems are conspicuously fruitful for an accomplished knee surgeon. METHODS From May 2021 to December 2021, 30 patients, 25 women and five men, underwent a primary unilateral TKA. Revision cases were excluded. A preoperative radiographic procedure was performed to evaluate the limb's axial alignment. All patients were operated on by the same team, without a tourniquet, utilizing three distinct prostheses with the assistance of the Knee+™ augmented reality navigation system in every operation. Postoperatively, the same radiographic exam protocol was executed to evaluate the implants' position, orientation, and coronal plane alignment. We recorded measurements in three stages regarding femoral varus and flexion, and tibial varus and posterior slope: first, the expected values from the augmented reality system were documented; then we calculated the same values after each cut; and finally, the same measurements were recorded radiologically after the operations. Concerning statistical analysis, Lin's concordance correlation coefficient was estimated, while the Wilcoxon signed-rank test was performed when needed. RESULTS A statistically significant difference between mean expected values and radiographic measurements was observed for femoral flexion measurements only (Z score = 2.67, P value = 0.01). Nonetheless, this difference was statistically significantly lower than 1 degree (Z score = -4.21, P value < 0.01). In terms of discrepancies between expected values and controlled measurements, a statistically significant difference between tibial varus values was detected (Z score = -2.33, P value = 0.02), which was also statistically significantly lower than 1 degree (Z score = -4.99, P value < 0.01). CONCLUSION The results indicate satisfactory postoperative coronal alignment without outliers across all three different implants utilized. Augmented reality navigation systems can bolster orthopaedic surgeons' accuracy in achieving precise axial alignment. However, further research is required to evaluate their efficacy and potential.
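Lin's concordance correlation coefficient, used in the analysis above, measures agreement between two measurement series (1 = perfect concordance). A sketch of the statistic with hypothetical numbers — not the study's data:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

planned = np.array([3.0, 5.0, 2.0, 7.0, 4.0])    # hypothetical planned angles (degrees)
measured = np.array([3.2, 4.8, 2.1, 6.9, 4.3])   # hypothetical radiographic measurements
print(round(lins_ccc(planned, measured), 3))  # 0.993
```

Unlike Pearson's r, the CCC penalizes both location and scale shifts, which is why it suits planned-versus-measured agreement studies such as this one.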
Wireless smart sensors (WSS) process field data and inform inspectors about infrastructure health and safety. In bridge engineering, inspectors need reliable data about changes in displacements under loads to make correct decisions about repairs and replacements. Access to displacement information in the field and in real time remains a challenge, as inspectors do not see the data in real time: displacement data from WSS in the field undergo additional processing and are viewed at a different location. If inspectors were able to see structural displacements in real time at the locations of interest, they could conduct additional observations, creating a new, information-based decision-making reality in the field. This paper develops a new, human-centered interface that provides inspectors with real-time access to actionable structural data during inspection and monitoring, enhanced by augmented reality (AR). It summarizes and evaluates the development and validation of the new human-infrastructure interface in laboratory experiments. The experiments demonstrate that the interface, which performs all calculations on the AR device, accurately estimates dynamic displacements in comparison with a laser reference. Using this new AR interface, inspectors can observe and compare displacement data, share it across space and time, and understand structural deflection more accurately through displacement time-history visualization.
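The abstract does not specify how the AR device computes displacements; one common approach for dynamic displacement is double integration of measured acceleration with drift suppression. The sketch below is written under that assumption, with the simplest possible detrending (mean removal at each stage):

```python
import numpy as np

def displacement_from_acceleration(acc, dt):
    """Estimate dynamic displacement by double trapezoidal integration of
    acceleration, removing the mean at each stage to curb low-frequency drift."""
    def cumtrapz(y):
        return np.concatenate([[0.0], np.cumsum((y[1:] + y[:-1]) * dt / 2)])
    vel = cumtrapz(acc - acc.mean())
    disp = cumtrapz(vel - vel.mean())
    return disp - disp.mean()

# synthetic 2 Hz vibration: a(t) = -w^2 sin(w t) corresponds to d(t) = sin(w t)
dt, w = 0.001, 2 * np.pi * 2.0
t = np.arange(0, 2, dt)
disp = displacement_from_acceleration(-w**2 * np.sin(w * t), dt)
print(np.max(np.abs(disp - np.sin(w * t))) < 0.01)  # True
```

Real deployments typically replace the mean removal with a high-pass filter, since sensor bias and quasi-static motion otherwise dominate the doubly integrated signal.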
Visual inspection is commonly adopted for building operation, maintenance, and safety. The durability and defects of components or materials in buildings can be quickly assessed through visual inspection. However, implementations of visual inspection are substantially time-consuming, labor-intensive, and error-prone because useful auxiliary tools that can instantly highlight defects or damage locations in images are not available. Therefore, an advanced building inspection framework is developed and implemented with augmented reality (AR) and real-time damage detection in this study. In this framework, engineers walk around and film every corner of the building interior to generate a three-dimensional (3D) environment through ARKit. Meanwhile, a trained YOLOv5 model detects defects in real time during this process, even in a large-scale field, and the locations of the detected defects are then marked in this 3D environment. The defect areas can be measured with centimeter-level accuracy using the light detection and ranging (LiDAR) sensor on the devices. All required damage information, including defect positions and sizes, is collected at once and can be rendered in 2D and 3D views. Finally, this visual inspection can be conducted efficiently, and the previously generated environment can also be loaded to re-localize existing defect marks for future maintenance and change observation. Moreover, the proposed framework is implemented and verified in an underground parking lot of a building to detect and quantify surface defects on concrete components. The results show that conventional building inspection is significantly improved with the aid of the proposed framework in terms of damage localization, damage quantification, and inspection efficiency.
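Centimeter-level defect sizing from a depth reading can be approximated with the pinhole camera model: metric size ≈ pixel extent × depth / focal length (in pixels). The abstract does not detail the ARKit/LiDAR pipeline, so this is a sketch of the geometric idea only; all numbers are hypothetical:

```python
def defect_size_m(pixel_width, pixel_height, depth_m, fx, fy):
    """Convert a defect's bounding box in pixels to metric size (width, height in m)
    using the pinhole camera model and the depth at the defect."""
    return pixel_width * depth_m / fx, pixel_height * depth_m / fy

# hypothetical values: a 120x80 px box, 2.5 m away, ~1400 px focal length
w, h = defect_size_m(120, 80, 2.5, 1400.0, 1400.0)
print(round(w, 3), round(h, 3))  # 0.214 0.143
```

In practice the per-pixel LiDAR depth map lets each defect pixel be back-projected individually, so areas of irregular cracks or spalls can be summed rather than approximated by a bounding box.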
Augmented Reality (AR) tries to seamlessly integrate virtual content into the real world of the user. Ideally, the virtual content would behave exactly like real objects. This necessitates a correct and precise estimation of the user's viewpoint (or that of a camera) with regard to the virtual content's coordinate system. Therefore, the real-time construction of three-dimensional (3D) maps of real scenes is particularly important for augmented reality technology. In this paper, we integrate Simultaneous Localization and Mapping (SLAM) technology into augmented reality and implement a markerless AR system based on the ORB-SLAM2 framework. We propose an improved method for Oriented FAST and Rotated BRIEF (ORB) feature extraction and optimized keyframe selection, as well as the use of the Progressive Sample Consensus (PROSAC) algorithm for plane estimation in the AR implementation, thereby addressing the increased system runtime caused by the loss of large amounts of texture information in images. Comparative experiments and data analysis show that our approach yields better results, although PROSAC variants better suited to the detection of planar feature points remain to be explored.
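PROSAC, used above for plane estimation, is a variant of RANSAC that orders samples by match quality but shares the same hypothesize-and-verify core. A plain-RANSAC plane-fitting sketch on a synthetic cloud illustrates that core (this is not the paper's implementation):

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Estimate a dominant plane (unit normal n and offset d with n.p + d = 0)
    from a 3D point cloud by random 3-point hypotheses. PROSAC differs mainly
    in sampling order: it tries higher-quality correspondences first."""
    rng = rng or np.random.default_rng(0)
    best, best_inliers = None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best, best_inliers = (n, d), inliers
    return best, best_inliers

rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (300, 2)), np.zeros(300)]  # points on z = 0
noise_pts = rng.uniform(-1, 1, (60, 3))                          # outliers
(n, d), inliers = ransac_plane(np.vstack([plane_pts, noise_pts]))
print(abs(n[2]) > 0.99, abs(d) < 1e-6)  # True True
```

In an AR pipeline the recovered plane anchors virtual content to tabletops or floors; PROSAC's quality-ordered sampling usually reaches the same consensus in far fewer iterations.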
Background With an increasing number of vehicles becoming autonomous, intelligent, and connected, attention to the future use of the car human-machine interface (HMI) in these vehicles becomes more relevant. Several studies have addressed car HMI but have been less attentive to designing and implementing interactive glazing for everyday (autonomous) driving contexts. Methods Reflecting on the literature, we describe an engineering psychology practice and the design of six novel future user scenarios, which envision the application of a specific set of augmented reality (AR) supported user interactions. Additionally, we conduct evaluations on specific scenarios and experiential prototypes, which reveal that these AR scenarios aid the target user groups in experiencing a new type of interaction. The overall evaluation is positive, with valuable assessment results and suggestions. Conclusions This study may interest applied psychology educators who aspire to teach how AR can be operationalized in a human-centered design process to students with minimal pre-existing expertise or scientific knowledge in engineering psychology.
In response to the construction needs of "Real 3D China", the system structure, functional framework, application directions, and product form of a block-level augmented reality three-dimensional map are designed, providing references and ideas for the later large-scale production of augmented reality three-dimensional maps. The augmented reality three-dimensional map is produced based on Skyline software, and basic functions of the three-dimensional map, including map browsing, measurement, and analysis, are realized. Special functional modules, including housing management and pipeline management, are developed in combination with the needs of residential quarter development, which expands the application fields of the augmented reality three-dimensional map. This work lays the groundwork for the application of augmented reality three-dimensional maps.
With the advent of the information age, augmented reality technology can enhance the sense of reality in the virtual world and immerse people in both the real and virtual worlds. People have always been interested in virtual space and augmented reality technology, especially in the face of historical development trends. Museums have always been open to the public; they are not just collections but also exhibitions for the public. Through analyzing different museums, it is found that museums with augmented reality exhibitions are non-profit museums aiming at emotional experience. In order to identify the immersion factors of augmented reality in museums, and building on previous research, this paper divides immersion in augmented reality into story immersion, five-sense experience immersion, and spatial interaction immersion. The analysis of different museums shows that immersion through the five-sense experience is the most commonly used method across all display media. This method provides educational content for museum visitors and projects virtual presentations through monitors and portable IT devices, thereby increasing visitors' viewing pleasure.
Background Augmented reality classrooms have become an interesting research topic in the field of education, but there are some limitations. First, most researchers use cards to operate experiments, and a large number of cards cause difficulty and inconvenience for users. Second, most users conduct experiments only in the visual modality, and such single-modal interaction greatly reduces the users' sense of real interaction. In order to solve these problems, we propose a Multimodal Interaction Algorithm based on Augmented Reality (ARGEV), which is based on visual and tactile feedback in augmented reality. In addition, we design a Virtual and Real Fusion Interactive Tool Suite (VRFITS) with gesture recognition and intelligent equipment. Methods The ARGEV method fuses gestures, intelligent equipment, and virtual models. We use a gesture recognition model trained by a convolutional neural network to recognize gestures in AR and to trigger vibration feedback after recognizing a five-finger grasp gesture. We establish a coordinate mapping relationship between real hands and the virtual model to achieve the fusion of gestures and the virtual model. Results The average accuracy rate of gesture recognition was 99.04%. We verify and apply VRFITS in the Augmented Reality Chemistry Lab (ARCL), and the overall operation load of ARCL is reduced by 29.42% in comparison to traditional simulated virtual experiments. Conclusions We achieve real-time fusion of gestures, the virtual model, and intelligent equipment in ARCL. Compared with the NOBOOK virtual simulation experiment, ARCL improves the users' sense of real operation and interaction efficiency.
Background: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon to visualize intrahepatic structures and therefore to operate precisely and improve clinical outcomes. Data Sources: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used for searching publications in the PubMed database. The primary source of literature was peer-reviewed journals up to December 2016. Additional articles were identified by manual search of references found in the key articles. Results: In general, AR technology mainly includes 3D reconstruction, display, registration, and tracking techniques, and has recently been adopted gradually for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors limiting the application of AR technology. Conclusions: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for the improvement of long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected onto the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the positions of the patient and the surgical instrument. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point between the solid model and the augmented reality navigation was almost negligible (<1 mm), indicating that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site with the naked eye.
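Positional registration of the kind described above is commonly solved with least-squares rigid registration (the Kabsch algorithm): given corresponding points in the model and tracker frames, find the rotation and translation that align them. A sketch with hypothetical fiducial coordinates — not the study's measuring points:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid registration (Kabsch, no scaling):
    find R, t minimizing ||R @ src_i + t - dst_i||^2 over all points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t

# hypothetical fiducials on the jaw model and their tracked positions
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_register(src, dst)
err = np.abs(src @ R.T + t - dst).max()
print(err < 1e-9)  # True
```

The sub-millimeter accuracy reported in the abstract corresponds to this residual (`err`) staying below 1 mm on real, noisy fiducial measurements.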
Nonlinear errors always exist in data obtained from trackers in augmented reality (AR), and they badly degrade the AR effect. This paper proposes rectifying the errors using a BP neural network. As a BP neural network is prone to getting stuck in local extrema and converges slowly, a genetic algorithm is employed to optimize the initial weights and thresholds of the neural network. The paper discusses how to set the crucial parameters of the algorithm. Experimental results show that the method ensures the neural network achieves global convergence quickly and correctly. The tracking precision of the AR system is improved after the tracker is rectified, and the stereoscopic effect of the AR system is enhanced.
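Using a genetic algorithm to pick a neural network's initial weights, as described above, can be sketched on a toy problem. Everything here (network size, the XOR task, GA hyperparameters) is an illustrative assumption; in the paper's setting the fitness would instead be the tracker-error regression loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])               # XOR, a classic non-linear target

def loss(w):
    """MSE of a 2-4-1 tanh network whose 17 weights are packed in flat vector w."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

# simple GA: truncation selection + blend crossover + Gaussian mutation + elitism
pop = rng.normal(0, 1, (40, 17))
for gen in range(60):
    fitness = np.array([loss(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:10]]      # keep the 10 fittest
    i, j = rng.integers(0, 10, 40), rng.integers(0, 10, 40)
    children = (parents[i] + parents[j]) / 2     # crossover
    children += rng.normal(0, 0.1, children.shape)  # mutation
    pop = children
    pop[0] = parents[0]                          # elitism: never lose the best

best = min(pop, key=loss)
print(loss(best))  # well below the 0.25 loss of always predicting 0.5
```

The GA result would then seed ordinary backpropagation, which refines the weights locally — the division of labor the abstract describes.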
Vision 2030 requires a new generation of people with a wide variety of abilities, talents, and skills. The adoption of augmented reality (AR) and virtual reality is one possible way to align education with Vision 2030. Immersive technologies like AR are rapidly becoming powerful and versatile enough to be adopted in education to achieve this goal. Technologies such as AR could be beneficial tools to enhance sustainable growth in education. We reviewed the most recent studies in augmented reality to check its appropriateness for aligning with the educational goals of Vision 2030. First, the various definitions, terminologies, and technologies of AR are described briefly. Then, the specific characteristics and benefits of AR systems are determined. The pedagogical method used in adopting an AR scheme, and the consistency of the equipment and learning experiences, may be significant; therefore, three kinds of instructional methods that stress roles, location, and tasks were evaluated, and the kind of learning offered by the distinct kinds of AR approaches is elaborated upon. The technological, pedagogical, and learning problems experienced with AR are described. Potential solutions for some of these issues and topics for subsequent research are presented in this article.
Augmented reality (AR) displays are attracting significant attention and effort. In this paper, we review the adopted device configurations of see-through displays, summarize the current development status, and highlight future challenges in micro-displays. A brief introduction to optical gratings is presented to help readers understand the challenging design of grating-based waveguides for AR displays. Finally, we discuss the most recent progress in diffraction gratings and its implications.
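The grating equation that governs such diffractive waveguide couplers can be sketched numerically. Sign conventions for the incidence term vary between texts, and the wavelength and pitch below are illustrative values only, not figures from the review:

```python
import numpy as np

def diffraction_angle(wavelength_nm, period_nm, order=1, incidence_deg=0.0):
    """Grating equation (one common sign convention):
    sin(theta_m) = m * lambda / period - sin(theta_i).
    Returns the diffracted angle in degrees, or None if the order is evanescent."""
    s = order * wavelength_nm / period_nm - np.sin(np.deg2rad(incidence_deg))
    if abs(s) > 1.0:
        return None  # no propagating diffracted order in this medium
    return float(np.degrees(np.arcsin(s)))

# green light (532 nm) on an assumed 600 nm-pitch grating at normal incidence
print(round(diffraction_angle(532, 600), 1))  # about 62.5 degrees
# a 400 nm pitch makes the first order evanescent in air
print(diffraction_angle(532, 400))  # None
```

Waveguide couplers exploit exactly this: the pitch is chosen so the first order diffracts beyond the total-internal-reflection angle inside the high-index substrate, trapping the image light until the out-coupler releases it.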
Product assembly simulation is considered one of the key technologies in the design and manufacturing of complex products. Virtual assembly realizes the assembly process design, verification, and optimization of complex products in a virtual environment, which plays an active and effective role in improving the assembly quality and efficiency of complex products. In recent years, augmented reality (AR) and digital twin (DT) technology have brought new opportunities and challenges to the digital assembly of complex products owing to their characteristics of virtual-real fusion and interactive control. This paper expounds the concept and connotation of AR, enumerates a typical AR assembly system structure, analyzes the key technologies and applications of AR in digital assembly, and notes that DT technology is the future development trend of intelligent assembly research.
In fields such as science and engineering, virtual environments are commonly used as replacements for practical hands-on laboratories. Sometimes these environments take the form of a remote interface to the physical laboratory apparatus, and at other times a complete software implementation that simulates the laboratory apparatus. In this paper, we report on the use of a semi-immersive 3D mobile Augmented Reality (mAR) interface and limited simulations as a replacement for practical hands-on laboratories in science and engineering. The 3D-mAR interface implementations for three different experiments (from microelectronics, power, and communications engineering) are presented; the discovered limitations are discussed along with the results of an evaluation by science and engineering students from two different institutions, and plans for future work.
Funding: Guizhou University of Finance and Economics 2024 Student Self-Funded Research Project (Project no. 2024ZXSY001).
Funding: financially supported by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D Program (Project No. P0016038); also supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-RS-2022-00156354) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: Overtaking is a crucial maneuver in road transportation that requires a clear view of the road ahead. However, limited visibility past the vehicle ahead can make it challenging for drivers to assess the safety of overtaking maneuvers, leading to accidents and fatalities. In this paper, we consider atrous convolution, a powerful tool for explicitly adjusting the field-of-view of a filter and for controlling the resolution of feature responses generated by deep convolutional neural networks, in the context of semantic image segmentation. This article explores the potential of see-through vehicles as a solution to enhance overtaking safety. See-through vehicles leverage advanced technologies such as cameras, sensors, and displays to provide drivers with a real-time view of the road ahead, including areas hidden from their direct line of sight. To address the problems of safe passing and occlusion by large vehicles, we designed a see-through vehicle system that employs a windshield display in the rear car together with cameras in both cars. A server in the rear car segments the vehicle ahead, and the video stream from the front car is displayed on the segmented portion. Our see-through system extends the driver's field of vision, helping them change lanes, pass a large vehicle that is blocking their view, and safely overtake other vehicles. Our network was trained and tested on the Cityscapes dataset using semantic segmentation. This transparency technique informs the driver of the concealed traffic situation that the front vehicle has obscured. Our method achieved an F1-score of 97.1%. The article also discusses the challenges and opportunities of implementing see-through vehicles in real-world scenarios, including technical, regulatory, and user-acceptance factors.
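Atrous (dilated) convolution enlarges a filter's field of view by spacing its taps apart, without adding parameters or reducing feature-map resolution. A minimal 1D sketch of the operation (illustrative only; segmentation networks such as the one described apply the 2D version inside their convolutional layers):

```python
import numpy as np

def atrous_conv1d(x, kernel, dilation):
    """1D atrous (dilated) convolution in 'valid' mode: kernel taps are
    spaced `dilation` samples apart, so a k-tap kernel covers
    (k-1)*dilation + 1 input samples per output."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
# A 3-tap box kernel with dilation 2 sums taps at offsets 0, 2, 4,
# covering 5 input samples per output without extra weights.
print(atrous_conv1d(x, [1.0, 1.0, 1.0], dilation=2))
```

With dilation 1 the same kernel would see only 3 consecutive samples; raising the dilation widens the context each output sees, which is exactly the lever used to keep resolution while growing the field of view.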
Abstract: Background A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and correspondences between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
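Histogram-equalization tone mapping remaps attribute values through their empirical cumulative distribution, so a heavily skewed data range spreads roughly evenly over the display range. A small self-contained sketch of the idea (the statistical variant used in the paper may differ in detail; the exponential test data are synthetic):

```python
import numpy as np

def equalize(values, bins=256):
    """Tone-map scalar attribute data via histogram equalization:
    send each value to the normalized cumulative count of its bin,
    so the output is approximately uniform on [0, 1]."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                   # normalized CDF
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    return cdf[idx]

# Skewed synthetic "detection" data: most of the mass near zero.
data = np.random.default_rng(0).exponential(scale=1.0, size=10000)
mapped = equalize(data)
print(mapped.min(), mapped.max())   # output spans roughly [0, 1]
```

A linear rescale of the same data would crowd nearly all values near 0; equalization instead devotes display contrast where the data are dense, which is what makes sparse, uneven detection data readable.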
Abstract: BACKGROUND Computer-assisted systems have attracted increased interest in orthopaedic surgery over recent years, as they enhance precision compared to conventional hardware. Computer assistance is evolving further with the employment of augmented reality. Yet the accuracy of augmented reality navigation systems has not been determined. AIM To examine the accuracy of component alignment and restoration of the affected limb's mechanical axis in primary total knee arthroplasty (TKA) utilizing an augmented reality navigation system, and to assess whether such systems are conspicuously fruitful for an accomplished knee surgeon. METHODS From May 2021 to December 2021, 30 patients, 25 women and five men, underwent a primary unilateral TKA. Revision cases were excluded. A preoperative radiographic procedure was performed to evaluate the limb's axial alignment. All patients were operated on by the same team, without a tourniquet, utilizing three distinct prostheses with the assistance of the Knee+™ augmented reality navigation system in every operation. Postoperatively, the same radiographic exam protocol was executed to evaluate the implants' position, orientation, and coronal plane alignment. We recorded measurements in three stages regarding femoral varus and flexion, and tibial varus and posterior slope. First, the expected values from the augmented reality system were documented. We then calculated the same values after each cut, and finally the same measurements were recorded radiologically after the operations. For the statistical analysis, Lin's concordance correlation coefficient was estimated, and the Wilcoxon signed-rank test was performed where needed. RESULTS A statistically significant difference between mean expected values and radiographic measurements was observed for femoral flexion only (Z score = 2.67, P value = 0.01). Nonetheless, this difference was statistically significantly lower than 1 degree (Z score = -4.21, P value < 0.01). In terms of discrepancies between expected values and controlled measurements, a statistically significant difference between tibial varus values was detected (Z score = -2.33, P value = 0.02), which was also statistically significantly lower than 1 degree (Z score = -4.99, P value < 0.01). CONCLUSION The results indicate satisfactory postoperative coronal alignment without outliers across all three implants utilized. Augmented reality navigation systems can bolster orthopaedic surgeons' accuracy in achieving precise axial alignment. However, further research is required to evaluate their efficacy and potential.
Funding: Air Force Research Laboratory (AFRL, Grant No. FA9453-18-2-0022); the New Mexico Consortium (NMC, Grant No. 2RNA6); the US Department of Transportation Center: Transportation Consortium of South-Central States (TRANSET) Project 19STUNM02 (TRANSET, Grant No. 8-18-060ST).
Abstract: Wireless smart sensors (WSS) process field data and inform inspectors about infrastructure health and safety. In bridge engineering, inspectors need reliable data about changes in displacements under loads to make correct decisions about repairs and replacements. Access to displacement information in the field and in real time remains a challenge, as inspectors do not see the data in real time: displacement data from WSS in the field undergo additional processing and are viewed at a different location. If inspectors were able to see structural displacements in real time at the locations of interest, they could conduct additional observations, creating a new, information-based decision-making reality in the field. This paper develops a new, human-centered interface, enhanced by augmented reality (AR), that provides inspectors with real-time access to actionable structural data during inspection and monitoring. It summarizes and evaluates the development and validation of the new human-infrastructure interface in laboratory experiments. The experiments demonstrate that the interface, which processes all calculations on the AR device, accurately estimates dynamic displacements in comparison with a laser reference. Using this new AR interface, inspectors can observe and compare displacement data, share it across space and time, visualize displacements as a time history, and thereby understand structural deflection more accurately.
Abstract: Visual inspection is commonly adopted for building operation, maintenance, and safety. The durability and defects of components or materials in buildings can be quickly assessed through visual inspection. However, visual inspection is substantially time-consuming, labor-intensive, and error-prone because no auxiliary tools are available that can instantly highlight defect or damage locations in images. Therefore, an advanced building inspection framework is developed and implemented in this study with augmented reality (AR) and real-time damage detection. In this framework, engineers walk around and film every corner of the building interior to generate a three-dimensional (3D) environment through ARKit. Meanwhile, a trained YOLOv5 model detects defects in real time during this process, even in a large-scale field, and marks indicating the detected defects are then placed in this 3D environment. Defect areas can be measured with centimeter-level accuracy using the light detection and ranging (LiDAR) sensor on the device. All required damage information, including defect positions and sizes, is collected in a single pass and can be rendered in 2D and 3D views. As a result, visual inspection can be conducted efficiently, and the previously generated environment can be reloaded to re-localize existing defect marks for future maintenance and change observation. The proposed framework is implemented and verified in an underground parking lot of a building, detecting and quantifying surface defects on concrete components. The results show that conventional building inspection is significantly improved with the aid of the proposed framework in terms of damage localization, damage quantification, and inspection efficiency.
Funding: Supported by the Hainan Provincial Natural Science Foundation of China (project number: 621QN269) and the Sanya Science and Information Bureau Foundation (project number: 2021GXYL251).
Abstract: Augmented reality (AR) tries to seamlessly integrate virtual content into the real world of the user. Ideally, the virtual content would behave exactly like real objects. This necessitates a correct and precise estimate of the user's viewpoint (or that of a camera) with respect to the virtual content's coordinate system. Therefore, real-time construction of 3-dimensional (3D) maps of real scenes is particularly important for augmented reality technology. In this paper, we integrate Simultaneous Localization and Mapping (SLAM) technology into augmented reality, implementing a markerless augmented reality system based on the ORB-SLAM2 framework. We propose an improved method for Oriented FAST and Rotated BRIEF (ORB) feature extraction and optimized keyframe selection, and use the Progressive Sample Consensus (PROSAC) algorithm for the planar estimation required by the augmented reality implementation, thus addressing the increased system runtime caused by the loss of large amounts of texture information in images. Comparative experiments and data analysis show improved results, though variants of the PROSAC algorithm may be better suited to the detection of planar feature points.
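Plane estimation from sparse SLAM map points is typically done by robust sampling: repeatedly fit a plane to three random points and keep the model with the most inliers. The sketch below uses plain RANSAC as a simplified stand-in for PROSAC (PROSAC additionally orders candidate samples by match quality so good models are found earlier); all data here are synthetic.

```python
import numpy as np

def fit_plane_ransac(points, iters=200, thresh=0.01, rng=None):
    """Estimate a plane n.p + d = 0 from 3D points by random sampling,
    returning the model with the most inliers within `thresh`."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ p0
        inliers = np.sum(np.abs(points @ n + d) < thresh)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Synthetic map: 200 points on the z = 0 plane plus 40 scattered outliers.
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.zeros(200)])
noise = rng.uniform(-1, 1, (40, 3))
(n, d), count = fit_plane_ransac(np.vstack([plane, noise]))
print(abs(n[2]), count)   # normal close to [0, 0, 1], inliers >= 200
```

PROSAC's ordering matters in exactly the situation the abstract describes: when texture is sparse and matches are few, drawing the highest-quality correspondences first reduces the number of iterations needed.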
Funding: Supported by the 'Automotive Glazing Application in Intelligent Cockpit Human-Machine Interface' project (SKHX2021049), a collaboration between Saint-Gobain Research and Beijing Normal University.
Abstract: Background With an increasing number of vehicles becoming autonomous, intelligent, and connected, attention to the future use of the car human-machine interface (HMI) in these vehicles becomes more relevant. Several studies have addressed car HMI but have paid less attention to designing and implementing interactive glazing for everyday (autonomous) driving contexts. Methods Reflecting on the literature, we describe an engineering psychology practice and the design of six novel future user scenarios, which envision the application of a specific set of augmented reality (AR) user interactions. Additionally, we conduct evaluations on specific scenarios and experiential prototypes, which reveal that these AR scenarios help the target user groups experience a new type of interaction. The overall evaluation is positive, with valuable assessment results and suggestions. Conclusions This study may interest applied psychology educators who aspire to teach how AR can be operationalized in a human-centered design process to students with minimal pre-existing expertise or scientific knowledge in engineering psychology.
Abstract: In response to the construction needs of "Real 3D China", the system structure, functional framework, application directions, and product form of a block-level augmented reality three-dimensional map are designed, providing references and ideas for later large-scale production of augmented reality three-dimensional maps. The augmented reality three-dimensional map is produced based on Skyline software, and basic three-dimensional map functions, including map browsing, measurement, and analysis, are realized. Special functional modules, including housing management and pipeline management, are developed according to the needs of residential quarter development, expanding the application fields of the augmented reality three-dimensional map. This work lays the groundwork for the application of augmented reality three-dimensional maps.
Abstract: With the advent of the information age, augmented reality technology can enhance the sense of reality in the virtual world and immerse people in a blend of the real and virtual worlds. People have long been interested in virtual space and augmented reality technology, especially in the face of historical development trends. Museums have always been open to the public; they are not just collections but also exhibitions for the public. Through the analysis of different museums, it is found that museums with augmented reality exhibitions are non-profit museums aiming at emotional experience. To identify the immersion factors of augmented reality in museums, and building on previous research, this paper divides immersion in augmented reality into story immersion, five-sense experience immersion, and spatial interaction immersion. The analysis of different museums shows that, across all display media, immersion through five-sense experience is the most commonly used approach. This approach provides educational content for museum visitors and projects virtual presentations through monitors and portable IT devices, thereby increasing visitors' viewing pleasure.
Funding: The National Key R&D Program of China (2018YFB1004901) and the Independent Innovation Team Project of Jinan City (2019GXRC013).
Abstract: Background Augmented reality classrooms have become an interesting research topic in the field of education, but some limitations remain. First, most researchers use cards to operate experiments, and the large number of cards causes difficulty and inconvenience for users. Second, most users conduct experiments only in the visual modality, and such single-modal interaction greatly reduces the users' sense of realistic interaction. To solve these problems, we propose a Multimodal Interaction Algorithm based on Augmented Reality (ARGEV), which is based on visual and tactile feedback in augmented reality. In addition, we design a Virtual and Real Fusion Interactive Tool Suite (VRFITS) with gesture recognition and intelligent equipment. Methods The ARGEV method fuses gestures, intelligent equipment, and virtual models. We use a gesture recognition model trained with a convolutional neural network to recognize gestures in AR and to trigger vibration feedback after recognizing a five-finger grasp gesture. We establish a coordinate mapping between real hands and the virtual model to achieve the fusion of gestures and the virtual model. Results The average gesture recognition accuracy was 99.04%. We verify and apply VRFITS in the Augmented Reality Chemistry Lab (ARCL), where the overall operation load is reduced by 29.42% in comparison to traditional virtual simulation experiments. Conclusions We achieve real-time fusion of gestures, the virtual model, and intelligent equipment in ARCL. Compared with the NOBOOK virtual simulation experiment, ARCL improves the users' sense of realistic operation and interaction efficiency.
Funding: Supported by grants from the Mission Plan Program of the Beijing Municipal Administration of Hospitals (SML20152201), the Beijing Municipal Administration of Hospitals Clinical Medicine Development Special Funding (ZYLX201712), the National Natural Science Foundation of China (81427803), and the Beijing Tsinghua Changgung Hospital Fund (12015C1039).
Abstract: Background: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon visualize intrahepatic structures and therefore operate precisely and improve clinical outcomes. Data Sources: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search for publications in the PubMed database. The primary source of literature was peer-reviewed journals up to December 2016. Additional articles were identified by manually searching the references of key articles. Results: In general, AR technology mainly includes 3D reconstruction, display, registration, and tracking techniques, and has recently been adopted gradually for liver surgeries including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors limiting the application of AR technology. Conclusions: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed on the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.
Funding: Supported by a Grant-in-Aid for Scientific Research (22659366) from the Japan Society for the Promotion of Science.
Abstract: To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker, and a computer) was used to generate a three-dimensional overlay that was projected onto the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the positions of the patient and the surgical instrument. Thus, integral videography images of jawbones, teeth, and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Changing the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient without special glasses. The difference in three-dimensional position between each measuring point on the solid model and the augmented reality navigation was almost negligible (<1 mm), indicating that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site with the naked eye.
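The positional registration step here (aligning a preoperative CT model with tracked positions) is commonly solved as a least-squares rigid alignment between corresponding point sets. The sketch below shows the generic Kabsch/Procrustes solution on synthetic points; it is not the paper's specific pipeline, only the standard building block such systems rely on.

```python
import numpy as np

def rigid_register(P, Q):
    """Kabsch/Procrustes: find rotation R and translation t that best
    map source points P onto target points Q in the least-squares
    sense (the core step when aligning a CT model with tracked points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correction term guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

# Synthetic check: recover a known 30-degree rotation about z plus a shift.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
a = np.radians(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_register(P, Q)
print(np.allclose(R, R_true))  # → True
```

With noiseless correspondences the recovered transform matches exactly; with real tracker data the residual of this fit is what surfaces as the sub-millimeter registration error reported above.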
Funding: Project supported by the Science Foundation of the Shanghai Municipal Commission of Science and Technology (Grant No. 025115008).
Abstract: Nonlinear errors always exist in the data obtained from trackers in augmented reality (AR), and they severely degrade the AR experience. This paper proposes rectifying these errors using a BP (backpropagation) neural network. As a BP neural network is prone to getting stuck in local extrema and converges slowly, a genetic algorithm is employed to optimize the initial weights and thresholds of the network. This paper discusses how to set the crucial parameters of the algorithm. Experimental results show that the method ensures that the neural network achieves global convergence quickly and correctly. The tracking precision of the AR system is improved after the tracker is rectified, and the three-dimensional effect of the AR system is enhanced.
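The idea of seeding backpropagation with genetic-algorithm-selected initial weights can be illustrated with a toy network. This sketch is not the paper's implementation: the sin-curve fitting task stands in for the tracker error surface, and the network size, population size, and mutation scale are arbitrary choices. Elitist selection (keeping the best half each generation) guarantees the best genome never worsens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the nonlinear tracker error: fit y = sin(x) with a
# 1-hidden-layer network (8 hidden units, 25 parameters in total).
X = np.linspace(-2, 2, 64)[:, None]
Y = np.sin(X)

def unpack(g):
    """Genome -> (W1, b1, W2, b2) for the tiny network."""
    return g[:8].reshape(1, 8), g[8:16], g[16:24].reshape(8, 1), g[24:25]

def mse(g):
    W1, b1, W2, b2 = unpack(g)
    H = np.tanh(X @ W1 + b1)
    return float(np.mean((H @ W2 + b2 - Y) ** 2))

# Genetic search over initial weights: keep the best half (elitism),
# refill the other half with mutated copies of the survivors.
pop = rng.normal(size=(40, 25))
init_best = min(mse(g) for g in pop)
for _ in range(60):
    pop = pop[np.argsort([mse(g) for g in pop])]
    pop[20:] = pop[:20] + rng.normal(scale=0.2, size=(20, 25))
final_best = min(mse(g) for g in pop)

print(final_best <= init_best)  # → True: elitism never loses the best genome
```

Backpropagation would then start gradient descent from the best surviving genome rather than from a random draw, which is the mechanism the paper uses to avoid poor local extrema.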
Funding: Supported by Tianjin Sci-Tech Planning Projects (14RCGFGX00846, 15ZCZDNC00130), the Natural Science Foundation of Hebei Province (F2015202239), and the Science and Technology Research Project of Hebei Province (Z2015044).
Abstract: Vision 2030 requires a new generation of people with a wide variety of abilities, talents, and skills. The adoption of augmented reality (AR) and virtual reality is one possible way to align education with Vision 2030. Immersive technologies like AR are rapidly becoming powerful and versatile enough to be adopted in education to achieve this goal, and could be beneficial tools for sustainable growth in education. We reviewed the most recent studies of augmented reality to check its appropriateness for the educational goals of Vision 2030. First, the various definitions, terminologies, and technologies of AR are described briefly. Then, the specific characteristics and benefits of AR systems are determined. The pedagogical method used when adopting an AR scheme, and the fit between the equipment and the learning experience, may be significant. Therefore, three kinds of instructional methods that stress roles, location, and tasks were evaluated, and the kind of learning offered by the distinct kinds of AR approaches is elaborated upon. The technological, pedagogical, and learning problems experienced with AR are described. Potential solutions for some of these issues and topics for subsequent research are presented in this article.
Funding: Air Force Office of Scientific Research (FA9550-14-1-0279) and Goertek Electronics.
Abstract: Augmented reality (AR) displays are attracting significant attention and effort. In this paper, we review the device configurations adopted for see-through displays, summarize the current development status, and highlight future challenges in micro-displays. A brief introduction to optical gratings is presented to help the reader understand the challenging design of grating-based waveguides for AR displays. Finally, we discuss the most recent progress in diffraction gratings and its implications.
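The design tension in grating-based waveguides follows directly from the grating equation, m·λ = d·(sin θᵢ + sin θₘ): for a fixed period d, the diffracted angle swings strongly with wavelength, and too small a period leaves no propagating order at all. A quick numeric check (sign convention as written above; the 600 nm period is an arbitrary illustrative value, not taken from the paper):

```python
import math

def diffraction_angle(wavelength_nm, period_nm, incidence_deg=0.0, order=1):
    """Solve the grating equation m*lam = d*(sin(theta_i) + sin(theta_m))
    for the diffracted angle theta_m in degrees; returns None when the
    order is evanescent (no propagating diffracted wave)."""
    s = order * wavelength_nm / period_nm - math.sin(math.radians(incidence_deg))
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Green light (532 nm) on a 600 nm-period grating at normal incidence
# diffracts into a steep first order (roughly 62 degrees):
print(diffraction_angle(532, 600))
# Shrinking the period to 400 nm makes the first order evanescent:
print(diffraction_angle(532, 400))  # → None
```

This wavelength dependence is one reason full-color waveguide couplers are hard: red, green, and blue each want a different period or a correspondingly different angular budget.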
Funding: The National Natural Science Foundation of China (51875517, 51490663 and 51821093) and the Key Research and Development Program of Zhejiang Province (2017C01045).
Abstract: Product assembly simulation is considered one of the key technologies in the design and manufacturing of complex products. Virtual assembly realizes the assembly process design, verification, and optimization of complex products in a virtual environment, which plays an active and effective role in improving the assembly quality and efficiency of complex products. In recent years, augmented reality (AR) and digital twin (DT) technology have brought new opportunities and challenges to the digital assembly of complex products owing to their characteristics of virtual-real fusion and interactive control. This paper expounds the concept and connotation of AR, presents a typical AR assembly system structure, analyzes the key technologies and applications of AR in digital assembly, and notes that DT technology is the future development trend of intelligent assembly research.
Abstract: In fields such as science and engineering, virtual environments are commonly used as replacements for practical hands-on laboratories. Sometimes these environments take the form of a remote interface to the physical laboratory apparatus, and at other times the form of a complete software implementation that simulates the laboratory apparatus. In this paper, we report on the use of a semi-immersive 3D mobile augmented reality (mAR) interface and limited simulations as a replacement for practical hands-on laboratories in science and engineering. The 3D-mAR interface implementations for three different experiments (from microelectronics, power, and communications engineering) are presented; the discovered limitations are discussed, along with the results of an evaluation by science and engineering students from two different institutions and plans for future work.