Abstract: Six degrees of freedom (6DoF) input interfaces are essential for manipulating virtual objects through translation or rotation in three-dimensional (3D) space. A traditional outside-in tracking controller requires the installation of expensive hardware in advance. While inside-out tracking controllers have been proposed, they often suffer from limitations such as interaction confined to the tracking range of the sensor (e.g., a sensor on the head-mounted display (HMD)) or the need to modify pose values before they can serve as input (e.g., a sensor on the controller). This study investigates 6DoF pose estimation methods that do not restrict the tracking range, using a smartphone as a controller in augmented reality (AR) environments. We propose methods for estimating the initial pose of the controller and correcting that pose with an inside-out tracking approach. In addition, seven pose estimation algorithms are presented as candidates, differing in the tracking range of the device sensor, the tracking method (e.g., marker recognition, visual-inertial odometry (VIO)), and whether modification of the initial pose is necessary. The performance of the algorithms was evaluated in two experiments (discrete and continuous data). The results demonstrate that correcting the initial pose improves final pose accuracy. They also emphasize the importance of selecting the tracking algorithm according to the tracking range of the devices and the actual input values of the 3D interaction.
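The core bookkeeping in such a pipeline is pose composition: an initial controller pose (e.g., obtained once from marker recognition) expressed as a 4×4 homogeneous transform is propagated with relative VIO updates. The following is a minimal illustrative sketch, assuming hypothetical inputs (`T_world_phone_init`, per-frame `vio_deltas`); it is not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): propagate an initial
# marker-based controller pose with relative VIO updates. All names here
# (T_world_phone_init, vio_deltas) are hypothetical placeholders.
import numpy as np

def track_controller(T_world_phone_init, vio_deltas):
    """Chain an initial 6DoF pose (4x4 homogeneous matrix) with successive
    frame-to-frame VIO transforms to get the controller pose in the world frame."""
    poses = [T_world_phone_init]
    for T_prev_curr in vio_deltas:
        poses.append(poses[-1] @ T_prev_curr)   # T_world_curr = T_world_prev @ T_prev_curr
    return poses

# Example: identity initial pose, two small VIO steps translating along x.
step = np.eye(4)
step[0, 3] = 0.01                               # 1 cm per frame
poses = track_controller(np.eye(4), [step, step])
print(poses[-1][:3, 3])                         # -> [0.02 0.   0.  ]
```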
Abstract: BACKGROUND Computer-assisted systems have attracted increasing interest in orthopaedic surgery in recent years, as they enhance precision compared to conventional hardware. Computer assistance is now expanding to include augmented reality, yet the accuracy of augmented reality navigation systems has not been determined. AIM To examine the accuracy of component alignment and restoration of the affected limb's mechanical axis in primary total knee arthroplasty (TKA) using an augmented reality navigation system, and to assess whether such systems offer tangible benefit to an accomplished knee surgeon. METHODS From May 2021 to December 2021, 30 patients, 25 women and five men, underwent a primary unilateral TKA. Revision cases were excluded. A preoperative radiographic procedure was performed to evaluate the limb's axial alignment. All patients were operated on by the same team, without a tourniquet, using three distinct prostheses with the assistance of the Knee+™ augmented reality navigation system in every operation. Postoperatively, the same radiographic protocol was executed to evaluate the implants' position, orientation and coronal plane alignment. Femoral varus and flexion and tibial varus and posterior slope were measured in three stages: the expected values from the augmented reality system were documented first, the same values were then calculated after each cut, and finally the same measurements were recorded radiologically after the operations. For statistical analysis, Lin's concordance correlation coefficient was estimated, and the Wilcoxon signed-rank test was performed where needed. RESULTS A statistically significant difference between mean expected values and radiographic measurements was observed for femoral flexion only (Z score = 2.67, P value = 0.01). Nonetheless, this difference was statistically significantly lower than 1 degree (Z score = -4.21, P value < 0.01). Regarding discrepancies between expected values and controlled measurements, a statistically significant difference in tibial varus values was detected (Z score = -2.33, P value = 0.02), which was also statistically significantly lower than 1 degree (Z score = -4.99, P value < 0.01). CONCLUSION The results indicate satisfactory postoperative coronal alignment without outliers across all three implants used. Augmented reality navigation systems can bolster orthopaedic surgeons' accuracy in achieving precise axial alignment, but further research is required to evaluate their efficacy and potential.
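For readers unfamiliar with the agreement statistics named here, the sketch below shows how paired planned-versus-measured angles could be compared with Lin's concordance correlation coefficient and a Wilcoxon signed-rank test. The numbers are dummy data, not the study's measurements.

```python
# Illustrative sketch of the reported statistical comparison (Lin's CCC plus a
# Wilcoxon signed-rank test) on paired values. Data below are invented.
import numpy as np
from scipy.stats import wilcoxon

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()          # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

expected = [3.0, 2.5, 4.0, 3.5, 2.0, 3.0]       # e.g., planned femoral flexion (deg)
measured = [3.2, 2.4, 4.3, 3.6, 2.1, 2.8]       # e.g., postoperative radiographs (deg)

print("CCC:", round(lins_ccc(expected, measured), 3))
stat, p = wilcoxon(expected, measured)          # paired, non-parametric
print("Wilcoxon statistic:", stat, "p-value:", round(p, 3))
```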
Abstract: Background A task assigned to space exploration satellites is to detect the physical environment within a certain region of space. However, space detection data are complex and abstract, and they do not lend themselves to researchers' visual perception of how events in the space environment evolve and interact. Methods A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and correspondences between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the resulting attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. Real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure of and changes in the spatial environment using augmented reality, and help users intuitively discover spatial environmental events and evolutionary rules.
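Histogram-equalization tone mapping, the general technique named in the methods, can be sketched as mapping a skewed scalar attribute field to [0, 1] through its own empirical CDF so values spread evenly over a color scale. This is a generic illustration, not the authors' implementation; the lognormal "field" is dummy data.

```python
# Generic sketch of tone mapping by statistical histogram equalization.
import numpy as np

def equalize(values, bins=256):
    """Return values remapped to [0, 1] via the CDF of their own histogram."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                              # last bin maps to exactly 1.0
    return np.interp(values, edges[1:], cdf)    # piecewise-linear CDF lookup

field = np.random.lognormal(mean=0.0, sigma=2.0, size=10_000)  # skewed dummy data
mapped = equalize(field)                        # roughly uniform in [0, 1] for coloring
```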
Funding: Guizhou University of Finance and Economics 2024 Student Self-Funded Research Project (Project No. 2024ZXSY001).
Abstract: The impact of augmented reality (AR) technology on consumer behavior has increasingly attracted academic attention. While early research has provided valuable insights, many challenges remain. This article reviews recent studies, analyzing AR's technical features, marketing concepts, and mechanisms of action from a consumer perspective. By refining existing frameworks and introducing a new model based on situation awareness theory, the paper offers a deeper exploration of AR marketing. Finally, it proposes directions for future research in this emerging field.
Funding: National Key Research and Development Program (2016YFC0106500800); National Major Scientific Instruments and Equipments Development Project of the National Natural Science Foundation of China (81627805); National Natural Science Foundation of China-Guangdong Joint Fund Key Program (U1401254); National Natural Science Foundation of China Mathematics Tianyuan Foundation (12026602); Guangdong Provincial Natural Science Foundation Team Project (6200171); Guangdong Provincial Health Appropriate Technology Promotion Project (20230319214525105, 20230322152307666).
Abstract: Augmented- and mixed-reality technologies have pioneered real-time fusion and interactive projection for laparoscopic surgery. Indocyanine green fluorescence imaging has enabled anatomical, functional, and radical hepatectomy through tumor identification and localization of target hepatic segments, driving a transformative shift in the management of hepatic surgical diseases away from traditional, empirical diagnostic and treatment approaches toward digital, intelligent ones. The Hepatic Surgery Group of the Surgery Branch of the Chinese Medical Association, the Digital Medicine Branch of the Chinese Medical Association, the Digital Intelligent Surgery Committee of the Chinese Society of Research Hospitals, and the Liver Cancer Committee of the Chinese Medical Doctor Association organized the relevant experts in China to formulate this consensus. The consensus provides a comprehensive outline of the principles, advantages, processes, and key considerations associated with the application of augmented reality and mixed-reality technology combined with indocyanine green fluorescence imaging for hepatic segmental and subsegmental resection. Its purpose is to streamline and standardize the application of these technologies.
Funding: Air Force Research Laboratory (AFRL, Grant No. FA9453-18-2-0022); the New Mexico Consortium (NMC, Grant No. 2RNA6); the US Department of Transportation Center: Transportation Consortium of South-Central States (TRANSET), Project 19STUNM02 (Grant No. 8-18-060ST).
Abstract: Wireless smart sensors (WSS) process field data and inform inspectors about infrastructure health and safety. In bridge engineering, inspectors need reliable data about changes in displacements under loads to make correct decisions about repairs and replacements. Access to displacement information in the field and in real time remains a challenge: displacement data from WSS undergo additional processing and are viewed at a different location, so inspectors do not see them in real time. If inspectors could see structural displacements in real time at the locations of interest, they could conduct additional observations, creating a new, information-based decision-making reality in the field. This paper develops a new, human-centered interface that gives inspectors real-time access to actionable structural data during inspection and monitoring, enhanced by augmented reality (AR). It summarizes and evaluates the development and validation of the new human-infrastructure interface in laboratory experiments. The experiments demonstrate that the interface, which performs all calculations on the AR device, estimates dynamic displacements accurately in comparison with a laser reference. Using this new AR interface tool, inspectors can observe and compare displacement data, share it across space and time, visualize displacements as time histories, and understand structural deflection more accurately through the displacement time-history visualization.
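As a hedged illustration of the kind of on-device computation involved, one standard way to estimate dynamic displacement from a wireless accelerometer record is high-pass filtering followed by double numerical integration. This is a generic signal-processing recipe under that assumption, not the authors' algorithm; the 2 Hz, 5 mm sinusoid is synthetic test data.

```python
# Generic sketch: dynamic displacement from acceleration via filtered double integration.
import numpy as np
from scipy.signal import butter, detrend, filtfilt

def accel_to_displacement(acc, fs, fc=0.5):
    """Estimate displacement from acceleration sampled at fs Hz (high-pass cutoff fc Hz)."""
    b, a = butter(4, fc / (fs / 2), btype="highpass")   # suppress integration drift
    acc_f = filtfilt(b, a, detrend(acc))
    vel = filtfilt(b, a, detrend(np.cumsum(acc_f) / fs))
    return filtfilt(b, a, detrend(np.cumsum(vel) / fs))

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
acc = -(2 * np.pi * 2.0) ** 2 * 0.005 * np.sin(2 * np.pi * 2.0 * t)  # 5 mm motion at 2 Hz
disp = accel_to_displacement(acc, fs)            # recovers ~±5 mm after filter transients
```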
Abstract: Visual inspection is commonly adopted for building operation, maintenance, and safety. The durability and defects of components or materials in buildings can be quickly assessed through visual inspection. However, visual inspection is substantially time-consuming, labor-intensive, and error-prone because no auxiliary tools are available that can instantly highlight defect or damage locations in images. Therefore, an advanced building inspection framework with augmented reality (AR) and real-time damage detection is developed and implemented in this study. In this framework, engineers walk around and film every corner of the building interior to generate a three-dimensional (3D) environment through ARKit. Meanwhile, a trained YOLOv5 model detects defects in real time during this process, even in a large-scale field, and the locations of the detected defects are marked in the 3D environment. Defect areas can be measured with centimeter-level accuracy using the on-device light detection and ranging (LiDAR) sensor. All required damage information, including defect positions and sizes, is collected at once and can be rendered in 2D and 3D views. As a result, visual inspection can be conducted efficiently, and a previously generated environment can be reloaded to re-localize existing defect marks for future maintenance and change observation. The proposed framework is implemented and verified in an underground parking lot of a building to detect and quantify surface defects on concrete components. The results show that conventional building inspection is significantly improved by the proposed framework in terms of damage localization, damage quantification, and inspection efficiency.
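The real-time detection step can be sketched with the standard YOLOv5 inference API: load a custom model through torch.hub and run it on a captured frame. The weights file `defects.pt` and the returned label names are placeholders, not the authors' released model.

```python
# Illustrative sketch of the YOLOv5 defect-detection step (placeholder weights).
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="defects.pt")
model.conf = 0.4                                  # confidence threshold

def detect_defects(frame):
    """Return a list of (label, confidence, (x1, y1, x2, y2)) for one image."""
    results = model(frame)                        # accepts a path, ndarray, or PIL image
    detections = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        detections.append((model.names[int(cls)], conf, tuple(xyxy)))
    return detections
```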
Abstract: Objective: To evaluate the accuracy of our new three-dimensional (3D) automatic augmented reality (AAR) system, guided by artificial intelligence, in identifying the tumour's location at the level of the preserved neurovascular bundle (NVB) at the end of the extirpative phase of nerve-sparing robot-assisted radical prostatectomy. Methods: In this prospective study, we enrolled patients with prostate cancer (clinical stages cT1c–3, cN0, and cM0) with a positive index lesion at target biopsy that was suspicious for capsular contact or extracapsular extension on preoperative multiparametric magnetic resonance imaging. Patients underwent robot-assisted radical prostatectomy at San Luigi Gonzaga Hospital (Orbassano, Turin, Italy) from December 2020 to December 2021. At the end of the extirpative phase, the virtual prostate 3D model provided by our new artificial-intelligence-driven AAR system made it possible to identify the tumour's location at the level of the preserved NVB and to perform a selective excisional biopsy, sparing the remaining portion of the bundle. Perioperative and postoperative data were evaluated, focusing especially on positive surgical margin (PSM) rates, potency, continence recovery, and biochemical recurrence. Results: Thirty-four patients were enrolled. In 15 (44.1%) cases, the target lesion was in contact with the prostatic capsule on multiparametric magnetic resonance imaging (Wheeler grade L2), while in 19 (55.9%) cases extracapsular extension was detected (Wheeler grade L3). 3D AAR guided biopsies were negative in all pathological tumour stage 2 (pT2) patients, while they revealed the presence of cancer in 14 cases in the pT3 cohort (14/16; 87.5%). PSM rates were 0% and 7.1% in pathological stages pT2 and pT3 (<3 mm, Gleason score 3), respectively. Conclusion: With the proposed 3D AAR system, the lesion's location on the NVB can be correctly identified in 87.5% of pT3 patients, and a 3D-guided, tailored nerve-sparing procedure can be performed even in locally advanced disease without compromising oncological safety in terms of PSM rates.
Funding: Supported by the Hainan Provincial Natural Science Foundation of China (Project No. 621QN269) and the Sanya Science and Information Bureau Foundation (Project No. 2021GXYL251).
Abstract: Augmented reality (AR) tries to seamlessly integrate virtual content into the user's real world. Ideally, the virtual content would behave exactly like real objects, which necessitates a correct and precise estimation of the user's (or a camera's) viewpoint with respect to the virtual content's coordinate system. Real-time construction of three-dimensional (3D) maps of real scenes is therefore particularly important for augmented reality technology. In this paper, we integrate Simultaneous Localization and Mapping (SLAM) technology into augmented reality and implement a markerless augmented reality system based on the ORB-SLAM2 framework. We propose an improved method for Oriented FAST and Rotated BRIEF (ORB) feature extraction and optimized keyframe selection, and we use the Progressive Sample Consensus (PROSAC) algorithm for planar estimation in the augmented reality implementation, thereby addressing the increase in system runtime caused by the loss of large amounts of texture information in images. Comparative experiments and data analysis show that our approach yields better results. Nevertheless, improved variants of the PROSAC algorithm exist that are better suited to detecting planar feature points.
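The two building blocks named here can be sketched as follows: ORB feature extraction with OpenCV, and robust plane estimation over reconstructed 3D map points. A plain RANSAC loop stands in for PROSAC in this sketch, since PROSAC changes only how candidate samples are drawn (quality-ordered) rather than the plane model itself; this is not the paper's implementation.

```python
# Sketch: ORB features (OpenCV) plus a robust plane fit over Nx3 map points.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)

def extract_orb(gray):
    """Detect ORB keypoints and compute their binary descriptors."""
    return orb.detectAndCompute(gray, None)

def fit_plane(points, iters=200, tol=0.01, seed=0):
    """Fit a plane n·p + d = 0 to an Nx3 point cloud; return the (n, d) with most inliers."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_inliers = None, None, 0
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                              # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = int((np.abs(points @ n + d) < tol).sum())
        if inliers > best_inliers:
            best_n, best_d, best_inliers = n, d, inliers
    return best_n, best_d
```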