Six degrees of freedom (6DoF) input interfaces are essential for manipulating virtual objects through translation or rotation in three-dimensional (3D) space. A traditional outside-in tracking controller requires the installation of expensive hardware in advance. While inside-out tracking controllers have been proposed, they often suffer from limitations such as interaction limited to the tracking range of the sensor (e.g., a sensor on the head-mounted display (HMD)) or the need for pose value modification to function as an input interface (e.g., a sensor on the controller). This study investigates 6DoF pose estimation methods without restricting the tracking range, using a smartphone as a controller in augmented reality (AR) environments. Our approach involves proposing methods for estimating the initial pose of the controller and correcting the pose using an inside-out tracking approach. In addition, seven pose estimation algorithms were presented as candidates depending on the tracking range of the device sensor, the tracking method (e.g., marker recognition, visual-inertial odometry (VIO)), and whether modification of the initial pose is necessary. Through two experiments (discrete and continuous data), the performance of the algorithms was evaluated. The results demonstrate enhanced final pose accuracy achieved by correcting the initial pose. Furthermore, the importance of selecting the tracking algorithm based on the tracking range of the devices and the actual input value of the 3D interaction was emphasized.
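The correction idea described above, anchoring an initial controller pose and then propagating it with the relative motion reported by inside-out (VIO) tracking, can be sketched as a composition of rigid transforms. This is an illustrative reconstruction under assumed conventions (4×4 homogeneous matrices, world-from-device poses); the function names and composition order are not taken from the paper.

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def current_pose(initial_pose, vio_at_init, vio_now):
    """Propagate an initial controller pose with the relative VIO motion.

    The controller's current pose is the initial pose composed with the motion
    the VIO tracker has observed since initialization:
        T_now = T_init @ inv(T_vio_init) @ T_vio_now
    """
    return initial_pose @ np.linalg.inv(vio_at_init) @ vio_now
```

Under this convention, any error in the estimated initial pose propagates rigidly into every later pose, which is why correcting the initial estimate improves final pose accuracy.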
BACKGROUND: Computer-assisted systems have attracted increasing interest in orthopaedic surgery over recent years, as they enhance precision compared to conventional hardware. The expansion of computer assistance is evolving with the employment of augmented reality. Yet, the accuracy of augmented reality navigation systems has not been determined. AIM: To examine the accuracy of component alignment and restoration of the affected limb's mechanical axis in primary total knee arthroplasty (TKA) utilizing an augmented reality navigation system, and to assess whether such systems are conspicuously fruitful for an accomplished knee surgeon. METHODS: From May 2021 to December 2021, 30 patients, 25 women and five men, underwent a primary unilateral TKA. Revision cases were excluded. A preoperative radiographic procedure was performed to evaluate the limb's axial alignment. All patients were operated on by the same team, without a tourniquet, utilizing three distinct prostheses with the assistance of the Knee+™ augmented reality navigation system in every operation. Postoperatively, the same radiographic exam protocol was executed to evaluate the implants' position, orientation, and coronal plane alignment. We recorded measurements in three stages regarding femoral varus and flexion and tibial varus and posterior slope. First, the expected values from the augmented reality system were documented. Then we calculated the same values after each cut, and finally, the same measurements were recorded radiologically after the operations. For statistical analysis, Lin's concordance correlation coefficient was estimated, and the Wilcoxon signed-rank test was performed when needed. RESULTS: A statistically significant difference between mean expected values and radiographic measurements was observed for femoral flexion only (Z score = 2.67, P value = 0.01). Nonetheless, this difference was statistically significantly lower than 1 degree (Z score = -4.21, P value < 0.01). Regarding discrepancies between expected values and controlled measurements, a statistically significant difference between tibial varus values was detected (Z score = -2.33, P value = 0.02), which was also statistically significantly lower than 1 degree (Z score = -4.99, P value < 0.01). CONCLUSION: The results indicate satisfactory postoperative coronal alignment without outliers across all three implants utilized. Augmented reality navigation systems can bolster orthopaedic surgeons' accuracy in achieving precise axial alignment. However, further research is required to evaluate their efficacy and potential.
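The two statistics named in the abstract are both standard and easy to reproduce. The sketch below implements Lin's concordance correlation coefficient from its moment definition and runs a paired Wilcoxon signed-rank test via SciPy; the angle values are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements
    (biased moment estimators, as in Lin 1989):
        CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical example: AR-planned vs. radiographic femoral flexion (degrees).
planned = np.array([3.0, 2.5, 4.0, 3.5, 2.0, 3.0, 4.5, 2.5])
measured = np.array([3.4, 2.9, 4.3, 3.9, 2.6, 3.2, 4.9, 3.1])

ccc = lins_ccc(planned, measured)          # agreement on the 45-degree line
stat, p = wilcoxon(planned, measured)      # paired non-parametric test
```

Unlike Pearson's correlation, the CCC penalizes any constant offset between planned and measured angles, which is why it suits agreement studies like this one.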
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long duration spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
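Histogram-equalization tone mapping, the technique named in the Methods above, maps each scalar attribute value through the empirical cumulative distribution so the output occupies [0, 1] roughly uniformly, stretching apart low-contrast regions of the data. The paper's exact formulation is not given in the abstract; this is a minimal generic sketch.

```python
import numpy as np

def equalize(values, bins=256):
    """Histogram-equalization tone mapping for scalar attribute data.

    Each value is mapped through the empirical CDF of the data, so dense
    value ranges are spread out and sparse ranges compressed.
    """
    hist, edges = np.histogram(values, bins=bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                      # normalize CDF to [0, 1]
    # Assign each value to its bin, then look up the normalized CDF.
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    return cdf[idx]
```

The mapping is order-preserving (up to bin granularity), so relative comparisons between attribute values survive the tone mapping, which matters when the colors encode physical quantities.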
The impact of augmented reality (AR) technology on consumer behavior has increasingly attracted academic attention. While early research has provided valuable insights, many challenges remain. This article reviews recent studies, analyzing AR's technical features, marketing concepts, and action mechanisms from a consumer perspective. By refining existing frameworks and introducing a new model based on situation awareness theory, the paper offers a deeper exploration of AR marketing. Finally, it proposes directions for future research in this emerging field. (Funding: Guizhou University of Finance and Economics 2024 Student Self-Funded Research Project, Project No. 2024ZXSY001.)
Virtual reality (VR) and augmented reality (AR) technologies have become increasingly important instruments in art education as information technology develops rapidly, transforming the conventional approach to art education. This study investigates the present situation, benefits, difficulties, and likely development trends of VR and AR technologies in art education. By means of literature analysis and case studies, the paper presents the fundamental ideas of VR and AR technologies together with their various uses in art education, namely virtual museums, interactive art production, art history instruction, and remote art collaboration. The research examines how these technologies can improve students' immersion, raise their learning motivation, and encourage innovative ideas and multidisciplinary cooperation. Practical application concerns, including technology costs, content production obstacles, user acceptance, privacy, and ethical questions, are also discussed. Finally, the article offers ideas and suggestions to help VR and AR technologies be effectively integrated into art education through teacher training, curriculum design, technology infrastructure development, and multidisciplinary cooperation. This study offers useful advice for art teachers as well as important references for legislators and technology developers working together to further the creative growth of art education.
Overtaking is a crucial maneuver in road transportation that requires a clear view of the road ahead. However, limited visibility past the vehicle ahead can make it challenging for drivers to assess the safety of overtaking maneuvers, leading to accidents and fatalities. In this paper, we consider atrous convolution, a powerful tool for explicitly adjusting the field of view of a filter and for controlling the resolution of feature responses generated by deep convolutional neural networks, in the context of semantic image segmentation. This article explores the potential of see-through vehicles as a solution to enhance overtaking safety. See-through vehicles leverage advanced technologies such as cameras, sensors, and displays to provide drivers with a real-time view of the vehicle ahead, including areas hidden from their direct line of sight. To address the problems of safe passing and occlusion by large vehicles, we designed a see-through vehicle system using a windshield display in the rear car together with cameras in both cars. A server in the rear car segments the front car in the camera image, and the segmented region displays the video streamed from the front car. Our see-through system improves the driver's field of vision and helps them change lanes, pass a large vehicle that is blocking their view, and safely overtake other vehicles. Our network was trained and tested on the Cityscapes dataset using semantic segmentation. This transparency technique informs the driver about the concealed traffic situation that the front vehicle has obscured. We achieved a 97.1% F1-score. The article also discusses the challenges and opportunities of implementing see-through vehicles in real-world scenarios, including technical, regulatory, and user acceptance factors. (Funding: supported by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D Program (Project No. P0016038), and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-RS-2022-00156354) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).)
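Atrous (dilated) convolution, which the abstract names as the key tool for enlarging a filter's field of view, is easiest to see in one dimension: the kernel taps are spaced `rate` samples apart, widening the receptive field from k to (k - 1) * rate + 1 without adding parameters. A toy sketch of the operation, not the paper's network:

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution, 'valid' mode.

    With kernel length k and dilation `rate`, each output sample covers a
    span of (k - 1) * rate + 1 input samples using only k weights.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out
```

With rate = 1 this reduces to ordinary convolution; increasing the rate lets a segmentation network aggregate wider context at the same feature resolution, which is exactly why DeepLab-style segmenters use it.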
Augmented- and mixed-reality technologies have pioneered the realization of real-time fusion and interactive projection for laparoscopic surgeries. Indocyanine green fluorescence imaging technology has enabled anatomical, functional, and radical hepatectomy through tumor identification and localization of target hepatic segments, driving a transformative shift in the management of hepatic surgical diseases, moving away from traditional, empirical diagnostic and treatment approaches toward digital, intelligent ones. The Hepatic Surgery Group of the Surgery Branch of the Chinese Medical Association, the Digital Medicine Branch of the Chinese Medical Association, the Digital Intelligent Surgery Committee of the Chinese Society of Research Hospitals, and the Liver Cancer Committee of the Chinese Medical Doctor Association organized the relevant experts in China to formulate this consensus. This consensus provides a comprehensive outline of the principles, advantages, processes, and key considerations associated with the application of augmented-reality and mixed-reality technology combined with indocyanine green fluorescence imaging technology for hepatic segmental and subsegmental resection. The purpose is to streamline and standardize the application of these technologies.
Wireless smart sensors (WSS) process field data and inform inspectors about infrastructure health and safety. In bridge engineering, inspectors need reliable data about changes in displacements under loads to make correct decisions about repairs and replacements. Access to displacement information in the field and in real time remains a challenge: displacement data from WSS undergoes additional processing and is viewed at a different location, so inspectors do not see it in real time. If inspectors were able to see structural displacements in real time at the locations of interest, they could conduct additional observations, creating a new, information-based decision-making reality in the field. This paper develops a new, human-centered interface that provides inspectors with real-time access to actionable structural data during inspection and monitoring, enhanced by augmented reality (AR). It summarizes and evaluates the development and validation of the new human-infrastructure interface in laboratory experiments. The experiments demonstrate that the interface, which processes all calculations in the AR device, accurately estimates dynamic displacements in comparison with a laser reference. Using this new AR interface, inspectors can observe and compare displacement data, share it across space and time, and understand structural deflection more accurately through a displacement time history visualization.
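The abstract does not disclose how the interface computes dynamic displacements on the device. A common approach for wireless accelerometer data is frequency-domain double integration, dividing the acceleration spectrum by -(2*pi*f)^2 and suppressing the DC term to avoid drift; the sketch below illustrates that generic technique, not the paper's specific algorithm.

```python
import numpy as np

def accel_to_disp(accel, fs):
    """Estimate dynamic displacement from acceleration by double
    integration in the frequency domain.

    Since a(t) = x''(t), the spectra satisfy A(f) = -(2*pi*f)^2 X(f),
    so X(f) = -A(f) / (2*pi*f)^2. The f = 0 bin is zeroed, which acts
    as a high-pass filter and removes integration drift.
    """
    n = len(accel)
    A = np.fft.rfft(accel)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    X = np.zeros_like(A)
    nz = f > 0
    X[nz] = -A[nz] / (2.0 * np.pi * f[nz]) ** 2
    return np.fft.irfft(X, n=n)
```

For a stationary vibration record this recovers the oscillatory part of the displacement; quasi-static components below the first retained frequency bin are lost by construction.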
Visual inspection is commonly adopted for building operation, maintenance, and safety. The durability and defects of components or materials in buildings can be quickly assessed through visual inspection. However, implementations of visual inspection are substantially time-consuming, labor-intensive, and error-prone because useful auxiliary tools that can instantly highlight defects or damage locations in images are not available. Therefore, an advanced building inspection framework with augmented reality (AR) and real-time damage detection is developed and implemented in this study. In this framework, engineers walk around and film every corner of the building interior to generate a three-dimensional (3D) environment through ARKit. Meanwhile, a trained YOLOv5 model detects defects in real time during this process, even in a large-scale field, and marks indicating the detected defects are placed at the defect locations in this 3D environment. The defect areas can be measured with centimeter-level accuracy using the light detection and ranging (LiDAR) sensor on the device. All required damage information, including defect positions and sizes, is collected in a single pass and can be rendered in 2D and 3D views. As a result, visual inspection can be conducted efficiently, and the previously generated environment can be loaded to re-localize existing defect marks for future maintenance and change observation. The proposed framework is implemented and verified in an underground parking lot of a building to detect and quantify surface defects on concrete components. The results show that conventional building inspection is significantly improved with the aid of the proposed framework in terms of damage localization, damage quantification, and inspection efficiency.
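The centimeter-level sizing described above rests on simple pinhole-camera geometry: a defect's extent in pixels scales to a physical length via the LiDAR depth at the defect and the camera's focal length in pixels. In practice ARKit exposes this through its own scene-geometry APIs; the helper below is a hypothetical illustration of just the underlying conversion.

```python
def defect_extent_cm(extent_px, depth_m, focal_px):
    """Convert an image-space defect extent to a physical length (cm)
    with the pinhole model:
        length = extent_px * depth / focal_length
    where depth comes from the per-pixel LiDAR measurement and
    focal_px is the camera focal length expressed in pixels.
    """
    return extent_px * depth_m / focal_px * 100.0
```

For example, a 50-pixel crack seen at 2 m depth by a camera with a 1000-pixel focal length corresponds to a 10 cm physical extent; the same bounding box at half the depth would be half the size, which is why pixel counts alone cannot quantify damage.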
Objective: To evaluate the accuracy of our new three-dimensional (3D) automatic augmented reality (AAR) system, guided by artificial intelligence, in identifying the tumour's location at the level of the preserved neurovascular bundle (NVB) at the end of the extirpative phase of nerve-sparing robot-assisted radical prostatectomy. Methods: In this prospective study, we enrolled patients with prostate cancer (clinical stages cT1c-3, cN0, and cM0) with a positive index lesion at target biopsy, suspicious for capsular contact or extracapsular extension at preoperative multiparametric magnetic resonance imaging. Patients underwent robot-assisted radical prostatectomy at San Luigi Gonzaga Hospital (Orbassano, Turin, Italy) from December 2020 to December 2021. At the end of the extirpative phase, thanks to our new AI-driven AAR system, the virtual 3D prostate model allowed us to identify the tumour's location at the level of the preserved NVB and to perform a selective excisional biopsy, sparing the remaining portion of the bundle. Perioperative and postoperative data were evaluated, focusing especially on positive surgical margin (PSM) rates, potency, continence recovery, and biochemical recurrence. Results: Thirty-four patients were enrolled. In 15 (44.1%) cases, the target lesion was in contact with the prostatic capsule at multiparametric magnetic resonance imaging (Wheeler grade L2), while in 19 (55.9%) cases extracapsular extension was detected (Wheeler grade L3). 3D AAR-guided biopsies were negative in all pathological tumour stage 2 (pT2) patients, while they revealed the presence of cancer in 14 cases in the pT3 cohort (14/16; 87.5%). PSM rates were 0% and 7.1% in the pathological stages pT2 and pT3 (<3 mm, Gleason score 3), respectively. Conclusion: With the proposed 3D AAR system, it is possible to correctly identify the lesion's location on the NVB in 87.5% of pT3 patients and to perform 3D-guided, tailored nerve-sparing surgery even in locally advanced disease, without compromising oncological safety in terms of PSM rates.
Augmented reality (AR) tries to seamlessly integrate virtual content into the real world of the user. Ideally, the virtual content would behave exactly like real objects. This necessitates a correct and precise estimation of the user's viewpoint (or that of a camera) with regard to the virtual content's coordinate system. Therefore, the real-time construction of three-dimensional (3D) maps of real scenes is particularly important for augmented reality technology. In this paper, we integrate Simultaneous Localization and Mapping (SLAM) technology into augmented reality and implement a markerless augmented reality system based on the ORB-SLAM2 framework. We propose an improved method for Oriented FAST and Rotated BRIEF (ORB) feature extraction and optimized keyframe selection, as well as the use of the Progressive Sample Consensus (PROSAC) algorithm for the plane estimation required by the augmented reality implementation, thus addressing the increased system runtime caused by the loss of large amounts of texture information in images. Comparative experiments and data analysis show that our approach achieves better results. Improved variants of the PROSAC algorithm may, however, be better suited to the detection of planar feature points.
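Sample-consensus plane estimation, the role PROSAC plays above, can be illustrated with plain RANSAC: repeatedly fit a plane to three random map points and keep the hypothesis with the most inliers. PROSAC differs mainly in drawing samples in match-quality order so good hypotheses appear earlier; the consensus test is the same. A self-contained sketch on synthetic 3D points:

```python
import numpy as np

def fit_plane_ransac(pts, iters=200, thresh=0.01, rng=None):
    """Estimate a plane n.x + d = 0 from 3-D points by random sampling
    consensus. Returns ((normal, d), inlier_count)."""
    rng = np.random.default_rng(rng)
    best_inliers, best = 0, None
    for _ in range(iters):
        p = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p[0]
        dist = np.abs(pts @ n + d)  # point-to-plane distances
        inliers = int((dist < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best = inliers, (n, d)
    return best, best_inliers
```

In an AR pipeline the detected plane anchors virtual content; robustness to outlier map points is what makes sample consensus preferable to a single least-squares fit here.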
Background: With an increasing number of vehicles becoming autonomous, intelligent, and connected, attention to the future use of the car human-machine interface (HMI) in these vehicles becomes more relevant. Several studies have addressed car HMI but have been less attentive to designing and implementing interactive glazing for everyday (autonomous) driving contexts. Methods: Reflecting on the literature, we describe an engineering psychology practice and the design of six novel future user scenarios, which envision the application of a specific set of augmented reality (AR)-supported user interactions. Additionally, we conduct evaluations of specific scenarios and experiential prototypes, which reveal that these AR scenarios help the target user groups experience a new type of interaction. The overall evaluation is positive, with valuable assessment results and suggestions. Conclusions: This study can interest applied psychology educators who aspire to teach how AR can be operationalized in a human-centered design process to students with minimal pre-existing expertise or scientific knowledge in engineering psychology.
This study comprehensively reviews the literature to explore the role of computer science and internet technologies in addressing educational inequality and socio-psychological issues, with a particular focus on applications of 5G, artificial intelligence (AI), and augmented/virtual reality (AR/VR). By analyzing how these technologies are reshaping learning and their potential to ameliorate educational disparities, the study reveals the challenges present in ensuring educational equity. The research methodology includes exhaustive reviews of applications of AI and machine learning, the integration of the Internet of Things and wearable technologies, big data analytics and data mining, and the effects of online platforms and social media on socio-psychological issues. In addition, the study discusses applications of these technologies in educational inequality and socio-psychological problem-solving through the lens of 5G, AI, and AR/VR, while also delineating the challenges faced by these emerging technologies and future outlooks. The study finds that while computer science and internet technologies hold promise to bridge academic divides and address socio-psychological problems, the complexity of technology access and infrastructure, the lack of digital literacy and skills, and critical ethical and privacy issues can impact widespread adoption and efficacy. Overall, the study provides a novel perspective for understanding the potential of computer science and internet technologies in ameliorating educational inequality and socio-psychological issues, while pointing to new directions for future research. It also emphasizes the importance of cooperation among educational institutions, technology vendors, policymakers, and researchers, and of establishing comprehensive ethical guidelines and regulations to ensure the responsible use of these technologies.
Background: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon visualize intrahepatic structures and therefore operate precisely and improve clinical outcomes. Data Sources: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search for publications in the PubMed database. The primary sources were peer-reviewed journal articles up to December 2016. Additional articles were identified by a manual search of the references in key articles. Results: In general, AR technology mainly comprises 3D reconstruction, display, registration, and tracking techniques, and has recently been adopted gradually for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors limiting the application of AR technology. Conclusions: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improved biomechanical liver modeling, and enhanced image data processing and tracking technologies to increase the accuracy of current AR methods.
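The registration step the review identifies as a core AR technique reduces, in its simplest point-based form, to least-squares rigid alignment of corresponding landmarks on the virtual model and the intraoperative view, solvable in closed form by the Kabsch/SVD method. This is a minimal sketch of that textbook building block; clinical systems add deformation modeling and continuous tracking on top of it.

```python
import numpy as np

def rigid_register(src, dst):
    """Find rotation R and translation t minimizing ||(src @ R.T + t) - dst||
    in the least-squares sense (Kabsch algorithm), e.g., aligning virtual-model
    landmarks (src) to intraoperative landmarks (dst)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so det(R) = +1.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t
```

The closed-form solution is exact for rigid motion; the residual after alignment is then a direct measure of the registration error the review cites as a limiting factor.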
Funding: Financially supported by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D Program (Project No. P0016038); supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-RS-2022-00156354) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: Overtaking is a crucial maneuver in road transportation that requires a clear view of the road ahead. However, limited visibility past the vehicle ahead can make it difficult for drivers to judge whether an overtaking maneuver is safe, leading to accidents and fatalities. In this paper, we consider atrous convolution, a powerful tool for explicitly adjusting the field of view of a filter and for controlling the resolution of feature responses generated by deep convolutional neural networks, in the context of semantic image segmentation. This article explores see-through vehicles as a solution to enhance overtaking safety. See-through vehicles leverage technologies such as cameras, sensors, and displays to provide drivers with a real-time view of the vehicle ahead, including areas hidden from their direct line of sight. To address the problems of safe passing and occlusion by large vehicles, we designed a see-through vehicle system that uses a windshield display in the rear car together with cameras in both cars. A server in the rear car segments the front car in the camera image, and the segmented region displays the video feed from the front car's camera. Our see-through system widens the driver's field of vision and helps the driver change lanes, pass a large vehicle that is blocking the view, and safely overtake other vehicles. Our network was trained and tested on the Cityscapes dataset using semantic segmentation, achieving an F1-score of 97.1%. This transparent technique informs the driver about the concealed traffic situation that the front vehicle obscures. The article also discusses the challenges and opportunities of deploying see-through vehicles in real-world scenarios, including technical, regulatory, and user-acceptance factors.
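The atrous (dilated) convolution mentioned in this abstract enlarges a filter's receptive field by spacing its taps apart, without adding weights or reducing resolution. A minimal one-dimensional sketch in plain Python, for illustration only (the paper itself uses 2D atrous convolutions inside a deep segmentation network):

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1D atrous (dilated) convolution: kernel taps are spaced
    `dilation` samples apart, widening the receptive field
    without adding weights. Valid (no-padding) output."""
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    out = []
    for i in range(len(signal) - span):
        acc = 0.0
        for k, w in enumerate(kernel):
            acc += w * signal[i + k * dilation]
        out.append(acc)
    return out

# A 3-tap kernel with dilation 2 covers 5 input samples per output.
x = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(x, [1, 0, -1], dilation=2))  # computes x[i] - x[i+4]
```

With dilation 1 this reduces to an ordinary convolution; increasing the dilation rate grows the field of view exponentially when layers are stacked, which is what makes it useful for dense prediction tasks such as segmenting the occluding vehicle.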
Funding: National Key Research and Development Program (2016YFC0106500800); National Major Scientific Instruments and Equipment Development Project of the National Natural Science Foundation of China (81627805); National Natural Science Foundation of China-Guangdong Joint Fund Key Program (U1401254); National Natural Science Foundation of China Mathematics Tianyuan Foundation (12026602); Guangdong Provincial Natural Science Foundation Team Project (6200171); Guangdong Provincial Health Appropriate Technology Promotion Project (20230319214525105, 20230322152307666).
Abstract: Augmented- and mixed-reality technologies have enabled real-time fusion and interactive projection for laparoscopic surgery. Indocyanine green fluorescence imaging has enabled anatomical, functional, and radical hepatectomy through tumor identification and localization of target hepatic segments, driving a transformative shift in the management of hepatic surgical diseases away from traditional, empirical diagnosis and treatment toward digital, intelligent approaches. The Hepatic Surgery Group of the Surgery Branch of the Chinese Medical Association, the Digital Medicine Branch of the Chinese Medical Association, the Digital Intelligent Surgery Committee of the Chinese Society of Research Hospitals, and the Liver Cancer Committee of the Chinese Medical Doctor Association organized the relevant experts in China to formulate this consensus. The consensus provides a comprehensive outline of the principles, advantages, processes, and key considerations in applying augmented-reality and mixed-reality technology combined with indocyanine green fluorescence imaging to hepatic segmental and subsegmental resection. Its purpose is to streamline and standardize the application of these technologies.
Funding: Air Force Research Laboratory (AFRL, Grant No. FA9453-18-2-0022); the New Mexico Consortium (NMC, Grant No. 2RNA6); the US Department of Transportation Center: Transportation Consortium of South-Central States (TRANSET), Project 19STUNM02 (Grant No. 8-18-060ST).
Abstract: Wireless smart sensors (WSS) process field data and inform inspectors about infrastructure health and safety. In bridge engineering, inspectors need reliable data about changes in displacements under loads to make correct decisions about repairs and replacements. Access to displacement information in the field and in real time remains a challenge: displacement data from WSS undergoes additional processing and is viewed at a different location, so inspectors do not see it in real time. If inspectors could see structural displacements in real time at the locations of interest, they could conduct additional observations, creating a new, information-based decision-making reality in the field. This paper develops a new, human-centered interface, enhanced by augmented reality (AR), that gives inspectors real-time access to actionable structural data during inspection and monitoring. It summarizes and evaluates the development and validation of the new human-infrastructure interface in laboratory experiments. The experiments demonstrate that the interface, which performs all calculations on the AR device, estimates dynamic displacements accurately in comparison with a laser reference. Using this new AR interface tool, inspectors can observe and compare displacement data, share it across space and time, visualize displacements as time histories, and understand structural deflection more accurately.
Abstract: Visual inspection is commonly adopted for building operation, maintenance, and safety. The durability and defects of components or materials in buildings can be quickly assessed through visual inspection. However, visual inspection is substantially time-consuming, labor-intensive, and error-prone because auxiliary tools that can instantly highlight defect or damage locations in images are not available. Therefore, an advanced building-inspection framework combining augmented reality (AR) with real-time damage detection is developed and implemented in this study. In this framework, engineers walk around and film every corner of the building interior to generate a three-dimensional (3D) environment through ARKit. Meanwhile, a trained YOLOv5 model detects defects in real time during this process, even in a large-scale field, and the detected defect locations are marked in the 3D environment. Defect areas can be measured with centimeter-level accuracy using the on-device light detection and ranging (LiDAR) sensor. All required damage information, including defect positions and sizes, is collected in a single pass and can be rendered in 2D and 3D views. The visual inspection can thus be conducted efficiently, and the previously generated environment can be reloaded to re-localize existing defect marks for future maintenance and change observation. The proposed framework is implemented and verified in an underground parking lot to detect and quantify surface defects on concrete components. The results show that conventional building inspection is significantly improved by the proposed framework in terms of damage localization, damage quantification, and inspection efficiency.
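Turning a detected defect's pixel footprint into a physical area requires a depth reading such as the LiDAR measurement mentioned above. The following sketch shows the basic geometry under a pinhole-camera assumption with a roughly fronto-parallel surface; the function name, the field-of-view value, and the numbers are illustrative assumptions, not the paper's actual pipeline:

```python
import math

def pixel_to_metric_area(pixel_count, depth_m, h_fov_deg, image_width_px):
    """Approximate physical area of an image region from its pixel count,
    assuming a pinhole camera viewing a roughly fronto-parallel surface
    at distance `depth_m` (e.g. from an on-device LiDAR reading)."""
    # Width of one pixel projected onto the surface, in metres.
    metres_per_px = (2 * depth_m * math.tan(math.radians(h_fov_deg) / 2)
                     / image_width_px)
    return pixel_count * metres_per_px ** 2  # square metres

# A 5000-pixel defect patch seen at 1.5 m with a 60-degree lens, 1920 px wide:
area_m2 = pixel_to_metric_area(5000, 1.5, 60.0, 1920)
print(f"{area_m2 * 1e4:.1f} cm^2")
```

Note that the projected pixel size scales linearly with depth, so the estimated area scales quadratically; this is why an accurate per-defect depth measurement is what makes centimeter-level quantification feasible.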
Abstract: Objective: To evaluate the accuracy of our new three-dimensional (3D) automatic augmented reality (AAR) system, guided by artificial intelligence, in identifying the tumour's location at the level of the preserved neurovascular bundle (NVB) at the end of the extirpative phase of nerve-sparing robot-assisted radical prostatectomy. Methods: In this prospective study, we enrolled patients with prostate cancer (clinical stages cT1c-3, cN0, and cM0) with a positive index lesion at target biopsy, suspicious for capsular contact or extracapsular extension at preoperative multiparametric magnetic resonance imaging. Patients underwent robot-assisted radical prostatectomy at San Luigi Gonzaga Hospital (Orbassano, Turin, Italy) from December 2020 to December 2021. At the end of the extirpative phase, our new artificial-intelligence-driven AAR system used the virtual 3D prostate model to identify the tumour's location at the level of the preserved NVB and to perform a selective excisional biopsy, sparing the remaining portion of the bundle. Perioperative and postoperative data were evaluated, focusing on positive surgical margin (PSM) rates, potency, continence recovery, and biochemical recurrence. Results: Thirty-four patients were enrolled. In 15 (44.1%) cases, the target lesion was in contact with the prostatic capsule at multiparametric magnetic resonance imaging (Wheeler grade L2), while in 19 (55.9%) cases extracapsular extension was detected (Wheeler grade L3). 3D AAR-guided biopsies were negative in all pathological tumour stage 2 (pT2) patients, while they revealed the presence of cancer in 14 cases in the pT3 cohort (14/16; 87.5%). PSM rates were 0% and 7.1% in pathological stages pT2 and pT3 (<3 mm, Gleason score 3), respectively. Conclusion: With the proposed 3D AAR system, it is possible to correctly identify the lesion's location on the NVB in 87.5% of pT3 patients and to perform 3D-guided tailored nerve-sparing surgery even in locally advanced disease, without compromising oncological safety in terms of PSM rates.
Funding: Supported by the Hainan Provincial Natural Science Foundation of China (Project No. 621QN269) and the Sanya Science and Information Bureau Foundation (Project No. 2021GXYL251).
Abstract: Augmented reality (AR) tries to seamlessly integrate virtual content into the user's real world. Ideally, the virtual content would behave exactly like real objects. This requires a correct and precise estimate of the user's (or camera's) viewpoint with respect to the virtual content's coordinate system, so real-time construction of three-dimensional (3D) maps of real scenes is particularly important for augmented reality. In this paper, we integrate Simultaneous Localization and Mapping (SLAM) technology into augmented reality and implement a markerless AR system based on the ORB-SLAM2 framework. We propose an improved method for Oriented FAST and Rotated BRIEF (ORB) feature extraction and optimized keyframe selection, and we use the Progressive Sample Consensus (PROSAC) algorithm for the plane estimation required by the AR implementation, thereby addressing the increased system runtime caused by the loss of large amounts of texture information in images. Comparative experiments and data analysis show improved results. Further refinements of the PROSAC algorithm better suited to detecting planar feature points remain a direction for future work.
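The core idea of PROSAC, as used above for plane estimation, is to sort candidate points by a quality score and draw hypotheses from a progressively growing top-ranked subset, so that good data is tried first (plain RANSAC samples uniformly from the whole set). A minimal sketch in plain Python, assuming simple point quality scores and a point-to-plane distance threshold; this is an illustration of the principle, not the paper's implementation:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane (unit normal n, offset d) through three points: n.x + d = 0."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:  # degenerate (collinear) sample
        return None
    n = tuple(c / norm for c in n)
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def prosac_plane(points, scores, iters=200, tol=0.02, seed=0):
    """PROSAC-style plane estimation: points are ranked by quality score
    and minimal samples are drawn from a progressively growing pool of
    the top-ranked points, so promising hypotheses are tested first."""
    rng = random.Random(seed)
    order = sorted(range(len(points)), key=lambda i: -scores[i])
    best, best_inliers = None, []
    pool = 3
    for it in range(iters):
        pool = min(len(points), max(pool, 3 + it // 10))  # grow the pool
        i, j, k = rng.sample(order[:pool], 3)
        model = fit_plane(points[i], points[j], points[k])
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[a] * p[a] for a in range(3)) + d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers
```

Because early iterations sample only the highest-quality points, a good plane hypothesis is usually found in far fewer trials than uniform RANSAC needs, which is the runtime benefit the abstract alludes to.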
Funding: Supported by the "Automotive Glazing Application in Intelligent Cockpit Human-Machine Interface" project (SKHX2021049), a collaboration between Saint-Gobain Research and Beijing Normal University.
Abstract: Background: With an increasing number of vehicles becoming autonomous, intelligent, and connected, the future use of the car human-machine interface (HMI) in these vehicles deserves greater attention. Several studies have addressed car HMI but have paid less attention to designing and implementing interactive glazing for everyday (autonomous) driving contexts. Methods: Reflecting on the literature, we describe an engineering-psychology practice and the design of six novel future user scenarios, which envision the application of a specific set of augmented reality (AR)-supported user interactions. Additionally, we conduct evaluations on specific scenarios and experiential prototypes, which reveal that these AR scenarios help the target user groups experience a new type of interaction. The overall evaluation is positive, with valuable assessment results and suggestions. Conclusions: This study may interest applied-psychology educators who aspire to teach how AR can be operationalized in a human-centered design process to students with minimal pre-existing expertise or scientific knowledge in engineering psychology.
Abstract: This study comprehensively reviews the literature to deeply explore the role of computer science and internet technologies in addressing educational inequality and socio-psychological issues, with a particular focus on applications of 5G, artificial intelligence (AI), and augmented/virtual reality (AR/VR). By analyzing how these technologies are reshaping learning and their potential to ameliorate educational disparities, the study reveals challenges present in ensuring educational equity. The research methodology includes exhaustive reviews of applications of AI and machine learning, the integration of the Internet of Things and wearable technologies, big data analytics and data mining, and the effects of online platforms and social media on socio-psychological issues. In addition, the study discusses applications of these technologies in educational inequality and socio-psychological problem-solving through the lens of 5G, AI, and AR/VR, while also delineating the challenges faced by these emerging technologies and future outlooks. The study finds that while computer science and internet technologies hold promise to bridge academic divides and address socio-psychological problems, the complexity of technology access and infrastructure, lack of digital literacy and skills, and critical ethical and privacy issues can impact widespread adoption and efficacy. Overall, the study provides a novel perspective for understanding the potential of computer science and internet technologies in ameliorating educational inequality and socio-psychological issues, while pointing to new directions for future research. It also emphasizes the importance of cooperation among educational institutions, technology vendors, policymakers, and researchers, and of establishing comprehensive ethical guidelines and regulations to ensure the responsible use of these technologies.
Funding: Supported by grants from the Mission Plan Program of the Beijing Municipal Administration of Hospitals (SML20152201); Beijing Municipal Administration of Hospitals Clinical Medicine Development Special Funding (ZYLX201712); the National Natural Science Foundation of China (81427803); and the Beijing Tsinghua Changgung Hospital Fund (12015C1039).
Abstract: Background: Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon to visualize intrahepatic structures and therefore to operate precisely and to improve clinical outcomes. Data Sources: The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search for publications in the PubMed database. The primary source of literature was peer-reviewed journals up to December 2016. Additional articles were identified by manual search of the references found in the key articles. Results: In general, AR technology comprises 3D reconstruction, display, registration, and tracking techniques, and has recently been adopted gradually for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors that limit the application of AR technology. Conclusions: With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.