Abstract: This study comprehensively reviews the literature to explore the role of computer science and internet technologies in addressing educational inequality and socio-psychological issues, with a particular focus on applications of 5G, artificial intelligence (AI), and augmented/virtual reality (AR/VR). By analyzing how these technologies are reshaping learning and their potential to ameliorate educational disparities, the study reveals the challenges involved in ensuring educational equity. The research methodology includes exhaustive reviews of applications of AI and machine learning, the integration of the Internet of Things and wearable technologies, big data analytics and data mining, and the effects of online platforms and social media on socio-psychological issues. In addition, the study discusses applications of these technologies to educational inequality and socio-psychological problem-solving through the lens of 5G, AI, and AR/VR, while also delineating the challenges faced by these emerging technologies and their future outlook. The study finds that while computer science and internet technologies hold promise for bridging academic divides and addressing socio-psychological problems, the complexity of technology access and infrastructure, a lack of digital literacy and skills, and critical ethical and privacy issues can limit widespread adoption and efficacy. Overall, the study provides a novel perspective for understanding the potential of computer science and internet technologies to ameliorate educational inequality and socio-psychological issues, while pointing to new directions for future research. It also emphasizes the importance of cooperation among educational institutions, technology vendors, policymakers, and researchers, and of establishing comprehensive ethical guidelines and regulations to ensure the responsible use of these technologies.
Funding: This work is supported by the National Natural Science Foundation of China (Grant 11772190), which is gratefully acknowledged.
Abstract: To effectively simulate fracture propagation in shale, the bedding plane (BP) effect is incorporated into the augmented virtual internal bond (AVIB) constitutive relation through a BP tensor. Comparing the BP-embedded AVIB with the theory of transverse isotropy, it is found that the approach can represent the anisotropic properties induced by parallel BPs. A simulation example shows that the method can capture the stiffness anisotropy of shale and the effect of BPs on the direction of hydraulic fracture propagation. Compared with the BP-embedded virtual internal bond (VIB), this method can also account for varying Poisson's ratios. It provides a feasible approach to simulating fracture propagation in shale.
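The abstract does not spell out how the BP tensor enters the AVIB bond-level relation. As a generic, textbook-level illustration (not the authors' formulation) of how a single bedding-plane normal n, through the structure tensor M = n⊗n, turns an isotropic elastic energy into a transversely isotropic one, the linear-elastic strain energy can be written as

\[
\mathbf{M}=\mathbf{n}\otimes\mathbf{n},\qquad
W(\boldsymbol{\varepsilon})=\tfrac{\lambda}{2}\,(\operatorname{tr}\boldsymbol{\varepsilon})^{2}
+\mu_{T}\operatorname{tr}(\boldsymbol{\varepsilon}^{2})
+\alpha\,(\operatorname{tr}\boldsymbol{\varepsilon})\,(\mathbf{n}\cdot\boldsymbol{\varepsilon}\mathbf{n})
+2(\mu_{L}-\mu_{T})\,\mathbf{n}\cdot\boldsymbol{\varepsilon}^{2}\mathbf{n}
+\tfrac{\beta}{2}\,(\mathbf{n}\cdot\boldsymbol{\varepsilon}\mathbf{n})^{2},
\]

where λ, μ_T, μ_L, α, and β are the five independent constants of a transversely isotropic solid. Setting α = β = 0 and μ_L = μ_T recovers isotropy, which mirrors how embedding the BP tensor introduces stiffness anisotropy relative to the plain isotropic bond model.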
Abstract: Teaching science through computer games, simulations, and artificial intelligence (AI) is an increasingly active research field. We therefore conducted a systematic literature review on serious games for science education to reveal research trends and patterns. We discussed the role of virtual reality (VR), AI, and augmented reality (AR) games in teaching science subjects such as physics. Specifically, we covered research published between 2011 and 2021, investigated country-wise concentration and the most common evaluation methods, and discussed the positive and negative aspects of serious games in science education in particular, as well as attitudes towards the use of serious games in education in general.
Abstract: Nowadays, urban design faces complex demands. It has become necessary to negotiate between stakeholder objectives, the expectations of citizens, and the demands of planning. It is desirable to involve stakeholders and citizens from an early stage of the planning process so that their different viewpoints can be successfully expressed and understood. The basic aim of the study was therefore to ask: how can a mixed reality (MR) application be designed to encourage and improve communication on urban design among stakeholders and citizens? In this paper, we discuss new approaches to visualizing urban building and environment alternatives for different stakeholders and to providing them with tools to explore different approaches to urban planning, in order to support citizens' participation in urban planning with augmented and mixed reality. The major finding of the study concerns learning "how these participatory technologies may help build a community of practice around an urban project". Across the different experiences, we work towards a methodology for using mixed reality as a simulation tool to enhance collaborative interaction in a real Egyptian project. On this basis, we set out a number of recommendations for applying new participatory design tools in urban planning projects.
Funding: Research presented in this paper was funded by the National Key Research and Development Program of China [grant numbers 2016YFB0501503 and 2016YFB0501502] and the Hainan Provincial Department of Science and Technology [grant number ZDKJ2016021].
Abstract: An augmented virtual environment (AVE) is concerned with the fusion of real-time video with 3D models or scenes so as to augment the virtual environment. In this paper, a new approach to establishing an AVE with a wide field of view is proposed, including real-time video projection, multiple video texture fusion, and 3D visualization of moving objects. A new diagonally weighted algorithm is proposed to smooth the apparent gaps within the overlapping area between two adjacent videos. A visualization method for the location and trajectory of a moving virtual object is proposed to display the moving object and its trajectory in the 3D virtual environment. The experimental results show that the proposed set of algorithms can fuse multiple real-time videos with 3D models efficiently; the experiment runs a 3D scene containing two million triangles and six real-time videos at around 55 frames per second on a laptop with 1 GB of graphics card memory. In addition, a realistic AVE with a wide field of view was created on the Digital Earth Science Platform by fusing three videos with a complex indoor virtual scene, visualizing a moving object, and drawing its trajectory in real time.
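The abstract names a diagonally weighted scheme for hiding the seam in the overlap between adjacent projected videos but does not give its details. The minimal Python sketch below is an assumption: it substitutes a generic distance-based feathering weight for the paper's exact diagonal weighting, purely to illustrate how two projected video textures can be blended so that each frame's contribution fades out towards its own footprint boundary.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_overlap(tex_a, tex_b, mask_a, mask_b):
    """Feather-blend two projected video textures over their overlap.

    tex_a, tex_b   : H x W x 3 float arrays (projected video frames)
    mask_a, mask_b : H x W boolean arrays marking each video's footprint
    Each frame's weight falls off with distance to its own footprint
    boundary, so the seam inside the overlapping area fades out smoothly.
    """
    # Distance (in pixels) from each pixel to the outside of the footprint:
    # large in the interior, small near the edge, zero outside.
    d_a = distance_transform_edt(mask_a)
    d_b = distance_transform_edt(mask_b)

    w_a = d_a / np.maximum(d_a + d_b, 1e-6)   # normalized weight of video A
    w_b = 1.0 - w_a                           # remainder goes to video B

    out = w_a[..., None] * tex_a + w_b[..., None] * tex_b
    out[~(mask_a | mask_b)] = 0.0             # leave uncovered pixels empty
    return out
```

Any monotone fall-off towards the footprint edge serves the same purpose; the paper's diagonal weighting can be read as one particular choice of this weight field.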
Abstract: This paper describes an empirical study of an augmented virtuality (AV)-based system dedicated to tele-inspection of built environments. The system is regarded as a solution that allows users to experience a real remote built environment without physically stepping into the actual place. This experience is realized by using AV technology to enrich the virtual counterparts of the place with images captured from the real space. The elements integrated into the AV environment are real photos that represent key landmarks and features of the real place, live video streams of the on-site crew, and 3D virtual design geometries. The focus of this paper is the implementation and evaluation of the AV system in its current state as compared with traditional photo-based methods. Results from this preliminary empirical study show that the AV system achieves good overall satisfaction, although it involves certain general usability issues.
Funding: Supported by the National Natural Science Foundation of China [grant numbers 41901328 and 41974108], the Strategic Priority Research Program of the Chinese Academy of Sciences [grant number XDA19080101], and the National Key Research and Development Program of China [grant numbers 2016YFB0501503 and 2016YFB0501502].
Abstract: Augmented virtual environments (AVE) combine real-time videos with 3D scenes in a Digital Earth System or 3D GIS to present dynamic information and a virtual scene simultaneously. AVE can provide solutions for continuous tracking of moving objects, camera scheduling, and path planning in the real world. This paper proposes a novel approach for 3D path prediction of moving objects in a video-augmented indoor virtual environment. The study includes 3D motion analysis of moving objects, multi-path prediction, hierarchical visualization, and path-based multi-camera scheduling. The results show that these methods form a closed-loop process of 3D path prediction and continuous tracking of moving objects in an AVE. The path analysis algorithms proved accurate and time-efficient, costing less than 1.3 ms to obtain the optimal path. The experiment ran a 3D scene containing 295,000 triangles at around 35 frames per second on a laptop with 1 GB of graphics card memory, which means the performance of the proposed methods is good enough to maintain high rendering efficiency for a video-augmented indoor virtual scene.
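The abstract reports sub-1.3 ms optimal-path queries but does not describe the path analysis algorithm itself. As a hedged stand-in, the sketch below runs a plain Dijkstra search over a hypothetical indoor navigation graph (node names and edge costs are invented for illustration); this is the kind of query such an AVE would issue when predicting where a tracked object can move next.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a navigation graph: graph[u] = [(v, cost), ...]."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if goal not in dist:
        return None, float("inf")             # goal unreachable
    # Walk predecessors back from the goal to recover the node sequence.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Hypothetical corridor-junction graph; edge costs are walking distances in metres.
g = {"A": [("B", 3.0)], "B": [("C", 2.0), ("D", 4.0)], "C": [("D", 1.0)], "D": []}
print(shortest_path(g, "A", "D"))             # (['A', 'B', 'C', 'D'], 6.0)
```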
Funding: Supported by the Reimagine Research Scheme (RRSC) grant "Scalable AI Phenome Platform towards Fast-Forward Plant Breeding (Sensor)" (Nos. A-0009037-02-00 and A-0009037-03-00) at NUS, Singapore; the Reimagine Research Scheme (RRSC) grant "Under-utilised Potential of Micro-biomes (soil) in Sustainable Urban Agriculture" (No. A-0009454-01-00) at NUS, Singapore; and the RIE Advanced Manufacturing and Engineering (AME) programmatic grant "Nanosystems at the Edge" (No. A18A4b0055) at NUS, Singapore.
Abstract: Wearable and flexible electronics are shaping our lives with their unique advantages of light weight, good compliance, and desirable comfort. As we march into the era of the Internet of Things (IoT), numerous sensor nodes are distributed throughout networks to capture, process, and transmit diverse sensory information, which gives rise to a demand for self-powered sensors that reduce power consumption. Meanwhile, the rapid development of artificial intelligence (AI) and fifth-generation (5G) technologies provides an opportunity to enable smart decision-making and instantaneous data transmission in IoT systems. Because sensor counts and dataset sizes keep growing, conventional computing based on the von Neumann architecture can no longer meet the needs of brain-like, highly efficient sensing and computing applications. Neuromorphic electronics, drawing inspiration from the human brain, provide an alternative approach for efficient, low-power information processing. Hence, this review presents a general technology roadmap of self-powered sensors, with detailed discussion of their diversified applications in healthcare, human-machine interaction, smart homes, etc. Leveraging AI and virtual reality/augmented reality (VR/AR) techniques, the development from single sensors to intelligent integrated systems is reviewed in terms of step-by-step system integration and algorithm improvement. To realize efficient sensing and computing, brain-inspired neuromorphic electronics are then briefly discussed. Finally, the review concludes by highlighting both challenges and opportunities in materials, miniaturization, integration, multimodal information fusion, and artificial sensory systems.
Funding: Supported by the Shanghai Municipal Science and Technology Major Project (Nos. 2018SHZDZX01 and 2021SHZDZX0103) and ZJLab. This work is also supported by the Shanghai Sailing Program (No. 21YF1402900), the Science and Technology Commission of Shanghai Municipality (Grant No. 21ZR1403300), the Open Research Fund of the Beijing Key Laboratory of Big Data Technology for Food Safety (Project No. BTBD-2021KF03), Beijing Technology and Business University, and NSFC No. 61972010.
Abstract: The metaverse is a visual world that blends the physical and digital worlds. At present, the development of the metaverse is still at an early stage, and a framework for the visual construction and exploration of the metaverse is lacking. In this paper, we propose a framework that summarizes how graphics, interaction, and visualization techniques support the visual construction of the metaverse and user-centric exploration. We introduce three kinds of visual elements that compose the metaverse and two graphical construction methods in a pipeline. We propose a taxonomy of interaction technologies based on interaction tasks, user actions, feedback, and the various sensory channels, together with a taxonomy of visualization techniques that assist user awareness. Current potential applications and future opportunities are discussed in the context of the visual construction and exploration of the metaverse. We hope this paper can provide a stepping stone for further research in the area of graphics, interaction, and visualization in the metaverse.
Funding: We acknowledge funding from the National Natural Science Foundation of China (No. 61975173), the Major Scientific Research Project of Zhejiang Lab (No. 2019MC0AD01), the Key Research and Development Project of Zhejiang Province (No. 2021C05003), and the CIE-Tencent Robotics X Rhino-Bird Focused Research Program (No. 2020-01-006).
Abstract: The wearable human-machine interface (HMI) is an advanced technology with a wide range of applications, from robotics to augmented/virtual reality (AR/VR). In this study, an optically driven wearable human-interactive smart textile is proposed by integrating a polydimethylsiloxane (PDMS) patch embedded with an optical micro/nanofiber (MNF) array into a piece of textile. Enabled by the highly sensitive, pressure-dependent bending loss of the MNF, the smart textile shows high sensitivity (65.5 kPa⁻¹) and fast response (25 ms) for touch sensing. Benefiting from the warp-and-weft structure of the textile, the optical smart textile can sense slight finger slips along the MNF. Furthermore, machine learning is used to classify touch manners, achieving a recognition accuracy as high as 98.1%. As a proof of concept, a remote-controlled robotic hand and a smart interactive doll based on the optical smart textile are demonstrated. This optical smart textile represents an ideal HMI for AR/VR and robotics applications.
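The abstract states that machine learning classifies touch manners from the MNF signals with 98.1% accuracy but does not name the model or features. The sketch below is an assumed, generic pipeline (random placeholder data, a scikit-learn random forest; the labels and window size are invented) showing how windowed transmitted-power traces from the textile could be mapped to touch-manner labels; it is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder dataset: each sample is a short window of transmitted optical
# power read from the MNF array, flattened into a feature vector; the labels
# stand for touch manners (0 = tap, 1 = press, 2 = slide) -- all invented here.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 128))            # 300 windows x 128 samples each
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("touch-manner accuracy:", clf.score(X_test, y_test))  # near chance on random data
```

With real, labelled bending-loss traces in place of the random arrays, the same pipeline yields a meaningful accuracy figure; the specific classifier is interchangeable.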
Funding: The work was supported by the National 973 Program of China (2015CB352503), the National Natural Science Foundation of China (61772456, U1609217), the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization (U1609217), NSFC (61502416), the Zhejiang Provincial Natural Science Foundation (LR18F020001), the Fundamental Research Funds for the Central Universities (2016QNA5014), the research fund of the Ministry of Education of China (188170-170160502), and the 100 Talents Program of Zhejiang University. This project is also partially funded by Microsoft Research Asia.
Abstract: Recent years have witnessed the rapid development and wide adoption of immersive head-mounted devices such as HTC VIVE, Oculus Rift, and Microsoft HoloLens. These immersive devices have the potential to significantly extend the methodology of urban visual analytics by providing critical 3D context information and creating a sense of presence. In this paper, we propose a theoretical model to characterize the visualizations in immersive urban analytics. Furthermore, based on our comprehensive and concise model, we contribute a typology of methods for combining 2D and 3D visualizations that distinguishes between linked views, embedded views, and mixed views. We also propose a supporting guideline to assist users in selecting a proper view under given circumstances by considering the visual geometry and spatial distribution of the 2D and 3D visualizations. Finally, building on existing work, possible future research opportunities are explored and discussed.
基金the National Natural Science Foundation of China under Grant No.61872210the Guangdong Basic and Applied Basic Research Foundation under Grant Nos.2021A1515012596 and 2021B1515120064the Guangdong Academy of Sciences Special Foundation under Grant No.2021GDASYL-20210102006.
Abstract: Computed tomography (CT) generates cross-sectional images of the body, and visualizing CT images has long been a challenging problem. The emergence of augmented and virtual reality technology has provided promising solutions; however, existing solutions suffer from tethered displays or wireless transmission latency. In this paper, we present ARSlice, a proof-of-concept prototype that can visualize CT images in an untethered manner without wireless transmission latency. The ARSlice prototype consists of two parts, the user end and the projector end. By employing dynamic tracking and projection, the projector end tracks the user-end equipment and projects CT images onto it in real time. The user-end equipment is responsible for displaying these CT images in 3D space. Its main feature is that it is a purely optical device with light weight, low cost, and no energy consumption. Our experiments demonstrate that the ARSlice prototype provides part of six degrees of freedom for the user and a high frame rate. By interactively visualizing CT images in 3D space, ARSlice can help untrained users better understand that CT images are slices of a body.
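The abstract describes the projector end tracking the user-end optical device and projecting CT slices onto it, without giving the projection math. A minimal sketch of that step, under the assumption that the tracked target is planar and its four corners are known in projector pixel coordinates, is a homography warp; OpenCV is used here for illustration, and project_slice and its parameters are hypothetical names rather than part of the ARSlice system.

```python
import cv2
import numpy as np

def project_slice(ct_slice, target_corners, projector_size=(1920, 1080)):
    """Warp one CT cross-section so it lands on the tracked planar target.

    ct_slice       : H x W uint8 image (a single CT slice)
    target_corners : 4 x 2 array of the target's corners in projector pixels,
                     ordered top-left, top-right, bottom-right, bottom-left,
                     as estimated by the tracking stage for the current frame
    projector_size : (width, height) of the projector image
    """
    h, w = ct_slice.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H, _ = cv2.findHomography(src, np.float32(target_corners))
    # Render the warped slice into a full projector frame; re-run per frame
    # as the tracked corners move.
    return cv2.warpPerspective(ct_slice, H, projector_size)
```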