Abstract: This study promotes the application of Augmented Reality (AR), whose multi-sensory capabilities heighten performance quality and the affective response of first-time users, and introduces the possibility of, and a guideline for, mass production through the development of the "Augmented Reality Band", an AR-based gadget that allows users to interact with virtual musical instruments and make music through a system that analyzes markers printed on a shirt. The system is developed using FLARToolKit and is made accessible via the AR Band website; users can run the system online from anywhere by wearing a shirt printed with the embedded musical-instrument AR codes. To gauge AR Band's performance, the authors had technical professionals assess the quality of the system, which was rated very high (4.64/5.00), while user satisfaction evaluated by a sample group was rated at a satisfied level (4.22/5.00). In summary, AR Band has proven its ability to enhance user satisfaction, and its further development and adaptation are addressed in the discussion and suggestions section provided at the end.
Funding: Project supported by the Science Foundation of Shanghai Municipal Commission of Science and Technology (Grant No. 025115008).
Abstract: Nonlinear errors always exist in the data obtained from the tracker in an augmented reality (AR) system, and they badly degrade the AR effect. This paper proposes to rectify these errors using a BP neural network. Because a BP neural network is prone to getting stuck in local extrema and converges slowly, a genetic algorithm is employed to optimize the initial weights and thresholds of the network. The paper discusses how to set the crucial parameters of the algorithm. Experimental results show that the method ensures that the neural network achieves global convergence quickly and correctly. The tracking precision of the AR system is improved after the tracker is rectified, and the sense of depth of the AR system is enhanced.
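As a rough illustration of the idea in this abstract (not the paper's implementation), the sketch below uses a genetic algorithm to search for good initial weights of a small feed-forward network that maps raw tracker readings to corrected coordinates; the network size, GA settings, and synthetic data are assumptions.

```python
# Illustrative sketch: GA-selected initial weights for a small BP-style correction network.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: raw 3-DOF tracker readings -> ground-truth positions.
X = rng.uniform(-1, 1, size=(200, 3))
Y = X + 0.05 * np.sin(3 * X)          # synthetic nonlinear tracker distortion

N_IN, N_HID, N_OUT = 3, 8, 3
N_PARAMS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def unpack(vec):
    """Split a flat parameter vector into layer weights and biases."""
    i = 0
    W1 = vec[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = vec[i:i + N_HID]; i += N_HID
    W2 = vec[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = vec[i:]
    return W1, b1, W2, b2

def mse(vec):
    W1, b1, W2, b2 = unpack(vec)
    hidden = np.tanh(X @ W1 + b1)
    pred = hidden @ W2 + b2
    return np.mean((pred - Y) ** 2)

# Genetic algorithm: elitist selection, arithmetic crossover, sparse Gaussian mutation.
pop = rng.normal(0, 0.5, size=(60, N_PARAMS))
for generation in range(100):
    fitness = np.array([mse(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[:10]]          # keep the best individuals
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = elite[rng.integers(0, 10, size=2)]
        alpha = rng.uniform(0, 1)
        child = alpha * a + (1 - alpha) * b        # crossover
        child += rng.normal(0, 0.05, size=N_PARAMS) * (rng.random(N_PARAMS) < 0.1)  # mutation
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[np.argmin([mse(ind) for ind in pop])]
print("GA-selected initial weights give MSE:", mse(best))
# `best` would then seed standard back-propagation training of the correction network.
```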
Abstract: The spread of social media has increased contact among members of communities on the Internet. Members of these communities often use account names instead of real names, so when they meet in the real world they find it useful to have a tool that associates the faces in front of them with the account names they know. This paper proposes a method that enables a person to identify the account name of the person ("target") in front of him/her using a smartphone. The attendees at a meeting exchange their identifiers (i.e., account names) and GPS information using their smartphones. When the user points his/her smartphone towards a target, the target's identifier is displayed near the target's head on the camera screen using AR (augmented reality). The position where the identifier is displayed is calculated from the differences in longitude and latitude between the user and the target and from the azimuth of the target as seen from the user. The target is identified based on this information, the face-detection coordinates, and the distance between the two. The proposed method has been implemented on Android terminals, and its identification accuracy has been examined through experiments.
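The core geometry the abstract describes can be sketched as follows: compute the bearing from the user's GPS fix to the target's, compare it with the phone's compass heading, and map the difference to a horizontal position on the camera preview where the AR label could be drawn. The field of view, screen size, and example coordinates are assumed values, not the paper's.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing (azimuth) from point 1 to point 2, in degrees from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def label_x(bearing, heading, screen_width_px=1080, horizontal_fov_deg=60.0):
    """Horizontal pixel position of the AR label; None if the target is off-screen."""
    offset = (bearing - heading + 540.0) % 360.0 - 180.0   # signed angle difference
    if abs(offset) > horizontal_fov_deg / 2:
        return None
    return screen_width_px / 2 + offset / (horizontal_fov_deg / 2) * (screen_width_px / 2)

# Example: user facing 80 degrees east of north, target slightly to the north-east.
b = bearing_deg(35.6812, 139.7671, 35.6815, 139.7676)
print(b, label_x(b, heading=80.0))
```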
Funding: Research was supported in part by the National Natural Science Foundation of China (No. 62171473), the Beijing Science and Technology Program (No. Z201100004420015), and the Fundamental Research Funds for the Central Universities of China (No. FRF-TP-20-017A1).
Abstract: Although notable progress has been made in the study of Steady-State Visual Evoked Potential (SSVEP)-based Brain-Computer Interfaces (BCIs), several factors still limit their practical application. One of these factors is the limited portability of the stimulator. In this study, Augmented Reality (AR) technology was introduced to present the visual stimuli of an SSVEP-BCI, and a robot grasping experiment was designed to verify the applicability of the AR-BCI system. An offline experiment was designed to determine the best stimulus time, while an online experiment was used to complete the robot grasping task. The offline experiment revealed that a better information transfer rate could be achieved with a stimulation time of 2 s. Results of the online experiment indicate that all 12 subjects could control the robot to complete the grasping task, which demonstrates the applicability of the AR-SSVEP-humanoid robot (NAO) system. This study verified the reliability of the AR-BCI system and demonstrated the applicability of the AR-SSVEP-NAO system in robot grasping tasks.
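The abstract compares stimulation times by information transfer rate (ITR). As an illustration only, here is the standard Wolpaw ITR formula commonly used in SSVEP-BCI studies; the target count, accuracy, and selection times below are made-up numbers, not values from the paper.

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw ITR: bits per selection scaled to bits per minute."""
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

# Hypothetical comparison: 2 s vs. 3 s stimulation at the same classification accuracy.
print(itr_bits_per_min(n_targets=12, accuracy=0.9, selection_time_s=2.0))
print(itr_bits_per_min(n_targets=12, accuracy=0.9, selection_time_s=3.0))
```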
Abstract: In the past two decades, augmented reality (AR) has received a growing amount of attention from researchers in the manufacturing technology community, because AR can be applied to address a wide range of problems throughout the assembly phase of a product's lifecycle, e.g., planning, design, ergonomics assessment, operation guidance, and training. However, to the best of the authors' knowledge, there has not been any comprehensive review of AR-based assembly systems. This paper aims to provide a concise overview of the technical features, characteristics, and broad range of applications of AR-based assembly systems published between 1990 and 2015. Two thirds of the selected articles were published between 2005 and 2015; they are considered recent pertinent works and are discussed in detail. In addition, the current limiting factors and future development trends are also discussed.
Funding: Supported by the National Natural Science Foundation of China (No. 11502146).
Abstract: We aim to develop a novel visualization tool for percutaneous renal puncture training based on augmented reality (AR) and to compare the needle placement performance of this AR system with ultrasound-guided freehand navigation in phantoms. A head-mounted display-based AR navigation system was developed using the Unity3D software and Visual Studio to overlay the preoperative needle path and the complex anatomical structures onto a phantom in real time. The spatial location of the stationary phantom and the motion of the percutaneous instrument were tracked by a Qualisys motion capture system. To evaluate the tracking accuracy, 15 participants (7 males and 8 females) performed single needle insertions using AR navigation (number of punctures n=75) and ultrasound-guided freehand navigation (n=75). The needle placement error was measured as the Euclidean distance between the actual needle tip and the virtual target using MicronTracker. All participants demonstrated superior needle insertion efficiency when using the AR-assisted puncture method compared with the ultrasound-guided freehand method. The ultrasound-guided method showed a larger needle insertion error than the AR method (5.54 mm ± 2.59 mm vs. 4.34 mm ± 2.10 mm, p<0.05) and a longer placement time (19.08 s ± 3.59 s vs. 15.14 s ± 2.72 s, p<0.0001). Our AR training system facilitates needle placement performance and alleviates hand-eye coordination problems. The system has the potential to increase the efficiency and effectiveness of percutaneous renal puncture training.
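A minimal sketch of the error metric used in this study: the placement error is the Euclidean distance between the tracked needle tip and the virtual target, and the two navigation methods are compared with a significance test. The coordinate data below are randomly generated placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def placement_errors(tips_mm, targets_mm):
    """Per-insertion Euclidean distance (mm) between needle tip and virtual target."""
    return np.linalg.norm(tips_mm - targets_mm, axis=1)

targets = rng.uniform(0, 100, size=(75, 3))
ar_tips = targets + rng.normal(0, 2.5, size=(75, 3))     # simulated AR-guided insertions
us_tips = targets + rng.normal(0, 3.2, size=(75, 3))     # simulated ultrasound-guided insertions

ar_err = placement_errors(ar_tips, targets)
us_err = placement_errors(us_tips, targets)
t, p = stats.ttest_ind(us_err, ar_err, equal_var=False)  # Welch's t-test
print(f"AR: {ar_err.mean():.2f} ± {ar_err.std():.2f} mm, "
      f"US: {us_err.mean():.2f} ± {us_err.std():.2f} mm, p={p:.4f}")
```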
Funding: This work was funded by the National Key Research and Development Program of China [grant number 2016YFB0502102]. It was also partially funded by the National Natural Science Foundation of China [grant number 41101436] and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
Abstract: The recent rapid development of computer vision and mobile sensor technology, such as mobile LiDAR and RGB-D cameras, is pushing the technology to meet the needs of real-life applications in the fields of Augmented Reality (AR), robotics, indoor GIS, and self-driving. Camera localization is often a key enabling technology for these applications. In this paper, we developed a novel camera localization workflow based on a highly accurate 3D prior map optimized by our RGB-D SLAM method, in conjunction with a deep learning routine trained on consecutive video frames labeled with high-precision camera poses. Furthermore, an AR registration method tightly coupled with a game engine is proposed, which incorporates the proposed localization algorithm and aligns the real Kinetic camera with a virtual camera of the game engine to facilitate AR application development in an integrated manner. The experimental results show that the localization accuracy reaches an average error of 35 cm based on a fine-tuned prior 3D feature database with 3 cm accuracy compared against the ground-truth 3D LiDAR map. The influence of the localization accuracy on the visual effect of the AR overlay is also demonstrated, and the alignment of the real and virtual cameras streamlines the implementation of an AR fire emergency response demo in a Virtual Geographic Environment.
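A hedged sketch of the real-to-virtual camera alignment idea: a 4x4 camera-to-world pose produced by a localization module is decomposed into the position, forward, and up vectors a game-engine virtual camera typically expects. The conventions used (column vectors, -Z as the optical axis) are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def pose_to_camera_params(T_cam_to_world):
    """Return (position, forward, up) for a virtual camera from a 4x4 pose matrix."""
    R = T_cam_to_world[:3, :3]
    position = T_cam_to_world[:3, 3]
    forward = -R[:, 2]        # camera looks along -Z in this assumed convention
    up = R[:, 1]
    return position, forward, up

# Example pose: camera 1.5 m above the ground, rotated 30 degrees about the vertical axis.
theta = np.radians(30)
T = np.eye(4)
T[:3, :3] = np.array([[np.cos(theta), 0, np.sin(theta)],
                      [0, 1, 0],
                      [-np.sin(theta), 0, np.cos(theta)]])
T[:3, 3] = [0.0, 1.5, 0.0]
print(pose_to_camera_params(T))
```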
Abstract: It is a challenging task for operators to program a remote robot for welding manipulation depending only on visual information from the remote site. This paper proposes an intuitive user interface for programming welding robots remotely using augmented reality (AR) with haptic feedback. The proposed system uses a depth camera to reconstruct the surfaces of workpieces. A haptic input device is used to allow users to define welding paths along these surfaces. An AR user interface is developed to allow users to visualize and adjust the orientation of the welding torch. Compared with traditional robotic welding path programming methods, which rely on prior CAD models or contact between the robot end-effector and the workpiece, the proposed approach allows for fast and intuitive remote robotic welding path programming without prior knowledge of CAD models of the workpieces. The experimental results show that the proposed approach provides a user-friendly interface and can assist users in obtaining an accurate welding path.
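One step the abstract implies can be sketched as follows: waypoints entered with the haptic device are snapped onto the surface reconstructed from the depth camera so that the welding path lies on the workpiece. A brute-force nearest-neighbour lookup over a point cloud stands in for whatever surface representation the actual system uses; the data are toy values.

```python
import numpy as np

def snap_path_to_surface(waypoints, surface_points):
    """Replace each user-defined waypoint by the closest reconstructed surface point."""
    snapped = []
    for p in waypoints:
        d = np.linalg.norm(surface_points - p, axis=1)
        snapped.append(surface_points[np.argmin(d)])
    return np.array(snapped)

# Toy data: a flat plate sampled on a grid and three roughly drawn waypoints above it.
xs, ys = np.meshgrid(np.linspace(0, 0.3, 31), np.linspace(0, 0.2, 21))
plate = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
drawn = np.array([[0.05, 0.10, 0.02], [0.15, 0.11, 0.03], [0.25, 0.09, 0.02]])
print(snap_path_to_surface(drawn, plate))
```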
Funding: Supported by the National Key Research and Development Program of China (2020YFF0305304).
Abstract: To study the role of the new technological concept of shared experiences in the digital interactive experience of cultural heritage, and to apply it to this field to solve its current problems, we started from the mixed reality (MR) technology that shared experiences rely on: suitable software and hardware platforms were investigated and selected, a universal shared-experiences solution was designed, and an experimental project based on the proposed solution was built to verify its feasibility. In the end, a proven and workable shared-experiences solution was obtained. This solution includes a proposed MR spatial alignment method and integrates the existing MR content production process with standard network synchronization functions. We conclude that the introduction and reasonable use of new technologies can help the development of the digital interactive experience of cultural heritage. The shared-experiences solution balances investment issues in the exhibition, display effect, and user experience; it can speed up the promotion of cultural heritage and bring the vitality of MR technology to relevant projects.
Funding: This work was supported by the National Key R&D Program of China (2019YFC1521102) and the National Natural Science Foundation of China (61932003 and 61772051).
Abstract: When searching for a dynamic target in an unknown real-world scene, search efficiency is greatly reduced if users lack information about the spatial structure of the scene. Most target-search studies, especially in robotics, focus on determining either the shortest path when the target's position is known, or a strategy to find the target as quickly as possible when its position is unknown. However, in the real world the target's position is often known only intermittently, e.g., in the case of surveillance cameras. Our goal is to help users find a dynamic target efficiently in the real world when the target's position is intermittently known. To achieve this, we designed an AR guidance assistance system that provides optimal directional guidance to users based on searching a prediction graph. We assume that a certain number of depth cameras are fixed in the real scene to obtain the dynamic target's position. The system automatically analyzes all possible meetings between the user and the target and generates optimal directional guidance to help the user catch up with the target. A user study was used to evaluate our method, and its results showed that, compared to free search and a top-view method, our method significantly improves target search efficiency.
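A minimal sketch of the "analyze all possible meetings" idea, not the paper's algorithm: on a graph of walkable locations, compute the user's shortest travel times with BFS, pick the earliest point on the target's predicted route that the user can reach in time, and return the first step towards it. The graph, route, and unit step times are illustrative assumptions.

```python
from collections import deque

def bfs_times_and_parents(graph, start):
    """Shortest travel time (in steps) from start to every node, plus BFS parents."""
    times, parents = {start: 0}, {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in times:
                times[nxt] = times[node] + 1
                parents[nxt] = node
                queue.append(nxt)
    return times, parents

def next_step_to_meet(graph, user_pos, predicted_target_route):
    """First move towards the earliest reachable meeting point, or None if no meeting."""
    times, parents = bfs_times_and_parents(graph, user_pos)
    for t, target_pos in enumerate(predicted_target_route):
        if times.get(target_pos, float("inf")) <= t:   # user can arrive no later than the target
            node = target_pos
            while parents[node] is not None and parents[node] != user_pos:
                node = parents[node]
            return node
    return None

corridor = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(next_step_to_meet(corridor, user_pos=0, predicted_target_route=[4, 3, 2, 2, 2]))
```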
Funding: Supported by the Reimagine Research Scheme (RRSC) grant ("Scalable AI Phenome Platform towards Fast-Forward Plant Breeding (Sensor)", Nos. A-0009037-02-00 and A-0009037-03-00) at NUS, Singapore; the Reimagine Research Scheme (RRSC) grant ("Under-utilised Potential of Micro-biomes (soil) in Sustainable Urban Agriculture", No. A-0009454-01-00) at NUS, Singapore; and the RIE Advanced Manufacturing and Engineering (AME) programmatic grant ("Nanosystems at the Edge", No. A18A4b0055) at NUS, Singapore.
Abstract: Wearable and flexible electronics are shaping our lives with their unique advantages of light weight, good compliance, and desirable comfort. As we march into the era of the Internet of Things (IoT), numerous sensor nodes are distributed throughout networks to capture, process, and transmit diverse sensory information, which gives rise to the demand for self-powered sensors that reduce power consumption. Meanwhile, the rapid development of artificial intelligence (AI) and fifth-generation (5G) technologies provides an opportunity to enable smart decision-making and instantaneous data transmission in IoT systems. Owing to the continuously increasing number of sensors and datasets, conventional computing based on the von Neumann architecture can no longer meet the needs of brain-like, highly efficient sensing and computing applications. Neuromorphic electronics, drawing inspiration from the human brain, provide an alternative approach for efficient and low-power information processing. Hence, this review presents the general technology roadmap of self-powered sensors, with detailed discussion of their diversified applications in healthcare, human-machine interactions, smart homes, etc. Leveraging AI and virtual reality/augmented reality (VR/AR) techniques, the development from single sensors to intelligent integrated systems is reviewed in terms of step-by-step system integration and algorithm improvement. To realize efficient sensing and computing, brain-inspired neuromorphic electronics are then briefly discussed. Finally, the review concludes by highlighting both challenges and opportunities in terms of materials, minimization, integration, multimodal information fusion, and artificial sensory systems.
Abstract: Intuitive and efficient interfaces for human-robot interaction (HRI) have been a challenging issue in robotics, as they are essential for the prevalence of robots supporting humans in key areas of activity. This paper presents a novel augmented reality (AR) based interface to facilitate human-virtual robot interaction. A number of human-virtual robot interaction methods have been formulated and implemented with respect to the various types of operations needed in different robotic applications. A Euclidean distance-based method is developed to assist the users in interacting with the virtual robot and the spatial entities in an AR environment. A monitor-based visualization mode is adopted as it enables the users to perceive the virtual contents associated with different interaction methods, and the virtual content augmented in the real environment is informative and useful to the users during their interaction with the virtual robot. Case studies are presented to demonstrate the successful implementation of the AR-based HRI interface in planning robot pick-and-place operations and path-following operations.
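A minimal sketch of a Euclidean distance-based interaction aid of the kind this abstract mentions: the virtual entity closest to the user's 3D cursor is chosen as the interaction target, but only within a snapping radius. The entity names and radius are illustrative assumptions.

```python
import numpy as np

def pick_nearest_entity(cursor_xyz, entities, snap_radius=0.05):
    """Return (name, distance) of the closest virtual entity within snap_radius, else None."""
    best_name, best_dist = None, float("inf")
    for name, position in entities.items():
        d = float(np.linalg.norm(np.asarray(position) - np.asarray(cursor_xyz)))
        if d < best_dist:
            best_name, best_dist = name, d
    return (best_name, best_dist) if best_dist <= snap_radius else None

entities = {"robot_gripper": (0.42, 0.10, 0.30), "pick_point": (0.40, 0.12, 0.28)}
print(pick_nearest_entity((0.41, 0.11, 0.29), entities))
```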
Abstract: Camera pose estimation with respect to target scenes is an important technology for superimposing virtual information in augmented reality (AR). However, it is difficult to estimate the camera pose for all possible view angles because feature descriptors such as SIFT are not completely invariant from every perspective. We propose a novel method of robust camera pose estimation using multiple feature descriptor databases generated for each partitioned viewpoint, within which the feature descriptor of each keypoint is almost invariant. Our method estimates the viewpoint class for each input image using deep learning based on a set of training images prepared for each viewpoint class. We present two ways to prepare these images for deep learning and database generation. In the first method, images are generated using a projection matrix to ensure robust learning in a range of environments with changing backgrounds. The second method uses real images to learn a given environment around a planar pattern. Our evaluation results confirm that our approach increases the number of correct matches and the accuracy of camera pose estimation compared to the conventional method.
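A hedged sketch of the first training-image preparation route mentioned above: a planar pattern image is warped by homographies that simulate different viewpoint classes, producing synthetic views for training the viewpoint classifier. The angles, file names, and the use of OpenCV here are assumptions for illustration, not the paper's setup.

```python
import numpy as np
import cv2

def synth_view(pattern, yaw_deg, f=800.0):
    """Warp a fronto-parallel planar pattern as if the camera were rotated by yaw_deg."""
    h, w = pattern.shape[:2]
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]])
    yaw = np.radians(yaw_deg)
    R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                  [0, 1, 0],
                  [-np.sin(yaw), 0, np.cos(yaw)]])
    # Rotation-induced homography H = K R K^-1, used here as a simple viewpoint simulation.
    H = K @ R @ np.linalg.inv(K)
    return cv2.warpPerspective(pattern, H, (w, h))

pattern = cv2.imread("planar_pattern.png")        # hypothetical pattern image
if pattern is not None:
    for cls, angle in enumerate([-40, -20, 0, 20, 40]):   # one synthetic image per viewpoint class
        cv2.imwrite(f"view_class_{cls}.png", synth_view(pattern, angle))
```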
Funding: This research is supported by the Singapore A*STAR Agency for Science, Technology and Research, Science and Engineering Research Council, Industrial Robotics Programme on Interface for Human Robot Interaction (Grant No. 1225100001), and the Public Sector Research Funding Programme on Embedding Powerful Computer Applications in Ubiquitous Augmented Reality Environments (Grant No. 1521200081).
Abstract: Robotic welding demands high accuracy and precision. However, robot programming is often a tedious and time-consuming process that requires expert knowledge. This paper presents an augmented reality assisted robot welding task programming (ARWP) system with a user-friendly augmented reality (AR) interface that simplifies and speeds up the programming of robotic welding tasks. The ARWP system makes the programming of robot welding tasks more user-friendly and reduces the need for trained programmers with expertise in specific robotic systems. The AR interface simplifies the definition of the welding path as well as the welding gun orientation; the system can locate the welding seam of a workpiece quickly and generate a viable welding path based on the user input. The developed system is integrated with the touch-sensing capability of welding robots in order to locate the welding path accurately from the user input for fillet welding, and it is applicable to other welding processes and methods of seam localization. The system implementation is described and evaluated with a case study.
Abstract: Digital twin (DT) technology has garnered attention in both industry and academia. With advances in big data and internet of things (IoT) technologies, the infrastructure for DT implementation is becoming more readily available. As an emerging technology, it presents both potential and challenges. DT is a promising methodology for leveraging the modern data explosion to aid engineers, managers, healthcare experts, and politicians in managing production lines, patient health, and smart cities by providing comprehensive, high-fidelity monitoring, prognostics, and diagnostics tools. New research and surveys on the topic are published regularly, as interest in this technology is high, although there is a lack of standardization in the definition of a DT. Due to the large amount of information present in a DT system and the dual cyber-physical nature of a DT, augmented reality (AR) is a suitable technology for data visualization and interaction with DTs. This paper classifies the different types of DT implementations that have been reported, highlights studies that have used AR as a data visualization tool for DT, and examines recent approaches to solving outstanding challenges in DT and in the integration of DT and AR.