Recent advances in sensing and display technologies have been transforming our living environments drastically. In this paper, a new technique is introduced to accurately reconstruct indoor environments in three dimensions using a mobile platform. The system incorporates a scanner array of four ultrasonic sensors, an HD web camera, and an inertial measurement unit (IMU). The whole platform is mountable on mobile facilities such as a wheelchair. The proposed mapping approach takes advantage of the precision of the 3D point clouds produced by the ultrasonic sensor system, despite their sparsity, to help build a more definite 3D scene. Using a robust iterative algorithm, it combines the structure-from-motion 3D point clouds with those generated by the ultrasonic sensors and the IMU, using the depth measurements from the ultrasonic sensors to derive a much more precise point cloud. Because the ultrasonic point clouds capture recognizable object features in the targeted scene, feature extraction is performed on consecutive point clouds to ensure accurate alignment. The ranges measured by the ultrasonic sensors contribute to the depth correction of the generated 3D scenes. Experiments revealed that the system generates 3D maps of the environment that are not only dense but also precise. The results show that the designed 3D modeling platform can support assisted-living environments with self-navigation, obstacle alerts, and other driving-assistance tasks.
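The abstract names a "robust iterative algorithm" for merging the structure-from-motion point cloud with the ultrasonic/IMU point cloud but does not specify which one. A common choice for this kind of rigid alignment is iterative closest point (ICP); the sketch below assumes ICP with a Kabsch (SVD-based) pose fit, purely as an illustration of the alignment step, not as the paper's actual implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(source, target, iters=30):
    """Point-to-point ICP: repeatedly match nearest neighbors and re-fit the pose."""
    src = source.copy()
    for _ in range(iters):
        # nearest target point for each source point (brute-force search)
        dists = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[dists.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src
```

In the paper's setting, `source` would be the (dense) camera-derived cloud and `target` the sparse but metrically accurate ultrasonic/IMU cloud, so the ultrasonic ranges anchor the depth scale of the fused scene.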