Funding: supported by the Advanced Digital Sciences Center (ADSC) under a grant from the Agency for Science, Technology and Research of Singapore (A*STAR)
Abstract: With the widespread use of digital cameras, imaging software, photo-sharing sites, social networks, and other related technologies, media production and consumption patterns have become much more multifaceted and complex than they used to be. User-generated content in particular has grown tremendously. As a result, quality of experience (QoE) and related quality assessment (QA) methods must also be looked at from a different angle. This paper contrasts some of the traditional quality assessment approaches with newer approaches designed for user-generated content. It also describes some sample applications we have developed.
Abstract: In this article we present our system for scalable, robust, and fast city-scale reconstruction from Internet photo collections (IPC), obtaining geo-registered dense 3D models. The major achievement of our system is the efficient use of coarse appearance descriptors combined with strong geometric constraints to reduce the computational complexity of the image overlap search. This unique combination of recognition and geometric constraints allows our method to reduce from quadratic complexity in the number of images to almost linear complexity in the IPC size. Accordingly, our 3D-modeling framework is inherently more scalable than other state-of-the-art methods and in fact is currently the only method to support modeling from millions of images. In addition, we propose a novel mechanism to overcome the inherent scale ambiguity of the reconstructed models by exploiting geo-tags of the Internet photo collection images and readily available StreetView panoramas for fully automatic geo-registration of the 3D model. Moreover, our system also exploits image appearance clustering to tackle the challenge of computing dense 3D models from an image collection that has significant variation in illumination between images, along with a wide variety of sensors and their associated different radiometric camera parameters. Our algorithm exploits the redundancy of the data to suppress estimation noise through a novel depth map fusion. The fusion simultaneously exploits surface and free-space constraints while merging a large number of depth maps. Cost volume compression during the fusion achieves lower memory requirements for high-resolution models. We demonstrate our system on a variety of scenes from an Internet photo collection of Berlin containing almost three million images, from which we compute dense models in less than a day on a single computer.
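The overlap-search pruning described in the abstract can be sketched as follows. This is a simplified illustration, not the paper's actual pipeline: the tiny-thumbnail descriptor, the neighbour count `k`, and the brute-force distance matrix are all assumptions standing in for the coarse appearance descriptors and the index structure a real system would use. The key point it demonstrates is that proposing only a constant number of candidate partners per image keeps the number of pairs to verify roughly linear in the collection size rather than quadratic.

```python
import numpy as np

def coarse_descriptor(image, size=4):
    """Downsample an image to a tiny grayscale thumbnail and L2-normalize it.
    A stand-in for a coarse appearance descriptor; illustrative only."""
    h, w = image.shape[:2]
    ys = (np.arange(size) * h) // size
    xs = (np.arange(size) * w) // size
    thumb = image[np.ix_(ys, xs)].astype(np.float64)
    v = thumb.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def candidate_pairs(descriptors, k=2):
    """Propose only each image's k nearest neighbours in descriptor space,
    so the candidate set grows ~linearly with collection size instead of
    covering all n*(n-1)/2 pairs."""
    D = np.stack(descriptors)
    # Brute-force pairwise squared distances (fine for a demo;
    # a large-scale system would use approximate nearest-neighbour search).
    d2 = ((D[:, None, :] - D[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    pairs = set()
    for i in range(len(D)):
        for j in np.argsort(d2[i])[:k]:
            pairs.add((min(i, int(j)), max(i, int(j))))
    return sorted(pairs)
```

In a full system, each surviving candidate pair would still be verified with the strong geometric constraints the abstract mentions; the descriptor stage only decides which pairs are worth that more expensive check.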
Funding: supported by the National University of Singapore (School of Computing); by the Being There Centre, a collaboration between Nanyang Technological University Singapore, Eidgenössische Technische Hochschule Zürich, and the University of North Carolina at Chapel Hill; by the Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative; and by the Interactive Digital Media Programme Office
Abstract: We present a method for transferring lighting between photographs of a static scene. Our method takes as input a photo collection depicting a scene with varying viewpoints and lighting conditions. We cast lighting transfer as an edit propagation problem, where the transfer of local illumination across images is guided by sparse correspondences obtained through multi-view stereo. Instead of directly propagating color, we learn local color transforms from corresponding patches in pairs of images and propagate these transforms in an edge-aware manner to regions with no correspondences. Our color transforms model the large variability of appearance changes in local regions of the scene and are robust to missing or inaccurate correspondences. The method is fully automatic and can transfer strong shadows between images. We show applications of our image relighting method for enhancing photographs, browsing photo collections with harmonized lighting, and generating synthetic time-lapse sequences.
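The per-region color transforms mentioned in the abstract can be illustrated with a minimal sketch: fitting an affine transform between two corresponding patches by least squares and applying it elsewhere. The affine (3x4) model, the patch shapes, and the function names here are assumptions for illustration; the paper's actual transform model and its edge-aware propagation step are not reproduced.

```python
import numpy as np

def fit_affine_color_transform(src_patch, dst_patch):
    """Fit a 3x4 affine color transform T by least squares so that for each
    pixel, dst ≈ T[:, :3] @ src + T[:, 3]. Patches are (H, W, 3) arrays of
    corresponding pixels."""
    S = src_patch.reshape(-1, 3).astype(np.float64)
    D = dst_patch.reshape(-1, 3).astype(np.float64)
    A = np.hstack([S, np.ones((S.shape[0], 1))])   # (N, 4): colors + bias term
    T, *_ = np.linalg.lstsq(A, D, rcond=None)      # (4, 3) solution
    return T.T                                     # return as (3, 4)

def apply_affine_color_transform(T, image):
    """Apply a fitted 3x4 transform to every pixel of an (H, W, 3) image."""
    px = image.reshape(-1, 3).astype(np.float64)
    out = px @ T[:, :3].T + T[:, 3]
    return out.reshape(image.shape)
```

Fitting a transform per local region, rather than one global mapping, is what lets this family of methods capture spatially varying effects such as a shadow boundary crossing a patch; propagating the transforms (rather than raw colors) to uncorresponded regions is the edit-propagation step the abstract describes.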