Funding: Johns Hopkins University Applied Physics Lab, for providing the imagery of the 2019 DFC datasets.
Abstract: The accuracy of Digital Surface Models (DSMs) generated using stereo matching methods varies with the acquisition conditions and configuration parameters of the stereo images. It has become good practice to fuse DSMs generated from various stereo pairs, combining them through computational approaches into a single, more accurate, and more complete DSM. However, accurately characterizing detailed objects and their boundaries still presents a challenge, since most boundary-aware fusion methods struggle to achieve sharp depth discontinuities due to the averaging effects across different DSMs. Therefore, we propose a simple and efficient adaptive image-guided DSM fusion method that applies k-means clustering to small patches of the orthophoto to guide the pixel-level fusion, adapting it to the most consistent and relevant elevation points. The experimental results show that our proposed method outperforms the compared methods both in accuracy and in its ability to preserve sharp depth edges.
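As a rough illustration of the image-guided fusion idea described above, the Python sketch below clusters each orthophoto patch with k-means and fuses the candidate DSM elevations within each cluster by averaging the most mutually consistent DSMs. It is a minimal approximation under assumed inputs (a co-registered DSM stack and an RGB orthophoto as NumPy arrays), not the authors' exact algorithm; the function name and the consistency rule are illustrative.

import numpy as np
from sklearn.cluster import KMeans

def fuse_dsms_image_guided(dsms, ortho, patch=16, k=3):
    # dsms: (N, H, W) stack of co-registered candidate DSMs (assumed input).
    # ortho: (H, W, 3) RGB orthophoto aligned to the DSMs (assumed input).
    n, h, w = dsms.shape
    fused = np.full((h, w), np.nan, dtype=np.float32)
    for r0 in range(0, h, patch):
        for c0 in range(0, w, patch):
            r1, c1 = min(r0 + patch, h), min(c0 + patch, w)
            rgb = ortho[r0:r1, c0:c1].reshape(-1, 3).astype(np.float32)
            k_eff = min(k, rgb.shape[0])
            # Cluster the orthophoto patch into k spectral groups.
            labels = KMeans(n_clusters=k_eff, n_init=4).fit_predict(rgb)
            labels = labels.reshape(r1 - r0, c1 - c0)
            block = dsms[:, r0:r1, c0:c1]
            for lab in range(k_eff):
                mask = labels == lab
                if not mask.any():
                    continue
                vals = block[:, mask]                     # (N, n_pixels)
                # Keep the DSMs whose cluster median is closest to the overall
                # cluster median (the most consistent elevations), then
                # average those per pixel.
                dev = np.abs(np.nanmedian(vals, axis=1) - np.nanmedian(vals))
                keep = np.argsort(dev)[: max(1, n // 2)]
                fused[r0:r1, c0:c1][mask] = np.nanmean(vals[keep], axis=0)
    return fused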
Funding: Supported by the Office of Naval Research [Award No. N000141712928].
Abstract: Modern photogrammetry converts images and/or LiDAR data into usable 2D/3D/4D products. The photogrammetric industry offers engineering-grade hardware and software components for various applications. While some components of the data processing pipeline already run automatically, substantial manual involvement is still required to obtain reliable and high-quality results. The recent development of machine learning techniques has attracted great attention for their potential to address complex tasks that traditionally require manual input. It is therefore worth revisiting the role and existing efforts of machine learning techniques in the field of photogrammetry, as well as in its neighboring field, computer vision. This paper provides an overview of state-of-the-art efforts in machine learning to bring the automated and 'intelligent' component to photogrammetry, computer vision, and (to a lesser degree) remote sensing. We primarily cover the relevant efforts along a typical 3D photogrammetric processing pipeline: (1) data acquisition, (2) georeferencing/interest point matching, (3) Digital Surface Model generation, and (4) semantic interpretation, followed by conclusions and our insights.
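For readers unfamiliar with step (2) of that pipeline, the short Python example below shows the classical, hand-crafted baseline for interest point matching (SIFT detection plus a ratio test in OpenCV) that learning-based matchers covered in such reviews aim to improve on. The image file names are placeholders.

import cv2

# Placeholder image paths; any overlapping image pair works.
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [p[0] for p in pairs
        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
print(f"{len(good)} putative correspondences")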
Funding: This study was established at the Singapore-ETH Centre for Global Environmental Sustainability (SEC), co-funded by the Singapore National Research Foundation (NRF) and ETH Zurich.
Abstract: This study addresses the need to make reality-based 3D urban models more detailed. Our method combines established workflows from photogrammetry and procedural modelling in order to exploit the distinct advantages of both approaches. The overall workflow uses photogrammetry to measure geo-referenced satellite imagery, creating 3D building models and textured roof geometry. The results are then used to create attributed building footprints, which feed the procedural modelling part of the workflow, where procedural building models and detailed façade structures, based on street-level photos, are created. The final step merges the textured roof geometry with the procedural façade geometry, resulting in an improved model compared with using either technique alone. The article details the individual workflow steps and exemplifies the approach by means of a concrete case study carried out in Singapore's Punggol area, where we modelled a newly developed part of Singapore consisting mainly of high-rise towers.
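To make the procedural-modelling step more concrete, the following Python sketch extrudes an attributed building footprint to a simple 3D mass of the given height, the kind of geometry onto which façade rules and textures would subsequently be applied. The footprint, height attribute, and helper function are hypothetical illustrations, not the tooling used in the study.

import numpy as np

def extrude_footprint(footprint_xy, height):
    # footprint_xy: (N, 2) polygon vertices in metres, counter-clockwise.
    fp = np.asarray(footprint_xy, dtype=float)
    base = np.c_[fp, np.zeros(len(fp))]           # ground ring, z = 0
    top = np.c_[fp, np.full(len(fp), height)]     # roof ring, z = height
    verts = np.vstack([base, top])
    # Two triangles per wall segment; roof/floor triangulation is left to a
    # real pipeline (e.g. a constrained Delaunay triangulator).
    n = len(fp)
    faces = []
    for i in range(n):
        j = (i + 1) % n
        faces.append((i, j, n + j))
        faces.append((i, n + j, n + i))
    return verts, np.asarray(faces)

# Hypothetical attributed footprint: a 20 m x 30 m rectangle, 60 m tall.
verts, faces = extrude_footprint([(0, 0), (20, 0), (20, 30), (0, 30)], 60.0)
print(verts.shape, faces.shape)   # (8, 3) (8, 3)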
Funding: Supported by the National Science Foundation [grant number 2036193] and in part by the Office of Naval Research [grant numbers N00014-17-1-2928, N00014-20-1-2141].
Abstract: In this paper, we present a case study that performs unmanned aerial vehicle (UAV)-based fine-scale 3D change detection and monitoring of the progressive collapse performance of a building during a demolition event. Multi-temporal oblique photogrammetry images are collected, with 3D point clouds generated at different stages of the demolition. The geometric accuracy of the generated point clouds has been evaluated against both airborne and terrestrial LiDAR point clouds, achieving average distances of 12 cm and 16 cm for the roof and façade, respectively. We propose a hierarchical volumetric change detection framework that unifies multi-temporal UAV images for pose estimation (free of ground control points), reconstruction, and a coarse-to-fine 3D density change analysis. This work provides a solution capable of addressing change detection on full 3D time-series datasets in which dramatic scene content changes appear progressively. Our change detection results on the building demolition event have been evaluated against manually marked ground-truth changes and achieve an F1 score varying from 0.78 to 0.92, with consistently high precision (0.92–0.99). Volumetric changes throughout the demolition are derived from the change detection and are shown to reflect the building demolition progression well, both qualitatively and quantitatively.
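As a simplified, single-level illustration of the volumetric change analysis described above (the paper's framework is hierarchical and coarse-to-fine), the Python sketch below voxelizes two point-cloud epochs, compares per-voxel point densities, and flags voxels whose density changes beyond a relative threshold. Function names, voxel size, and threshold are assumptions for illustration only.

import numpy as np

def voxel_density(points, origin, voxel, dims):
    # Count points per voxel on a regular grid anchored at `origin`.
    idx = np.floor((points - origin) / voxel).astype(int)
    ok = np.all((idx >= 0) & (idx < dims), axis=1)
    counts = np.zeros(dims, dtype=np.int32)
    np.add.at(counts, tuple(idx[ok].T), 1)
    return counts

def detect_changes(pc_t0, pc_t1, voxel=0.5, rel_thresh=0.5):
    # pc_t0, pc_t1: (M, 3) point clouds from two epochs in a common frame.
    both = np.vstack([pc_t0, pc_t1])
    origin = both.min(axis=0)
    dims = tuple(np.ceil((both.max(axis=0) - origin) / voxel).astype(int) + 1)
    d0 = voxel_density(pc_t0, origin, voxel, dims)
    d1 = voxel_density(pc_t1, origin, voxel, dims)
    # Flag voxels whose point density changed by more than rel_thresh,
    # restricted to voxels occupied in at least one epoch.
    changed = np.abs(d1 - d0) / np.maximum(d0, 1) > rel_thresh
    return changed & ((d0 > 0) | (d1 > 0))

# Toy usage with random placeholder clouds.
rng = np.random.default_rng(0)
mask = detect_changes(rng.uniform(0, 10, (5000, 3)),
                      rng.uniform(0, 10, (5000, 3)))
print(mask.sum(), "changed voxels")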