Funding: The authors thank the European Space Agency (ESA) for funding the project "Urban Thematic Exploitation Platform – TEP Urban" (ESRIN/Contract No. 4000113707/15/I-NB), since the processing of the global TimeScan product based on Landsat-8 data was realized in the context of this initiative.
Abstract: The European Sentinel missions and the latest generation of United States Landsat satellites provide new opportunities for global environmental monitoring. They acquire imagery at spatial resolutions between 10 and 60 m with a temporal and spatial coverage that could previously only be realized on the basis of lower-resolution Earth observation data (>250 m). However, the images gathered by these modern missions rapidly add up to data volumes that can no longer be handled with standard workstations and software solutions. Hence, this contribution introduces the TimeScan concept, which combines pre-existing tools into an exemplary modular pipeline for the flexible and scalable processing of massive image data collections on a variety of (private or public) computing clusters. The TimeScan framework covers solutions for data access to arbitrary mission archives (with different data provisioning policies) and data ingestion into a processing environment (EO2Data module), mission-specific pre-processing of multi-temporal data collections (Data2TimeS module), and the generation of a final TimeScan baseline product (TimeS2Stats module) providing a spectrally and temporally harmonized representation of the observed surfaces. Technically, a TimeScan layer aggregates the information content of hundreds or thousands of single images available for the area and time period of interest (i.e. up to hundreds of TBs or even PBs of data) into a higher-level product with significantly reduced volume. In a first test, the TimeScan pipeline was used to process a global coverage of 452,799 multispectral Landsat-8 scenes acquired from 2013 to 2015, a global dataset of 25,550 Envisat ASAR radar images collected from 2010 to 2012, and regional Sentinel-1 and Sentinel-2 collections of ~1,500 images acquired from 2014 to 2016. The resulting TimeScan products have already been used successfully in various studies related to the large-scale monitoring of environmental processes and their temporal dynamics.
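The abstract does not spell out which statistics a TimeScan layer actually contains, so the following is only a minimal sketch of the underlying aggregation idea: collapsing a per-pixel time series of observations into a few temporal statistics, with cloud- or nodata-masked observations encoded as NaN. All function and variable names here are illustrative and not part of the TimeScan software.

```python
import numpy as np

def aggregate_time_series(stack):
    """Collapse a (time, rows, cols) image stack into per-pixel
    temporal statistics, ignoring missing observations (NaN)."""
    return {
        "mean": np.nanmean(stack, axis=0),
        "min": np.nanmin(stack, axis=0),
        "max": np.nanmax(stack, axis=0),
        "std": np.nanstd(stack, axis=0),
        # number of valid (non-masked) observations per pixel
        "count": np.sum(~np.isnan(stack), axis=0),
    }

# Toy example: 5 acquisitions of a 2x2 scene, one cloud-masked pixel
stack = np.random.rand(5, 2, 2)
stack[0, 0, 0] = np.nan  # cloud-masked observation in the first scene
stats = aggregate_time_series(stack)
```

The point of the sketch is the volume reduction the abstract describes: a stack of N input scenes collapses to a fixed number of statistic layers, regardless of how large N grows.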
Abstract: Current global urbanisation processes are leading to new forms of massive urban constellations. The conceptualisations and classifications of these, however, are often ambiguous, overlap, or lag behind in the scientific literature. This article examines whether there is a common denominator to define and delimit, and ultimately map, these new dimensions of cityscapes. In an extensive literature review we analysed and juxtaposed some of the most common concepts, such as megacity, megaregion, or megalopolis. We observed that many concepts are abstract or unspecific, and for those concepts for which physical parameters exist, the parameters are neither properly defined nor used in standardised ways. While the concepts understandably originate from various disciplines, the authors identify a need for more precise definition and use of parameters. We conclude that the spatial patterns of large urban areas often resemble each other considerably, but the definitions vary so widely that these differences may surpass any inconsistencies in the spatial delimitation process. In other words, today we have tools such as Earth observation data and Geographic Information Systems to parameterise cityscapes if clear definitions are provided; this appears not to be the case. The limiting factor when delineating large urban areas seems to be a commonly agreed ontology.
基金The work was supported by the European Commission’s H2020 CANDELA project under Grant Agreement No.776193.
Abstract: The increased number of free and open Sentinel satellite images has led to new applications of these data. Among them is the systematic classification of land cover/use types based on patterns of settlements or agriculture recorded in these images, in particular the identification and quantification of their temporal changes. In this paper, we present guidelines and practical examples of how to obtain rapid and reliable image patch labelling results, and how to validate them with data mining techniques for detecting these temporal changes and presenting them as classification maps and/or statistical analytics. This represents a new systematic validation approach for semantic image content verification. We focus on a number of different scenarios proposed by the user community using Sentinel data. From a large number of potential use cases, we selected three main cases, namely forest monitoring, flood monitoring, and macro-economics/urban monitoring.
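The abstract does not name the specific data-mining toolchain behind the patch labelling, so the sketch below illustrates only the first step such a workflow typically needs: tiling a scene into fixed-size patches and attaching a toy per-patch label rule. The function name and the threshold rule are hypothetical, chosen purely for illustration.

```python
import numpy as np

def extract_patches(image, patch_size):
    """Tile a 2-D image into non-overlapping square patches
    (edge remainders are dropped); returns the patch array and
    the (rows, cols) shape of the patch grid."""
    rows, cols = image.shape
    pr, pc = rows // patch_size, cols // patch_size
    trimmed = image[: pr * patch_size, : pc * patch_size]
    patches = (
        trimmed.reshape(pr, patch_size, pc, patch_size)
        .swapaxes(1, 2)
        .reshape(pr * pc, patch_size, patch_size)
    )
    return patches, (pr, pc)

# Toy "labelling": flag patches whose mean brightness exceeds a threshold
image = np.arange(64, dtype=float).reshape(8, 8)
patches, grid = extract_patches(image, 4)
labels = (patches.mean(axis=(1, 2)) > 31.5).reshape(grid)
```

In a real workflow, the thresholding step would be replaced by a trained classifier over patch features, and the resulting label grid is what gets rendered as a classification map.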
Funding: This work received funding from the German Federal Ministry of Transport and Digital Infrastructure (BMVI).
Abstract: This article presents and analyses the modular architecture and capabilities of CODE-DE (Copernicus Data and Exploitation Platform – Deutschland, www.code-de.org), the integrated German operational environment for accessing and processing Copernicus data and products, as well as the methodology used to establish and operate the system. CODE-DE has been online since March 2017 with access to Sentinel-1 and Sentinel-2 data, with Sentinel-3 data added shortly thereafter and Sentinel-5P data added in March 2019. These products are available to and accessed by 1,682 registered users as of March 2019. During this period, 654,895 products were downloaded and a global catalogue was continuously updated, featuring a data volume of 814 TB based on a rolling archive concept supported by a reload mechanism from a long-term archive. Since November 2017, the element for big data processing has been operational, where registered users can process and analyse data themselves, specifically assisted by methods for value-added product generation. Utilizing 195,467 core hours and 696,406 memory hours, 982,948 products for different applications were fully automatically generated in the cloud environment and made available as of March 2019. Special features include an improved visualization of available Sentinel-2 products, which are presented within the catalogue client at full 10 m resolution.
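The rolling archive with a long-term reload mechanism is described only at concept level in the abstract. A minimal sketch of that concept, assuming first-in-first-out eviction and a callable fallback to the long-term archive, might look as follows; the class and method names are invented for illustration and do not reflect the actual CODE-DE implementation.

```python
from collections import OrderedDict

class RollingArchive:
    """Fixed-capacity online archive: the oldest ingested products are
    evicted first, and cache misses fall back to a (slower) reload
    from a long-term archive."""

    def __init__(self, capacity, long_term_fetch):
        self.capacity = capacity
        self.fetch = long_term_fetch  # callable: product_id -> data
        self.store = OrderedDict()

    def get(self, product_id):
        if product_id in self.store:
            return self.store[product_id]
        data = self.fetch(product_id)  # reload from long-term archive
        self.store[product_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict oldest product
        return data

# Toy usage: capacity of two products, long-term fetch simulated by a lambda
archive = RollingArchive(2, lambda pid: "data:" + pid)
for pid in ("S1A_001", "S2B_002", "S3A_003"):
    archive.get(pid)
```

After the three requests above, the oldest product has rolled out of the online store, but a later request for it is still served transparently via the long-term reload path, which is the behaviour the abstract attributes to the CODE-DE catalogue.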