Background: As the global novel coronavirus pneumonia (NCP) situation remains severe, elderly people are at high risk for both NCP and osteoporotic vertebral compression fractures, with high rates of complications and mortality. Treating patients while protecting medical staff from infection and strictly preventing clustered transmission events in the hospital requires well-designed pre-hospital emergency measures and an infection prevention and control strategy as the first element of success. Objective: To establish a diagnosis, treatment, and infection protection strategy for patients with osteoporotic vertebral compression fractures (OVCF) undergoing minimally invasive percutaneous kyphoplasty (PKP) surgery during COVID-19 prevention and control, so as to ensure stable, orderly, and safe medical treatment. Methods: A total of 583 OVCF patients were admitted to the First Affiliated Hospital of Hebei North University during the epidemic prevention and control period from January 2020 to July 2022. After strict standardized emergency and outpatient screening, 382 patients met the inclusion criteria, including 112 males and 270 females, aged 70.50 ± 5.49 years. The preoperative visual analogue scale (VAS) score was 6.92 ± 1.86, the preoperative Oswestry disability index (ODI) was 74.67 ± 4.84, and the satisfaction rate was (45.89 ± 3.67)%. According to the clinical diagnostic criteria and classification, 367 patients were diagnosed with ordinary OVCF, including 156 cases of mild compression and 226 cases of moderate compression. The 15 OVCF patients diagnosed with COVID-19 were clinically classified as type I, including 10 cases of mild COVID-19 and 5 cases of common COVID-19. All patients were treated with PKP.
Results: All patients were followed up at 1 day, 1 month, and 3 months after the operation. VAS (2.01 ± 0.56, 0.45 ± 0.11, 0 ± 0), ODI (45.41 ± 4.15, 10.22 ± 2.73, 4.03 ± 1.57), and satisfaction (90.12%, 95.57%, 99.23%) were significantly improved compared with the preoperative values (p < 0.05), and pre-existing medical conditions were not aggravated. In this group, the 15 OVCF patients diagnosed with COVID-19 first received COVID-19 treatment under strict level-three protection in the designated isolation ward; PKP was carried out after their condition stabilized, and all areas, items, and personnel in contact with these patients during the perioperative period were strictly and thoroughly disinfected. The patients had good prognoses, with no complications, no cross-infection in the hospital, and no infections among medical staff. Conclusions: Implementing this diagnosis, treatment, and infection protection strategy for OVCF patients undergoing minimally invasive PKP surgery during COVID-19 prevention and control helps prevent the spread of infection, improve the cure rate, promote rapid recovery, and reduce complications and mortality.
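The pre- versus post-operative comparison reported above (p < 0.05) is typically made with a paired test on each patient's change score. As a hedged illustration, assuming a paired t-test was the method used (the abstract does not name the test), and using invented VAS scores rather than the study's patient data:

```python
# Hypothetical sketch of a paired t-test on pre- vs post-operative VAS
# scores. All values below are invented for illustration only.
import math
import statistics


def paired_t(pre, post):
    """Return the paired t statistic for two matched samples."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation (n - 1)
    return mean_d / (sd_d / math.sqrt(n))


# Invented example: VAS before surgery vs. 1 day after, 8 patients.
pre_vas = [7, 8, 6, 9, 7, 6, 8, 7]
post_vas = [2, 3, 2, 2, 1, 2, 3, 2]
t_stat = paired_t(pre_vas, post_vas)
print(round(t_stat, 2))
```

A large positive t statistic on the per-patient differences is what supports a claim like "significantly improved (p < 0.05)" once compared against the t distribution with n - 1 degrees of freedom.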
Important information pertaining to emergencies and emergency responses is often distributed across numerous Internet sites. In the event of a disaster like an earthquake, rapid access to such information is critical. At such moments the general public usually has a hard time navigating through numerous sites to retrieve and integrate information, and this may severely affect our capability to make critical decisions in a timely manner. Common earthquake mashups often lack relevant information, such as the locations of first responders and routing to important facilities (e.g. hospitals and fire stations), that could save valuable time and lives. To address these challenges, we developed an Earthquake Information Mashup prototype. The prototype demonstrates a mashup approach to providing a Web visualization of real-time earthquake monitoring and complementary information, such as traffic conditions, the locations of important facilities, and routing to them. It also offers users the ability to report local conditions. Users are thus able to better integrate information from various near real-time sources, obtain better situational awareness, and make smarter, better-informed critical decisions.
In recent years, our world has experienced significant disruptions due to the COVID-19 pandemic and Russia's 2022 invasion of Ukraine, impacting human activities and the global environment. This paper explored air quality changes in Ukraine due to COVID-19 and Russia's invasion of Ukraine using an on-demand, what-you-see-is-what-you-get approach. During the COVID-19 pandemic, strict quarantine policies in Ukraine led to a 2% reduction in tropospheric NO₂ concentration before the lockdown and 4% during the lockdown period. Cities like Kyiv, Donetsk, and Dnipro exhibited reductions of 5%, 11%, and 16%, respectively. The total SO₂ column concentration decreased by 6% before the lockdown and 2.5% during the lockdown period, except in high-population-density areas. Kyiv showed the highest reduction of 17% in SO₂ concentration, while Donetsk and Dnipro exhibited an 11% reduction. However, during the Russian invasion, there was a significant increase in tropospheric NO₂ concentration in heavily destroyed Kharkiv, while most eastern regions experienced a reduction. The total SO₂ column was 48% higher before the war but was reduced throughout the country after the war began, except in Kyiv and a few central regions. These findings can contribute to analyzing air pollution and building digital twin simulations for future reconstruction scenarios.
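The percentage reductions quoted above follow from a simple relative-change calculation between period means. A minimal sketch with invented column-amount values (not the paper's satellite retrievals):

```python
# Illustrative sketch of the percent-change calculation behind figures
# like "a 4% reduction in NO2 during lockdown": compare the mean of a
# baseline window with the mean of the lockdown window. Values invented.
from statistics import mean


def percent_change(baseline, period):
    """Percent change of `period` relative to `baseline` (negative = reduction)."""
    b, p = mean(baseline), mean(period)
    return (p - b) / b * 100.0


baseline_no2 = [100.0, 104.0, 98.0, 102.0]  # hypothetical column amounts
lockdown_no2 = [97.0, 95.0, 96.0, 96.0]
print(f"{percent_change(baseline_no2, lockdown_no2):+.1f}%")
```

The same comparison, applied per city or per region, yields the city-level reductions reported in the abstract.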
With the advancement of Artificial Intelligence (AI) technologies and the accumulation of big Earth data, Deep Learning (DL) has become an important method for discovering patterns and understanding Earth science processes in the past several years. While successful in many Earth science areas, AI/DL applications are often challenging for computing devices. In recent years, Graphics Processing Unit (GPU) devices have been leveraged to speed up AI/DL applications, yet computational performance still poses a major barrier for DL-based Earth science applications. To address these computational challenges, we selected five existing sample Earth science AI applications, revised the DL-based models/algorithms, and tested the performance of multiple GPU computing platforms in supporting the applications. Application software packages, performance comparisons across different platforms, and other results are summarized. This article can help readers understand how various AI/ML Earth science applications can be supported by GPU computing, help researchers in the Earth science domain better adopt GPU computing (such as Supermicro machines, GPU clusters, and cloud computing-based platforms) for their AI/ML applications, and help them optimize their science applications to better leverage the computing devices.
The geospatial sciences face grand information technology (IT) challenges in the twenty-first century: data intensity, computing intensity, concurrent access intensity, and spatiotemporal intensity. These challenges require the readiness of a computing infrastructure that can: (1) better support discovery, access, and utilization of data and data processing so as to relieve scientists and engineers of IT tasks and let them focus on scientific discoveries; (2) provide real-time IT resources to enable real-time applications, such as emergency response; (3) deal with access spikes; and (4) provide more reliable and scalable service for massive numbers of concurrent users to advance public knowledge. The emergence of cloud computing provides a potential solution with an elastic, on-demand computing platform to integrate observation systems, parameter-extracting algorithms, phenomena simulations, analytical visualization, and decision support, and to provide social impact and user feedback, the essential elements of the geospatial sciences. We discuss the utilization of cloud computing to support the intensities of the geospatial sciences by reporting from our investigations on how cloud computing could enable the geospatial sciences and how spatiotemporal principles, the kernel of the geospatial sciences, could be utilized to ensure the benefits of cloud computing. Four research examples are presented to analyze how to: (1) search, access, and utilize geospatial data; (2) configure computing infrastructure to enable the computability of intensive simulation models; (3) disseminate and utilize research results for massive numbers of concurrent users; and (4) adopt spatiotemporal principles to support spatiotemporally intensive applications. The paper concludes with a discussion of opportunities and challenges for spatial cloud computing (SCC).
Big Data has emerged in the past few years as a new paradigm providing abundant data and opportunities to improve and/or enable research and decision-support applications with unprecedented value for digital earth applications, including business, sciences, and engineering. At the same time, Big Data presents challenges for digital earth in storing, transporting, processing, mining, and serving the data. Cloud computing provides fundamental support for addressing these challenges with shared computing resources, including computing, storage, networking, and analytical software; the application of these resources has fostered impressive Big Data advancements. This paper surveys the two frontiers, Big Data and cloud computing, and reviews the advantages and consequences of utilizing cloud computing to tackle Big Data in the digital earth and relevant science domains. From the aspects of a general introduction, sources, challenges, technology status, and research opportunities, the following observations are offered: (i) cloud computing and Big Data enable science discoveries and application developments; (ii) cloud computing provides major solutions for Big Data; (iii) Big Data, spatiotemporal thinking, and various application domains drive the advancement of cloud computing and relevant technologies with new requirements; (iv) intrinsic spatiotemporal principles of Big Data and the geospatial sciences provide the source for finding technical and theoretical solutions to optimize cloud computing and the processing of Big Data; (v) the open availability of Big Data and processing capability poses social challenges of geospatial significance; and (vi) a weave of innovations is transforming Big Data into geospatial research, engineering, and business values. This review introduces future innovations and a research agenda for cloud computing supporting the transformation of the volume, velocity, variety, and veracity of Big Data into values for local to global digital earth science and applications.
Earth observations and model simulations are generating big multidimensional array-based raster data. However, it is difficult to efficiently query these big raster data due to the inconsistency among the geospatial raster data model, the distributed physical data storage model, and the data pipeline in distributed computing frameworks. To efficiently process big geospatial data, this paper proposes a three-layer hierarchical indexing strategy to optimize Apache Spark with the Hadoop Distributed File System (HDFS) from the following aspects: (1) improve I/O efficiency by adopting a chunking data structure; (2) keep the workload balanced and data locality high by building a global index (k-d tree); (3) enable Spark and HDFS to natively support geospatial raster data formats (e.g., HDF4, NetCDF4, GeoTIFF) by building a local index (hash table); (4) index the in-memory data to further improve geospatial data queries; and (5) develop a data repartition strategy to tune the query parallelism while keeping data locality high. The above strategies are implemented by developing customized RDDs and evaluated by comparing their performance with that of Spark SQL and SciSpark. The proposed indexing strategy can be applied to other distributed frameworks or cloud-based computing systems to natively support big geospatial data queries with high efficiency.
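The chunking idea in aspect (1) can be illustrated independently of Spark: each chunk records its bounding box, and a spatial query reads only the intersecting chunks, cutting I/O. This is a hypothetical pure-Python sketch, not the paper's customized RDD implementation:

```python
# Sketch of chunk-level spatial pruning: a query window touches only
# chunks whose bounding boxes intersect it. Chunk layout is invented.


def intersects(a, b):
    """Axis-aligned boxes given as (min_lon, min_lat, max_lon, max_lat)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])


def chunks_for_query(chunk_index, query_bbox):
    """Return ids of chunks whose bounding boxes intersect the query box."""
    return [cid for cid, bbox in chunk_index.items() if intersects(bbox, query_bbox)]


# Hypothetical 2x2 chunking of a global [-180, 180] x [-90, 90] raster.
index = {
    "c00": (-180.0, -90.0, 0.0, 0.0),
    "c01": (-180.0, 0.0, 0.0, 90.0),
    "c10": (0.0, -90.0, 180.0, 0.0),
    "c11": (0.0, 0.0, 180.0, 90.0),
}
print(chunks_for_query(index, (10.0, 20.0, 30.0, 40.0)))
```

In the paper's design this filtering happens inside the global k-d tree index, and the local hash-table index then locates the byte ranges of the selected chunks within each HDF4/NetCDF4/GeoTIFF file.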
Big Earth data are produced from satellite observations, the Internet of Things, model simulations, and other sources. The data embed unprecedented insights and spatiotemporal stamps of relevant Earth phenomena for improving our understanding of, response to, and solutions for challenges in the Earth sciences and their applications. In the past years, new technologies (such as cloud computing, big data, and artificial intelligence) have gained momentum in addressing the challenges of using big Earth data for scientific studies and geospatial applications that were historically intractable. This paper reviews big Earth data analytics from several aspects to capture the latest advancements in this fast-growing domain. We first introduce the concepts of big Earth data. The architecture, various functionalities, and supporting modules are then reviewed from a generic methodological perspective. Analytical methods supporting the functionalities are surveyed and analyzed in the context of different tools. The driving questions are exemplified through cutting-edge Earth science research and applications. A list of challenges and opportunities is proposed for different stakeholders to collaboratively advance big Earth data analytics in the near future.
The sudden outbreak of the Coronavirus disease (COVID-19) swept across the world in early 2020, triggering the lockdowns of several billion people across many countries, including China, Spain, India, the U.K., Italy, France, Germany, Brazil, Russia, and the U.S. The transmission of the virus accelerated rapidly, with the most confirmed cases in the U.S., India, Russia, and Brazil. In response to this national and global emergency, the NSF Spatiotemporal Innovation Center brought together a taskforce of international researchers and assembled implementation strategies to rapidly respond to this crisis, supporting research, saving lives, and protecting the health of global citizens. This perspective paper presents our collective view on the global health emergency and our effort in collecting, analyzing, and sharing relevant data on global policy and government responses, human mobility, environmental impact, and socioeconomic impact; in developing research capabilities and mitigation measures with global scientists; in promoting collaborative research on outbreak dynamics; and in reflecting on the dynamic responses from human societies.
This paper introduces a new concept, distributed geospatial information processing (DGIP), which refers to the processing of geospatial information residing on computers geographically dispersed and connected through computer networks, and the contribution of DGIP to Digital Earth (DE). DGIP plays a critical role in integrating widely distributed geospatial resources to support the DE envisioned to utilise a wide variety of information. This paper addresses this role from three different aspects: 1) sharing Earth data, information, and services through geospatial interoperability supported by standardisation of contents and interfaces; 2) sharing computing and software resources through a GeoCyberinfrastructure supported by DGIP middleware; and 3) sharing knowledge within and across domains through ontology and semantic searches. Observing the long-term process for the research and development of an operational DE, we discuss and anticipate some practical contributions of DGIP to the DE.
Social media platforms have been contributing to disaster management during the past several years. Text mining solutions using traditional machine learning techniques have been developed to categorize messages into different themes, such as caution and advice, to better understand their meaning and leverage useful information from social media text content. However, these methods are mostly event specific and difficult to generalize for cross-event classification. In other words, traditional classification models trained on historic datasets are not capable of categorizing social media messages from a future event. This research examines the capability of a convolutional neural network (CNN) model in cross-event Twitter topic classification based on three geo-tagged Twitter datasets collected during Hurricanes Sandy, Harvey, and Irma. The performance of the CNN model is compared to two traditional machine learning methods: support vector machine (SVM) and logistic regression (LR). Experiment results showed that CNN models achieved consistently better accuracy in both single-event and cross-event evaluation scenarios, whereas SVM and LR models had lower accuracy compared to their own single-event results. This indicates that a CNN model can be pre-trained on Twitter data from past events to classify messages for an upcoming event for situational awareness.
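The core operation that helps a text CNN generalize across events is a filter sliding over consecutive word embeddings with max-over-time pooling, so an informative phrase fires regardless of its position in the message. A toy pure-Python sketch with invented 2-d embeddings (a real model like the paper's would use learned embeddings and many filters):

```python
# Toy illustration of one channel of a text CNN: a bigram filter slides
# over the word-embedding sequence; ReLU plus max-over-time pooling keeps
# the strongest response. Embeddings and filter weights are invented.


def conv1d_max(embeddings, filt, bias=0.0):
    """Slide a window-sized filter over the word embeddings and return
    the max-pooled activation (clamped at zero, i.e. ReLU)."""
    win = len(filt)            # filter spans `win` consecutive words
    dim = len(filt[0])         # embedding dimensionality
    best = 0.0
    for start in range(len(embeddings) - win + 1):
        s = bias
        for i in range(win):
            for d in range(dim):
                s += filt[i][d] * embeddings[start + i][d]
        best = max(best, s)    # ReLU + max-over-time pooling
    return best


# Hypothetical 2-d embeddings for a 4-word message and one bigram filter.
sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
bigram_filter = [[0.5, 0.0], [0.0, 0.5]]
print(conv1d_max(sentence, bigram_filter))
```

A classifier then stacks many such pooled activations into a feature vector and feeds it to a softmax layer; because pooling discards position, the same filter can respond to a hazard phrase in a Sandy tweet or an Irma tweet alike.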
Global challenges (such as the economy and natural hazards) and technology advancements have triggered international leaders and organizations to rethink the geosciences and Digital Earth in the new decade. The next-generation visions pose grand challenges for infrastructure, especially computing infrastructure. The gradual establishment of cloud computing as a primary infrastructure provides new capabilities to meet the challenges. This paper reviews research conducted using cloud computing to address geoscience and Digital Earth needs within the context of an integrated Earth system. We also introduce the five papers selected through a rigorous review process as exemplar research in using cloud capabilities to address the challenges. The literature and research demonstrate that spatial cloud computing provides unprecedented new capabilities to enable Digital Earth and the geosciences in the twenty-first century in several aspects: (1) virtually unlimited computing power for addressing big data storage, sharing, processing, and knowledge discovery challenges; (2) elastic, flexible, and easy-to-use computing infrastructure to facilitate the building of the next-generation geospatial cyberinfrastructure, CyberGIS, CloudGIS, and Digital Earth; (3) a seamless integration environment that enables mashing up observations, data, models, problems, and citizens; (4) research opportunities triggered by global challenges that may lead to breakthroughs in relevant fields, including infrastructure building, GIScience, computer science, and the geosciences; and (5) collaboration, supported by cloud computing, across science domains, agencies, and countries to collectively address global challenges from the policy, management, system engineering, acquisition, and operation aspects.
The advancements of sensing technologies, including remote sensing, in situ sensing, social sensing, and health sensing, have tremendously improved our capability to observe and record natural and social phenomena, such as natural disasters, presidential elections, and infectious diseases. The observations have provided an unprecedented opportunity to better understand and respond to the spatiotemporal dynamics of the environment, urban settings, health and disease propagation, business decisions, and crisis and crime. Spatiotemporal event detection serves as a gateway to this better understanding by detecting events that represent the abnormal status of relevant phenomena. This paper reviews the literature on different sensing capabilities, spatiotemporal event extraction methods, and categories of applications for the detected events. The novelty of this review is to revisit the definition and requirements of event detection and to lay out the overall workflow (from sensing and event extraction methods to the operations and decision-supporting processes based on the extracted events) as an agenda for future event detection research. Guidance is presented on the current challenges to this research agenda, and future directions are discussed for conducting spatiotemporal event detection in the era of big data, advanced sensing, and artificial intelligence.
The simulation and potential forecasting of dust storms are of significant interest to public health and the environmental sciences. Dust storms have interannual variabilities and are typical disruptive events. The computing platform for an operational dust storm forecasting system should support this disruptive fashion by scaling up to enable high-resolution forecasting and massive public access when dust storms come, and scaling down when no dust storm events occur to save energy and costs. With the capability of providing a large, elastic, and virtualized pool of computational resources, cloud computing has become a new and advantageous computing paradigm for resolving scientific problems that traditionally required a large-scale, high-performance cluster. This paper examines the viability of cloud computing to support dust storm forecasting. Through a holistic study systematically comparing cloud computing using Amazon EC2 to a traditional high-performance computing (HPC) cluster, we find that cloud computing is emerging as a credible solution for: (1) supporting dust storm forecasting by spinning up a large group of computing resources in a few minutes to satisfy the disruptive computing requirements of dust storm forecasting; (2) performing high-resolution dust storm forecasting when required; (3) supporting concurrent computing requirements; (4) supporting real dust storm event forecasting for a large geographic domain, using the recent dust storm event in Phoenix on 5 July 2011 as an example; and (5) reducing cost by maintaining low computing support when there are no dust storm events while invoking a large amount of computing resources to perform high-resolution forecasting and respond to a large number of concurrent public accesses.
Under the global health crisis of COVID-19, timely and accurate epidemic data are important for observing, monitoring, analyzing, modeling, predicting, and mitigating impacts. Viral case data can be jointly analyzed with relevant factors for various applications in the context of the pandemic. Current COVID-19 case data are scattered across a variety of data sources, which may suffer from low data quality accompanied by inconsistent data structures. To address this shortcoming, a multi-scale spatiotemporal data product is proposed as a public repository platform, based on a spatiotemporal cube, that allows the integration of different data sources by adopting various data standards. Within the spatiotemporal cube, a comprehensive data processing workflow gathers disparate COVID-19 epidemic datasets at the global, national, provincial/state, county, and city levels. The proposed framework is supported by automatic updates at a 2-h frequency and a crowdsourcing validation team to produce and update data on a daily time step. This rapid-response dataset allows the integration of other relevant socioeconomic and environmental factors for spatiotemporal analysis. The data are available on the Harvard Dataverse platform (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/8HGECN) and in a GitHub open source repository (https://github.com/stccenter/COVID-19-Data).
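The multi-scale roll-up a spatiotemporal cube enables can be sketched in a few lines: finer-scale records aggregate upward to coarser scales along the spatial hierarchy for each time step. The records below are invented, not the repository's data:

```python
# Minimal sketch of the county -> state -> national roll-up inside a
# spatiotemporal cube. All case counts below are invented.
from collections import defaultdict

# (date, state, county) -> confirmed cases
county_cases = {
    ("2020-04-01", "VA", "Fairfax"): 120,
    ("2020-04-01", "VA", "Loudoun"): 80,
    ("2020-04-01", "MD", "Montgomery"): 150,
}

state_cases = defaultdict(int)
national_cases = defaultdict(int)
for (date, state, _county), n in county_cases.items():
    state_cases[(date, state)] += n  # county -> state scale
    national_cases[date] += n        # state -> national scale

print(state_cases[("2020-04-01", "VA")], national_cases["2020-04-01"])
```

Because every record carries the same date key, the aggregated levels stay aligned in time, which is what lets socioeconomic and environmental factors be joined at whichever scale an analysis needs.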
Current search engines in most geospatial data portals tend to induce users to focus on a single data-characteristic dimension (e.g. popularity or release date). This approach largely fails to take account of users' multidimensional preferences for geospatial data, and hence may result in a less than optimal user experience in discovering the most applicable dataset. This study reports a machine learning framework to address the ranking challenge, the fundamental obstacle in geospatial data discovery, by (1) identifying a number of ranking features of geospatial data to represent users' multidimensional preferences, considering semantics, user behavior, spatial similarity, and static dataset metadata attributes; (2) applying a machine learning method to automatically learn a ranking function; and (3) proposing a system architecture that combines existing search-oriented open source software, a semantic knowledge base, ranking feature extraction, and a machine learning algorithm. Results show that the machine learning approach outperforms other methods in terms of both precision at K and normalized discounted cumulative gain. As an early attempt at utilizing machine learning to improve search ranking in the geospatial domain, we expect this work to set an example for further research and open the door towards intelligent geospatial data discovery.
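Normalized discounted cumulative gain, the second evaluation metric named above, rewards placing highly relevant datasets near the top of the result list: graded relevance is discounted by log2(rank + 1) and normalized by the ideal ordering. A small sketch with invented relevance grades:

```python
# Sketch of NDCG@K: graded relevance of ranked results, discounted by
# position and normalized by the ideal ranking. Grades are invented.
import math


def dcg_at_k(relevances, k):
    """Discounted cumulative gain of the first k results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))


def ndcg_at_k(relevances, k):
    """DCG normalized by the DCG of the ideal (sorted) ordering."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0


# Relevance grades of the top results from a hypothetical ranker
# (3 = highly relevant dataset, 0 = irrelevant).
ranked = [3, 2, 0, 1]
print(round(ndcg_at_k(ranked, 4), 3))
```

An NDCG of 1.0 means the learned ranking function ordered the datasets exactly as a user's graded preferences would; swapping a relevant dataset below an irrelevant one lowers the score, which is how the metric captures multidimensional preference quality.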
A spatial web portal (SWP) provides a web-based gateway to discover, access, manage, and integrate worldwide geospatial resources through the Internet, with access patterns characterized by regional-to-global interest and spiking demand. Although various technologies have been adopted to improve SWP performance, enabling high-speed resource access for global users to better support Digital Earth remains challenging because of the computing and communication intensities of SWP operation and the dynamic distribution of end users. This paper proposes a cloud-enabled framework for high-speed SWP access by leveraging elastic resource pooling, dynamic workload balancing, and global deployment. Experimental results demonstrate that the new SWP framework outperforms the traditional computing infrastructure and better supports users of a global system such as Digital Earth. The reported methodologies and framework can be adopted to support operational geospatial systems, such as those monitoring national geographic conditions and spanning regional to global geographic extents.
Funding: Supported by NSF I/UCRC and START programs (1841520), NASA Goddard CISTO, and NASA AIST programs. This research was, in part, carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
Abstract: In recent years, our world has experienced significant disruptions due to the COVID-19 pandemic and Russia's 2022 invasion of Ukraine, impacting human activities and the global environment. This paper explored air quality changes in Ukraine due to COVID-19 and Russia's invasion of Ukraine using an on-demand, what-you-see-is-what-you-get approach. During the COVID-19 pandemic, strict quarantine policies in Ukraine led to a 2% reduction in tropospheric NO_(2) concentration before the lockdown and 4% during the lockdown period. Cities like Kyiv, Donetsk, and Dnipro exhibited reductions of 5%, 11%, and 16%, respectively. Total SO_(2) column concentration decreased by 6% before the lockdown and 2.5% during the lockdown period, except in high population density areas. Kyiv showed the highest reduction of 17% in SO_(2) concentration, while Donetsk and Dnipro exhibited an 11% reduction. However, during the Russian invasion, there was a significant increase in tropospheric NO_(2) concentration in heavily destroyed Kharkiv, while most eastern regions experienced a reduction. The total SO_(2) column was 48% higher before the war but reduced throughout the country after the war, except in Kyiv and a few central regions. These findings can contribute to analyzing air pollution and building digital twin simulations for future reconstruction scenarios.
Funding: Supported by NSF I/UCRC (1841520), NASA Goddard CISTO, and NASA AIST programs.
Abstract: With the advancement of Artificial Intelligence (AI) technologies and the accumulation of big Earth data, Deep Learning (DL) has become an important method to discover patterns and understand Earth science processes in the past several years. While successful in many Earth science areas, AI/DL applications are often challenging for computing devices. In recent years, Graphics Processing Unit (GPU) devices have been leveraged to speed up AI/DL applications, yet computational performance still poses a major barrier for DL-based Earth science applications. To address these computational challenges, we selected five existing sample Earth science AI applications, revised the DL-based models/algorithms, and tested the performance of multiple GPU computing platforms to support the applications. Application software packages, performance comparisons across different platforms, and other results are summarized. This article can help readers understand how various AI/ML Earth science applications can be supported by GPU computing, help researchers in the Earth science domain better adopt GPU computing (such as Supermicro servers, GPU clusters, and cloud-based platforms) for their AI/ML applications, and help them optimize their science applications to better leverage the computing devices.
Funding: We thank Drs. Huadong Guo and Changlin Wang for inviting us to write this definition and field review paper. Research reported is partially supported by NASA (NNX07AD99G and SMD-09-1448), FGDC (G09AC00103), and the Environmental Informatics Framework of the Earth, Energy, and Environment Program at Microsoft Research Connection. We thank insightful comments from reviewers including Dr. Aijun Chen (NASA/GMU), Dr. Thomas Huang (NASA JPL), Dr. Cao Kang (Clark Univ.), Krishna Kumar (Microsoft), Dr. Wenwen Li (UCSB), Dr. Michael Peterson (University of Nebraska-Omaha), Dr. Xuan Shi (Georgia Tech), Dr. Tong Zhang (Wuhan University), Jinesh Varia (Amazon), and an anonymous reviewer. This paper is a result of collaborations/discussions with colleagues from NASA, FGDC, USGS, EPA, GSA, Microsoft, ESIP, AAG CISG, CPGIS, UCGIS, GEO, and ISDE.
Abstract: The geospatial sciences face grand information technology (IT) challenges in the twenty-first century: data intensity, computing intensity, concurrent access intensity, and spatiotemporal intensity. These challenges require the readiness of a computing infrastructure that can: (1) better support discovery, access, and utilization of data and data processing so as to relieve scientists and engineers of IT tasks and let them focus on scientific discoveries; (2) provide real-time IT resources to enable real-time applications, such as emergency response; (3) deal with access spikes; and (4) provide more reliable and scalable service for massive numbers of concurrent users to advance public knowledge. The emergence of cloud computing provides a potential solution with an elastic, on-demand computing platform to integrate observation systems, parameter extracting algorithms, phenomena simulations, analytical visualization, and decision support, and to provide social impact and user feedback, the essential elements of the geospatial sciences. We discuss the utilization of cloud computing to support the intensities of the geospatial sciences by reporting from our investigations on how cloud computing could enable the geospatial sciences and how spatiotemporal principles, the kernel of the geospatial sciences, could be utilized to ensure the benefits of cloud computing. Four research examples are presented to analyze how to: (1) search, access, and utilize geospatial data; (2) configure computing infrastructure to enable the computability of intensive simulation models; (3) disseminate and utilize research results for massive numbers of concurrent users; and (4) adopt spatiotemporal principles to support spatiotemporally intensive applications. The paper concludes with a discussion of opportunities and challenges for spatial cloud computing (SCC).
Funding: NASA AIST Program [NNX15AM85G], NCCS [NNG14HH38I], Goddard [NNG16PU001], NSF I/UCRC [1338925], EarthCube [ICER-1540998], CNS [1117300], Microsoft, Amazon, Northrop Grumman, Harris, and United Nations.
Abstract: Big Data has emerged in the past few years as a new paradigm providing abundant data and opportunities to improve and/or enable research and decision-support applications with unprecedented value for digital earth applications, including business, sciences, and engineering. At the same time, Big Data presents challenges for digital earth to store, transport, process, mine, and serve the data. Cloud computing provides fundamental support to address the challenges with shared computing resources, including computing, storage, networking, and analytical software; the application of these resources has fostered impressive Big Data advancements. This paper surveys the two frontiers, Big Data and cloud computing, and reviews the advantages and consequences of utilizing cloud computing to tackle Big Data in the digital earth and relevant science domains. From the aspects of a general introduction, sources, challenges, technology status, and research opportunities, the following observations are offered: (i) cloud computing and Big Data enable science discoveries and application developments; (ii) cloud computing provides major solutions for Big Data; (iii) Big Data, spatiotemporal thinking, and various application domains drive the advancement of cloud computing and relevant technologies with new requirements; (iv) intrinsic spatiotemporal principles of Big Data and the geospatial sciences provide the source for finding technical and theoretical solutions to optimize cloud computing and the processing of Big Data; (v) open availability of Big Data and processing capability pose social challenges of geospatial significance; and (vi) a weave of innovations is transforming Big Data into geospatial research, engineering, and business values. This review introduces future innovations and a research agenda for cloud computing supporting the transformation of the volume, velocity, variety, and veracity of Big Data into values for local to global digital earth science and applications.
Funding: This research is funded by NASA (National Aeronautics and Space Administration) NCCS and AIST (NNX15AM85G) and NSF I/UCRC, CSSI, and EarthCube Programs (1338925 and 1835507).
Abstract: Earth observations and model simulations are generating big multidimensional array-based raster data. However, it is difficult to efficiently query these big raster data due to the inconsistency among the geospatial raster data model, the distributed physical data storage model, and the data pipeline in distributed computing frameworks. To efficiently process big geospatial data, this paper proposes a three-layer hierarchical indexing strategy to optimize Apache Spark with the Hadoop Distributed File System (HDFS) from the following aspects: (1) improve I/O efficiency by adopting a chunked data structure; (2) keep the workload balanced with high data locality by building a global index (k-d tree); (3) enable Spark and HDFS to natively support geospatial raster data formats (e.g., HDF4, NetCDF4, GeoTIFF) by building a local index (hash table); (4) index the in-memory data to further improve geospatial data queries; and (5) develop a data repartition strategy to tune the query parallelism while keeping high data locality. The above strategies are implemented by developing customized RDDs and evaluated by comparing their performance with that of Spark SQL and SciSpark. The proposed indexing strategy can be applied to other distributed frameworks or cloud-based computing systems to natively support big geospatial data queries with high efficiency.
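The chunking and hash-table local index ideas in points (1) and (3) can be illustrated outside Spark in plain NumPy. This is a minimal single-machine sketch, not the paper's RDD implementation; the chunk size and function names are assumptions:

```python
import numpy as np

CHUNK = 4  # chunk edge length (assumed for illustration)

def build_chunk_index(arr, chunk=CHUNK):
    """Split a 2-D raster into fixed-size chunks and build a hash-table
    index mapping chunk coordinates -> chunk data (the 'local index')."""
    index = {}
    for i in range(0, arr.shape[0], chunk):
        for j in range(0, arr.shape[1], chunk):
            index[(i // chunk, j // chunk)] = arr[i:i + chunk, j:j + chunk]
    return index

def query_bbox(index, r0, r1, c0, c1, chunk=CHUNK):
    """Assemble only the chunks intersecting the query window [r0:r1, c0:c1],
    then trim to the exact extent, avoiding a full-array scan."""
    rows = []
    for ci in range(r0 // chunk, (r1 - 1) // chunk + 1):
        blocks = [index[(ci, cj)]
                  for cj in range(c0 // chunk, (c1 - 1) // chunk + 1)]
        rows.append(np.hstack(blocks))
    mosaic = np.vstack(rows)
    # offsets of the query window inside the assembled chunk mosaic
    oi = r0 - (r0 // chunk) * chunk
    oj = c0 - (c0 // chunk) * chunk
    return mosaic[oi:oi + (r1 - r0), oj:oj + (c1 - c0)]
```

In the distributed setting the hash table would map chunk keys to byte offsets inside HDF4/NetCDF4/GeoTIFF files rather than to in-memory arrays, and the k-d tree global index would decide which worker holds each chunk.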
Funding: This work was supported by the National Science Foundation [OAC-1835507 and IIP-1841520].
Abstract: Big Earth data are produced from satellite observations, the Internet of Things, model simulations, and other sources. The data embed unprecedented insights and spatiotemporal stamps of relevant Earth phenomena for improving our understanding of, responding to, and addressing the challenges of Earth sciences and applications. In the past years, new technologies (such as cloud computing, big data, and artificial intelligence) have gained momentum in addressing the challenges of using big Earth data for scientific studies and geospatial applications that were historically intractable. This paper reviews big Earth data analytics from several aspects to capture the latest advancements in this fast-growing domain. We first introduce the concepts of big Earth data. The architecture, various functionalities, and supporting modules are then reviewed from a generic methodology aspect. Analytical methods supporting the functionalities are surveyed and analyzed in the context of different tools. The driving questions are exemplified through cutting-edge Earth science research and applications. A list of challenges and opportunities is proposed for different stakeholders to collaboratively advance big Earth data analytics in the near future.
Funding: NSF (1841520, 1835507, 1832465, 2028791, and 2025783) and the NSF Spatiotemporal Innovation Center members.
Abstract: The sudden outbreak of the Coronavirus disease (COVID-19) swept across the world in early 2020, triggering the lockdowns of several billion people across many countries, including China, Spain, India, the U.K., Italy, France, Germany, Brazil, Russia, and the U.S. The transmission of the virus accelerated rapidly, with the most confirmed cases in the U.S., India, Russia, and Brazil. In response to this national and global emergency, the NSF Spatiotemporal Innovation Center brought together a taskforce of international researchers and assembled implementation strategies to rapidly respond to this crisis, supporting research, saving lives, and protecting the health of global citizens. This perspective paper presents our collective view on the global health emergency and our effort in collecting, analyzing, and sharing relevant data on global policy and government responses, human mobility, environmental impact, and socioeconomic impact; in developing research capabilities and mitigation measures with global scientists; in promoting collaborative research on outbreak dynamics; and in reflecting on the dynamic responses from human societies.
Funding: Supported by a Chinese 973 project (2006CB701306), a NASA Geosciences Interoperability project (NNX07AD99G), and an FGDC 2005 CAP award (05HQAG0115).
Abstract: This paper introduces a new concept, distributed geospatial information processing (DGIP), which refers to the processing of geospatial information residing on computers geographically dispersed and connected through computer networks, and the contribution of DGIP to Digital Earth (DE). DGIP plays a critical role in integrating widely distributed geospatial resources to support the DE envisioned to utilise a wide variety of information. This paper addresses this role from three different aspects: 1) sharing Earth data, information, and services through geospatial interoperability supported by standardisation of contents and interfaces; 2) sharing computing and software resources through a GeoCyberinfrastructure supported by DGIP middleware; and 3) sharing knowledge within and across domains through ontology and semantic searches. Observing the long-term process for the research and development of an operational DE, we discuss and anticipate some practical contributions of DGIP to DE.
Funding: Supported by the National Science Foundation [grant number IIP-1338925].
Abstract: Social media platforms have been contributing to disaster management during the past several years. Text mining solutions using traditional machine learning techniques have been developed to categorize messages into different themes, such as caution and advice, to better understand the meaning of and leverage useful information from social media text content. However, these methods are mostly event specific and difficult to generalize for cross-event classification. In other words, traditional classification models trained on historic datasets are not capable of categorizing social media messages from a future event. This research examines the capability of a convolutional neural network (CNN) model in cross-event Twitter topic classification based on three geo-tagged Twitter datasets collected during Hurricanes Sandy, Harvey, and Irma. The performance of the CNN model is compared to two traditional machine learning methods: support vector machine (SVM) and logistic regression (LR). Experiment results showed that CNN models achieved consistently better accuracy for both single-event and cross-event evaluation scenarios, whereas SVM and LR models had lower accuracy compared to their own single-event accuracy results. This indicates that the CNN model can be pre-trained on Twitter data from past events to classify messages for an upcoming event for situational awareness.
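The cross-event advantage usually attributed to CNN text classifiers comes from convolution filters acting as position-independent n-gram detectors, followed by max-pooling into a fixed-length feature vector. A minimal NumPy sketch of that core step (not the authors' model; the filter shapes and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_features(embeddings, filters):
    """One convolution layer plus global max-pooling.
    embeddings: (seq_len, emb_dim) token embeddings of one message
    filters:    (n_filters, width, emb_dim) learned n-gram detectors
    Returns one max-pooled ReLU activation per filter, so messages of
    any length map to the same fixed-size feature vector -- the property
    that lets filters trained on one hurricane's tweets fire on another's."""
    n_filters, width, _ = filters.shape
    feats = np.zeros(n_filters)
    for t in range(embeddings.shape[0] - width + 1):
        window = embeddings[t:t + width]            # an n-gram of tokens
        acts = np.einsum('fwd,wd->f', filters, window)
        feats = np.maximum(feats, np.maximum(acts, 0.0))  # ReLU + max-pool
    return feats
```

A real model would learn the filters by backpropagation and feed the pooled features into a softmax layer; the sketch only shows why the representation is length- and event-agnostic.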
Funding: Research is supported by the State Administration of Foreign Experts Affairs (20120464001), NSF (IIP-1160979 and CNS-1117300), FGDC (GeoCloud and GEOSS Clearinghouse), and Microsoft Research.
Abstract: Global challenges (such as the economy and natural hazards) and technology advancements have triggered international leaders and organizations to rethink the geosciences and Digital Earth in the new decade. The next-generation visions pose grand challenges for infrastructure, especially computing infrastructure. The gradual establishment of cloud computing as a primary infrastructure provides new capabilities to meet the challenges. This paper reviews research conducted using cloud computing to address geoscience and Digital Earth needs within the context of an integrated Earth system. We also introduce the five papers selected through a rigorous review process as exemplar research in using cloud capabilities to address the challenges. The literature and research demonstrate that spatial cloud computing provides unprecedented new capabilities to enable Digital Earth and the geosciences in the twenty-first century in several aspects: (1) virtually unlimited computing power for addressing big data storage, sharing, processing, and knowledge discovery challenges; (2) elastic, flexible, and easy-to-use computing infrastructure to facilitate the building of the next-generation geospatial cyberinfrastructure, CyberGIS, CloudGIS, and Digital Earth; (3) a seamless integration environment that enables mashing up observations, data, models, problems, and citizens; (4) research opportunities triggered by global challenges that may lead to breakthroughs in relevant fields, including infrastructure building, GIScience, computer science, and the geosciences; and (5) collaboration supported by cloud computing across science domains, agencies, and countries to collectively address global challenges from policy, management, system engineering, acquisition, and operation aspects.
Funding: Supported by NSF [CNS 1841520 and ACI 1835507], NASA Goddard [80NSSC19P2033], and the NSF Spatiotemporal I/UCRC IAB members.
Abstract: The advancements of sensing technologies, including remote sensing, in situ sensing, social sensing, and health sensing, have tremendously improved our capability to observe and record natural and social phenomena, such as natural disasters, presidential elections, and infectious diseases. The observations have provided an unprecedented opportunity to better understand and respond to the spatiotemporal dynamics of the environment, urban settings, health and disease propagation, business decisions, and crisis and crime. Spatiotemporal event detection serves as a gateway to enable better understanding by detecting events that represent the abnormal status of relevant phenomena. This paper reviews the literature for different sensing capabilities, spatiotemporal event extraction methods, and categories of applications for the detected events. The novelty of this review is to revisit the definition and requirements of event detection and to lay out the overall workflow (from sensing and event extraction methods to the operations and decision-supporting processes based on the extracted events) as an agenda for future event detection research. Guidance is presented on the current challenges to this research agenda, and future directions are discussed for conducting spatiotemporal event detection in the era of big data, advanced sensing, and artificial intelligence.
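Detecting the "abnormal status of relevant phenomena" from one sensed time series can be illustrated with a trailing-window z-score rule. This is a deliberately simplistic stand-in for the extraction methods the review surveys; the window size and threshold are arbitrary assumptions:

```python
from statistics import mean, stdev

def detect_events(series, window=7, threshold=3.0):
    """Flag time steps whose value deviates from the trailing-window mean
    by more than `threshold` standard deviations -- a minimal anomaly
    detector over a single sensor's observations."""
    events = []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[t] - mu) / sigma > threshold:
            events.append(t)
    return events
```

The spatiotemporal methods in the review extend this idea by pooling observations across neighboring locations and sensing modalities before testing for abnormality.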
Funding: Research reported is supported by NSF (CSR-1117300 and IIP-1160979), NASA (NNX07AD99G), and Microsoft Research.
Abstract: The simulation and potential forecasting of dust storms are of significant interest to public health and the environmental sciences. Dust storms have interannual variabilities and are typical disruptive events. The computing platform for an operational dust storm forecasting system should support this disruptive fashion by scaling up to enable high-resolution forecasting and massive public access when dust storms come, and scaling down when no dust storm events occur to save energy and costs. With the capability of providing a large, elastic, and virtualized pool of computational resources, cloud computing becomes a new and advantageous computing paradigm to resolve scientific problems traditionally requiring a large-scale, high-performance cluster. This paper examines the viability of cloud computing to support dust storm forecasting. Through a holistic study systematically comparing cloud computing using Amazon EC2 to a traditional high-performance computing (HPC) cluster, we find that cloud computing is emerging as a credible solution for (1) supporting dust storm forecasting by spinning up a large group of computing resources in a few minutes to satisfy the disruptive computing requirements of dust storm forecasting, (2) performing high-resolution dust storm forecasting when required, (3) supporting concurrent computing requirements, (4) supporting real dust storm event forecasting for a large geographic domain, using the recent dust storm event in Phoenix on 05 July 2011 as an example, and (5) reducing cost by maintaining low computing support when there are no dust storm events while invoking a large amount of computing resources to perform high-resolution forecasting and to respond to a large number of concurrent public accesses.
Funding: The research presented in this paper was funded by the National Science Foundation (1841520 and 1835507).
Abstract: Under the global health crisis of COVID-19, timely and accurate epidemic data are important for observing, monitoring, analyzing, modeling, predicting, and mitigating impacts. Viral case data can be jointly analyzed with relevant factors for various applications in the context of the pandemic. Current COVID-19 case data are scattered across a variety of data sources, which may suffer from low data quality accompanied by inconsistent data structures. To address this shortcoming, a multi-scale spatiotemporal data product is proposed as a public repository platform, based on a spatiotemporal cube, that allows the integration of different data sources by adopting various data standards. Within the spatiotemporal cube, a comprehensive data processing workflow gathers disparate COVID-19 epidemic datasets at the global, national, provincial/state, county, and city levels. This proposed framework is supported by an automatic update with a 2-h frequency and a crowdsourcing validation team to produce and update data on a daily time step. This rapid-response dataset allows the integration of other relevant socio-economic and environmental factors for spatiotemporal analysis. The data are available on the Harvard Dataverse platform (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/8HGECN) and in a GitHub open source repository (https://github.com/stccenter/COVID-19-Data).
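One axis of such a multi-scale spatiotemporal cube, rolling daily case counts up the spatial hierarchy, can be sketched as follows. This is a toy illustration with hypothetical record values, not the repository's actual schema:

```python
from collections import defaultdict

def aggregate_cases(county_records):
    """Roll county-level daily case counts up a spatial hierarchy
    (county -> state -> national), mimicking the spatial axis of a
    multi-scale spatiotemporal cube.
    Records are (date, state, county, cases) tuples."""
    state_level = defaultdict(int)
    national_level = defaultdict(int)
    for date, state, county, cases in county_records:
        state_level[(date, state)] += cases   # state scale, keyed by time
        national_level[date] += cases         # national scale
    return dict(state_level), dict(national_level)
```

The full cube adds a temporal axis (hourly updates consolidated to daily steps) and attribute axes for the socio-economic and environmental factors mentioned above.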
Funding: NSF I/UCRC [Grant Number IIP-1338925], NSF EarthCube [Grant Number ICER-1540998], and NASA AIST Program [Grant Number NNX15AM85G].
Abstract: Current search engines in most geospatial data portals tend to induce users to focus on one single data-characteristic dimension (e.g. popularity or release date). This approach largely fails to take account of users' multidimensional preferences for geospatial data, and hence may result in a less than optimal user experience in discovering the most applicable dataset. This study reports a machine learning framework to address the ranking challenge, the fundamental obstacle in geospatial data discovery, by (1) identifying a number of ranking features of geospatial data to represent users' multidimensional preferences by considering semantics, user behavior, spatial similarity, and static dataset metadata attributes; (2) applying a machine learning method to automatically learn a ranking function; and (3) proposing a system architecture to combine existing search-oriented open source software, a semantic knowledge base, ranking feature extraction, and the machine learning algorithm. Results show that the machine learning approach outperforms other methods in terms of both precision at K and normalized discounted cumulative gain. As an early attempt at utilizing machine learning to improve search ranking in the geospatial domain, we expect this work to set an example for further research and open the door towards intelligent geospatial data discovery.
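Normalized discounted cumulative gain (NDCG), one of the two evaluation metrics named above, can be computed as follows. This is the standard textbook formulation rather than the authors' code, and the graded relevance labels are assumed inputs:

```python
from math import log2

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked results:
    each result's relevance is discounted by log2(rank + 1)."""
    return sum(rel / log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """Normalized DCG: observed DCG divided by the DCG of an ideal
    (relevance-sorted) ranking, so a perfect ordering scores 1.0."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

Given relevance judgments for the datasets returned by each ranking method, comparing `ndcg_at_k` values across methods is how a learned ranking function is shown to outperform single-dimension orderings such as popularity.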
Funding: Research reported is partially supported by NSF [grant numbers PLR-1349259 and IIP-1338925], FGDC [grant number G13PG00091], and NASA [grant number NNG12PP37I].
Abstract: A spatial web portal (SWP) provides a web-based gateway to discover, access, manage, and integrate worldwide geospatial resources through the Internet, and its access patterns are characterized by regional-to-global interest and spiking. Although various technologies have been adopted to improve SWP performance, enabling high-speed resource access for global users to better support Digital Earth remains challenging because of the computing and communication intensities in SWP operation and the dynamic distribution of end users. This paper proposes a cloud-enabled framework for high-speed SWP access by leveraging elastic resource pooling, dynamic workload balancing, and global deployment. Experimental results demonstrate that the new SWP framework outperforms the traditional computing infrastructure and better supports users of a global system such as Digital Earth. The reported methodologies and framework can be adopted to support operational geospatial systems, such as monitoring the national geographic state, spanning regional and global geographic extents.