The proliferation of intelligent, connected Internet of Things (IoT) devices facilitates data collection. However, task workers may be reluctant to participate in data collection due to privacy concerns, and task requesters may be concerned about the validity of the collected data. Hence, it is vital to evaluate the quality of the data collected by task workers while protecting privacy in spatial crowdsourcing (SC) data collection tasks with IoT. To this end, this paper proposes a privacy-preserving data reliability evaluation scheme for SC in IoT, named PARE. First, we design a data uploading format using blockchain and the Paillier homomorphic cryptosystem, providing unchangeable and traceable data while overcoming privacy concerns. Second, based on the uploaded data, we propose a method to determine the approximate correct value region without knowing the exact value. Finally, we offer a data filtering mechanism based on the Paillier cryptosystem using this value region. The evaluation and analysis results show that PARE outperforms the existing solution in terms of performance and privacy protection.
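The additive homomorphism that makes the Paillier cryptosystem suitable for operating on encrypted worker data can be sketched as follows. This is a generic textbook sketch with toy, insecure parameters for illustration; it does not reproduce the paper's uploading format or filtering mechanism.

```python
import math
import random

# Toy Paillier keypair (tiny primes -- illustrative only, not secure)
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # private key lambda
mu = pow(lam, -1, n)           # works because g = n + 1

def encrypt(m):
    """E(m) = g^m * r^n mod n^2 with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lambda mod n^2) * mu mod n, with L(x) = (x - 1) / n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so aggregates can be computed without seeing individual values.
a, b = 15, 27
assert decrypt(encrypt(a)) == a
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```

The homomorphic product in the last line is what lets a server combine uploads while each individual reading stays encrypted.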
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization of detection data of complex types, long duration spans, and uneven spatial distributions were achieved. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
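The tone-mapping idea, spreading attribute values evenly by pushing them through the cumulative histogram, can be sketched roughly like this. This is a generic histogram-equalization sketch, not the authors' exact formulation; the bin count and output range are assumptions.

```python
def equalize(values, bins=256):
    """Map values to (0, 1] via the cumulative histogram (tone mapping).

    Dense regions of the value distribution get stretched apart,
    sparse regions get compressed, so the mapped values use the
    display range more uniformly.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins if hi > lo else 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    # Cumulative distribution function over the bins
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running / len(values))
    return [cdf[min(int((v - lo) / width), bins - 1)] for v in values]

# Four evenly spaced values map to evenly spaced quantiles
print(equalize([1, 2, 3, 4]))  # [0.25, 0.5, 0.75, 1.0]
```

In a real pipeline the mapped value would then index a color ramp before rendering.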
There are some limitations when we apply conventional methods to analyze the massive amounts of seismic data acquired with high-density spatial sampling, since processors usually obtain the properties of raw data from common shot gathers or other datasets located at certain points or along lines. We propose a novel method in this paper to observe seismic data on time slices from spatial subsets. The composition of a spatial subset and the unique character of orthogonal or oblique subsets are described, and pre-stack subsets are shown by 3D visualization. In seismic data processing, spatial subsets can be used for the following purposes: (1) to check the trace distribution uniformity and regularity; (2) to observe the main features of ground roll and linear noise; (3) to find abnormal traces from slices of datasets; and (4) to QC the results of pre-stack noise attenuation. The field data application shows that seismic data analysis in spatial subsets is an effective method that may lead to better discrimination among various wavefields and help us obtain more information.
To improve the performance of traditional map matching algorithms in freeway traffic state monitoring systems using low logging frequency GPS (global positioning system) probe data, a map matching algorithm based on the Oracle spatial data model is proposed. The algorithm uses the Oracle road network data model to analyze the spatial relationships between massive GPS positioning points and freeway networks, builds an N-shortest-path algorithm to find reasonable candidate routes between GPS positioning points efficiently, and uses a fuzzy logic inference system to determine the final matched traveling route. According to the implementation with field data from Los Angeles, the computation speed of the algorithm is about 135 GPS positioning points per second and the accuracy is 98.9%. The results demonstrate the effectiveness and accuracy of the proposed algorithm for mapping massive GPS positioning data onto freeway networks with complex geometric characteristics.
Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, and image processing. We combine a sampling technique with the DBSCAN algorithm to cluster large spatial databases, and two sampling-based DBSCAN (SDBSCAN) algorithms are developed. One algorithm introduces the sampling technique inside DBSCAN, and the other uses a sampling procedure outside DBSCAN. Experimental results demonstrate that our algorithms are effective and efficient in clustering large-scale spatial databases.
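For reference, the core DBSCAN procedure that the two SDBSCAN variants wrap with sampling looks like this. This is a plain-Python textbook sketch with a brute-force neighborhood query; the paper's sampling step is not shown, and `eps`/`min_pts` values are illustrative.

```python
def region_query(points, i, eps):
    """Indices of all points within distance eps of points[i] (brute force)."""
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if (px - qx) ** 2 + (py - qy) ** 2 <= eps * eps]

def dbscan(points, eps, min_pts):
    """Labels: None = unvisited, -1 = noise, >= 0 = cluster id."""
    UNVISITED, NOISE = None, -1
    labels = [UNVISITED] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not UNVISITED:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = NOISE          # may later become a border point
            continue
        cluster += 1                   # i is a core point: start a cluster
        labels[i] = cluster
        seeds = [j for j in neighbors if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == NOISE:     # border point reached from a core
                labels[j] = cluster
            elif labels[j] is UNVISITED:
                labels[j] = cluster
                jn = region_query(points, j, eps)
                if len(jn) >= min_pts: # j is also a core point: expand
                    seeds.extend(jn)
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1),
       (10, 10), (10, 11), (11, 10), (11, 11),
       (50, 50)]
print(dbscan(pts, 1.5, 3))  # [0, 0, 0, 0, 1, 1, 1, 1, -1]
```

The O(n^2) `region_query` is exactly where sampling pays off on large spatial databases: fewer points queried means fewer distance computations.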
A novel Hilbert curve is introduced for parallel spatial data partitioning, with consideration of the huge-amount property of spatial information and the variable-length characteristic of vector data items. Based on the improved Hilbert curve, the algorithm can be designed to achieve almost-uniform spatial data partitioning among multiple disks in parallel spatial databases. Thus, the phenomenon of data imbalance can be significantly avoided, and search and query efficiency can be enhanced.
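The standard 2-D Hilbert index computation that such partitioning schemes build on can be sketched as follows: order cells along the curve, then deal them into near-equal contiguous runs per disk. This is the textbook xy-to-d conversion; the paper's improvement for variable-length vector items is not reproduced.

```python
def hilbert_index(n, x, y):
    """Position of grid cell (x, y) along the Hilbert curve filling an
    n-by-n grid (n a power of 2)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                    # rotate quadrant so recursion lines up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def partition(cells, n, disks):
    """Order cells by Hilbert index and split into near-equal runs,
    one per disk; curve locality keeps each run spatially compact."""
    ordered = sorted(cells, key=lambda c: hilbert_index(n, *c))
    size = -(-len(ordered) // disks)   # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

# The four cells of a 2x2 grid, visited in curve order, split over 2 disks
cells = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(partition(cells, 2, 2))  # [[(0, 0), (0, 1)], [(1, 1), (1, 0)]]
```

Because the Hilbert curve preserves locality, neighboring cells tend to land on the same disk, which is what keeps range queries from touching every disk.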
China's continental deposition basins are characterized by complex geological structures and various reservoir lithologies. Therefore, high-precision exploration methods are needed. High-density spatial sampling is a new technology to increase the accuracy of seismic exploration. We briefly discuss point source and receiver technology, analyze the high-density spatial sampling in situ method, introduce the symmetric sampling principles presented by Gijs J. O. Vermeer, and discuss high-density spatial sampling technology from the point of view of wavefield continuity. We emphasize the analysis of high-density spatial sampling characteristics, including the advantages of high-density first breaks for investigating near-surface structure and improving static correction precision, and the use of dense receiver spacing at short offsets to increase the effective coverage at shallow depth and the accuracy of reflection imaging. Coherent noise is not aliased, and noise analysis precision and suppression increase as a result. High-density spatial sampling enhances wavefield continuity and the accuracy of various mathematical transforms, which benefits wavefield separation. Finally, we point out that the difficult part of high-density spatial sampling technology is the data processing. More research needs to be done on methods of analyzing and processing huge amounts of seismic data.
The efficacy of vegetation dynamics simulations in offline land surface models (LSMs) largely depends on the quality and spatial resolution of meteorological forcing data. In this study, the Princeton Global Meteorological Forcing Data (PMFD) and the high spatial resolution and upscaled China Meteorological Forcing Data (CMFD) were used to drive the Simplified Simple Biosphere model version 4 / Top-down Representation of Interactive Foliage and Flora Including Dynamics (SSiB4/TRIFFID) and investigate how meteorological forcing datasets with different spatial resolutions affect simulations over the Tibetan Plateau (TP), a region with complex topography and sparse observations. By comparing the monthly Leaf Area Index (LAI) and Gross Primary Production (GPP) against observations, we found that SSiB4/TRIFFID driven by the upscaled CMFD improved the performance in simulating the spatial distributions of LAI and GPP over the TP, reducing RMSEs by 24.3% and 20.5%, respectively. The multi-year averaged GPP decreased from 364.68 gC m^(-2) yr^(-1) to 241.21 gC m^(-2) yr^(-1), with the percentage bias dropping from 50.2% to -1.7%. When using the high spatial resolution CMFD, the RMSEs of the simulated spatial distributions of LAI and GPP were further reduced by 7.5% and 9.5%, respectively. This study highlights the importance of more realistic and high-resolution forcing data in simulating vegetation growth and carbon exchange between the atmosphere and biosphere over the TP.
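The two skill scores quoted above can be computed with the standard definitions below (sign conventions for percent bias vary between papers, so treat this as one common convention rather than the study's exact formula):

```python
def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

def percent_bias(sim, obs):
    """Relative bias of the simulated total, in percent; positive means
    the model overestimates on average."""
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

# Tiny worked example (made-up numbers, not the study's data):
print(rmse([2.0, 4.0], [1.0, 3.0]))          # 1.0
print(percent_bias([2.0, 4.0], [1.0, 3.0]))  # 50.0
```

A percentage bias dropping from 50.2% to -1.7%, as reported above, means the simulated total moved from a large overestimate to a near-zero (slight under-) estimate.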
With the deepening informationization of Resources & Environment Remote Sensing geological surveys, some potential problems and deficiencies are: (1) shortage of a unified, planned running environment; (2) inconsistent methods of data integration; and (3) disadvantages of different ways of performing data integration. This paper solves the above problems through overall planning and design, and constructs a unified running environment, consistent methods of data integration, and a system structure in order to advance the informationization.
There are hundreds of villages in the western mountainous area of Beijing, of which quite a few have a profound history and form the settlement culture of western Beijing. Taking dozens of ancient villages in Mentougou District as the research sample and the village space as the research object, and based on the ASTER GDEM database and quantitative analysis tools such as Global Mapper and ArcGIS, this study quantitatively analyzed the spatial distribution and plane structure of the ancient villages from the perspectives of altitude, topography, slope direction, and building density distribution, so that the laws of village space characteristic of western Beijing could be summarized to supplement and improve the relevant achievements in the research field of ancient villages in western Beijing.
This paper presents a conceptual data model, the STA-model, for handling spatial, temporal and attribute aspects of objects in GIS. The model is developed on the basis of the object-oriented modeling approach. It includes two major parts: (a) modeling the single objects by STA-object elements, and (b) modeling relationships between STA-objects. As an example, the STA-model is applied to modeling land cover change data with spatial, temporal and attribute components.
Understanding the mechanisms and risks of forest fires by building a spatial prediction model is an important means of controlling forest fires. Non-fire point data are important training data for constructing a model, and their quality significantly impacts the prediction performance of the model. However, non-fire point data obtained using existing sampling methods generally suffer from low representativeness. Therefore, this study proposes a non-fire point data sampling method based on geographical similarity to improve the quality of non-fire point samples. The method is based on the idea that the less similar the geographical environment between a sample point and an already occurred fire point, the greater the confidence in it being a non-fire point sample. Yunnan Province, China, with a high frequency of forest fires, was used as the study area. We compared the prediction performance of traditional sampling methods and the proposed method using three commonly used forest fire risk prediction models: logistic regression (LR), support vector machine (SVM), and random forest (RF). The results show that the modeling and prediction accuracies of the forest fire prediction models established based on the proposed sampling method are significantly improved compared with those of the traditional sampling method. Specifically, in 2010, the modeling and prediction accuracies improved by 19.1% and 32.8%, respectively, and in 2020, they improved by 13.1% and 24.3%, respectively. Therefore, we believe that collecting non-fire point samples based on the principle of geographical similarity is an effective way to improve the quality of forest fire samples, and thus enhance the prediction of forest fire risk.
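The selection principle, preferring candidates least similar in environmental-feature space to any historical fire point, might be sketched like this. The exponential-decay similarity and the feature vectors are hypothetical stand-ins; the paper's actual similarity measure is not reproduced here.

```python
import math

def similarity(a, b):
    """Similarity of two environmental-feature vectors (hypothetical:
    exponential decay of Euclidean distance in a standardized space)."""
    return math.exp(-math.dist(a, b))

def select_non_fire(candidates, fire_points, k):
    """Keep the k candidates with the lowest maximum similarity to any
    fire point, i.e. the highest confidence of being true non-fire samples."""
    def confidence(c):
        return 1.0 - max(similarity(c, f) for f in fire_points)
    return sorted(candidates, key=confidence, reverse=True)[:k]

# One fire point at the feature-space origin; the farthest candidates
# are the most trustworthy negatives.
fires = [(0.0, 0.0)]
cands = [(0.1, 0.0), (2.0, 2.0), (5.0, 5.0)]
print(select_non_fire(cands, fires, 2))  # [(5.0, 5.0), (2.0, 2.0)]
```

The point is that a candidate lying right next to a known fire in feature space is a poor "negative" label, so it is ranked out first.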
In order to reveal the hidden regional relationships among garlic prices, this paper carries out a spatial quantitative analysis of garlic price data based on ArcGIS technology. The specific analysis process was to collect garlic market prices from 2015 to 2017 in different regions of Shandong Province and apply Moran's Index; the monthly Moran indicators obtained were all positive, indicating an overall positive spatial relationship among garlic prices. Then, using the geostatistical analysis tool in ArcGIS to draw a spatial distribution grid diagram, it was found that the price of garlic shows a significant geographical agglomeration phenomenon and a multi-center distribution trend. The results showed that the agglomeration centers are Jining, Dongying, Qingdao, and Yantai. At the end of the article, according to the research results, constructive suggestions are made for the regulation of garlic prices. Using Moran's Index and geostatistical analysis tools to analyze garlic price data makes up for the lack of positional correlation in traditional analysis methods, and more intuitively and effectively reflects the trend of garlic prices rising from west to east across Shandong Province in a pattern of circular distribution.
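Global Moran's I, the statistic used above to detect positive spatial correlation, is computed with the standard formula. The binary contiguity weight matrix and the price values below are illustrative assumptions, not the paper's data.

```python
def morans_i(values, w):
    """Global Moran's I for values with spatial weight matrix w.
    I > 0: similar values cluster; I < 0: dissimilar values neighbor;
    I near -1/(n-1): no spatial autocorrelation."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(w[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(sum(row) for row in w)
    return (n / w_sum) * (num / den)

# Four regions in a west-to-east chain (binary adjacency), rising prices:
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
prices = [1, 2, 8, 9]
print(morans_i(prices, w))  # 0.4 -- positive, i.e. prices cluster spatially
```

A positive I for every month, as reported above, is what justifies the "overall positive relationship" conclusion.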
The mathematical theories for uncertainty models of line segments are summarized to achieve a general conception: the line error band model of εσ is a basic uncertainty model that can depict line accuracy and quality efficiently, while the εm model and error entropy can be regarded as its supplements. The error band model reflects and describes the influence of line uncertainty on polygon uncertainty. Therefore, the statistical characteristics of the line error are studied deeply by analyzing the probability that the line error falls into a certain range. Moreover, theoretical accordance is achieved in selecting the error buffer for line features and the error indicator. The relationship between the accuracy of the area of a polygon and the error loop of the polygon boundary is deduced and computed.
A general spatial interpolation method for tidal properties has been developed by solving a partial differential equation with a combination of different orders of harmonic operators using a mixed finite element method. Numerically, the equation is solved implicitly without iteration on an unstructured triangular mesh grid. The paper demonstrates the performance of the method for tidal property fields with different characteristics, boundary complexity, number of input data points, and data point distribution. The method has been successfully applied under several different tidal environments, including an idealized distribution in a square basin, coamplitude and cophase lines in the Taylor semi-infinite rotating channel, and tide coamplitude and cophase lines in the Bohai Sea and Chesapeake Bay. Compared to Laplace's equation, which NOAA/NOS currently uses for interpolation in hydrographic and oceanographic applications, the multiple-order harmonic equation method eliminates the problem of singularities at data points and produces interpolation results with better accuracy and precision.
Parallel vector buffer analysis approaches can be classified into two types: the algorithm-oriented parallel strategy and the data-oriented parallel strategy. These methods do not take their applicability on existing geographic information system (GIS) platforms into consideration. In order to address this problem, a spatial decomposition approach for accelerating buffer analysis of vector data is proposed. The relationship between the number of vertices of each feature and the buffer analysis computing time is analyzed to generate computational intensity transformation functions (CITFs). Then, computational intensity grids (CIGs) of polylines and polygons are constructed based on the relative CITFs. Using the corresponding CIGs, a spatial decomposition method for parallel buffer analysis is developed. Based on the computational intensity of the features and the sub-domains generated in the decomposition, the features are evenly assigned within the sub-domains to parallel buffer analysis tasks for load balance. Compared with typical regular domain decomposition methods, the new approach accomplishes a more balanced decomposition of computational intensity for parallel buffer analysis and achieves near-linear speedups.
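The load-balancing step, distributing features so that each worker receives roughly equal computational intensity, can be approximated by a greedy longest-processing-time assignment. This is a generic sketch, not the paper's CIG-based spatial decomposition; the per-feature `cost` stands in for the intensity a CITF would predict from the vertex count.

```python
import heapq

def balance(features, workers):
    """features: (feature_id, cost) pairs.  Greedily hand the heaviest
    remaining feature to the currently least-loaded worker."""
    heap = [(0, w) for w in range(workers)]   # (current load, worker id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(workers)]
    loads = [0] * workers
    for fid, cost in sorted(features, key=lambda f: -f[1]):
        load, w = heapq.heappop(heap)
        assignment[w].append(fid)
        loads[w] = load + cost
        heapq.heappush(heap, (load + cost, w))
    return assignment, loads

feats = [("a", 5), ("b", 4), ("c", 3), ("d", 3), ("e", 3)]
assignment, loads = balance(feats, 2)
print(assignment, loads)  # [['a', 'd'], ['b', 'c', 'e']] [8, 10]
```

Uniform-count splits would put the same number of features on each worker regardless of cost; balancing on predicted intensity instead is what the CITF/CIG machinery above enables.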
It is clearly stated in the report of the 19th National Congress that environmental protection should be made a national policy; therefore, it is of great importance to study this issue. This article considers 30 provinces of China as cross-sections and utilizes a data sample of these cross-sections from 2006 to 2015 to formulate a spatial panel data Durbin model to analyze the effect of FDI. Using these data, this article creates a comprehensive environmental pollution index with the help of entropy. The result indicates that the effect of FDI on the environment has non-linear and spatial spillover characteristics. Before reaching the critical value, FDI has a negative effect on the environment; however, with the accumulation of FDI, it creates a significant positive effect on the environment.
The authors designed a spatial data mining system for ore-forming prediction based on the theory and methods of data mining as well as spatial database techniques, in combination with the characteristics of geological information data. The system consists of data management, data mining and knowledge discovery, and knowledge representation. It can effectively syncretize multi-source geoscience data, such as geology, geochemistry, geophysics, and remote sensing (RS). The system digitizes geological information data as data layer files consisting of two numerical values and stores these files in the system database. According to the combination of the characteristics of geological information, metallogenic prognosis was realized, with an example from an area in Heilongjiang Province, where the prospect area of a hydrothermal copper deposit was determined.
GeoStar is the registered trademark of GIS software made by WTUSM in China. By means of GeoStar, multi-scale images, DEMs, graphics and attributes integrated in very large seamless databases can be created, and multi-dimensional dynamic visualization and information extraction are also available. This paper describes the fundamental characteristics of such huge integrated databases, for instance, the data models, database structures and spatial index strategies. Finally, the typical applications of GeoStar in a few pilot projects, such as the Shanghai CyberCity and the Guangdong provincial spatial data infrastructure (SDI), are illustrated and several concluding remarks are stressed.
Spatial data, including geometrical data, attribute data, image data and DEM data, are huge in volume, and the relations among them are complex. How to effectively organize and manage those data is an important problem in GIS. Several problems concerning spatial data organization and management in GeoStar, a basic GIS software package made in China, are discussed in this paper. The paper emphasizes the object model of spatial vectors, data organization, data management, how these goals are realized, and the like.
Funding: This work was supported by the National Natural Science Foundation of China under Grant 62233003 and the National Key Research and Development Program of China under Grant 2020YFB1708602.
Funding: Supported by the Open Research Fund Program of LIESMARS (WKL(00)0302).
Funding: Funded by the National 863 Program of China (No. 2005AA113150) and the National Natural Science Foundation of China (No. 40701158).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42130602, 42175136) and the Collaborative Innovation Center for Climate Change, Jiangsu Province, China.
文摘The efficacy of vegetation dynamics simulations in offline land surface models(LSMs)largely depends on the quality and spatial resolution of meteorological forcing data.In this study,the Princeton Global Meteorological Forcing Data(PMFD)and the high spatial resolution and upscaled China Meteorological Forcing Data(CMFD)were used to drive the Simplified Simple Biosphere model version 4/Top-down Representation of Interactive Foliage and Flora Including Dynamics(SSiB4/TRIFFID)and investigate how meteorological forcing datasets with different spatial resolutions affect simulations over the Tibetan Plateau(TP),a region with complex topography and sparse observations.By comparing the monthly Leaf Area Index(LAI)and Gross Primary Production(GPP)against observations,we found that SSiB4/TRIFFID driven by upscaled CMFD improved the performance in simulating the spatial distributions of LAI and GPP over the TP,reducing RMSEs by 24.3%and 20.5%,respectively.The multi-year averaged GPP decreased from 364.68 gC m^(-2)yr^(-1)to 241.21 gC m^(-2)yr^(-1)with the percentage bias dropping from 50.2%to-1.7%.When using the high spatial resolution CMFD,the RMSEs of the spatial distributions of LAI and GPP simulations were further reduced by 7.5%and 9.5%,respectively.This study highlights the importance of more realistic and high-resolution forcing data in simulating vegetation growth and carbon exchange between the atmosphere and biosphere over the TP.
Abstract: As the informatization of Resources & Environment Remote Sensing geological surveys deepens, several potential problems and deficiencies have emerged: (1) the lack of a unified, planned running environment; (2) inconsistent methods of data integration; and (3) the disadvantages of divergent ways of performing data integration. This paper solves these problems through overall planning and design, constructing a unified running environment, consistent data-integration methods, and a system structure in order to advance informatization.
Funding: Sponsored by the National Natural Science Foundation of China (51608007) and the Young Top-notch Talent Cultivation Project of North China University of Technology (2018).
Abstract: There are hundreds of villages in the western mountainous area of Beijing, quite a few of which have a profound history and form the settlement culture of western Beijing. Taking dozens of ancient villages in Mentougou District as the research sample and village space as the research object, this study drew on the ASTER GDEM database and quantitative analysis tools such as Global Mapper and ArcGIS to analyze altitude, topography, slope aspect, and building-density distribution. It quantitatively examined the spatial distribution and plane structure of the ancient villages, summarizing the spatial laws characteristic of villages in western Beijing in order to supplement and improve research on ancient villages in this region.
Abstract: This paper presents a conceptual data model, the STA-model, for handling the spatial, temporal, and attribute aspects of objects in GIS. The model is developed using an object-oriented modeling approach and includes two major parts: (a) modeling single objects as STA-object elements, and (b) modeling the relationships between STA-objects. As an example, the STA-model is applied to land cover change data with spatial, temporal, and attribute components.
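A minimal, hypothetical Python sketch of how an STA-object might bundle spatial, temporal, and attribute components; the class and field names below are our own illustration, not the paper's notation:

```python
from dataclasses import dataclass, field
from typing import Tuple, Dict, Any

@dataclass
class STAObject:
    """Hypothetical object bundling spatial, temporal, and attribute aspects."""
    geometry: Tuple[float, float]   # spatial component (e.g., a point location)
    valid_time: Tuple[str, str]     # temporal component (start, end)
    attributes: Dict[str, Any] = field(default_factory=dict)  # thematic data

# Example: a land-cover parcel valid for the decade before a class change
parcel = STAObject(geometry=(116.4, 39.9),
                   valid_time=("1990-01-01", "2000-01-01"),
                   attributes={"land_cover": "cropland"})
print(parcel.attributes["land_cover"])  # cropland
```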
基金financially supported by the National Natural Science Fundation of China(Grant Nos.42161065 and 41461038)。
Abstract: Understanding the mechanisms and risks of forest fires by building a spatial prediction model is an important means of controlling them. Non-fire point data are important training data for constructing such a model, and their quality significantly impacts its prediction performance. However, non-fire point data obtained with existing sampling methods generally suffer from low representativeness. This study therefore proposes a non-fire point sampling method based on geographical similarity to improve the quality of non-fire point samples. The method rests on the idea that the less similar the geographical environment between a sample point and a point where a fire has already occurred, the greater the confidence that the sample is a non-fire point. Yunnan Province, China, which has a high frequency of forest fires, was used as the study area. We compared the prediction performance of the traditional sampling method and the proposed method using three commonly used forest fire risk prediction models: logistic regression (LR), support vector machine (SVM), and random forest (RF). The results show that the modeling and prediction accuracies of forest fire prediction models built on the proposed sampling method improve significantly over those built on the traditional method: in 2010, modeling and prediction accuracies improved by 19.1% and 32.8%, respectively, and in 2020 they improved by 13.1% and 24.3%. We therefore believe that collecting non-fire point samples based on the principle of geographical similarity is an effective way to improve the quality of forest fire samples and thus enhance forest fire risk prediction.
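The core idea can be sketched as follows: a candidate's confidence of being a non-fire sample is high when its environment is dissimilar to every known fire point. This is our own simplified reading in Python; the feature names, normalization, and similarity measure are illustrative assumptions, not the paper's actual definitions:

```python
def similarity(candidate, fire_point, ranges):
    """Mean per-feature similarity in [0, 1]; 1 = identical environment."""
    sims = [1.0 - abs(c - f) / r for c, f, r in zip(candidate, fire_point, ranges)]
    return sum(sims) / len(sims)

def non_fire_confidence(candidate, fire_points, ranges):
    """Confidence of being a non-fire sample: low similarity to every fire point."""
    return 1.0 - max(similarity(candidate, f, ranges) for f in fire_points)

# Toy environmental features: (elevation_km, dryness_index)
fire_points = [(1.0, 0.9), (1.2, 0.8)]
ranges = (2.0, 1.0)  # assumed feature value ranges used for normalization
candidates = [(1.1, 0.85), (0.2, 0.1)]
ranked = sorted(candidates,
                key=lambda c: non_fire_confidence(c, fire_points, ranges),
                reverse=True)
print(ranked[0])  # the candidate least similar to any fire point: (0.2, 0.1)
```

Candidates at the top of the ranking would be preferred as non-fire training samples.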
Abstract: To uncover the hidden regional relationships among garlic prices, this paper carries out a spatial quantitative analysis of garlic price data based on ArcGIS. Garlic market prices from 2015 to 2017 were collected for different regions of Shandong Province. The monthly Moran's I values obtained were all positive, indicating an overall positive spatial relationship among garlic prices. The geostatistical analysis tools in ArcGIS were then used to draw a spatial distribution grid map, which showed that garlic prices exhibit significant geographical agglomeration and a multi-center distribution, with agglomeration centers in Jining, Dongying, Qingdao, and Yantai. Based on these research results, constructive suggestions are made for the regulation of garlic prices. Using Moran's I and geostatistical analysis tools to analyze garlic price data compensates for the lack of spatial correlation in traditional analysis methods, and it more intuitively and effectively reflects the low-to-high, west-to-east trend of garlic prices in Shandong Province and their ring-shaped distribution pattern.
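For reference, the global Moran's I statistic used above can be computed directly from a value vector and a spatial weight matrix. A minimal NumPy sketch with toy data (four regions in a row with prices rising eastward, rook adjacency), not the study's dataset:

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I: I = (n / S0) * (z' W z) / (z' z)."""
    z = values - values.mean()
    n = len(values)
    s0 = W.sum()  # sum of all spatial weights
    return (n / s0) * (z @ W @ z) / (z @ z)

# Toy example: four regions in a row, rook adjacency
prices = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(prices, W), 4))  # 0.3333 -> positive spatial autocorrelation
```

A positive value, as here, means neighboring regions tend to have similar prices, which is the pattern the paper reports for Shandong garlic prices.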
Funding: Project supported by the National Natural Science Foundation of China (No. 40301043).
Abstract: The mathematical theory of line-segment uncertainty models is summarized to reach a general conception: the εσ error band model is a basic uncertainty model that can efficiently depict line accuracy and quality, while the εm model and error entropy can be regarded as its supplements. The error band model reflects and describes the influence of line uncertainty on polygon uncertainty. Therefore, the statistical characteristics of line error are studied in depth by analyzing the probability that the line error falls within a certain range. Moreover, theoretical consistency is achieved in selecting the error buffer for a line feature and the error indicator. The relationship between the area accuracy of a polygon and the error loop of its boundary is deduced and computed.
Abstract: A general spatial interpolation method for tidal properties has been developed by solving a partial differential equation with a combination of harmonic operators of different orders using a mixed finite element method. Numerically, the equation is solved implicitly, without iteration, on an unstructured triangular mesh. The paper demonstrates the method's performance for tidal property fields with different characteristics, boundary complexity, numbers of input data points, and data point distributions. The method has been applied successfully in several tidal environments, including an idealized distribution in a square basin, coamplitude and cophase lines in the Taylor semi-infinite rotating channel, and tidal coamplitude and cophase lines in the Bohai Sea and Chesapeake Bay. Compared with Laplace's equation, which NOAA/NOS currently uses for interpolation in hydrographic and oceanographic applications, the multiple-order harmonic equation method eliminates singularities at data points and produces interpolation results with better accuracy and precision.
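For context, the Laplace-equation baseline the paper compares against can be sketched with simple relaxation on a regular grid, holding data points fixed. This is a toy finite-difference illustration of that baseline only, not the paper's mixed finite element harmonic method:

```python
import numpy as np

def laplace_interpolate(grid, fixed_mask, n_iter=500):
    """Fill unknown grid cells by relaxing the discrete Laplace equation.
    Cells where fixed_mask is True are data/boundary points and stay unchanged."""
    u = grid.copy()
    interior = np.zeros_like(fixed_mask)
    interior[1:-1, 1:-1] = True
    update = interior & ~fixed_mask
    for _ in range(n_iter):
        # 4-neighbor average (Jacobi sweep); edges are fixed, so wrap-around
        # from np.roll never affects the cells actually updated
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[update] = avg[update]
    return u

# Toy amplitude field: boundary fixed at 0 except the top row, fixed at 1
u = np.zeros((8, 8))
u[0, :] = 1.0
mask = np.zeros((8, 8), dtype=bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True
result = laplace_interpolate(u, mask)
# By the maximum principle, interior values lie strictly between 0 and 1
```

The singularity problem the abstract mentions arises when isolated interior data points are held fixed under this Laplace scheme; the paper's higher-order harmonic operators are designed to avoid it.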
Funding: Supported by the National Natural Science Foundation of China (Nos. 41971356 and 41701446) and the National Key Research and Development Program of China (Nos. 2017YFB0503600, 2018YFB0505500, and 2017YFC0602204).
Abstract: Parallel vector buffer analysis approaches can be classified into two types: the algorithm-oriented parallel strategy and the data-oriented parallel strategy. These methods do not take their applicability on existing geographic information system (GIS) platforms into consideration. To address this problem, a spatial decomposition approach for accelerating buffer analysis of vector data is proposed. The relationship between the number of vertices of each feature and the buffer analysis computing time is analyzed to generate computational intensity transformation functions (CITFs). Computational intensity grids (CIGs) for polylines and polygons are then constructed from the corresponding CITFs. Using these CIGs, a spatial decomposition method for parallel buffer analysis is developed. Based on the computational intensity of the features and the sub-domains generated in the decomposition, features are evenly assigned within the sub-domains to parallel buffer analysis tasks for load balance. Compared with typical regular domain decomposition methods, the new approach achieves a more balanced decomposition of computational intensity for parallel buffer analysis and near-linear speedups.
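The load-balancing step can be illustrated with a standard largest-first greedy assignment: estimate each feature's cost (e.g., from its vertex count via a CITF), then repeatedly give the most expensive remaining task to the least-loaded worker. A hedged Python sketch with made-up costs; the paper's actual CITF-based decomposition is more elaborate:

```python
import heapq

def balanced_assign(intensities, n_workers):
    """Greedily assign tasks (largest cost first) to the least-loaded worker."""
    heap = [(0.0, w) for w in range(n_workers)]  # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for task, cost in sorted(enumerate(intensities), key=lambda t: -t[1]):
        load, w = heapq.heappop(heap)   # worker with the smallest load so far
        assignment[w].append(task)
        heapq.heappush(heap, (load + cost, w))
    return assignment

# Toy computational intensities (e.g., CITF estimates from vertex counts)
costs = [5, 4, 3, 3, 2, 1]
assign = balanced_assign(costs, 2)
loads = {w: sum(costs[t] for t in tasks) for w, tasks in assign.items()}
print(loads)  # both workers end up with load 9
```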
Funding: Supported by the Hubei Province Educational Division Social Science Research Project (Grant No. 15G051).
Abstract: The 19th National Congress clearly stated that environmental protection should be made a national policy, so studying this issue is of great importance. This article takes 30 provinces of China as cross-sections and uses their data from 2006 to 2015 to formulate a spatial panel data Durbin model for analyzing the effect of FDI. From these data, a comprehensive environmental pollution index is constructed using the entropy method. The results indicate that the effect of FDI on the environment is non-linear and exhibits spatial spillovers: before a critical value is reached, FDI has a negative effect on the environment; however, as FDI accumulates, it creates a significant positive effect on the environment.
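The entropy method used to build the composite pollution index weights each indicator by how much it varies across provinces: indicators with more dispersion carry more information and get larger weights. A minimal Python sketch with invented indicator values (the study's indicator set is not specified here):

```python
import math

def entropy_weights(data):
    """Entropy-weight method: rows = provinces, columns = indicators.
    Indicators whose values vary more across rows receive larger weights."""
    n, m = len(data), len(data[0])
    k = 1.0 / math.log(n)
    raw = []
    for j in range(m):
        col = [row[j] for row in data]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy in [0, 1]
        raw.append(1.0 - e)  # degree of divergence
    s = sum(raw)
    return [w / s for w in raw]

# Toy pollution indicators for three provinces (e.g., SO2, wastewater, dust)
data = [[1.0, 2.0, 3.0],
        [2.0, 2.0, 3.0],
        [6.0, 2.0, 3.0]]
w = entropy_weights(data)
# Only column 0 varies across provinces, so it receives (nearly) all the weight;
# the composite index for a province is then sum(w[j] * indicator[j]).
```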
Abstract: The authors designed a spatial data mining system for ore-forming prediction based on data-mining theory and methods and spatial database techniques, in combination with the characteristics of geological information data. The system consists of data management, data mining and knowledge discovery, and knowledge representation. It can effectively integrate multi-source geoscience data such as geology, geochemistry, geophysics, and remote sensing (RS). The system digitizes geological information into data-layer files consisting of two numerical values and stores these files in the system database. Metallogenic prognosis was realized by combining the characteristics of the geological information, taking an area in Heilongjiang Province as an example, and the prospect area of a hydrothermal copper deposit was determined.
Abstract: GeoStar is the registered trademark of GIS software developed by WTUSM in China. With GeoStar, multi-scale images, DEMs, graphics, and attributes can be integrated into very large seamless databases, and multi-dimensional dynamic visualization and information extraction are also available. This paper describes the fundamental characteristics of such large integrated databases, including the data models, database structures, and spatial indexing strategies. Finally, typical applications of GeoStar in pilot projects such as Shanghai CyberCity and the Guangdong provincial spatial data infrastructure (SDI) are illustrated, and several concluding remarks are given.
Abstract: Spatial data, including geometric data, attribute data, image data, and DEM data, are huge in volume, and the relations among them are complex. How to effectively organize and manage these data is an important problem in GIS. This paper discusses several problems of spatial data organization and management in GeoStar, a basic GIS software package developed in China, with emphasis on the spatial vector object model, data organization, data management, and how these goals are realized.