Funding: Scientific Research Program Funded by Shaanxi Provincial Education Department (20JY058).
Abstract: In the design of a graphics processing unit (GPU), the speed of triangle rasterization is an important factor in determining the performance of the GPU. This paper proposes an architecture for a multi-tile parallel-scan rasterization accelerator. The accelerator uses a bounding-box algorithm to improve scanning efficiency: it rasterizes multiple tiles in parallel and scans multiple lines at the same time within each tile. This highly parallel approach drastically improves rasterization performance. Using the 65 nm process standard cell library of Semiconductor Manufacturing International Corporation (SMIC), the accelerator can be synthesized to a maximum clock frequency of 220 MHz. An implementation on the Genesys2 field-programmable gate array (FPGA) board fully verifies the functionality of the accelerator and shows a significant improvement in rendering speed and efficiency, proving its suitability for high-performance rasterization.
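As an illustration of the bounding-box scan described in this abstract, the following Python sketch rasterizes one triangle in software, visiting only the tiles that overlap the triangle's bounding box and testing all pixels of a tile at once. It is a minimal software analogue, not the paper's hardware design; the function names, the 16-pixel tile size and the edge-function formulation are illustrative assumptions.

# A minimal software sketch of bounding-box, tile-by-tile triangle rasterization.
# Names and parameters are illustrative, not taken from the paper's RTL design.
import numpy as np

def edge(ax, ay, bx, by, px, py):
    # Signed area test: positive when (px, py) lies to the left of edge a->b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height, tile=16):
    """Return a boolean coverage mask, visiting only tiles that overlap the
    triangle's bounding box (the scanning-efficiency idea in the abstract)."""
    mask = np.zeros((height, width), dtype=bool)
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    xmin, xmax = max(int(min(xs)), 0), min(int(max(xs)) + 1, width)
    ymin, ymax = max(int(min(ys)), 0), min(int(max(ys)) + 1, height)
    for ty in range(ymin - ymin % tile, ymax, tile):
        for tx in range(xmin - xmin % tile, xmax, tile):
            # Evaluate every pixel of the tile at once; in hardware the rows of a
            # tile would be scanned by parallel lanes.
            yy, xx = np.mgrid[ty:min(ty + tile, ymax), tx:min(tx + tile, xmax)]
            px, py = xx + 0.5, yy + 0.5
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            inside = ((w0 >= 0) & (w1 >= 0) & (w2 >= 0)) | ((w0 <= 0) & (w1 <= 0) & (w2 <= 0))
            mask[yy, xx] = inside
    return mask

cover = rasterize_triangle((2.0, 3.0), (60.0, 10.0), (20.0, 55.0), 64, 64)
print(cover.sum(), "pixels covered")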
Abstract: We introduce CURDIS, a template for algorithms that discretize arcs of regular curves by incrementally producing a list of support pixels covering the arc. In this template, algorithms proceed by finding the tangent quadrant at each point of the arc and determining on which side the curve exits the pixel according to a tailored criterion. These two elements can be adapted to any type of curve, leading to algorithms dedicated to the shape of specific curves. While the calculation of the tangent quadrant for various curves, such as lines, conics, or cubics, is simple, it is more complex to analyze how pixels are traversed by the curve. In the case of conic arcs, we found a criterion for determining the pixel exit side. This leads us to present a new algorithm, called CURDIS-C, specific to the discretization of conics, for which we provide all the details. Surprisingly, the criterion for conics requires between one and three sign tests and four additions per pixel, making the algorithm efficient for resource-constrained systems and feasible for fixed-point or integer arithmetic implementations. Our algorithm also handles the pathological cases in which the conic intersects a pixel twice or changes quadrants multiple times within a pixel, achieving this generality at the cost of computing at most two square roots per arc. We illustrate the use of CURDIS for the discretization of different curves, such as ellipses, hyperbolas, and parabolas, even when they degenerate into lines or corners.
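The exit-side criterion of CURDIS-C is not reproduced here. As a hedged illustration of the general task, incrementally listing the support pixels of a conic arc, the Python sketch below walks a parametric arc and refines the parameter step whenever a sample would skip a pixel. The sampling strategy, step sizes and the ellipse example are assumptions for illustration only and do not match the paper's sign-test criterion.

# A generic sketch of listing support pixels of a parametric conic arc.
# It is NOT the CURDIS-C exit-side criterion; it simply refines the parameter step
# until consecutive samples never skip a pixel, then records each new pixel once.
import math

def support_pixels(curve, t0, t1, max_halvings=40):
    """curve(t) -> (x, y). Returns the 8-connected pixels visited between t0 and t1."""
    pixels = []
    t = t0
    px, py = (math.floor(c) for c in curve(t0))
    pixels.append((px, py))
    dt = (t1 - t0) / 64.0                       # initial guess for the parameter step
    while t < t1:
        step = min(dt, t1 - t)
        while True:
            x, y = curve(t + step)
            nx, ny = math.floor(x), math.floor(y)
            if max(abs(nx - px), abs(ny - py)) <= 1 or step < (t1 - t0) / 2 ** max_halvings:
                break
            step *= 0.5                         # too far: the sample skipped a pixel
        t += step
        if (nx, ny) != (px, py):
            pixels.append((nx, ny))
            px, py = nx, ny
    return pixels

# Example: a quarter of the ellipse x = 10 cos t, y = 6 sin t.
arc = support_pixels(lambda t: (10 * math.cos(t), 6 * math.sin(t)), 0.0, math.pi / 2)
print(len(arc), arc[:5])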
Funding: The Independent Research Project of the State Key Laboratory of Resource and Environmental Information System, No. O88RA100SA; the Third Innovative and Cutting-edge Project of the Institute of Geographic Sciences and Natural Resources Research, CAS, No. O66U0309SZ.
Abstract: Rasterization is a conversion process accompanied by information loss, including the loss of features' shape, structure, position, and attributes. Two chief factors that affect the estimation of attribute accuracy loss in rasterization are grid cell size and the evaluating method: attribute accuracy loss is closely related to grid cell size, and it is also influenced by the evaluating method. It is therefore worthwhile to analyze these two influencing factors together. Taking land cover data of Sichuan at the scale of 1:250,000 in 2005 as a case, and considering the data volume and processing time of the study region, this study selects 16 spatial scales from 600 m to 30 km, applies the rasterizing method based on the Rule of Maximum Area (RMA) in ArcGIS, and uses two evaluating methods of attribute accuracy loss, the Normal Analysis Method (NAM) and a new Method Based on Grid Cell (MBGC), to compare the scale effect of attribute (here, area) accuracy loss at the 16 scales. The results show that: (1) At the same scale, the average area accuracy loss of the entire study region evaluated by MBGC is significantly larger than the one estimated using NAM, and the discrepancy between the two is obvious in the range of 1 km to 10 km; when the grid cell is larger than 10 km, the average area accuracy losses calculated by the two methods are stable and even tend to run parallel. (2) MBGC can not only estimate RMA rasterization attribute accuracy loss accurately, but can also express the spatial distribution of the loss objectively. (3) The suitable scale domain for RMA rasterization of the 1:250,000 land cover data of Sichuan in 2005 is equal to or less than 800 m, for which the data volume is manageable, the processing time is not too long, and the area accuracy loss is less than 2.5%.
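The following Python sketch illustrates the per-cell idea behind a grid-cell-based evaluation: for each coarse cell it applies the Rule of Maximum Area and records the share of area lost to minority classes. A fine-resolution class raster stands in for the vector land cover data, and the toy data, block size and function names are assumptions; this is not the MBGC procedure itself.

# A simplified numpy sketch: assign the majority class per coarse cell (RMA) and
# measure the area fraction of minority classes that is lost in that cell.
import numpy as np

def rma_area_loss(fine_classes, block):
    """fine_classes: 2-D integer class raster; block: coarse cell size in fine pixels.
    Returns (coarse_class, loss_fraction) per coarse cell."""
    h, w = fine_classes.shape
    ch, cw = h // block, w // block
    coarse = np.zeros((ch, cw), dtype=fine_classes.dtype)
    loss = np.zeros((ch, cw))
    for i in range(ch):
        for j in range(cw):
            cell = fine_classes[i * block:(i + 1) * block, j * block:(j + 1) * block]
            values, counts = np.unique(cell, return_counts=True)
            coarse[i, j] = values[np.argmax(counts)]        # RMA assignment
            loss[i, j] = 1.0 - counts.max() / cell.size     # per-cell share of area lost
    return coarse, loss

rng = np.random.default_rng(0)
fine = rng.integers(0, 4, size=(120, 120))                  # toy 4-class land cover raster
coarse, loss = rma_area_loss(fine, block=30)
print("mean per-cell area loss: %.1f%%" % (100 * loss.mean()))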
Funding: Supported by the National Natural Science Foundation of China (Grant No. 42061004), the Joint Special Project of Agricultural Basic Research of Yunnan Province (Grant No. 202101BD070001093), and the Youth Special Project of the Xingdian Talent Support Program of Yunnan Province.
Abstract: Abrupt near-surface temperature changes in mountainous areas are a special component of the mountain climate system. Fast and accurate measurement of the locations, intensity, and width of these near-surface changes is necessary but highly difficult due to complicated environmental conditions and instrumental issues. This paper develops a spatial pattern recognition method to measure the near-surface high temperature increase (NSHTI), one of the less-studied changes. First, raster window measurement is proposed to calculate the temperature lapse rate using MODIS land surface temperature and SRTM DEM data. It fully considers the terrain heights of two neighboring cells on opposite or adjacent slopes within a moving window of 3×3 cells. Second, threshold selection is performed to identify NSHTI cells using a threshold of -0.65°C/100 m. Then, the NSHTI strips are parameterized through raster vectorization and spatial analysis. Taking Yunnan, a mountainous province in southwestern China, as the study area, the results indicate that NSHTI cells concentrate in a strip-like pattern along the mountains and valleys, and the strips are almost parallel to the altitude contours with a slight northward uplift. They are located mostly at three-fifths of the height of high mountains or within 400 m of the valley floors, where the controlling topographic index is the altitude of the terrain trend surface rather than absolute elevation, topographic uplift height, or cutting depth. Additionally, NSHTI intensity varies with geographic location, the proportions increase with an exponential trend, and the horizontal width has a mean of about 1000 m and a maximum of over 5000 m. The results demonstrate that the proposed method can effectively recognize NSHTI boundaries over mountains, providing support for the modeling of weather and climate systems and the development of mountain resources.
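As a hedged illustration of the raster window measurement, the Python sketch below estimates a per-cell lapse rate as the least-squares slope of land surface temperature against elevation over each 3×3 window and compares it with the -0.65°C/100 m threshold. The synthetic rasters and the direction of the threshold test are assumptions; the paper's exact flagging rule may differ.

# A hedged numpy sketch of the raster-window idea: per-cell lapse rate from the
# 3x3 neighbourhood of LST and DEM rasters, then a threshold test.
import numpy as np

def window_lapse_rate(lst, dem):
    """Least-squares slope of LST against elevation over each 3x3 window (degC per metre)."""
    h, w = lst.shape
    rate = np.full((h, w), np.nan)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            t = lst[i - 1:i + 2, j - 1:j + 2].ravel()
            z = dem[i - 1:i + 2, j - 1:j + 2].ravel()
            dz = z - z.mean()
            denom = (dz ** 2).sum()
            if denom > 0:
                rate[i, j] = (dz * (t - t.mean())).sum() / denom
    return rate

rng = np.random.default_rng(1)
dem = np.cumsum(rng.normal(0, 5, size=(60, 60)), axis=1) + 2000.0   # toy terrain (m)
lst = 25.0 - 0.0065 * (dem - 2000.0) + rng.normal(0, 0.3, size=dem.shape)
rate = window_lapse_rate(lst, dem)
nshti = rate * 100.0 > -0.65     # candidate cells, compared with the -0.65 degC/100 m threshold
print("candidate NSHTI cells:", int(np.count_nonzero(nshti)))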
Funding: Under the auspices of the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA05050000) and the Special Program for Informatization of the Chinese Academy of Sciences (No. INF0-115-C01-SDB3-02).
Abstract: Vector-to-raster conversion is a process accompanied by errors. The errors are classified into errors predicted before rasterization and actual errors measured after it. Accurate prediction of the errors is beneficial for developing reasonable rasterization schemes and for producing high-quality products. Analyzing and establishing a quantitative relationship between the error and its affecting factors is the key to error prediction. In this study, land cover data of China at a scale of 1:250,000 were taken as an example for analyzing the relationship between rasterization errors and the density of arc length (DA), the density of polygons (DP), and the size of grid cells (SG). Significant correlations were found between the errors and DA, DP, and SG. The coefficient of determination (R²) of a model established from samples collected in a small region (Beijing) reaches 0.95, and R² equals 0.91 when the model is validated with samples from the whole nation. Conversely, the R² of a model established from nationwide samples reaches 0.96, and R² equals 0.91 when it is validated with the samples in Beijing. These models describe well the relationships between rasterization errors and their affecting factors (DA, DP, and SG). The analysis method established in this study can be applied to effectively predict rasterization errors in other cases as well.
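A minimal sketch of the kind of relationship fitted in the study follows: an ordinary least squares regression of rasterization error on DA, DP and SG. The coefficients, sample sizes and data below are synthetic and purely illustrative; the paper's fitted model form is not reproduced.

# Fit error ~ DA + DP + SG with ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n = 200
DA = rng.uniform(0.1, 5.0, n)       # arc length density of the sample regions (toy)
DP = rng.uniform(0.01, 1.0, n)      # polygon density (toy)
SG = rng.uniform(0.1, 10.0, n)      # grid cell size (toy, km)
error = 0.8 * DA + 2.0 * DP + 0.5 * SG + rng.normal(0, 0.3, n)   # synthetic response

X = np.column_stack([np.ones(n), DA, DP, SG])
beta, *_ = np.linalg.lstsq(X, error, rcond=None)
pred = X @ beta
r2 = 1.0 - ((error - pred) ** 2).sum() / ((error - error.mean()) ** 2).sum()
print("coefficients:", np.round(beta, 3), " R^2 = %.3f" % r2)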
Abstract: Background: This paper presents an intelligent path planner for lifting tasks performed by tower cranes in highly complex environments, such as old industrial plants built many decades ago and sites used as temporary storage spaces. Generally, these environments do not have workable digital models, and 3D representations are impractical to produce by conventional means. Methods: The investigation introduces cutting-edge laser scanning technology to convert real environments into virtualized versions of the construction sites or plants in the form of point clouds. The challenge lies in dealing with the large point cloud datasets produced by the multiple scans needed for a complete virtualized model. The tower crane is also virtualized for the purpose of path planning. A parallelized genetic algorithm is employed to achieve intelligent path planning for the lifting tasks performed by tower cranes in complicated environments, taking advantage of graphics processing unit technology, which offers high computing performance at low cost. Results: Optimal lifting paths are generated in several seconds.
Funding: Jointly sponsored by the Institute of Mountain Hazards and Environment and the Research Center of Sichuan County Economy Development, with financial support from the National Natural Science Foundation of China (Grants No. 41571523, 41661144038, 41671152), the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (Grant No. 2014BAC05B01), and the Major Base Planning Projects of Sichuan Social Science (Grant No. SC18EZD050).
Abstract: As an important component of China's transportation system, transport in the Qinghai-Tibet Plateau (QTP) long suffered from insufficient capacity, a bottleneck restricting economic growth and social development in this area. Nevertheless, the implementation of the western development strategy has accelerated the preliminary construction of a comprehensive transport network since 2000. Because of the large area and significant geographical heterogeneity, there is a growing need to understand the relationship between transportation and economic development from the perspective of spatial difference. Using GIS-based raster analysis and the Geographically Weighted Regression (GWR) model, we investigated the spatial-temporal distribution of highway, railway and airport accessibility, respectively, and estimated the correlation and heterogeneity between transport accessibility and the level of economic development. Results revealed that: (1) Transport accessibility in the QTP improved by 53.38% in the past 15 years, embodied mainly in the improvement of both highways and railways. (2) Accessibility presented prominent spatial differentiation, increasing from west to east and decreasing with elevation; specifically, the areas with the best highway accessibility lie below 4000 m above sea level, while areas above 4000 m have the lowest aviation time cost. (3) In general, a long weighted-average time cost to critical transport facilities had a significantly negative effect on county economic growth in the QTP; more positively, this adverse effect gradually weakened over time. (4) Obvious heterogeneity exists in the influence of different transport accessibility factors on the level of economic development, both in horizontal space and across altitudinal belts. Therefore, region-specific policies should be adopted for the sustainable development of transport facilities and the economy in these western mountain areas.
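As a hedged sketch of the GWR step, the Python code below fits a separate weighted least-squares regression at every location, with Gaussian kernel weights that decay with distance. The bandwidth, the toy accessibility and economic variables, and all names are assumptions for illustration only.

# A compact sketch of geographically weighted regression (GWR) on synthetic data.
import numpy as np

def gwr(coords, X, y, bandwidth):
    """Return one coefficient vector per observation location."""
    n, k = X.shape
    betas = np.zeros((n, k))
    for i in range(n):
        d2 = ((coords - coords[i]) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))            # Gaussian spatial kernel
        Xw = X * w[:, None]
        betas[i] = np.linalg.solve(X.T @ Xw, Xw.T @ y)      # weighted normal equations
    return betas

rng = np.random.default_rng(3)
n = 300
coords = rng.uniform(0, 100, size=(n, 2))                   # county centroids (toy)
access = rng.uniform(1, 10, n)                               # weighted average time cost (toy)
X = np.column_stack([np.ones(n), access])
# The effect of accessibility on the (log) economic indicator varies smoothly in space.
local_slope = -0.5 - 0.3 * coords[:, 0] / 100.0
y = 5.0 + local_slope * access + rng.normal(0, 0.2, n)
betas = gwr(coords, X, y, bandwidth=15.0)
print("local slope range:", betas[:, 1].min().round(2), "to", betas[:, 1].max().round(2))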
Funding: The research is jointly supported by the National Natural Science Foundation of China under Grants No. 40171016 and 49794030.
Abstract: On the basis of Digital Elevation Model data, the raster flow vectors, watershed delineation, and spatial topological relationships are generated by the Martz and Garbrecht method for the area upstream of Huangnizhuang station in the Shihe Catchment, an intensified observation field for the HUBEX/GAME Project with an area of 805 km². The Xin'anjiang Model is then applied for runoff production in each grid element, with radar-measured rain data from Fuyang station as the input to the hydrological model. The elements are connected by flow vectors to the outlet of the drainage catchment, and runoff is routed from each grid element to the outlet by the Muskingum method according to the distance between the grid element and the outlet. The Nash-Sutcliffe model efficiency coefficient is 92.41% from 31 May to 3 August 1998, and 85.64%, 86.62%, 92.57%, and 83.91%, respectively, for the 1st, 2nd, 3rd, and 4th flood events during the whole computational period. Compared with the case where rain-gauge data are used to simulate the hourly hydrograph at Huangnizhuang station, the index of model efficiency improvement is positive, ranging from 27.56% to 69.39%. This supports the claim that radar-measured data are superior to rain-gauge data as inputs to hydrological modeling. As a result, the grid-based hydrological model provides a good platform for runoff computation when radar-measured rain data with high spatiotemporal resolution are taken as the input of the hydrological model.
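Two computational pieces named in this abstract can be illustrated compactly: Muskingum routing of a grid element's hydrograph toward the outlet, and the Nash-Sutcliffe efficiency used to score the simulated hydrograph. The parameter values (K, x) and the synthetic flood wave below are assumptions, not the study's calibrated values.

# Hedged illustrations of Muskingum routing and the Nash-Sutcliffe efficiency.
import numpy as np

def muskingum(inflow, K=3.0, x=0.2, dt=1.0):
    """Route an inflow hydrograph through one reach (K and dt in hours)."""
    denom = 2 * K * (1 - x) + dt
    c0 = (dt - 2 * K * x) / denom
    c1 = (dt + 2 * K * x) / denom
    c2 = (2 * K * (1 - x) - dt) / denom
    out = np.zeros_like(inflow)
    out[0] = inflow[0]
    for t in range(1, len(inflow)):
        out[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[t - 1]
    return out

def nash_sutcliffe(obs, sim):
    return 1.0 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

t = np.arange(0, 72, 1.0)
inflow = 5.0 + 80.0 * np.exp(-((t - 20.0) / 6.0) ** 2)      # toy flood wave
outflow = muskingum(inflow)
print("NSE of routed vs. raw wave: %.3f" % nash_sutcliffe(inflow, outflow))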
Abstract: Forest inventories based on remote sensing often interpret stand characteristics for small raster cells instead of traditional stand compartments. This is the case, for instance, in the Lidar-based and multi-source forest inventories of Finland, where the interpretation units are 16 m × 16 m grid cells. Using these cells as simulation units in forest planning would lead to very large planning problems. This difficulty could be alleviated by aggregating the grid cells into larger homogeneous segments before planning calculations. This study developed a cellular automaton (CA) for aggregating grid cells into larger calculation units, which in this study were called stands. The criteria used in stand delineation were the shape and size of the stands and the homogeneity of stand attributes within the stand. The stand attributes were: main site type (upland or peatland forest), site fertility, mean tree diameter, mean tree height and stand basal area. In the CA, each cell was joined to one of its adjacent stands for several iterations, until the cells formed a compact layout of homogeneous stands. The CA had several parameters. Due to the high number of possible parameter combinations, particle swarm optimization was used to find the optimal set of parameter values. Parameter optimization aimed at minimizing within-stand variation and maximizing between-stand variation in stand attributes. When the CA was optimized without any restrictions on its parameters, the resulting stand delineation consisted of small and irregular stands. A clean layout of larger and compact stands was obtained when the CA parameters were optimized with constrained parameter values and the layout was penalized as a function of the number of small stands (<0.1 ha). However, there was within-stand variation in fertility class due to small-scale variation in the data. The stands delineated by the CA explained 66–87% of the variation in stand basal area, mean tree height and mean diameter, and 41–92% of the variation in the fertility class of the site. It was concluded that the CA developed in this study is a flexible new tool which could be used immediately in forest planning.
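A much-reduced Python sketch of the cellular automaton idea follows: every interior cell repeatedly joins one of the stands present in its 3×3 neighbourhood, scoring candidates by attribute similarity plus a local-majority bonus that favours compact shapes. The single attribute, the seeding scheme, the weights and the stopping rule are assumptions; the paper optimizes several attributes and parameters with particle swarm optimization.

# A toy cell-to-stand aggregation CA; parameters and scoring are illustrative only.
import numpy as np

def ca_aggregate(attr, n_seeds=40, iters=10, w_shape=0.5, seed=4):
    rng = np.random.default_rng(seed)
    h, w = attr.shape
    # Seed stands at random cells, then give every cell the label of its nearest seed.
    seeds = np.column_stack([rng.integers(0, h, n_seeds), rng.integers(0, w, n_seeds)])
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy[..., None] - seeds[:, 0]) ** 2 + (xx[..., None] - seeds[:, 1]) ** 2
    labels = d2.argmin(axis=2)

    def stand_means(lab):
        return np.array([attr[lab == s].mean() if (lab == s).any() else np.inf
                         for s in range(n_seeds)])

    mean_attr = stand_means(labels)
    for _ in range(iters):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                neigh = labels[i - 1:i + 2, j - 1:j + 2].ravel()
                cands, counts = np.unique(neigh, return_counts=True)
                # Join the neighbouring stand most similar in attribute value, with a
                # bonus for the locally dominant stand (keeps stands compact).
                score = -np.abs(attr[i, j] - mean_attr[cands]) + w_shape * counts
                labels[i, j] = cands[score.argmax()]
        mean_attr = stand_means(labels)
    return labels

rng = np.random.default_rng(4)
diameter = rng.normal(20, 4, size=(80, 80)) + 6.0 * (np.indices((80, 80))[0] > 40)
stands = ca_aggregate(diameter)
print("stands remaining:", len(np.unique(stands)))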
Funding: Open access funding provided by University of Eastern Finland (UEF), including Kuopio University Hospital.
Abstract: Raster-type forest inventory data, with site and growing stock variables interpreted for small square grid cells, are increasingly available for forest planning. In Finland, there are two sources of this type of lattice data: the multi-source national forest inventory and the inventory based on airborne laser scanning (ALS). In both cases, stand variables are interpreted for 16 m × 16 m cells. Both data sources cover all private forests of Finland and are freely available for forest planning. This study analyzed different ways to use the ALS raster data in forest planning. The analyses were conducted for a grid of 375 × 375 cells (140,625 cells, of which 97,893 were productive forest). The basic alternatives were to use the cells as calculation units throughout the planning process, or to aggregate the cells into segments before planning calculations. The use of cells made it necessary to use spatial optimization to aggregate cuttings and other treatments into blocks that were large enough for the practical implementation of the plan. In addition, allowing premature cuttings in a part of the cells was a prerequisite for compact treatment areas. The use of segments led to 5–9% higher growth predictions than calculations based on cells. In addition, the areas of the most common fertility classes were overestimated and the areas of rare site classes were underestimated when segments were used. The shape of the treatment blocks was more irregular in cell-based planning. Using cells as calculation units instead of segments made the computing time of the whole planning process about 20 times longer than with segments when the number of grid cells was approximately 100,000.
Abstract: This paper studies an urban waterlog-draining decision support system based on the 4D data fusion technique. The 4D data comprise DEM, DOQ, DLG and DRG, which, together with a non-spatial fundamental database, supply the complete databases needed for waterlog forecasting and analysis. Data composition and reasoning are the two key steps of 4D data fusion. Finally, the paper presents a real case, the Ezhou Waterlog-Draining Decision Support System (EWDSS), with two application models: a DEM application model and a water generation and drainage model.
Funding: Project supported by the National Natural Science Foundation of China (No. 49871066).
Abstract: Current GIS can only deal with 2-D or 2.5-D information on the earth's surface. A new 3-D data structure and data model need to be designed for 3-D GIS. This paper analyzes diverse 3-D spatial phenomena, from mining to geology, and their complicated relations, and proposes several new kinds of spatial objects, including the cross-section, the column body and the digital surface model, to represent special spatial phenomena such as tunnels and the irregular surfaces of an ore body. An integrated data structure including vector, raster and object-oriented data models is used to represent the various 3-D spatial objects and their relations. The integrated data structure and object-oriented data model can serve as the basis for designing and realizing a 3-D geographic information system.
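As a hedged sketch of the proposed object types, the following Python dataclasses outline a cross-section, a column body and a digital surface model held together in one scene. All field names are invented for illustration; the paper's integrated vector/raster/object-oriented structure is far richer.

# Illustrative object-oriented containers for the 3-D spatial objects named above.
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class CrossSection:
    plane_origin: Point3D
    plane_normal: Point3D
    outline: List[Tuple[float, float]]          # 2-D polygon in the section plane

@dataclass
class ColumnBody:
    footprint: List[Tuple[float, float]]        # horizontal polygon
    z_bottom: float
    z_top: float                                # e.g. a tunnel segment or ore column

@dataclass
class DigitalSurfaceModel:
    origin: Tuple[float, float]
    cell_size: float
    heights: List[List[float]]                  # raster of surface elevations

@dataclass
class Scene3D:
    sections: List[CrossSection] = field(default_factory=list)
    columns: List[ColumnBody] = field(default_factory=list)
    surfaces: List[DigitalSurfaceModel] = field(default_factory=list)

scene = Scene3D()
scene.columns.append(ColumnBody([(0, 0), (10, 0), (10, 5), (0, 5)], z_bottom=-120.0, z_top=-80.0))
print(len(scene.columns), "column body in the scene")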
Funding: Supported as part of the Technical Quality Improvement Programme (TEQIP).
Abstract: A new Runge-Kutta (RK) fourth-order, four-stage embedded method with error control is presented in this paper for raster simulation in a cellular neural network (CNN) environment. Through a versatile algorithm, a single-layer/raster CNN array is implemented by incorporating the proposed technique. Simulation results have been obtained, and a comparison has been carried out to show the efficiency of the proposed numerical integration algorithm. The analytic expressions for the local truncation error and the global truncation error are derived. It is seen that the RK-embedded root-mean-square method outperforms the RK-embedded Heronian-mean and RK-embedded harmonic-mean methods.
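The paper's RK-embedded root-mean-square formulas are not reproduced here; the Python sketch below shows only the general error-control mechanism such embedded methods rely on: take a fourth-order Runge-Kutta step, estimate the local truncation error (here by step doubling rather than an embedded pair), and shrink or grow the step to keep that error below a tolerance. The test equation is an invented stand-in for a CNN cell dynamic.

# Generic RK4 integration with step-doubling error control (illustrative only).
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, t1, y0, h=0.1, tol=1e-6):
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t1:
        h = min(h, t1 - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = np.max(np.abs(y_half - y_big))            # local error estimate
        if err <= tol or h < 1e-12:
            t, y = t + h, y_half                        # accept the step
            h *= 1.5 if err < tol / 4 else 1.0          # grow when comfortably accurate
        else:
            h *= 0.5                                    # reject and retry with a smaller step
    return y

# Toy "cell state" equation dy/dt = -y + sin(t), a stand-in for a CNN cell dynamic.
y_end = integrate(lambda t, y: -y + np.sin(t), 0.0, 5.0, [1.0])
print(y_end)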
Funding: Funded by the National Key S&T Special Projects of China (Grant No. 2018YFB0505302) and the National Natural Science Foundation of China (Grant No. 41671380).
Abstract: Snow water equivalent (SWE) is an important factor reflecting the variability of snow. It is important to estimate SWE from remote sensing data while taking spatial autocorrelation into account. Based on a segmentation method, the relationship between SWE and environmental factors in the central part of the Tibetan Plateau was explored using the eigenvector spatial filtering (ESF) regression model, and the influence of different factors on SWE was examined. Three block sizes, 16×16, 24×24 and 32×32, were selected to segment the raster datasets into blocks. The eigenvectors of the spatial adjacency matrix for each segmented size were selected and added to the model as spatial factors, and an ESF regression model was constructed for each block in parallel. Results show that precipitation has a great influence on SWE, while surface temperature and NDVI have little influence. Air temperature, elevation and surface temperature have completely different effects in different areas. Compared with the ordinary least squares (OLS) linear regression model, the geographically weighted regression (GWR) model, the spatial lag model (SLM) and the spatial error model (SEM), the ESF model can eliminate spatial autocorrelation with the highest accuracy. As the segmentation size increases, the complexity of the ESF model increases, but the accuracy improves.
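A small-scale sketch of eigenvector spatial filtering on one block of cells: build the rook adjacency matrix C, double-centre it (MCM), take its leading eigenvectors as spatial proxies, and add them to an OLS regression of SWE on environmental variables. The block size, the number of eigenvectors retained and the synthetic data are assumptions; the paper's eigenvector selection rule and variables differ in detail.

# A hedged ESF sketch for a single 16 x 16 block of grid cells.
import numpy as np

def rook_adjacency(n):
    """Adjacency matrix of an n x n grid of cells (rook contiguity)."""
    C = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            if i + 1 < n:
                C[k, k + n] = C[k + n, k] = 1
            if j + 1 < n:
                C[k, k + 1] = C[k + 1, k] = 1
    return C

n = 16                                           # one 16 x 16 block, as in the abstract
N = n * n
C = rook_adjacency(n)
M = np.eye(N) - np.ones((N, N)) / N
vals, vecs = np.linalg.eigh(M @ C @ M)           # symmetric, so eigh is appropriate
E = vecs[:, np.argsort(vals)[::-1][:10]]         # 10 leading eigenvectors as spatial terms

rng = np.random.default_rng(5)
precip = rng.uniform(0, 50, N)                   # toy environmental variables
elev = rng.uniform(4000, 5500, N)
spatial = E[:, 0] * 8.0                          # synthetic spatially structured component
swe = 2.0 + 0.15 * precip - 0.001 * elev + spatial + rng.normal(0, 0.5, N)

X = np.column_stack([np.ones(N), precip, elev, E])
beta, *_ = np.linalg.lstsq(X, swe, rcond=None)
print("precipitation coefficient: %.3f" % beta[1])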
Funding: Project (No. 2002AA1Z1190) supported by the National Hi-Tech Research and Development Program (863) of China.
Abstract: This paper presents a macroblock-level (MB-level) decoding and deblocking method for supporting flexible macroblock ordering (FMO) and arbitrary slice ordering (ASO) bit streams in an H.264 decoder, together with its SOC/ASIC implementation. By searching for the slice containing the current macroblock in the bit stream and switching slices correctly, MBs can be decoded in raster-scan order, and the decoding process can begin as soon as the slice containing the current MB is available. This architectural modification enables a 3-stage MB-level decoding and deblocking pipeline and saves about 20% of SDRAM bandwidth. Implementation results show that the design achieves real-time decoding of 1080HD (1920×1088@30 fps) at a system clock of 166 MHz.
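A highly simplified Python model of the slice-switching idea: given slices that arrive out of order (ASO) and interleave macroblocks (FMO), build a lookup from macroblock address to slice so that macroblocks can still be processed in raster-scan order. Real H.264 derives this mapping from the slice-group map signalled in the picture parameter set; the structures below are invented for illustration.

# Toy model of decoding macroblocks in raster-scan order across interleaved slices.
def build_mb_to_slice(slices):
    """slices: list of lists of macroblock addresses (one list per received slice)."""
    table = {}
    for s, mbs in enumerate(slices):
        for mb in mbs:
            table[mb] = s
    return table

def decode_in_raster_order(slices, total_mbs):
    table = build_mb_to_slice(slices)
    current = None
    for mb in range(total_mbs):                  # raster-scan macroblock order
        s = table[mb]
        if s != current:                         # slice switch: reload that slice's context
            current = s
        yield mb, s                              # stand-in for "decode MB mb from slice s"

# Two slice groups in a checkerboard-like FMO pattern, received out of order (ASO).
slices = [[1, 3, 5, 7], [0, 2, 4, 6]]
print(list(decode_in_raster_order(slices, 8)))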