Understanding the mechanisms and risks of forest fires by building a spatial prediction model is an important means of controlling forest fires. Non-fire point data are important training data for constructing a model, and their quality significantly impacts the prediction performance of the model. However, non-fire point data obtained using existing sampling methods generally suffer from low representativeness. Therefore, this study proposes a non-fire point data sampling method based on geographical similarity to improve the quality of non-fire point samples. The method is based on the idea that the less similar the geographical environment between a sample point and an already occurred fire point, the greater the confidence in its being a non-fire point sample. Yunnan Province, China, with a high frequency of forest fires, was used as the study area. We compared the prediction performance of traditional sampling methods and the proposed method using three commonly used forest fire risk prediction models: logistic regression (LR), support vector machine (SVM), and random forest (RF). The results show that the modeling and prediction accuracies of the forest fire prediction models established with the proposed sampling method are significantly improved compared with those of the traditional sampling method. Specifically, in 2010 the modeling and prediction accuracies improved by 19.1% and 32.8%, respectively, and in 2020 they improved by 13.1% and 24.3%, respectively. Therefore, we believe that collecting non-fire point samples based on the principle of geographical similarity is an effective way to improve the quality of forest fire samples, and thus enhance the prediction of forest fire risk.
In order to enhance modeling efficiency and accuracy, we utilized 3D laser point cloud data for indoor space modeling. Point cloud data were obtained with a 3D laser scanner and optimized with Autodesk Recap and Revit software to extract geometric information about the indoor environment. Furthermore, we proposed a method for constructing indoor elements based on parametric components. The research outcomes of this paper will offer new methods and tools for indoor space modeling and design. The approach of indoor space modeling based on 3D laser point cloud data and parametric component construction can enhance modeling efficiency and accuracy, providing architects, interior designers, and decorators with a better working platform and design reference.
For the accurate extraction of cavity decay time, a selection of data points is supplemented to the weighted least squares method. We derive the expected precision, accuracy and computation cost of this improved method, and examine these performances by simulation. By comparing this method with the nonlinear least squares fitting (NLSF) method and the linear regression of the sum (LRS) method in derivations and simulations, we find that this method can achieve the same or even better precision, comparable accuracy, and lower computation cost. We test this method on experimental decay signals. The results are in agreement with those obtained from the nonlinear least squares fitting method.
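The weighted least-squares idea can be sketched for a ring-down signal y(t) = A·exp(−t/τ): taking logarithms turns the fit into a linear regression, and weighting each sample by y² counters the noise amplification the log transform introduces at small amplitudes. This is an illustrative sketch of the general technique under those assumptions, not the paper's exact point-selection scheme; the function name and weighting choice are ours.

```python
import numpy as np

def fit_decay_time(t, y):
    """Estimate tau of y = A*exp(-t/tau) by weighted linear least
    squares on ln(y). Weights ~ y**2 compensate for the noise
    amplification that the log transform causes at small y."""
    mask = y > 0                       # ln() needs positive samples
    t, y = t[mask], y[mask]
    W = np.diag(y ** 2)
    X = np.column_stack([np.ones_like(t), t])
    # Weighted normal equations: (X^T W X) beta = X^T W ln(y)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.log(y))
    return -1.0 / beta[1]              # slope of ln(y) is -1/tau
```

On a noise-free signal this recovers τ exactly; on noisy data the weighting keeps the estimate close to the nonlinear fit at a fraction of the cost.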
Multi-view laser radar (ladar) data registration in obscured environments is an important research field in obscured-target detection from air to ground. There are few overlap regions among the observational data from different views because of the occluder, so multi-view data registration is rather difficult. Through in-depth analyses of the typical methods and problems, it is found that sequential registration is more appropriate but needs improved registration accuracy. On this basis, a multi-view data registration algorithm based on aggregating the adjacent frames that are already registered is proposed. It increases the overlap region between the frames pending registration by aggregation and thereby further improves the registration accuracy. The experimental results show that the proposed algorithm can effectively register multi-view ladar data in the obscured environment, and it also has greater robustness and higher registration accuracy than sequential registration under the condition of equivalent operating efficiency.
Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. As an innovative technique, Terrestrial Laser Scanning (TLS) can collect high-density and high-accuracy point cloud data in a few minutes, which provides promising applications in tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by point cloud division, and the ground points are then removed by defining an elevation width of 0.5 m. Next, a z-score method is introduced to detect and remove the outliers. Because the standard shape of a tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, the convergence analysis is made at angles of 0°, 30° and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired with a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, which is in agreement with the measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
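Two steps of this pipeline, z-score outlier rejection and least-squares circle fitting, can be sketched as follows. The circle fit uses the algebraic (Kåsa) formulation; the abstract does not specify which least-squares formulation the authors use, so treat this as one plausible implementation.

```python
import numpy as np

def zscore_filter(values, k=3.0):
    """Keep values whose z-score magnitude is below k."""
    z = (values - values.mean()) / values.std()
    return np.abs(z) < k

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve the linear
    system c0*x + c1*y + c2 = x^2 + y^2, where c0 = 2*cx,
    c1 = 2*cy and c2 = r^2 - cx^2 - cy^2."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + cx ** 2 + cy ** 2)
    return cx, cy, r
```

In practice the cross-section points would first be filtered on their radial distances with `zscore_filter`, and the remaining points passed to `fit_circle` for convergence analysis.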
Taking AutoCAD 2000 as the platform, an algorithm for the reconstruction of surfaces from scattered data points based on VBA is presented. With this core technology, customers can move beyond using traditional AutoCAD as an electronic drawing board and begin to create actual presentations of real-world objects. VBA is not only a very powerful development tool but also has very simple syntax. Working with the solids, objects and commands of AutoCAD 2000, VBA notably simplifies previously complex algorithms, graphical presentation and processing. Meanwhile, it can avoid the complex data structures and data formats that arise in reverse design with other modeling software. Applying VBA to reverse engineering can greatly improve modeling efficiency and facilitate surface reconstruction.
According to the requirements of heterogeneous object modeling in additive manufacturing (AM), the Non-Uniform Rational B-Spline (NURBS) method is applied in this paper to the digital representation of heterogeneous objects. By putting forward a NURBS material data structure and establishing a heterogeneous NURBS object model, an accurate, unified mathematical representation of analytical and free-form heterogeneous objects is realized. With the inverse modeling of heterogeneous NURBS objects, the geometry and material distribution can be better designed to meet actual needs. The Radial Basis Function (RBF) method based on global surface reconstruction and the tensor product surface interpolation method are combined into an RBF-NURBS inverse construction method. The geometric and/or material information of regular mesh points is obtained by RBF interpolation of scattered data, and the heterogeneous NURBS surface or object model is obtained by tensor product interpolation. The examples show that heterogeneous objects fitting scattered data points can be generated effectively by the inverse construction methods in this paper, and 3D CAD models for additive manufacturing can be provided.
Parameterization is one of the key problems in the construction of a curve interpolating a set of ordered points. We propose a new local parameterization method based on a curvature model in this paper. The new method determines the knots by minimizing the maximum curvature of a quadratic curve. When the knots given by the new method are used to construct the interpolation curve, the constructed curve has good precision. We also give some comparisons of the new method with existing methods; our method performs better in interpolation error, and the interpolated curve is fairer.
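For context, the classical chord-length parameterization that new knot-placement methods are typically compared against can be written in a few lines (the paper's curvature-based knot choice itself is more involved and is not reproduced here):

```python
import numpy as np

def chord_length_params(points):
    """Normalized chord-length parameterization of ordered points:
    parameter spacing is proportional to the distance between
    consecutive points, scaled to the interval [0, 1]."""
    d = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)])
    return t / t[-1]
```

Knot vectors like this one feed directly into spline interpolation; the paper's contribution is choosing the knots so the resulting quadratic segments have minimal maximum curvature instead.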
Plant height can be used for assessing plant vigor and predicting biomass and yield. Manual measurement of plant height is time-consuming and labor-intensive. We describe a method for measuring maize plant height using an RGB-D camera that captures a color image and depth information of plants under field conditions. The color image was first processed to locate its central area using the S component in HSV color space and the Density-Based Spatial Clustering of Applications with Noise algorithm. Testing showed that the central areas of plants could be accurately located. The point cloud data were then clustered and the plant was extracted based on the located central area. The point cloud data were further processed to generate skeletons, whose end points were detected and used to extract the highest points of the central leaves. Finally, the height differences between the ground and the highest points of the central leaves were calculated to determine plant heights. The coefficients of determination for plant heights manually measured and estimated by the proposed approach were all greater than 0.95. The method can effectively extract the plant from overlapping leaves and estimate its height. The proposed method may facilitate maize height measurement and monitoring under field conditions.
Leaf normal distribution is an important structural characteristic of the forest canopy. Although terrestrial laser scanners (TLS) have potential for estimating canopy structural parameters, distinguishing between leaves and nonphotosynthetic structures to retrieve the leaf normal has been challenging. We used here an approach to accurately retrieve the leaf normals of camphorwood (Cinnamomum camphora) using TLS point cloud data. First, nonphotosynthetic structures were filtered by using the curvature threshold of each point. Then, the point cloud data were segmented by a voxel method and clustered by a Gaussian mixture model in each voxel. Finally, the normal vector of each cluster was computed by principal component analysis to obtain the leaf normal distribution. We collected leaf inclination angles and estimated their distribution, which we compared with the retrieved leaf normal distribution. The correlation coefficient between measurements and obtained results was 0.96, indicating good agreement.
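The final step, computing each cluster's normal by principal component analysis, amounts to taking the eigenvector of the cluster covariance with the smallest eigenvalue. A minimal sketch (function name ours):

```python
import numpy as np

def cluster_normal(points):
    """Normal of a near-planar point cluster: the eigenvector of the
    covariance matrix belonging to the smallest eigenvalue (PCA)."""
    centered = points - points.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centered.T @ centered)  # ascending eigenvalues
    n = eigvecs[:, 0]                                   # smallest-variance axis
    return n / np.linalg.norm(n)
```

Applied per Gaussian-mixture cluster, this yields one unit normal per leaf patch, from which the leaf normal distribution is assembled.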
As the point cloud of a whole vehicle body has the traits of large geometric dimensions, huge data volume and rigorous reverse-engineering precision, a pretreatment algorithm for automobile body point clouds is put forward. The basic idea of the registration algorithm based on skeleton points is to construct the skeleton points of the whole vehicle model and the mark points of the separate point clouds, to search the mapping between skeleton points and mark points using a congruent-triangle method, and to match the whole vehicle point cloud using the improved iterative closest point (ICP) algorithm. The data reduction algorithm, based on the average square root of distance, condenses the data in three steps: computing each dataset's average square root of distance in a sampling cube grid, sorting according to the value computed in the first step, and choosing a sampling percentage. The accuracy of the two algorithms above is proved by a registration and reduction example on the whole vehicle point cloud of a light truck.
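A simplified version of the grid-based reduction can be sketched as follows: points are bucketed into cubic cells and one centroid is kept per occupied cell. The paper's method additionally ranks cells by their average square root of distance and keeps a chosen sampling percentage; that ranking step is omitted in this sketch, and the function name is ours.

```python
import numpy as np

def grid_reduce(points, cell):
    """Bucket points into cubic cells of side `cell` and keep the
    centroid of each occupied cell (a plain voxel-grid reduction)."""
    keys = np.floor(points / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.empty((counts.size, points.shape[1]))
    for d in range(points.shape[1]):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out
```

The cell size trades reduction ratio against surface detail, which is exactly the role the sampling percentage plays in the paper's scheme.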
Recent applications of digital photogrammetry in forestry have highlighted its utility as a viable mensuration technique. However, in tropical regions little research has been done on the accuracy of this approach for stem volume calculation. In this study, the performance of Structure from Motion photogrammetry for estimating individual tree stem volume was evaluated in relation to traditional approaches. We selected 30 trees from five savanna species growing at the periphery of the W National Park in northern Benin and measured their circumferences at different heights using a traditional tape and clinometer. Stem volumes of sample trees were estimated from the measured circumferences using nine volumetric formulae for solids of revolution, including the cylinder, cone, paraboloid, neiloid and their respective frustums. Each tree was photographed and its stem volume determined using a taper function derived from three-dimensional stem models. This reference volume was compared with the results of the formulaic estimations. Tree stem profiles were further decomposed into different portions, approximately corresponding to the stump, butt logs and logs, and the suitability of each solid of revolution was assessed for simulating the resulting shapes. Stem volumes calculated using the frustums of paraboloid and neiloid formulae were the closest to the reference volumes, with a bias and root mean square error of 8.0% and 24.4%, respectively. Stems closely resembled frustums of a paraboloid and a neiloid. Individual stem portions assumed different solids as follows: frustums of paraboloid and neiloid were more prevalent from the stump to breast height, while a paraboloid closely matched stem shapes beyond this point. Therefore, a more accurate stem volume estimate was attained when stems were considered as a composite of at least three geometric solids.
Metadata, data about other digital objects, play an important role in FAIR, with a direct relation to all FAIR principles. In this paper we present and discuss the FAIR Data Point (FDP), a software architecture aiming to define a common approach to publishing semantically rich and machine-actionable metadata according to the FAIR principles. We present the core components and features of the FDP, its approach to metadata provision, the criteria for evaluating whether an application adheres to the FDP specifications, and the service to register, index and allow users to search for the metadata content of available FDPs.
A parallel algorithm for a circulation numerical model based on the message passing interface (MPI) is developed using serialization and an irregular rectangle decomposition scheme. A neighboring point exchange strategy (NPES) is adopted to further enhance the computational efficiency. Two experiments were conducted on an HP C7000 Blade System; the numerical results show that the parallel version with NPES (PVN) achieves higher efficiency than the original parallel version (PV). The PVN achieves a parallel efficiency in excess of 0.9 in the second experiment when the number of processors increases to 100, while the efficiency of PV decreases rapidly to 0.39. The PVN of the ocean circulation model is used in a fine-resolution regional simulation, which produces better results. The universality of this algorithm makes it potentially applicable to many other ocean models.
A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered as the regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
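The ground-plane fitting step can be illustrated with an ordinary least-squares plane and a residual threshold for flagging rock candidates. The threshold value and function name here are illustrative assumptions, not taken from the paper, which may use a more robust fitting scheme.

```python
import numpy as np

def rock_candidates(points, thresh=0.05):
    """Fit a ground plane z = a*x + b*y + c by least squares, then flag
    points whose vertical residual exceeds `thresh` (in the same units
    as the point cloud) as rock candidates."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residual = points[:, 2] - A @ coef
    return residual > thresh
```

In the paper's pipeline this test is only run inside the image regions classified as rock shadows or large objects, which keeps the plane fit from being dominated by the rocks themselves.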
Rice variety selection and quality inspection are key links in rice planting. Compared with two-dimensional images, three-dimensional information on rice seeds shows their appearance characteristics more comprehensively and accurately. This study proposed a rice variety classification method using three-dimensional point cloud data of the surface of rice seeds combined with a deep learning network to achieve rapid and accurate identification of rice varieties. First, a point cloud collection platform was set up with a Raytrix light field camera as its core to collect three-dimensional point cloud data of the surface of rice seeds. Then, the collected point cloud was filled, filtered and smoothed. After that, the point cloud was segmented based on the RANSAC algorithm and downsampled using a combination of a random sampling algorithm and a voxel grid filtering algorithm. Finally, the processed point cloud was input to an improved PointNet network for feature extraction and variety classification. The improved PointNet network added a cross-level feature connection structure, made full use of features at different levels, and better extracted the surface structure features of rice seeds. After testing, the improved PointNet model had an average classification accuracy of 89.4% for eight varieties of rice, which was 1.2% higher than that of the original PointNet model. The method proposed in this study combined deep learning and point cloud data to achieve efficient classification of rice varieties.
Surfaces of stored grain bulk are often reconstructed from organized point sets containing noise captured by a 3-D laser scanner in an online measuring system. As a result, denoising is an essential procedure in processing point cloud data for more accurate surface reconstruction and grain volume calculation. A classified denoising method is presented in this research for noise removal from point cloud data of the grain bulk surface. Based on the distribution characteristics of the point cloud data, noisy points were divided into three types. The first and second types were either sparse points or small point clouds deviating and suspended from the main point cloud, which could be deleted directly by a grid method; the third type was mixed with the main body of the point cloud and was the most difficult to distinguish. The point cloud data containing these noisy points were projected onto a horizontal plane, and an image denoising method, the discrete wavelet threshold (DWT) method, was applied to delete the third type of noisy points. Three denoising methods, the average filtering method, the median filtering method and the DWT method, were applied and compared for denoising the point cloud data. Experimental results show that the proposed method retains most of the details and obtains the lowest average RMSE (root mean square error, 0.219) as well as the lowest relative error of grain volume (0.086%) compared with the other two methods. Furthermore, the proposed denoising method not only removes noisy points but also adapts to the characteristics of the point cloud data of the grain bulk surface. The results from this research also indicate that the proposed method is effective for denoising noisy points and provides more accurate data for calculating grain volume.
The field of health data management poses unique challenges in relation to data ownership, the privacy of data subjects, and the reusability of data. The FAIR Guidelines have been developed to address these challenges. The Virus Outbreak Data Network (VODAN) architecture builds on these principles, using the European Union's General Data Protection Regulation (GDPR) framework to ensure compliance with local data regulations, while using information knowledge management concepts to further improve data provenance and interoperability. In this article we provide an overview of the terminology used in the field of FAIR data management, with a specific focus on FAIR-compliant health information management as implemented in the VODAN architecture.
An approximating algorithm for handling 3-D point cloud data is discussed for the reconstruction of complicated curved surfaces. In this algorithm, the coordinate information of nodes both in the internal and external regions of the partition interpolation is used to minimize the least-squares approximation error of the surface fitting. The transitions between internal and external interpolation regions are continuous and smooth. Meanwhile, the surface shape has the properties of local controllability, variation diminishing, and the convex hull property. A practical example shows that this algorithm achieves higher accuracy in curved surface reconstruction and also reduces the distortion that arises when typical approximating algorithms and unstable operations are used.
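As a minimal stand-in for least-squares surface approximation of scattered points (the paper's partitioned interpolation with continuity constraints is considerably more elaborate), a single quadric patch can be fitted as:

```python
import numpy as np

def fit_quadric(x, y, z):
    """Least-squares fit of z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    to scattered data points; returns the six coefficients."""
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    c, *_ = np.linalg.lstsq(A, z, rcond=None)
    return c
```

A partitioned scheme like the paper's fits many such local patches and blends them so the pieces join with continuous, smooth transitions.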
Funding: financially supported by the National Natural Science Foundation of China (Grant Nos. 42161065 and 41461038).
Funding: supported by the Innovation and Entrepreneurship Training Program Topic for College Students of North China University of Technology in 2023.
Funding: supported by the Preeminent Youth Fund of Sichuan Province, China (Grant No. 2012JQ0012), the National Natural Science Foundation of China (Grant Nos. 11173008, 10974202, and 60978049), and the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2).
Funding: National Natural Science Foundation of China (No. 41801379), Fundamental Research Funds for the Central Universities (No. 2019B08414), and National Key R&D Program of China (No. 2016YFC0401801).
文摘Tunnel deformation monitoring is a crucial task to evaluate tunnel stability during the metro operation period.Terrestrial Laser Scanning(TLS)can collect high density and high accuracy point cloud data in a few minutes as an innovation technique,which provides promising applications in tunnel deformation monitoring.Here,an efficient method for extracting tunnel cross-sections and convergence analysis using dense TLS point cloud data is proposed.First,the tunnel orientation is determined using principal component analysis(PCA)in the Euclidean plane.Two control points are introduced to detect and remove the unsuitable points by using point cloud division and then the ground points are removed by defining an elevation value width of 0.5 m.Next,a z-score method is introduced to detect and remove the outlies.Because the tunnel cross-section’s standard shape is round,the circle fitting is implemented using the least-squares method.Afterward,the convergence analysis is made at the angles of 0°,30°and 150°.The proposed approach’s feasibility is tested on a TLS point cloud of a Nanjing subway tunnel acquired using a FARO X330 laser scanner.The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm,which is also in agreement with the measurements acquired by a total station instrument.The proposed methodology provides new insights and references for the applications of TLS in tunnel deformation monitoring,which can also be extended to other engineering applications.
文摘Taking AutoCAD2000 as platform, an algorithm for the reconstruction ofsurface from scattered data points based on VBA is presented. With this core technology customerscan be free from traditional AutoCAD as an electronic board and begin to create actual presentationof real-world objects. VBA is not only a very powerful tool of development, but with very simplesyntax. Associating with those solids, objects and commands of AutoCAD 2000, VBA notably simplifiesprevious complex algorithms, graphical presentations and processing, etc. Meanwhile, it can avoidappearance of complex data structure and data format in reverse design with other modeling software.Applying VBA to reverse engineering can greatly improve modeling efficiency and facilitate surfacereconstruction.
Abstract: To meet the requirements of heterogeneous object modeling in additive manufacturing (AM), the Non-Uniform Rational B-Spline (NURBS) method is applied to the digital representation of heterogeneous objects in this paper. By putting forward a NURBS material data structure and establishing a heterogeneous NURBS object model, an accurate, unified mathematical representation of analytical and free-form heterogeneous objects is realized. With the inverse modeling of heterogeneous NURBS objects, the geometry and material distribution can be better designed to meet actual needs. The Radial Basis Function (RBF) method based on global surface reconstruction and the tensor product surface interpolation method are combined into an RBF-NURBS inverse construction method. The geometric and/or material information of regular mesh points is obtained by RBF interpolation of scattered data, and the heterogeneous NURBS surface or object model is obtained by tensor product interpolation. Examples show that heterogeneous objects fitted to scattered data points can be generated effectively by the inverse construction methods in this paper, providing 3D CAD models for additive manufacturing.
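The RBF scattered-data step can be illustrated with a short sketch: solve a linear system for the kernel weights, then evaluate the interpolant at the regular mesh points. A Gaussian kernel and all names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_interpolate(centers, values, query, eps=1.0):
    """Gaussian RBF interpolation of scattered data.

    Solves Phi @ w = values, where Phi[i, j] = phi(||c_i - c_j||),
    then evaluates sum_j w_j * phi(||q - c_j||) at each query point.
    """
    def phi(r):
        return np.exp(-(eps * r) ** 2)

    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d), values)          # kernel weights
    dq = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    return phi(dq) @ w

# Scattered 2-D samples of some scalar field (e.g., a material fraction)
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
values = np.array([0.0, 1.0, 2.0, 3.0, 1.5])
est = rbf_interpolate(centers, values, centers)   # exact at the data sites
```

The Gaussian kernel matrix is positive definite for distinct centers, so the interpolant reproduces the data values exactly at the sample points.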
Funding: Supported by the National Research Foundation for the Doctoral Program of Higher Education of China (20110131130004) and the Independent Innovation Foundation of Shandong University, IIFSDU (2012TB013).
Abstract: Parameterization is one of the key problems in the construction of a curve interpolating a set of ordered points. We propose a new local parameterization method based on a curvature model. The new method determines the knots by minimizing the maximum curvature of a quadratic curve. When the knots determined by the new method are used to construct the interpolation curve, the constructed curve has good precision. We also compare the new method with existing methods: our method performs better in interpolation error, and the interpolated curve is fairer.
基金supported by the Key Project of Intergovernmental Collaboration for Science and Technology Innovation under the National Key R&D Plan (2019YFE0103800)CAU Special Fund to Build World-class University (in disciplines) and Guide Distinctive Development (2021AC006)。
Abstract: Plant height can be used for assessing plant vigor and predicting biomass and yield. Manual measurement of plant height is time-consuming and labor-intensive. We describe a method for measuring maize plant height using an RGB-D camera that captures a color image and depth information of plants under field conditions. The color image was first processed to locate its central area using the S component in HSV color space and the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. Testing showed that the central areas of plants could be accurately located. The point cloud data were then clustered and the plant was extracted based on the located central area. The point cloud data were further processed to generate skeletons, whose end points were detected and used to extract the highest points of the central leaves. Finally, the height differences between the ground and the highest points of the central leaves were calculated to determine plant heights. The coefficients of determination between plant heights manually measured and those estimated by the proposed approach were all greater than 0.95. The method can effectively extract a plant from overlapping leaves and estimate its height, and may facilitate maize height measurement and monitoring under field conditions.
Abstract: Leaf normal distribution is an important structural characteristic of the forest canopy. Although terrestrial laser scanners (TLS) have potential for estimating canopy structural parameters, distinguishing between leaves and nonphotosynthetic structures to retrieve the leaf normal has been challenging. We used an approach to accurately retrieve the leaf normals of camphorwood (Cinnamomum camphora) from TLS point cloud data. First, nonphotosynthetic structures were filtered out using a curvature threshold at each point. Then, the point cloud data were segmented by a voxel method and clustered by a Gaussian mixture model within each voxel. Finally, the normal vector of each cluster was computed by principal component analysis to obtain the leaf normal distribution. We collected leaf inclination angles and estimated their distribution, which we compared with the retrieved leaf normal distribution. The correlation coefficient between measurements and retrieved results was 0.96, indicating good agreement.
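The final PCA step, computing one normal per point cluster, can be sketched in a few lines: the normal is the eigenvector of the cluster's covariance matrix with the smallest eigenvalue. The function name and test data are illustrative, not from the paper:

```python
import numpy as np

def cluster_normal(points):
    """Estimate a leaf-cluster normal by PCA.

    The eigenvector of the 3x3 covariance matrix with the smallest
    eigenvalue is the direction of least variance, i.e., the normal
    of a locally planar leaf patch.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh sorts eigenvalues ascending
    n = eigvecs[:, 0]                        # smallest-eigenvalue eigenvector
    return n / np.linalg.norm(n)

# Points sampled on the plane z = 0 -> normal should be +-(0, 0, 1)
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.2, 0]], float)
n = cluster_normal(pts)
```

In the full pipeline this would run once per Gaussian-mixture cluster inside each voxel, and the collected normals form the leaf normal distribution.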
Funding: This project is supported by the Provincial Technology Cooperation Program of Yunnan, China (No. 2003EAAAA00D043).
Abstract: Because the point cloud of a whole vehicle body has large geometric dimensions, huge data volume, and rigorous reverse-engineering precision requirements, a preprocessing algorithm for automobile body point clouds is put forward. The basic idea of the registration algorithm based on skeleton points is to construct the skeleton points of the whole vehicle model and the mark points of the separate point clouds, to search for the mapping between skeleton points and mark points using a congruent triangle method, and to register the whole vehicle point cloud using an improved iterative closest point (ICP) algorithm. The data reduction algorithm, based on the average square root of distance, condenses data in three steps: computing each dataset's average square root of distance within a sampling cube grid, sorting the cells according to the value computed in the first step, and choosing a sampling percentage. The accuracy of the two algorithms is demonstrated by a registration and reduction example on the whole vehicle point cloud of a light truck.
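The cube-grid condensation idea behind the reduction algorithm can be sketched as a one-representative-per-cell downsampler. This is a simplified stand-in: the paper ranks cells by their average square root of distance before choosing a sampling percentage, whereas this sketch just keeps one centroid per occupied cell:

```python
import numpy as np

def grid_reduce(points, cell):
    """Condense a point cloud by keeping one representative point
    (the centroid) per sampling cube of edge length `cell`."""
    keys = np.floor(points / cell).astype(np.int64)
    sums, counts = {}, {}
    for k, p in zip(map(tuple, keys), points):
        sums[k] = sums.get(k, 0) + p
        counts[k] = counts.get(k, 0) + 1
    return np.array([sums[k] / counts[k] for k in sums])

# Two tight clusters far apart collapse to two representatives
pts = np.array([[0.1, 0.1, 0.1], [0.2, 0.2, 0.2],
                [10.1, 10.1, 10.1], [10.3, 10.2, 10.1]])
reduced = grid_reduce(pts, 1.0)
```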
Funding: The work was supported by the International Foundation for Science (Grant No. I-1-D-60661).
Abstract: Recent applications of digital photogrammetry in forestry have highlighted its utility as a viable mensuration technique. However, in tropical regions little research has been done on the accuracy of this approach for stem volume calculation. In this study, the performance of Structure from Motion photogrammetry for estimating individual tree stem volume relative to traditional approaches was evaluated. We selected 30 trees from five savanna species growing at the periphery of the W National Park in northern Benin and measured their circumferences at different heights using a traditional tape and clinometer. Stem volumes of sample trees were estimated from the measured circumferences using nine volumetric formulae for solids of revolution, including the cylinder, cone, paraboloid, neiloid, and their respective frustums. Each tree was photographed and its stem volume determined using a taper function derived from three-dimensional stem models. This reference volume was compared with the results of the formulaic estimations. Tree stem profiles were further decomposed into portions approximately corresponding to the stump, butt logs, and logs, and the suitability of each solid of revolution was assessed for simulating the resulting shapes. Stem volumes calculated using the frustum-of-paraboloid and frustum-of-neiloid formulae were the closest to the reference volumes, with a bias and root mean square error of 8.0% and 24.4%, respectively. Stems closely resembled frustums of a paraboloid and a neiloid. Individual stem portions assumed different solids as follows: frustums of a paraboloid and a neiloid were more prevalent from the stump to breast height, while a paraboloid closely matched stem shapes beyond this point. Therefore, a more accurate stem volume estimate was attained when stems were considered as a composite of at least three geometric solids.
Abstract: Metadata, data about other digital objects, play an important role in FAIR, with a direct relation to all FAIR principles. In this paper we present and discuss the FAIR Data Point (FDP), a software architecture aiming to define a common approach to publishing semantically rich and machine-actionable metadata according to the FAIR principles. We present the core components and features of the FDP, its approach to metadata provision, the criteria for evaluating whether an application adheres to the FDP specification, and the service to register, index, and allow users to search the metadata content of available FDPs.
Funding: The National High Technology Research and Development Program (863 Program) of China under contract No. 2013AA09A505.
Abstract: A parallel algorithm for a circulation numerical model based on the Message Passing Interface (MPI) is developed using serialization and an irregular rectangle decomposition scheme. A neighboring point exchange strategy (NPES) is adopted to further enhance computational efficiency. Two experiments were conducted on an HP C7000 Blade System; the numerical results show that the parallel version with NPES (PVN) achieves higher efficiency than the original parallel version (PV). In the second experiment, the PVN achieves a parallel efficiency in excess of 0.9 when the number of processors increases to 100, while the efficiency of the PV decreases rapidly to 0.39. The PVN of the ocean circulation model is used in a fine-resolution regional simulation, producing better results. The universal applicability of this algorithm makes it potentially useful in many other ocean models.
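A neighboring point exchange amounts to a halo (ghost-cell) exchange between adjacent subdomains before each stencil update. The serial sketch below simulates MPI ranks with a list of 1-D arrays, each laid out as [left ghost, interior..., right ghost]; the layout and names are our assumptions, not the model's code:

```python
import numpy as np

def halo_exchange(subdomains):
    """Fill each rank's ghost cells with the adjacent rank's interior
    boundary values (a serial stand-in for MPI send/recv pairs)."""
    for i, d in enumerate(subdomains):
        if i > 0:
            d[0] = subdomains[i - 1][-2]    # left ghost <- left neighbor's edge
        if i + 1 < len(subdomains):
            d[-1] = subdomains[i + 1][1]    # right ghost <- right neighbor's edge
    return subdomains

# Two "ranks": ghost cells start at 0 and get filled from the neighbor
ranks = [np.array([0.0, 1.0, 2.0, 0.0]),
         np.array([0.0, 3.0, 4.0, 0.0])]
halo_exchange(ranks)
```

Only ghost slots (indices 0 and -1) are ever written, so the interior values read from neighbors are never stale within one exchange.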
基金supported by the National Natural Science Foundation of China(Nos.41171355and41002120)
Abstract: A new object-oriented method has been developed for the extraction of Mars rocks from Mars rover data. It is based on a combination of Mars rover imagery and 3D point cloud data. First, Navcam or Pancam images taken by the Mars rovers are segmented into homogeneous objects with a mean-shift algorithm. Then, the objects in the segmented images are classified into small rock candidates, rock shadows, and large objects. Rock shadows and large objects are considered regions within which large rocks may exist. In these regions, large rock candidates are extracted through ground-plane fitting with the 3D point cloud data. Small and large rock candidates are combined and postprocessed to obtain the final rock extraction results. The shape properties of the rocks (angularity, circularity, width, height, and width-height ratio) have been calculated for subsequent geological studies.
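The ground-plane fitting used to isolate large rock candidates can be sketched as a least-squares plane plus a residual-height test. The threshold value and function name are our illustrative assumptions:

```python
import numpy as np

def rocks_above_plane(points, height_thresh):
    """Fit a ground plane z = a*x + b*y + c by least squares, then flag
    points whose residual height exceeds `height_thresh` as large-rock
    candidates."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coef
    return residuals > height_thresh

# Flat 10x10 ground grid at z = 0 plus one 1 m "rock" at the center
xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
ground = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(100)])
cloud = np.vstack([ground, [[4.5, 4.5, 1.0]]])
mask = rocks_above_plane(cloud, 0.5)
```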
基金supported by the National Natural Science Foundation of China Youth Fund Project(Grant No.51305182)the Ministry of Agriculture Key Laboratory of Modern Agricultural Equipment(Grant No.201602004).
Abstract: Rice variety selection and quality inspection are key links in rice planting. Compared with two-dimensional images, three-dimensional information on rice seeds shows their appearance characteristics more comprehensively and accurately. This study proposed a rice variety classification method using three-dimensional point cloud data of the rice seed surface combined with a deep learning network to achieve rapid and accurate identification of rice varieties. First, a point cloud collection platform was set up with a Raytrix light field camera as its core to collect three-dimensional point cloud data of the rice seed surface. The collected point cloud was then filled, filtered, and smoothed; after that, the point cloud was segmented with the RANSAC algorithm and downsampled using a combination of a random sampling algorithm and a voxel-grid filtering algorithm. Finally, the processed point cloud was input to an improved PointNet network for feature extraction and variety classification. The improved PointNet network adds a cross-level feature connection structure that makes full use of features at different levels and better extracts the surface structure features of rice seeds. In testing, the improved PointNet model had an average classification accuracy of 89.4% for eight rice varieties, 1.2% higher than that of the original PointNet model. The method proposed in this study combines deep learning and point cloud data to achieve efficient classification of rice varieties.
Funding: National Natural Science Foundation of China (No. 50975121), Jilin Province Science and Technology Development Plan Item (No. 20130522150JH), and the 2013 Jilin Province Science Foundation for Postdoctoral Research (No. RB201361).
Abstract: Surfaces of stored grain bulk are often reconstructed from organized point sets with noise collected by a 3-D laser scanner in an online measuring system. Denoising is therefore an essential procedure in processing point cloud data for more accurate surface reconstruction and grain volume calculation. A classified denoising method is presented in this research for noise removal from point cloud data of the grain bulk surface. Based on the distribution characteristics of the point cloud data, the noisy points were divided into three types. The first and second types were either sparse points or small point clouds deviating and suspended from the main point cloud, which could be deleted directly by a grid method. The third type was mixed with the main body of the point cloud data and was the most difficult to distinguish. The point cloud data containing these noisy points were projected onto a horizontal plane, and an image denoising method, the discrete wavelet threshold (DWT) method, was applied to delete the third type of noisy points. Three denoising methods, average filtering, median filtering, and DWT, were applied and compared for denoising the point cloud data. Experimental results show that the proposed method retains most of the details and obtains the lowest average RMSE (root mean square error, 0.219) as well as the lowest relative error of grain volume (0.086%) compared with the other two methods. Furthermore, the proposed denoising method not only removes noisy points but also adapts to the characteristics of point cloud data of the grain bulk surface, providing more accurate data for calculating grain volume.
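As a toy version of the wavelet-threshold idea, a one-level Haar decomposition with soft thresholding of the detail coefficients looks like this. It is a simplified stand-in for the paper's DWT method (which operates on the projected 2-D image); even-length input is assumed:

```python
import numpy as np

def haar_soft_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising of a 1-D signal.

    Decompose into approximation and detail coefficients, shrink the
    details toward zero by `threshold`, then invert the transform.
    """
    s = np.asarray(signal, float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0)
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty_like(s)
    out[0::2], out[1::2] = even, odd
    return out
```

Small pairwise jitter (small detail coefficients) is flattened out, while large-scale structure carried by the approximation coefficients survives untouched.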
Funding: Supported by VODAN-Africa, the Philips Foundation, the Dutch Development Bank FMO, CORDAID, and the GO FAIR Foundation.
Abstract: The field of health data management poses unique challenges in relation to data ownership, the privacy of data subjects, and the reusability of data. The FAIR Guidelines have been developed to address these challenges. The Virus Outbreak Data Network (VODAN) architecture builds on these principles, using the European Union's General Data Protection Regulation (GDPR) framework to ensure compliance with local data regulations, while using information knowledge management concepts to further improve data provenance and interoperability. In this article we provide an overview of the terminology used in the field of FAIR data management, with a specific focus on FAIR-compliant health information management as implemented in the VODAN architecture.
Funding: Supported by the Guangxi Provincial Natural Science Fund of China (No. 0832096), the Scientific Research Project of the Education Department of Guangxi Province of China (No. 200708LX151), and the Science Fund of Wuzhou University (No. 2008B008).
Abstract: An approximating algorithm for handling 3-D point cloud data is discussed for the reconstruction of complicated curved surfaces. In this algorithm, the coordinate information of nodes in both the internal and external regions of partition interpolation is used to minimize the least-squares approximation error of the surface fitting. The transitions between the internal and external interpolation regions are continuous and smooth, and the surface shape has the properties of local controllability, variation diminishing, and the convex hull. A practical example shows that this algorithm achieves higher accuracy in curved surface reconstruction and reduces the distortion that arises when typical approximation algorithms or unstable operations are used.