Funding: This research work was supported by the National Key R&D Program of China (No. 2021YFB2700800) and the GHfund B (No. 202302024490).
Abstract: Peer-to-peer (P2P) overlay networks provide message transmission capabilities for blockchain systems, and improving data transmission efficiency in P2P networks can greatly enhance blockchain performance. However, traditional blockchain P2P networks face a common challenge: a mismatch between upper-layer traffic requirements and the underlying physical network topology. This mismatch results in redundant data transmission and inefficient routing, severely constraining the scalability of blockchain systems. To address these issues, we propose FPSblo, an efficient transmission method for blockchain networks. Our inspiration stems from the Farthest Point Sampling (FPS) algorithm, a well-established technique widely used in point cloud processing. We treat blockchain nodes as points in a point cloud and select a representative set of nodes to prioritize for message forwarding, so that messages reach the network edge quickly and are evenly distributed. We compare our model with the Kadcast transmission model, a classic improved model for blockchain P2P transmission networks; the experimental findings show that FPSblo reduces transmission redundancy by 34.8% and the overload rate by 37.6%. The experimental analysis confirms that FPSblo enhances the transmission capability of the blockchain P2P network.
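The abstract does not include code, but the classic FPS routine that FPSblo builds on is compact enough to sketch. Below is a minimal numpy version; the function name and the use of 2-D coordinates as a stand-in for blockchain node positions are our own illustration, not the paper's implementation.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: pick k points that are maximally spread out.

    points: (N, d) array; k: number of samples to select.
    Returns the indices of the selected points.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    selected = [int(rng.integers(n))]            # arbitrary starting point
    dist = np.full(n, np.inf)                    # distance to nearest selected point
    for _ in range(k - 1):
        last = points[selected[-1]]
        dist = np.minimum(dist, np.linalg.norm(points - last, axis=1))
        selected.append(int(np.argmax(dist)))    # farthest from all selected so far
    return np.array(selected)

# Toy usage: choose 8 well-spread "relay" nodes out of 100 nodes in 2-D.
nodes = np.random.default_rng(1).random((100, 2))
relays = farthest_point_sampling(nodes, 8)
```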
Abstract: Cloud point extraction (CPE) has been used for the preconcentration of cadmium, after the formation of a complex with 1,5-bis(di-2-pyridylmethylene) thiocarbonohydrazide (DPTH), and subsequent determination by flame atomic absorption spectrometry (FAAS) using Triton X-114 as surfactant. The main factors affecting the CPE, such as the concentrations of Triton X-114 and DPTH, pH, equilibration temperature, and incubation time, were optimized for the best extraction efficiency. Under the optimum conditions, i.e., pH 5.4, [DPTH] = 6×10⁻³%, [Triton X-114] = 0.25% (v/v), an enhancement factor of 10.5-fold was reached. The limit of detection (LOD) obtained under the optimal conditions was 0.95 μg L⁻¹. The precision for 8 replicate determinations at 20 and 100 μg L⁻¹ Cd was 2.4% and 2% relative standard deviation (R.S.D.), respectively. The calibration graph using the preconcentration method was linear, with a correlation coefficient of 0.998, from levels close to the detection limit up to at least 200 μg L⁻¹. The method was successfully applied to the determination of cadmium in water, environmental and food samples, and in a BCR-176 standard reference material.
Funding: The Hi-Tech Research and Development Program (863) of China (Nos. 2007AA01Z311 and 2007AA04Z1A5) and the Research Fund for the Doctoral Program of Higher Education of China (No. 20060335114).
Abstract: A non-local denoising (NLD) algorithm for point-sampled surfaces (PSSs) is presented based on similarities, including the geometry intensity and features of sample points. Using a trilateral filtering operator, the differential signal of each sample point is determined and called the "geometry intensity". Based on covariance analysis, a regular grid of geometry intensity is constructed for each sample point, and the geometry-intensity similarity of two points is measured from their grids. Based on mean-shift clustering, the PSSs are clustered in terms of local geometry-feature similarity. The smoothed geometry intensity, i.e., the offset distance, of each sample point is estimated from the two similarities. Using the resulting intensity, the noise component is finally removed from the PSSs by adjusting the position of each sample point along its own normal direction. Experimental results demonstrate that the algorithm is robust and produces a more accurate denoising result with better feature preservation.
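The final repositioning step admits a compact statement. In our notation (the abstract fixes neither the symbols nor the sign convention), with the smoothed offset distance and the unit normal at each sample point, the denoised position is:

```latex
p_i' \;=\; p_i \;-\; \hat d_i\,\mathbf{n}_i ,
\qquad \hat d_i = \text{smoothed geometry intensity at } p_i,\;
\mathbf{n}_i = \text{unit normal at } p_i .
```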
Funding: Project (20956001) supported by the National Natural Science Foundation of China; Project (CX2011B083) supported by the Hunan Provincial Innovation Foundation for Postgraduate, China; Project (K1104026-11) supported by the Changsha Science and Technology Bureau, China.
Abstract: A novel cloud-point extraction (CPE) was successfully used for the preconcentration of bisphenol A (BPA) from aqueous solutions. The majority of the BPA is extracted into the surfactant-rich phase. The parameters affecting the CPE, such as the concentrations of surfactant and electrolyte, equilibration temperature and time, and pH of the sample solution, were investigated. The samples were analyzed by high-performance liquid chromatography with ultraviolet detection. Under the optimized conditions, preconcentration of a 10 mL sample gives a preconcentration factor of 11. The limit of detection (LOD) and limit of quantification (LOQ) are 0.1 μg/L and 0.33 μg/L, respectively. The linear range of the proposed method is 0.2-20 μg/L with correlation coefficients greater than 0.9987, and the spiking recoveries are 97.96%-100.42%. The interference factor was tested and the extraction mechanism was also investigated. Thus, the developed CPE has proven to be an efficient, green, rapid and inexpensive approach for the extraction and preconcentration of BPA from water samples.
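The abstract does not state which convention its detection limits follow, but the reported pair (0.1 and 0.33 μg/L) is consistent with the common definitions based on the standard deviation of blank measurements σ_b and the calibration slope S:

```latex
\mathrm{LOD} = \frac{3\,\sigma_b}{S}, \qquad
\mathrm{LOQ} = \frac{10\,\sigma_b}{S}, \qquad
\frac{\mathrm{LOQ}}{\mathrm{LOD}} = \frac{10}{3} \approx 3.3
```

Indeed, 0.33/0.1 = 3.3, matching this ratio exactly.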
Abstract: In the reliability analysis of complex structures, the response surface method (RSM) has been suggested as an efficient technique to estimate the actual but implicit limit state function. A set of sample points is needed to fit the implicit function. It has been noted that the accuracy of RSM depends strongly on these sample points; however, the technique for point selection has received little attention. In the present study, an improved response surface method (IRSM) based on two sample-point selection techniques, the direction cosines projected strategy (DCS) and the limit step length iteration strategy (LSS), is investigated. Since the sampling points are selected to lie in the region close to the original failure surface, and since only one response surface is needed, the IRSM is accurate and simple for practical structural problems. Applications to several typical examples illustrate the successful working of the IRSM.
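The point-selection strategies (DCS and LSS) are the paper's contribution and are not reproduced here, but the surface-fitting step they feed is commonly the least-squares fit of a quadratic without cross terms. A minimal sketch under that assumption (names are ours):

```python
import numpy as np

def fit_quadratic_response_surface(X, g):
    """Least-squares fit of g(x) ~ a + sum_i b_i x_i + sum_i c_i x_i^2
    (no cross terms), the form commonly used in RSM reliability work.

    X: (m, n) sample points; g: (m,) limit-state evaluations.
    Returns the coefficient vector [a, b_1..b_n, c_1..c_n].
    """
    A = np.hstack([np.ones((X.shape[0], 1)), X, X ** 2])   # design matrix
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coef

# Toy usage: recover a known quadratic from noisy evaluations.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
g = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.01 * rng.normal(size=50)
print(fit_quadratic_response_surface(X, g))   # ~ [1, 2, 0, 0, -0.5]
```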
Abstract: Recently, unstructured dense point sets have become a new representation of geometric shapes. In this paper we introduce a novel framework within which several usable error metrics are analyzed and the most basic properties of progressive point-sampled geometry are characterized. Another distinct feature of the proposed framework is its compatibility with most previously proposed surface inference engines. Given the proposed framework, the performances of four representative, well-reputed engines are studied and compared.
Funding: Funded by the National Natural Science Foundation of China (42071014).
Abstract: The Gobi spans a large area of China, surpassing the combined expanse of mobile dunes and semi-fixed dunes, and its presence significantly influences the movement of sand and dust. However, the complex origins and diverse materials constituting the Gobi result in notable differences in saltation processes across various Gobi surfaces, and it is challenging to describe these processes with a uniform morphology. It therefore becomes imperative to articulate surface characteristics through parameters such as the three-dimensional (3D) size and shape of gravel. Collecting morphological information on Gobi gravels is essential for studying their genesis and sand saltation. To enhance the efficiency and information yield of gravel parameter measurements, this study conducted field experiments in March 2023 in the Gobi region across Dunhuang City, Guazhou County, and Yumen City (administered by Jiuquan City), Gansu Province, China. A research framework and methodology for measuring 3D parameters of gravel using point clouds were developed, alongside improved calculation formulas for 3D parameters including gravel grain size, volume, flatness, roundness, sphericity, and equivalent grain size. Leveraging multi-view geometry for 3D reconstruction allowed an optimal data acquisition scheme to be established, characterized by high point cloud reconstruction efficiency and clear quality. Additionally, the proposed methodology incorporated point cloud clustering, segmentation, and filtering techniques to isolate individual gravel point clouds. Advanced point cloud algorithms, including the Oriented Bounding Box (OBB), the point cloud slicing method, and point cloud triangulation, were then deployed to calculate the 3D parameters of individual gravels. These systematic processes allow precise and detailed characterization of individual gravels. For gravel grain size and volume, the correlation coefficients between point cloud and manual measurements all exceeded 0.9000, confirming the feasibility of the proposed methodology. The proposed workflow yields accurate calculations of relevant parameters for Gobi gravels, providing essential data support for subsequent studies of Gobi environments.
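The paper's improved parameter formulas are not given in the abstract; as an illustration of the OBB step alone, the sketch below uses Open3D's oriented bounding box and common textbook shape ratios. The long-axis grain-size proxy and the Zingg-style flatness ratio c/b are our assumptions, not necessarily the paper's improved formulas.

```python
import numpy as np
import open3d as o3d

# `pts` stands in for one segmented gravel's points, shape (N, 3), in metres.
rng = np.random.default_rng(0)
pts = rng.random((500, 3)) * np.array([0.06, 0.04, 0.02])

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)

obb = pcd.get_oriented_bounding_box()
a, b, c = np.sort(obb.extent)[::-1]   # long, intermediate, short OBB axes

grain_size = a                        # long axis as a grain-size proxy (assumption)
flatness = c / b                      # Zingg-style flatness ratio (assumption)
print(f"axes = ({a:.3f}, {b:.3f}, {c:.3f}) m, flatness = {flatness:.2f}")
```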
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11674128, 11504129, and 11474129), the Jilin Province Scientific and Technological Development Program, China (Grant No. 20170101063JC), and the Thirteenth Five-Year Scientific and Technological Research Project of the Education Department of Jilin Province, China (2016, No. 400).
Abstract: We investigated the dependence of laser-induced breakdown spectral intensity on the focusing position of a lens at different sample temperatures (room temperature to 300 °C) in atmosphere. A Q-switched Nd:YAG nanosecond pulsed laser with 1064 nm wavelength and 10 ns pulse width was used to ablate silicon and produce a plasma. It was confirmed that increasing the sample's initial temperature improves spectral line intensity. In addition, as the distance from the target surface to the focal point increased, the intensity first rose and then dropped, and this trend with distance was more pronounced at higher sample temperatures. Examining the normalized ratio of the Si atomic and Si ionic spectral line intensities as a function of distance and temperature showed that the maximum of the normalized ratio appeared at a longer distance when the initial temperature was higher, and at a shorter distance when the sample temperature was lower.
Abstract: A point cloud is a vast collection of points with important geometric structure. Because of its large data volume, similar points inevitably appear in some regions, so feature extraction picks up repeated information, causing computational redundancy and reducing training accuracy. To address this problem, a new neural network, PointPCA, is proposed. PointPCA consists of three modules: (a) a sampling module, which proposes an average point sampling (APS) method that effectively avoids similar points and yields a new point set that approximately represents the original point cloud; (b) a feature extraction module, which applies the idea of grouping to extract multi-scale spatial features from this new point set; and (c) a concatenation module, which concatenates the feature vectors extracted at each scale into a single feature vector. Experiments show that PointPCA improves accuracy by 4.6% over PointNet and by 1.1% over PointNet++, and it also performs well in mIoU evaluation.
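The abstract does not define APS precisely. One plausible reading — collapsing clusters of near-duplicate points to their average — can be sketched as voxel-grid averaging; this is our interpretation, not the paper's algorithm.

```python
import numpy as np

def average_point_sampling(points, voxel=0.05):
    """Hypothetical reading of APS: average all points that fall in the
    same voxel, so clusters of near-duplicate points collapse to one
    representative. Our interpretation, not the paper's code."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, points.shape[1]))
    np.add.at(sums, inverse, points)          # sum points per occupied voxel
    counts = np.bincount(inverse).astype(float)
    return sums / counts[:, None]             # per-voxel mean points

# Toy usage: 10,000 raw points reduce to far fewer representatives.
pts = np.random.default_rng(0).random((10_000, 3))
print(average_point_sampling(pts, voxel=0.1).shape)
```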
Funding: Support was provided by Research Joint Venture Agreement 17-JV-11242306045, "Old Growth Forest Dynamics and Structure," between the USDA Forest Service and the University of New Hampshire. Additional support to MJD was provided by the USDA National Institute of Food and Agriculture McIntire-Stennis Project Accession Number 1020142, "Forest Structure, Volume, and Biomass in the Northeastern United States." Also supported by the USDA National Institute of Food and Agriculture, McIntire-Stennis project OKL02834, and the Division of Agricultural Sciences and Natural Resources at Oklahoma State University.
Abstract: Background: A new variance estimator is derived and tested for big BAF (Basal Area Factor) sampling, a forest inventory system that utilizes Bitterlich sampling (point sampling) with two BAF sizes: a small BAF for tree counts and a larger BAF on which tree measurements are made, usually including the DBHs and heights needed for volume estimation. Methods: The new estimator is derived using the Delta method from an existing formulation of the big BAF estimator as consisting of three sample means. The new formula is compared to existing big BAF estimators, including a popular estimator based on Bruce's formula. Results: Several computer simulation studies were conducted comparing the new variance estimator to all variance estimators for big BAF currently known in the forest inventory literature. In simulations the new estimator performed well and comparably to existing variance formulas. Conclusions: A possible advantage of the new estimator is that it does not require the assumption of negligible correlation between basal area counts on the small BAF factor and volume-basal area ratios based on the large BAF selection trees, an assumption required by all previous big BAF variance estimation formulas. Although this correlation was negligible on the simulation stands used in this study, it could be significant in some forest types, such as those in which the DBH-height relationship is affected substantially by density, perhaps through competition. We derived a formula that can be used to estimate the covariance between estimates of mean basal area and the ratio of estimates of mean volume and mean basal area. We also mathematically derived expressions for bias in the big BAF estimator, which show that the bias approaches zero in large samples on the order of 1/n, where n is the number of sample points.
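The estimator's exact form is not reproduced in the abstract (it is described there via three sample means), but the role of the covariance term is already visible in the first-order Delta method for a product of two sample means — in our simplified notation, mean basal area and the mean volume-basal-area ratio:

```latex
\operatorname{Var}\!\bigl(\bar B\,\bar R\bigr) \;\approx\;
\bar R^{2}\operatorname{Var}(\bar B)
\;+\; \bar B^{2}\operatorname{Var}(\bar R)
\;+\; 2\,\bar B\,\bar R\,\operatorname{Cov}(\bar B,\bar R)
```

The covariance term is precisely what Bruce-type formulas assume to be negligible, and what the new estimator retains.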
Abstract: Gully erosion can account for significant volumes of sediment exiting agricultural landscapes, but its evolution is difficult to monitor and quantify with traditional surveying technology. Scientific investigations of gullies depend on accurate and detailed topographic information to understand and evaluate the complex interactions between field topography and gully evolution. Detailed terrain representations can be produced by new technologies such as terrestrial LiDAR systems. These systems are capable of collecting information over a wide range of ground point sampling densities, controlled by operator-selected factors. Increasing point density results in richer datasets at the cost of longer field surveys. In large research watersheds, with hundreds of sites being monitored, data collection can become costly and time-consuming. In this study, the effect of point sampling density on the capability to collect topographic information was investigated at the individual gully scale. Semi-variograms were used to produce overall guiding principles for multi-temporal gully surveys based on various levels of laser sampling density and relief variation (low, moderate, and high). Results indicated the existence of a point sampling density threshold beyond which little or no additional topographic information is gained. A reduced dataset was created using the density thresholds and compared to the original dataset with no major discrepancy. Although variations in relief and soil roughness can lead to different point sampling density requirements, the outcome of this study serves as practical guidance for future field surveys of gully evolution and erosion.
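The study's guidance rests on the classical empirical semivariogram of elevation, which is standard enough to sketch (a brute-force version for modest point counts; the lag bins are user-chosen). Intuitively, when raising the sampling density no longer changes the short-lag behavior of the semivariogram, the density threshold has been reached.

```python
import numpy as np

def empirical_semivariogram(xy, z, bin_edges):
    """Matheron's classical estimator: for each lag bin, average
    0.5 * (z_i - z_j)^2 over point pairs whose separation falls in
    the bin. Brute force O(N^2); fine for a single gully survey."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    g = 0.5 * (z[:, None] - z[None, :]) ** 2
    i, j = np.triu_indices(len(z), k=1)          # each pair once
    lag, gam = d[i, j], g[i, j]
    which = np.digitize(lag, bin_edges)
    return np.array([gam[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bin_edges))])

# Toy usage: elevations on a 20 m x 20 m plot, 1 m lag bins.
rng = np.random.default_rng(0)
xy = rng.random((300, 2)) * 20.0
z = np.sin(xy[:, 0] / 5.0) + 0.05 * rng.normal(size=300)
print(empirical_semivariogram(xy, z, np.linspace(0.0, 10.0, 11)))
```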
Abstract: This paper analyzes the effect of subgroup size on x-bar chart characteristics using sample influx (SIF) into a forensic science laboratory (FSL). The characteristics studied include changes in out-of-control points (OCP), the upper control limit UCLx, and zonal demarcations. Multi-rules were used to identify the number of out-of-control points, Nocp, as violations, using five control chart rules applied separately. A sensitivity analysis on Nocp was applied for the subgroup size, k, and the number of sigmas above the mean used to set the upper control limit, UCLx. A FORTRAN code was implemented to create x-bar control charts and capture OCP and other control-chart characteristics as k increases from 2 to 25. For each value of k, a complete series of average values, Q(p), of specific length, Nsg, was created, from which statistical analysis was conducted and compared to the original SIF data, S(t). The number of out-of-control points or violations, Nocp, for the different control-chart rules was found to decay exponentially with increasing k, Nocp = A·e^(−αk); the goodness of fit was established, and the R² value approached unity for Rules #4 and #5 only. This goodness of fit was established as a new criterion for the rational subgroup-size range for Rules #5 and #4 only, which involve a count of 6 consecutive points decreasing and 8 consecutive points above the selected control limit (σ/3 above the grand mean), respectively. Using this criterion, the rational subgroup range was established to be 4 ≤ k ≤ 20 for these two x-bar control chart rules.
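The five counting rules are not reimplemented here. The sketch below shows only the two generic steps on stand-in data: forming subgroup means with 3-sigma limits (a simplified convention; the paper varies the sigma multiplier), and recovering A and α in Nocp = A·e^(−αk) by a log-linear fit.

```python
import numpy as np

def xbar_chart(series, k):
    """Split a series into subgroups of size k; return the subgroup means
    Q(p), the grand mean, and +/- 3-sigma limits on the subgroup means."""
    n = len(series) // k
    Q = np.asarray(series[: n * k]).reshape(n, k).mean(axis=1)
    center, s = Q.mean(), Q.std(ddof=1)
    return Q, center, center + 3 * s, center - 3 * s

# Recover A and alpha in N_ocp = A * exp(-alpha * k) from counted violations.
k_vals = np.arange(2, 26)
n_ocp = 40.0 * np.exp(-0.15 * k_vals)        # stand-in violation counts
slope, intercept = np.polyfit(k_vals, np.log(n_ocp), 1)
alpha, A = -slope, np.exp(intercept)
print(f"A = {A:.1f}, alpha = {alpha:.3f}")
```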
Funding: Funded by the National Natural Science Foundation of China Youth Project (61603127).
Abstract: Traditional models for semantic segmentation of point clouds primarily focus on smaller scales. However, in real-world applications point clouds often exhibit larger scales, leading to heavy computational and memory requirements. The key to handling large-scale point clouds lies in leveraging random sampling, which offers higher computational efficiency and lower memory consumption than other sampling methods. Nevertheless, random sampling can result in the loss of crucial points during the encoding stage. To address these issues, this paper proposes the cross-fusion self-attention network (CFSA-Net), a lightweight and efficient network architecture specifically designed for directly processing large-scale point clouds. At the core of this network is the incorporation of random sampling alongside a local feature extraction module based on cross-fusion self-attention (CFSA). This module effectively integrates long-range contextual dependencies between points by employing hierarchical position encoding (HPC). Furthermore, it enhances the interaction between each point's coordinates and feature information through cross-fusion self-attention pooling, enabling the acquisition of more comprehensive geometric information. Finally, a residual optimization (RO) structure is introduced to extend the receptive field of individual points by stacking hierarchical position encoding and cross-fusion self-attention pooling, thereby reducing the impact of the information loss caused by random sampling. Experimental results on the Stanford Large-Scale 3D Indoor Spaces (S3DIS), Semantic3D, and SemanticKITTI datasets demonstrate the superiority of this algorithm over advanced approaches such as RandLA-Net and KPConv. These findings underscore the excellent performance of CFSA-Net in large-scale 3D semantic segmentation.
Abstract: In this paper, sampling methods for food statistics are combined with years of practical sampling experience to summarize various sampling points and the corresponding sampling methods, with the aim of uncovering food safety risks and improving the level of food safety.
Funding: Supported by the National Natural Science Foundation of China (No. 61771186), the Heilongjiang Provincial Natural Science Foundation of China (No. YQ2020F012), and the University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province (No. UNPYSCT-2017125).
Abstract: Image matching is the process of matching two or more images obtained at different times, from different sensors, or under different conditions through a large number of feature points in the images. At present, image matching is widely used in target recognition and tracking and in indoor positioning and navigation. Loss of local features, however, often occurs in color images taken in low light, greatly reducing the number of extracted feature points, which degrades image matching and can even cause target recognition to fail. An unsharp masking (USM) based denoising model is established, and a local adaptive enhancement algorithm is proposed to achieve feature point compensation by strengthening the local features of the dark image, effectively increasing the amount of image information. Fast library for approximate nearest neighbors (FLANN) and random sample consensus (RANSAC) are used as the image matching algorithms. Experimental results show that the proposed algorithm increases the number of effective feature points obtained from images in dark environments and that the accuracy of image matching is improved markedly.
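The USM enhancement model is the paper's contribution and is not reproduced here; the downstream FLANN + RANSAC matching stage, however, is a standard OpenCV pipeline. A minimal sketch (file names are hypothetical; the 0.7 Lowe ratio and 5.0 px RANSAC threshold are common defaults, not the paper's settings):

```python
import cv2
import numpy as np

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN with a KD-tree index (standard parameters for float descriptors).
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio

# RANSAC rejects geometrically inconsistent matches via a homography
# (needs at least 4 good matches).
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(int(mask.sum()), "inlier matches")
```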