Gobi spans a large area of China, surpassing the combined expanse of mobile dunes and semi-fixed dunes. Its presence significantly influences the movement of sand and dust. However, the complex origins and diverse materials constituting the Gobi result in notable differences in saltation processes across various Gobi surfaces. It is challenging to describe these processes according to a uniform morphology. Therefore, it becomes imperative to articulate surface characteristics through parameters such as the three-dimensional (3D) size and shape of gravel. Collecting morphology information for Gobi gravels is essential for studying its genesis and sand saltation. To enhance the efficiency and information yield of gravel parameter measurements, this study conducted field experiments in the Gobi region across Dunhuang City, Guazhou County, and Yumen City (administered by Jiuquan City), Gansu Province, China in March 2023. A research framework and methodology for measuring 3D parameters of gravel using point clouds were developed, alongside improved calculation formulas for 3D parameters including gravel grain size, volume, flatness, roundness, sphericity, and equivalent grain size. Leveraging multi-view geometry technology for 3D reconstruction allowed for establishing an optimal data acquisition scheme characterized by high point cloud reconstruction efficiency and quality. Additionally, the proposed methodology incorporated point cloud clustering, segmentation, and filtering techniques to isolate individual gravel point clouds. Advanced point cloud algorithms, including the Oriented Bounding Box (OBB), point cloud slicing, and point cloud triangulation, were then deployed to calculate the 3D parameters of individual gravels. These systematic processes allow precise and detailed characterization of individual gravels. For gravel grain size and volume, the correlation coefficients between point cloud and manual measurements all exceeded 0.9000, confirming the feasibility of the proposed methodology for measuring 3D parameters of individual gravels. The proposed workflow yields accurate calculations of relevant parameters for Gobi gravels, providing essential data support for subsequent studies on Gobi environments.
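The abstract names the OBB step but not its computation; a minimal PCA-based sketch of the idea (the principal axes of a gravel's point cloud give its long, intermediate, and short axis lengths) might look like the following. The grid-shaped test cloud and all parameter values are illustrative assumptions, not data or code from the study:

```python
import numpy as np

def obb_extents(points):
    """Axis lengths of a PCA-oriented bounding box (long >= mid >= short)."""
    centered = points - points.mean(axis=0)
    # Eigenvectors of the covariance matrix give the principal axes.
    _, axes = np.linalg.eigh(np.cov(centered.T))
    projected = centered @ axes                    # rotate into the box frame
    extents = projected.max(axis=0) - projected.min(axis=0)
    return np.sort(extents)[::-1]                  # a >= b >= c

# Synthetic "gravel": a 4 x 2 x 1 grid of points, arbitrarily rotated.
xs, ys, zs = np.meshgrid(np.linspace(0, 4, 9), np.linspace(0, 2, 5), np.linspace(0, 1, 3))
cloud = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
t = np.radians(30)
rot = np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1]])
a, b, c = obb_extents(cloud @ rot.T)               # recovers 4, 2, 1
```

The recovered a, b, c axis lengths can then feed shape indices such as flatness, for which (a + b) / (2c) is one common definition (whether the paper uses exactly this form is not stated in the abstract).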
Direct measurement of snow water equivalent (SWE) in snow-dominated mountainous areas is difficult, thus its prediction is essential for water resources management in such areas. In addition, because of the nonlinear trend of the snow spatial distribution and the multiple factors influencing the SWE spatial distribution, statistical models are not usually able to produce acceptable results. Therefore, applicable methods that are able to predict nonlinear trends are necessary. In this research, to predict SWE, the Sohrevard Watershed located in northwest Iran was selected as the case study. A database was collected, and the required maps were derived. Snow depth (SD) was measured at 150 points under two sampling patterns, systematic random sampling and Latin hypercube sampling (LHS), and snow density was randomly measured at 18 points; SWE was then calculated. SWE was predicted using artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and regression methods. The results showed that the ANN and ANFIS models performed better than the regression method under both sampling patterns. Moreover, based on most of the efficiency criteria, the efficiency of the ANN, ANFIS and regression methods under the LHS pattern was higher than under the systematic random sampling pattern. However, there were no significant differences between ANN and ANFIS in SWE prediction. Data from both sampling patterns had the highest sensitivity to elevation. In addition, the LHS and the systematic random sampling patterns had the least sensitivity to the profile curvature and plan curvature, respectively.
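The Latin hypercube pattern mentioned above places exactly one sample in each equal-probability stratum of every dimension; a generic numpy sketch (not the study's sampling code, and with arbitrary dimensions standing in for the terrain covariates):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0,1)^d; each of the n strata per dimension is hit exactly once."""
    jitter = rng.random((n, d))                              # position inside each stratum
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + jitter) / n

rng = np.random.default_rng(7)
pts = latin_hypercube(10, 2, rng)   # e.g. 10 sampling sites spread over 2 covariates
```

Scaling each column to a covariate's actual range then yields candidate field-sampling locations that cover every covariate stratum.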
Background: The local pivotal method (LPM) utilizing auxiliary data in sample selection has recently been proposed as a sampling method for national forest inventories (NFIs). Its performance compared to simple random sampling (SRS) and LPM with geographical coordinates has produced promising results in simulation studies. In this simulation study we compared all these sampling methods to systematic sampling. The LPM samples were selected solely using the coordinates (LPMxy) or, in addition to that, auxiliary remote sensing-based forest variables (RS variables). We utilized field measurement data (NFI-field) and Multi-Source NFI (MS-NFI) maps as target data, and independent MS-NFI maps as auxiliary data. The designs were compared using relative efficiency (RE), the ratio of the mean squared error of the reference sampling design to that of the studied design. Applying a method in an NFI also requires a proven estimator for the variance. Therefore, three different variance estimators were evaluated against the empirical variance of replications: 1) an estimator corresponding to SRS; 2) a Grafström-Schelin estimator repurposed for LPM; and 3) a Matérn estimator applied in the Finnish NFI for systematic sampling design. Results: The LPMxy was nearly comparable with the systematic design for most target variables. The REs of the LPM designs utilizing auxiliary data compared to the systematic design varied between 0.74 and 1.18, according to the studied target variable. The SRS estimator for variance was, as expected, the most biased and conservative estimator. Similarly, the Grafström-Schelin estimator gave overestimates in the case of LPMxy. When the RS variables were utilized as auxiliary data, the Grafström-Schelin estimates tended to underestimate the empirical variance. In systematic sampling the Matérn and Grafström-Schelin estimators performed, for practical purposes, equally. Conclusions: LPM optimized for a specific variable tended to be more efficient than systematic sampling, but all of the considered LPM designs were less efficient than the systematic sampling design for some target variables. The Grafström-Schelin estimator could be used as such with LPMxy or instead of the Matérn estimator in systematic sampling. Further studies of the variance estimators are needed if other auxiliary variables are to be used in LPM.
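The relative-efficiency comparison used above reduces to a ratio of empirical mean squared errors across replications. A toy simulation shows the mechanics — the two-stratum population, sample sizes, and designs here are illustrative assumptions, not the NFI data or designs of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Deterministic two-stratum population: values near 0 and values near 10.
pop = np.concatenate([np.linspace(0, 1, 500), np.linspace(10, 11, 500)])
true_mean = pop.mean()

def mse(estimates):
    return float(np.mean((np.asarray(estimates) - true_mean) ** 2))

srs, strat = [], []
for _ in range(2000):
    srs.append(rng.choice(pop, 10, replace=False).mean())         # reference design
    s = np.concatenate([rng.choice(pop[:500], 5, replace=False),  # studied design
                        rng.choice(pop[500:], 5, replace=False)])
    strat.append(s.mean())

re = mse(srs) / mse(strat)   # RE > 1 means the studied design beats the reference
```

With such strong stratification the RE is far above 1; in the paper's comparisons the REs cluster near 1 because systematic sampling is already a strong reference.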
This article proposes two new Ranked Set Sampling (RSS) designs for estimating the population parameters: Simple Z Ranked Set Sampling (SZRSS) and Generalized Z Ranked Set Sampling (GZRSS). These designs provide unbiased estimators for the mean of symmetric distributions. It is shown that for non-uniform symmetric distributions, the estimators of the mean under the suggested designs are more efficient than those obtained by RSS, Simple Random Sampling (SRS), extreme RSS and truncation-based RSS designs. Also, the proposed RSS schemes outperform other RSS schemes and provide more efficient estimates than their competitors under imperfect rankings. The suggested mean estimators under perfect and imperfect rankings are more efficient than the linear regression estimator under SRS. Our proposed RSS designs are also extended to cover the estimation of the population median. Real data is used to examine the usefulness and efficiency of our estimators.
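For readers unfamiliar with the baseline scheme the new designs build on: basic RSS with set size m ranks m sets of m units and measures only the i-th smallest unit of the i-th set. A bare-bones sketch of that baseline (generic illustration assuming perfect ranking, not the proposed SZRSS/GZRSS estimators):

```python
import numpy as np

def rss_mean(pop, m, rng):
    """One RSS cycle of set size m: measure the i-th smallest of the i-th set."""
    kept = []
    for i in range(m):
        judged = rng.choice(pop, size=m, replace=False)   # a set of m units
        kept.append(np.sort(judged)[i])                   # perfect ranking assumed
    return float(np.mean(kept))

rng = np.random.default_rng(1)
pop = np.arange(100.0)            # population mean 49.5
estimates = [rss_mean(pop, 3, rng) for _ in range(2000)]
avg = float(np.mean(estimates))   # close to 49.5: the RSS mean is unbiased
```

Only m units are actually measured per cycle, which is the cost advantage RSS variants exploit when measurement is expensive but ranking is cheap.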
Cost-effective sampling design is a major concern in some experiments, especially when the measurement of the characteristic of interest is costly, painful or time consuming. Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian Journal of Agricultural Research 3, 385-390] as an effective way to estimate the pasture mean. In the current paper, a modification of ranked set sampling called moving extremes ranked set sampling (MERSS) is considered for the best linear unbiased estimators (BLUEs) for the simple linear regression model. The BLUEs for this model under MERSS are derived. The BLUEs under MERSS are shown to be markedly more efficient for normal data when compared with the BLUEs under simple random sampling.
Variance is one of the most vital measures of dispersion widely employed in practical work. A commonly used approach for variance estimation is the traditional method of moments, which is strongly influenced by the presence of extreme values, and thus its results cannot be relied on. Taking impetus from Koyuncu's recent work, the present paper first proposes two classes of variance estimators based on linear moments (L-moments), and then employs them with auxiliary data under double stratified sampling to introduce a new class of calibration variance estimators using important properties of L-moments (L-location, L-CV, L-variance). Three populations are taken into account to assess the efficiency of the new estimators. The first and second populations are concerned with artificial data, and the third population is concerned with real data. The percentage relative efficiency of the proposed estimators over existing ones is evaluated. In the presence of extreme values, our findings depict the superiority and high efficiency of the proposed classes over traditional classes. Hence, when auxiliary data is available along with extreme values, the proposed classes of estimators may be implemented in an extensive variety of sampling surveys.
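Sample L-moments are linear combinations of order statistics, which is what makes them robust to extreme values. A small sketch of the L-location, L-scale, and L-CV quantities named above, using the textbook probability-weighted-moment formulas (not the paper's calibration estimators):

```python
def sample_l_moments(data):
    """First two sample L-moments and the L-CV, via probability-weighted moments."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    # b1 = (1/n) * sum over order statistics of ((i-1)/(n-1)) * x_(i)
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    l1 = b0               # L-location (the ordinary mean)
    l2 = 2 * b1 - b0      # L-scale (half the Gini mean difference)
    return l1, l2, l2 / l1   # third value is the L-CV

l1, l2, lcv = sample_l_moments([1, 2, 3, 4])   # l1 = 2.5, l2 = 5/6, L-CV = 1/3
```

Unlike the ordinary variance, l2 grows only linearly in an outlier's magnitude, which is the intuition behind L-moment-based variance estimation under extreme values.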
In this paper, auxiliary information is used to determine an estimator of the finite population total using nonparametric regression under stratified random sampling. To achieve this, a model-based approach is adopted by making use of local polynomial regression estimation to predict the nonsampled values of the survey variable y. The performance of the proposed estimator is investigated against some design-based and model-based regression estimators. The simulation experiments show that the resulting estimator exhibits good properties. Generally, good confidence intervals are seen for the nonparametric regression estimators, and use of the proposed estimator leads to relatively smaller values of RE compared to other estimators.
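Local polynomial regression, the smoother named above, fits a weighted least-squares polynomial around each prediction point; the local linear case with a Gaussian kernel is a compact illustration (the bandwidth and data here are illustrative assumptions, not the paper's simulation setup):

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of m(x0) with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights around x0
    X = np.column_stack([np.ones_like(x), x - x0])   # intercept term equals m(x0)
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return float(beta[0])

x = np.linspace(0, 1, 50)
y = 2 * x + 1                        # noise-free linear "survey variable"
pred = local_linear(x, y, 0.3, 0.1)  # local linear reproduces a line exactly: 1.6
```

In the model-based survey setting, such predictions at the nonsampled auxiliary values are summed with the observed sample values to estimate the population total.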
In this paper, the problem of nonparametric estimation of the finite population quantile function using a multiplicative bias correction technique is considered. A robust estimator of the finite population quantile function based on multiplicative bias correction is derived with the aid of a superpopulation model. Most studies have concentrated on kernel smoothers in the estimation of regression functions. This technique has also been applied to various methods of nonparametric estimation of the finite population quantile already under review. A major problem with the use of nonparametric kernel-based regression over a finite interval, such as the estimation of finite population quantities, is bias at boundary points. By correcting the boundary problems associated with previous model-based estimators, the multiplicative bias corrected estimator produced better results in estimating the finite population quantile function. Furthermore, the asymptotic behavior of the proposed estimators is presented. It is observed that the estimator is asymptotically unbiased and statistically consistent when certain conditions are satisfied. The simulation results show that the suggested estimator performs quite well in terms of relative bias, mean squared error, and relative root mean error. As a result, the multiplicative bias corrected estimator is strongly suggested for survey sampling estimation of the finite population quantile function.
Unlike height-diameter equations for standing trees commonly used in forest resources modelling, tree height models for cut-to-length (CTL) stems tend to produce prediction errors whose distributions are not conditionally normal but are rather leptokurtic and heavy-tailed. This feature was merely noticed in previous studies but never thoroughly investigated. This study characterized the prediction error distribution of a newly developed such tree height model for Pinus radiata (D. Don) through the three-parameter Burr Type XII (BXII) distribution. The model's prediction errors (ε) exhibited heteroskedasticity conditional mainly on the small-end relative diameter of the top log and also on DBH to a minor extent. Structured serial correlations were also present in the data. A total of 14 candidate weighting functions were compared to select the best two for weighting ε in order to reduce its conditional heteroskedasticity. The weighted prediction errors (εw) were shifted by a constant to the positive range supported by the BXII distribution. Then the distribution of the weighted and shifted prediction errors (εw+) was characterized by the BXII distribution using maximum likelihood estimation through 1000 repetitions of random sampling, fitting and goodness-of-fit testing, each time randomly taking only one observation from each tree to circumvent the potential adverse impact of serial correlation in the data on parameter estimation and inferences. The nonparametric two-sample Kolmogorov-Smirnov (KS) goodness-of-fit test and its closely related Kuiper's (KU) test showed that the fitted BXII distributions provided a good fit to the highly leptokurtic and heavy-tailed distribution of ε. Random samples generated from the fitted BXII distributions of εw+ derived from using the best two weighting functions, when back-shifted and unweighted, exhibited distributions that were, in about 97% and 95% of the 1000 cases respectively, not statistically different from the distribution of ε. Our results for cut-to-length P. radiata stems represent the first case for any tree species where a non-normal error distribution in tree height prediction was described by an underlying probability distribution. The fitted BXII prediction error distribution will help to unlock the full potential of the new tree height model in forest resources modelling of P. radiata plantations, particularly when uncertainty assessments, statistical inferences and error propagation are needed in research and practical applications through harvester data analytics.
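For reference, the Burr Type XII family used above has a closed-form CDF, F(x) = 1 − (1 + (x/s)^c)^(−k) for x > 0 with shape parameters c, k and scale s. A small sketch with illustrative parameter values (the fitted parameters of the study are not given in the abstract):

```python
def burr12_cdf(x, c, k, s=1.0):
    """Burr XII CDF: shape parameters c, k > 0 and scale s > 0, support x > 0."""
    return 1.0 - (1.0 + (x / s) ** c) ** (-k)

def burr12_pdf(x, c, k, s=1.0):
    """Burr XII density, the derivative of the CDF above."""
    z = x / s
    return (c * k / s) * z ** (c - 1) * (1.0 + z ** c) ** (-k - 1)

p = burr12_cdf(1.0, c=2, k=3)   # = 1 - 2**-3 = 0.875
```

The heavy right tail (polynomial decay at rate x^(−ck)) and flexible skewness are what make this family suitable for leptokurtic, heavy-tailed prediction errors after shifting them into the positive support.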
Traditional models for semantic segmentation in point clouds primarily focus on smaller scales. However, in real-world applications, point clouds often exhibit larger scales, leading to heavy computational and memory requirements. The key to handling large-scale point clouds lies in leveraging random sampling, which offers higher computational efficiency and lower memory consumption compared to other sampling methods. Nevertheless, the use of random sampling can potentially result in the loss of crucial points during the encoding stage. To address these issues, this paper proposes the cross-fusion self-attention network (CFSA-Net), a lightweight and efficient network architecture specifically designed for directly processing large-scale point clouds. At the core of this network is the incorporation of random sampling alongside a local feature extraction module based on cross-fusion self-attention (CFSA). This module effectively integrates long-range contextual dependencies between points by employing hierarchical position encoding (HPC). Furthermore, it enhances the interaction between each point's coordinates and feature information through cross-fusion self-attention pooling, enabling the acquisition of more comprehensive geometric information. Finally, a residual optimization (RO) structure is introduced to extend the receptive field of individual points by stacking hierarchical position encoding and cross-fusion self-attention pooling, thereby reducing the impact of information loss caused by random sampling. Experimental results on the Stanford Large-Scale 3D Indoor Spaces (S3DIS), Semantic3D, and SemanticKITTI datasets demonstrate the superiority of this algorithm over advanced approaches such as RandLA-Net and KPConv. These findings underscore the excellent performance of CFSA-Net in large-scale 3D semantic segmentation.
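The random sampling that such networks build on is itself trivial — which is exactly why it is cheap for large point clouds, needing no neighbor search or grid structure. A one-line sketch with a synthetic cloud (sizes and the xyz+rgb layout are illustrative assumptions):

```python
import numpy as np

def random_downsample(points, k, rng):
    """Keep k points chosen uniformly without replacement: O(n), no spatial queries."""
    idx = rng.choice(len(points), size=k, replace=False)
    return points[idx]

rng = np.random.default_rng(3)
cloud = rng.random((100_000, 6))            # e.g. xyz coordinates plus rgb features
sub = random_downsample(cloud, 4_096, rng)  # encoder-stage subsample
```

Because uniform sampling can drop thin structures entirely, architectures in this space pair it with modules (like the CFSA block above) that re-aggregate context around the surviving points.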
Image matching refers to the process of matching two or more images obtained at different times, from different sensors or under different conditions through a large number of feature points in the images. At present, image matching is widely used in target recognition and tracking, and in indoor positioning and navigation. Local features are often missing, however, in color images taken in dark light, greatly reducing the number of extracted feature points, which degrades image matching and can even cause target recognition to fail. An unsharp masking (USM) based denoising model is established and a local adaptive enhancement algorithm is proposed to achieve feature point compensation by strengthening local features of the dark image in order to effectively increase the amount of image information. Fast library for approximate nearest neighbors (FLANN) and random sample consensus (RANSAC) are used as the image matching algorithms. Experimental results show that the number of effective feature points obtained by the proposed algorithm from images in a dark-light environment is increased, and the accuracy of image matching is improved markedly.
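RANSAC, used above to reject bad matches, repeatedly fits a minimal model to a random subset and keeps the hypothesis with the most inliers. A generic line-fitting sketch of the principle (synthetic data standing in for feature correspondences, not the paper's image pipeline):

```python
import numpy as np

def ransac_line(x, y, iters=200, thresh=0.1, rng=None):
    """Fit y = a*x + b robustly: best 2-point hypothesis, then refit on its inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(x), 2, replace=False)   # minimal sample
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.abs(y - (a * x + b)) < thresh    # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return np.polyfit(x[best_inliers], y[best_inliers], 1)  # refit on consensus

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 100)
y = 2 * x + 1                          # "true" correspondences on a line
y[:30] = rng.uniform(-20, 40, 30)      # 30% gross mismatches
a, b = ransac_line(x, y, rng=rng)      # recovers slope ~2, intercept ~1
```

In image matching the model is a homography or fundamental matrix rather than a line, but the sample-score-refit loop is the same.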
Climate change has become a global phenomenon and is adversely affecting agricultural development across the globe. Developing countries like Pakistan, where the agriculture sector contributes 18.9% of GDP (gross domestic product) and employs 42% of the labor force, are directly and indirectly affected by climate change through increases in the frequency and intensity of climatic extremes such as floods, droughts and extreme weather events. In this paper, we focus on the impact of climate change on farm households and their adaptation strategies for coping with climatic extremes. For this purpose, we selected farm households using multistage stratified random sampling from four districts of the Potohar region, i.e. Attock, Rawalpindi, Jhelum and Chakwal. These districts were selected by dividing the Potohar region into rain-fed areas. We employed logistic regression to assess the determinants of adaptation to climate change and its impact. We also calculated the marginal effect of each independent variable of the logistic regression to measure the immediate rate of change in the model. To check the significance of our suggested model, we used hypothesis testing.
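For a logistic adaptation model as described, the marginal effect of a continuous covariate x is β·p·(1 − p), typically evaluated at the covariate means. A tiny sketch with made-up coefficients (the paper's estimated coefficients are not given in the abstract):

```python
import math

def logit_prob(b0, b1, x):
    """P(household adopts an adaptation | x) under a simple logistic model."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def marginal_effect(b0, b1, x):
    """dP/dx = b1 * p * (1 - p): the 'immediate rate of change' in the text."""
    p = logit_prob(b0, b1, x)
    return b1 * p * (1.0 - p)

me = marginal_effect(0.0, 1.0, 0.0)   # p = 0.5, so the effect is 1.0 * 0.25 = 0.25
```

Because p(1 − p) varies with x, the marginal effect is largest where adoption is most uncertain (p near 0.5) and shrinks toward the tails.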
To address the problems of large measurement error, limited image information and poor real-time performance in binocular vision ranging, a binocular ranging method based on ORB (oriented FAST and rotated BRIEF) features is proposed. Video frames are median-filtered, ORB features are extracted from the images, and the Hamming distance with the best matching performance is selected through experiments. RANSAC (random sample consensus) model estimation is applied to the screened matching points to remove mismatches, the model relationship between disparity and real distance is analyzed, and an optimal ranging model is constructed and verified on an experimental platform. The results show that, compared with other binocular ranging methods, the proposed method is more accurate, runs faster and is more robust, and it can display the distance information of image features in real time.
3D reconstruction using deep learning-based intelligent systems can greatly help in measuring an individual's height and shape quickly and accurately from 2D motion-blurred images. Generally, during real-time image acquisition, motion blur caused by camera shake or human motion appears. Deep learning-based intelligent control applied in vision can help solve this problem. To this end, we propose a 3D reconstruction method for motion-blurred images using deep learning. First, we develop a BF-WGAN algorithm that combines bilateral filtering (BF) denoising theory with a Wasserstein generative adversarial network (WGAN) to remove motion blur. The bilateral filter denoising algorithm is used to remove the noise and to retain the details of the blurred image. Then, the blurred image and the corresponding sharp image are input into the WGAN. This algorithm distinguishes the motion-blurred image from the corresponding sharp image according to the WGAN loss and perceptual loss functions. Next, we use the deblurred images generated by the BF-WGAN algorithm for 3D reconstruction. We propose a threshold optimization random sample consensus (TO-RANSAC) algorithm that can remove incorrect relationships between two views in the 3D reconstructed model relatively accurately. Compared with the traditional RANSAC algorithm, the TO-RANSAC algorithm can adjust the threshold adaptively, which improves the accuracy of the 3D reconstruction results. The experimental results show that our BF-WGAN algorithm has a better deblurring effect and higher efficiency than other representative algorithms. In addition, the TO-RANSAC algorithm yields a calculation accuracy considerably higher than that of the traditional RANSAC algorithm.
Variance is one of the most important measures of descriptive statistics and is commonly used in statistical analysis. The traditional second-order central moment based variance estimation is a widely utilized methodology. However, the traditional variance estimator is highly affected by the presence of extreme values. So this paper initially proposes two classes of calibration estimators based on an adaptation of the estimators recently proposed by Koyuncu, and then presents a new class of L-moments based calibration variance estimators utilizing L-moments characteristics (L-location, L-scale, L-CV) and auxiliary information. It is demonstrated that the proposed L-moments based calibration variance estimators are more efficient than the adapted ones. Artificial data is considered for assessing the performance of the proposed estimators. We also demonstrate an application related to apple fruit for the purposes of the article. Using artificial and real data sets, the percentage relative efficiency (PRE) of the proposed class of estimators with respect to the adapted ones is calculated. The PRE results indicate the superiority of the proposed class over the adapted ones in the presence of extreme values. In this manner, the proposed class of estimators could be applied over an expansive range of survey sampling whenever auxiliary information is available in the presence of extreme values.
Coverage of nominal 95% confidence intervals of a proportion estimated from a sample obtained under a complex survey design, or a proportion estimated from a ratio of two random variables, can depart significantly from its target. Effective calibration methods exist for intervals for a proportion derived from a single binary study variable, but not for estimates of thematic classification accuracy. To promote a calibration of confidence intervals within the context of land-cover mapping, this study first illustrates a common problem of under- and over-coverage with standard confidence intervals, and then proposes a simple and fast calibration that more often than not will improve coverage. The demonstration is with simulated sampling from a classified map with four classes, and a reference class known for every unit in a population of 160,000 units arranged in a square array. The simulations include four common probability sampling designs for accuracy assessment, and three sample sizes. Statistically significant over- and under-coverage was present in estimates of user's accuracy (UA) and producer's accuracy (PA) as well as in estimates of class area proportion. A calibration with Bayes intervals for UA and PA was most efficient with smaller sample sizes and two cluster sampling designs.
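The study calibrates with Bayes intervals; as a point of comparison, the Wilson score interval is a standard closed-form fix for the poor coverage of the plain Wald interval for a proportion. This sketch shows Wilson, not the paper's Bayes procedure:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (approx. 95% for z = 1.96)."""
    phat = successes / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_interval(50, 100)   # e.g. a user's accuracy of 50% from 100 samples
```

Under complex survey designs, however, both Wilson and Bayes intervals need an effective sample size in place of n, which is part of what motivates the calibration studied above.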
Haze concentration prediction, especially for PM2.5, has always been a significant focus of air quality research and merits deeper study. Aiming to predict the monthly average concentration of PM2.5 in Beijing, a novel method based on a Monte Carlo model is conducted. In order to fully exploit the value of the PM2.5 data, we take the logarithm of the original PM2.5 data and consider two different scales: the daily concentration and the daily chain development speed of PM2.5. The results show that both kinds of data are approximately normally distributed. On this basis, a Monte Carlo method can be applied to establish a normal-distribution probability model for the two variables, and random sampling numbers can be generated by computer. Through a large number of simulation experiments, the average monthly concentration of PM2.5 in Beijing and the general trend of PM2.5 can be obtained. Comparing the errors between the real data and the predicted data shows that the Monte Carlo method is reliable in predicting the monthly mean PM2.5 concentration in the area. This study also provides a feasible method that may be applied in other studies to predict other pollutants with large-scale time series data.
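The scheme above — model log PM2.5 as normal, then sample — amounts to lognormal Monte Carlo simulation of monthly means. A generic sketch with invented parameters (mu and sigma here are illustrative, not the Beijing fit):

```python
import numpy as np

rng = np.random.default_rng(11)
mu, sigma = 4.0, 0.5   # assumed mean/sd of ln(daily PM2.5); illustrative values only

# Simulate many 30-day months: daily concentrations are exp of normal draws.
daily = np.exp(rng.normal(mu, sigma, size=(10_000, 30)))
monthly_means = daily.mean(axis=1)

predicted = float(monthly_means.mean())           # Monte Carlo monthly-mean estimate
analytic = float(np.exp(mu + sigma ** 2 / 2))     # exact lognormal mean, for checking
```

The spread of `monthly_means` across simulated months also gives a direct uncertainty band around the predicted monthly average, which a single point forecast would not.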
The purpose of this paper is to obtain the expression of the variance of the sample mean difference for the Student's distributive model. In 2007 the study of the mean difference variance was resumed, after some decades, by Campobasso [1]. Using Nair's [2] and Lomnicki's [3] general results, he obtained the variance of the sample mean difference for different distributive models (Laplace's, triangular, power, logit, Pareto's and Gumbel's models). In addition he extended the knowledge beyond the results already known for other distributive models (normal, rectangular and exponential).
This research aims to develop a model to enhance lymphatic disease diagnosis using the random forest ensemble machine-learning method trained with a simple sampling scheme. This study has been carried out in two major phases: feature selection and classification. In the first stage, a number of discriminative features out of 18 were selected using PSO and several feature selection techniques to reduce the feature dimension. In the second stage, we applied the random forest ensemble classification scheme to diagnose lymphatic diseases. While making experiments with the selected features, we used original and resampled distributions of the dataset to train the random forest classifier. Experimental results demonstrate that the proposed method achieves a remarkable improvement in classification accuracy rate.
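The "simple sampling scheme" behind random forest training is bootstrap resampling: each tree trains on n draws with replacement, and the cases left out (out-of-bag, about 36.8% on average) give each tree a built-in validation set. A minimal sketch of that step alone (not the paper's full pipeline):

```python
import random

def bootstrap_split(n, rng):
    """Indices for one tree: an n-sized bootstrap sample plus its out-of-bag rest."""
    in_bag = [rng.randrange(n) for _ in range(n)]   # draw with replacement
    oob = sorted(set(range(n)) - set(in_bag))       # cases never drawn
    return in_bag, oob

rng = random.Random(0)
in_bag, oob = bootstrap_split(100, rng)   # e.g. 100 training records
```

Repeating this per tree decorrelates the ensemble members, and aggregating each case's predictions from only the trees where it was out-of-bag yields the OOB error estimate.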
Funding: funded by the National Natural Science Foundation of China (42071014).
Abstract: The Gobi spans a large area of China, surpassing the combined expanse of mobile and semi-fixed dunes, and its presence significantly influences the movement of sand and dust. However, the complex origins and diverse materials constituting the Gobi produce notable differences in saltation processes across Gobi surfaces, which are difficult to describe with a uniform morphology. It is therefore necessary to characterize the surface through parameters such as the three-dimensional (3D) size and shape of gravel; collecting morphology information for Gobi gravels is essential for studying its genesis and sand saltation. To enhance the efficiency and information yield of gravel parameter measurements, this study conducted field experiments in the Gobi region across Dunhuang City, Guazhou County, and Yumen City (administered by Jiuquan City), Gansu Province, China in March 2023. A research framework and methodology for measuring 3D parameters of gravel using point clouds were developed, alongside improved calculation formulas for 3D parameters including gravel grain size, volume, flatness, roundness, sphericity, and equivalent grain size. Multi-view geometry for 3D reconstruction enabled an optimal data acquisition scheme with high point cloud reconstruction efficiency and quality. The proposed methodology further incorporated point cloud clustering, segmentation, and filtering techniques to isolate individual gravel point clouds, and then deployed advanced point cloud algorithms, including the Oriented Bounding Box (OBB), point cloud slicing, and point cloud triangulation, to calculate the 3D parameters of individual gravels. These systematic processes allow precise and detailed characterization of individual gravels. For gravel grain size and volume, the correlation coefficients between point cloud and manual measurements all exceeded 0.9000, confirming the feasibility of the proposed methodology for measuring 3D parameters of individual gravels. The proposed workflow yields accurate calculations of relevant parameters for Gobi gravels, providing essential data support for subsequent studies of Gobi environments.
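The OBB-based grain-size measurement mentioned above can be sketched with a PCA-aligned bounding box. This is a minimal NumPy illustration under simplifying assumptions, not the paper's improved formulas; the flatness and sphericity indices shown are common textbook definitions used here only for demonstration.

```python
import numpy as np

def obb_extents(points):
    """Approximate the three axis lengths (long, intermediate, short)
    of an oriented bounding box via PCA of the point cloud."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Principal axes are the eigenvectors of the covariance matrix
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    proj = centered @ vecs                  # rotate into the OBB frame
    extents = proj.max(axis=0) - proj.min(axis=0)
    return np.sort(extents)[::-1]           # ordered a >= b >= c

def shape_indices(a, b, c):
    """Illustrative flatness and Krumbein intercept sphericity from the
    OBB axis lengths (not the paper's improved calculation formulas)."""
    flatness = (a + b) / (2 * c)
    sphericity = (b * c / a ** 2) ** (1 / 3)
    return flatness, sphericity
```

For a gravel point cloud reconstructed from multi-view photos, `obb_extents` returns the three grain-size axes that the paper derives from the OBB, and the indices can then be computed from them.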
Abstract: Direct measurement of snow water equivalent (SWE) in snow-dominated mountainous areas is difficult, so its prediction is essential for water resources management in such areas. Moreover, because of the nonlinear trend of snow spatial distribution and the multiple factors influencing the SWE spatial distribution, statistical models usually cannot produce acceptable results, and methods capable of predicting nonlinear trends are necessary. In this research, the Sohrevard Watershed in northwest Iran was selected as the case study for SWE prediction. A database was collected and the required maps were derived. Snow depth (SD) was measured at 150 points under two sampling patterns, systematic random sampling and Latin hypercube sampling (LHS), and snow density was measured at 18 randomly selected points; SWE was then calculated. SWE was predicted using artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and regression methods. The results showed that the ANN and ANFIS models performed better than the regression method under both sampling patterns. Moreover, by most of the efficiency criteria, the ANN, ANFIS and regression methods were more efficient under the LHS pattern than under systematic random sampling, although there were no significant differences between ANN and ANFIS in SWE prediction. Data from both sampling patterns were most sensitive to elevation, while the LHS and systematic random sampling patterns were least sensitive to the profile curvature and plan curvature, respectively.
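The Latin hypercube sampling pattern used above can be sketched in a few lines: the unit interval of each variable is split into n equal strata, one point is drawn inside each stratum, and the strata are permuted independently per dimension so every dimension is fully stratified. A minimal NumPy sketch, not the study's sampling code:

```python
import numpy as np

def latin_hypercube(n, dims, rng=None):
    """Latin hypercube sample: n points in [0, 1)^dims, with exactly one
    point in each of the n equal-width strata of every dimension."""
    rng = np.random.default_rng(rng)
    u = rng.random((n, dims))                       # position inside each stratum
    strata = np.array([rng.permutation(n) for _ in range(dims)]).T
    return (strata + u) / n
```

The resulting coordinates in [0, 1) can be rescaled to the actual ranges of the field variables (elevation, curvature, etc.) before selecting measurement points.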
Funding: Supported by the Ministry of Agriculture and Forestry key project "Puuta liikkeelle ja uusia tuotteita metsästä" ("Wood on the move and new products from forest") and the Academy of Finland (project numbers 295100, 306875).
Abstract: Background: The local pivotal method (LPM), which utilizes auxiliary data in sample selection, has recently been proposed as a sampling method for national forest inventories (NFIs). Its performance compared to simple random sampling (SRS) and to LPM with geographical coordinates has produced promising results in simulation studies. In this simulation study we compared all these sampling methods to systematic sampling. The LPM samples were selected either solely using the coordinates (LPMxy) or, in addition, using auxiliary remote sensing-based forest variables (RS variables). We utilized field measurement data (NFI-field) and Multi-Source NFI (MS-NFI) maps as target data, and independent MS-NFI maps as auxiliary data. The designs were compared using relative efficiency (RE), the ratio of the mean squared error of the reference sampling design to that of the studied design. Applying a method in an NFI also requires a proven variance estimator, so three estimators were evaluated against the empirical variance of replications: 1) an estimator corresponding to SRS; 2) a Grafström-Schelin estimator repurposed for LPM; and 3) a Matérn estimator applied in the Finnish NFI for the systematic sampling design. Results: LPMxy was nearly comparable with the systematic design for most target variables. The REs of the LPM designs utilizing auxiliary data, relative to the systematic design, varied between 0.74 and 1.18 depending on the target variable. The SRS variance estimator was, as expected, the most biased and conservative. Similarly, the Grafström-Schelin estimator gave overestimates in the case of LPMxy, whereas when the RS variables were utilized as auxiliary data it tended to underestimate the empirical variance. In systematic sampling the Matérn and Grafström-Schelin estimators performed, for practical purposes, equally. Conclusions: LPM optimized for a specific variable tended to be more efficient than systematic sampling, but all of the considered LPM designs were less efficient than the systematic design for some target variables. The Grafström-Schelin estimator could be used as such with LPMxy, or instead of the Matérn estimator in systematic sampling. Further studies of the variance estimators are needed if other auxiliary variables are to be used in LPM.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Research Groups Program under Grant No. R.G.P.2/68/41. I.M.A. and A.I.A. received the grant.
Abstract: This article proposes two new Ranked Set Sampling (RSS) designs for estimating population parameters: Simple Z Ranked Set Sampling (SZRSS) and Generalized Z Ranked Set Sampling (GZRSS). These designs provide unbiased estimators of the mean of symmetric distributions. It is shown that for non-uniform symmetric distributions, the mean estimators under the suggested designs are more efficient than those obtained by RSS, Simple Random Sampling (SRS), extreme RSS and truncation-based RSS designs. The proposed RSS schemes also outperform other RSS schemes and provide more efficient estimates than their competitors under imperfect rankings. The suggested mean estimators under perfect and imperfect rankings are more efficient than the linear regression estimator under SRS. Our proposed RSS designs are also extended to cover estimation of the population median. Real data are used to examine the usefulness and efficiency of our estimators.
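For readers unfamiliar with the baseline the new designs are compared against, classical ranked set sampling can be sketched as follows: in each cycle, draw m sets of m units, rank each set, and keep the i-th order statistic from the i-th set. This is a minimal sketch of standard RSS with perfect ranking, not the SZRSS/GZRSS designs proposed in the article.

```python
import numpy as np

def rss_mean(population, m, cycles, rng=None):
    """Classical ranked set sampling estimate of the population mean.
    Ranking here uses the true values, i.e. perfect ranking."""
    rng = np.random.default_rng(rng)
    kept = []
    for _ in range(cycles):
        for i in range(m):
            s = rng.choice(population, size=m, replace=False)
            kept.append(np.sort(s)[i])   # keep the i-th order statistic
    return np.mean(kept), len(kept)
```

Under perfect ranking this estimator is unbiased for the mean and typically beats SRS of the same size; the article's designs modify which order statistics are retained.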
Funding: Supported by the National Natural Science Foundation of China (11901236), the Scientific Research Fund of Hunan Provincial Science and Technology Department (2019JJ50479), the Scientific Research Fund of Hunan Provincial Education Department (18B322), the Winning Bid Project of Hunan Province for the 4th National Economic Census ([2020]1), the Young Core Teacher Foundation of Hunan Province ([2020]43), and the Fundamental Research Fund of Xiangxi Autonomous Prefecture (2018SF5026).
Abstract: Cost-effective sampling design is a major concern in some experiments, especially when measurement of the characteristic of interest is costly, painful or time consuming. Ranked set sampling (RSS) was first proposed by McIntyre [1952. A method for unbiased selective sampling, using ranked sets. Australian Journal of Agricultural Research 3, 385-390] as an effective way to estimate the pasture mean. In the current paper, a modification of ranked set sampling called moving extremes ranked set sampling (MERSS) is considered for the best linear unbiased estimators (BLUEs) of the simple linear regression model. The BLUEs for this model under MERSS are derived and shown to be markedly more efficient for normal data than the BLUEs under simple random sampling.
Funding: The authors thank the Deanship of Scientific Research at King Khalid University, Kingdom of Saudi Arabia for funding this study through the research groups program under Project Number R.G.P.1/64/42. Ishfaq Ahmad and Ibrahim Mufrah Almanjahie received the grant.
Abstract: Variance is one of the most vital measures of dispersion and is widely employed in practice. A common approach to variance estimation, the traditional method of moments, is strongly influenced by the presence of extreme values, so its results cannot be relied on. Building on Koyuncu's recent work, the present paper first proposes two classes of variance estimators based on linear moments (L-moments), and then employs them with auxiliary data under double stratified sampling to introduce a new class of calibration variance estimators using important properties of L-moments (L-location, L-cv, L-variance). Three populations are used to assess the efficiency of the new estimators: the first and second use artificial data, and the third uses real data. The percentage relative efficiency of the proposed estimators over existing ones is evaluated. In the presence of extreme values, our findings show the superiority and high efficiency of the proposed classes over traditional ones. Hence, when auxiliary data are available along with extreme values, the proposed classes of estimators may be implemented in a wide variety of sampling surveys.
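The L-moment quantities the estimators above build on (L-location, L-scale, L-cv) can be computed from a sample via probability-weighted moments. A minimal sketch of the standard sample L-moment formulas, not the paper's calibration estimators:

```python
import numpy as np

def l_moments(sample):
    """First two sample L-moments (L-location l1 and L-scale l2) from
    the probability-weighted moments b0 and b1 of the sorted sample."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    return b0, 2 * b1 - b0          # l1 = b0, l2 = 2*b1 - b0

def l_cv(sample):
    """L-coefficient of variation, tau = l2 / l1."""
    l1, l2 = l_moments(sample)
    return l2 / l1
```

Because L-moments are linear in the order statistics, l2 (equal to half the Gini mean difference) reacts far less to extreme values than the squared-deviation moments, which is the property the paper exploits.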
Abstract: In this paper, auxiliary information is used to determine an estimator of the finite population total using nonparametric regression under stratified random sampling. To achieve this, a model-based approach is adopted, making use of local polynomial regression estimation to predict the nonsampled values of the survey variable y. The performance of the proposed estimator is investigated against some design-based and model-based regression estimators. The simulation experiments show that the resulting estimator exhibits good properties. Generally, good confidence intervals are seen for the nonparametric regression estimators, and use of the proposed estimator leads to relatively smaller values of RE compared to other estimators.
Abstract: In this paper, the problem of nonparametric estimation of the finite population quantile function using a multiplicative bias correction technique is considered. A robust estimator of the finite population quantile function based on multiplicative bias correction is derived with the aid of a superpopulation model. Most studies have concentrated on kernel smoothers for the estimation of regression functions, and this technique has also been applied to various nonparametric estimators of the finite population quantile already under review. A major problem with nonparametric kernel-based regression over a finite interval, such as the estimation of finite population quantities, is bias at boundary points. By correcting the boundary problems associated with previous model-based estimators, the multiplicative bias corrected estimator produces better results in estimating the finite population quantile function. Furthermore, the asymptotic behavior of the proposed estimators is presented: the estimator is asymptotically unbiased and statistically consistent when certain conditions are satisfied. The simulation results show that the suggested estimator performs quite well in terms of relative bias, mean squared error, and relative root mean error. As a result, the multiplicative bias corrected estimator is strongly suggested for survey sampling estimation of the finite population quantile function.
Abstract: Unlike height-diameter equations for standing trees commonly used in forest resources modelling, tree height models for cut-to-length (CTL) stems tend to produce prediction errors whose distributions are not conditionally normal but are rather leptokurtic and heavy-tailed. This feature was merely noticed in previous studies but never thoroughly investigated. This study characterized the prediction error distribution of a newly developed tree height model of this kind for Pinus radiata (D. Don) through the three-parameter Burr Type XII (BXII) distribution. The model's prediction errors (ε) exhibited heteroskedasticity conditional mainly on the small-end relative diameter of the top log and, to a minor extent, on DBH. Structured serial correlations were also present in the data. A total of 14 candidate weighting functions were compared to select the best two for weighting ε in order to reduce its conditional heteroskedasticity. The weighted prediction errors (εw) were shifted by a constant into the positive range supported by the BXII distribution. The distribution of the weighted and shifted prediction errors (εw+) was then characterized by the BXII distribution using maximum likelihood estimation, through 1000 rounds of repeated random sampling, fitting and goodness-of-fit testing, each time taking only one observation per tree to circumvent the potential adverse impact of serial correlation in the data on parameter estimation and inference. The nonparametric two-sample Kolmogorov-Smirnov (KS) goodness-of-fit test and the closely related Kuiper (KU) test showed that the fitted BXII distributions provided a good fit to the highly leptokurtic and heavy-tailed distribution of ε. Random samples generated from the fitted BXII distributions of εw+ derived from the best two weighting functions, when back-shifted and unweighted, exhibited distributions that were, in about 97% and 95% of the 1000 cases respectively, not statistically different from the distribution of ε. Our results for cut-to-length P. radiata stems represent the first case for any tree species in which a non-normal error distribution in tree height prediction is described by an underlying probability distribution. The fitted BXII prediction error distribution will help unlock the full potential of the new tree height model in forest resources modelling of P. radiata plantations, particularly when uncertainty assessments, statistical inferences and error propagation are needed in research and practical applications through harvester data analytics.
Funding: Funded by the National Natural Science Foundation of China Youth Project (61603127).
Abstract: Traditional models for semantic segmentation of point clouds primarily focus on smaller scales. However, in real-world applications, point clouds often exhibit larger scales, leading to heavy computational and memory requirements. The key to handling large-scale point clouds lies in leveraging random sampling, which offers higher computational efficiency and lower memory consumption than other sampling methods. Nevertheless, random sampling can result in the loss of crucial points during the encoding stage. To address these issues, this paper proposes the cross-fusion self-attention network (CFSA-Net), a lightweight and efficient network architecture specifically designed for directly processing large-scale point clouds. At the core of this network is the incorporation of random sampling alongside a local feature extraction module based on cross-fusion self-attention (CFSA). This module effectively integrates long-range contextual dependencies between points by employing hierarchical position encoding (HPC). Furthermore, it enhances the interaction between each point's coordinates and feature information through cross-fusion self-attention pooling, enabling the acquisition of more comprehensive geometric information. Finally, a residual optimization (RO) structure is introduced to extend the receptive field of individual points by stacking hierarchical position encoding and cross-fusion self-attention pooling, thereby reducing the impact of information loss caused by random sampling. Experimental results on the Stanford Large-Scale 3D Indoor Spaces (S3DIS), Semantic3D, and SemanticKITTI datasets demonstrate the superiority of this algorithm over advanced approaches such as RandLA-Net and KPConv. These findings underscore the excellent performance of CFSA-Net in large-scale 3D semantic segmentation.
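The random sampling step that makes such networks scale can be sketched in a few lines: keep a fixed fraction of points (and their per-point features) chosen uniformly at random. A minimal NumPy sketch of the downsampling operation only, not of CFSA-Net itself:

```python
import numpy as np

def random_downsample(points, features, ratio, rng=None):
    """Uniform random downsampling of a point cloud: keep a fixed
    fraction of points and the corresponding feature rows."""
    rng = np.random.default_rng(rng)
    n = len(points)
    keep = rng.choice(n, size=max(1, int(n * ratio)), replace=False)
    return points[keep], features[keep]
```

Unlike farthest-point sampling, this runs in O(n) time with no pairwise distance computation, which is exactly the efficiency argument the abstract makes; the CFSA module then compensates for the points it discards.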
Funding: Supported by the National Natural Science Foundation of China (No. 61771186), the Heilongjiang Provincial Natural Science Foundation of China (No. YQ2020F012), and the University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province (No. UNPYSCT-2017125).
Abstract: Image matching refers to the process of matching two or more images obtained at different times, by different sensors or under different conditions through a large number of feature points. At present, image matching is widely used in target recognition and tracking, and in indoor positioning and navigation. However, local features are often missing in color images taken in dark light, greatly reducing the number of extracted feature points, degrading image matching and even causing target recognition to fail. An unsharp masking (USM) based denoising model is established and a local adaptive enhancement algorithm is proposed to achieve feature point compensation by strengthening local features of the dark image, effectively increasing the amount of image information. The fast library for approximate nearest neighbors (FLANN) and random sample consensus (RANSAC) are used for image matching. Experimental results show that the proposed algorithm increases the number of effective feature points obtained from images in dark environments and clearly improves the accuracy of image matching.
Abstract: Climate change has become a global phenomenon and is adversely affecting agricultural development across the globe. In developing countries like Pakistan, where agriculture contributes 18.9% of GDP (gross domestic product) and employs 42% of the labor force, farmers are directly and indirectly affected by climate change through the increasing frequency and intensity of extreme climatic events such as floods, droughts and extreme weather. This paper focuses on the impact of climate change on farm households and their adaptation strategies for coping with climatic extremes. Farm households were selected using multistage stratified random sampling from four districts of the Potohar region, i.e. Attock, Rawalpindi, Jhelum and Chakwal, chosen by dividing the Potohar region into rain-fed areas. We employed logistic regression to assess the determinants of adaptation to climate change and its impact, and calculated the marginal effect of each independent variable of the logistic regression to measure the immediate rate of change in the model. Hypothesis testing was used to check the significance of the suggested model.
Abstract: To address the problems of large measurement error, limited image information and poor real-time performance in binocular vision ranging, a binocular ranging method based on ORB (oriented FAST and rotated BRIEF) features is proposed. Video frames are median-filtered, ORB features are extracted, and the Hamming distance with the best matching performance is selected experimentally. RANSAC (random sample consensus) model estimation is applied to the filtered matching points to remove mismatches, the relationship between disparity and true distance is analyzed, and an optimal ranging model is constructed and validated on an experimental platform. The results show that the proposed method is more accurate, faster and more robust than other binocular ranging methods, and can display the distance of image features in real time.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61902311, and in part by the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) under Grant JP18K18044.
Abstract: 3D reconstruction using deep learning-based intelligent systems can help measure an individual's height and shape quickly and accurately from 2D motion-blurred images. During real-time image acquisition, motion blur caused by camera shake or human motion commonly appears, and deep learning-based intelligent control applied to vision can help solve this problem. To this end, we propose a 3D reconstruction method for motion-blurred images using deep learning. First, we develop a BF-WGAN algorithm that combines bilateral filtering (BF) denoising with a Wasserstein generative adversarial network (WGAN) to remove motion blur. The bilateral filter removes noise while retaining the details of the blurred image; the blurred image and the corresponding sharp image are then input into the WGAN, which distinguishes the motion-blurred image from the sharp image according to the WGAN loss and perceptual loss functions. Next, we use the deblurred images generated by the BF-WGAN algorithm for 3D reconstruction. We propose a threshold optimization random sample consensus (TO-RANSAC) algorithm that removes incorrect relationships between two views in the 3D reconstructed model relatively accurately. Compared with the traditional RANSAC algorithm, TO-RANSAC adjusts the threshold adaptively, improving the accuracy of the 3D reconstruction results. Experimental results show that the BF-WGAN algorithm achieves a better deblurring effect and higher efficiency than other representative algorithms, and that TO-RANSAC yields considerably higher calculation accuracy than the traditional RANSAC algorithm.
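The fixed-threshold RANSAC baseline that TO-RANSAC improves on can be sketched with a simple 2D line-fitting example: repeatedly fit a model to a minimal random sample and keep the model with the most inliers under a fixed residual threshold. This is a generic illustration of classic RANSAC, not the paper's TO-RANSAC, which would adapt the `threshold` parameter instead of fixing it.

```python
import numpy as np

def ransac_line(x, y, threshold, iters=200, rng=None):
    """Classic fixed-threshold RANSAC for a 2D line y = a*x + b."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = 0, (0.0, 0.0)
    n = len(x)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)  # minimal sample
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        inliers = np.sum(np.abs(y - (a * x + b)) < threshold)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers
```

The fixed `threshold` is the weak point: too tight and true correspondences are rejected, too loose and wrong view relationships survive, which is the failure mode the adaptive threshold targets.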
Funding: The authors are grateful to the Deanship of Scientific Research at King Khalid University, Kingdom of Saudi Arabia for funding this study through the research groups program under project number R.G.P.2/67/41. Ibrahim Mufrah Almanjahie received the grant.
Abstract: Variance is one of the most important measures of descriptive statistics and is commonly used in statistical analysis. The traditional variance estimation based on the second-order central moment is a widely utilized methodology, but it is highly affected by the presence of extreme values. This paper therefore first proposes two classes of calibration estimators adapted from those recently proposed by Koyuncu, and then presents a new class of L-moments based calibration variance estimators utilizing L-moments characteristics (L-location, L-scale, L-CV) and auxiliary information. The proposed L-moments based calibration variance estimators are demonstrated to be more efficient than the adapted ones. Artificial data are used to assess the performance of the proposed estimators, and an application to apple fruit data is also demonstrated. Using artificial and real data sets, the percentage relative efficiency (PRE) of the proposed class of estimators with respect to the adapted ones is calculated. The PRE results indicate the superiority of the proposed class over the adapted ones in the presence of extreme values. In this manner, the proposed class of estimators could be applied over an expansive range of survey sampling settings whenever auxiliary information is available in the presence of extreme values.
Abstract: Coverage of nominal 95% confidence intervals of a proportion estimated from a sample obtained under a complex survey design, or a proportion estimated from a ratio of two random variables, can depart significantly from its target. Effective calibration methods exist for intervals for a proportion derived from a single binary study variable, but not for estimates of thematic classification accuracy. To promote a calibration of confidence intervals within the context of land-cover mapping, this study first illustrates a common problem of under- and over-coverage with standard confidence intervals, and then proposes a simple and fast calibration that will more often than not improve coverage. The demonstration is with simulated sampling from a classified map with four classes, and a reference class known for every unit in a population of 160,000 units arranged in a square array. The simulations include four common probability sampling designs for accuracy assessment, and three sample sizes. Statistically significant over- and under-coverage was present in estimates of user's accuracy (UA) and producer's accuracy (PA) as well as in estimates of class area proportion. A calibration with Bayes intervals for UA and PA was most efficient with smaller sample sizes and two cluster sampling designs.
Abstract: Haze concentration prediction, especially for PM2.5, has always been a significant focus of air quality research and warrants deeper study. To predict the monthly average concentration of PM2.5 in Beijing, a novel method based on a Monte Carlo model is conducted. To fully exploit the value of the PM2.5 data, we take the logarithm of the original PM2.5 data and consider two variables at different scales: the daily concentration and the daily chain (day-over-day) growth rate of PM2.5. The results show that both are approximately normally distributed. On this basis, a Monte Carlo method can be applied to establish normal-distribution probability models for the two variables, and random samples can be generated by computer. Through a large number of simulation experiments, the average monthly concentration of PM2.5 in Beijing and its general trend can be obtained. Comparison of the real data with the predicted data shows that the Monte Carlo method is reliable for predicting the monthly mean PM2.5 concentration in the area. This study also provides a feasible method that may be applied in other studies to predict other pollutants with large-scale time series data.
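The core of the approach, sampling from a normal model fitted to log-concentrations and averaging the back-transformed draws, can be sketched as follows. The parameter values in the usage below are illustrative placeholders, not the fitted Beijing parameters.

```python
import numpy as np

def mc_monthly_mean(log_mu, log_sigma, n_draws=100_000, rng=None):
    """Monte Carlo estimate of the mean daily concentration, assuming
    log-concentrations follow Normal(log_mu, log_sigma) as suggested by
    the normality check of the logged data."""
    rng = np.random.default_rng(rng)
    draws = np.exp(rng.normal(log_mu, log_sigma, size=n_draws))
    return draws.mean()
```

With enough draws the simulated mean converges to the analytic lognormal mean exp(log_mu + log_sigma^2 / 2), which provides a quick sanity check on the simulation.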
Abstract: The purpose of this paper is to obtain the expression of the variance of the sample mean difference for the Student's t distributive model. In 2007, after some decades, the study of the mean difference variance was resumed by Campobasso [1]. Using Nair's [2] and Lomnicki's [3] general results, he obtained the variance of the sample mean difference for different distributive models (Laplace's, triangular, power, logit, Pareto's and Gumbel's models), extending the knowledge beyond the distributive models already covered (normal, rectangular and exponential).
Abstract: This research aims to develop a model to enhance lymphatic disease diagnosis using the random forest ensemble machine-learning method trained with a simple sampling scheme. The study was carried out in two major phases: feature selection and classification. In the first stage, a number of discriminative features out of 18 were selected using PSO and several feature selection techniques to reduce the feature dimension. In the second stage, we applied the random forest ensemble classification scheme to diagnose lymphatic diseases. In experiments with the selected features, we used both the original and resampled distributions of the dataset to train the random forest classifier. Experimental results demonstrate that the proposed method achieves a remarkable improvement in classification accuracy rate.
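The "resampled distribution" idea above, training on a rebalanced version of the dataset, can be sketched with a simple bootstrap scheme that upsamples each class to the size of the largest class. This is a generic illustration of class-balancing resampling, assumed here for demonstration; the paper does not specify this exact scheme.

```python
import numpy as np

def balanced_resample(X, y, rng=None):
    """Bootstrap each class up to the size of the largest class so the
    classifier trains on a balanced class distribution."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        idx.append(rng.choice(members, size=target, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]
```

The rebalanced `(X, y)` can then be fed to any ensemble classifier; rare disease classes no longer get drowned out by the majority class during training.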