To address the problem of poor point cloud completeness in multi-view 3D reconstruction, a multi-view depth estimation network based on spatial propagation (SP-MVSNet) is proposed. The idea of spatial propagation is introduced for dense point cloud reconstruction under complex conditions, and a hybrid depth hypothesis strategy based on spatial propagation and a spatial-aware refinement module are designed. The hybrid depth hypothesis strategy adopts coarse-to-fine depth inference, treating depth estimation as a multi-label classification task and applying a cross-entropy loss to the regularized probability volume to constrain the cost volume, thereby avoiding the over-fitting and slow convergence of regression-based methods. The spatial-aware refinement module obtains guidance from feature maps containing high-level semantic representations and, after a confidence check, applies a convolutional spatial propagation network that refines the final depth map by constructing an affinity matrix. Meanwhile, to address the low reconstruction quality of unreliable regions that fail multi-view consistency, a dynamic feature extraction network with sample-adaptive capability is further designed in combination with an attention mechanism to strengthen the model's local perception. Experimental results show that on the DTU dataset, SP-MVSNet improves reconstruction completeness by 32.8% and overall quality by 11.4% compared with CVP-MVSNet. On the Tanks and Temples benchmark and the BlendedMVS dataset, SP-MVSNet also outperforms most known methods, achieving good 3D reconstruction results.
Funding: supported by the Innovation Fund Project of the Gansu Education Department (Grant No. 2021B-099).
Abstract: The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, the RBDO model is decoupled using HDMR and WPEM. Second, Lagrange interpolation is used to approximate the univariate functions. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicability of the proposed method.
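To make the interpolation step concrete, here is a minimal, self-contained sketch of Lagrange interpolation of a univariate function; the nodes and the test function below are illustrative assumptions, not the paper's actual surrogate model:

```python
import numpy as np

def lagrange_interpolant(nodes, values):
    """Return a callable Lagrange interpolant through (nodes[i], values[i])."""
    nodes = np.asarray(nodes, dtype=float)
    values = np.asarray(values, dtype=float)

    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(nodes, values)):
            # Basis polynomial L_i(x): equals 1 at x_i and 0 at the other nodes.
            others = np.delete(nodes, i)
            total += yi * np.prod((x - others) / (xi - others))
        return total

    return p

# Example: interpolating g(x) = x**3 at 4 nodes reproduces a cubic exactly.
g = lambda x: x**3
nodes = np.array([-1.0, 0.0, 1.0, 2.0])
p = lagrange_interpolant(nodes, g(nodes))
```

In an HDMR-style decomposition, each univariate component would be replaced by such an interpolant, so the outer optimizer evaluates cheap polynomials instead of the original model.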
Funding: supported in part by the National Natural Science Foundation of China (Nos. 42271448, 41701531) and the Key Laboratory of Land Satellite Remote Sensing Application, Ministry of Natural Resources of the People's Republic of China (No. KLSMNRG202317).
Abstract: Under single-satellite observation, parameter estimation of the boost phase of high-precision space non-cooperative targets requires prior information. To improve accuracy without prior information, we propose a parameter estimation model of the boost phase based on trajectory-plane parametric cutting. The trajectory-cutting plane is generated from the plane passing through the geo-center and the sequence of observation lines of sight (LOS). With the coefficients of the trajectory-cutting plane used directly as the parameters to be estimated, a motion parameter estimation model for space non-cooperative targets is established, and the Gauss-Newton iteration method is used to solve for the flight parameters. The experimental results show that the proposed estimation algorithm relies only weakly on prior information and has higher estimation accuracy, providing a practical new idea and method for parameter estimation of space non-cooperative targets under single-satellite warning.
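The Gauss-Newton iteration named above can be sketched generically; the toy exponential-fit residual below is a hypothetical stand-in for the paper's trajectory-plane model:

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, iters=30):
    """Minimize ||residual(theta)||^2 by Gauss-Newton iteration."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = residual(theta)
        J = jacobian(theta)
        # Solve the normal equations J^T J dtheta = -J^T r.
        dtheta = np.linalg.solve(J.T @ J, -J.T @ r)
        theta = theta + dtheta
    return theta

# Toy problem: fit y = a * exp(b * t) to noise-free data (a=2, b=-0.5).
t = np.linspace(0.0, 4.0, 20)
y = 2.0 * np.exp(-0.5 * t)
residual = lambda th: th[0] * np.exp(th[1] * t) - y
jacobian = lambda th: np.column_stack([np.exp(th[1] * t),
                                       th[0] * t * np.exp(th[1] * t)])
theta = gauss_newton(residual, jacobian, theta0=[1.5, -0.3])
```

In the paper's setting, `theta` would hold the trajectory-cutting-plane coefficients and the residual would measure disagreement with the observed LOS sequence.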
Funding: supported by the Medical Special Cultivation Project of Anhui University of Science and Technology (Grant No. YZ2023H2B013), the Anhui Provincial Key Research and Development Project (Grant No. 2022i01020015), and the Open Project of the Key Laboratory of Conveyance Equipment (East China Jiaotong University), Ministry of Education (KLCE2022-01).
Abstract: A transformer-based multi-branch, multidimensional three-dimensional (3D) human pose estimation method is proposed to address self-occlusion, ill-posedness, and the lack of depth data in per-frame 3D pose estimation from two-dimensional (2D) to 3D mapping. First, by examining the relationship between the movements of different bones in the human body, four virtual skeletons are proposed to enhance the cyclic constraints of limb joints. Then, multiple parameters describing the skeleton are fused and projected into a high-dimensional space. Using a multi-branch network, motion features between bones and overall motion features are extracted to mitigate drift error in the estimation results. Furthermore, the estimated relative depth is projected into 3D space, and the error against real 3D data is computed, forming a loss function together with the relative depth error. The average joint pixel error is adopted as the primary performance metric. Compared with the benchmark approach, the estimation results show an improvement of 1.8 mm on the Human3.6M dataset.
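A per-joint error metric of this kind (a mean Euclidean distance over joints and frames, reported in mm on Human3.6M) can be computed as follows; the array shapes and the toy offset are assumptions, not details from the paper:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance between
    predicted and ground-truth joints, in the units of the inputs.
    pred, gt: arrays of shape (n_frames, n_joints, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy check: a constant 3 mm offset along one axis gives an error of 3 mm.
gt = np.zeros((2, 17, 3))
pred = gt.copy()
pred[..., 0] += 3.0
err = mpjpe(pred, gt)
```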
Abstract: In order to measure the parameters of a rocket in flight by radar, the rocket impact point was estimated accurately for trajectory correction. A Kalman filter with an adaptive filter gain matrix was adopted. Based on the particle trajectory model, the adaptive Kalman filter trajectory model was constructed to remove and filter outliers from the parameters over a section of flight detected by three-dimensional radar, and the rocket impact point was extrapolated. The results of numerical simulation show that outliers and noise in the trajectory measurement signal can be removed effectively by the adaptive Kalman filter and that the filter variance converges in a short time. Based on the relation between filtering time and impact-point estimation error, choosing a filtering time of 8-10 scans gives the minimum estimation error of the impact point.
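A minimal sketch of a Kalman filter with innovation-based outlier rejection, which is one common way to realize the adaptive outlier removal described above; the 1-D constant-velocity model and every parameter value here are assumptions, not the paper's trajectory model:

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=1.0, gate=3.0):
    """1-D constant-velocity Kalman filter that skips the measurement
    update when the normalized innovation exceeds `gate` std deviations."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # position-only measurement
    Q = q * np.eye(2)
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x
        P = F @ P @ F.T + Q
        S = (H @ P @ H.T).item() + r        # innovation variance
        nu = z - (H @ x).item()             # innovation
        if abs(nu) / np.sqrt(S) < gate:     # gate out gross outliers
            K = (P @ H.T).ravel() / S       # Kalman gain
            x = x + K * nu
            P = (np.eye(2) - np.outer(K, H)) @ P
        out.append(x[0])
    return np.array(out)

# Track a noise-free ramp with one gross outlier; the outlier is gated out.
zs = np.arange(30, dtype=float)
zs[15] += 50.0
est = kalman_track(zs)
```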
Funding: the Chinese Polar Environment Comprehensive Investigation and Assessment Programmes under contract Nos CHINARE2013-03-03 and CHINARE2013-04-03, the National Oceanic Commonweal Research Project under contract No. 201105001, and the National Natural Science Foundation of China under contract No. 41374043.
Abstract: The measurement of atmospheric water vapor (WV) content and variability is important for meteorological and climatological research. A technique for remote sensing of atmospheric WV content using ground-based Global Positioning System (GPS) receivers has become available and can routinely achieve accuracies for integrated WV content of 1-2 kg/m². Experimental work has shown that the accuracy of WV measurements from a moving platform is comparable to that of static land-based receivers. Extending this technique to a moving platform in the marine environment would greatly benefit many aspects of meteorological research, such as the calibration of satellite data, investigation of the air-sea interface, and forecasting and climatological studies. In this study, kinematic precise point positioning has been developed to investigate WV in the Arctic Ocean (80°-87°N), and annual variations are obtained for 2008 and 2012 that are consistent with those related to the enhanced greenhouse effect.
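Integrated WV is commonly obtained from the GPS zenith wet delay via a standard (Bevis-style) conversion factor; the refractivity constants below are typical literature values, assumed here rather than taken from this paper:

```python
def pwv_from_zwd(zwd_m, tm_kelvin):
    """Convert zenith wet delay (m) to precipitable water vapor (m) using
    the dimensionless factor Pi ~ 0.15. Constants are typical literature
    values (assumed, not from this paper)."""
    k2_prime = 22.1    # K/hPa, wet refractivity constant
    k3 = 3.739e5       # K^2/hPa, wet refractivity constant
    rv = 461.5         # J/(kg K), specific gas constant of water vapor
    rho_w = 1000.0     # kg/m^3, density of liquid water
    # The factor 1e8 folds together the 1e6 refractivity scaling and the
    # hPa -> Pa conversion of the constants above.
    pi = 1e8 / (rho_w * rv * (k3 / tm_kelvin + k2_prime))
    return pi * zwd_m

# A 100 mm zenith wet delay maps to roughly 15 mm of precipitable water.
pwv = pwv_from_zwd(0.10, 270.0)
```

A kg/m² of integrated WV corresponds to 1 mm of precipitable water, which is how the 1-2 kg/m² accuracy above is usually quoted.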
Funding: supported by the National Natural Science Foundation of China (51875535) and the Natural Science Foundation for Young Scientists of Shanxi Province (Nos. 201901D211242 and 201701D221017).
Abstract: For localisation of unknown non-cooperative targets in space, the existence of interference points causes inaccuracy in pose estimation when point cloud registration is used. To address this issue, this paper proposes a new iterative closest point (ICP) algorithm combined with distributed weights to improve the dependability and robustness of non-cooperative target localisation. As interference points in space have not yet been extensively studied, we classify them into two broad categories: far interference points and near interference points. For the former, a statistical outlier elimination algorithm is employed. For the latter, Gaussian distributed weights, whose values vary with the Euclidean distance from each point to the centroid, are incorporated into the traditional ICP algorithm. In each iteration, the weight matrix W associated with the overall localisation is obtained, and singular value decomposition is used to achieve high-precision estimation of the target pose. Finally, experiments are conducted by imaging a satellite model and setting the positions of interference points. The results suggest that the proposed algorithm can effectively suppress interference points and enhance the accuracy of non-cooperative target pose estimation. When the number of interference points reaches about 700, the average angular error remains below 0.88°.
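The weighted SVD pose step inside such an iteration can be sketched as follows; the Gaussian distance-based weights follow the abstract's idea, but the exact weighting scheme and parameters are assumptions:

```python
import numpy as np

def weighted_rigid_align(P, Q, sigma=1.0):
    """One weighted SVD pose step: find R, t minimizing
    sum_i w_i ||R p_i + t - q_i||^2 over rotations R and translations t,
    with Gaussian weights based on each source point's distance to the
    source centroid (distant points count less)."""
    d = np.linalg.norm(P - P.mean(axis=0), axis=1)
    w = np.exp(-d**2 / (2.0 * sigma**2))
    w /= w.sum()
    p_bar = (w[:, None] * P).sum(axis=0)    # weighted centroids
    q_bar = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - p_bar)).T @ (Q - q_bar)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid a reflection
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Recover a known rotation/translation from exact correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R, t = weighted_rigid_align(P, Q)
```

A full ICP loop would re-establish nearest-neighbour correspondences and recompute the weights each iteration; only the closed-form pose update is shown here.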
Funding: the National Natural Science Foundation of China under contract No. 11704225, the Shandong Provincial Natural Science Foundation under contract No. ZR2016AQ23, the State Key Laboratory of Acoustics, Chinese Academy of Sciences under contract No. SKLA201704, and the National Program on Global Change and Air-Sea Interaction.
Abstract: Environmental uncertainty represents the limiting factor in matched-field localization. Within a Bayesian framework, both the environmental parameters and the source parameters are treated as unknown variables. However, including environmental parameters in multiple-source localization greatly increases the complexity and computational demands of the inverse problem. In this paper, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. This paper compares two Bayesian point-estimation methods for source localization: the maximum a posteriori (MAP) approach and the marginal posterior probability density (PPD) approach. The MAP approach determines the source locations by maximizing the PPD over all source and environmental parameters. The marginal PPD approach integrates the PPD over the unknowns to obtain a sequence of marginal probability distributions over source range or depth. Monte Carlo analysis of the two approaches for a test case involving both geoacoustic and water-column uncertainties indicates that: (1) for sensitive parameters such as source range, water depth, and water sound speed, the MAP solution is better than the marginal PPD solution; (2) for less sensitive parameters, such as bottom sound speed, bottom density, and bottom attenuation, when the SNR is low, the marginal PPD solution better smooths the noise, which leads to better performance than the MAP solution. Since source range and depth are sensitive parameters, the research shows that the MAP approach provides a slightly more reliable method for locating multiple sources in an unknown environment.
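A toy grid example can illustrate how the two point estimates may disagree; the two-parameter posterior below is entirely hypothetical and stands in for a PPD over source range and one environmental nuisance parameter:

```python
import numpy as np

# Toy posterior over (source range r, nuisance environmental parameter e):
# the joint MAP maximizes over both variables; the marginal PPD estimate
# for r integrates e out first and then maximizes.
r = np.linspace(0.0, 10.0, 201)
e = np.linspace(-3.0, 3.0, 201)
R, E = np.meshgrid(r, e, indexing="ij")
# Hypothetical unnormalized PPD: a tall, very narrow peak at r = 4 under one
# specific environment, plus a lower but broad peak at r = 6 that is
# consistent with many environments.
post = (2.0 * np.exp(-((R - 4.0) ** 2) / 0.02 - (E ** 2) / 0.02)
        + np.exp(-((R - 6.0) ** 2) / 0.5 - (E ** 2) / 2.0))
map_range = r[np.unravel_index(post.argmax(), post.shape)[0]]
marginal = post.sum(axis=1)          # integrate the nuisance parameter out
marginal_range = r[marginal.argmax()]
```

Here the joint MAP picks the tall narrow peak (r = 4) while the marginal estimate favors the broad, probability-mass-rich peak (r = 6), which is the kind of disagreement the Monte Carlo comparison above quantifies.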
Abstract: Human action recognition (HAR) and pose estimation from videos have gained significant attention among research communities due to their applications in several areas, namely intelligent surveillance, human-robot interaction, robot vision, etc. Though considerable improvements have been made in recent years, the design of an effective and accurate action recognition model remains difficult owing to obstacles such as variations in camera angle, occlusion, background, movement speed, and so on. From the literature, it is observed that the temporal dimension is hard to handle in the action recognition process. Convolutional neural network (CNN) models can be widely used to solve this. With this motivation, this study designs a novel key-point extraction with deep convolutional neural networks based pose estimation (KPE-DCNN) model for activity recognition. The KPE-DCNN technique initially converts the input video into a sequence of frames, followed by a three-stage process, namely key-point extraction, hyperparameter tuning, and pose estimation. In the key-point extraction process, an OpenPose model is designed to compute accurate key points in the human pose. Then, an optimal DCNN model is developed to classify human activity labels based on the extracted key points. To improve the training process of the DCNN technique, the RMSProp optimizer is used to optimally adjust hyperparameters such as the learning rate, batch size, and epoch count. Experimental results on the benchmark UCF Sports dataset showed that the KPE-DCNN technique achieves good results compared with benchmark algorithms such as CNN, DBN, SVM, STAL, T-CNN, and so on.
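The RMSProp update used for training is standard; a minimal sketch on a toy quadratic (all values illustrative, not the study's training configuration):

```python
import numpy as np

def rmsprop(grad, x0, lr=0.01, beta=0.9, eps=1e-8, steps=500):
    """Minimal RMSProp: scale each step by a running RMS of past gradients,
    so steep and shallow coordinates get comparable effective step sizes."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        v = beta * v + (1.0 - beta) * g**2   # running mean of squared grads
        x = x - lr * g / (np.sqrt(v) + eps)
    return x

# Toy quadratic with very different curvatures per coordinate; plain gradient
# descent would need a tiny learning rate, RMSProp handles both at lr=0.01.
grad = lambda x: np.array([2.0 * x[0], 200.0 * x[1]])
x_min = rmsprop(grad, x0=[3.0, -2.0])
```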
Funding: support was provided by Research Joint Venture Agreement 17-JV-11242306045, "Old Growth Forest Dynamics and Structure," between the USDA Forest Service and the University of New Hampshire. Additional support to MJD was provided by the USDA National Institute of Food and Agriculture McIntire-Stennis Project Accession Number 1020142, "Forest Structure, Volume, and Biomass in the Northeastern United States." Further support came from the USDA National Institute of Food and Agriculture McIntire-Stennis project OKL02834 and the Division of Agricultural Sciences and Natural Resources at Oklahoma State University.
Abstract: Background: A new variance estimator is derived and tested for big BAF (Basal Area Factor) sampling, a forest inventory system that uses Bitterlich sampling (point sampling) with two BAF sizes: a small BAF for tree counts and a larger BAF on which tree measurements are made, usually including the DBHs and heights needed for volume estimation. Methods: The new estimator is derived using the Delta method from an existing formulation of the big BAF estimator as consisting of three sample means. The new formula is compared to existing big BAF estimators, including a popular estimator based on Bruce's formula. Results: Several computer simulation studies were conducted comparing the new variance estimator to all variance estimators for big BAF currently in the forest inventory literature. In the simulations, the new estimator performed well and comparably to existing variance formulas. Conclusions: A possible advantage of the new estimator is that it does not require the assumption of negligible correlation between basal area counts on the small BAF factor and volume-basal area ratios based on the large BAF factor selection trees, an assumption required by all previous big BAF variance estimation formulas. Although this correlation was negligible on the simulation stands used in this study, it is conceivable that the correlation could be significant in some forest types, such as those in which the DBH-height relationship can be affected substantially by density, perhaps through competition. We derived a formula that can be used to estimate the covariance between estimates of mean basal area and the ratio of estimates of mean volume and mean basal area. We also mathematically derived expressions for bias in the big BAF estimator that can be used to show that the bias approaches zero in large samples on the order of 1/n, where n is the number of sample points.
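The Delta-method flavor of derivation described above can be illustrated on the generic variance of a ratio of sample means, including the covariance term; this is a textbook sketch, not the paper's big BAF formula:

```python
import numpy as np

def ratio_variance_delta(x, y):
    """First-order Delta-method approximation to Var(x_bar / y_bar) for
    paired samples, retaining the covariance term (the analogue of the term
    earlier big BAF formulas assumed negligible)."""
    n = len(x)
    xb, yb = x.mean(), y.mean()
    vx = x.var(ddof=1) / n                 # variance of the mean of x
    vy = y.var(ddof=1) / n                 # variance of the mean of y
    cxy = np.cov(x, y, ddof=1)[0, 1] / n   # covariance of the two means
    r = xb / yb
    return r**2 * (vx / xb**2 + vy / yb**2 - 2.0 * cxy / (xb * yb))

# Monte Carlo check on correlated positive data (x roughly 2*y plus noise).
rng = np.random.default_rng(1)
sims = []
for _ in range(2000):
    ys = rng.normal(10.0, 1.0, size=50)
    xs = 2.0 * ys + rng.normal(0.0, 1.0, size=50)
    sims.append(xs.mean() / ys.mean())
mc_var = np.var(sims)
ys = rng.normal(10.0, 1.0, size=50)
xs = 2.0 * ys + rng.normal(0.0, 1.0, size=50)
approx = ratio_variance_delta(xs, ys)
```

With strongly correlated pairs like these, dropping the `cxy` term would roughly double the estimated variance, which is why retaining it matters when the correlation is not negligible.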
Funding: supported by the National Key R&D Program of China (Grant Nos. 2020YFB1709901 and 2020YFB1709904), the National Natural Science Foundation of China (Grant Nos. 51975495 and 51905460), the Guangdong Provincial Basic and Applied Basic Research Foundation (Grant No. 2021A1515012286), and the Guiding Funds of the Central Government for Supporting the Development of Local Science and Technology (Grant No. 2022L3049).
Abstract: Fast and accurate measurement of the volume of earthmoving materials is of great significance for the real-time evaluation of loader operation efficiency and the realization of autonomous operation. Existing methods for volume measurement, such as total station-based methods, cannot measure the volume in real time, while the bucket-based method also has the disadvantage of poor universality. In this study, a fast estimation method for a loader's shovel load volume by 3D reconstruction of material piles is proposed. First, a dense stereo matching method (QORB–MAPM) was proposed by integrating the improved quadtree ORB algorithm (QORB) and the maximum a posteriori probability model (MAPM), which achieves fast matching of feature points and dense 3D reconstruction of material piles. Second, the 3D point cloud models of the material piles before and after shoveling were registered and segmented to obtain the 3D point cloud model of the shoveling area, and the Alpha-shape algorithm of Delaunay triangulation was used to estimate the volume of the 3D point cloud model. Finally, a shovel loading volume measurement experiment was conducted under loose-soil working conditions. The results show that the shovel loading volume estimation method (QORB–MAPM VE) proposed in this study has higher estimation accuracy and less calculation time in volume estimation and bucket fill factor estimation, and it has significant theoretical research and engineering application value.
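The volume step in a Delaunay-based alpha-shape method reduces to summing the volumes of the tetrahedra that survive the alpha filter. A minimal sketch of that per-simplex kernel (the full alpha-shape pipeline, including the circumradius test, is omitted; the function name is an assumption):

```python
import numpy as np

def tet_volume(a, b, c, d):
    """Volume of one tetrahedron with vertices a, b, c, d:
    |det([b-a, c-a, d-a])| / 6.
    An alpha-shape volume estimate sums this quantity over the
    Delaunay tetrahedra whose circumradius is below the alpha threshold."""
    return abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0

# Unit right tetrahedron: volume is exactly 1/6.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
print(tet_volume(a, b, c, d))  # → 1/6 ≈ 0.1667
```

In practice the Delaunay tetrahedralization itself would come from a library such as `scipy.spatial.Delaunay`, with this kernel applied to each retained simplex.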
Funding: Supported by the National Research Foundation of Korea (NRF) (NRF-2018R1D1A3B07044041 and NRF-2020R1A2C1101258) and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2023-2020-0-01846). This work was conducted during the research year of Chungbuk National University in 2023.
Abstract: This article describes a novel approach for enhancing three-dimensional (3D) point cloud reconstruction for light field microscopy (LFM) using a U-Net architecture-based fully convolutional neural network (CNN). Since the directional view of the LFM is limited, noise and artifacts make it difficult to reconstruct the exact shape of 3D point clouds. Existing methods suffer from these problems due to the self-occlusion of the model. This manuscript proposes a deep fusion learning (DL) method that combines a 3D CNN with a U-Net-based model as a feature extractor. The sub-aperture images obtained from the light field microscopy are aligned to form a light field data cube for preprocessing. Multi-stream 3D CNNs and a U-Net architecture are applied to obtain the depth feature from the directional sub-aperture LF data cube. For the enhancement of the depth map, dual iteration-based weighted median filtering (WMF) is used to reduce surface noise and enhance the accuracy of the reconstruction. Generating a 3D point cloud involves combining two key elements: the enhanced depth map and the central view of the light field image. The proposed method is validated using the synthesized Heidelberg Collaboratory for Image Processing (HCI) and real-world LFM datasets. The results are compared with different state-of-the-art methods. The structural similarity index (SSIM) gains for boxes, cotton, pillow, and pens are 0.9760, 0.9806, 0.9940, and 0.9907, respectively. Moreover, the discrete entropy (DE) value for LFM depth maps exhibited better performance than other existing methods.
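A single pass of guidance-weighted median filtering over a depth map can be sketched as follows: each pixel's depth is replaced by the weighted median of its neighborhood, with weights derived from similarity in a guide image (e.g. the central view). This is a hedged illustrative sketch, not the paper's dual-iteration WMF; the function names, Gaussian weighting, and parameters `radius` and `sigma` are assumptions.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: smallest value whose cumulative weight
    reaches half of the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def wmf_depth(depth, guide, radius=1, sigma=0.1):
    """One weighted-median-filter pass over a depth map.
    Weights come from a Gaussian on guide-image similarity, so edges
    in the guide are preserved while isolated depth noise is removed."""
    h, w = depth.shape
    out = depth.copy()
    dpad = np.pad(depth, radius, mode="edge")
    gpad = np.pad(guide, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = dpad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            gp = gpad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            wts = np.exp(-((gp - guide[i, j]) ** 2) / (2 * sigma ** 2))
            out[i, j] = weighted_median(patch, wts)
    return out

# A single noisy spike in an otherwise flat depth map is removed.
d = np.ones((5, 5)); d[2, 2] = 5.0
g = np.ones((5, 5))
print(wmf_depth(d, g)[2, 2])  # → 1.0
```

A "dual iteration" scheme would simply apply such a pass twice, optionally updating the guide between passes.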
Abstract: To address the problem of poor point cloud completeness in multi-view 3D reconstruction, a multi-view depth estimation network based on spatial propagation (SP-MVSNet) is proposed. The idea of spatial propagation is introduced for dense point cloud reconstruction under complex conditions, and a spatial-propagation-based hybrid depth hypothesis strategy and a spatial-aware refinement module are designed. The hybrid depth hypothesis strategy adopts a coarse-to-fine depth inference scheme, treating depth estimation as a multi-label classification task and applying a cross-entropy loss to the regularized probability volume to constrain the cost volume, thereby avoiding the overfitting and slow convergence of regression-based methods. The spatial-aware refinement module obtains guidance from feature maps containing high-level semantic representations and, after a confidence check, applies a convolutional spatial propagation network that refines the final depth map by constructing an affinity matrix. Meanwhile, to address the low reconstruction quality that most methods exhibit in unreliable regions not satisfying multi-view consistency, a dynamic feature extraction network with sample-adaptive capability is further designed in combination with an attention mechanism to enhance the model's local perception. Experimental results show that on the DTU dataset, SP-MVSNet improves reconstruction completeness by 32.8% and overall quality by 11.4% compared with CVP-MVSNet. On the Tanks and Temples benchmark and the BlendedMVS dataset, SP-MVSNet also outperforms most known methods, achieving good 3D reconstruction results.
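Treating depth estimation as classification over discrete depth hypotheses means the loss is an ordinary per-pixel cross-entropy against the ground-truth depth bin. A minimal numpy sketch of that loss (the function name, shapes, and toy data are assumptions; SP-MVSNet applies this to the regularized probability volume inside a full network):

```python
import numpy as np

def depth_ce_loss(prob_volume, gt_index):
    """Cross-entropy over discrete depth hypotheses.

    prob_volume: (D, H, W) softmax probabilities over D depth hypotheses.
    gt_index:    (H, W) integer index of the ground-truth depth bin.
    Each pixel is a D-way classification problem; the loss is the mean
    negative log-probability assigned to the true depth bin."""
    d, h, w = prob_volume.shape
    ii, jj = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    p_gt = prob_volume[gt_index, ii, jj]   # probability of the true bin
    return float(-np.log(np.clip(p_gt, 1e-12, None)).mean())

# Uniform probabilities over 4 hypotheses give loss = ln(4).
D, H, W = 4, 2, 3
probs = np.full((D, H, W), 0.25)
gt = np.zeros((H, W), dtype=int)
print(depth_ce_loss(probs, gt))  # ≈ 1.386 (= ln 4)
```

Compared with regressing a continuous depth, this classification view gives a bounded, well-conditioned loss at every hypothesis plane, which is the convergence advantage the abstract refers to.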