Abstract: In order to enhance grain sampling efficiency, a truss-type multi-rod grain sampling machine is designed and tested in this work. The sampling machine consists primarily of a truss support mechanism, a main carriage mechanism, an auxiliary carriage mechanism, sampling rods, and a PLC controller. The movement of the main carriage along the truss, the movement of the auxiliary carriage along the main carriage, and the vertical movement of the sampling rods on the auxiliary carriage are controlled through PLC programming. The machine accurately controls the positions of the sampling rods, enabling random sampling with six rods simultaneously and ensuring comprehensive coverage. Sampling experiments showed that the multi-rod grain sampling machine, sampling with six rods at once, achieves a sampling frequency of 38 operations per hour, with a round-trip time of 33 seconds per cycle for the sampling rods and a sampling range of 18 m in the length direction. This study provides valuable insights for the design of multi-rod grain sampling machines.
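As a rough illustration of the random-positioning idea above, the sketch below draws six random sampling points inside the truss working envelope and sorts them along the truss axis to limit carriage travel. The 18 m length comes from the abstract; the 3 m width, the function name, and the sorting heuristic are assumptions for illustration, not the machine's actual PLC program.

```python
import random

def random_sampling_points(length_m=18.0, width_m=3.0, n_rods=6, seed=None):
    """Draw one random (x, y) target per sampling rod inside the truss envelope.

    length_m: travel of the main carriage along the truss (18 m per the abstract).
    width_m:  travel of the auxiliary carriage across the truss (assumed value).
    """
    rng = random.Random(seed)
    points = [(rng.uniform(0.0, length_m), rng.uniform(0.0, width_m))
              for _ in range(n_rods)]
    # Visit points in order of the long axis so the main carriage sweeps only once.
    return sorted(points)

if __name__ == "__main__":
    for x, y in random_sampling_points(seed=42):
        print(f"main carriage x = {x:5.2f} m, auxiliary carriage y = {y:4.2f} m")
```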
Funding: The Ministry of Agriculture and Forestry key project "Puuta liikkeelle ja uusia tuotteita metsästä" ("Wood on the move and new products from forest") and the Academy of Finland (project numbers 295100 and 306875).
Abstract: Background: The local pivotal method (LPM), which utilizes auxiliary data in sample selection, has recently been proposed as a sampling method for national forest inventories (NFIs). Its performance compared to simple random sampling (SRS) and to LPM with geographical coordinates has produced promising results in simulation studies. In this simulation study we compared all of these sampling methods to systematic sampling. The LPM samples were selected either solely using the coordinates (LPMxy) or, in addition, using auxiliary remote sensing-based forest variables (RS variables). We utilized field measurement data (NFI-field) and Multi-Source NFI (MS-NFI) maps as target data, and independent MS-NFI maps as auxiliary data. The designs were compared using relative efficiency (RE), the ratio of the mean squared error of the reference sampling design to that of the studied design. Applying a method in an NFI also requires a proven estimator for the variance. Therefore, three variance estimators were evaluated against the empirical variance of the replications: 1) an estimator corresponding to SRS; 2) a Grafström-Schelin estimator repurposed for LPM; and 3) a Matérn estimator applied in the Finnish NFI for the systematic sampling design. Results: LPMxy was nearly comparable with the systematic design for most target variables. The REs of the LPM designs utilizing auxiliary data, compared to the systematic design, varied between 0.74 and 1.18, depending on the target variable. The SRS variance estimator was, as expected, the most biased and conservative estimator. Similarly, the Grafström-Schelin estimator gave overestimates in the case of LPMxy. When the RS variables were utilized as auxiliary data, the Grafström-Schelin estimates tended to underestimate the empirical variance. In systematic sampling, the Matérn and Grafström-Schelin estimators performed equally for practical purposes. Conclusions: LPM optimized for a specific variable tended to be more efficient than systematic sampling, but all of the considered LPM designs were less efficient than the systematic sampling design for some target variables. The Grafström-Schelin estimator could be used as such with LPMxy, or instead of the Matérn estimator in systematic sampling. Further studies of the variance estimators are needed if other auxiliary variables are to be used in LPM.
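As an illustration of the relative-efficiency criterion used above, the sketch below computes RE as the mean squared error of the reference design (systematic sampling) divided by that of the studied design, both taken over simulation replications. The toy numbers and array names are illustrative, not the study's data.

```python
import numpy as np

def mse(estimates, true_value):
    """Mean squared error of replicated design-based estimates."""
    estimates = np.asarray(estimates, dtype=float)
    return np.mean((estimates - true_value) ** 2)

def relative_efficiency(ref_estimates, studied_estimates, true_value):
    """RE = MSE(reference design) / MSE(studied design); RE > 1 favours the studied design."""
    return mse(ref_estimates, true_value) / mse(studied_estimates, true_value)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_mean = 100.0                                           # e.g. mean volume, m^3/ha
    systematic = true_mean + rng.normal(0.0, 2.0, size=1000)    # reference design replications
    lpm_aux    = true_mean + rng.normal(0.0, 1.8, size=1000)    # studied design replications
    print(f"RE = {relative_efficiency(systematic, lpm_aux, true_mean):.2f}")
```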
Funding: This work is supported by the National Key Research and Development Program of China (2018YFA0701603) and the Natural Science Foundation of Anhui Province (2008085MF213).
Abstract: Reinforcement learning can be modeled mathematically as a Markov decision process. Consequently, the interaction samples, as well as the connection relations between them, are the two main types of information available for learning. However, most recent work on deep reinforcement learning treats samples independently, either within their own episode or across episodes. In this paper, in order to utilize more of the sample information, we propose an additional learning system based on a directed associative graph (DAG). The DAG is built in real time on all trajectories and encodes the complete connection relations among samples across all episodes. Through planning along the directed edges of the DAG, we offer another perspective for estimating state-action pairs, especially those unknown to the deep neural network (DNN) and the episodic memory (EM). A mixed loss function combining the three learning systems (DNN, EM, and DAG) is used to improve the efficiency of the parameter updates in the proposed algorithm. We show that our algorithm significantly outperforms the state-of-the-art algorithm in performance and sample efficiency on the test environments. Furthermore, the convergence of our algorithm is proved in the appendix, and its long-term performance as well as the effects of the DAG are verified.
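As a rough sketch of the graph-building step described above, the code below records every observed transition as a directed edge (across all episodes) and runs a simple backward value sweep over the edges to obtain a graph-based estimate for a state-action pair. The class and method names, the use of hashable states as dictionary keys, and the one-transition-per-(s, a) simplification (deterministic environment) are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

class DirectedAssociativeGraph:
    """Stores all transitions (s, a, r, s') observed in any episode as directed edges."""

    def __init__(self, gamma=0.99):
        self.gamma = gamma
        self.edges = defaultdict(dict)   # edges[s][a] = (reward, next_state)
        self.value = defaultdict(float)  # graph-based state-value estimate

    def add_transition(self, s, a, r, s_next):
        self.edges[s][a] = (r, s_next)

    def planning_sweep(self, n_sweeps=10):
        """Repeated backups along the stored edges (a simple value-iteration-style pass)."""
        for _ in range(n_sweeps):
            for s, actions in self.edges.items():
                self.value[s] = max(r + self.gamma * self.value[s_next]
                                    for r, s_next in actions.values())

    def q_estimate(self, s, a):
        """Graph-based estimate of Q(s, a); None if the pair was never visited."""
        if a not in self.edges[s]:
            return None
        r, s_next = self.edges[s][a]
        return r + self.gamma * self.value[s_next]
```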
Funding: Financial support has been obtained from the Swedish Research Council.
Abstract: In event-driven algorithms for the simulation of diffusing, colliding, and reacting particles, new positions and events are sampled from the cumulative distribution function (CDF) of a probability distribution. The distribution is sampled frequently, so fast sampling is important for the efficiency of the algorithm. The CDF is either known analytically or computed numerically. Analytical formulas are sometimes rather complicated, making them difficult to evaluate; alternatively, the CDF may be stored in a table for interpolation or computed directly when it is needed. The different alternatives are compared for chemically reacting molecules moving by Brownian diffusion in two and three dimensions. The best strategy depends on the dimension of the problem, the length of the time interval, the density of the particles, and the number of different reactions.
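The two strategies compared above (evaluating an analytical CDF directly versus tabulating it and interpolating) can be sketched as inverse-transform sampling. An exponential CDF stands in for the more complicated diffusion CDFs purely for illustration; the function names and table size are assumptions.

```python
import numpy as np

def sample_direct(u, rate=1.0):
    """Invert an analytically known CDF, F(t) = 1 - exp(-rate * t), directly."""
    return -np.log1p(-u) / rate

def build_cdf_table(rate=1.0, t_max=10.0, n=1024):
    """Pre-tabulate the CDF on a grid so each sample only needs interpolation."""
    t = np.linspace(0.0, t_max, n)
    return t, 1.0 - np.exp(-rate * t)

def sample_from_table(u, t_grid, cdf_grid):
    """Inverse-transform sampling by linear interpolation of the tabulated CDF."""
    return np.interp(u, cdf_grid, t_grid)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u = rng.random(5)
    t_grid, cdf_grid = build_cdf_table()
    print("direct:", sample_direct(u))
    print("table :", sample_from_table(u, t_grid, cdf_grid))
```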
Funding: Supported by the National Natural Science Foundation of China under Grant No. 71201013, the National Science Fund for Distinguished Young Scholars of China under Grant No. 70825006, the Program for Changjiang Scholars and Innovative Research Team in University under Grant No. IRT0916, and the National Natural Science Innovation Research Group of China under Grant No. 71221001.
Abstract: The GARCH diffusion model has received much attention in recent years, as it describes financial time series better than many other models. In this paper, the authors study the empirical performance of American option pricing when the underlying asset follows a GARCH diffusion. The parameters of the GARCH diffusion model are estimated by the efficient importance sampling-based maximum likelihood (EIS-ML) method. The least-squares Monte Carlo (LSMC) method is then used to price American options. Empirical pricing results for American put options in the Hong Kong stock market show that the GARCH diffusion model significantly outperforms the classical constant volatility (CV) model.
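The least-squares Monte Carlo step can be illustrated with a compact Longstaff-Schwartz pricer for an American put. For brevity, the sketch simulates constant-volatility geometric Brownian motion paths rather than the GARCH diffusion used in the paper; the polynomial regression basis, the parameter values, and the function name are illustrative choices.

```python
import numpy as np

def lsmc_american_put(s0=100.0, strike=100.0, r=0.03, sigma=0.2,
                      maturity=1.0, n_steps=50, n_paths=50_000, seed=0):
    """Price an American put by least-squares Monte Carlo (Longstaff-Schwartz)."""
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    disc = np.exp(-r * dt)

    # Simulate GBM paths (a constant-volatility stand-in for the GARCH diffusion).
    z = rng.standard_normal((n_steps, n_paths))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=0)
    s = s0 * np.exp(np.vstack([np.zeros(n_paths), log_paths]))

    # Backward induction: regress continuation value on a polynomial in the stock price.
    cash = np.maximum(strike - s[-1], 0.0)           # payoff at maturity
    for t in range(n_steps - 1, 0, -1):
        cash *= disc                                 # discount one step back to time t
        itm = strike - s[t] > 0.0                    # regress on in-the-money paths only
        if itm.any():
            coeffs = np.polyfit(s[t, itm], cash[itm], deg=3)
            continuation = np.polyval(coeffs, s[t, itm])
            exercise = strike - s[t, itm]
            cash[itm] = np.where(exercise > continuation, exercise, cash[itm])
    return disc * cash.mean()

if __name__ == "__main__":
    print(f"American put (LSMC): {lsmc_american_put():.3f}")
```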
Funding: Supported by the National Science and Technology Major Project for Major Drug Development (No. 2013ZX09508104), the Traditional Chinese Medicine Industry Research Special Project (No. 201307002), and the National Science & Technology Major Project "Key New Drug Creation and Manufacturing" program (No. 2011ZX09307002-03) of the People's Republic of China.
Abstract: A sample enrichment method focusing on minor targeted components was established to help them be successfully separated by pH-zone-refining counter-current chromatography (CCC). Seven minor indole alkaloids in Uncaria rhynchophylla (Miq.) Miq. ex Havil. (UR) were chosen to demonstrate the advantage of this method. The sample enrichment and separation were
Funding: Supported by the Office of the Vice Chancellor for Research and Graduate Education (VCRGE) at the University of Wisconsin-Madison, the Office of Naval Research Grant ONR MURI N00014-16-1-2161, the Center for Prototype Climate Modeling (CPCM) at the New York University Abu Dhabi Research Institute, and NUS Grant R-146-000-226-133.
Abstract: Nonlinear dynamical stochastic models are ubiquitous across different areas. Their statistical properties are often of great interest but are also very challenging to compute. Many excitable media models belong to this type of complex system, with large state dimensions and covariance matrices that have localized structures. In this article, a mathematical framework is developed for understanding spatial localization in a large class of stochastically coupled nonlinear systems in high dimensions. Rigorous mathematical analysis shows that the local effect from the diffusion results in an exponential decay of the components of the covariance matrix as a function of distance, while the global effect due to the mean-field interaction synchronizes different components and contributes a global covariance. The analysis is based on a comparison with an appropriate linear surrogate model, for which the covariance propagation can be computed explicitly. Two important applications of these theoretical results are discussed: the spatial averaging strategy for efficiently sampling the covariance matrix, and the localization technique in data assimilation. Test examples of a linear model and a stochastically coupled FitzHugh-Nagumo model for excitable media are adopted to validate the theoretical results. The latter is also used for a systematic study of the spatial averaging strategy for efficiently sampling the covariance matrix in different dynamical regimes.
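The covariance structure described above, exponential spatial decay from the diffusion plus a constant global component from the mean-field coupling, and the spatial averaging strategy can be sketched as follows: under approximate translation invariance, sample covariances at the same spatial lag are averaged over the whole domain, which reduces sampling error relative to an entry-by-entry estimate. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def model_covariance(n=100, sigma_loc=1.0, corr_length=5.0, sigma_glob=0.3):
    """Covariance with exponential spatial decay plus a constant global component."""
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])
    return sigma_loc**2 * np.exp(-dist / corr_length) + sigma_glob**2

def spatially_averaged_covariance(samples):
    """Average sample covariances over all pairs with the same lag (translation invariance)."""
    n = samples.shape[1]
    c_raw = np.cov(samples, rowvar=False)
    cov_by_lag = np.array([np.mean(np.diagonal(c_raw, offset=k)) for k in range(n)])
    lag = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return cov_by_lag[lag]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    c_true = model_covariance()
    samples = rng.multivariate_normal(np.zeros(100), c_true, size=200)
    c_avg = spatially_averaged_covariance(samples)
    c_raw = np.cov(samples, rowvar=False)
    print("raw estimate error         :", np.linalg.norm(c_raw - c_true))
    print("lag-averaged estimate error:", np.linalg.norm(c_avg - c_true))
```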