Journal Articles
14 articles found
1. Crowdsourced Sampling of a Composite Random Variable: Analysis, Simulation, and Experimental Test (Cited: 2)
Author: M. P. Silverman, Open Journal of Statistics, 2019, No. 4, pp. 494-529 (36 pages)
A composite random variable is a product (or sum of products) of statistically distributed quantities. Such a variable can represent the solution to a multi-factor quantitative problem submitted to a large, diverse, independent, anonymous group of non-expert respondents (the “crowd”). The objective of this research is to examine the statistical distribution of solutions from a large crowd to a quantitative problem involving image analysis and object counting. Theoretical analysis by the author, covering a range of conditions and types of factor variables, predicts that composite random variables are distributed log-normally to an excellent approximation. If the factors in a problem are themselves distributed log-normally, then their product is rigorously log-normal. A crowdsourcing experiment devised by the author and implemented with the assistance of a BBC (British Broadcasting Corporation) television show yielded a sample of approximately 2000 responses consistent with a log-normal distribution. The sample mean was within ~12% of the true count. However, a Monte Carlo simulation (MCS) of the experiment, employing either normal or log-normal random variables as factors to model the processes by which a crowd of 1 million might arrive at their estimates, resulted in a visually perfect log-normal distribution with a mean response within ~5% of the true count. The results of this research suggest that a well-modeled MCS, by simulating a sample of responses from a large, rational, and incentivized crowd, can provide a more accurate solution to a quantitative problem than might be attainable by direct sampling of a smaller crowd or an uninformed crowd, irrespective of size, that guesses randomly.
Keywords: Crowdsourcing; Computer Modeling of Crowds; Monte Carlo Simulation; Large-Scale Sampling; Log-Normal Random Variable; Log-Normal Distribution
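The abstract's central claim — that a product of independently distributed positive factors is approximately log-normal — is easy to reproduce numerically. Below is a minimal Monte Carlo sketch in Python; the two factor distributions and all parameter values are illustrative assumptions, not the paper's actual model of the crowd:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Model each response as a product of independent positive factors,
# e.g. estimated layers x estimated coins per layer (assumed forms).
n_respondents = 1_000_000
factor1 = rng.lognormal(mean=3.0, sigma=0.3, size=n_respondents)
factor2 = rng.normal(loc=20.0, scale=3.0, size=n_respondents).clip(min=1.0)
estimates = factor1 * factor2

# By the central limit theorem applied to log(estimates) = log f1 + log f2,
# the product is approximately log-normal.
logs = np.log(estimates)
print(f"mean of log responses   : {logs.mean():.3f}")
print(f"std  of log responses   : {logs.std():.3f}")
print(f"sample mean of estimates: {estimates.mean():.1f}")
# The log-normal mean exp(m + s^2/2) should track the sample mean:
print(f"log-normal mean exp(m + s^2/2): {np.exp(logs.mean() + 0.5*logs.std()**2):.1f}")
```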
2. Extraction of Information from Crowdsourcing: Experimental Test Employing Bayesian, Maximum Likelihood, and Maximum Entropy Methods (Cited: 1)
Author: M. P. Silverman, Open Journal of Statistics, 2019, No. 5, pp. 571-600 (30 pages)
A crowdsourcing experiment in which viewers (the “crowd”) of a British Broadcasting Corporation (BBC) television show submitted estimates of the number of coins in a tumbler was shown in an antecedent paper (Part 1) to follow a log-normal distribution Λ(m, s²). The coin-estimation experiment is an archetype of a broad class of image analysis and object counting problems suitable for solution by crowdsourcing. The objective of the current paper (Part 2) is to determine the location and scale parameters (m, s) of Λ(m, s²) by both Bayesian and maximum likelihood (ML) methods and to compare the results. One outcome of the analysis is the resolution, by means of Jeffreys’ rule, of questions regarding the appropriate Bayesian prior. It is shown that Bayesian and ML analyses lead to the same expression for the location parameter, but different expressions for the scale parameter, which become identical in the limit of an infinite sample size. A second outcome of the analysis concerns use of the sample mean as the measure of information of the crowd in applications where the distribution of responses is not sought or known. In the coin-estimation experiment, the sample mean was found to differ widely from the mean number of coins calculated from Λ(m, s²). This discordance raises critical questions concerning whether, and under what conditions, the sample mean provides a reliable measure of the information of the crowd. This paper resolves that problem by use of the principle of maximum entropy (PME). The PME yields a set of equations for finding the most probable distribution consistent with given prior information and only that information. If there is no solution to the PME equations for a specified sample mean and sample variance, then the sample mean is an unreliable statistic, since no measure can be assigned to its uncertainty. Parts 1 and 2 together demonstrate that the information content of crowdsourcing resides in the distribution of responses (very often log-normal in form), which can be obtained empirically or by appropriate modeling.
Keywords: Crowdsourcing; Bayesian Priors; Maximum Likelihood; Principle of Maximum Entropy; Parameter Estimation; Log-Normal Distribution
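A sketch of the parameter estimates the abstract compares: both analyses locate m at the mean of the log-responses, while the scale estimates differ by a 1/n versus 1/(n−1) factor that vanishes for large samples. The synthetic sample below stands in for the BBC data, which is not reproduced here, and the paper's full Bayesian posterior forms are not shown:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
sample = rng.lognormal(mean=5.0, sigma=0.8, size=2000)  # synthetic "crowd"

logs = np.log(sample)
n = len(logs)

# Location parameter: the mean of the log-responses.
m_hat = logs.mean()

# Scale parameter: the ML estimate uses 1/n; a 1/(n-1) form differs at
# finite n and converges to the ML value as n -> infinity.
s2_ml  = np.sum((logs - m_hat)**2) / n
s2_alt = np.sum((logs - m_hat)**2) / (n - 1)

print(f"m_hat        = {m_hat:.4f}")
print(f"s2 (1/n)     = {s2_ml:.5f}")
print(f"s2 (1/(n-1)) = {s2_alt:.5f}")

# Mean of the fitted log-normal vs. the raw sample mean -- these can
# differ noticeably when the scale parameter is large.
print(f"fitted log-normal mean exp(m + s^2/2) = {np.exp(m_hat + 0.5*s2_ml):.1f}")
print(f"raw sample mean                       = {sample.mean():.1f}")
```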
3. Brownian Motion of Decaying Particles: Transition Probability, Computer Simulation, and First-Passage Times (Cited: 2)
Author: M. P. Silverman, Journal of Modern Physics, 2017, No. 11, pp. 1809-1849 (41 pages)
Recent developments in the measurement of radioactive gases in passive diffusion motivate the analysis of Brownian motion of decaying particles, a subject that has received little previous attention. This paper reports the derivation and solution of equations comparable to the Fokker-Planck and Langevin equations for one-dimensional diffusion and decay of unstable particles. In marked contrast to the case of stable particles, the two equations are not equivalent, but provide different information regarding the same stochastic process. The differences arise because Brownian motion with particle decay is not a continuous process. The discontinuity is readily apparent in the computer-simulated trajectories of the Langevin equation that incorporate both a Wiener process for displacement fluctuations and a Bernoulli process for random decay. This paper also reports the derivation of the mean time of first passage of the decaying particle to absorbing boundaries. Here, too, particle decay can lead to an outcome markedly different from that for stable particles. In particular, the first-passage time of the decaying particle is always finite, whereas the time for a stable particle to reach a single absorbing boundary is theoretically infinite due to the heavy tail of the inverse Gaussian density. The methodology developed in this paper should prove useful in the investigation of radioactive gases, aerosols of radioactive atoms, dust particles to which radioactive ions adhere, as well as diffusing gases and liquids of unstable molecules.
Keywords: Brownian Motion; Random Walk; Diffusion; Radioactivity; Transition Probability; Fokker-Planck Equation; Langevin Equation; First Passage; Fick’s Law; Wiener Process
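A minimal sketch of the kind of Langevin-type simulation the abstract describes, combining a Wiener process for displacement fluctuations with a Bernoulli process for random decay; all parameter values are arbitrary illustrations, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(seed=3)

D      = 1.0e-2   # diffusion coefficient (arbitrary units) -- assumption
lam    = 0.5      # decay rate; mean lifetime 1/lam          -- assumption
dt     = 1.0e-3   # time step
n_part = 10_000
n_step = 5_000

x     = np.zeros(n_part)          # all particles start at the origin
alive = np.ones(n_part, dtype=bool)

for _ in range(n_step):
    # Wiener process: Gaussian displacement for each surviving particle
    x[alive] += np.sqrt(2.0 * D * dt) * rng.standard_normal(alive.sum())
    # Bernoulli process: each surviving particle decays with probability lam*dt
    decayed = rng.random(alive.sum()) < lam * dt
    alive[np.flatnonzero(alive)[decayed]] = False

t = n_step * dt
print(f"surviving fraction: {alive.mean():.4f}  (exp(-lam*t) = {np.exp(-lam*t):.4f})")
print(f"MSD of survivors  : {np.mean(x[alive]**2):.5f}  (2*D*t = {2*D*t:.5f})")
```

Trajectories generated this way terminate abruptly at random (exponentially distributed) times, which is the discontinuity the abstract emphasizes.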
4. Reaction Forces on a Fixed Ladder in Static Equilibrium: Analysis and Definitive Experimental Test of the Ladder Problem (Cited: 2)
Author: M. P. Silverman, World Journal of Mechanics, 2018, No. 9, pp. 311-342 (32 pages)
The development of a theoretical model to predict the four equilibrium forces of reaction on a simple ladder of non-adjustable length leaning against a wall has long remained an unresolved matter. The difficulty is that the problem is statically indeterminate and therefore requires complementary information to obtain a unique solution. This paper reports 1) a comprehensive theoretical analysis of the three fundamental models based on treating the ladder as a single Euler-Bernoulli beam, and 2) a detailed experimental investigation of the forces of reaction as a function of applied load and location of load. In contrast to previous untested proposals that the solution to the ladder problem lay in the axial constraint on compression or the transverse constraint on flexure, the experimental outcome of the present work showed unambiguously that 1) the ladder was best modeled by a pinned support at the base (on the ground) and a roller support at the top (at the wall), and 2) the only complementary relation needed to resolve the static indeterminacy is the force of friction at the wall. Measurements were also made on the impact loading of a ladder by rapid ascent and descent of a climber. The results obtained were consistent with a simple dynamical model of the ladder as a linear elastic medium subject to a pulse perturbation. The solution to the ladder problem presented herein provides a basis for theoretical extension to other types of ladders. Of particular importance, given that accidents involving ladders in the workplace comprise a significant fraction of all industrial accidents, the theoretical relations reported here can help determine whether a collapsed structure, against which a ladder was applied, met regulatory safety limits or not.
Keywords: Reaction Forces on a Ladder; Reaction Forces on a Beam; Impulse Load on a Ladder; Ladder in Static Equilibrium; Statically Indeterminate Forces
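The closure the experiment singled out — three equations of static equilibrium plus a friction law at the wall — reduces to a single equation for the wall normal when moments are taken about the base. A sketch under simplifying assumptions (weightless ladder, point load, illustrative values; not the paper's experimental configuration):

```python
import numpy as np

# Geometry and load -- illustrative values
L     = 5.0               # ladder length (m)
alpha = np.radians(65.0)  # inclination from the ground
W     = 800.0             # applied load (N), e.g. a climber's weight
a     = 3.0               # load position along the ladder from the base (m)
mu_w  = 0.30              # assumed coefficient of friction at the wall

h, d = L*np.sin(alpha), L*np.cos(alpha)   # top height, base-to-wall distance

# Moments about the base, with the closure F_w = mu_w * N_w, give one
# equation for the wall normal N_w; the force balances give the rest.
N_w = W * a * np.cos(alpha) / (h + mu_w * d)
F_w = mu_w * N_w          # wall friction (vertical, upward)
N_g = W - F_w             # ground normal (vertical)
F_g = N_w                 # ground friction (horizontal, toward the wall)

print(f"N_w = {N_w:7.1f} N   F_w = {F_w:7.1f} N")
print(f"N_g = {N_g:7.1f} N   F_g = {F_g:7.1f} N")
```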
5. Bending of a Tapered Rod: Modern Application and Experimental Test of Elastica Theory (Cited: 2)
Authors: M. P. Silverman, Joseph Farrah, World Journal of Mechanics, 2018, No. 7, pp. 272-300 (29 pages)
A tapered rod mounted at one end (base) and subject to a normal force at the other end (tip) is a fundamental structure of continuum mechanics that occurs widely at all size scales, from radio towers to fishing rods to micro-electromechanical sensors. Although the bending of a uniform rod is well studied and gives rise to mathematical shapes described by elliptic integrals, no exact closed-form solution to the nonlinear differential equations of static equilibrium is known for the deflection of a tapered rod. We report in this paper a comprehensive numerical analysis and experimental test of the exact theory of bending deformation of a tapered rod. Given the rod geometry and elastic modulus, the theory yields virtually all the geometric and physical features that an analyst, experimenter, or instrument designer might want as a function of impressed load, such as the exact curve of deformation (termed the elastica), maximum tip displacement, maximum tip deflection angle, distribution of curvature, and distribution of bending moment. Applied experimentally, the theory permits rapid estimation of the elastic modulus of a rod, which is not easily obtainable by other means. We have tested the theory by photographing the shapes of a set of flexible rods of different lengths and tapers subject to a range of impressed loads and using digital image analysis to extract the coordinates of the elastica curves. The extent of flexure in these experiments far exceeded the range of applicability of approximations that linearize the equations of equilibrium or neglect tapering of the rod. Agreement between the measured deflection curves and the exact theoretical predictions was excellent in all but a few cases. In those exceptional cases, the nature of the anomalies provided important information regarding the deviation of the rods from an ideal Euler-Bernoulli cantilever, which thereby permitted us to model the deformation of the rods more accurately.
Keywords: Elastica; Deflection of Tapered Cantilever; Euler-Bernoulli Beam; Elastic Modulus; Flexure Formula
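For readers who want the flavor of the numerical analysis, the elastica equations for a cantilever with a transverse tip load can be integrated by iterating on the unknown tip coordinate. The sketch below assumes a linearly tapered circular rod and illustrative values of modulus, geometry, and load; it is not the authors' code:

```python
import numpy as np

# Illustrative parameters -- assumed values, not the rods of the paper
E  = 2.0e9               # elastic modulus (Pa)
L  = 1.0                 # rod length (m)
r0, rL = 5.0e-3, 2.0e-3  # base and tip radii of the tapered rod (m)
P  = 1.0                 # transverse load at the tip (N)
N  = 2000                # integration steps along the arc length

s  = np.linspace(0.0, L, N)
ds = L / N
r  = r0 + (rL - r0) * s / L
I  = np.pi * r**4 / 4.0  # second moment of area of the tapered cross-section

# Elastica equations for a cantilever with transverse tip load P:
#   dx/ds = cos(theta), dy/ds = sin(theta), dtheta/ds = P*(x_tip - x)/(E*I(s))
# x_tip is unknown a priori, so iterate (damped fixed point) until the
# integrated tip coordinate reproduces the guess.
x_tip = L
for _ in range(200):
    x = y = theta = 0.0
    for k in range(N):
        theta += P * (x_tip - x) / (E * I[k]) * ds
        x += np.cos(theta) * ds
        y += np.sin(theta) * ds
    if abs(x - x_tip) < 1e-10:
        break
    x_tip = 0.5 * (x_tip + x)   # damped update for numerical stability

print(f"tip coordinates: x = {x:.4f} m, y = {y:.4f} m")
print(f"maximum tip deflection angle = {np.degrees(theta):.2f} degrees")
```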
6. Progressive Randomization of a Deck of Playing Cards: Experimental Tests and Statistical Analysis of the Riffle Shuffle (Cited: 1)
Author: M. P. Silverman, Open Journal of Statistics, 2019, No. 2, pp. 268-298 (31 pages)
The question of how many shuffles are required to randomize an initially ordered deck of cards is a problem that has fascinated mathematicians, scientists, and the general public. The two principal theoretical approaches to the problem, which differed in how each defined randomness, have led to statistically different threshold numbers of shuffles. This paper reports a comprehensive experimental analysis of the card randomization problem for the purposes of determining 1) which of the two theoretical approaches made the more accurate prediction, 2) whether different statistical tests yield different threshold numbers of randomizing shuffles, and 3) whether manual or mechanical shuffling randomizes a deck more effectively for a given number of shuffles. Permutations of 52-card decks, each subjected to sets of 19 successive riffle shuffles executed manually and by an auto-shuffling device, were recorded sequentially and analyzed with respect to 1) the theory of runs, 2) rank ordering, 3) serial correlation, 4) the theory of rising sequences, and 5) entropy and information theory. Among the outcomes, it was found that: 1) different statistical tests were sensitive to different patterns indicative of residual order; 2) as a consequence, the threshold number of randomizing shuffles could vary widely among tests; 3) in general, manual shuffling randomized a deck better than mechanical shuffling for a given number of shuffles; and 4) the mean number of rising sequences as a function of the number of manual shuffles matched very closely the theoretical predictions based on the Gilbert-Shannon-Reeds (GSR) model of riffle shuffles, whereas mechanical shuffling resulted in significantly fewer rising sequences than predicted.
Keywords: Randomization of Cards; Number of Riffle Shuffles; Rising Sequences; GSR Model; Entropy and Information
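Both the GSR model and the rising-sequence statistic are easy to simulate. A compact sketch (trial counts are illustrative; this is not the paper's experimental protocol):

```python
import numpy as np

rng = np.random.default_rng(seed=6)

def gsr_riffle(deck):
    """One riffle shuffle under the Gilbert-Shannon-Reeds model."""
    n = len(deck)
    cut = rng.binomial(n, 0.5)                 # binomial cut into two packets
    left, right = list(deck[:cut]), list(deck[cut:])
    out = []
    while left or right:
        # Drop the next card from a packet with probability proportional
        # to the number of cards remaining in that packet.
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def rising_sequences(deck):
    """1 + number of card values v whose successor v+1 sits earlier
    in the deck = number of maximal rising sequences."""
    pos = {card: i for i, card in enumerate(deck)}
    return 1 + sum(pos[v + 1] < pos[v] for v in range(len(deck) - 1))

n_trials, n_shuffles = 2000, 7
counts = np.zeros(n_shuffles)
for _ in range(n_trials):
    deck = list(range(52))
    for k in range(n_shuffles):
        deck = gsr_riffle(deck)
        counts[k] += rising_sequences(deck)

for k, c in enumerate(counts / n_trials, start=1):
    print(f"after {k} shuffles: mean rising sequences = {c:.2f}")
```

For a fully random 52-card deck the mean number of rising sequences approaches 26.5, which the simulated means approach as the shuffle count grows.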
7. Cheating or Coincidence? Statistical Method Employing the Principle of Maximum Entropy for Judging Whether a Student Has Committed Plagiarism (Cited: 1)
Author: M. P. Silverman, Open Journal of Statistics, 2015, No. 2, pp. 143-157 (15 pages)
Elements of correspondence (“coincidences”) between a student’s solutions to an assigned set of quantitative problems and the solutions manual for the course textbook may suggest that the student copied the work from an illicit source. Plagiarism of this kind, which occurs primarily in fields such as the natural sciences, engineering, and mathematics, is often difficult to establish. This paper derives an expression for the probability that alleged coincidences in a student’s paper could be attributable to pure chance. The analysis employs the Principle of Maximum Entropy (PME), which, mathematically, is a variational procedure requiring maximization of the Shannon-Jaynes entropy function augmented by the completeness relation for probabilities and known information in the form of expectation values. The virtue of the PME as a general method of inferential reasoning is that it generates the most objective (i.e. least biased) probability distribution consistent with the given information. Numerical examination of test cases for a range of plausible conditions can yield outcomes that tend to exonerate a student who otherwise might be wrongfully judged guilty of cheating by adjudicators unfamiliar with the surprising properties of random processes.
Keywords: Plagiarism; Cheating; Coincidence; Information Entropy
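The PME machinery takes a concrete form for a count of coincidences with a known expected value: maximizing the Shannon-Jaynes entropy subject to the completeness relation and a mean constraint yields a distribution exponential in the count, with a Lagrange multiplier fixed by the mean. A sketch with invented numbers (N and mu are placeholders, not values from the paper):

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution over k = 0..N "coincidences" subject to a
# specified expectation <k> = mu.  The PME solution has the Gibbs form
# p_k ~ exp(-beta*k); beta is fixed by the mean constraint.
N, mu = 10, 2.5   # illustrative values

k = np.arange(N + 1)

def mean_for_beta(beta):
    w = np.exp(-beta * k)
    return (k * w).sum() / w.sum()

# mean_for_beta decreases monotonically in beta; bracket the root.
beta = brentq(lambda b: mean_for_beta(b) - mu, -5.0, 5.0)
p = np.exp(-beta * k)
p /= p.sum()

print(f"beta = {beta:.4f}")
for ki, pi in zip(k, p):
    print(f"  k = {ki:2d}: p = {pi:.4f}")
print(f"P(k >= 6) = {p[k >= 6].sum():.4f}  # chance of 6 or more coincidences")
```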
8. Brownian Motion of Radioactive Particles: Derivation and Monte Carlo Test of Spatial and Temporal Distributions (Cited: 1)
Authors: M. P. Silverman, Akrit Mudvari, World Journal of Nuclear Science and Technology, 2018, No. 2, pp. 86-119 (34 pages)
Stochastic processes such as diffusion can be analyzed by means of a partial differential equation of the Fokker-Planck type (FPE), which yields a transition probability density, or by a stochastic differential equation of the Langevin type (LE), which yields the time evolution of a statistical process variable. Provided the stochastic process is continuous and certain boundary conditions are met, the two approaches yield equivalent information. However, Brownian motion of radioactively decaying particles is not a continuous process because the Brownian trajectories abruptly terminate when the particle decays. Recent analysis of the Brownian motion of decaying particles by both approaches has led to different mean-square displacements. In this paper, we demonstrate the complete equivalence of the two approaches by 1) showing quantitatively and operationally how the probability densities and statistical moments predicted by the FPE and LE relate to one another, 2) verifying that both approaches lead to identical statistical moments at all orders, and 3) confirming that the analytical solution to the FPE accurately describes the Brownian trajectories obtained by Monte Carlo simulations based on the LE. The analysis in this paper addresses both the spatial distribution of the particles (i.e. the question of displacement as a function of diffusion time) and the temporal distribution (i.e. the question of first-passage time to fixed absorbing boundaries).
Keywords: Brownian Motion; Diffusion; Radioactive Decay; Fokker-Planck Equation; Langevin Equation; Monte Carlo Simulation; First-Passage Time; Wiener Process; Bernoulli Process; Survival Function; Moment-Generating Function
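When decay is independent of position, the FPE density factorizes into a survival factor times a Gaussian, so the moments of surviving particles are pure Gaussian moments at every order. A quick Monte Carlo consistency check in that spirit (not the paper's full analysis; parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=8)

D, lam, t, n = 1.0, 0.2, 2.0, 200_000   # illustrative parameters

# Langevin side: Gaussian displacement at time t for every particle, plus
# an independent Bernoulli-type decay with survival probability exp(-lam*t).
x = np.sqrt(2.0 * D * t) * rng.standard_normal(n)
survived = rng.random(n) < np.exp(-lam * t)

# Fokker-Planck side: the unnormalized density is exp(-lam*t) times a
# Gaussian of variance 2*D*t, so survivor moments are Gaussian moments:
# <x^2> = 2Dt, <x^4> = 3*(2Dt)^2, and so on at all orders.
xs = x[survived]
print(f"survivors          : {survived.mean():.4f}  vs exp(-lam*t) = {np.exp(-lam*t):.4f}")
print(f"<x^2> of survivors : {np.mean(xs**2):.3f}  vs 2Dt       = {2*D*t:.3f}")
print(f"<x^4> of survivors : {np.mean(xs**4):.3f}  vs 3*(2Dt)^2 = {3*(2*D*t)**2:.3f}")
```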
9. Analysis of Residence Time in the Measurement of Radon Activity by Passive Diffusion in an Open Volume: A Micro-Statistical Approach (Cited: 1)
Author: M. P. Silverman, World Journal of Nuclear Science and Technology, 2017, No. 4, pp. 252-273 (22 pages)
Residence time in a flow measurement of radioactivity is the time spent by a pre-determined quantity of radioactive sample in the flow cell. In a recent report of the measurement of indoor radon by passive diffusion in an open volume (i.e. no flow cell or control volume), the concept of residence time was generalized to apply to measurement conditions with random, rather than directed, flow. The generalization, leading to a quantity Δt_r, involved use of a) a phenomenological alpha-particle range function to calculate the effective detection volume, and b) a phenomenological description of diffusion by Fick’s law to determine the effective flow velocity. This paper examines the residence time in passive diffusion from the micro-statistical perspective of single-particle continuous Brownian motion. The statistical quantity “mean residence time” T_r is derived from the Green’s function for unbiased single-particle diffusion and is shown to be consistent with Δt_r. The finite statistical lifetime of the randomly moving radioactive atom plays an essential part. For stable particles, T_r is of infinite duration, whereas for an unstable particle (such as 222Rn), with diffusivity D and decay rate λ, T_r is approximately the effective size of the detection region divided by the characteristic diffusion velocity √(Dλ). Comparison of the mean residence time with the time of first passage (or exit time) in the theory of stochastic processes shows the conditions under which the two measures of time are equivalent and helps elucidate the connection between the phenomenological and statistical descriptions of radon diffusion.
Keywords: Radon; Diffusion; Brownian Motion; Random Walk; Residence Time; First-Passage Time; Exit Time
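An order-of-magnitude check of the closing estimate, T_r ≈ (effective detection size)/√(Dλ), using typical literature values for 222Rn; the effective detection size below is an assumption, not the paper's figure:

```python
import numpy as np

D_air    = 1.2e-5                 # diffusivity of radon in air (m^2/s), typical value
halflife = 3.82 * 24 * 3600       # 222Rn half-life (s)
lam      = np.log(2) / halflife   # decay rate (1/s)

v_char = np.sqrt(D_air * lam)     # characteristic diffusion velocity (m/s)
size   = 0.05                     # assumed effective detection size (m)

print(f"decay rate lambda   = {lam:.3e} 1/s")
print(f"diffusion velocity  = {v_char:.3e} m/s")
print(f"mean residence time = {size / v_char:.0f} s (~{size / v_char / 60:.0f} min)")
```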
10. Method to Measure Indoor Radon Concentration in an Open Volume with Geiger-Mueller Counters: Analysis from First Principles (Cited: 2)
Author: M. P. Silverman, World Journal of Nuclear Science and Technology, 2016, No. 4, pp. 232-260 (29 pages)
A simple method employing a pair of pancake-style Geiger-Mueller (GM) counters for quantitative measurement of radon activity concentration (activity per unit volume) is described and demonstrated. The use of two GM counters, together with the basic theory derived in this paper, permits the detection of alpha particles from decay of 222Rn and its progeny (218Po, 214Po) and the conversion of the alpha count rate into a radon concentration. A unique feature of this method, in comparison with standard methodologies to measure radon concentration, is the absence of a fixed control volume. Advantages afforded by the reported GM method include: 1) it provides a direct in-situ value of radon level, thereby eliminating the need to send samples to an external testing laboratory; 2) it can be applied to monitoring radon levels exhibiting wide short-term variability; 3) it can yield short-term measurements of comparable accuracy and equivalent or higher precision than a commercial radon monitor sampling by passive diffusion; 4) it yields long-term measurements statistically equivalent to commercial radon monitors; 5) it uses the most commonly employed, overall least expensive, and most easily operated type of nuclear instrumentation. As such, the method is particularly suitable for use by researchers, public health personnel, and home dwellers who prefer to monitor indoor radon levels themselves. The results of a consecutive 30-day sequence of 24-hour mean radon measurements by the proposed GM method and a commercial state-of-the-art radon monitor certified for radon testing are compared.
Keywords: Radioactivity; Radon Concentration; Geiger-Mueller Counter; Alpha Particle; Diffusion; Alpha Range
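The conversion at the heart of any such method is dimensional: an activity concentration is a net count rate divided by a detection efficiency and an effective volume. The sketch below shows only that generic arithmetic; every number is a placeholder, and the paper's actual calibration relations (derived from the alpha range function) are not reproduced here:

```python
# Generic conversion from an alpha count rate to an activity concentration:
#   C [Bq/m^3] = R / (eps * V_eff)
# R     = net alpha count rate attributable to radon and counted progeny
# eps   = overall detection efficiency (counts per decay)
# V_eff = effective detection volume set by the alpha range in air
R     = 0.15      # counts/s  -- placeholder
eps   = 0.10      # assumed overall efficiency -- placeholder
V_eff = 1.0e-2    # m^3 -- placeholder

C = R / (eps * V_eff)
print(f"radon activity concentration ~ {C:.0f} Bq/m^3")
```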
11. Numerical Procedures for Calculating the Probabilities of Recurrent Runs
Author: M. P. Silverman, Open Journal of Statistics, 2014, No. 2, pp. 144-153 (10 pages)
Run count statistics serve a central role in tests of non-randomness of stochastic processes of interest to a wide range of disciplines within the physical sciences, social sciences, business and finance, and other endeavors involving intrinsic uncertainty. To carry out such tests, it is often necessary to calculate two kinds of run count probabilities: 1) the probability that a certain number of trials results in a specified multiple occurrence of an event, or 2) the probability that a specified number of occurrences of an event take place within a fixed number of trials. The use of appropriate generating functions provides a systematic procedure for obtaining the distribution functions of these probabilities. This paper examines relationships among the generating functions applicable to recurrent runs and discusses methods, employing symbolic mathematical software, for implementing numerical extraction of probabilities. In addition, the asymptotic form of the cumulative distribution function is derived, which allows accurate runs statistics to be obtained for sequences of trials so large that computation times for extraction of this information from the generating functions could be impractically long.
Keywords: Recurrent Events; Theory of Runs; Time Series Analysis; Generating Functions; Probability Distributions
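As an illustration of the extraction procedure with symbolic software, Feller's classical generating function for the waiting time to the first success run of length r can be expanded as a power series and its coefficients read off; the specific generating functions treated in the paper may differ:

```python
import sympy as sp

# Feller's generating function for the number of trials to the first run
# of r successes, each with probability p (q = 1 - p):
#   F(s) = p**r * s**r * (1 - p*s) / (1 - s + q * p**r * s**(r+1))
s = sp.symbols('s')
p, r = sp.Rational(1, 2), 3          # fair coin, runs of length 3
q = 1 - p

F = p**r * s**r * (1 - p*s) / (1 - s + q * p**r * s**(r + 1))

# Numerical extraction: the coefficient of s**n in the series expansion is
# the probability that the first run of length r ends exactly at trial n.
n_max = 12
poly = sp.Poly(sp.series(F, s, 0, n_max + 1).removeO(), s)
for n, c in enumerate(poly.all_coeffs()[::-1]):
    if c != 0:
        print(f"P(first run of {r} ends at trial {n}) = {c} = {float(c):.5f}")
```

For a fair coin the coefficient of s³ is 1/8, as it must be, since the only way the first run of three heads can end at trial 3 is HHH.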
12. Statistical Analysis of Subsurface Diffusion of Solar Energy with Implications for Urban Heat Stress
Author: M. P. Silverman, Journal of Modern Physics, 2014, No. 9, pp. 751-762 (12 pages)
Analysis of hourly underground temperature measurements at a medium-size (by population) US city as a function of depth and extending over 5+ years revealed a positive trend exceeding the rate of regional and global warming by an order of magnitude. Measurements at depths greater than ~2 m are unaffected by daily fluctuations and sense only seasonal variability. A comparable trend also emerged from the surface temperature record of the largest US city (New York). Power spectral analysis of deep and shallow subsurface temperature records showed respectively two kinds of power-law behavior: 1) a quasi-continuum of power amplitudes indicative of Brownian noise, superposed (in the shallow record) by 2) a discrete spectrum of diurnal harmonics attributable to the unequal heat flux between daylight and darkness. Spectral amplitudes of the deepest temperature time series (2.4 m) conformed to a log-hyperbolic distribution. Upon removal of seasonal variability from the temperature record, the resulting spectral amplitudes followed a log-exponential distribution. Dynamical analysis showed that relative amplitudes and phases of temperature records at different depths were in excellent accord with a 1-dimensional heat diffusion model.
Keywords: Time Series Analysis; Heat Conduction; Thermal Diffusion; Power Laws; Climate Change; Heat-Island Effect
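The 1-dimensional heat diffusion model referred to at the end predicts exponential amplitude damping and a linear phase lag with depth, both governed by the damping depth δ = √(2D/ω). A sketch with a typical soil thermal diffusivity (an assumption, not the paper's fitted value) shows why sensors below ~2 m see only seasonal variability:

```python
import numpy as np

# Periodic surface forcing of a semi-infinite conductor:
#   T(z, t) = T_mean + A * exp(-z/delta) * sin(w*t - z/delta),
# with damping depth delta = sqrt(2*D/w).
D = 5.0e-7                       # thermal diffusivity (m^2/s) -- assumed
z = 2.4                          # deepest sensor depth in the study (m)

for period_days, label in [(1.0, "diurnal"), (365.25, "annual")]:
    w = 2.0 * np.pi / (period_days * 86400.0)
    delta = np.sqrt(2.0 * D / w)
    atten = np.exp(-z / delta)              # amplitude ratio at depth z
    lag_days = (z / delta) / w / 86400.0    # phase lag converted to days
    print(f"{label:7s}: damping depth = {delta:5.2f} m, "
          f"amplitude at {z} m = {atten:.2e} of surface, "
          f"lag = {lag_days:5.1f} days")
```

With these numbers the diurnal wave is attenuated by roughly nine orders of magnitude at 2.4 m, while the annual wave survives at a few tens of percent with a lag of about two months, consistent with the abstract's statement.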
13. The Role of Friction in the Static Equilibrium of a Fixed Ladder: Theoretical Analysis and Experimental Test
Author: M. P. Silverman, World Journal of Mechanics, 2018, No. 12, pp. 445-463 (19 pages)
In a recent publication the author derived and experimentally tested several theoretical models, distinguished by different boundary conditions at the contacts with horizontal and vertical supports, that predicted the forces of reaction on a fixed (i.e. inextensible) ladder. This problem is statically indeterminate since there are 4 forces of reaction and only 3 equations of static equilibrium. The model that predicted the empirical reactions correctly used a law of static friction to complement the equations of static equilibrium. The present paper examines in greater theoretical and experimental detail the role of friction in accounting for the forces of reaction on a fixed ladder. The reported measurements confirm that forces parallel and normal to the support at the top of the ladder are linearly proportional with a constant coefficient of friction irrespective of the magnitude or location of the load, as assumed in the theoretical model. However, measurements of forces parallel and normal to the support at the base of the ladder are linearly proportional with coefficients that depend sensitively on the location (although not the magnitude) of the load. This paper accounts quantitatively for the different effects of friction at the top and base of the ladder under conditions of usual use, whereby friction at the vertical support alone is insufficient to keep the ladder from sliding. A theoretical model is also proposed for the unusual circumstance in which friction at the vertical support can keep the ladder from sliding.
Keywords: Forces on a Ladder; Static Equilibrium; Law of Static Friction; Statically Indeterminate Forces of Reaction
14. Effects of a Periodic Decay Rate on the Statistics of Radioactive Decay: New Methods to Search for Violations of the Law of Radioactive Change
Author: M. P. Silverman, Journal of Modern Physics, 2015, No. 11, pp. 1533-1553 (21 pages)
It is a long-held tenet of nuclear physics, from the early work of Rutherford and Soddy up to present times, that the disintegration of each species of radioactive nuclide occurs randomly at a constant rate unaffected by interactions with the external environment. During the past 15 years or so, reports have been published of some 10 or more unstable nuclides with non-exponential, periodic decay rates claimed to be of geophysical, astrophysical, or cosmological origin. Deviations from standard exponential decay are weak, and the claims are controversial. This paper examines the effects of a periodic decay rate on the statistical distributions of 1) nuclear activity measurements and 2) nuclear lifetime measurements. It is demonstrated that the modifications to these distributions are approximately 100 times more sensitive to non-standard radioactive decay than measurements of the decay curve, power spectrum, or autocorrelation function for corresponding system parameters.
Keywords: Radioactive Decay; Non-Exponential Decay; Time-Dependent Decay; Half-Life; Lifetime; Decay Curve
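A periodic decay rate λ(t) = λ₀(1 + a·cos ωt) is straightforward to simulate and compare against pure exponential decay. A sketch with deliberately exaggerated modulation so the effect is visible in a modest sample (the physical effects reported in the literature are far weaker, and this is not the paper's statistical test):

```python
import numpy as np

rng = np.random.default_rng(seed=14)

# Decay with a weakly periodic rate lam(t) = lam0*(1 + a*cos(w*t)).
lam0, a, w = 1.0, 0.1, 2.0 * np.pi   # mean rate, modulation depth, angular freq
n_atoms, dt, t_max = 100_000, 2e-3, 5.0

steps = int(t_max / dt)
alive = np.ones(n_atoms, dtype=bool)
lifetimes = np.full(n_atoms, np.nan)

for i in range(steps):
    t = i * dt
    p_decay = lam0 * (1.0 + a * np.cos(w * t)) * dt   # Bernoulli trial per step
    decays = alive & (rng.random(n_atoms) < p_decay)
    lifetimes[decays] = t
    alive &= ~decays

# Compare the simulated lifetime density against pure exponential decay:
# the periodic term modulates the histogram around lam0*exp(-lam0*t).
hist, edges = np.histogram(lifetimes[~np.isnan(lifetimes)], bins=50,
                           range=(0.0, t_max), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
residual = hist / (lam0 * np.exp(-lam0 * centers)) - 1.0
print("relative deviation of lifetime density from pure exponential:")
for c, dev in zip(centers[::10], residual[::10]):
    print(f"  t = {c:4.2f}: {dev:+.3f}")
```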