A composite random variable is a product (or sum of products) of statistically distributed quantities. Such a variable can represent the solution to a multi-factor quantitative problem submitted to a large, diverse, independent, anonymous group of non-expert respondents (the “crowd”). The objective of this research is to examine the statistical distribution of solutions from a large crowd to a quantitative problem involving image analysis and object counting. Theoretical analysis by the author, covering a range of conditions and types of factor variables, predicts that composite random variables are distributed log-normally to an excellent approximation. If the factors in a problem are themselves distributed log-normally, then their product is rigorously log-normal. A crowdsourcing experiment devised by the author and implemented with the assistance of a BBC (British Broadcasting Corporation) television show yielded a sample of approximately 2000 responses consistent with a log-normal distribution. The sample mean was within ~12% of the true count. However, a Monte Carlo simulation (MCS) of the experiment, employing either normal or log-normal random variables as factors to model the processes by which a crowd of 1 million might arrive at their estimates, resulted in a visually perfect log-normal distribution with a mean response within ~5% of the true count. The results of this research suggest that a well-modeled MCS, by simulating a sample of responses from a large, rational, and incentivized crowd, can provide a more accurate solution to a quantitative problem than might be attainable by direct sampling of a smaller crowd or an uninformed crowd, irrespective of size, that guesses randomly.
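A minimal sketch (not the author's original model) illustrates why composite estimates tend toward log-normality: if each crowd member's response is the true count multiplied by several independent error factors, the logarithm of the product is a sum of independent terms, so by the central limit theorem the product is approximately log-normal. The factor count, factor range, and true count below are all hypothetical.

```python
import math
import random

random.seed(42)

# Simulated "crowd": each estimate is the true count times six
# independent multiplicative error factors (hypothetical choices).
N = 100_000
true_count = 1111

def one_estimate():
    est = true_count
    for _ in range(6):                     # six independent error factors
        est *= random.uniform(0.7, 1.3)
    return est

estimates = [one_estimate() for _ in range(N)]
logs = [math.log(x) for x in estimates]
mu = sum(logs) / N
var = sum((v - mu) ** 2 for v in logs) / N

# If the estimates are approximately log-normal, the logs are
# approximately normal: skewness of the logs is small in magnitude,
# and the sample median is close to exp(mu).
skew = sum((v - mu) ** 3 for v in logs) / (N * var ** 1.5)
median = sorted(estimates)[N // 2]
sample_mean = sum(estimates) / N
```

Because each factor has unit mean, the sample mean lands near the true count, while the median sits near exp(mu), below the mean, which is the characteristic asymmetry of a log-normal sample.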
A crowdsourcing experiment in which viewers (the “crowd”) of a British Broadcasting Corporation (BBC) television show submitted estimates of the number of coins in a tumbler was shown in an antecedent paper (Part 1) to follow a log-normal distribution Λ(m, s²). The coin-estimation experiment is an archetype of a broad class of image analysis and object counting problems suitable for solution by crowdsourcing. The objective of the current paper (Part 2) is to determine the location and scale parameters (m, s) of Λ(m, s²) by both Bayesian and maximum likelihood (ML) methods and to compare the results. One outcome of the analysis is the resolution, by means of Jeffreys’ rule, of questions regarding the appropriate Bayesian prior. It is shown that Bayesian and ML analyses lead to the same expression for the location parameter, but different expressions for the scale parameter, which become identical in the limit of an infinite sample size. A second outcome of the analysis concerns use of the sample mean as the measure of information of the crowd in applications where the distribution of responses is not sought or known. In the coin-estimation experiment, the sample mean was found to differ widely from the mean number of coins calculated from Λ(m, s²). This discordance raises critical questions concerning whether, and under what conditions, the sample mean provides a reliable measure of the information of the crowd. This paper resolves that problem by use of the principle of maximum entropy (PME). The PME yields a set of equations for finding the most probable distribution consistent with given prior information and only that information. 
If there is no solution to the PME equations for a specified sample mean and sample variance, then the sample mean is an unreliable statistic, since no measure can be assigned to its uncertainty. Parts 1 and 2 together demonstrate that the information content of crowdsourcing resides in the distribution of responses (very often log-normal in form), which can be obtained empirically or by appropriate modeling.
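The ML fit described above can be sketched in a few lines: the ML location estimate is the mean of the log responses, and the ML scale estimate is the 1/n variance of the logs (the estimator that coincides with the Bayesian result only as n grows). The parameter values and sample size below are illustrative, not taken from the paper.

```python
import math
import random

random.seed(1)

# Simulate crowd responses from a known log-normal Λ(m, s²), then
# recover (m, s) by maximum likelihood.
m_true, s_true = 7.0, 0.5
n = 50_000
responses = [random.lognormvariate(m_true, s_true) for _ in range(n)]

logs = [math.log(x) for x in responses]
m_hat = sum(logs) / n                             # ML location parameter
s2_hat = sum((v - m_hat) ** 2 for v in logs) / n  # ML scale parameter (1/n)

# The mean of the fitted distribution generally differs from the
# arithmetic sample mean; for a true log-normal sample they agree.
lognormal_mean = math.exp(m_hat + s2_hat / 2)
sample_mean = sum(responses) / n
```

For genuinely log-normal data the two means converge; the paper's point is that when they disagree widely, the sample mean alone is suspect and the full distribution (or the PME test) is needed.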
Recent developments in the measurement of radioactive gases in passive diffusion motivate the analysis of Brownian motion of decaying particles, a subject that has received little previous attention. This paper reports the derivation and solution of equations comparable to the Fokker-Planck and Langevin equations for one-dimensional diffusion and decay of unstable particles. In marked contrast to the case of stable particles, the two equations are not equivalent, but provide different information regarding the same stochastic process. The differences arise because Brownian motion with particle decay is not a continuous process. The discontinuity is readily apparent in the computer-simulated trajectories of the Langevin equation that incorporate both a Wiener process for displacement fluctuations and a Bernoulli process for random decay. This paper also reports the derivation of the mean time of first passage of the decaying particle to absorbing boundaries. Here, too, particle decay can lead to an outcome markedly different from that for stable particles. In particular, the first-passage time of the decaying particle is always finite, whereas the time for a stable particle to reach a single absorbing boundary is theoretically infinite due to the heavy tail of the inverse Gaussian density. The methodology developed in this paper should prove useful in the investigation of radioactive gases, aerosols of radioactive atoms, dust particles to which radioactive ions adhere, as well as diffusing gases and liquids of unstable molecules.
The development of a theoretical model to predict the four equilibrium forces of reaction on a simple ladder of non-adjustable length leaning against a wall has long remained an unresolved matter. The difficulty is that the problem is statically indeterminate and therefore requires complementary information to obtain a unique solution. This paper reports 1) a comprehensive theoretical analysis of the three fundamental models based on treating the ladder as a single Euler-Bernoulli beam, and 2) a detailed experimental investigation of the forces of reaction as a function of applied load and location of load. In contrast to previous untested proposals that the solution to the ladder problem lay in the axial constraint on compression or the transverse constraint on flexure, the experimental outcome of the present work showed unambiguously that 1) the ladder is best modeled by a pinned support at the base (on the ground) and a roller support at the top (at the wall), and 2) the only complementary relation needed to resolve the static indeterminacy is the force of friction at the wall. Measurements were also made on the impact loading of a ladder by rapid ascent and descent of a climber. The results obtained were consistent with a simple dynamical model of the ladder as a linear elastic medium subject to a pulse perturbation. The solution to the ladder problem herein presented provides a basis for theoretical extension to other types of ladders. 
Of particular importance, given that accidents involving ladders in the workplace comprise a significant fraction of all industrial accidents, the theoretical relations reported here can help determine whether a collapsed structure, against which a ladder was applied, met regulatory safety limits or not.
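The friction-closure idea above can be sketched numerically. With three equilibrium equations (two force balances and a moment balance about the base) and four unknown reactions, the wall-friction relation F_wall = μ_w · N_wall supplies the missing fourth equation. The geometry, load, and friction coefficient below are hypothetical values, not the paper's measurements.

```python
import math

def ladder_reactions(W, d, L, theta, mu_w):
    """Reactions on a ladder at angle theta (rad) from the ground,
    carrying a load W at distance d from the base along the ladder.
    Closure relation at the wall: F_wall = mu_w * N_wall (assumed)."""
    # Moment balance about the base:
    #   W*d*cos(theta) = N_wall*L*sin(theta) + F_wall*L*cos(theta)
    N_wall = W * d * math.cos(theta) / (
        L * (math.sin(theta) + mu_w * math.cos(theta)))
    F_wall = mu_w * N_wall       # friction closure at the vertical support
    N_base = W - F_wall          # vertical force balance
    F_base = N_wall              # horizontal force balance
    return N_wall, F_wall, N_base, F_base

# Hypothetical case: 800 N load 3 m up a 5 m ladder at 70 degrees
Nw, Fw, Nb, Fb = ladder_reactions(W=800.0, d=3.0, L=5.0,
                                  theta=math.radians(70), mu_w=0.3)
```

The three equilibrium equations can be re-checked directly from the returned values, which is a useful sanity test for any variant of the model.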
A tapered rod mounted at one end (base) and subject to a normal force at the other end (tip) is a fundamental structure of continuum mechanics that occurs widely at all size scales from radio towers to fishing rods to micro-electromechanical sensors. Although the bending of a uniform rod is well studied and gives rise to mathematical shapes described by elliptic integrals, no exact closed form solution to the nonlinear differential equations of static equilibrium is known for the deflection of a tapered rod. We report in this paper a comprehensive numerical analysis and experimental test of the exact theory of bending deformation of a tapered rod. Given the rod geometry and elastic modulus, the theory yields virtually all the geometric and physical features that an analyst, experimenter, or instrument designer might want as a function of impressed load, such as the exact curve of deformation (termed the elastica), maximum tip displacement, maximum tip deflection angle, distribution of curvature, and distribution of bending moment. Applied experimentally, the theory permits rapid estimation of the elastic modulus of a rod, which is not easily obtainable by other means. We have tested the theory by photographing the shapes of a set of flexible rods of different lengths and tapers subject to a range of impressed loads and using digital image analysis to extract the coordinates of the elastica curves. The extent of flexure in these experiments far exceeded the range of applicability of approximations that linearize the equations of equilibrium or neglect tapering of the rod. Agreement between the measured deflection curves and the exact theoretical predictions was excellent in all but a few cases. 
In these exceptional cases, the nature of the anomalies provided important information regarding the deviation of the rods from an ideal Euler-Bernoulli cantilever, which thereby permitted us to model the deformation of the rods more accurately.
The question of how many shuffles are required to randomize an initially ordered deck of cards is a problem that has fascinated mathematicians, scientists, and the general public. The two principal theoretical approaches to the problem, which differed in how each defined randomness, have led to statistically different threshold numbers of shuffles. This paper reports a comprehensive experimental analysis of the card randomization problem for the purposes of determining 1) which of the two theoretical approaches made the more accurate prediction, 2) whether different statistical tests yield different threshold numbers of randomizing shuffles, and 3) whether manual or mechanical shuffling randomizes a deck more effectively for a given number of shuffles. Permutations of 52-card decks, each subjected to sets of 19 successive riffle shuffles executed manually and by an auto-shuffling device, were recorded sequentially and analyzed with respect to 1) the theory of runs, 2) rank ordering, 3) serial correlation, 4) the theory of rising sequences, and 5) entropy and information theory. Among the outcomes, it was found that 1) different statistical tests were sensitive to different patterns indicative of residual order; 2) as a consequence, the threshold number of randomizing shuffles could vary widely among tests; 3) in general, manual shuffling randomized a deck better than mechanical shuffling for a given number of shuffles; and 4) the mean number of rising sequences as a function of the number of manual shuffles matched very closely the theoretical predictions based on the Gilbert-Shannon-Reeds (GSR) model of riffle shuffles, whereas mechanical shuffling resulted in significantly fewer rising sequences than predicted.
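The GSR model and the rising-sequence statistic mentioned above are easy to sketch: the deck is cut at a binomially distributed position, then cards drop from each packet with probability proportional to the packet's remaining size; the number of rising sequences is one plus the number of card labels whose successor appears earlier in the deck. This is a generic illustration of the model, not the paper's analysis code.

```python
import random

random.seed(3)

def gsr_shuffle(deck):
    """One riffle shuffle under the Gilbert-Shannon-Reeds model."""
    n = len(deck)
    cut = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2) cut
    left, right = list(deck[:cut]), list(deck[cut:])
    out = []
    while left or right:
        # drop from a packet with probability proportional to its size
        if random.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def rising_sequences(deck):
    """1 + number of labels whose successor lies earlier in the deck."""
    pos = {card: i for i, card in enumerate(deck)}
    return 1 + sum(pos[c + 1] < pos[c] for c in range(len(deck) - 1))

deck = list(range(52))
shuffled = gsr_shuffle(deck)
```

An ordered deck has exactly one rising sequence, and a single riffle shuffle can create at most two, which is why repeated shuffles are needed before the count approaches its random-deck expectation.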
Elements of correspondence (“coincidences”) between a student’s solutions to an assigned set of quantitative problems and the solutions manual for the course textbook may suggest that the student copied the work from an illicit source. Plagiarism of this kind, which occurs primarily in fields such as the natural sciences, engineering, and mathematics, is often difficult to establish. This paper derives an expression for the probability that alleged coincidences in a student’s paper could be attributable to pure chance. The analysis employs the Principle of Maximum Entropy (PME), which, mathematically, is a variational procedure requiring maximization of the Shannon-Jaynes entropy function augmented by the completeness relation for probabilities and known information in the form of expectation values. The virtue of the PME as a general method of inferential reasoning is that it generates the most objective (i.e. least biased) probability distribution consistent with the given information. Numerical examination of test cases for a range of plausible conditions can yield outcomes that tend to exonerate a student who otherwise might be wrongfully judged guilty of cheating by adjudicators unfamiliar with the surprising properties of random processes.
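A much simpler baseline than the paper's maximum-entropy derivation (which is not reproduced here) conveys the same intuition: if each of n graded solution elements would match the manual by pure chance with probability p, the probability of at least k coincidences is a binomial tail, and that tail can be surprisingly large. The values of n, k, and p below are hypothetical.

```python
from math import comb

def chance_of_coincidences(n, k, p):
    """P(at least k of n independent elements match by chance),
    each element matching with probability p: a binomial upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical case: 8 matches among 20 elements, when any one element
# would match a standard solution style 25% of the time by chance.
prob = chance_of_coincidences(20, 8, 0.25)
```

Even eight matches out of twenty carries a chance probability on the order of ten percent under these assumptions, illustrating why naive coincidence counts can wrongly incriminate a student.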
Stochastic processes such as diffusion can be analyzed by means of a partial differential equation of the Fokker-Planck type (FPE), which yields a transition probability density, or by a stochastic differential equation of the Langevin type (LE), which yields the time evolution of a statistical process variable. Provided the stochastic process is continuous and certain boundary conditions are met, the two approaches yield equivalent information. However, Brownian motion of radioactively decaying particles is not a continuous process because the Brownian trajectories abruptly terminate when the particle decays. Recent analysis of the Brownian motion of decaying particles by both approaches has led to different mean-square displacements. In this paper, we demonstrate the complete equivalence of the two approaches by 1) showing quantitatively and operationally how the probability densities and statistical moments predicted by the FPE and LE relate to one another, 2) verifying that both approaches lead to identical statistical moments at all orders, and 3) confirming that the analytical solution to the FPE accurately describes the Brownian trajectories obtained by Monte Carlo simulations based on the LE. The analysis in this paper addresses both the spatial distribution of the particles (i.e. the question of displacement as a function of diffusion time) and the temporal distribution (i.e. the question of first-passage time to fixed absorbing boundaries).
Residence time in a flow measurement of radioactivity is the time spent by a pre-determined quantity of radioactive sample in the flow cell. In a recent report of the measurement of indoor radon by passive diffusion in an open volume (i.e. no flow cell or control volume), the concept of residence time was generalized to apply to measurement conditions with random, rather than directed, flow. The generalization, leading to a quantity Δtr, involved use of a) a phenomenological alpha-particle range function to calculate the effective detection volume, and b) a phenomenological description of diffusion by Fick’s law to determine the effective flow velocity. This paper examines the residence time in passive diffusion from the micro-statistical perspective of single-particle continuous Brownian motion. The statistical quantity “mean residence time” Tr is derived from the Green’s function for unbiased single-particle diffusion and is shown to be consistent with Δtr. The finite statistical lifetime of the randomly moving radioactive atom plays an essential part. For stable particles, Tr is of infinite duration, whereas for an unstable particle (such as <sup>222</sup>Rn), with diffusivity D and decay rate λ, Tr is approximately the effective size of the detection region divided by the characteristic diffusion velocity √(λD). Comparison of the mean residence time with the time of first passage (or exit time) in the theory of stochastic processes shows the conditions under which the two measures of time are equivalent and helps elucidate the connection between the phenomenological and statistical descriptions of radon diffusion.
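An order-of-magnitude sketch of the relation Tr ≈ R_eff / √(λD) stated above, using a typical radon diffusivity in air and a hypothetical effective detection size of the order of an alpha-particle range; neither value is taken from the paper.

```python
import math

# 222Rn decay rate from its half-life
half_life_s = 3.82 * 24 * 3600          # 222Rn half-life, ~3.82 days, in s
lam = math.log(2) / half_life_s         # decay rate, ~2.1e-6 1/s

D = 1.1e-5                              # radon diffusivity in air, m^2/s (typical)
R_eff = 0.04                            # effective detection size, m (assumed)

v_diff = math.sqrt(lam * D)             # characteristic diffusion velocity, m/s
T_r = R_eff / v_diff                    # mean residence time, s
```

With these inputs the characteristic diffusion velocity is of order microns per second, giving a mean residence time of a few hours, which is the kind of scale that makes passive-diffusion radon counting feasible.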
A simple method employing a pair of pancake-style Geiger-Mueller (GM) counters for quantitative measurement of radon activity concentration (activity per unit volume) is described and demonstrated. The use of two GM counters, together with the basic theory derived in this paper, permits the detection of alpha particles from decay of <sup>222</sup>Rn and its progeny (<sup>218</sup>Po, <sup>214</sup>Po) and the conversion of the alpha count rate into a radon concentration. A unique feature of this method, in comparison with standard methodologies to measure radon concentration, is the absence of a fixed control volume. Advantages afforded by the reported GM method include: 1) it provides a direct in-situ value of the radon level, thereby eliminating the need to send samples to an external testing laboratory; 2) it can be applied to monitoring radon levels exhibiting wide short-term variability; 3) it can yield short-term measurements of comparable accuracy and equivalent or higher precision than a commercial radon monitor sampling by passive diffusion; 4) it yields long-term measurements statistically equivalent to those of commercial radon monitors; 5) it uses the most commonly employed, overall least expensive, and most easily operated type of nuclear instrumentation. As such, the method is particularly suitable for use by researchers, public health personnel, and home dwellers who prefer to monitor indoor radon levels themselves. The results of a consecutive 30-day sequence of 24-hour mean radon measurements by the proposed GM method and a commercial state-of-the-art radon monitor certified for radon testing are compared.
Run count statistics serve a central role in tests of non-randomness of stochastic processes of interest to a wide range of disciplines within the physical sciences, social sciences, business and finance, and other endeavors involving intrinsic uncertainty. To carry out such tests, it is often necessary to calculate two kinds of run count probabilities: 1) the probability that a certain number of trials results in a specified multiple occurrence of an event, or 2) the probability that a specified number of occurrences of an event take place within a fixed number of trials. The use of appropriate generating functions provides a systematic procedure for obtaining the distribution functions of these probabilities. This paper examines relationships among the generating functions applicable to recurrent runs and discusses methods, employing symbolic mathematical software, for implementing numerical extraction of probabilities. In addition, the asymptotic form of the cumulative distribution function is derived, which allows accurate runs statistics to be obtained for sequences of trials so large that computation times for extraction of this information from the generating functions could be impractically long.
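One of the two run-count probabilities described above, the probability that a success run of a given length occurs within a fixed number of trials, can be computed by a small dynamic program over the current run length; this is numerically equivalent to extracting coefficients from the run-length generating function, though it is a generic illustration rather than the paper's symbolic method.

```python
def prob_run(n, r, p):
    """P(at least one run of >= r successes in n Bernoulli(p) trials),
    by dynamic programming over the trailing run length."""
    # states[j] = probability the current trailing success run has length j
    states = [1.0] + [0.0] * (r - 1)
    absorbed = 0.0                         # probability a run of r occurred
    for _ in range(n):
        new = [0.0] * r
        for j, prob in enumerate(states):
            new[0] += prob * (1 - p)       # a failure resets the run
            if j + 1 == r:
                absorbed += prob * p       # run of length r achieved
            else:
                new[j + 1] += prob * p     # the run grows by one
        states = new
    return absorbed

# e.g. the chance of a run of 5 or more heads in 20 fair coin tosses
p20 = prob_run(20, 5, 0.5)
```

The recursion runs in O(n·r) time, so it remains practical for trial counts far beyond what direct enumeration could handle, though for extremely long sequences the asymptotic form discussed in the abstract becomes preferable.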
Analysis of hourly underground temperature measurements at a medium-size (by population) US city as a function of depth and extending over 5+ years revealed a positive trend exceeding the rate of regional and global warming by an order of magnitude. Measurements at depths greater than ~2 m are unaffected by daily fluctuations and sense only seasonal variability. A comparable trend also emerged from the surface temperature record of the largest US city (New York). Power spectral analysis of deep and shallow subsurface temperature records showed respectively two kinds of power-law behavior: 1) a quasi-continuum of power amplitudes indicative of Brownian noise, superposed (in the shallow record) by 2) a discrete spectrum of diurnal harmonics attributable to the unequal heat flux between daylight and darkness. Spectral amplitudes of the deepest temperature time series (2.4 m) conformed to a log-hyperbolic distribution. Upon removal of seasonal variability from the temperature record, the resulting spectral amplitudes followed a log-exponential distribution. Dynamical analysis showed that relative amplitudes and phases of temperature records at different depths were in excellent accord with a 1-dimensional heat diffusion model.
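The 1-dimensional heat diffusion model invoked above has a standard closed form for a periodic surface wave: at depth z the amplitude decays as exp(-z/δ) and the phase lags by z/δ radians, with damping depth δ = √(2D/ω). The thermal diffusivity below is a typical soil value, assumed for illustration, not the paper's fitted parameter.

```python
import math

D = 5.0e-7                              # soil thermal diffusivity, m^2/s (assumed)
year = 365.25 * 24 * 3600
w = 2 * math.pi / year                  # angular frequency of the annual wave

delta = math.sqrt(2 * D / w)            # damping depth, m (~2.2 m here)

def relative_amplitude(z):
    """Seasonal amplitude at depth z relative to the surface."""
    return math.exp(-z / delta)

def phase_lag_days(z):
    """Delay of the seasonal temperature wave at depth z, in days."""
    return (z / delta) / w / 86400.0
```

At the 2.4 m sensor depth mentioned in the abstract, this model predicts the seasonal swing is attenuated to roughly a third of its surface value and lags the surface by about two months, which is the kind of amplitude-phase relation the dynamical analysis tests.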
In a recent publication the author derived and experimentally tested several theoretical models, distinguished by different boundary conditions at the contacts with horizontal and vertical supports, that predicted the forces of reaction on a fixed (i.e. inextensible) ladder. This problem is statically indeterminate since there are 4 forces of reaction and only 3 equations of static equilibrium. The model that predicted the empirical reactions correctly used a law of static friction to complement the equations of static equilibrium. The present paper examines in greater theoretical and experimental detail the role of friction in accounting for the forces of reaction on a fixed ladder. The reported measurements confirm that forces parallel and normal to the support at the top of the ladder are linearly proportional with a constant coefficient of friction irrespective of the magnitude or location of the load, as assumed in the theoretical model. However, measurements of forces parallel and normal to the support at the base of the ladder are linearly proportional with coefficients that depend sensitively on the location (although not the magnitude) of the load. This paper accounts quantitatively for the different effects of friction at the top and base of the ladder under conditions of usual use whereby friction at the vertical support alone is insufficient to keep the ladder from sliding. A theoretical model is also proposed for the unusual circumstance in which friction at the vertical support can keep the ladder from sliding.
It is a long-held tenet of nuclear physics, from the early work of Rutherford and Soddy up to present times, that the disintegration of each species of radioactive nuclide occurs randomly at a constant rate unaffected by interactions with the external environment. During the past 15 years or so, reports have been published of some 10 or more unstable nuclides with non-exponential, periodic decay rates claimed to be of geophysical, astrophysical, or cosmological origin. Deviations from standard exponential decay are weak, and the claims are controversial. This paper examines the effects of a periodic decay rate on the statistical distributions of 1) nuclear activity measurements and 2) nuclear lifetime measurements. It is demonstrated that the modifications to these distributions are approximately 100 times more sensitive to non-standard radioactive decay than measurements of the decay curve, power spectrum, or autocorrelation function for corresponding system parameters.
文摘A composite random variable is a product (or sum of products) of statistically distributed quantities. Such a variable can represent the solution to a multi-factor quantitative problem submitted to a large, diverse, independent, anonymous group of non-expert respondents (the “crowd”). The objective of this research is to examine the statistical distribution of solutions from a large crowd to a quantitative problem involving image analysis and object counting. Theoretical analysis by the author, covering a range of conditions and types of factor variables, predicts that composite random variables are distributed log-normally to an excellent approximation. If the factors in a problem are themselves distributed log-normally, then their product is rigorously log-normal. A crowdsourcing experiment devised by the author and implemented with the assistance of a BBC (British Broadcasting Corporation) television show, yielded a sample of approximately 2000 responses consistent with a log-normal distribution. The sample mean was within ~12% of the true count. However, a Monte Carlo simulation (MCS) of the experiment, employing either normal or log-normal random variables as factors to model the processes by which a crowd of 1 million might arrive at their estimates, resulted in a visually perfect log-normal distribution with a mean response within ~5% of the true count. The results of this research suggest that a well-modeled MCS, by simulating a sample of responses from a large, rational, and incentivized crowd, can provide a more accurate solution to a quantitative problem than might be attainable by direct sampling of a smaller crowd or an uninformed crowd, irrespective of size, that guesses randomly.
文摘A crowdsourcing experiment in which viewers (the “crowd”) of a British Broadcasting Corporation (BBC) television show submitted estimates of the number of coins in a tumbler was shown in an antecedent paper (Part 1) to follow a log-normal distribution ∧(m,s2). The coin-estimation experiment is an archetype of a broad class of image analysis and object counting problems suitable for solution by crowdsourcing. The objective of the current paper (Part 2) is to determine the location and scale parameters (m,s) of ∧(m,s2) by both Bayesian and maximum likelihood (ML) methods and to compare the results. One outcome of the analysis is the resolution, by means of Jeffreys’ rule, of questions regarding the appropriate Bayesian prior. It is shown that Bayesian and ML analyses lead to the same expression for the location parameter, but different expressions for the scale parameter, which become identical in the limit of an infinite sample size. A second outcome of the analysis concerns use of the sample mean as the measure of information of the crowd in applications where the distribution of responses is not sought or known. In the coin-estimation experiment, the sample mean was found to differ widely from the mean number of coins calculated from ∧(m,s2). This discordance raises critical questions concerning whether, and under what conditions, the sample mean provides a reliable measure of the information of the crowd. This paper resolves that problem by use of the principle of maximum entropy (PME). The PME yields a set of equations for finding the most probable distribution consistent with given prior information and only that information. If there is no solution to the PME equations for a specified sample mean and sample variance, then the sample mean is an unreliable statistic, since no measure can be assigned to its uncertainty. 
Parts 1 and 2 together demonstrate that the information content of crowdsourcing resides in the distribution of responses (very often log-normal in form), which can be obtained empirically or by appropriate modeling.
文摘Recent developments in the measurement of radioactive gases in passive diffusion motivate the analysis of Brownian motion of decaying particles, a subject that has received little previous attention. This paper reports the derivation and solution of equations comparable to the Fokker-Planck and Langevin equations for one-dimensional diffusion and decay of unstable particles. In marked contrast to the case of stable particles, the two equations are not equivalent, but provide different information regarding the same stochastic process. The differences arise because Brownian motion with particle decay is not a continuous process. The discontinuity is readily apparent in the computer-simulated trajectories of the Langevin equation that incorporate both a Wiener process for displacement fluctuations and a Bernoulli process for random decay. This paper also reports the derivation of the mean time of first passage of the decaying particle to absorbing boundaries. Here, too, particle decay can lead to an outcome markedly different from that for stable particles. In particular, the first-passage time of the decaying particle is always finite, whereas the time for a stable particle to reach a single absorbing boundary is theoretically infinite due to the heavy tail of the inverse Gaussian density. The methodology developed in this paper should prove useful in the investigation of radioactive gases, aerosols of radioactive atoms, dust particles to which adhere radioactive ions, as well as diffusing gases and liquids of unstable molecules.
文摘The development of a theoretical model to predict the four equilibrium forces of reaction on a simple ladder of non-adjustable length leaning against a wall has long remained an unresolved matter. The difficulty is that the problem is statically indeterminate and therefore requires complementary information to obtain a unique solution. This paper reports 1) a comprehensive theoretical analysis of the three fundamental models based on treating the ladder as a single Euler-Bernoulli beam, and 2) a detailed experimental investigation of the forces of reaction as a function of applied load and location of load. In contrast to previous untested proposals that the solution to the ladder problem lay in the axial constraint on compression or the transverse constraint on flexure, the experimental outcome of the present work showed unambiguously that 1) the ladder could be modeled the best by a pinned support at the base (on the ground) and a roller support at the top (at the wall), and 2) the only complementary relation needed to resolve the static indeterminacy is the force of friction at the wall. Measurements were also made on the impact loading of a ladder by rapid ascent and descent of a climber. The results obtained were consistent with a simple dynamical model of the ladder as a linear elastic medium subject to a pulse perturbation. The solution to the ladder problem herein presented provides a basis for theoretical extension to other types of ladders. Of particular importance, given that accidents involving ladders in the workplace comprise a significant fraction of all industrial accidents, the theoretical relations reported here can help determine whether a collapsed structure, against which a ladder was applied, met regulatory safety limits or not.
Abstract: A tapered rod mounted at one end (base) and subject to a normal force at the other end (tip) is a fundamental structure of continuum mechanics that occurs widely at all size scales from radio towers to fishing rods to micro-electromechanical sensors. Although the bending of a uniform rod is well studied and gives rise to mathematical shapes described by elliptic integrals, no exact closed form solution to the nonlinear differential equations of static equilibrium is known for the deflection of a tapered rod. We report in this paper a comprehensive numerical analysis and experimental test of the exact theory of bending deformation of a tapered rod. Given the rod geometry and elastic modulus, the theory yields virtually all the geometric and physical features that an analyst, experimenter, or instrument designer might want as a function of impressed load, such as the exact curve of deformation (termed the elastica), maximum tip displacement, maximum tip deflection angle, distribution of curvature, and distribution of bending moment. Applied experimentally, the theory permits rapid estimation of the elastic modulus of a rod, which is not easily obtainable by other means. We have tested the theory by photographing the shapes of a set of flexible rods of different lengths and tapers subject to a range of impressed loads and using digital image analysis to extract the coordinates of the elastica curves. The extent of flexure in these experiments far exceeded the range of applicability of approximations that linearize the equations of equilibrium or neglect tapering of the rod. Agreement between the measured deflection curves and the exact theoretical predictions was excellent in all but a few cases. In these exceptional cases, the nature of the anomalies provided important information regarding the deviation of the rods from an ideal Euler-Bernoulli cantilever, which thereby permitted us to model the deformation of the rods more accurately.
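The exact elastica theory is beyond a short sketch, but the linearized Euler-Bernoulli result to which it reduces at small loads, tip deflection δ = F·L³/(3·E·I) for a uniform cantilever, illustrates how a measured deflection yields the elastic modulus. This is precisely the approximation whose range of validity the experiments in the abstract exceeded; all numbers below are hypothetical:

```python
import math

def modulus_from_tip_deflection(force, length, radius, deflection):
    """Estimate Young's modulus E of a uniform rod of circular cross
    section from the small-deflection cantilever formula
    delta = F * L**3 / (3 * E * I). Valid only for small deflections;
    the exact (nonlinear) elastica theory is needed for large flexure."""
    I = math.pi * radius**4 / 4.0        # second moment of area
    return force * length**3 / (3.0 * deflection * I)

# Hypothetical example: 0.5 N load on a 0.30 m rod of radius 2 mm,
# producing a 10 mm tip deflection.
E = modulus_from_tip_deflection(0.5, 0.30, 0.002, 0.010)
```

For these assumed numbers E comes out in the tens of GPa, the right order of magnitude for a stiff polymer or wood.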
Abstract: The question of how many shuffles are required to randomize an initially ordered deck of cards is a problem that has fascinated mathematicians, scientists, and the general public. The two principal theoretical approaches to the problem, which differed in how each defined randomness, have led to statistically different threshold numbers of shuffles. This paper reports a comprehensive experimental analysis of the card randomization problem for the purposes of determining 1) which of the two theoretical approaches made the more accurate prediction, 2) whether different statistical tests yield different threshold numbers of randomizing shuffles, and 3) whether manual or mechanical shuffling randomizes a deck more effectively for a given number of shuffles. Permutations of 52-card decks, each subjected to sets of 19 successive riffle shuffles executed manually and by an auto-shuffling device, were recorded sequentially and analyzed with respect to 1) the theory of runs, 2) rank ordering, 3) serial correlation, 4) theory of rising sequences, and 5) entropy and information theory. Among the outcomes, it was found that: 1) different statistical tests were sensitive to different patterns indicative of residual order; 2) as a consequence, the threshold number of randomizing shuffles could vary widely among tests; 3) in general, manual shuffling randomized a deck better than mechanical shuffling for a given number of shuffles; and 4) the mean number of rising sequences as a function of number of manual shuffles matched very closely the theoretical predictions based on the Gilbert-Shannon-Reeds (GSR) model of riffle shuffles, whereas mechanical shuffling resulted in significantly fewer rising sequences than predicted.
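The GSR model and the rising-sequence statistic mentioned in this abstract are both simple to state in code. The following is a minimal sketch (seed and shuffle count are arbitrary choices for illustration):

```python
import random

def gsr_shuffle(deck, rng):
    """One riffle shuffle under the Gilbert-Shannon-Reeds model:
    cut the deck at a Binomial(n, 1/2) position, then drop cards from
    either packet with probability proportional to its current size."""
    n = len(deck)
    cut = sum(rng.random() < 0.5 for _ in range(n))   # Binomial(n, 1/2)
    left, right = deck[:cut], deck[cut:]
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        a, b = len(left) - i, len(right) - j
        if rng.random() < a / (a + b):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

def rising_sequences(deck):
    """Count maximal rising sequences: 1 plus the number of card values
    whose successor appears earlier in the deck."""
    pos = {card: k for k, card in enumerate(deck)}
    return 1 + sum(pos[c + 1] < pos[c] for c in deck if c + 1 in pos)

rng = random.Random(1)
deck = list(range(52))
counts = []
for _ in range(7):
    deck = gsr_shuffle(deck, rng)
    counts.append(rising_sequences(deck))
```

A sorted deck has exactly one rising sequence, a single GSR shuffle produces at most two, and a well-randomized 52-card deck averages about 26.5, which is why the statistic tracks residual order so directly.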
Abstract: Elements of correspondence (“coincidences”) between a student’s solutions to an assigned set of quantitative problems and the solutions manual for the course textbook may suggest that the student copied the work from an illicit source. Plagiarism of this kind, which occurs primarily in fields such as the natural sciences, engineering, and mathematics, is often difficult to establish. This paper derives an expression for the probability that alleged coincidences in a student’s paper could be attributable to pure chance. The analysis employs the Principle of Maximum Entropy (PME), which, mathematically, is a variational procedure requiring maximization of the Shannon-Jaynes entropy function augmented by the completeness relation for probabilities and known information in the form of expectation values. The virtue of the PME as a general method of inferential reasoning is that it generates the most objective (i.e. least biased) probability distribution consistent with the given information. Numerical examination of test cases for a range of plausible conditions can yield outcomes that tend to exonerate a student who otherwise might be wrongfully judged guilty of cheating by adjudicators unfamiliar with the surprising properties of random processes.
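The variational procedure described here, maximizing entropy subject to normalization and an expectation-value constraint, has a standard closed form: the probabilities are exponential in the constrained quantity, with a Lagrange multiplier fixed numerically. A minimal sketch under assumed numbers (20 problems, 2 expected chance coincidences; not values from the paper):

```python
import math

def max_entropy_dist(n_max, mean_target, tol=1e-10):
    """Most probable (least biased) distribution over k = 0..n_max
    consistent with a fixed expected value: maximizing the Shannon
    entropy subject to normalization and the mean constraint yields
    p_k proportional to exp(-mu * k); mu is found by bisection.
    The bracket below suits mean targets well inside (0, n_max)."""
    def mean_of(mu):
        w = [math.exp(-mu * k) for k in range(n_max + 1)]
        z = sum(w)
        return sum(k * wk for k, wk in enumerate(w)) / z
    lo, hi = -1.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_of(mid) > mean_target:
            lo = mid                      # larger mu lowers the mean
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    w = [math.exp(-mu * k) for k in range(n_max + 1)]
    z = sum(w)
    return [wk / z for wk in w]

# E.g. 20 comparable problems with 2 coincidences expected by chance:
p = max_entropy_dist(20, 2.0)
tail = sum(p[8:])   # P(8 or more coincidences from pure chance)
```

A tail probability of a few percent for 8 observed coincidences illustrates the abstract's point: seemingly damning levels of agreement can be less improbable under chance than intuition suggests.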
Abstract: Stochastic processes such as diffusion can be analyzed by means of a partial differential equation of the Fokker-Planck type (FPE), which yields a transition probability density, or by a stochastic differential equation of the Langevin type (LE), which yields the time evolution of a statistical process variable. Provided the stochastic process is continuous and certain boundary conditions are met, the two approaches yield equivalent information. However, Brownian motion of radioactively decaying particles is not a continuous process because the Brownian trajectories abruptly terminate when the particle decays. Recent analysis of the Brownian motion of decaying particles by both approaches has led to different mean-square displacements. In this paper, we demonstrate the complete equivalence of the two approaches by 1) showing quantitatively and operationally how the probability densities and statistical moments predicted by the FPE and LE relate to one another, 2) verifying that both approaches lead to identical statistical moments at all orders, and 3) confirming that the analytical solution to the FPE accurately describes the Brownian trajectories obtained by Monte Carlo simulations based on the LE. The analysis in this paper addresses both the spatial distribution of the particles (i.e. the question of displacement as a function of diffusion time) and the temporal distribution (i.e. the question of first-passage time to fixed absorbing boundaries).
Abstract: Residence time in a flow measurement of radioactivity is the time spent by a pre-determined quantity of radioactive sample in the flow cell. In a recent report of the measurement of indoor radon by passive diffusion in an open volume (i.e. no flow cell or control volume), the concept of residence time was generalized to apply to measurement conditions with random, rather than directed, flow. The generalization, leading to a quantity Δtr, involved use of a) a phenomenological alpha-particle range function to calculate the effective detection volume, and b) a phenomenological description of diffusion by Fick’s law to determine the effective flow velocity. This paper examines the residence time in passive diffusion from the micro-statistical perspective of single-particle continuous Brownian motion. The statistical quantity “mean residence time” Tr is derived from the Green’s function for unbiased single-particle diffusion and is shown to be consistent with Δtr. The finite statistical lifetime of the randomly moving radioactive atom plays an essential part. For stable particles, Tr is of infinite duration, whereas for an unstable particle (such as 222Rn), with diffusivity D and decay rate λ, Tr is approximately the effective size of the detection region divided by the characteristic diffusion velocity √(λD). Comparison of the mean residence time with the time of first passage (or exit time) in the theory of stochastic processes shows the conditions under which the two measures of time are equivalent and helps elucidate the connection between the phenomenological and statistical descriptions of radon diffusion.
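The scaling stated in this abstract, Tr ≈ (effective region size)/(characteristic diffusion velocity), is easy to evaluate numerically. A sketch with assumed values (the diffusivity and region size are typical order-of-magnitude figures for radon in air, not numbers taken from the paper):

```python
import math

# Assumed illustrative parameters for radon-222 diffusing in air:
D = 1.2e-5                      # diffusivity, m^2/s (typical for Rn in air)
half_life = 3.82 * 86400        # 222Rn half-life, s
lam = math.log(2) / half_life   # decay rate lambda, 1/s

# Characteristic diffusion velocity sqrt(lambda * D), units m/s
v_diff = math.sqrt(lam * D)

L_eff = 0.05                    # assumed effective detection size, m
T_r = L_eff / v_diff            # mean residence time, s
```

For these inputs v_diff is about 5 micrometres per second and T_r is on the order of hours, consistent with the intuition that a slowly diffusing, slowly decaying atom lingers long in a centimetre-scale detection region.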
Abstract: A simple method employing a pair of pancake-style Geiger-Mueller (GM) counters for quantitative measurement of radon activity concentration (activity per unit volume) is described and demonstrated. The use of two GM counters, together with the basic theory derived in this paper, permits the detection of alpha particles from decay of 222Rn and its progeny (218Po, 214Po) and the conversion of the alpha count rate into a radon concentration. A unique feature of this method, in comparison with standard methodologies to measure radon concentration, is the absence of a fixed control volume. Advantages afforded by the reported GM method include: 1) it provides a direct in-situ value of radon level, thereby eliminating the need to send samples to an external testing laboratory; 2) it can be applied to monitoring radon levels exhibiting wide short-term variability; 3) it can yield short-term measurements of comparable accuracy and equivalent or higher precision than a commercial radon monitor sampling by passive diffusion; 4) it yields long-term measurements statistically equivalent to commercial radon monitors; 5) it uses the most commonly employed, overall least expensive, and most easily operated type of nuclear instrumentation. As such, the method is particularly suitable for use by researchers, public health personnel, and home dwellers who prefer to monitor indoor radon levels themselves. The results of a consecutive 30-day sequence of 24 hour mean radon measurements by the proposed GM method and a commercial state-of-the-art radon monitor certified for radon testing are compared.
Abstract: Run count statistics serve a central role in tests of non-randomness of stochastic processes of interest to a wide range of disciplines within the physical sciences, social sciences, business and finance, and other endeavors involving intrinsic uncertainty. To carry out such tests, it is often necessary to calculate two kinds of run count probabilities: 1) the probability that a certain number of trials results in a specified multiple occurrence of an event, or 2) the probability that a specified number of occurrences of an event take place within a fixed number of trials. The use of appropriate generating functions provides a systematic procedure for obtaining the distribution functions of these probabilities. This paper examines relationships among the generating functions applicable to recurrent runs and discusses methods, employing symbolic mathematical software, for implementing numerical extraction of probabilities. In addition, the asymptotic form of the cumulative distribution function is derived, which allows accurate runs statistics to be obtained for sequences of trials so large that computation times for extraction of this information from the generating functions could be impractically long.
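For modest sequence lengths, the run probabilities this abstract extracts from generating functions can also be computed by direct dynamic programming over the current streak length, a useful cross-check. A minimal sketch (the dynamic-programming route is an alternative method chosen here for brevity, not the paper's generating-function approach):

```python
def prob_run(n, r, p=0.5):
    """Probability of at least one success run of length r in n
    Bernoulli(p) trials, computed by dynamic programming over the
    length of the current success streak."""
    # state[k] = probability the current streak has length k (k < r)
    # and no run of length r has yet occurred
    state = [0.0] * r
    state[0] = 1.0
    hit = 0.0                          # prob. a length-r run has occurred
    for _ in range(n):
        new = [0.0] * r
        for k, prob in enumerate(state):
            if prob == 0.0:
                continue
            if k + 1 == r:
                hit += prob * p        # streak reaches length r
            else:
                new[k + 1] += prob * p
            new[0] += prob * (1 - p)   # a failure resets the streak
        state = new
    return hit

result = prob_run(19, 5)   # e.g. a 5-run of heads in 19 fair tosses
```

Small cases are easy to verify by enumeration: with fair coins, a run of 2 in 2 tosses has probability 1/4, and a run of 2 in 3 tosses has probability 3/8.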
Abstract: Analysis of hourly underground temperature measurements at a medium-size (by population) US city as a function of depth and extending over 5+ years revealed a positive trend exceeding the rate of regional and global warming by an order of magnitude. Measurements at depths greater than ~2 m are unaffected by daily fluctuations and sense only seasonal variability. A comparable trend also emerged from the surface temperature record of the largest US city (New York). Power spectral analysis of deep and shallow subsurface temperature records showed respectively two kinds of power-law behavior: 1) a quasi-continuum of power amplitudes indicative of Brownian noise, superposed (in the shallow record) by 2) a discrete spectrum of diurnal harmonics attributable to the unequal heat flux between daylight and darkness. Spectral amplitudes of the deepest temperature time series (2.4 m) conformed to a log-hyperbolic distribution. Upon removal of seasonal variability from the temperature record, the resulting spectral amplitudes followed a log-exponential distribution. Dynamical analysis showed that relative amplitudes and phases of temperature records at different depths were in excellent accord with a 1-dimensional heat diffusion model.
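The 1-dimensional heat diffusion model invoked here predicts that a surface oscillation of angular frequency ω penetrates with amplitude exp(-z/d) and phase lag z/d, where d = sqrt(2α/ω) is the thermal skin depth. A short numerical sketch (the soil diffusivity is an assumed textbook value, not one from the paper) shows why depths beyond ~2 m sense only the seasonal cycle:

```python
import math

alpha = 1.0e-6                           # soil thermal diffusivity, m^2/s (assumed)
w_year = 2 * math.pi / (365.25 * 86400)  # annual angular frequency, 1/s
w_day = 2 * math.pi / 86400              # diurnal angular frequency, 1/s

d_year = math.sqrt(2 * alpha / w_year)   # annual skin depth, m
d_day = math.sqrt(2 * alpha / w_day)     # diurnal skin depth, m

# Fraction of the daily surface oscillation surviving at 2 m depth:
atten_2m_daily = math.exp(-2.0 / d_day)
```

For these values the annual skin depth is a few metres while the diurnal skin depth is under 20 cm, so the daily signal at 2 m is attenuated by many orders of magnitude, in accord with the observation quoted in the abstract.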
Abstract: In a recent publication the author derived and experimentally tested several theoretical models, distinguished by different boundary conditions at the contacts with horizontal and vertical supports, that predicted the forces of reaction on a fixed (i.e. inextensible) ladder. This problem is statically indeterminate since there are 4 forces of reaction and only 3 equations of static equilibrium. The model that predicted the empirical reactions correctly used a law of static friction to complement the equations of static equilibrium. The present paper examines in greater theoretical and experimental detail the role of friction in accounting for the forces of reaction on a fixed ladder. The reported measurements confirm that forces parallel and normal to the support at the top of the ladder are linearly proportional with a constant coefficient of friction irrespective of the magnitude or location of the load, as assumed in the theoretical model. However, measurements of forces parallel and normal to the support at the base of the ladder are linearly proportional with coefficients that depend sensitively on the location (although not the magnitude) of the load. This paper accounts quantitatively for the different effects of friction at the top and base of the ladder under conditions of usual use whereby friction at the vertical support alone is insufficient to keep the ladder from sliding. A theoretical model is also proposed for the unusual circumstance in which friction at the vertical support can keep the ladder from sliding.
Abstract: It is a long-held tenet of nuclear physics, from the early work of Rutherford and Soddy up to present times, that the disintegration of each species of radioactive nuclide occurs randomly at a constant rate unaffected by interactions with the external environment. During the past 15 years or so, reports have been published of some 10 or more unstable nuclides with non-exponential, periodic decay rates claimed to be of geophysical, astrophysical, or cosmological origin. Deviations from standard exponential decay are weak, and the claims are controversial. This paper examines the effects of a periodic decay rate on the statistical distributions of 1) nuclear activity measurements and 2) nuclear lifetime measurements. It is demonstrated that the modifications to these distributions are approximately 100 times more sensitive to non-standard radioactive decay than measurements of the decay curve, power spectrum, or autocorrelation function for corresponding system parameters.
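Lifetime samples under a periodically modulated decay rate, the scenario examined in this abstract, can be generated by thinning an inhomogeneous Poisson process. This is a generic simulation sketch under an assumed sinusoidal modulation λ(t) = λ0(1 + a·cos ωt), not the paper's analysis:

```python
import math
import random

def sample_decay_times(n, lam0, a, omega, seed=7):
    """Sample n decay times for the modulated rate
    lambda(t) = lam0 * (1 + a * cos(omega * t)), with 0 <= a < 1,
    by thinning: propose events at the majorizing constant rate
    lam0*(1+a) and accept each with probability lambda(t)/(lam0*(1+a)).
    With a = 0 this reduces to ordinary exponential decay."""
    rng = random.Random(seed)
    lam_max = lam0 * (1.0 + a)
    times = []
    for _ in range(n):
        t = 0.0
        while True:
            t += rng.expovariate(lam_max)          # candidate event time
            lam_t = lam0 * (1.0 + a * math.cos(omega * t))
            if rng.random() < lam_t / lam_max:     # thinning acceptance
                break
        times.append(t)
    return times

# Consistency check: with no modulation the mean lifetime is 1/lam0.
t0 = sample_decay_times(20000, lam0=1.0, a=0.0, omega=2 * math.pi)
mean_life = sum(t0) / len(t0)
```

Comparing histograms of such samples for a = 0 against small nonzero a is the kind of distributional test whose sensitivity the paper quantifies.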