We initially look at a non-singular universe representation of entropy, based in part on what was brought up by Muller and Lousto. This is a gateway to bringing up information and computational steps (as defined by Seth Lloyd) as to what would be available initially due to a modified ZPE formalism. The ZPE formalism is modified according to Matt Visser's alteration of k(maximum) ~ 1/(Planck length), with a specific initial density giving rise to an initial information content which may permit fixing the initial Planck's constant, h, which is pivotal to the setting of physical law. The settings of these parameters depend upon NLED (nonlinear electrodynamics).
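For reference, Seth Lloyd's bound on computational steps, which this abstract presumably invokes, caps the number of elementary operations a system of energy E can perform in time t:

```latex
\[
  N_{\mathrm{ops}} \;\le\; \frac{2\,E\,t}{\pi\hbar},
  \qquad \hbar = \frac{h}{2\pi},
\]
```

so fixing the initial value of h directly fixes the initial computational, and hence informational, capacity available.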
In quantum mechanics, there are two very famous formulas. One is the energy formula of the Bose particle, called Planck's law. The other is the wavelength formula, called the de Broglie wavelength. Starting from Einstein's mass-energy equation, we have studied Planck's law and the de Broglie wavelength, and generalized the de Broglie wavelength formula from low speed up to light speed. Then, on this basis, the smallest particle is defined as the mass quantum. The new wavelength formula is obtained from the mass quantum and converted into a frequency formula, yielding the generalized Planck's law.
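The two formulas referred to, in their standard textbook form (the paper's generalization builds on these):

```latex
\[
  E = h\nu \quad\text{(Planck's energy quantum)},
  \qquad
  \lambda = \frac{h}{p} = \frac{h}{\gamma m_0 v},
  \quad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
  \quad\text{(de Broglie wavelength with relativistic momentum)}.
\]
```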
Annual variations of 1000 - 3000 ppm (peak-to-valley) have been observed in the decay rates of 8 radionuclides over a 20-year span by six organizations on three continents, covering both beta decay (weak interaction) and alpha decay (strong interaction). In searching for a common cause, we hypothesized that small variations in Planck's constant might account for the observed synchronized variations in strong and weak decays. If so, then h would be at a maximum around January-February of each year and at a minimum around July-August of each year, based on the 20 years of radioactive decay data. To test this hypothesis, a purely electromagnetic experiment was set up to search for the same annual variations. From Jun 14, 2011 to Jan 29, 2014 (941 days), annual variations in tunneling voltage through 5 parallel Esaki tunnel diodes were recorded. The recordings showed annual variations of 826 ppm peak-to-valley, peaking around Jan 1. These variations lend support to the hypothesis that there is a gradient in h of about 21 ppm across the Earth's orbit.
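A minimal sketch of the kind of analysis described: fitting an annual sinusoid to a daily time series to extract the peak-to-valley amplitude in ppm and the phase of the peak. The data below are synthetic; the noise level, day numbering, and fitting choices are our assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the 941-day tunneling record (Jun 14, 2011 to
# Jan 29, 2014): an annual sinusoid peaking near Jan 1 plus noise.
rng = np.random.default_rng(0)
t = np.arange(941.0)                       # days since Jun 14, 2011
peak_day = 201.0                           # Jan 1, 2012 is ~day 201 (assumed)
ptv_ppm = 826.0                            # peak-to-valley from the abstract
v = 1.0 + (ptv_ppm / 2e6) * np.cos(2 * np.pi * (t - peak_day) / 365.25)
v += rng.normal(scale=5e-5, size=t.size)   # assumed measurement noise

def annual(t, v0, amp, phase_day):
    # v0 * (1 + amp * cos(2*pi*(t - phase_day)/365.25))
    return v0 * (1.0 + amp * np.cos(2 * np.pi * (t - phase_day) / 365.25))

(v0, amp, phase_day), _ = curve_fit(annual, t, v, p0=[1.0, 1e-4, 180.0])
print(f"peak-to-valley: {2e6 * abs(amp):.0f} ppm")          # ~826 ppm
print(f"peak near day {phase_day % 365.25:.0f} of record")  # ~201
```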
Padmanabhan elucidated the concept of super-radiance in black hole physics, which would lead to mass loss from a black hole, and loss of angular momentum, due to the space-time infall of material into the black hole. As Padmanabhan explained it, to avoid super-radiance, and the probable breakdown of black holes from infall, one would need the frequency of the infalling material, divided by the mass of the particles undergoing infall, to be greater than the angular velocity of the event horizon of the black hole in question. We should keep in mind that we bring this model up to improve the chance that Penrose's conformal cyclic cosmology will allow retention of enough information for preservation of Planck's constant from cycle to cycle, as a counterpart to what we view as unacceptable reliance upon the LQG quantum bounce and its tetrad structure to preserve memory. In addition, we presume that at redshift z = 20 there would be roughly the same order of magnitude of entropy as the number of operations in the electroweak era, and that the number of operations in the z = 20 case is close to the entropy at redshift z = 0. Finally, we have changed Λ with the result that after redshift z = 20 there is a rapid collapse to the present-day vacuum energy value, i.e. by z = 12 the value of the cosmological constant Λ is likely the same as it is today. And z = 12 is roughly the redshift at which galaxies form.
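For reference, the standard super-radiance condition for a rotating black hole, where ω is the mode frequency, Ω_H the angular velocity of the event horizon, and m the azimuthal number (which the abstract reads as the mass of the infalling particles):

```latex
\[
  0 < \omega < m\,\Omega_H \quad\text{(super-radiant regime)}
  \qquad\Longrightarrow\qquad
  \frac{\omega}{m} > \Omega_H \quad\text{(avoidance condition)}.
\]
```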
In the first step, the extremal values of the vibrational specific heat and entropy represented by the Planck oscillators at low temperatures are calculated. The positions of the extrema are defined by the dimensionless ratios between the quanta of the vibrational energy and the product of the actual temperature and the Boltzmann constant. It became evident that the position of the local maximum obtained for Planck's average energy of a vibration mode and the position of the local maximum of entropy are the same. In the next step, Haken's time-dependent perturbation approach to a pair of quantum non-degenerate Schrödinger eigenstates of energy is re-examined. An averaging process done on the time variable leads to a very simple formula for the coefficients entering the perturbation terms.
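For reference, the Planck average energy of a vibration mode, written in terms of the dimensionless ratio x that, per the abstract, fixes the positions of the extrema:

```latex
\[
  \bar{\varepsilon} = \frac{\hbar\omega}{e^{\hbar\omega/k_B T} - 1}
  = k_B T\,\frac{x}{e^{x} - 1},
  \qquad x \equiv \frac{\hbar\omega}{k_B T}.
\]
```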
The singularity at distance r → 0 at the center of a spherically symmetric, non-rotating, uncharged mass of radius R is considered here. Under an inverse square law force, the Schwarzschild metric needs to be modified to include Newton's Shell Theorem (NST). By including NST, both the Schwarzschild singularity at r = 2GM/c² and the singularity at r → 0 are removed from the metric. Near R → 0, the question of maximal density is considered on the basis of the modified Schwarzschild metric, and compared to the quantum limit of maximal mass density set by Planck's quantum-based universal units. It is asserted that general relativity, when combined with Planck's universal units, inevitably leads to quantization of gravity.
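The quantum limit referred to is presumably the Planck density, formed from the universal constants alone:

```latex
\[
  \rho_P = \frac{m_P}{l_P^{3}} = \frac{c^{5}}{\hbar G^{2}}
  \approx 5.2 \times 10^{96}\ \mathrm{kg\ m^{-3}}.
\]
```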
This document arises from reviewing an article by Maydanyuk and Olkhovsky in a 2012 Nova Science compendium, "The Big Bang: Theory, Assumptions and Problems", which uses the Wheeler-DeWitt equation as an evolution equation assuming a closed universe. Having the value of k not as in a closed universe, but nearly zero as in a nearly flat universe, leads to serious problems in interpreting what the initial conditions are. These problems of interpretation of initial conditions tie in with difficulties in using QM as an initial driver of inflation, and argue in favor of using a different procedure for forming an initial wave function of the universe. The author wishes to thank Abhay Ashtekar for his well-thought-out criticism, but asserts that limitations in space-time geometry, largely due to how h is formed from semiclassical reasoning, i.e. Maxwell's equations involving a closed boundary value regime between Octonionic geometry and flat, non-Octonionic space, are a datum which Abhay Ashtekar may wish to consider in his quantum bounce model and in loop quantum gravity in the future.
Planck's constant h is a fundamental physical constant defined in the realm of quantum theory; it is determined only by physical measurement and cannot be calculated. To this day, physicists do not have a convincing explanation for why action in the microcosm is quantized or why h has a specific quantitative value. Here, a new theory is presented based on the idea that the elementary particles are vortices of a condensed superfluid vacuum. The vortex has a conserved angular momentum that can be calculated by applying hydrodynamic laws; in this way, the numerical value of Planck's constant can be obtained. Therefore, the Planck constant is not a fundamental constant but an observable parameter of the elementary particle as a vortex that has constant vorticity and conserved angular momentum. This theory may offer a unique and comprehensive understanding of Planck's constant and open a new perspective for a theory of everything.
Classical mechanics, using Einstein's theories of relativity, places a limit on speed: the speed of light. Quantum mechanics has no such limitation. To understand space accelerating faster than the speed of light, and information being exchanged instantaneously between two entangled electrons separated by huge distances, one uses Planck's length, Planck's time, and Planck's mass to indicate that space and time are discrete, and that they, along with masses smaller than Planck's mass, are quantum mechanical in nature. Faster than the speed of light c = 3 × 10⁸ m/s is a classical effect only in dimensions of space lower than our 3-D Universe, but it is a quantum effect in all dimensions of space. Because space can oscillate, sending out ripples from the source, it is the medium used for transporting light waves and gravity waves.
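The Planck units invoked here, in their standard definitions:

```latex
\[
  l_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m},
  \qquad
  t_P = \frac{l_P}{c} \approx 5.4 \times 10^{-44}\ \mathrm{s},
  \qquad
  m_P = \sqrt{\frac{\hbar c}{G}} \approx 2.2 \times 10^{-8}\ \mathrm{kg}.
\]
```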
From a basic probabilistic argumentation, the Zipfian distribution and Benford's law are derived. It is argued that Zipf's law fits the rank probabilities of identical, indistinguishable objects and that Benford's distribution fits the rank probabilities of distinguishable objects. I.e., in the distribution of words in long texts, all the words in a given rank are identical; therefore, the rank distribution is Zipfian. In logarithmic tables, the objects with identical 1st digits are distinguishable, as there are many different digits in the 2nd, 3rd… places, etc., and therefore the distribution is according to Benford's law. The Pareto 20-80 rule is shown to be an outcome of Benford's distribution, since when the number of ranks is about 10, the probability of the 20% high-probability ranks is equal to the probability of the remaining 80% low-probability ranks. It is argued that all these distributions, including the central limit theorem, are outcomes of Planck's law and are the result of the quantization of energy. This argumentation may be considered a physical origin of probability.
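A quick numerical check of the 20-80 claim under Benford's law, P(d) = log10(1 + 1/d) for leading digits d = 1..9 (nine ranks, close to the "about 10" of the abstract):

```python
import math

# Benford probabilities for leading digits 1..9.
p = [math.log10(1 + 1 / d) for d in range(1, 10)]

top = sum(p[:2])   # the ~20% highest-probability ranks (digits 1 and 2)
rest = sum(p[2:])  # the remaining ~80% of ranks (digits 3..9)
print(f"top 2 ranks: {top:.3f}, remaining 7 ranks: {rest:.3f}")
# -> top 2 ranks: 0.477, remaining 7 ranks: 0.523 (roughly equal)
```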
We consider whether a generalized Heisenberg uncertainty principle (HUP), set greater than or equal to Planck's constant divided by the square of a scale factor, together with an inflaton field, yields the result that ΔE times Δt is embedded in a 5-dimensional field which is within a deterministic structure. Our proof concludes with Δt of the order of the Planck time, resulting in enormous potential energy. If that potential energy is induced by a repeating universe structure, we get a free value of ΔE that is almost infinite, supporting a prior conclusion.
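Reading the abstract's bound literally, with a(t) the scale factor and t_P the Planck time (ħ here stands in for the abstract's "Planck's constant"):

```latex
\[
  \Delta E\,\Delta t \;\ge\; \frac{\hbar}{a(t)^{2}}
  \quad\Longrightarrow\quad
  \Delta E \;\ge\; \frac{\hbar}{a(t)^{2}\,t_P}
  \quad\text{for } \Delta t \sim t_P,
\]
```

so as a(t) → 0 near the initial singularity, the lower bound on ΔE grows without limit.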
This paper extends our previous experimental work on Planck's constant h and the vacuum field, whose spectrum is determined by h. In particular, it adds experimental evidence supporting temporal and spatial variations in the vacuum field, including the Sun as a source at 13 sigmas of certainty. The vacuum field has long been a mystery of physics, having enormous theoretical intensity set by Planck's constant h and yet no obvious physical effect. Hendrik Casimir first proposed that this form of E & M radiation was real in 1948 and suggested an experiment to verify its existence. Over 50 experiments since then have confirmed that this vacuum radiation is real, is a form of electromagnetic radiation, and varies in time and space over 10:1 in our laboratory compared to its standard QM spectrum. Two other authors have found that the fine structure constant α (proportional to 1/h) varies across the cosmos at up to 4.2 sigma certainty. All these results suggest that the vacuum field (and thus h) varies in time and space. In a previous paper we reported our tunnel diode experimental results as well as the results of six other organizations (including German, Russian and US national labs). The six organizations reported sinusoidal annual variations of 1000 - 3000 ppm (peak-to-valley) in the decay rates of 8 radionuclides over a 20-year span, including beta decay (weak interaction) and alpha decay (strong interaction). All decay rates peaked in January-February and minimized in July-August, without any candidate cause suggested. We confirmed that Planck's constant was the cause by verifying similar variations in Esaki tunnel diode current, which is purely electromagnetic. The combined data from previous strong and weak decays, plus our own E & M tunnel data, showed similar magnitude and time phasing for strong, weak and E & M interactions, except that the tunnel diode temporal variations were 180 degrees out of phase—as we predicted. The logic for this 180-degree phase shift was straightforward. Radioactive decay and electron tunneling both have h in the denominator of the tunneling exponent, but tunnel diodes also have h² in the numerator of the exponent, due to the size of atoms being proportional to h². This extra h² makes the exponent proportional to h for electron tunneling instead of proportional to 1/h for strong and weak decay—shifting the annual oscillation of E & M tunnel current by 180 degrees. Radioactive decay had a maximum around January-February of each year and a minimum around July-August of each year. Tunnel current (the equivalent of radioactive decay rate) had the opposite: a minimum around January of each year and a maximum around July of each year. This predicted and observed sign flip in the temporal variations between radioactive decay and electron tunneling provides strong evidence that h variations across the Earth's orbit are the cause of these annual cycles. In this paper we take the next step by verifying whether the Sun and a potentially more distant cosmic source radiate the vacuum E & M field, just as all stars generate massive amounts of regular E & M radiation. We reprocessed two years of data, 6 million data points, from our tunnel diode experiment to search for day-night oscillations in tunnel current. Here we assume that the Earth would block the radiated vacuum field half of each day. Sun-locked signals have 365 cycles per year and cosmos-locked signals have 366 cycles per year. With our two years of data, these two signals are separated by a null signal, which is not locked to the Earth or to the cosmos, allowing us to clearly distinguish the solar and cosmic sources. 1) We found sun-locked variations in the vacuum field, peaking around local noon, with 10⁻¹³ probability of false alarm. Other potential causes are carefully examined and ruled out. 2) We also found cosmos-locked variations in the vacuum field, peaking at the right ascension of the red supergiant star Betelgeuse, with 10⁻⁷ probability of false alarm. Cosmos-locked sources are easily distinguished from the solar source because they have one extra cycle per year, two extra cycles during the two years of the experiment. They are thus independent Fourier components, easily separated by a Fourier transform. Both of these high-probability detections support that the vacuum field spectrum may vary in space and time and be enhanced by stellar sources.
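The 180-degree phase-shift logic, restated symbolically (this is our paraphrase of the abstract's argument; C₁ and C₂ are positive constants standing in for the barrier-dependent factors):

```latex
\[
  \Gamma_{\text{decay}} \propto e^{-C_1/h}
  \;\Rightarrow\;
  \frac{\delta\Gamma}{\Gamma} = +\frac{C_1}{h}\,\frac{\delta h}{h},
  \qquad
  I_{\text{tunnel}} \propto e^{-C_2 h}
  \;\Rightarrow\;
  \frac{\delta I}{I} = -\,C_2\,h\,\frac{\delta h}{h},
\]
```

so a single annual oscillation in h drives the two observables in antiphase.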
There has been protracted historical evidence of a relative paucity in the distribution frequency of global earthquakes within the M = 3.5 to 4.0 range. We observed a similar phenomenon for all recently recorded earthquakes from January 2009 through August 2013. Frequency distributions with increments of M = 0.1 verified the trough of diminished incidence to be between M = 3.6 and 3.7, with an abrupt increase between M = 3.9 and 4.0. The calculated equivalent photon wavelength for the energies associated with M = 3.6 approaches Planck's length, while the related time increment is the cutoff frequency for the Zero Point Fluctuation force coupled to gravity. The conspicuous congruence between Planck's time and length and the lower-than-expected frequency (based upon Gaussian assumptions of distribution) for the discrete band of energy associated with this magnitude range of earthquakes suggests a conduit may exist between intrinsic features of Planck space-time and geophysical processes. The existence of such a connection would encourage alternatives to explaining sun-seismic correlations as due to solar instabilities. Instead, both may reflect the influence of alterations in the structure of space being traversed by the solar system as it moves through the galaxy.
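The stated congruence can be checked with the standard Gutenberg-Richter energy relation, log₁₀E = 1.5M + 4.8 with E in joules, which we assume here is the conversion the authors used:

```latex
\[
  M = 3.6:\quad
  E = 10^{\,1.5(3.6) + 4.8}\ \mathrm{J} \approx 1.6 \times 10^{10}\ \mathrm{J},
  \qquad
  \lambda = \frac{hc}{E}
  \approx \frac{1.99 \times 10^{-25}\ \mathrm{J\,m}}{1.6 \times 10^{10}\ \mathrm{J}}
  \approx 1.3 \times 10^{-35}\ \mathrm{m} \;\sim\; l_P .
\]
```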
Unifying quantum and classical physics has proved difficult as their postulates are conflicting. Using the notion of counts of the fundamental measures—length, mass, and time—a unifying description is resolved. A theoretical framework is presented in a set of postulates by which a conversion between expressions from quantum and classical physics can be made. Conversions of well-known expressions from different areas of physics (quantum physics, gravitation, optics and cosmology) exemplify the approach and mathematical procedures. The postulated integer counts of fundamental measures change our understanding of length, suggesting that our current understanding of reality is distorted.
In this paper, we have determined the structure of the uncertainty relations obtained on the basis of the dimensions that describe the very origin of the Big Bang, in accordance with our Hypothesis of Primary Particles and with the logically introduced smallest increment of speed that can exist, the "speed quantum". This approach allowed us to theoretically move the margin for the description of this singularity to values smaller than the Planck time and the Planck length; hence, we also introduced a new constant into the uncertainty relations, which corresponds to the reduced Planck constant. We expect that such a result for the initial singularity itself will enable a more detailed study of the Big Bang, while opening new areas of study in physics.
The equations for energy, momentum, frequency and wavelength, and also the Schrödinger equation, of the electromagnetic wave in the atom are derived using a model of the atom built by analogy with a transmission line. The action constant A₀ = (μ₀/ε₀)^(1/2)·s₀²e² is a key term in the above-mentioned equations. Besides the other well-known quantities, the only unknown quantity in this expression is the structural constant s₀. Therefore, this article is dedicated to the calculation of the structural constant of the atoms on the basis of the above-mentioned model. The structural constant of the atoms, s₀ = 8.277 56, shows up as a link between the macroscopic and atomic worlds. After calculating this constant, we get a theory of atoms based on Maxwell's and Lorentz's equations only. This theory does not require the Planck constant h, which was once introduced empirically. The replacement for h is the action constant A₀, which is here theoretically derived, while the replacement for the fine structure constant α is 1/(2s₀²). In this way, the structural constant s₀ replaces both constants, h and α. This paper also defines the stationary states of atoms and shows that the maximal atomic number is equal to 2s₀² = 137.036, i.e., as an integer, Z_max = 137. The presented model of the atoms covers three of the four fundamental interactions, namely the electromagnetic, weak and strong interactions.
A century ago, classical physics couldn't explain many atomic physical phenomena. Now the situation has changed: within the framework of classical physics, with the help of Maxwell's equations, we can derive Schrödinger's equation, which is the foundation of quantum physics. The equations for energy, momentum, frequency and wavelength of the electromagnetic wave in the atom are derived using a model of the atom built by analogy with a transmission line. The action constant A₀ = (μ₀/ε₀)^(1/2)·s₀²e² is a key term in the above-mentioned equations. Besides the other well-known constants, the only unknown constant in this expression is the structural constant of the atom, s₀. We have found that the value of this constant is 8.277 56 and that it shows up as a link between the macroscopic and atomic worlds. After calculating this constant, we get a theory of atoms based on Maxwell's and Lorentz's equations only. This theory does not require knowledge of Planck's constant h, which is replaced with the theoretically derived action constant A₀, while the replacement for the inverse fine structure constant α⁻¹ is the theoretically derived expression 2s₀² = 137.036. So, the structural constant s₀ replaces both constants h and α. This paper also defines the stationary states of atoms and shows that the maximal atomic number is equal to Z_max = 137. The presented model of the atoms covers three of the four fundamental interactions, namely the electromagnetic, weak and strong interactions.
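The numerical claims in these two abstracts are easy to check: with s₀ = 8.27756, A₀ = (μ₀/ε₀)^(1/2)·s₀²e² should reproduce Planck's constant, and 2s₀² the inverse fine structure constant. A minimal sketch, assuming CODATA-style SI constant values:

```python
import math

# Physical constants (SI, CODATA-style values).
mu0  = 4 * math.pi * 1e-7        # vacuum permeability, H/m
c    = 2.99792458e8              # speed of light, m/s
eps0 = 1 / (mu0 * c**2)          # vacuum permittivity, F/m
e    = 1.602176634e-19           # elementary charge, C
h    = 6.62607015e-34            # Planck constant, J s

s0 = 8.27756                     # structural constant from the abstracts

A0 = math.sqrt(mu0 / eps0) * s0**2 * e**2   # action constant A0
print(f"A0      = {A0:.5e} J s")            # ~6.626e-34, matching h
print(f"A0 / h  = {A0 / h:.6f}")            # ~1.000
print(f"2*s0**2 = {2 * s0**2:.3f}")         # ~137.036, matching 1/alpha
```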
Recently, the author read the Alicki-Van Ryn test as to the behavior of photons in a test of violations of classicality. The same thing is proposed here via use of a spin-two graviton, using typical spin-2 matrices. While the technology does not yet exist to perform such an analysis, the same sort of thought experiment is proposed in a way that allows a first-principles test of the either classical or quantum foundations of gravity. The reason for the present manuscript topic is a specific argument presented in a prior document as to how h is formed from semiclassical reasoning. We referred there to a procedure for using Maxwell's equations involving a closed boundary regime, in the boundary regime between Octonionic geometry and quantum flat space. Conceivably, a similar argument could be made for gravitons, pending further investigations. Also, the analysis of whether gravitons are constructed by a similar semiclassical argument is pending, if gravitons as probed by the Alicki-Van Ryn test result in semiclassical and matrix observable eigenvalue behavior. This paper also indirectly raises the question of whether Bayesian statistics would be the optimal way to differentiate between semiclassical and matrix observable eigenvalue behavior, for reasons brought up in the conclusion.
Making use of Newton's classical shell theorem, the Schwarzschild metric is modified. This removes the singularity at r = 0 for a standard object (not a black hole). It is demonstrated how general relativity evidently leads to quantization of space-time. Both the classical and the quantum mechanical limits on density give the same result. Based on Planck's length and the assumption that density must have an upper limit, we conclude that the lower limit of Einstein's classical gravitation theory is related to the Planck length, a quantum phenomenon posed by dimensional analysis of the universal constants. The Ricci tensor is considered under extreme densities (where the Kretschmann invariant = 0), and a solution is considered for both outside and inside the object. Therefore, classical relativity and the relationship between the universal constants lead to quantization of space. A gedanken experiment of light passing through an extremely dense object is considered, which will allow for evaluation of the theory.
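One way to see how the classical and quantum density limits meet (a sketch under our own assumptions, not the paper's derivation): the mean density of a mass M filling its Schwarzschild radius reaches the Planck density ρ_P = c⁵/(ħG²) when M is of order the Planck mass.

```latex
\[
  \rho(M) = \frac{M}{\tfrac{4}{3}\pi R_s^{3}}
          = \frac{3c^{6}}{32\pi G^{3} M^{2}},
  \qquad R_s = \frac{2GM}{c^{2}},
  \qquad
  \rho(M) = \rho_P
  \;\Longrightarrow\;
  M = \sqrt{\frac{3\hbar c}{32\pi G}} \;\sim\; m_P .
\]
```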
From 1990 to 2005, NASA did six flybys of Earth in order to boost the energy of each spacecraft, enabling them to go deeper into the solar system. These six flybys showed an unexpected violation of the conservation of energy, of up to 100 sigmas, matching a simple physical formula relating the incoming and outgoing spacecraft velocities relative to the Earth's rotational plane. Mysteriously, the effect was occasionally not present. After several years of reviewing the data and evaluating all sources of perturbation known to NASA, no solution was identified. NASA sent the final report to the author for further review. Independently, the author's firm, Optical Physics Company, had published research into the vacuum field, finding that it was not constant but varied across the Earth's orbit, and was also separately detected being radiated by the Sun. The physics we had learned was applied to the NASA passes, allowing all the anomalies they had encountered to be explained and adding considerably to our understanding of the vacuum field. We hypothesized that a radially emitted vacuum field (which controls the rate of time) would couple the radial direction r with time t to add a g_tr term in the metric tensor. We then combined the previously published experimental data on the vacuum field radiated by the Sun with the NASA data to develop a formula for the emission of the vacuum field from warm rotating bodies, accurate to about 1%. 25 candidate formulas were evaluated, based on powers of radial acceleration and temperature, and one was definitively selected. This research offers a linkage between the vacuum field, whose spectrum is proportional to h, and an effect on the metric tensor of gravity. Since both gravity and h control time rates, it seemed credible that they could both affect the metric tensor.
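For context, the empirical flyby-anomaly formula published by Anderson et al. (2008), presumably the "simple physical formula" referred to above, relates the change in asymptotic speed to the declinations δ_i, δ_o of the incoming and outgoing velocity vectors relative to Earth's equatorial plane:

```latex
\[
  \frac{\Delta V_{\infty}}{V_{\infty}}
  = K\left(\cos\delta_i - \cos\delta_o\right),
  \qquad
  K = \frac{2\,\omega_E R_E}{c} \approx 3.1 \times 10^{-6},
\]
```

where ω_E and R_E are Earth's rotation rate and radius.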