Based on Mach's principle and the characteristic mass of the present universe, M_0 ≅ c^3/(2GH_0), it is noticed that the rate of decrease in the laboratory fine structure ratio is a measure of the cosmic rate of expansion. If the observed laboratory fine structure ratio is constant, then, independent of the cosmic redshift and CMBR observations, it can be suggested that at present there is no cosmic acceleration. The obtained value of the present Hubble constant is 70.75 km/sec/Mpc. If it is true that the rate of decrease in temperature is a measure of the cosmic rate of expansion, then from the observed cosmic isotropy it can also be suggested that at present there is no cosmic acceleration. At present, if the characteristic mass of the universe is M_0 = c^3/(2GH_0), and if the primordial universe is a natural setting for the creation of black holes and other non-perturbative gravitational entities, it is also possible to assume that throughout its journey the whole universe is a primordial, growing, light-speed rotating black hole. At any time, if ω_t is the angular velocity, then the cosmic radius is c/ω_t and the cosmic mass is c^3/(2Gω_t). Instead of the Planck mass, initial conditions can be addressed with the Coulomb mass M_C = sqrt(e^2/(4πε_0 G)). At present, if ω_t = H_0, the cosmic black hole's volume density, observed matter density and thermal energy density are in geometric series, and the geometric ratio is 1 + ln(M_0 + M_C).
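To make the scale of the quoted characteristic mass concrete, here is a minimal numerical sketch (my own, in Python, using standard values for c and G) that evaluates M_0 = c^3/(2GH_0) for the quoted H_0 = 70.75 km/sec/Mpc; the variable names are illustrative and not taken from the paper.

```python
# Illustrative check of M_0 = c^3 / (2*G*H_0) for H_0 = 70.75 km/s/Mpc.
c = 2.99792458e8        # speed of light, m/s
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22         # one megaparsec in metres

H0 = 70.75 * 1.0e3 / Mpc           # Hubble constant in s^-1 (~2.29e-18)
M0 = c**3 / (2.0 * G * H0)         # characteristic mass in kg

print(f"H0 = {H0:.3e} 1/s")
print(f"M0 = c^3/(2*G*H0) = {M0:.3e} kg")   # roughly 9e52 kg
```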
O. A. Teplov developed an approach to describe the meson quark model by establishing a mathematical quark series (the harmonic quark series). With respect to the physical mesons, he made some basic hypotheses of his own and used the well-known theory of harmonic oscillation to construct a numerical mass series that obeys a rigid multiplicative pattern and allows the physical meson masses to be calculated accurately. We have found that his numerical quark series, i.e., their masses, has a fundamental relation to the reduced Planck constant ħ, and we report on it in the present paper. This discovery is clearly a theoretical contribution to the correctness of Teplov's harmonic quark model approach and, at the same time, a confirmation of the importance of this simple and powerful line of research.
We initially look at a non-singular universe representation of entropy, based in part on what was brought up by Muller and Lousto. This is a gateway to bringing up information and computational steps (as defined by Seth Lloyd) as to what would be available initially due to a modified ZPE formalism. The ZPE formalism is modified due to Matt Visser's alteration of k(maximum) ~ 1/(Planck length), with a specific initial density giving rise to an initial information content which may permit fixing the initial Planck constant, h, which is pivotal to the setting of physical law. The settings of these parameters depend upon NLED.
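For orientation on the scale mentioned here, a short Python sketch (my own, not from the paper) evaluates the Planck length and the corresponding cutoff wavenumber k(maximum) ~ 1/(Planck length):

```python
import math

# Standard Planck length l_P = sqrt(hbar*G/c^3) and the cutoff k_max ~ 1/l_P.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

l_P = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
k_max = 1.0 / l_P                  # ~6e34 1/m

print(f"Planck length l_P = {l_P:.3e} m")
print(f"k_max ~ 1/l_P     = {k_max:.3e} 1/m")
```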
Funding: Project partially supported by the Research Grant Council of Hong Kong, China (Grant No. RGC 660207) and the Macro-Science Program, Hong Kong University of Science and Technology, China (Grant No. DCC 00/01.SC01).
The discovery of the Planck relation is generally regarded as the starting point of quantum physics. Planck's constant h is now regarded as one of the most important universal constants. The physical nature of h, however, has not been well understood. It was originally suggested as a fitting constant to explain black-body radiation. Although Planck had proposed a theoretical justification of h, he was never satisfied with it. To solve this outstanding problem, we use the Maxwell theory to directly calculate the energy and momentum of a radiation wave packet. We find that the energy of the wave packet is indeed proportional to its oscillation frequency. This allows us to derive the value of Planck's constant. Furthermore, we show that the emission and transmission of a photon follows the all-or-none principle. The "strength" of the wave packet can be characterized by ζ, which represents the integrated strength of the vector potential along a transverse axis. We reason that ζ should have a fixed cut-off value for all photons. Our results suggest that a wave packet can behave like a particle. This offers a simple explanation for the recent satellite observations that the cosmic microwave background follows closely the black-body radiation predicted by Planck's law.
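As a concrete instance of the proportionality between wave-packet energy and oscillation frequency discussed above, the following small Python snippet (illustrative only) evaluates the standard Planck relation E = hν for a visible-light frequency:

```python
# Planck relation E = h * nu for a visible-light photon (~500 nm).
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

wavelength = 500e-9                 # m
nu = c / wavelength                 # frequency, ~6e14 Hz
E = h * nu                          # photon energy, J

print(f"nu = {nu:.3e} Hz, E = h*nu = {E:.3e} J ({E/1.602176634e-19:.2f} eV)")
```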
Annual variations of 1000 - 3000 ppm (peak-to-valley) have been observed in the decay rates of 8 radionuclides over a 20-year span by six organizations on three continents, including beta decay (weak interaction) and alpha decay (strong interaction). In searching for a common cause, we hypothesized that small variations in Planck's constant might account for the observed synchronized variations in strong and weak decays. If so, then h would be a maximum around January-February of each year and a minimum around July-August of each year, based on the 20 years of radioactive decay data. To test this hypothesis, a purely electromagnetic experiment was set up to search for the same annual variations. From Jun 14, 2011 to Jan 29, 2014 (941 days), annual variations in tunneling voltage through 5 parallel Esaki tunnel diodes were recorded. The experiment found annual variations of 826 ppm peak-to-valley, peaking around Jan 1. These variations lend support to the hypothesis that there is a gradient in h of about 21 ppm across the Earth's orbit.
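A minimal way to see how a peak-to-valley figure like 826 ppm maps onto an annual sinusoid peaking near Jan 1 is sketched below in Python; the model and parameter names are my own illustration of the kind of fit described, not the authors' analysis code.

```python
import numpy as np

# Annual sinusoid with 826 ppm peak-to-valley, peaking at day-of-year ~1 (Jan 1).
peak_to_valley_ppm = 826.0
amplitude = peak_to_valley_ppm / 2.0          # 413 ppm half-amplitude
period_days = 365.25
t_peak = 1.0                                  # day of year of the maximum

t = np.arange(0, 2 * period_days)             # two years of daily samples
signal_ppm = amplitude * np.cos(2 * np.pi * (t - t_peak) / period_days)

# Recover the amplitude by least-squares fit of cosine/sine components.
A = np.column_stack([np.cos(2 * np.pi * t / period_days),
                     np.sin(2 * np.pi * t / period_days),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, signal_ppm, rcond=None)
fit_amplitude = np.hypot(coef[0], coef[1])

print(f"fitted half-amplitude ~ {fit_amplitude:.1f} ppm "
      f"(peak-to-valley ~ {2*fit_amplitude:.0f} ppm)")
```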
Padmanabhan elucidated the concept of super-radiance in black hole physics, which would lead to loss of mass of a black hole and loss of angular momentum due to space-time infall of material into a black hole. As Padmanabhan explained it, to avoid super-radiance, and the probable breakdown of black holes from infall, one would need the frequency of the infalling material, divided by the mass of the particles undergoing infall into the black hole, to be greater than the angular velocity of the black hole event horizon in question. We should keep in mind that we bring this model up to improve the chance that Penrose's conformal cyclic cosmology will allow for retention of enough information for preservation of Planck's constant from cycle to cycle, as a counterpart to what we view as an unacceptable reliance upon the LQG quantum bounce and its tetrad structure to preserve memory. In addition, we are presuming that at redshift z = 20 there would be roughly the same order of magnitude of entropy as the number of operations in the electroweak era, and that the number of operations in the z = 20 case is close to the entropy at redshift z = 0. Finally, we have changed Λ with the result that after redshift z = 20 there is a rapid collapse to the present-day vacuum energy value, i.e., by z = 12 the value of the cosmological constant Λ is likely the same as it is today. And z = 12 is roughly the redshift at which galaxies form.
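Stated symbolically, the avoidance condition described in words above amounts to the inequality below (my notation: ω for the frequency of the infalling material, μ for the particle mass, Ω_H for the angular velocity of the event horizon; this is only a transcription of the abstract's wording, not the paper's own equation):

```latex
\frac{\omega}{\mu} > \Omega_{H}
\qquad \text{(condition for avoiding super-radiance, as paraphrased from Padmanabhan)}
```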
Planck's constant h is a fundamental physical constant defined in the realm of quantum theory; it is determined only by physical measurement and cannot be calculated. To this day, physicists do not have a convincing explanation for why action in the microcosm is quantized or why h has a specific quantitative value. Here, a new theory is presented based on the idea that the elementary particles are vortices of a condensed superfluid vacuum. The vortex has a conserved angular momentum that can be calculated by applying hydrodynamic laws; in this way, the numerical value of Planck's constant can be obtained. Therefore, the Planck constant is not a fundamental constant but an observable parameter of the elementary particle as a vortex that has constant vorticity and conserved angular momentum. This theory may offer a unique and comprehensive understanding of Planck's constant and open a new perspective for a theory of everything.
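For background on how hydrodynamics can tie a vortex to h, the standard Onsager-Feynman quantization of circulation in a superfluid is the textbook relation of this kind; the short Python check below is only an orientation point and is not claimed to be the paper's actual derivation.

```python
# Onsager-Feynman circulation quantum kappa = h/m for superfluid helium-4.
h = 6.62607015e-34        # Planck constant, J*s
m_He4 = 6.6446573e-27     # mass of a helium-4 atom, kg

kappa = h / m_He4         # quantum of circulation, m^2/s
print(f"kappa = h/m_He4 = {kappa:.3e} m^2/s")   # ~1.0e-7 m^2/s
```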
Haug has recently introduced a new theory of unified quantum gravity coined "Collision Space-Time". From this new and deeper understanding of mass, we can also understand how a grandfather pendulum clock can be used to measure the world's shortest time interval, namely the Planck time, indirectly, without any knowledge of G. Therefore, such a clock can also be used to measure the diameter of an indivisible particle indirectly. Further, such a clock can easily measure the Schwarzschild radius of the gravity object and what we will call "Schwarzschild time". These facts basically prove that the Newton gravitational constant is not needed to find the Planck length or the Planck time; it is also not needed to find the Schwarzschild radius. Unfortunately, there is significant inertia towards new ideas that could significantly alter our perspective on the fundamentals in the current physics establishment. However, this situation is not new in the history of science. Still, the idea that the Planck time can be measured totally independently of any knowledge of Newton's gravitational constant could be very important for moving forward in physics. Interestingly, an old instrument that today is often thought of as primitive can measure the world's shortest possible time interval. No atomic clock or optical clock is even close to being able to do this.
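For reference, the quantities the abstract says can be reached without G are conventionally defined through G; a short Python sketch of those standard expressions (illustrative values only, not Haug's G-free procedure) gives the magnitudes involved:

```python
import math

# Standard (G-based) definitions of the quantities discussed:
# Planck time t_P = sqrt(hbar*G/c^5) and Schwarzschild radius r_s = 2*G*M/c^2.
hbar = 1.054571817e-34   # J*s
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s
M_earth = 5.972e24       # kg

t_P = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s
r_s = 2 * G * M_earth / c**2       # ~8.9 mm for the Earth

print(f"Planck time t_P          = {t_P:.3e} s")
print(f"Schwarzschild radius r_s = {r_s:.3e} m (Earth)")
```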
Here we derive Newton's and Einstein's gravitational results for any mass less than or equal to a Planck mass. All of the new formulas presented in this paper give the same numerical output as the traditional formulas. However, they have been rewritten in a way that gives a new perspective on the formulas when working with gravity at the level of the subatomic world. Rewriting the well-known formulas in this way could make it easier to understand the strengths and weaknesses of Newton's and Einstein's gravitation formulas at the subatomic scale, potentially opening them up for new, important interpretations and extensions. For example, we suggest that the speed of gravity, equal to that of light, is actually embedded and hidden inside Newton's gravitational formula.
Newton did not invent or use the so-called Newton's gravitational constant G; his original gravity formula did not contain it. In this paper, we will show how a series of major gravity phenomena can be calculated and predicted without the gravitational constant. This is, to some degree, well known, at least to those who have studied a significant amount of the older literature on gravity. However, to understand gravity at a deeper level, still without G, one needs to trust Newton's formula. It is when we first combine Newton's assumption that matter and light ultimately consist of hard indivisible particles with new insight in atomism that we can truly begin to understand gravity at a deeper level. This leads to a quantum gravity theory that is unified with quantum mechanics and in which there is no need for G, and not even a need for the Planck constant. We claim that two mistakes have been made in physics which have held back progress towards a unified quantum gravity theory. First, it has been common practice to consider Newton's gravitational constant as almost holy and untouchable. Thus, we have neglected to see an important aspect of mass, namely the indivisible particle that Newton also held in high regard. Second, standard physics has built its quantum mechanics around the de Broglie wavelength rather than the Compton wavelength. We claim the de Broglie wavelength is merely a mathematical derivative of the Compton wavelength, the true matter wavelength.
The equations for energy, momentum, frequency and wavelength, and also the Schrödinger equation, of the electromagnetic wave in the atom are derived using a model of the atom by analogy with a transmission line. The action constant A_0 = (μ_0/ε_0)^(1/2) s_0^2 e^2 is a key term in the above-mentioned equations. Besides the other well-known quantities, the only unknown quantity in this expression is the structural constant s_0. Therefore, this article is dedicated to the calculation of the structural constant of the atoms on the basis of the above-mentioned model. The structural constant of the atoms, s_0 = 8.277 56, shows up as a link between the macroscopic and atomic worlds. After calculating this constant we get a theory of atoms based on Maxwell's and Lorentz's equations only. This theory does not require the Planck constant h, which was once introduced empirically. The replacement for h is the action constant A_0, which is here theoretically derived, while the replacement for the fine structure constant α is 1/(2s_0^2). In this way, the structural constant s_0 replaces both constants, h and α. This paper also defines the stationary states of atoms and shows that the maximal atomic number is equal to 2s_0^2 = 137.036, i.e., as an integer, Z_max = 137. The presented model of the atoms covers three of the four fundamental interactions, namely the electromagnetic, weak and strong interactions.
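The quoted numbers are easy to check: with s_0 = 8.27756, the combination A_0 = sqrt(μ_0/ε_0)·s_0^2·e^2 comes out numerically close to h, and 2s_0^2 reproduces 137.036. A short Python verification (mine, using standard constants) is given below.

```python
import math

# Check A_0 = sqrt(mu_0/eps_0) * s_0^2 * e^2 against Planck's constant h,
# and 2*s_0^2 against the inverse fine-structure constant.
mu_0 = 4.0e-7 * math.pi          # vacuum permeability, H/m (classical value)
eps_0 = 8.8541878128e-12         # vacuum permittivity, F/m
e = 1.602176634e-19              # elementary charge, C
h = 6.62607015e-34               # Planck constant, J*s
s_0 = 8.27756                    # structural constant quoted in the abstract

Z_0 = math.sqrt(mu_0 / eps_0)    # vacuum impedance, ~376.73 ohm
A_0 = Z_0 * s_0**2 * e**2        # action constant, J*s

print(f"A_0     = {A_0:.4e} J*s")
print(f"h       = {h:.4e} J*s  (ratio A_0/h = {A_0/h:.5f})")
print(f"2*s_0^2 = {2*s_0**2:.3f}")          # ~137.04
```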
This document is due to reviewing an article by Maydanyuk and Olkhovsky in a Nova Science compendium, "The Big Bang: Theory, Assumptions and Problems" (2012), which uses the Wheeler-DeWitt equation as an evolution equation assuming a closed universe. Having the value of k not as for a closed universe, but nearly zero as for a nearly flat universe, leads to serious problems of interpretation of what the initial conditions are. These problems of interpretation of initial conditions tie in with difficulties in using QM as an initial driver of inflation, and they argue in favor of using a different procedure for forming a wave function of the universe initially. The author wishes to thank Abhay Ashtekar for his well-thought-out criticism, but asserts that limitations in space-time geometry, largely due to when h is formed from semi-classical reasoning, i.e. Maxwell's equations involving a closed boundary value regime between Octonionic geometry and flat, non-Octonionic geometry, are a datum which Abhay Ashtekar may wish to consider in his quantum bounce model and in loop quantum gravity in the future.
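Schematically, the minisuperspace Wheeler-DeWitt equation referred to above takes the form below, where the curvature parameter k is exactly the quantity whose value (closed, k = +1, versus nearly flat, k ≈ 0) drives the interpretational problems discussed; prefactors and factor ordering are suppressed, so this is only an orientation sketch, not the equation as used by Maydanyuk and Olkhovsky.

```latex
\left[-\frac{\partial^{2}}{\partial a^{2}} + U(a)\right]\psi(a) = 0,
\qquad
U(a) \;\propto\; a^{2}\left(k - \frac{\Lambda a^{2}}{3}\right)
```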
The Newton gravitational constant is considered a cornerstone of modern gravity theory. Newton did not invent or use the gravity constant; it was invented in 1873, about the same time as it became standard to use the kilogram mass definition. We will claim that G is just a term needed to correct the incomplete kilogram definition so as to be able to make gravity predictions. But there is another way, namely, to directly use a more complete mass definition, something that in recent years has been introduced as collision-time, with a corresponding energy called collision-length. The collision-length is quantum gravitational energy. We will clearly demonstrate that by working with mass and energy based on these new concepts, rather than the kilogram and the gravitational constant, one can significantly reduce the uncertainty in most gravity predictions.
Planck's radiation law provides an equation for the intensity of the electromagnetic radiation from a physical body as a function of frequency and temperature. The frequency that corresponds to the maximum intensity is a function of temperature. At a specific temperature, for frequencies whose intensities are much less than the maximum, an equation was derived in the form of the Lambert W function. Numerical calculations validate the equation. A new form of solution for Euler's transcendental equation was derived in the form of the Lambert W function with a logarithmic argument. Numerical solutions to Euler's equation were determined iteratively, and the iterative convergence was investigated. Numerical coincidences with physical constants were explored.
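As a concrete example of how the Lambert W function arises from Planck's law, the frequency of maximum intensity satisfies the transcendental equation x = 3(1 - e^(-x)) with x = hν_max/(kT), whose exact solution is x = 3 + W(-3e^(-3)) ≈ 2.821; the short SciPy snippet below (my own, not the paper's code) evaluates it.

```python
import numpy as np
from scipy.special import lambertw

# Peak of Planck's law in frequency: x = h*nu_max/(k*T) solves x = 3*(1 - exp(-x)),
# with closed-form solution x = 3 + W(-3*exp(-3)) via the Lambert W function.
x = 3.0 + np.real(lambertw(-3.0 * np.exp(-3.0)))
print(f"x = h*nu_max/(k*T) = {x:.6f}")          # ~2.821439

# Corresponding Wien-type displacement coefficient nu_max/T in Hz/K.
h = 6.62607015e-34
k_B = 1.380649e-23
print(f"nu_max/T = {x * k_B / h:.4e} Hz/K")     # ~5.879e10 Hz/K
```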
In this paper, we show how one can find the Planck units without any knowledge of Newton's gravitational constant, mainly focusing on the use of a Cavendish apparatus to accomplish this. This is in strong contrast to the assumption that one needs to know G in order to find the Planck units. The work strongly supports the idea that gravity is directly linked to the Planck scale, as suggested by several quantum gravity theories. We further demonstrate that there is no need for the Planck constant in observable gravity phenomena despite quantization, and we also suggest that standard physics uses two different mass definitions without acknowledging them directly. The quantization in gravity is linked to the Planck length and Planck time, which in turn are linked to what we can call the number of Planck mass events. That is, quantization in gravity is not only a hypothesis, but something we can currently and actually detect and measure.
In this paper, we have determined the structure of the uncertainty relations obtained on the basis of the dimensions that describe the very origin of the Big Bang, in accordance with our Hypothesis of Primary Particles and with the logically introduced smallest increment of speed that can exist, the "speed quantum". This approach allowed us to theoretically move the margin for the description of this singularity to values smaller than the Planck time and the Planck length; hence, we also introduced a new constant in the uncertainty relations, which corresponds to the reduced Planck constant. We expect that such a result for the initial singularity itself will enable a more detailed study of the Big Bang, while opening new areas of study in physics.
This paper extends the previous experimental work on Planck's constant h and the vacuum field, whose spectrum is determined by h. In particular, it adds experimental evidence supporting temporal and spatial variations in the vacuum field, including the Sun as a source at 13 sigmas of certainty. The vacuum field has long been a mystery of physics, having enormous theoretical intensity set by Planck's constant h and yet no obvious physical effect. Hendrik Casimir first proposed that this form of E & M radiation was real in 1948 and suggested an experiment to verify its existence. Over 50 experiments since then have confirmed that this vacuum radiation is real, is a form of electromagnetic radiation, and varies in time and space by up to 10:1 in our laboratory compared to its standard QM spectrum. Two other authors have found the fine structure constant α (proportional to 1/h) to be varying across the cosmos at up to 4.2 sigma certainty. All these results suggest that the vacuum field (and thus h) varies in time and space. In a previous paper we reported our tunnel diode experimental results as well as the results of six other organizations (including German, Russian and US national labs). The six organizations reported sinusoidal annual variations of 1000 - 3000 ppm (peak-to-valley) in the decay rates of 8 radionuclides over a 20-year span, including beta decay (weak interaction) and alpha decay (strong interaction). All decay rates peaked in January-February and minimized in July-August, without any candidate cause suggested. We confirmed that Planck's constant was the cause by verifying similar variations in Esaki tunnel diode current, which is purely electromagnetic. The combined data from previous strong and weak decays plus our own E & M tunnel data showed similar magnitude and time phasing for strong, weak and E & M interactions, except that the tunnel diode temporal variations were 180 deg out of phase, as we predicted. The logic for this 180 deg phase shift was straightforward (a small numerical sketch of the sign argument follows this abstract). Radioactive decay and electron tunneling both have h in the denominator of the tunneling exponent, but tunnel diodes also have h^2 in the numerator of the exponent due to the size of atoms being proportional to h^2. This extra h^2 makes the exponent proportional to h for electron tunneling instead of proportional to 1/h for strong and weak decay, shifting the annual oscillation of the E & M tunnel current by 180 deg. Radioactive decay had a maximum around January-February of each year and a minimum around July-August of each year. Tunnel current (the equivalent of a radioactive decay rate) had the opposite: a minimum around January of each year and a maximum around July of each year. This predicted and observed sign flip in the temporal variations between radioactive decay and electron tunneling provides strong evidence that h variations across the Earth's orbit are the cause of these annual cycles. In this paper we take the next step by verifying whether the Sun and a potentially more distant cosmic source radiate the vacuum E & M field, just as all stars generate massive amounts of regular E & M radiation. We reprocessed two years of data, 6 million data points, from our tunnel diode experiment to search for day-night oscillations in tunnel current. Here we assume that the Earth would block the radiated vacuum field half of each day. Sun-locked signals have 365 cycles per year and cosmos-locked signals have 366 cycles per year.
With our two years of data, these two signals are separated by a null signal, which is not locked to the Earth or to the cosmos, allowing us to clearly distinguish the solar and cosmic sources. 1) We found sun-locked variations in the vacuum field, peaking around local noon, with a 10^-13 probability of false alarm. Other potential causes are carefully examined and ruled out. 2) We also found cosmos-locked variations in the vacuum field, peaking at the right ascension of the red supergiant star Betelgeuse, with a 10^-7 probability of false alarm. Cosmos-locked sources are easily distinguished from the solar source because they have one extra cycle per year, i.e. two extra cycles during the two years of the experiment. They are thus independent Fourier components, easily separated by a Fourier transform. Both of these high-probability detections support the idea that the vacuum field spectrum may vary in space and time and be enhanced by stellar sources.
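The 180-degree phase argument in the preceding abstract reduces to a sign statement about logarithmic sensitivities: a rate going as exp(-C/h) moves in the same direction as a small change in h, while a rate going as exp(-C'·h) moves in the opposite direction. A tiny Python sketch of that bookkeeping (illustrative constants, not the experiment's numbers):

```python
import numpy as np

# Fractional response of two exponential rates to a small change in h:
#   radioactive decay rate  ~ exp(-C / h)    (exponent ~ 1/h)
#   tunnel-diode current    ~ exp(-Cp * h)   (exponent ~ h, after the extra h^2)
h0 = 1.0                    # normalised h
C, Cp = 50.0, 50.0          # illustrative dimensionless exponent scales
dh_over_h = 21e-6           # ~21 ppm change in h, as hypothesised

decay0, decay1 = np.exp(-C / h0), np.exp(-C / (h0 * (1 + dh_over_h)))
diode0, diode1 = np.exp(-Cp * h0), np.exp(-Cp * h0 * (1 + dh_over_h))

print(f"decay rate change   : {(decay1 / decay0 - 1) * 1e6:+.1f} ppm")  # positive
print(f"diode current change: {(diode1 / diode0 - 1) * 1e6:+.1f} ppm")  # negative
```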
Unifying quantum and classical physics has proved difficult, as their postulates are conflicting. Using the notion of counts of the fundamental measures (length, mass, and time), a unifying description is resolved. A theoretical framework is presented in a set of postulates by which a conversion between expressions from quantum and classical physics can be made. Conversions of well-known expressions from different areas of physics (quantum physics, gravitation, optics and cosmology) exemplify the approach and the mathematical procedures. The postulated integer counts of fundamental measures change our understanding of length, suggesting that our current understanding of reality is distorted.
A century ago, classical physics could not explain many atomic physical phenomena. Now the situation has changed. This is because, within the framework of classical physics, with the help of Maxwell's equations we can derive Schrödinger's equation, which is the foundation of quantum physics. The equations for energy, momentum, frequency and wavelength of the electromagnetic wave in the atom are derived using a model of the atom by analogy with a transmission line. The action constant A_0 = (μ_0/ε_0)^(1/2) s_0^2 e^2 is a key term in the above-mentioned equations. Besides the other well-known constants, the only unknown constant in this expression is the structural constant of the atom, s_0. We have found that the value of this constant is 8.277 56 and that it shows up as a link between the macroscopic and atomic worlds. After calculating this constant we get a theory of atoms based on Maxwell's and Lorentz's equations only. This theory does not require knowledge of Planck's constant h, which is replaced by the theoretically derived action constant A_0, while the replacement for the inverse fine structure constant α^(-1) is the theoretically derived expression 2s_0^2 = 137.036. So, the structural constant s_0 replaces both constants h and α. This paper also defines the stationary states of atoms and shows that the maximal atomic number is equal to Z_max = 137. The presented model of the atoms covers three of the four fundamental interactions, namely the electromagnetic, weak and strong interactions.
Recently, the author read the Alicki-Van Ryn test as to the behavior of photons in a test of violations of classicality. The same thing is proposed here via use of a spin-two graviton, using typical spin-2 matrices. While the technology does not yet exist to perform such an analysis, the same sort of thought experiment is proposed in a way that allows for a first-principles test of either the classical or the quantum foundations of gravity. The reason for the present manuscript topic is a specific argument presented in a prior document as to how h is formed from semiclassical reasoning. We referred to a procedure for using Maxwell's equations involving a closed boundary regime, in the boundary regime between Octonionic geometry and quantum flat space. Conceivably, a similar argument could be made for gravitons, pending further investigations. Also, the analysis of whether gravitons are constructed by a similar semiclassical argument is pending, should gravitons, as probed by the Alicki-Van Ryn test, result in semiclassical and matrix observable eigenvalue behavior. This paper also indirectly raises the question of whether Bayesian statistics would be the optimal way to differentiate between semiclassical and matrix observable eigenvalue behavior, for reasons brought up in the conclusion.
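For concreteness, the "typical spin 2 matrices" invoked above are presumably the standard five-dimensional angular momentum representation; a short NumPy construction of S_z and S_x for spin 2 (standard quantum mechanics, not anything specific to the Alicki-Van Ryn style test proposed in the paper) is sketched below.

```python
import numpy as np

def spin_matrices(s: float):
    """Standard spin-s matrices S_z and S_x (in units of hbar)."""
    m = np.arange(s, -s - 1, -1)                  # m = s, s-1, ..., -s
    Sz = np.diag(m)
    # Raising operator: <m+1|S_+|m> = sqrt(s(s+1) - m(m+1))
    Sp = np.zeros((len(m), len(m)))
    for i in range(1, len(m)):
        Sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    Sx = 0.5 * (Sp + Sp.T)                        # S_x = (S_+ + S_-)/2
    return Sz, Sx

Sz, Sx = spin_matrices(2.0)       # spin-2 (graviton-like) representation
print("S_z eigenvalues:", np.diag(Sz))            # [ 2.  1.  0. -1. -2.]
print("S_x eigenvalues:", np.round(np.linalg.eigvalsh(Sx), 6))
```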