Einstein’s field equation is a highly general equation, comprising sixteen component equations. However, the equation itself provides limited information about the universe unless it is solved under specific boundary conditions. Multiple solutions have been used to predict cosmic scales; among them, the Friedmann-Lemaître-Robertson-Walker solution is the backbone of today’s standard model of cosmology, the Λ-CDM model. However, this is naturally not the only solution to Einstein’s field equation. We investigate the extremal solutions of the Reissner-Nordström, Kerr, and Kerr-Newman metrics. Interestingly, in their extremal cases, these solutions yield identical predictions for horizons and escape velocity. These solutions can be employed to formulate a new cosmological model that resembles the Friedmann equation. A significant distinction, however, arises in the extremal universe solution, which does not necessitate the ad hoc insertion of the cosmological constant; instead, it emerges naturally from the derivation itself. To the best of our knowledge, all other solutions relying on the cosmological constant do so by inserting it ad hoc into Einstein’s field equation. This clarification unveils the true nature of the cosmological constant, suggesting that it serves as a correction factor for strong gravitational fields, accurately predicting real-world cosmological phenomena only within the extremal solutions of the discussed metrics, all derived strictly from Einstein’s field equation.
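The abstract does not reproduce the metrics; as a reference sketch, the standard Kerr-Newman horizon formula (in geometrized units, G = c = 1) shows why the three extremal solutions share a horizon prediction:

```latex
% Horizons of the Kerr-Newman family (geometrized units, G = c = 1):
r_{\pm} = M \pm \sqrt{M^{2} - a^{2} - Q^{2}}
% Reissner-Nordström: a = 0.   Kerr: Q = 0.   Kerr-Newman: general a, Q.
% In the extremal limit a^{2} + Q^{2} = M^{2} the two horizons coincide:
r_{+} = r_{-} = M \quad\Longrightarrow\quad r = \frac{GM}{c^{2}} \ \text{(restoring units)}.
```

In each extremal case the single degenerate horizon sits at the same radius, consistent with the abstract's claim of identical horizon predictions.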
This paper integrates a quantum conception of the Planck-epoch early universe with FSC model formulae and the holographic principle, to offer a reasonable explanation and solution of the cosmological constant problem. Such a solution does not appear to be achievable in cosmological models which do not integrate black hole formulae with quantum formulae such as the Stefan-Boltzmann law. As demonstrated herein, assuming a constant value of Lambda over the great span of cosmic time appears to have been a mistake. It appears that Einstein’s assumption of a constant, in terms of vacuum energy density, was not only a mistake for a statically balanced universe, but also a mistake for a dynamically expanding universe.
We initially look at a nonsingular-universe representation of entropy, based in part on work by Muller and Lousto. This is a gateway to bringing up information and computational steps (as defined by Seth Lloyd) as to what would be available initially due to a modified ZPE formalism. The ZPE formalism is modified according to Matt Visser’s alteration of k(maximum) ~ 1/(Planck length), with a specific initial density giving rise to an initial information content which may permit fixing the initial Planck constant, h, which is pivotal to the setting of physical law. The settings of these parameters depend upon NLED.
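The abstract invokes Seth Lloyd's count of available computational steps. As a rough, self-contained illustration of that idea (the mass and age figures below are common order-of-magnitude assumptions, not values from the abstract), Lloyd's bound N_ops ≤ 2Et/(πħ) gives:

```python
import math

# Seth Lloyd's bound on the number of elementary logical operations a
# physical system of energy E can perform in time t:  N_ops <= 2 E t / (pi * hbar).
# The mass figure below is a rough order-of-magnitude assumption for
# the observable universe, used only for illustration.
hbar = 1.054571817e-34           # reduced Planck constant, J s
c = 2.99792458e8                 # speed of light, m/s
mass_universe = 1e53             # kg (order-of-magnitude assumption)
age_universe = 13.8e9 * 3.156e7  # ~13.8 Gyr in seconds

energy = mass_universe * c**2
n_ops = 2 * energy * age_universe / (math.pi * hbar)
print(f"~10^{math.log10(n_ops):.0f} operations")  # same order as Lloyd's ~10^120
```

With these inputs the bound lands near 10^121, matching the order of magnitude Lloyd quotes for the universe's total computational capacity.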
In this article we present a model of the Hubble-Lemaître law using the notions of a transmitter (galaxy) and a receiver (the Milky Way) coupled to a model of the universe (the Slow Bang model, SB), based on a quantum approach to the evolution of space-time as well as an equation of state that retains all the infinitesimal terms. We find an explanation of the Hubble tension in H<sub>0</sub>. Indeed, we have seen that this constant depends on the transceiver pair, which can vary from the lowest observable value, from photons of the CMB (theoretical [km/s/Mpc]), to increasingly higher values depending on the earlier origin of the formation of the observed galaxy or cluster (ETG ~0.3 [Gy], ~74 [km/s/Mpc]). We have produced a theoretical table of the values of the constant according to the possible transmitter/receiver pairs in the case where these galaxies follow the Hubble flow without large disturbance. The calculated theoretical values of the constant are of the order of magnitude of all values mentioned in past studies. Subsequently, we applied the models to 9 galaxies and the Coma cluster and found that the models predict acceptable values of their distances and Hubble constant, since these galaxies mainly follow the Hubble flow rather than the effects of a galaxy cluster or a group of clusters. In conclusion, we affirm that this Hubble tension does not really exist, and it is rather the understanding of the meaning of this constant that is questioned.
The overabundance of the red and massive candidate galaxies observed by the James Webb Space Telescope (JWST) implies efficient structure formation or a large star formation efficiency at high redshift z ~ 10. In the scenario of a low or moderate star formation efficiency, because massive neutrinos tend to suppress the growth of structure of the universe, the JWST observation tightens the upper bound on the neutrino masses. Assuming Λ cold dark matter cosmology and a star formation efficiency ∈ [0.05, 0.3] (flat prior), we perform joint analyses of Planck+JWST and Planck+BAO+JWST, and obtain improved constraints ∑m<sub>ν</sub> &lt; 0.196 eV and ∑m<sub>ν</sub> &lt; 0.111 eV at 95% confidence level, respectively. Based on the above assumptions, the inverted mass ordering, which implies ∑m<sub>ν</sub> ≥ 0.1 eV, is excluded by Planck+BAO+JWST at 92.7% confidence level.
We develop a Python tool to estimate the tail distribution of the number of dark matter halos beyond a mass threshold and in a given volume in a light-cone. The code is based on the extended Press-Schechter model and is computationally efficient, typically taking a few seconds on a personal laptop for a given set of cosmological parameters. The high efficiency of the code allows a quick estimation of the tension between cosmological models and the red candidate massive galaxies released by the James Webb Space Telescope, as well as scanning the theory space with the Markov Chain Monte Carlo method. As an example application, we use the tool to study the cosmological implication of the candidate galaxies presented in Labbé et al. The standard Λ cold dark matter (ΛCDM) model is well consistent with the data if the star formation efficiency can reach ~0.3 at high redshift. For a low star formation efficiency ε ~ 0.1, the ΛCDM model is disfavored at ~2σ-3σ confidence level.
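The paper's tool is not reproduced here; a minimal sketch of the underlying idea is that, given an expected halo count above a mass threshold, the tail probability of observing at least k such halos is Poissonian. The `expected_count` form below is an arbitrary toy stand-in (an assumption) for the extended Press-Schechter prediction:

```python
import math

def poisson_tail(k: int, expected: float) -> float:
    """P(N >= k) for a Poisson-distributed halo count with mean `expected`."""
    # P(N >= k) = 1 - sum_{n < k} e^{-mu} mu^n / n!
    cdf = sum(math.exp(-expected) * expected**n / math.factorial(n)
              for n in range(k))
    return 1.0 - cdf

def expected_count(log10_mass_threshold: float) -> float:
    """Toy mean count above a mass threshold; an assumed placeholder, NOT
    the extended Press-Schechter mass function the paper's code evaluates."""
    return 10 ** (3.0 - 0.6 * (log10_mass_threshold - 10.0))

mu = expected_count(11.0)   # toy mean count above 10^11 solar masses
p = poisson_tail(2, mu)     # chance of seeing at least 2 such halos
print(mu, p)                # with this large toy mean, p is essentially 1
```

Comparing such a tail probability against the observed number of massive candidates is what quantifies the "tension" the abstract describes.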
The Multi-channel Photometric Survey Telescope (Mephisto) is a real-time, three-color photometric system designed to accurately capture the color evolution of stars and transients. This telescope system can be crucial in cosmological distance measurements of low-redshift (low-z, z ≲ 0.1) Type Ia supernovae (SNe Ia). To optimize the capabilities of this instrument, we perform a comprehensive simulation study before its official operation is scheduled to start. By considering the impact of atmospheric extinction, weather conditions, and the lunar phase at the observing site, together with the instrumental features, we simulate light curves of SNe Ia obtained by Mephisto. The best strategy in the case of SN Ia cosmology is to take the image at an exposure time of 130 s with a cadence of 3 days. In this condition, Mephisto can obtain hundreds of high-quality SNe Ia to achieve a distance measurement better than 4.5%. Given the on-time spectral classification and monitoring of the Lijiang 2.4 m Telescope at the same observatory, Mephisto, in the whole operation, can significantly enrich the well-calibrated sample of supernovae at low-z and improve the calibration accuracy of high-z SNe Ia.
The Big Bang model was first proposed in 1931 by Georges Lemaître. Lemaître and Hubble discovered a linear correlation between distances to galaxies and their redshifts. The correlation between redshifts and distances arises in all expanding models of the universe, as the cosmological redshift is commonly attributed to the stretching of the wavelengths of photons propagating through the expanding space. Fritz Zwicky suggested that the cosmological redshift could be caused by the interaction of propagating light photons with certain inherent features of the cosmos, losing a fraction of their energy. However, Zwicky did not provide any physical mechanism to support his tired-light hypothesis. In this paper, we have developed a mechanism for producing the cosmological redshift through head-on collisions between light and CMB photons. The process of repeated energy loss of visual photons through n head-on collisions with CMB photons constitutes a primary mechanism for producing the cosmological redshift z. While this process results in a steady reduction in the energy of visual photons, it also results in a continuous increase in the number of photons in the CMB. After a head-on collision with a CMB photon, the incoming light photon, with reduced energy, keeps moving on its original path without any deflection or scattering. After propagation through very large distances in intergalactic space, all light photons will tend to lose the bulk of their energy and fall into the invisible region of the spectrum. Thus, this mechanism of producing the cosmological redshift through gradual energy depletion also explains Olbers’ paradox.
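The abstract does not quote its loss formula; a minimal sketch of the repeated-loss idea, assuming each head-on collision removes a fixed fraction ε of the photon's energy (ε is an assumed illustrative parameter, not a value from the paper), gives E_n = E_0(1 − ε)^n and hence 1 + z = (1 − ε)^(−n):

```python
def redshift_after_collisions(n: int, eps: float) -> float:
    """Redshift accumulated after n collisions, each removing a fraction
    eps of the photon energy:  E_n = E_0 (1 - eps)^n, so
    1 + z = E_0 / E_n = (1 - eps)^(-n).
    (eps is an assumed parameter for illustration, not given in the abstract.)
    """
    return (1.0 - eps) ** (-n) - 1.0

# One collision halving the energy gives z = 1 exactly.
print(redshift_after_collisions(1, 0.5))          # 1.0
# Many tiny losses accumulate: ~7e5 collisions at eps = 1e-6 give z ≈ 1.01.
print(redshift_after_collisions(700_000, 1e-6))
```

The second example shows the mechanism's key feature: an arbitrarily small per-collision loss produces an arbitrarily large redshift over enough collisions, i.e. over enough propagation distance.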
This paper introduces the two Upsilon constants to the reader. Their usefulness is described with respect to acting as coupling constants between the CMB temperature and the Hubble constant. In addition, this paper summarizes the current state of quantum cosmology with respect to the Flat Space Cosmology (FSC) model. Although the FSC quantum cosmology formulae were published in 2018, they are only rearrangements and substitutions of the other assumptions into the original FSC Hubble temperature formula. In a real sense, this temperature formula was the first quantum cosmology formula developed since Hawking’s black hole temperature formula. A recent development proves that the FSC Hubble temperature formula can be derived from the Stefan-Boltzmann law. Thus, this Hubble temperature formula effectively unites some quantum developments with the general relativity model inherent in FSC. More progress towards unification is expected in the near future.
The Friedmann-Lemaître-Robertson-Walker (FLRW) metric is an exact solution of the Einstein field equations, and it describes a homogeneous, isotropic and expanding universe. The FLRW metric and the Friedmann equations form the basis of the ΛCDM model. In this article, a metric which is based on the FLRW metric and that includes a space scale factor and a newly introduced time scale factor T(t) is elaborated. The assumption is that the expansion or contraction of the dimensions of space and time in a homogeneous and isotropic universe depends on the energy density. The Christoffel symbols, Ricci tensor and Ricci scalar are derived. By evaluating the results using Einstein’s field equations and the energy-momentum tensor, a hypothetical modified cosmological model is obtained. This theoretical model provides for a cosmic inflation and the accelerated expansion of spacetime, as well as avoiding the flatness and fine-tuning problems.
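The article's metric is not reproduced in the abstract; a hedged sketch, assuming flat spatial sections and the time scale factor T(t) entering the lapse as below, yields the following nonzero Christoffel symbols:

```latex
% Assumed line element (flat spatial sections; a sketch, not the paper's exact form):
ds^{2} = -c^{2}\,T(t)^{2}\,dt^{2} + a(t)^{2}\left(dx^{2} + dy^{2} + dz^{2}\right)
% Nonzero Christoffel symbols (dot = d/dt):
\Gamma^{t}{}_{tt} = \frac{\dot{T}}{T}, \qquad
\Gamma^{t}{}_{ij} = \frac{a\,\dot{a}}{c^{2}T^{2}}\,\delta_{ij}, \qquad
\Gamma^{i}{}_{tj} = \frac{\dot{a}}{a}\,\delta^{i}{}_{j}.
% Setting T(t) = 1 recovers the familiar flat-FLRW symbols.
```

The extra Γ<sup>t</sup><sub>tt</sub> term and the 1/T² factor are exactly where the new time scale factor feeds into the Ricci tensor and hence into the modified field equations the article derives.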
This brief note brings the reader up-to-date with the recent successes of the new Haug-Tatum cosmology model. In particular, the significance of recent proof that the Stefan-Boltzmann law applies to such a model is emphasized and a rationale for this is given. Remarkably, the proposed solutions of this model have incorporated all 580 supernova redshifts in the Union2 database. Therefore, one can usefully apply this thermodynamic law in the form of a continually expanding black-body universe model. To our knowledge, no other cosmological model has achieved such high-precision observational correlation.
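The model's own formulae are not given in the abstract; as a sanity check of its thermodynamic ingredient, the Stefan-Boltzmann law applied to a black body at the measured CMB temperature gives the present radiation energy density:

```python
# Black-body radiation energy density from the Stefan-Boltzmann law:
# u = 4 * sigma * T^4 / c, evaluated at the measured CMB temperature.
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.99792458e8         # speed of light, m/s
T_cmb = 2.725            # CMB temperature, K

u = 4 * sigma * T_cmb**4 / c
print(f"{u:.2e} J/m^3")  # ~4.17e-14 J/m^3
```

This ~4×10⁻¹⁴ J/m³ figure is the standard radiation energy density of the present universe, the quantity a black-body universe model must reproduce.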
We develop a theory of cosmology which is not based on the cosmological principle. We achieve this without violating the Copernican principle. It is well known that the gravitational redshift associated with the Schwarzschild solution applied to the distant supernovae does not lead to the observed redshift-distance relationship. We show, however, that generalizations of the Schwarzschild metric, the Taub-NUT metrics, do indeed lead to the observed redshift-distance relationship and to the observed time dilation. These universes are not expanding; rather, the observed cosmological redshift is due to the gravitational redshift associated with these solutions. Time dilation in these stationary universes has the same dependency on redshift that generally has been seen as proof that space is expanding. Our theory resolves the Hubble tension.
We demonstrate that: 1) The Taub-NUT universe is finite. 2) The Taub-NUT universe is much larger than the maximum observable distance according to the standard theory of cosmology. 3) At large distances the spectral shift turns into a blueshift. 4) At large distances time dilation turns into time contraction.
We develop a cosmological model in a physical background scenario of four time and four space dimensions ((4+4)-dimensions or (4+4)-universe). We show that in this framework the (1+3)-universe is deeply connected with the (3+1)-universe. We argue that this means that in the (4+4)-universe there exists a duality relation between the (1+3)-universe and the (3+1)-universe.
In 1998, two groups of astronomers, one led by Saul Perlmutter and the other by Brian Schmidt, set out to determine the deceleration (and hence the total mass/energy) of the universe by measuring the recession speeds of Type Ia supernovae (SNe Ia), and came to an unexpected conclusion: ever since the universe was about 7 billion years old, its expansion rate has not been decelerating. Instead, the expansion rate has been speeding up. To justify this acceleration, they suggested that the universe does have a mysterious dark energy, and they revived the cosmological constant, positive this time, which is consistent with the image of an inflationary universe. To explain the observed dimming of high-redshift SNe Ia, they essentially bet on their distances being revised upwards. We consider that an accelerated expansion leads straight to a “dark energy catastrophe” (i.e., the chasm between the current cosmological vacuum density value of 10 GeV/m<sup>3</sup> and the vacuum energy density proposed by quantum field theory of ~10<sup>122</sup> GeV/m<sup>3</sup>). We suppose rather that the universe undergoes a decelerating expansion under the positive pressure of a dark energy, otherwise called a variable cosmological constant. The dark luminosity of the latter would be that of a “tired light” which has lost energy with distance. As for the low brilliance of SNe Ia, it is explained by two physical processes: the first relates to their intrinsic brightness (supposedly invariant over time), which would depend on chemical conditions that change with temporal evolution; the second would concern their apparent luminosity.
Besides the serious arguments already known, we strongly propose that their luminosity continually fades through interactions with cosmic magnetic fields, like the terrestrial PVLAS experiment, which loses many more laser photons than expected when crossing a magnetic field. This goes in the sense of a “tired light” which has lost energy with distance, and therefore of a decelerated expansion of the universe. Moreover, we propose the “centrist” principle to complete the hypothesis of the cosmological principle of homogeneity and isotropy, considered verified. Without denying the Copernican principle, it is opposed to a “spatial” theoretical construction which accelerates the world towards infinity. The centrist principle gives a “temporal” and privileged vision which tends to demonstrate the deceleration of the expansion.
Instant preheating, as given in terms of a window where adiabaticity is violated, is a completely inefficient form of particle production if we use Padmanabhan scalar potentials. This necessitates a very different mechanism for early-universe graviton production; one example is to break up the initial “mass” formed, about 10<sup>60</sup> times the Planck mass, into graviton-emitting micro black holes of about 10<sup>5</sup> grams. The mechanism is to assume that we have a different condition than the usual adiabaticity idea, which is connected with the reheating of the universe. Hence, we will be looking at an earlier primordial black hole generation for the production of gravitons.
A framework to estimate the mass of the universe from quarks is presented, taking spacetime into account. This is a link currently missing in our understanding of physics/science. The focus on mass-energy balance is aimed at finding a solution to the Cosmological Constant (CC) problem by attempting to quantize space-time and linking the vacuum energy density at the beginning of the universe and the current energy density. The CC problem is the famous disagreement of approximately 120 orders of magnitude between the theoretical energy density at the Planck scale and the indirectly measured cosmological energy density. The same framework is also used to determine the masses of the proton and neutron from first principles. The only input is the up quark (u-quark) mass, or precisely, the 1st-generation quarks. The method assumes that the u-quark is twice as massive as the down quark (d-quark). The gap equation is the starting point, introduced in its simplest form. The main idea is to assume that all the particles and fields in the unit universe are divided into quarks and everything else. Everything else means all fields and forces present in the universe. It is assumed that everything else can be “quark-quantized”; that is, that it can be quantized into similarly sized u-quarks and/or their associated interactions and relations. The results are surprisingly close to measured and known values. The proton structure and mass composition are also analysed, showing that the proton likely has more than 3 quarks and more than 3 valence quarks. It is also possible to estimate the percentages of dark matter, dark energy, ordinary matter, and anti-matter.
Finally, the cosmological constant problem or puzzle is resolved by connecting the vacuum energy density of Quantum Field Theory (5.1E+96 kg/m<sup>3</sup>) and the energy density of General Relativity (1.04E−26 kg/m<sup>3</sup>). Upon maturation, this framework can serve as a bridging platform between Quantum Field Theory and General Relativity. Other aspects of nature’s field theories can be successfully ported to the platform. It also increases the chances of solving some of the unanswered questions in physics.
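The two densities quoted in the abstract make the size of the discrepancy easy to verify; a one-line check using only the abstract's own figures:

```python
import math

# Vacuum energy densities quoted in the abstract:
rho_qft = 5.1e96    # kg/m^3, Quantum Field Theory (Planck-scale) estimate
rho_gr  = 1.04e-26  # kg/m^3, General Relativity (observed cosmological) value

orders = math.log10(rho_qft / rho_gr)
print(round(orders, 1))  # 122.7 -- the famous "~120 orders of magnitude" gap
```

The ~122.7 decades separating the two values is the "approximately 120 orders of magnitude" disagreement the abstract sets out to resolve.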
By means of dimensional analysis, a spherically symmetric universe with a mass M = c<sup>3</sup>/(2HG) and a radius equal to c/H is considered, where H is the Hubble constant, c the speed of light and G the Newtonian gravitational constant. The density corresponding to this mass is equal to the critical density ρ<sub>cr</sub> = 3H<sup>2</sup>/(8πG). This universe evolves according to a Bondi-Gold-Hoyle scenario, with continuous creation of matter at a rate such as to maintain, during the expansion, a density always equal to the critical density. Using the Margolus-Levitin theorem and Landauer’s principle, an entropy is associated with this universe, yielding a formula having the same structure as the Bekenstein-Hawking formula for the entropy of a black hole. Furthermore, a time-dependent cosmological constant Λ, a function of the Hubble constant and the speed of light, is proposed.
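The claimed equality of the sphere's mean density with the critical density can be checked directly; the Hubble constant below is an assumed fiducial value (70 km/s/Mpc), though the identity holds exactly for any H:

```python
import math

# Verify that a sphere of mass M = c^3 / (2 H G) and radius R = c / H has
# mean density equal to the critical density rho_cr = 3 H^2 / (8 pi G).
G = 6.67430e-11             # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8            # speed of light, m/s
H = 70 * 1000 / 3.0857e22   # assumed 70 km/s/Mpc, converted to 1/s

M = c**3 / (2 * H * G)
R = c / H
rho_sphere = M / ((4 / 3) * math.pi * R**3)
rho_cr = 3 * H**2 / (8 * math.pi * G)

print(rho_sphere / rho_cr)  # 1.0 up to floating-point rounding
```

Algebraically, M/((4/3)πR³) = [c³/(2HG)] · [3H³/(4πc³)] = 3H²/(8πG), so the match is an identity rather than a coincidence of the chosen H.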
The dependence of chaos on two parameters, the cosmological constant and the self-interaction coefficient, in the imaginary phase space for a closed Friedmann-Robertson-Walker (FRW) universe with a conformally coupled scalar field is investigated numerically, complementing the full understanding of this dependence in real phase space. It is found that Poincaré plots for values of the two parameters less than 1 are almost the same as those in the absence of the cosmological constant and self-interaction terms. For energies below the energy threshold of 0.5 for the imaginary problem without cosmological constant and self-interaction terms, an abrupt transition to chaos occurs when at least one of the two parameters is 1. However, the strength of the chaos does not increase for energies larger than the threshold. In other situations, with the two parameters larger than 1, chaos is weaker, and even disappears as the two parameters increase.
We compare two action integrals and identify the Lagrangian multiplier as setting up a constraint equation (on cosmological expansion). This is a direct result of the fourth equation of our manuscript, which unconventionally compares the action integral of General Relativity with the second derived action integral, which then permits Equation (5), a bound on the cosmological constant. What we have done is to replace the Hamber quantum gravity reference-based action integral with a result from John Klauder’s “Enhanced Quantization”. In doing so, with Padmanabhan’s treatment of the inflaton, we then initiate an explicit bound upon the cosmological constant. The other approximation is to use the inflaton results and conflate them with John Klauder’s action principle, with the idea of a potential well, generalized by Klauder, with a wall of space-time in the Pre-Planckian regime, to ask what bounds the cosmological constant prior to inflation, and to obtain an upper bound on the mass of a graviton. We conclude with a redo of a multiverse version of the Penrose cyclic conformal cosmology. Our objective is to show how a value of the rest mass of the heavy graviton is invariant from cycle to cycle; all this is possible due to Equation (4). We compare all these with the results of Reference [1] in the conclusion, while showing their relevance to early-universe production of black holes, with a volume of space producing 100 black holes of about 10^2 times the Planck mass. Initially evaluated in a space-time of about 10^3 Planck lengths in spherical extent, we assume a starting entropy of about 1000 initially.
Funding (JWST neutrino-mass constraints): supported by the National SKA Program of China (No. 2020SKA0110402), the National Natural Science Foundation of China (NSFC) under grant No. 12073088, and the National Key R&D Program of China (grant No. 2020YFC2201600).
Funding (dark matter halo tail-distribution tool): supported by the National Key R&D Program of China (grant No. 2020YFC2201600), the National Natural Science Foundation of China (NSFC, grant No. 12073088), and the National SKA Program of China (grant No. 2020SKA0110402).
Funding: supported by the National Key R&D Program of China (2021YFA1600404), the National Natural Science Foundation of China (NSFC, grant No. 12173082), science research grants from the China Manned Space Project (CMS-CSST-2021-A12), the Yunnan Province Foundation (202201AT070069), the Top-notch Young Talents Program of Yunnan Province, the Light of West China Program of the Chinese Academy of Sciences, and the International Centre of Supernovae, Yunnan Key Laboratory (202302AN360001). Funding for the LJT has been provided by the CAS and the People's Government of Yunnan Province. This work was also funded by the "Yunnan University Development Plan for World-Class University" and the "Yunnan University Development Plan for World-Class Astronomy Discipline," and obtained support from the "Science & Technology Champion Project" (202005AB160002), two "Team Projects" (the "Innovation Team," 202105AE160021, and the "Top Team," 202305AT350002), and the "Yunnan Revitalization Talent Support Program."
Abstract: The Multi-channel Photometric Survey Telescope (Mephisto) is a real-time, three-color photometric system designed to accurately capture the color evolution of stars and transients. This telescope system can be crucial in cosmological distance measurements of low-redshift (low-z, z ≲ 0.1) Type Ia supernovae (SNe Ia). To optimize the capabilities of this instrument, we perform a comprehensive simulation study before its official operation is scheduled to start. By considering the impact of atmospheric extinction, weather conditions, and the lunar phase at the observing site, together with the instrumental features, we simulate light curves of SNe Ia obtained by Mephisto. The best strategy for SN Ia cosmology is to take images at an exposure time of 130 s with a cadence of 3 days. Under these conditions, Mephisto can obtain hundreds of high-quality SNe Ia and achieve a distance measurement better than 4.5%. Given the timely spectral classification and monitoring by the Lijiang 2.4 m Telescope at the same observatory, Mephisto, over its whole operation, can significantly enrich the well-calibrated sample of supernovae at low z and improve the calibration accuracy of high-z SNe Ia.
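A distance precision can be translated into a distance-modulus scatter through the definition d = 10^(μ/5 + 1) pc, which gives σ_d/d = (ln 10 / 5)·σ_μ. A minimal sketch of that conversion (the 0.10 mag scatter is an illustrative value chosen to reproduce the ~4.5% figure, not a number stated in the abstract):

```python
import math

def distance_precision(sigma_mu: float) -> float:
    """Fractional luminosity-distance error implied by a distance-modulus
    scatter sigma_mu (mag), via d = 10**(mu/5 + 1) pc."""
    return math.log(10) / 5 * sigma_mu

# Illustrative: a ~0.10 mag per-object scatter corresponds to roughly
# the 4.5% distance precision quoted in the abstract.
print(f"sigma_mu = 0.10 mag -> sigma_d/d = {distance_precision(0.10):.3f}")
```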
Abstract: The Big Bang model was first proposed in 1931 by Georges Lemaître. Lemaître and Hubble discovered a linear correlation between distances to galaxies and their redshifts. The correlation between redshifts and distances arises in all expanding models of the universe, as the cosmological redshift is commonly attributed to the stretching of the wavelengths of photons propagating through expanding space. Fritz Zwicky suggested that the cosmological redshift could instead be caused by propagating light photons interacting with certain inherent features of the cosmos and losing a fraction of their energy. However, Zwicky did not provide any physical mechanism to support his tired-light hypothesis. In this paper, we develop a mechanism that produces the cosmological redshift through head-on collisions between light photons and CMB photons. The process of repeated energy loss of visual photons through n head-on collisions with CMB photons constitutes a primary mechanism for producing the cosmological redshift z. While this process steadily reduces the energy of visual photons, it also continuously increases the number of photons in the CMB. After a head-on collision with a CMB photon, the incoming light photon, with reduced energy, keeps moving on its original path without deflection or scattering of any kind. After propagation over very large distances in intergalactic space, all light photons tend to lose the bulk of their energy and fall into the invisible region of the spectrum. Thus, this mechanism of producing the cosmological redshift through gradual energy depletion also explains Olbers's paradox.
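The arithmetic of repeated energy loss is simple to sketch: if each collision removes a fixed fraction f of the photon's energy, then after n collisions E_n = E_0 (1−f)^n, and the redshift is 1 + z = E_0/E_n = (1−f)^(−n). The per-collision loss fraction and collision count below are illustrative placeholders, not values derived in the paper:

```python
def redshift_after_collisions(n: int, loss_fraction: float) -> float:
    """Redshift accumulated after n head-on collisions, each removing a
    fixed fraction f of the photon energy: 1 + z = (1 - f)**(-n)."""
    return (1.0 - loss_fraction) ** (-n) - 1.0

# Illustrative: a tiny per-collision loss compounded over many collisions
# produces a cosmological-scale redshift (approaches z = e - 1 ~ 1.718
# since n * f = 1 here).
z = redshift_after_collisions(n=10_000, loss_fraction=1e-4)
print(f"z after 10^4 collisions at f = 1e-4: {z:.4f}")
```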
Abstract: This paper introduces the two Upsilon constants to the reader. Their usefulness as coupling constants between the CMB temperature and the Hubble constant is described. In addition, this paper summarizes the current state of quantum cosmology with respect to the Flat Space Cosmology (FSC) model. Although the FSC quantum cosmology formulae were published in 2018, they are only rearrangements and substitutions of the other assumptions into the original FSC Hubble temperature formula. In a real sense, this temperature formula was the first quantum cosmology formula developed since Hawking's black hole temperature formula. A recent development in the last month proves that the FSC Hubble temperature formula can be derived from the Stefan-Boltzmann law. Thus, this Hubble temperature formula effectively unites some quantum developments with the general relativity model inherent in FSC. More progress towards unification is expected in the near future.
Abstract: The Friedmann-Lemaître-Robertson-Walker (FLRW) metric is an exact solution of the Einstein field equations that describes a homogeneous, isotropic and expanding universe. The FLRW metric and the Friedmann equations form the basis of the ΛCDM model. In this article, a metric based on the FLRW metric, which includes the space scale factor and a newly introduced time scale factor T(t), is elaborated. The assumption is that the expansion or contraction of the dimensions of space and time in a homogeneous and isotropic universe depends on the energy density. The Christoffel symbols, Ricci tensor and Ricci scalar are derived. By evaluating the results using Einstein's field equations and the energy-momentum tensor, a hypothetical modified cosmological model is obtained. This theoretical model provides for cosmic inflation and the accelerated expansion of spacetime, and avoids the flatness and fine-tuning problems.
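The Christoffel-symbol and Ricci computation described above can be checked symbolically. A sketch for the ordinary flat FLRW case only (k = 0, c = 1, signature −+++), without the modified model's time scale factor T(t), verifying the textbook result R = 6(ä/a + (ȧ/a)²):

```python
import sympy as sp

# Flat FLRW metric ds^2 = -dt^2 + a(t)^2 (dx^2 + dy^2 + dz^2).
t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
a = sp.Function('a')(t)
g = sp.diag(-1, a**2, a**2, a**2)
ginv = g.inv()

def christoffel(l, m, n):
    # Gamma^l_{mn} = (1/2) g^{ls} (d_m g_{sn} + d_n g_{sm} - d_s g_{mn})
    return sp.Rational(1, 2) * sum(
        ginv[l, s] * (sp.diff(g[s, n], coords[m]) +
                      sp.diff(g[s, m], coords[n]) -
                      sp.diff(g[m, n], coords[s]))
        for s in range(4))

Gamma = [[[sp.simplify(christoffel(l, m, n)) for n in range(4)]
          for m in range(4)] for l in range(4)]

def ricci(m, n):
    # R_{mn} = d_l Gamma^l_{mn} - d_n Gamma^l_{ml}
    #          + Gamma^l_{ls} Gamma^s_{mn} - Gamma^l_{ns} Gamma^s_{ml}
    term = sum(sp.diff(Gamma[l][m][n], coords[l]) -
               sp.diff(Gamma[l][m][l], coords[n]) for l in range(4))
    term += sum(Gamma[l][l][s] * Gamma[s][m][n] -
                Gamma[l][n][s] * Gamma[s][m][l]
                for l in range(4) for s in range(4))
    return sp.simplify(term)

R = sp.simplify(sum(ginv[m, n] * ricci(m, n)
                    for m in range(4) for n in range(4)))
expected = 6 * (sp.diff(a, t, 2) / a + (sp.diff(a, t) / a) ** 2)
print(sp.simplify(R - expected))  # 0
```

The modified metric of the article would enter by replacing the −1 component with −T(t)² in `g` and re-running the same machinery.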
Abstract: This brief note brings the reader up to date with the recent successes of the new Haug-Tatum cosmology model. In particular, the significance of the recent proof that the Stefan-Boltzmann law applies to such a model is emphasized, and a rationale for this is given. Remarkably, the proposed solutions of this model have incorporated all 580 supernova redshifts in the Union2 database. Therefore, one can usefully apply this thermodynamic law in the form of a continually expanding black-body universe model. To our knowledge, no other cosmological model has achieved such high-precision observational correlation.
Abstract: We develop a theory of cosmology which is not based on the cosmological principle. We achieve this without violating the Copernican principle. It is well known that the gravitational redshift associated with the Schwarzschild solution applied to distant supernovae does not lead to the observed redshift-distance relationship. We show, however, that generalizations of the Schwarzschild metric, the Taub-NUT metrics, do indeed lead to the observed redshift-distance relationship and to the observed time dilation. These universes are not expanding; rather, the observed cosmological redshift is due to the gravitational redshift associated with these solutions. Time dilation in these stationary universes has the same dependence on redshift that has generally been seen as proof that space is expanding. Our theory resolves the Hubble tension.
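The Schwarzschild gravitational redshift invoked above has a closed form: for light emitted at radius r and received at infinity, 1 + z = 1/√(1 − r_s/r) with r_s = 2GM/c². A minimal numerical sketch (the neutron-star mass and radius are illustrative, not from the paper):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def grav_redshift(mass_kg: float, r_emit_m: float) -> float:
    """Gravitational redshift of light emitted at radius r_emit and
    received at infinity in the Schwarzschild metric:
    1 + z = 1 / sqrt(1 - r_s / r), with r_s = 2GM/c^2."""
    r_s = 2 * G * mass_kg / c**2
    return 1.0 / math.sqrt(1.0 - r_s / r_emit_m) - 1.0

# Illustrative: light leaving the surface of a ~solar-mass neutron star
# (M ~ 2e30 kg, r ~ 12 km) is gravitationally redshifted by ~15%.
z = grav_redshift(2e30, 1.2e4)
print(f"z = {z:.3f}")
```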
Abstract: We demonstrate that: 1) The Taub-NUT universe is finite. 2) The Taub-NUT universe is much larger than the maximum observable distance according to the standard theory of cosmology. 3) At large distances the spectral shift turns into a blueshift. 4) At large distances time dilation turns into time contraction.
Abstract: We develop a cosmological model in a physical background scenario of four time and four space dimensions ((4+4)-dimensions or (4+4)-universe). We show that in this framework the (1+3)-universe is deeply connected with the (3+1)-universe. We argue that this implies a duality relation in the (4+4)-universe between the (1+3)-universe and the (3+1)-universe.
Abstract: In 1998, two groups of astronomers, one led by Saul Perlmutter and the other by Brian Schmidt, set out to determine the deceleration (and hence the total mass/energy) of the universe by measuring the recession speeds of type Ia supernovae (SN1a), and came to an unexpected conclusion: ever since the universe was about 7 billion years old, its expansion rate has not been decelerating. Instead, the expansion rate has been speeding up. To explain this acceleration, they suggested that the universe contains a mysterious dark energy, and they brought the cosmological constant back from oblivion, positive this time, consistent with the picture of an inflationary universe. To explain the observed dimming of high-redshift SN1a, they relied essentially on revising their distances upwards. We consider that an accelerated expansion leads straight to a "dark energy catastrophe" (i.e., the chasm between the current cosmological vacuum density value of 10 GeV/m<sup>3</sup> and the vacuum energy density of ~10<sup>122</sup> GeV/m<sup>3</sup> proposed by quantum field theory). We suppose instead that the universe undergoes a decelerating expansion under the positive pressure of a dark energy, otherwise called a variable cosmological constant. The dark luminosity of the latter would be that of a "tired light" which has lost energy with distance. As for the low brightness of SN1a, it is explained by two physical processes. The first relates to their intrinsic brightness, supposedly invariant over time, which would in fact depend on chemical conditions that change with temporal evolution; the second concerns their apparent luminosity. Besides the serious arguments already known, we propose that their luminosity continually fades through interactions with cosmic magnetic fields, much as the terrestrial PVLAS experiment loses many more laser photons than expected when crossing a magnetic field.
This again points to a "tired light" that has lost energy with distance, and therefore to a decelerated expansion of the universe. Moreover, we propose the "centrist" principle to complement the cosmological principle of homogeneity and isotropy, which is considered verified. Without denying the Copernican principle, it stands opposed to a "spatial" theoretical construction that accelerates the world towards infinity. The centrist principle gives a "temporal" and privileged vision which tends to demonstrate the deceleration of the expansion.
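In the simplest tired-light picture, photon energy decays exponentially with path length, dE/dx = −(H₀/c)E, giving 1 + z = exp(H₀ d/c) and hence d = (c/H₀) ln(1 + z). A sketch comparing this to the linear Hubble law (H₀ = 70 km/s/Mpc is an illustrative value, and this generic exponential law is a standard textbook form, not necessarily the specific mechanism of the paper):

```python
import math

H0 = 70.0     # Hubble constant, km/s/Mpc (illustrative)
c = 2.998e5   # speed of light, km/s

def tired_light_distance(z: float) -> float:
    """Distance (Mpc) in a simple tired-light law where photon energy
    decays exponentially with path length: 1 + z = exp(H0 * d / c)."""
    return (c / H0) * math.log(1.0 + z)

# At low z this reduces to the linear Hubble law d = c*z/H0; at high z
# the tired-light distance falls below the linear extrapolation.
for z in (0.01, 0.5, 1.0):
    linear = c * z / H0
    print(f"z = {z}: tired-light d = {tired_light_distance(z):7.1f} Mpc, "
          f"linear d = {linear:7.1f} Mpc")
```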
Abstract: Instant preheating, framed in terms of a window where adiabaticity is violated, is a completely inefficient form of particle production if we use Padmanabhan scalar potentials. This necessitates a very different mechanism for early-universe graviton production: for example, breaking up the initial "mass" of about 10<sup>60</sup> Planck masses into graviton-emitting micro black holes of 10<sup>5</sup> grams each. The mechanism is to assume a condition different from the usual adiabaticity idea connected with the reheating of the universe. Hence, we will be looking at an earlier primordial black hole generation as a source of gravitons.
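The relevance of 10⁵ gram black holes as radiation sources can be illustrated with the standard Hawking evaporation lifetime estimate t ≈ 5120πG²M³/(ħc⁴) (the photons-only, geometric-optics approximation; extra channels such as gravitons shorten it further). This generic estimate is a sketch, not the paper's own calculation:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s

def evaporation_time_s(mass_kg: float) -> float:
    """Hawking evaporation lifetime t = 5120*pi*G^2*M^3 / (hbar*c^4),
    in the photons-only approximation."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

# A 10^5 g = 0.1 kg micro black hole, as in the abstract, radiates away
# essentially instantly on any macroscopic timescale (~1e-19 s).
t = evaporation_time_s(0.1)
print(f"lifetime of a 1e5 g black hole: {t:.2e} s")
```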
Abstract: A framework to estimate the mass of the universe from quarks is presented, taking spacetime into account. This is a link currently missing in our understanding of physics. The focus on mass-energy balance is aimed at finding a solution to the Cosmological Constant (CC) problem by attempting to quantize spacetime and linking the vacuum energy density at the beginning of the universe to the current energy density. The CC problem is the famous disagreement of approximately 120 orders of magnitude between the theoretical energy density at the Planck scale and the indirectly measured cosmological energy density. The same framework is also used to determine the masses of the proton and neutron from first principles. The only input is the up quark (u-quark) mass, or more precisely, the first-generation quark masses. The method assumes that the u-quark is twice as massive as the down quark (d-quark). The gap equation is the starting point, introduced in its simplest form. The main idea is to assume that all the particles and fields in the unit universe are divided into quarks and everything else, where everything else means all other fields and forces present in the universe. It is assumed that everything else can be "quark-quantized"; that is, quantized into similarly sized u-quarks and/or their associated interactions and relations. The results are surprisingly close to the measured and known values. The proton structure and mass composition are also analysed, showing that it likely has more than 3 quarks and more than 3 valence quarks. It is also possible to estimate the percentages of dark matter, dark energy, ordinary matter, and antimatter. Finally, the cosmological constant problem is resolved by connecting the vacuum energy density of Quantum Field Theory (5.1E+96 kg/m<sup>3</sup>) and the energy density of General Relativity (1.04E−26 kg/m<sup>3</sup>).
Upon maturation, this framework can serve as a bridging platform between Quantum Field Theory and General Relativity. Other aspects of nature's field theories can be ported to the platform. It also increases the chances of solving some of the unanswered questions in physics.
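The "approximately 120 orders of magnitude" figure follows directly from the two densities quoted in the abstract. A one-line check:

```python
import math

# The two densities quoted in the abstract: the QFT (Planck-scale) vacuum
# energy density and the observed (GR) cosmological energy density.
rho_qft = 5.1e96    # kg/m^3
rho_obs = 1.04e-26  # kg/m^3

orders = math.log10(rho_qft / rho_obs)
print(f"discrepancy: {orders:.1f} orders of magnitude")
```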
Abstract: By means of dimensional analysis, a spherically symmetric universe with mass M = c<sup>3</sup>/(2HG) and radius equal to c/H is considered, where H is the Hubble constant, c the speed of light and G the Newtonian gravitational constant. The density corresponding to this mass equals the critical density ρ<sub>cr</sub> = 3H<sup>2</sup>/(8πG). This universe evolves according to a Bondi-Gold-Hoyle scenario, with continuous creation of matter at a rate such as to keep the density at the critical value throughout the expansion. Using the Margolus-Levitin theorem and Landauer's principle, an entropy is associated with this universe, yielding a formula with the same structure as the Bekenstein-Hawking formula for the entropy of a black hole. Furthermore, a time-dependent cosmological constant Λ, a function of the Hubble constant and the speed of light, is proposed.
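The claim that M = c³/(2HG) inside radius c/H gives exactly the critical density can be verified directly: (c³/2HG) / (4πc³/3H³) = 3H²/(8πG). A numerical check (H ≈ 2.27×10⁻¹⁸ s⁻¹, roughly 70 km/s/Mpc, is an illustrative value):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8    # speed of light, m/s
H = 2.27e-18   # Hubble constant, 1/s (~70 km/s/Mpc, illustrative)

M = c**3 / (2 * H * G)                 # mass of the model universe
R = c / H                              # its radius
rho = M / (4 / 3 * math.pi * R**3)     # resulting mean density
rho_cr = 3 * H**2 / (8 * math.pi * G)  # critical density

# Algebraically rho == rho_cr: the c^3 factors cancel and
# (1/2HG) * (3H^3/4pi) = 3H^2/(8piG).
print(f"rho    = {rho:.3e} kg/m^3")
print(f"rho_cr = {rho_cr:.3e} kg/m^3")
```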
Funding: supported by the Natural Science Foundation of China (Grant No. 10873007), the Science Foundation of Jiangxi Education Bureau (GJJ09072), and the Program for Innovative Research Teams of Nanchang University.
Abstract: The dependence of chaos on two parameters, the cosmological constant and the self-interacting coefficient, in the imaginary phase space of a closed Friedmann-Robertson-Walker (FRW) universe with a conformally coupled scalar field is investigated numerically, complementing the full understanding of this dependence in real phase space. It is found that Poincaré plots for the two parameters less than 1 are almost the same as those in the absence of the cosmological constant and self-interacting terms. For energies below the energy threshold of 0.5 for the imaginary problem without cosmological constant and self-interacting terms, an abrupt transition to chaos occurs when at least one of the two parameters is 1. However, the strength of the chaos does not increase for energies larger than the threshold. For the two parameters larger than 1, chaos is weaker, and even disappears as the two parameters increase.
Abstract: We compare two action integrals and identify the Lagrangian multiplier as setting up a constraint equation on cosmological expansion. This is a direct result of the fourth equation of our manuscript, which unconventionally compares the action integral of general relativity with the second derived action integral, permitting Equation (5), a bound on the cosmological constant. What we have done is replace the Hamber quantum gravity reference-based action integral with a result from John Klauder's "Enhanced Quantization". In doing so, with Padmanabhan's treatment of the inflaton, we initiate an explicit bound upon the cosmological constant. The other approximation is to use the inflaton results and combine them with John Klauder's action principle: given the idea of a potential well, generalized by Klauder, with a wall of spacetime in the pre-Planckian regime, we ask what bounds the cosmological constant prior to inflation, and obtain an upper bound on the mass of a graviton. We conclude with a redo of a multiverse version of the Penrose cyclic conformal cosmology. Our objective is to show how a value of the rest mass of the heavy graviton is invariant from cycle to cycle, which is possible due to Equation (4). We compare all these with the results of Reference [1] in the conclusion, while showing their relevance to early-universe production of black holes, in a volume of space producing 100 black holes of about 10^2 times the Planck mass each, initially evaluated in a spacetime of about 10^3 Planck lengths in spherical extent, with an assumed starting entropy of about 1000.