In 1998, two groups of astronomers, one led by Saul Perlmutter and the other by Brian Schmidt, set out to determine the deceleration, and hence the total mass/energy, of the universe by measuring the recession speeds of type Ia supernovae (SN Ia), and came to an unexpected conclusion: ever since the universe was about 7 billion years old, its expansion rate has not been decelerating; instead, the expansion has been speeding up. To account for this acceleration, they proposed that the universe contains a mysterious dark energy, and they revived the cosmological constant, positive this time, consistent with the picture of an inflationary universe. To explain the observed dimming of high-redshift SN Ia, they relied essentially on revising their distances upwards. We consider that an accelerated expansion leads directly to a "dark energy catastrophe" (i.e., the chasm between the current cosmological vacuum density value of ~10 GeV/m<sup>3</sup> and the vacuum energy density of ~10<sup>122</sup> GeV/m<sup>3</sup> proposed by quantum field theory). We suppose instead that the universe undergoes a decelerating expansion under the positive pressure of a dark energy, otherwise called a variable cosmological constant. The dark luminosity of the latter would be that of a "tired light" that has lost energy with distance. As for the low brightness of SN Ia, it is explained by two physical processes: the first relates to their intrinsic brightness, supposed not to vary over time, which would in fact depend on chemical conditions that change with temporal evolution; the second concerns their apparent luminosity.
Besides the serious arguments already known, we propose that their luminosity continually fades through interactions with cosmic magnetic fields, as in the terrestrial PVLAS experiment, which loses far more laser photons than expected when crossing a magnetic field. This supports a "tired light" that has lost energy with distance, and therefore a decelerated expansion of the universe. Moreover, we propose a "centrist" principle to complete the cosmological principle of homogeneity and isotropy, considered verified. Without denying the Copernican principle, it opposes a "spatial" theoretical construction that accelerates the world towards infinity. The centrist principle gives a "temporal" and privileged vision that tends to demonstrate the deceleration of the expansion.
Quantum uncertainty relations constrain the precision of measurements across multiple non-commuting quantum mechanical observables. Here, we introduce the concept of optimal observable sets and define the tightest uncertainty constants to accurately describe these measurement uncertainties. For any quantum state, we establish optimal sets of three observables for both the product and summation forms of uncertainty relations, and analytically derive the corresponding tightest uncertainty constants. We demonstrate that the optimality of these sets remains consistent regardless of the form of the uncertainty relation. Furthermore, the existence of the tightest constants excludes the validity of standard real quantum mechanics, underscoring the essential role of complex numbers in this field. Additionally, our findings resolve the conjecture posed in [Phys. Rev. Lett. 118, 180402 (2017)], offering novel insights and potential applications in understanding preparation uncertainties.
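The constraint this abstract refers to can be illustrated by a minimal numerical check of the textbook Robertson relation, Δ(A)·Δ(B) ≥ |⟨[A, B]⟩|/2, for two non-commuting Pauli observables. This is a generic sketch of the kind of uncertainty relation being tightened, not the paper's three-observable construction; all choices of state and observables are illustrative.

```python
import numpy as np

# Robertson relation Delta(A)*Delta(B) >= |<[A, B]>| / 2,
# checked for the non-commuting Pauli observables sigma_x, sigma_y
# in the computational basis state |0>.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([1.0, 0.0], dtype=complex)        # state |0>

def expect(op, state):
    """Expectation value <state|op|state> (real for Hermitian op)."""
    return np.vdot(state, op @ state).real

def spread(op, state):
    """Standard deviation Delta(op) in the given state."""
    return np.sqrt(expect(op @ op, state) - expect(op, state) ** 2)

lhs = spread(sx, psi) * spread(sy, psi)
comm = sx @ sy - sy @ sx                         # equals 2i*sigma_z
rhs = abs(np.vdot(psi, comm @ psi)) / 2.0
print(f"LHS = {lhs:.3f}, RHS = {rhs:.3f}")
```

For |0⟩ the bound is saturated (both sides equal 1), which is exactly the kind of tightness question the optimal-set construction addresses.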
The gravitational constant G is a basic quantity in physics and, despite its relative imprecision, appears in many formulas, for example in the formulas of the Planck units. The "relative inaccuracy" lies in the fact that each measurement gives a different value, depending on where and with which device the measurement is taken. Ultimately, a mean value was formed and agreed upon as the official value used in all calculations. In an effort to explore the reason for the inaccuracy of this quantity, some formulas containing G were configured so that the respective quantity assumed the value 1. The gravitational constant thus modified was also used in the other Planck equations instead of the conventional G. It turned out that the new values were all equivalent to each other, and that they were all represented by powers of the speed of light; G was therefore no longer needed. Just like the famous mass/energy equivalence E = m * c<sup>2</sup>, similar formulas emerged, e.g. mass/momentum = m * c, mass/velocity = m * c<sup>2</sup>, and so on. This article takes up the idea from the article by Weber [1], who describes the gravitational constant as a variable (G<sub>var</sub>) and gives some reasons for this. Further reasons are given and computed in the present paper. Eleven Planck units are set iteratively using the variable G<sub>var</sub>, so that the value of one unit equals 1 in each case. If all other units are based on the G<sub>var</sub> determined in this way, a matrix of values is created that can be regarded both as conversion factors and as equivalence relationships.
It is astonishing, but not surprising, that the equivalence relation E = m * c<sup>2</sup> is one of these results. All formulas for these equivalence relationships work with the vacuum speed of light c and a new constant K; G, whether as a variable or as a constant, no longer appears in these formulas. The new aspect of this theory is that the gravitational constant is no longer needed, and if it no longer exists, it can no longer cause any difficulties. The example of the Planck units shows this fact very clearly. This is a radical break with current views. It is also interesting that the "magic" number 137 can be calculated from the distances between the values of the matrix. In addition, a similar number can be calculated from the distances between the Planck units. This number is 131 and differs from 137 by 4.14 percent. This difference has certainly often led to confusion, for example when measuring the fine-structure constant.
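For reference, the conventional (constant-G) Planck units that the abstract's iterative G<sub>var</sub> procedure departs from are computed below from the standard definitions l_P = √(ħG/c³), m_P = √(ħc/G), t_P = l_P/c, using CODATA values. This is the textbook baseline, not the paper's modified scheme.

```python
import math

# Conventional Planck units from the standard constant-G definitions.
c = 299792458.0          # speed of light, m/s (exact SI value)
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2 (CODATA)
hbar = 1.054571817e-34   # reduced Planck constant, J*s

l_P = math.sqrt(hbar * G / c**3)   # Planck length, m
m_P = math.sqrt(hbar * c / G)      # Planck mass, kg
t_P = l_P / c                      # Planck time, s

print(f"l_P = {l_P:.4e} m")
print(f"m_P = {m_P:.4e} kg")
print(f"t_P = {t_P:.4e} s")
```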
The relative dielectric constant is an important physical factor in the theory of microwave remote sensing and electromagnetic transmission. This note reports the results of measuring the relative dielectric constant of 197 rock samples. The regular pattern of the relative dielectric constant varying with the microwave spectrum is revealed, and the relative dielectric constants, correlated with the type, density, structure and chemical composition of the rock, are discussed.
Abstract: The purpose of this work is to prove that only by applying a theoretically sound information approach to developing a model for measuring the Boltzmann constant can one justify and calculate the value of the required relative uncertainty. A dimensionless parameter (comparative uncertainty) is proposed as a universal metric for comparing experimental measurements of the Boltzmann constant with simulated data. Examples are given of applying the proposed method to calculating the relative uncertainty in measuring the Boltzmann constant with an acoustic gas thermometer, a dielectric constant gas thermometer, a Johnson noise thermometer and a Doppler broadening thermometer. The proposed approach is theoretically justified and free of the shortcomings inherent in the CODATA concept: a statistically significant trend, a cumulative consensus value, and statistical control. We have tried to show how a mathematical-expert formalism can be replaced by a simple, theoretically grounded postulate on the use of information theory in measurements.
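The comparative uncertainty the abstract introduces is a dimensionless ratio; a minimal sketch of using it to rank measurement methods is shown below. The formula ε = Δ/S (absolute uncertainty over the observation interval) follows Brillouin's definition as described in the related abstracts here; all numeric values are hypothetical placeholders, not data from the paper.

```python
# Comparative uncertainty as a dimensionless metric: eps = Delta / S,
# where Delta is the absolute measurement uncertainty and S is the
# interval over which the measured quantity is observed.
# All numbers below are hypothetical, for illustration only.

def comparative_uncertainty(delta: float, interval: float) -> float:
    """Return the ratio of absolute uncertainty to observation interval."""
    if interval <= 0:
        raise ValueError("observation interval must be positive")
    return delta / interval

# Hypothetical absolute uncertainties (J/K) for four thermometry methods,
# compared over one common hypothetical observation interval.
interval = 1.0e-27
methods = {
    "acoustic gas thermometer": 2.0e-29,
    "dielectric constant gas thermometer": 4.0e-29,
    "Johnson noise thermometer": 6.0e-29,
    "Doppler broadening thermometer": 2.4e-28,
}
for name, delta in methods.items():
    print(f"{name}: eps = {comparative_uncertainty(delta, interval):.3f}")
```

Because ε is dimensionless, results obtained with entirely different apparatus become directly comparable, which is the point of the metric.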
Funding: Supported by the NNSFC (20737001, 20977046) and the NSF of Zhejiang Province (2008Y507280).
Abstract: The thermodynamic properties of xanthone (XTH) and 135 polybrominated xanthones (PBXTHs) in the standard state have been calculated at the B3LYP/6-31G* level using the Gaussian 03 program. Isodesmic reactions were designed to calculate the standard enthalpy of formation (ΔfHθ) and standard free energy of formation (ΔfGθ) of the PBXTH congeners. The relations of these thermodynamic parameters to the number and position of Br atom substitution (NPBS) were discussed, and high correlations were found between the thermodynamic parameters (entropy (Sθ), ΔfHθ and ΔfGθ) and the NPBS. According to the relative magnitudes of their ΔfGθ values, the relative stability order of the PBXTH congeners was theoretically proposed, and the relative rate constants of their formation reactions were calculated. Moreover, the values of the molar heat capacity at constant pressure (Cp,m) from 200 to 1000 K were also calculated for the PBXTH congeners, and their temperature dependence was obtained, showing very good relationships between Cp,m and temperature (T, T^1 and T^2) for almost all PBXTH congeners.
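A temperature-dependence relation of the polynomial form implied above can be obtained by least-squares fitting Cp,m = a + b·T + c·T² over the 200–1000 K range. The sketch below uses synthetic heat-capacity values, not the paper's computed data, purely to show the fitting step.

```python
import numpy as np

# Fit C_p,m = a + b*T + c*T^2 over 200-1000 K.
# The heat-capacity values below are synthetic, generated from known
# coefficients so the fit can be checked against them.
T = np.linspace(200.0, 1000.0, 9)            # temperatures, K
Cp = 150.0 + 0.30 * T - 1.0e-4 * T**2        # J/(mol*K), synthetic data

# np.polyfit returns coefficients with the highest power first.
c, b, a = np.polyfit(T, Cp, deg=2)
print(f"a = {a:.2f}, b = {b:.4f}, c = {c:.2e}")
```

Since the synthetic data are exactly quadratic, the fit recovers the generating coefficients; with real computed Cp,m values the residuals would quantify how good the (T, T²) relationship is.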
Funding: The authors thank the Esfordi phosphate mine and Shahrood University of Technology for their support during this research.
Abstract: This work studied the relative floatability of phosphate flotation by means of kinetic analysis. Relative floatability is important for determining how selectively the phosphate is separated from its impurities. The effects of pulp pH, solid content, reagent dosages (depressant, collector and co-collector) and conditioning time on the ratio of the modified rate constant of phosphate to the modified rate constant of iron (the relative floatability) were investigated. The results showed that a large dosage of depressant combined with a low dosage of collector resulted in better relative floatability. Increasing the co-collector dosage, conditioning time and pH increased the relative floatability up to a certain value and thereafter diminished it. Meanwhile, the results indicated that increasing the solid concentration increased the relative floatability over the range investigated. It was also found that the maximum relative floatability (16.05) was obtained at pulp pH 9.32, solid percentage 30, depressant dosage 440 g/t, collector dosage 560 g/t, co-collector dosage 84.63 g/t and conditioning time 9.43 min.
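The ratio described above can be sketched with the common modified first-order flotation model, R(t) = R∞·(1 − e^(−kt)), where the modified rate constant K_m = R∞·k folds ultimate recovery and flotation speed into one number. The parameter values below are hypothetical illustrations, not the paper's measured kinetics.

```python
import math

# Modified first-order flotation kinetics: R(t) = R_inf * (1 - exp(-k*t)).
# The modified rate constant K_m = R_inf * k; relative floatability is
# K_m(phosphate) / K_m(iron). All parameter values are hypothetical.

def recovery(t: float, r_inf: float, k: float) -> float:
    """Cumulative recovery at flotation time t (first-order model)."""
    return r_inf * (1.0 - math.exp(-k * t))

def modified_rate_constant(r_inf: float, k: float) -> float:
    """Combine ultimate recovery and rate into one selectivity measure."""
    return r_inf * k

k_phos = modified_rate_constant(r_inf=0.90, k=1.2)   # phosphate (hypothetical)
k_iron = modified_rate_constant(r_inf=0.30, k=0.4)   # iron gangue (hypothetical)
relative_floatability = k_phos / k_iron
print(f"relative floatability = {relative_floatability:.2f}")
```

A higher ratio means the phosphate floats both faster and more completely than the iron-bearing gangue, which is what the reagent and pH optimisation above maximises.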
Funding: Project partially supported by the Research Grant Council of Hong Kong, China (Grant No. RGC 660207) and the Macro-Science Program, Hong Kong University of Science and Technology, China (Grant No. DCC 00/01.SC01).
Abstract: The discovery of the Planck relation is generally regarded as the starting point of quantum physics, and Planck's constant h is now regarded as one of the most important universal constants. The physical nature of h, however, has not been well understood. It was originally introduced as a fitting constant to explain black-body radiation. Although Planck had proposed a theoretical justification of h, he was never satisfied with it. To address this outstanding problem, we use Maxwell theory to directly calculate the energy and momentum of a radiation wave packet. We find that the energy of the wave packet is indeed proportional to its oscillation frequency. This allows us to derive the value of Planck's constant. Furthermore, we show that the emission and transmission of a photon follow the all-or-none principle. The "strength" of the wave packet can be characterized by ζ, which represents the integrated strength of the vector potential along a transverse axis. We reason that ζ should have a fixed cut-off value for all photons. Our results suggest that a wave packet can behave like a particle. This offers a simple explanation for the recent satellite observations that the cosmic microwave background follows closely the black-body radiation predicted by Planck's law.
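The proportionality between wave-packet energy and oscillation frequency is the Planck relation E = h·f; a short worked example for a visible-light photon makes the scale concrete.

```python
# Planck relation E = h*f for a single photon.
h = 6.62607015e-34        # Planck constant, J*s (exact SI value)
c = 299792458.0           # speed of light, m/s (exact SI value)

wavelength = 500e-9       # green light, m
f = c / wavelength        # oscillation frequency, Hz
E = h * f                 # photon energy, J
print(f"f = {f:.3e} Hz, E = {E:.3e} J")
```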
Abstract: The gravitational constant discovered by Newton is still measured with a relative uncertainty several orders of magnitude larger than that of other fundamental constants, and numerous methods are used to measure it. This article discusses an information-oriented approach for analyzing the achievable relative measurement uncertainty, within which the magnitude of the gravitational constant can be considered plausible. A comparison is made, and the advantages and disadvantages of various methods are discussed in terms of the possibility of achieving higher accuracy, using a new metric called comparative uncertainty, which was proposed by Brillouin.
Abstract: In this short contribution, a reciprocity relation between the mass constituents of the universe is explained, governed by Hardy's maximum entanglement probability of φ<sup>5</sup> = 0.09017. While well explainable through a set-theoretical argumentation, the relation may also be a consequence of a coupling factor attributed to the normed dimensions of the universe. Very simple expressions for the mass amounts were also obtained by replacing the golden mean φ with Archimedes' constant π. A brief statement is devoted to the similarity between the E-Infinity Theory of El Naschie and the Information Relativity Theory of Suleiman. In addition, superconductivity is also linked with Hardy's entanglement probability.
Abstract: Sommerfeld's fundamental fine-structure constant α once more gives reason to be amazed. This comment is a chapter of a publication in preparation dealing mainly with the golden ratio signature behind Preston Guynn's famous matter/space approach. As a result, we present a relation of α to the galactic velocity β<sub>g</sub>, mediated by the circle constant π, which points to an omnipresent importance of this constant and its intrinsic reciprocity peculiarity: α ≈ π<sup>2</sup>|β<sub>g</sub>|. The designation "fine-structure constant" should be replaced simply by "Sommerfeld's constant". We present golden-mean-based approximations for α as well as for the electron's charge and mass, and connect the world average value of the interaction coupling constant α<sub>s</sub>(m<sub>z</sub>) with |β<sub>g</sub>|.
Abstract: Constant weight codes are an important class of error-correcting codes in communications. The basic structure of constant weight codes attaining the Johnson bound, A(n, 2u, w), is presented. Some correlative properties of the codes, the solution attaining the Johnson bound, and results on the couple constant code and some constant weight codes are discussed. The conclusions are verified through four examples.
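The Johnson bound the abstract refers to upper-bounds A(n, 2u, w), the maximum size of a binary constant weight code of length n, minimum distance 2u and weight w, via a nested-floor formula. A direct implementation of that classical bound (not the paper's structural construction) is sketched below.

```python
def johnson_bound(n: int, u: int, w: int) -> int:
    """Classical Johnson upper bound on A(n, 2u, w):
    A(n, 2u, w) <= floor(n/w * floor((n-1)/(w-1) * ... * floor((n-w+u)/u))).
    Evaluated by computing the nested floors from the inside out."""
    val = 1
    for j in range(u, w + 1):
        val = ((n - w + j) * val) // j
    return val

# Known cases where codes meeting the bound exist:
# A(8, 4, 4) = 14, and A(7, 4, 3) = 7 (from the Fano plane).
print(johnson_bound(8, 2, 4))
print(johnson_bound(7, 2, 3))
```

Codes whose size equals this bound are exactly the "arriving at Johnson bound" codes whose structure the paper investigates.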
Funding: Project supported by the National Key R&D Program of China (Grant No. 2017YFB0406000) and the Southeast University "Zhongying Young Scholars" Project.
Abstract: The phonon dispersion relations of crystalline solids play an important role in determining the mechanical and thermal properties of materials. The phonon dispersion relation, as well as the vibrational density of states, is also often used as an indicator of the variation of lattice thermal conductivity with external stress, defects, etc. In this study, a simple and fast tool for acquiring the phonon dispersion relation of crystalline solids based on the LAMMPS package is proposed. The theoretical details of the calculation of the phonon dispersion relation are derived mathematically, and the computational flow chart is presented. The tool is first used to calculate the phonon dispersion relation of graphene, with two atoms in the unit cell. Then, the phonon dispersions corresponding to several potentials or force fields commonly used in the LAMMPS package to model graphene are obtained and compared with that from a DFT calculation; they are further used to evaluate the accuracy of the potentials before molecular dynamics simulation. The tool is also used to calculate the phonon dispersion relation of superlattice structures containing more than one hundred atoms in the unit cell, which predicts the phonon band gaps along the cross-plane direction. Since the phonon dispersion relation plays an important role in the physical properties of condensed matter, the proposed tool is of great significance for predicting and explaining the mechanical and thermal properties of crystalline solids.
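The simplest analytic benchmark for any numerical dispersion tool of this kind is the 1D monatomic chain, whose dispersion has the closed form ω(q) = 2·√(K/m)·|sin(qa/2)|. The sketch below evaluates it with illustrative parameters (not tied to graphene or any specific material) and is the sort of sanity check such a tool can be validated against.

```python
import math

# 1D monatomic chain with nearest-neighbour force constant K:
#   omega(q) = 2*sqrt(K/m) * |sin(q*a/2)|
# Parameter values are illustrative only.
K = 10.0      # force constant, N/m
m = 1.0e-26   # atomic mass, kg
a = 2.5e-10   # lattice constant, m

def omega(q: float) -> float:
    """Phonon angular frequency (rad/s) at wave vector q (1/m)."""
    return 2.0 * math.sqrt(K / m) * abs(math.sin(q * a / 2.0))

q_max = math.pi / a                     # Brillouin zone boundary
print(f"omega at zone center  : {omega(0.0):.3e} rad/s")
print(f"omega at zone boundary: {omega(q_max):.3e} rad/s")
```

The acoustic branch goes to zero at the zone center and flattens at the zone boundary, the qualitative shape any correct force-constant calculation must reproduce.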
Abstract: In this paper, a mathematical relation was found between the interatomic Hooke's force constant and both the bulk modulus and the interatomic distance in solid crystals, under the assumptions that the only forces acting on an atom are those from the neighboring atoms and that these forces obey Hooke's law, since the deflections of atoms from their equilibrium positions are very small. The work was applied to some solid semiconducting crystals with a diatomic primitive cell, automatically including crystals with a mono-atomic primitive cell, by using linear statistical fitting with computer programming and then mathematical analysis proceeding from the vibrational dispersion relation of a linear lattice; the two methods were used together so that they support each other and the result is satisfying and reasonable. This is a contribution to the use of computer programming in physics to facilitate mathematical analyses and to obtain the required relations and functions by designing appropriate computer programs in line with the macro and micro natures of materials. Its importance lies in enhancing our understanding of the interatomic interactions in cells and of the crystal structure of materials in general and semiconductors in particular, as a first step toward calculating energies and extracting mathematical relations between correlation energy and temperature as well as between sub-fusion and fusion energies and temperature.
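The linear statistical fitting step can be sketched as follows. Dimensionally, a force constant (N/m) can scale as bulk modulus (Pa) times interatomic distance (m), so a hypothetical linear model K = s·B·a is fitted to synthetic data below; the slope, the data and the model form are all illustrative assumptions, not the relation the paper derives.

```python
import numpy as np

# Hypothetical linear fit of interatomic force constant K against the
# product of bulk modulus B and interatomic distance a. All values are
# synthetic, generated with a known slope so the fit can be verified.
B = np.array([75e9, 98e9, 130e9, 180e9])                 # bulk moduli, Pa
a = np.array([2.35e-10, 2.45e-10, 2.30e-10, 2.20e-10])   # spacings, m
K = 3.0 * B * a                                          # synthetic "data", N/m

# np.polyfit with deg=1 returns [slope, intercept].
slope, intercept = np.polyfit(B * a, K, deg=1)
print(f"fitted slope = {slope:.3f}, intercept = {intercept:.2e} N/m")
```

With real crystal data, the fitted slope and the quality of the fit would be checked against the analytic result from the lattice dispersion relation, the mutual-support strategy the abstract describes.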
Abstract: The practical value of high-precision models of the studied physical phenomena and technological processes is a decisive factor in science and technology. Numerous methods and criteria for optimizing models have been proposed, but the classification of measurement uncertainties according to the number of variables taken into account, and their qualitative choice, still receives insufficient attention. The goal is to develop a new criterion suitable for any group of experimental data obtained by various measurement methods. Using the “information-theoretic method”, we propose two procedures for analyzing experimental results with a quantitative indicator for calculating the relative uncertainty of the measurement model, which, in turn, determines the legitimacy of the declared value of a physical constant. The presented procedure is used to analyze the results of measurements of the Boltzmann constant, Planck constant, Hubble constant and gravitational constant.
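The dimensionless indicator at the heart of this approach is the comparative uncertainty, ε = Δ/S: the absolute uncertainty Δ of the measurement divided by the a-priori interval S in which the measurand is assumed to lie. A minimal sketch, with a purely illustrative interval chosen for the Boltzmann constant:

```python
def comparative_uncertainty(abs_uncertainty, interval_lo, interval_hi):
    """Dimensionless comparative uncertainty epsilon = Delta / S.

    abs_uncertainty -- absolute uncertainty Delta of the measurement
    interval_lo/hi  -- bounds of the a-priori interval S of the measurand
    """
    S = interval_hi - interval_lo
    return abs_uncertainty / S

# Hypothetical Boltzmann-constant example: Delta = 5e-29 J/K, and an
# assumed observation interval of [1.380e-23, 1.381e-23] J/K.
eps = comparative_uncertainty(5.0e-29, 1.380e-23, 1.381e-23)
# eps ~ 0.005: the uncertainty spans about 0.5% of the assumed interval,
# a dimensionless figure that can be compared across measurement methods.
```

Because ε is dimensionless, it allows measurements made with different instruments and methods (acoustic, noise, Doppler thermometry, etc.) to be ranked on one scale.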
Abstract: Some fundamental physical quantities need an alternative description. We derive the world-average value of the interaction coupling constant α<sub>s</sub>(m<sub>z</sub>) from the observed maximum galactic rotation velocity by a simple relation (formula omitted in this abstract), involving the velocity at which the difference between the galactic rotation velocity and the Thomas precession is maximal, and Sommerfeld's constant α. The result is in excellent agreement with the value of α<sub>s</sub> = 0.1170 ± 0.0019, recently measured and verified via QCD analysis by CERN researchers. One can formulate a reciprocity relation connecting α<sub>s</sub> with the circle constant (formula omitted in this abstract). It is the merit of Preston Guynn to have derived the Milky Way maximum value of the galactic rotation velocity β<sub>g</sub>, pointing to its “extremely important role in all physics”. The mass (energy) constituents of the Universe follow a golden mean hierarchy and can simply be related to the maximum of Guynn's difference velocity, respectively to α<sub>s</sub>(m<sub>z</sub>), excellently confirming Bouchet's WMAP data analysis. We conclude once more that the golden mean concept is the leading one of nature.
Abstract: In 1998, two groups of astronomers, one led by Saul Perlmutter and the other by Brian Schmidt, set out to determine the deceleration—and hence the total mass/energy—of the universe by measuring the recession speeds of type Ia supernovae (SN Ia), and came to an unexpected conclusion: ever since the universe was about 7 billion years old, its expansion rate has not been decelerating; instead, it has been speeding up. To justify this acceleration, they suggested that the universe contains a mysterious dark energy, and they revived the cosmological constant, positive this time, which is consistent with the picture of an inflationary universe. To explain the observed dimming of high-redshift SN Ia, they essentially bet on distances revised upwards. We consider that an accelerated expansion leads directly to a “dark energy catastrophe” (i.e., the chasm between the current cosmological vacuum density value of 10 GeV/m<sup>3</sup> and the vacuum energy density of ~10<sup>122</sup> GeV/m<sup>3</sup> proposed by quantum field theory). We suppose instead that the universe undergoes a slowing expansion under the positive pressure of a dark energy, otherwise called a variable cosmological constant. The dark luminosity of the latter would be that of a “tired light” which has lost energy with distance. As for the low brightness of SN Ia, it is explained by two physical processes: the first concerns their intrinsic brightness—assumed not to vary over time—which would depend on chemical conditions that change with temporal evolution; the second concerns their apparent luminosity. Besides the serious arguments already known, we strongly propose that their luminosity continually fades through interactions with cosmic magnetic fields, much as the terrestrial PVLAS experiment loses far more laser photons than expected when they cross a magnetic field.
It goes in the sense of a “tired light” which has lost energy with distance, and therefore of a decelerated expansion of the universe. Moreover, we propose the “centrist” principle to complement the cosmological principle of homogeneity and isotropy, which is considered verified. Without denying the Copernican principle, it opposes a “spatial” theoretical construction that accelerates the world towards infinity. The centrist principle gives a “temporal” and privileged vision which tends to demonstrate the deceleration of the expansion.
Funding: Supported by the National Natural Science Foundation of China (NSFC) (Grant Nos. 12065021, 12075159, 12171044, and 12175147).
Abstract: Quantum uncertainty relations constrain the precision of measurements across multiple non-commuting quantum mechanical observables. Here, we introduce the concept of optimal observable sets and define the tightest uncertainty constants to accurately describe these measurement uncertainties. For any quantum state, we establish optimal sets of three observables for both product and summation forms of uncertainty relations, and analytically derive the corresponding tightest uncertainty constants. We demonstrate that the optimality of these sets remains consistent regardless of the uncertainty relation form. Furthermore, the existence of the tightest constants excludes the validity of standard real quantum mechanics, underscoring the essential role of complex numbers in this field. Additionally, our findings resolve the conjecture posed in [Phys. Rev. Lett. 118, 180402 (2017)], offering novel insights and potential applications in understanding preparation uncertainties.
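The summation form of such relations can be illustrated with the most familiar set of three non-commuting observables. This sketch does not reproduce the paper's optimal sets or tightest constants; it only verifies the standard fact that for any pure qubit state, the variances of the three Pauli observables sum to the state-independent constant 2:

```python
import numpy as np

# Pauli observables: mutually non-commuting, sigma_i^2 = identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def variance(op, psi):
    """Variance <op^2> - <op>^2 in the pure state psi."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ (op @ psi)).real
    return mean_sq - mean ** 2

# An arbitrary pure qubit state on the Bloch sphere
theta, phi = 0.7, 1.3
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

total = sum(variance(s, psi) for s in (sx, sy, sz))
# total == 2 for every pure state: since sigma_i^2 = I, the sum equals
# 3 - (<sx>^2 + <sy>^2 + <sz>^2) = 3 - |r|^2, and |r| = 1 on pure states.
```

A summation-form uncertainty relation for this set is therefore saturated by every pure state, which is the kind of tightness the paper's uncertainty constants are designed to capture for general observable sets.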
Abstract: The gravitational constant G is a basic quantity in physics and, despite its relative imprecision, appears in many formulas, including those of the Planck units. The “relative imprecision” lies in the fact that each measurement gives a different value, depending on where and with which device the measurement is taken. Ultimately, a mean value was formed and agreed upon as the official value used in all calculations. To explore the reason for the inaccuracy of this quantity, some formulas containing G were configured so that the respective quantity assumed the value 1. The gravitational constant thus modified was also used in the other Planck equations instead of the conventional G. It turned out that the new values were all equivalent to each other, and that they were all represented by powers of the speed of light; G was therefore no longer needed. Just like the famous mass/energy equivalence E = m * c<sup>2</sup>, similar formulas emerged, e.g. mass/momentum = m * c, mass/velocity = m * c<sup>2</sup>, and so on. This article takes up the idea of Weber [1], who describes the gravitational constant as a variable (G<sub>var</sub>) and gives some reasons for this; further reasons are given and computed in the present paper. For example, eleven Planck units are set iteratively with the help of the variable G<sub>var</sub>, so that the value of one unit equals 1 in each case. If all other units are based on the G<sub>var</sub> determined in this way, a matrix of values is created that can be regarded both as conversion factors and as equivalence relationships. It is astonishing, but not surprising, that the equivalence relation E = m * c<sup>2</sup> is one of these results. All formulas for these equivalence relationships work with the vacuum speed of light c and a new constant K.
G, both as a variable and as a constant, no longer appears in these formulas. The new aspect of this theory is that the gravitational constant is no longer needed; and if it no longer exists, it can no longer cause any difficulties. The example of the Planck units shows this fact very clearly. This is a radical break with current views. It is also interesting to note that the “magic” number 137 can be calculated from the distances between the values of the matrix. In addition, a similar number, 131, can be calculated from the distances between the Planck units; it differs from 137 by 4.14 percent. This difference has certainly often led to confusion, for example when measuring the fine-structure constant.
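For reference, the conventional Planck units that the paper reworks are defined directly from ħ, c and G, so any imprecision in G propagates into every one of them. The sketch below shows only these standard SI definitions (it does not reproduce the paper's iterative G<sub>var</sub> normalization), along with the equivalences E<sub>P</sub> = m<sub>P</sub>c² and l<sub>P</sub> = c·t<sub>P</sub> that hold by construction:

```python
import math

# CODATA-style SI values of the fundamental constants
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light in vacuum, m/s (exact)
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2

# Conventional Planck units; note G appears in every one of them,
# so its measurement spread propagates into all four values.
m_P = math.sqrt(hbar * c / G)       # Planck mass,   ~2.18e-8  kg
l_P = math.sqrt(hbar * G / c**3)    # Planck length, ~1.62e-35 m
t_P = math.sqrt(hbar * G / c**5)    # Planck time,   ~5.39e-44 s
E_P = m_P * c**2                    # Planck energy, J

# Equivalences that hold by construction, independent of the value of G:
# E_P = m_P * c^2  and  l_P = c * t_P
```

Normalizing any one of these units to 1, as the paper does iteratively with G<sub>var</sub>, fixes the remaining ones as pure conversion factors, which is how the matrix of equivalence relationships described above arises.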
Abstract: The relative dielectric constant is an important physical factor in the theory of microwave remote sensing and electromagnetic transmission. This note reports the results of measuring the relative dielectric constant of 197 rock samples. The regular pattern of the relative dielectric constant varying with the microwave spectrum is revealed. The relative dielectric constants, correlated with the type, density, structure and chemical composition of the rock, are discussed. The re-