As a novel paradigm, semantic communication provides an effective solution for breaking through the future development dilemma of classical communication systems. However, it remains an unsolved problem how to measure the information transmission capability of a given semantic communication method and subsequently compare it with that of a classical communication method. In this paper, we first present a review of the semantic communication system, including its system model and the two typical coding and transmission methods for its implementation. To address the unsolved issue of measuring the information transmission capability of semantic communication methods, we propose a new universal performance measure called Information Conductivity. We provide its definition and physical significance to establish its effectiveness in representing the information transmission capability of semantic communication systems, and present elaborations including its measurement methods, degrees of freedom, and progressive analysis. Experimental results in image transmission scenarios validate its practical applicability.
Extensive numerical simulations and scaling analysis are performed to investigate competitive growth between linear and nonlinear stochastic dynamic growth systems, which belong to the Edwards–Wilkinson (EW) and Kardar–Parisi–Zhang (KPZ) universality classes, respectively. The linear growth systems include the EW equation and the model of random deposition with surface relaxation (RDSR); the nonlinear growth systems involve the KPZ equation and typical discrete models including ballistic deposition (BD), etching, and restricted solid-on-solid (RSOS). The scaling exponents are obtained for both (1+1)- and (2+1)-dimensional competitive growth with nonlinear growth probability p and linear proportion 1 − p. Our results show that, as p changes from 0 to 1, non-trivial crossover effects from the EW to the KPZ universality class arise under different competitive growth rules. Furthermore, the growth rate and the porosity are also estimated for various linear and nonlinear growths of cooperation and competition.
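The competitive growth rules above can be sketched in a few lines: on a one-dimensional substrate, each deposition event follows the nonlinear BD (KPZ-class) rule with probability p and the linear RDSR (EW-class) rule with probability 1 − p. This is only an illustrative sketch; the lattice size, step count, and tie-breaking details are our own assumptions, not the paper's simulation setup.

```python
import random

def competitive_growth(L=64, steps=20000, p=0.5, seed=1):
    """Deposit particles on a 1D substrate of L sites (periodic boundaries).

    With probability p use a ballistic-deposition (BD, KPZ-class) rule,
    otherwise random deposition with surface relaxation (RDSR, EW-class).
    Returns the final height profile."""
    rng = random.Random(seed)
    h = [0] * L
    for _ in range(steps):
        i = rng.randrange(L)
        left, right = h[(i - 1) % L], h[(i + 1) % L]
        if rng.random() < p:
            # BD: particle sticks at the first lateral or vertical contact.
            h[i] = max(left, right, h[i] + 1)
        else:
            # RDSR: particle relaxes to the lowest of the three columns
            # (ties resolved deterministically for brevity).
            j = min([(h[i], i), (left, (i - 1) % L), (right, (i + 1) % L)])[1]
            h[j] += 1
    return h

def roughness(h):
    """Interface width W = sqrt(<h^2> - <h>^2)."""
    m = sum(h) / len(h)
    return (sum((x - m) ** 2 for x in h) / len(h)) ** 0.5
```

Comparing `roughness` for p = 0 and p = 1 at fixed time exhibits the EW-versus-KPZ contrast; sweeping p traces the crossover studied in the paper.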
The geometric characteristics of fractures within a rock mass can be inferred from data sampled from boreholes or exposed surfaces. Recently, the universal elliptical disc (UED) model was developed to represent natural fractures, where the fracture is assumed to be an elliptical disc and the fracture orientation, rotation angle, length of the long axis, and ratio of short-to-long axis lengths are treated as variables. This paper aims to estimate the fracture size- and azimuth-related parameters of the UED model from trace information collected on sampling windows. The stereological relationship between the trace length and the size- and azimuth-related parameters of the UED model was established, and formulae for the mean value and standard deviation of the trace length were proposed. The proposed formulae were validated via Monte Carlo simulations, with an error rate of less than 5% between the calculated and true values. For the estimation of the size- and azimuth-related parameters from the trace length, an optimization method was developed based on pre-assumed size and azimuth distribution forms. A hypothetical case study was designed to illustrate and verify the parameter estimation method, in which three combinations of sampling windows were used to estimate the parameters; the results showed that the estimated values agree well with the true values. Furthermore, a hypothetical three-dimensional (3D) elliptical fracture network was constructed, and the circular disc, non-UED, and UED models were used to represent it. The simulated trace information from the different models was compared, and the results clearly illustrate the superiority of the proposed UED model over the existing circular disc and non-UED models.
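As a toy analogue of the Monte Carlo validation described above, the snippet below checks a stereological result that admits an exact closed form: when a circular disc of diameter D is crossed by lines whose perpendicular offset from the centre is uniform, the mean chord (trace) length is (π/4)D. The UED formulae themselves are not reproduced here; this only sketches the validation procedure against the paper's 5% tolerance.

```python
import math
import random

def mean_chord_mc(diameter=1.0, n=200000, seed=7):
    """Monte Carlo mean chord length of a circular disc cut by lines whose
    perpendicular offset from the centre is uniform on [0, R] (a planar
    analogue of a sampling line crossing a disc-shaped fracture)."""
    rng = random.Random(seed)
    R = diameter / 2.0
    total = 0.0
    for _ in range(n):
        h = rng.uniform(0.0, R)                   # offset of the cutting line
        total += 2.0 * math.sqrt(R * R - h * h)   # resulting chord length
    return total / n

# Closed-form mean chord for D = 1: (pi/4) * D.
analytic = math.pi / 4.0
```

The Monte Carlo estimate lands well inside the 5% band around the analytic value, mirroring the verification strategy used for the UED formulae.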
Olbers's paradox, known as the dark night paradox, is an argument in astrophysics that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. Big-Bang theory was used to partially explain this paradox, while introducing new problems. Hereby, we propose a better theory, named Sun Matters Theory, to explain this paradox. Moreover, this unique theory supports and extends the static universe model proposed by Albert Einstein in 1917. Further, we propose our new universe model, the "Sun Model of Universe". Based on the new model and novel theory, we generate an innovative field equation by upgrading Einstein's field equation: adding back the cosmological constant, introducing a new variable, and modifying the gravitationally-related concepts. According to the Sun Model of Universe, dark matter and dark energy comprise the so-called "Sun Matters". Observed phenomena like the redshift are explained as due to the interaction of ordinary light with Sun Matters, leading to a decrease in its energy and frequency. In the Sun Model, our big universe consists of many universes with ordinary matter at the core, mixed and surrounded with the Sun Matters. In those universes, the laws of physics may be completely or partially different from those of our ordinary universe, with parallel civilizations. The darkness of night can be easily explained as resulting from the interaction of light with the Sun Matters, leading to a sharp decrease in light intensity. Sun Matters also scatter the light from a star, which makes it shine as observed by Hubble. Further, there is a kind of Sun Matters named "Sun Waters" surrounding every star. When light passes by the sun, the Sun Waters deflect it and bend the light path. According to the Sun Model, it is the light that is bent, not space, as was proposed in the theory of relativity.
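The mathematical core of the paradox, and of any attenuation-based resolution such as the one claimed above, can be shown numerically: in a uniform static universe each spherical shell of stars contributes equally to the sky brightness (shell volume growth exactly cancels the inverse-square dilution), so the integral grows without bound, while an exponential attenuation factor makes it converge. The units and mean free path below are illustrative assumptions, not a physical model of "Sun Matters".

```python
import math

def sky_brightness(r_max, mean_free_path=None, n_L=1.0, dr=0.1):
    """Integrate the flux received from uniform spherical shells of stars.

    Each shell at radius r contributes n*L*dr (the 4*pi*r^2 shell area cancels
    the 1/(4*pi*r^2) dilution); an optional exponential factor exp(-r/lambda)
    models attenuation of light along the path. Midpoint-rule integration."""
    total, r = 0.0, dr / 2.0
    while r < r_max:
        atten = math.exp(-r / mean_free_path) if mean_free_path else 1.0
        total += n_L * atten * dr
        r += dr
    return total
```

Without attenuation the brightness is proportional to the integration depth (the paradox); with a mean free path lambda it saturates at n_L * lambda, independent of depth.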
The article considers a conceptual universe model as a periodic lattice (network) with nodes defined by the wave function in a background-independent Hamiltonian based on their relations and interactions. This model gives rise to energy bands, similar to those in semiconductor solid-state models. In this context, valence-band holes are described as dark matter particles with a heavy effective mass. The conduction band, with a spontaneously symmetry-broken energy profile, contains particles with a several times lighter effective mass, which can represent luminous matter. Some possible analogies with solid-state physics, such as the comparison between dark and luminous matter, are discussed. Additionally, a tiny dark energy, as intrinsic lattice Casimir energy, is calculated for a lattice with a large number of lattice nodes.
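The band-structure analogy can be made concrete with a one-dimensional tight-binding toy model, in which the effective mass is inversely proportional to the band curvature: a narrow (weak-hopping) band yields heavy carriers, like the proposed dark-matter holes, and a wider band yields several times lighter carriers, while the band top gives the negative (hole-like) mass. The hopping values used below are arbitrary illustrative numbers, not parameters from the article.

```python
import math

def band_energy(k, e0, t, a=1.0):
    """1D tight-binding dispersion E(k) = e0 - 2 t cos(k a)."""
    return e0 - 2.0 * t * math.cos(k * a)

def effective_mass(t, a=1.0, hbar=1.0, k0=0.0, dk=1e-4):
    """m* = hbar^2 / (d^2 E / dk^2), curvature taken by central difference at k0."""
    d2 = (band_energy(k0 + dk, 0.0, t, a) - 2.0 * band_energy(k0, 0.0, t, a)
          + band_energy(k0 - dk, 0.0, t, a)) / dk**2
    return hbar**2 / d2
```

With hopping t = 0.1 versus t = 0.5, the band-bottom masses come out in the ratio 5:1, a "several times lighter" conduction carrier; evaluating at the zone boundary k0 = pi gives a negative, hole-like mass.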
The Bayesian inference model is an optimal processing of incomplete information that, better than other models, captures the way in which any decision-maker learns and updates his degree of rational belief about possible states of nature, in order to make a better judgment while taking new evidence into account. Such a scientific model proposed for the general theory of decision-making, like all others in general, whether in statistics, economics, operations research, A.I., data science, or applied mathematics, and regardless of whether it is time-dependent, shares a theoretical basis that is axiomatized by relying on related concepts of a universe of possibles, especially the so-called universe (or the world) and the state of nature (or the state of the world), when formulated explicitly. The issue of where to stand as an observer or a decision-maker in order to reframe such a universe of possibles together with a partition structure of knowledge (i.e. semantic formalisms), including a copy of itself as it was initially while generalizing it, is not addressed. Memory being the substratum, whether human or artificial, wherein everything stands, to date even the theoretical possibility of such an operation of self-inclusion is prohibited by pure mathematics. We bring this blind spot to light through a counter-example (namely Archimedes' Eureka experiment) and explore novel theoretical foundations, fitting better with a quantum form than with fuzzy modeling, to deal with more than one reference universe of possibles. This could open up a new path of investigation for the general theory of decision-making, as well as for Artificial Intelligence, often considered the science of the imitation of human abilities, while also being the science of knowledge representation and the science of concept formation and reasoning.
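The updating step that the article takes as its starting point, revising a prior over states of nature in the light of new evidence, is Bayes' rule over a finite universe of possibles. The weather states and probabilities below are invented purely for illustration.

```python
def bayes_update(prior, likelihood):
    """Posterior P(state | evidence) from a prior over states of nature and
    the likelihood P(evidence | state), via Bayes' rule."""
    joint = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(joint.values())                 # P(evidence): normalising constant
    return {s: p / z for s, p in joint.items()}

# One round of learning: observing clouds raises the belief in rain.
prior = {"rain": 0.3, "dry": 0.7}
likelihood = {"rain": 0.9, "dry": 0.2}      # P(clouds | state)
posterior = bayes_update(prior, likelihood)  # rain: 27/41, about 0.659
```

Feeding the posterior back in as the next prior is exactly the sequential learning the article describes; the self-inclusion problem it raises concerns what happens when the universe of states must contain the updating observer itself.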
Against the backdrop of continuous development in the field of education, universities are encouraged to innovate their talent cultivation systems and objectives. The deep integration of industry and education has emerged as an effective strategy, aligning with the basic requirements of the new engineering education initiative and exerting a positive impact on socioeconomic development. However, an analysis of the current state of industry-education integration in universities reveals several issues that require optimization, as they affect the ultimate effectiveness of integration. To address this situation and achieve high-quality development, universities need to further explore the construction of a deep integration model of industry and education, adhering to corresponding principles to form a comprehensive system. On this basis, pathways for deep industry-education integration can be summarized.
With the continuous advancement of education informatization, Technological Pedagogical Content Knowledge (TPACK), as a new theoretical framework, provides a novel method for measuring teachers' informatization teaching ability. This study takes normal students of English majors from three ethnic universities as the research object, collects relevant data through questionnaires, and uses structural equation modeling to conduct data analysis and empirical research, investigating the differences in the TPACK levels of these students across grades and the structural relationships among the elements of the TPACK structure. The technological pedagogical knowledge element of the TPACK structure was not obtained by exploratory factor analysis. Through path analysis and structural equation modeling, the results show that the one-dimensional core knowledge of technological knowledge (TK), content knowledge (CK), and pedagogical knowledge (PK) has a positive effect on the two-dimensional interaction knowledge of technological content knowledge (TCK) and pedagogical content knowledge (PCK); furthermore, TCK and PCK have a positive effect on TPACK, and TK, CK, and PK indirectly affect TPACK through TCK and PCK. On this basis, suggestions are provided to ethnic colleges and universities for developing the TPACK knowledge competence of normal students of English majors.
Twenty-six years ago, a small committee report built upon earlier studies to articulate a compelling and poetic vision for the future of astronomy. This vision called for an infrared-optimized space telescope with an ...Twenty-six years ago, a small committee report built upon earlier studies to articulate a compelling and poetic vision for the future of astronomy. This vision called for an infrared-optimized space telescope with an aperture of at least four meters. With the support of their governments in the US, Europe, and Canada, 20,000 people brought this vision to life as the 6.5-meter James Webb Space Telescope (JWST). The telescope is working perfectly, delivering much better image quality than expected [1]. JWST is one hundred times more powerful than the Hubble Space Telescope and has already captured spectacular images of the distant universe. A view of a tiny part of the sky reveals many well-formed spiral galaxies, some over thirteen billion light-years away. These observations challenge the standard Big Bang Model (BBM), which posits that early galaxies should be small and lack well-formed spiral structures. JWST’s findings are prompting scientists to reconsider the BBM in its current form. Throughout the history of science, technological advancements have led to new results that challenge established theories, sometimes necessitating their modification or even abandonment. This happened with the geocentric model four centuries ago, and the BBM may face a similar reevaluation as JWST provides more images of the distant universe. In 1937, P. Dirac proposed the Large Number Hypothesis and the Hypothesis of Variable Gravitational Constant, later incorporating the concept of Continuous Creation of Matter in the universe. The Hypersphere World-Universe Model (WUM) builds on these ideas, introducing a distinct mechanism for matter creation. WUM is proposed as an alternative to the prevailing BBM. 
Its main advantage is the elimination of the "Initial Singularity" and "Inflation", offering explanations for many unresolved problems in Cosmology. WUM is presented as a natural extension of Classical Physics with the potential to bring about a significant transformation in both Cosmology and Classical Physics. Considering JWST's discoveries, WUM's achievements, and 87 years of Dirac's proposals, it is time to initiate a fundamental transformation in Astronomy, Cosmology, and Classical Physics. The present paper is a continuation of the published article "JWST Discoveries—Confirmation of World-Universe Model Predictions" [2] and a summary of the paper "Hypersphere World-Universe Model: Digest of Presentations John Chappell Natural Philosophy Society" [3]. Many results obtained there are quoted in the current work without full justification; interested readers are encouraged to view the referenced papers for detailed explanations.
This work presents a comprehensive second-order predictive modeling (PM) methodology based on the maximum entropy (MaxEnt) principle for obtaining best-estimate mean values and correlations for model responses and parameters. This methodology is designated by the acronym 2nd-BERRU-PMP, where the attribute "2nd" indicates that it incorporates second-order uncertainties (means and covariances) and second- (and higher-) order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties" and the last letter ("P") in the acronym indicates "probabilistic," referring to the MaxEnt probabilistic inclusion of the computational model responses. This is in contradistinction to the 2nd-BERRU-PMD methodology, which deterministically combines the computed model responses with the experimental information, as presented in the accompanying work (Part I). Although both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies yield expressions that include second- (and higher-) order sensitivities of responses to model parameters, the respective expressions for the predicted responses, for the calibrated predicted parameters, and for their predicted uncertainties (covariances) are not identical to each other. Nevertheless, the results predicted by both methodologies encompass, as particular cases, the results produced by the extant data assimilation and data adjustment procedures, which rely on the minimization, in a least-squares sense, of a user-defined functional meant to represent the discrepancies between measured and computed model responses.
This work presents a comprehensive second-order predictive modeling (PM) methodology designated by the acronym 2nd-BERRU-PMD. The attribute "2nd" indicates that this methodology incorporates second-order uncertainties (means and covariances) and second-order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties" and the last letter ("D") in the acronym indicates "deterministic," referring to the deterministic inclusion of the computational model responses. The 2nd-BERRU-PMD methodology is fundamentally based on the maximum entropy (MaxEnt) principle. This principle is in contradistinction to the fundamental principle underlying the extant data assimilation and/or adjustment procedures, which minimize in a least-squares sense a subjective user-defined functional meant to represent the discrepancies between measured and computed model responses. It is shown that the 2nd-BERRU-PMD methodology generalizes and extends current data assimilation and/or data adjustment procedures while overcoming their fundamental limitations. In the accompanying work (Part II), the alternative framework for developing the "second-order MaxEnt predictive modeling methodology" is presented by incorporating probabilistically (as opposed to deterministically) the computed model responses.
This work illustrates the innovative results obtained by applying the recently developed second-order predictive modeling methodology called "2nd-BERRU-PM", where the acronym BERRU denotes "best-estimate results with reduced uncertainties" and "PM" denotes "predictive modeling." The physical system selected for this illustrative application is a polyethylene-reflected plutonium (acronym: PERP) OECD/NEA reactor physics benchmark. This benchmark is modeled using the neutron transport Boltzmann equation (involving 21,976 uncertain parameters), the solution of which is representative of "large-scale computations." The results obtained in this work confirm that the 2nd-BERRU-PM methodology predicts best-estimate results that fall in between the corresponding computed and measured values, while reducing the predicted standard deviations of the predicted results to values smaller than either the experimentally measured or the computed standard deviations. The obtained results also indicate that second-order response sensitivities must always be included to quantify the need for including (or not) the third- and/or fourth-order sensitivities. When the parameters are known with high precision, the contributions of the higher-order sensitivities diminish with increasing order, so that including the first- and second-order sensitivities may suffice for obtaining accurate predicted best-estimate response values and best-estimate standard deviations. On the other hand, when the parameters' standard deviations are sufficiently large to approach (or fall outside) the radius of convergence of the multivariate Taylor series representing the response in the phase space of model parameters, the contributions stemming from the third- and even fourth-order sensitivities are necessary to ensure consistency between the computed and measured response. In such cases, using only the first-order sensitivities erroneously indicates that the computed results are inconsistent with the respective measured response. Ongoing research aims at extending the 2nd-BERRU-PM methodology to fourth order, thus enabling the computation of third-order response correlations (skewness) and fourth-order response correlations (kurtosis).
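A scalar caricature of the best-estimate property reported above (predicted values falling between the computed and measured results, with a reduced standard deviation) is the textbook minimum-variance combination of two independent estimates. This is only an illustration of that one property, not the 2nd-BERRU-PM formalism itself, which carries full sensitivity and covariance information.

```python
def best_estimate(x_comp, var_comp, x_meas, var_meas):
    """Inverse-variance (minimum-variance) combination of a computed and a
    measured response: the scalar analogue of a best-estimate update."""
    w = var_meas / (var_comp + var_meas)       # weight on the computed value
    x_be = w * x_comp + (1.0 - w) * x_meas
    var_be = var_comp * var_meas / (var_comp + var_meas)
    return x_be, var_be
```

For any positive variances, x_be lies between the computed and measured values and var_be is smaller than either input variance, which is exactly the qualitative behaviour the benchmark results exhibit.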
A universal thermodynamic model for calculating mass action concentrations of structural units or ion couples in ternary and binary strong electrolyte aqueous solutions was developed based on the ion and molecule coexistence theory and verified in four kinds of binary aqueous solutions and two kinds of ternary aqueous solutions. The calculated mass action concentrations of structural units or ion couples in the four binary and two ternary aqueous solutions at 298.15 K agree well with the activity data reported in the literature after shifting the standard state and concentration unit. Therefore, the mass action concentrations calculated from the developed universal thermodynamic model for ternary and binary aqueous solutions can be applied to predict the reaction ability of components in ternary and binary strong electrolyte aqueous solutions. It is also proved that the assumptions applied in the developed thermodynamic model are correct and reasonable, i.e., a strong electrolyte aqueous solution is composed of cations and anions as simple ions, H2O as a simple molecule, and other hydrous salt compounds as complex molecules. The calculated mass action concentrations of structural units or ion couples in ternary and binary strong electrolyte aqueous solutions strictly follow the mass action law.
A new second-order moment model for turbulent combustion is applied in the simulation of a methane-air turbulent jet flame. The predicted results are compared with the experimental results and with those predicted using the well-known EBU-Arrhenius model and the original second-order moment model. The comparison shows the advantage of the new model: it requires almost the same computational storage and time as the original second-order moment model, but its modeling results are in better agreement with experiments than those of the other models. Hence, the new second-order moment model is promising for modeling turbulent combustion with NOx formation at finite reaction rates for engineering applications.
A full second-order moment (FSM) model and an algebraic stress (ASM) two-phase turbulence model are proposed and applied to predict turbulent bubble-liquid flows in a 2D rectangular bubble column. The prediction gives the bubble and liquid velocities, bubble volume fraction, bubble and liquid Reynolds stresses, and bubble-liquid velocity correlation. For the predicted two-phase velocities and bubble volume fraction there is only a slight difference between the two models, and the simulation results using both models are in good agreement with the particle image velocimetry (PIV) measurements. Although the two-phase Reynolds stresses predicted using the FSM are in somewhat better agreement with the PIV measurements than those predicted using the ASM, the Reynolds stresses predicted by both models are in general agreement with the experiments. Therefore, it is suggested to use the ASM two-phase turbulence model in engineering applications to save computation time.
Second-order axially moving systems are common models in the field of dynamics, such as axially moving strings, cables, and belts. In traditional research work, it is difficult to obtain closed-form solutions for the forced vibration when the damping effect and the coupling effect of multiple second-order models are considered. In this paper, Green's function method based on the Laplace transform is used to obtain closed-form solutions for the forced vibration of second-order axially moving systems. Taking the axially moving damped string system and the multi-string system connected by springs as examples, the detailed solution methods and the analytical Green's functions of these second-order systems are given. The mode functions and frequency equations are also obtained from the derived Green's functions. The reliability and convenience of the results are verified by several examples. This paper provides a systematic analytical method for the dynamic analysis of second-order axially moving systems, and the obtained Green's functions are applicable to different second-order systems rather than just string systems. In addition, this work also has positive significance for the study of the forced vibration of high-order systems.
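For a single degree of freedom, the method reduces to a familiar special case that is easy to check numerically: the Green's function of a damped oscillator m x'' + c x' + k x = f(t) follows from the Laplace transform 1/(m s^2 + c s + k), and the forced response is the convolution of that kernel with the load. The sketch below verifies the idea on a constant load, whose long-time response must approach the static deflection F0/k; the parameter values are arbitrary, and the paper's distributed, axially moving systems are of course richer than this scalar case.

```python
import math

def greens_function(t, m, c, k):
    """Impulse response (Green's function) of m x'' + c x' + k x = f(t),
    underdamped case, from the Laplace transform 1/(m s^2 + c s + k)."""
    decay = c / (2.0 * m)
    wd = math.sqrt(k / m - decay**2)      # damped natural frequency
    return math.exp(-decay * t) * math.sin(wd * t) / (m * wd)

def forced_response(f, t, m=1.0, c=0.4, k=4.0, dt=1e-3):
    """x(t) = integral_0^t G(t - tau) f(tau) dtau (zero initial conditions),
    evaluated with a midpoint-rule convolution."""
    x, tau = 0.0, dt / 2.0
    while tau < t:
        x += greens_function(t - tau, m, c, k) * f(tau) * dt
        tau += dt
    return x
```

Because the kernel decays, the step-load response settles to F0/k, which makes the convolution easy to validate without solving the ODE separately.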
The products of archival culture in colleges and universities are the final result of the development of archival cultural resources, and the development of archival cultural effects in colleges and universities should be an important part of improving the artistic level of libraries. The existing RippleNet model does not consider the influence of key nodes on recommendation results, and its recommendation accuracy is not high. Therefore, based on the RippleNet model, this paper introduces the influence of complex network nodes into the model and puts forward the Cn-RippleNet model. The performance of the model is verified by experiments, which provide a theoretical basis for the promotion and recommendation of cultural products of university archives, solve the problem that RippleNet does not consider the influence of key nodes on recommendation results, and improve the recommendation accuracy. This paper also reviews the development course of archival cultural products in detail. Finally, based on the Cn-RippleNet model, the cultural effect of university archives is recommended and popularized.
We investigate the area distribution of clusters (loops) in the honeycomb O(n) loop model by means of the worm algorithm with n = 0.5, 1, 1.5, and 2. At the critical point, the number of clusters whose enclosed area is greater than A is proportional to A^(-1) with a proportionality constant C. We confirm numerically that C is universal, and its value agrees well with the predictions based on the Coulomb gas method.
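The scaling form N(>A) proportional to A^(-1) corresponds to an area density p(A) proportional to A^(-2), which is easy to emulate by inverse-transform sampling and then check by counting clusters above a threshold, mirroring the paper's numerical test. The sample size and cutoffs below are arbitrary; this sketch samples the target distribution directly rather than running the worm algorithm.

```python
import random

def sample_areas(n=100000, a_min=1.0, seed=5):
    """Draw cluster areas from p(A) ~ A^-2 on [a_min, inf), so that the
    complementary count obeys N(>A) ~ A^-1, via inverse-transform sampling:
    A = a_min / U with U uniform on (0, 1]."""
    rng = random.Random(seed)
    return [a_min / (1.0 - rng.random()) for _ in range(n)]

def count_greater(areas, a):
    """Number of clusters whose enclosed area exceeds a."""
    return sum(1 for x in areas if x > a)
```

Doubling the area threshold halves the count, so the product a * N(>a) stays roughly constant; in the paper, that constant C is the universal quantity compared against the Coulomb gas prediction.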
A two-scale second-order moment two-phase turbulence model accounting for inter-particle collision is developed, based on the concepts of particle large-scale fluctuation due to turbulence and particle small-scale fluctuation due to collision, and through a unified treatment of these two kinds of fluctuations. The proposed model is used to simulate gas-particle flows in a channel and in a downer. Simulation results are in agreement with the experimental results reported in the references and are close to the results obtained using the single-scale second-order moment two-phase turbulence model superposed with a particle collision model (USM-θ model) in most regions.
In this paper, the static output feedback stabilization of large-scale unstable second-order singular systems is investigated. First, the upper bound of all unstable eigenvalues of second-order singular systems is derived. Then, by using the argument principle, a computable stability criterion is proposed to check the stability of second-order singular systems. Furthermore, by applying model reduction methods to the original systems, a static output feedback design algorithm for stabilizing second-order singular systems is presented. A simulation example is provided to illustrate the effectiveness of the design algorithm.
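For a single second-order (non-singular) block, the stability question behind the criterion above reduces to locating the roots of the quadratic pencil lambda^2 m + lambda c + k in the complex plane: the system is asymptotically stable iff both eigenvalues lie in the open left half-plane. This scalar sketch is illustrative only; singular and large-scale systems require the paper's argument-principle machinery and model reduction.

```python
import cmath

def second_order_eigs(m, c, k):
    """Eigenvalues of the quadratic pencil lambda^2 m + lambda c + k = 0
    for a scalar second-order system m q'' + c q' + k q = u."""
    disc = cmath.sqrt(c * c - 4.0 * m * k)
    return ((-c + disc) / (2.0 * m), (-c - disc) / (2.0 * m))

def is_stable(m, c, k):
    """Asymptotically stable iff every eigenvalue has a negative real part,
    the property a stability criterion must certify."""
    return all(lam.real < 0.0 for lam in second_order_eigs(m, c, k))
```

Positive damping and stiffness keep both roots in the left half-plane, while negative damping or negative stiffness pushes at least one root across the imaginary axis, which is what output feedback is designed to prevent.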
Funding: supported by the National Natural Science Foundation of China (No. 62293481, No. 62071058).
Funding: supported by the Undergraduate Training Program for Innovation and Entrepreneurship of China University of Mining and Technology (CUMT) (Grant No. 202110290059Z) and the Fundamental Research Funds for the Central Universities of CUMT (Grant No. 2020ZDPYMS33).
Abstract: Extensive numerical simulations and scaling analysis are performed to investigate competitive growth between linear and nonlinear stochastic dynamic growth systems, which belong to the Edwards–Wilkinson (EW) and Kardar–Parisi–Zhang (KPZ) universality classes, respectively. The linear growth systems include the EW equation and the model of random deposition with surface relaxation (RDSR); the nonlinear growth systems include the KPZ equation and typical discrete models such as ballistic deposition (BD), etching, and restricted solid on solid (RSOS). The scaling exponents are obtained in both (1+1)- and (2+1)-dimensional competitive growth with nonlinear growth probability p and linear proportion 1 − p. Our results show that, as p changes from 0 to 1, there exist non-trivial crossover effects from the EW to the KPZ universality class under different competitive growth rules. Furthermore, the growth rate and the porosity are also estimated within various linear and nonlinear growths of cooperation and competition.
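The competitive growth rule described above (nonlinear deposition with probability p, linear deposition with probability 1 − p) can be sketched in a few lines. This is a minimal illustrative simulation, not code from the paper: it mixes ballistic deposition (KPZ class) and random deposition with surface relaxation (EW class) on a (1+1)-dimensional periodic lattice; the lattice size, step count, and seed are arbitrary choices.

```python
import random

def grow(L=64, steps=20000, p=0.5, seed=1):
    """Deposit `steps` particles on a 1D lattice of width L with periodic
    boundaries. Each deposition uses the nonlinear BD rule with
    probability p, otherwise the linear RDSR rule."""
    random.seed(seed)
    h = [0] * L
    for _ in range(steps):
        i = random.randrange(L)
        left, right = h[(i - 1) % L], h[(i + 1) % L]
        if random.random() < p:
            # Ballistic deposition: the particle sticks at the first
            # contact with the aggregate (can leave overhangs/pores).
            h[i] = max(left, h[i] + 1, right)
        else:
            # Random deposition with surface relaxation: the particle
            # settles on the lowest of the three columns (stays on a tie).
            m = min(h[i], left, right)
            if h[i] == m:
                h[i] += 1
            elif left == m:
                h[(i - 1) % L] += 1
            else:
                h[(i + 1) % L] += 1
    return h

heights = grow()
mean_height = sum(heights) / len(heights)
```

With p = 0 the total deposited mass equals the step count (RDSR conserves particles in the bulk), while pure BD (p = 1) builds a porous aggregate whose column sum grows at least as fast, which is one way the porosity mentioned in the abstract shows up.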
Funding: Funded by the National Natural Science Foundation of China (Grant No. 41972264), the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR22E080002), and the Observation and Research Station of Geohazards in Zhejiang, Ministry of Natural Resources, China (Grant No. ZJDZGCZ-2021).
Abstract: The geometric characteristics of fractures within a rock mass can be inferred from data sampled from boreholes or exposed surfaces. Recently, the universal elliptical disc (UED) model was developed to represent natural fractures, where the fracture is assumed to be an elliptical disc and the fracture orientation, rotation angle, length of the long axis, and ratio of short- to long-axis lengths are considered as variables. This paper aims to estimate the fracture size- and azimuth-related parameters of the UED model based on trace information from sampling windows. The stereological relationship between the trace length and the size- and azimuth-related parameters of the UED model was established, and formulae for the mean value and standard deviation of the trace length were proposed. The proposed formulae were validated via Monte Carlo simulations, with an error rate of less than 5% between calculated and true values. With respect to estimating the size- and azimuth-related parameters from the trace length, an optimization method was developed based on pre-assumed size and azimuth distribution forms. A hypothetical case study was designed to illustrate and verify the parameter estimation method, where three combinations of sampling windows were used to estimate the parameters; the results showed that the estimated values agree well with the true values. Furthermore, a hypothetical three-dimensional (3D) elliptical fracture network was constructed, and the circular disc, non-UED, and UED models were used to represent it. The simulated trace information from the different models was compared, and the results clearly illustrate the superiority of the proposed UED model over the existing circular disc and non-UED models.
Abstract: Olbers's paradox, known as the dark night sky paradox, is an argument in astrophysics that the darkness of the night sky conflicts with the assumption of an infinite and eternal static universe. Big Bang theory has been used to partially explain this paradox, while introducing new problems. Hereby, we propose an alternative theory, named the Sun Matters Theory, to explain this paradox. Moreover, this theory supports and extends the static universe model proposed by Albert Einstein in 1917. Further, we propose a new universe model, the “Sun Model of the Universe”. Based on the new model and theory, we generate a new field equation by upgrading Einstein's field equation: adding back the cosmological constant, introducing a new variable, and modifying gravity-related concepts. According to the Sun Model of the Universe, dark matter and dark energy comprise the so-called “Sun Matters”. Observed phenomena such as the redshift are explained by the interaction of ordinary light with Sun Matters, which decreases its energy and frequency. In the Sun Model, our big universe consists of many universes with ordinary matter at the core, mixed with and surrounded by Sun Matters. In those universes, the laws of physics may be completely or partially different from those of our ordinary universe, with parallel civilizations. The darkness of night is then explained by the interaction of light with Sun Matters, which sharply decreases the light intensity. Sun Matters also scatter the light from a star, making it shine as observed by Hubble. Further, there is a kind of Sun Matters named “Sun Waters”, surrounding every star. When light passes by the Sun, the Sun Waters deflect it and bend the light path. According to the Sun Model, it is the light that is bent, not space, as proposed in the theories of relativity.
Abstract: The article considers a conceptual universe model as a periodic lattice (network) with nodes defined by the wave function in a background-independent Hamiltonian based on their relations and interactions. This model gives rise to energy bands, similar to those in semiconductor solid-state models. In this context, valence band holes are described as dark matter particles with a heavy effective mass. The conducting band, with a spontaneously symmetry-breaking energy profile, contains particles with a several times lighter effective mass, which can represent luminous matter. Some possible analogies with solid-state physics, such as the comparison between dark and luminous matter, are discussed. Additionally, a tiny dark energy, as intrinsic lattice Casimir energy, is calculated for a lattice with a large number of nodes.
Abstract: The Bayesian inference model is an optimal processing of incomplete information that, more than other models, better captures the way any decision-maker learns and updates his degree of rational belief about possible states of nature, in order to make a better judgment while taking new evidence into account. Such a scientific model proposed for the general theory of decision-making, like all others, whether in statistics, economics, operations research, A.I., data science, or applied mathematics, and regardless of whether it is time-dependent, rests on a theoretical basis axiomatized through related concepts of a universe of possibles, especially the so-called universe (or the world) and the state of nature (or the state of the world), when formulated explicitly. The issue of where to stand as an observer or a decision-maker in order to reframe such a universe of possibles together with a partition structure of knowledge (i.e., semantic formalisms), including a copy of itself as it was initially while generalizing it, is not addressed. Memory being the substratum, whether human or artificial, wherein everything stands, to date even the theoretical possibility of such an operation of self-inclusion is prohibited by pure mathematics. We bring this blind spot to light through a counter-example (namely Archimedes' Eureka experiment) and explore novel theoretical foundations, fitting better with a quantum form than with fuzzy modeling, for dealing with more than one reference universe of possibles. This could open a new path of investigation for the general theory of decision-making, as well as for Artificial Intelligence, often considered the science of the imitation of human abilities, while also being the science of knowledge representation and the science of concept formation and reasoning.
Funding: 2023 Annual Project of the China Association for Construction Education, “Research on the Development Path of Private Colleges and Industry Integration in Liaoning Province Under the Strategy of Intelligent Manufacturing Strong Province” (Project No. 2023239).
Abstract: Against the backdrop of continuous development in the field of education, universities are encouraged to innovate their talent cultivation systems and objectives. The deep integration of industry and education has emerged as an effective strategy, aligning with the basic requirements of the new engineering education initiative and exerting a positive impact on socioeconomic development. However, an analysis of the current state of industry-education integration in universities reveals several issues that require optimization, as they affect the ultimate effectiveness of integration. To remedy this situation and achieve high-quality development, universities need to further explore the construction of a deep integration model of industry and education, adhering to the corresponding principles to form a comprehensive system. On this basis, pathways for deep industry-education integration can be summarized.
Abstract: With the continuous advancement of education informatization, Technological Pedagogical Content Knowledge (TPACK), as a new theoretical framework, provides a novel method for measuring teachers' informatization teaching ability. This study takes normal students majoring in English at three ethnic universities as its research object, collects data through questionnaires, and uses structural equation modeling to conduct data analysis and empirical research into the differences in the TPACK levels of these students across grades and the structural relationships among the elements of the TPACK framework. The technological pedagogical knowledge element of the TPACK structure was not obtained by exploratory factor analysis; however, through path analysis and structural equation modeling, the results show that the one-dimensional core knowledge of technological knowledge (TK), content knowledge (CK), and pedagogical knowledge (PK) has a positive effect on the two-dimensional interaction knowledge of technological content knowledge (TCK) and pedagogical content knowledge (PCK); furthermore, TCK and PCK have a positive effect on TPACK, and TK, CK, and PK indirectly affect TPACK through TCK and PCK. On this basis, suggestions are provided to help ethnic colleges and universities develop the TPACK competence of normal students majoring in English.
Abstract: Twenty-six years ago, a small committee report built upon earlier studies to articulate a compelling and poetic vision for the future of astronomy. This vision called for an infrared-optimized space telescope with an aperture of at least four meters. With the support of their governments in the US, Europe, and Canada, 20,000 people brought this vision to life as the 6.5-meter James Webb Space Telescope (JWST). The telescope is working perfectly, delivering much better image quality than expected [1]. JWST is one hundred times more powerful than the Hubble Space Telescope and has already captured spectacular images of the distant universe. A view of a tiny part of the sky reveals many well-formed spiral galaxies, some over thirteen billion light-years away. These observations challenge the standard Big Bang Model (BBM), which posits that early galaxies should be small and lack well-formed spiral structures. JWST's findings are prompting scientists to reconsider the BBM in its current form. Throughout the history of science, technological advancements have led to new results that challenge established theories, sometimes necessitating their modification or even abandonment. This happened with the geocentric model four centuries ago, and the BBM may face a similar reevaluation as JWST provides more images of the distant universe. In 1937, P. Dirac proposed the Large Number Hypothesis and the Hypothesis of a Variable Gravitational Constant, later incorporating the concept of Continuous Creation of Matter in the universe. The Hypersphere World-Universe Model (WUM) builds on these ideas, introducing a distinct mechanism for matter creation. WUM is proposed as an alternative to the prevailing BBM. Its main advantage is the elimination of the “Initial Singularity” and “Inflation”, offering explanations for many unresolved problems in Cosmology.
WUM is presented as a natural extension of Classical Physics with the potential to bring about a significant transformation in both Cosmology and Classical Physics. Considering JWST's discoveries, WUM's achievements, and the 87 years since Dirac's proposals, it is time to initiate a fundamental transformation in Astronomy, Cosmology, and Classical Physics. The present paper is a continuation of the published article “JWST Discoveries—Confirmation of World-Universe Model Predictions” [2] and a summary of the paper “Hypersphere World-Universe Model: Digest of Presentations, John Chappell Natural Philosophy Society” [3]. Many results obtained there are quoted in the current work without full justification; interested readers are encouraged to consult the referenced papers for detailed explanations.
Abstract: This work presents a comprehensive second-order predictive modeling (PM) methodology based on the maximum entropy (MaxEnt) principle for obtaining best-estimate mean values and correlations for model responses and parameters. This methodology is designated by the acronym 2nd-BERRU-PMP, where the attribute “2nd” indicates that it incorporates second-order uncertainties (means and covariances) and second- (and higher-) order sensitivities of computed model responses to model parameters. The acronym BERRU stands for “Best-Estimate Results with Reduced Uncertainties” and the last letter (“P”) in the acronym indicates “probabilistic,” referring to the MaxEnt probabilistic inclusion of the computational model responses. This is in contradistinction to the 2nd-BERRU-PMD methodology, which deterministically combines the computed model responses with the experimental information, as presented in the accompanying work (Part I). Although both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies yield expressions that include second- (and higher-) order sensitivities of responses to model parameters, the respective expressions for the predicted responses, for the calibrated predicted parameters, and for their predicted uncertainties (covariances) are not identical to each other. Nevertheless, the results predicted by both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies encompass, as particular cases, the results produced by the extant data assimilation and data adjustment procedures, which rely on the minimization, in a least-squares sense, of a user-defined functional meant to represent the discrepancies between measured and computed model responses.
Abstract: This work presents a comprehensive second-order predictive modeling (PM) methodology designated by the acronym 2nd-BERRU-PMD. The attribute “2nd” indicates that this methodology incorporates second-order uncertainties (means and covariances) and second-order sensitivities of computed model responses to model parameters. The acronym BERRU stands for “Best-Estimate Results with Reduced Uncertainties” and the last letter (“D”) in the acronym indicates “deterministic,” referring to the deterministic inclusion of the computational model responses. The 2nd-BERRU-PMD methodology is fundamentally based on the maximum entropy (MaxEnt) principle. This principle is in contradistinction to the fundamental principle underlying the extant data assimilation and/or adjustment procedures, which minimize in a least-squares sense a subjective user-defined functional meant to represent the discrepancies between measured and computed model responses. It is shown that the 2nd-BERRU-PMD methodology generalizes and extends current data assimilation and/or data adjustment procedures while overcoming their fundamental limitations. In the accompanying work (Part II), the alternative framework for developing the “second-order MaxEnt predictive modeling methodology” is presented by incorporating the computed model responses probabilistically (as opposed to deterministically).
Abstract: This work illustrates the innovative results obtained by applying the recently developed second-order predictive modeling methodology called “2nd-BERRU-PM”, where the acronym BERRU denotes “best-estimate results with reduced uncertainties” and “PM” denotes “predictive modeling.” The physical system selected for this illustrative application is a polyethylene-reflected plutonium (acronym: PERP) OECD/NEA reactor physics benchmark. This benchmark is modeled using the neutron transport Boltzmann equation (involving 21,976 uncertain parameters), the solution of which is representative of “large-scale computations.” The results obtained in this work confirm that the 2nd-BERRU-PM methodology predicts best-estimate results that fall between the corresponding computed and measured values, while reducing the predicted standard deviations of the predicted results to values smaller than either the experimentally measured or the computed values of the respective standard deviations. The results also indicate that 2nd-order response sensitivities must always be included to quantify the need for including (or not) the 3rd- and/or 4th-order sensitivities. When the parameters are known with high precision, the contributions of the higher-order sensitivities diminish with increasing order, so that including the 1st- and 2nd-order sensitivities may suffice for obtaining accurate predicted best-estimate response values and best-estimate standard deviations.
On the other hand, when the parameters' standard deviations are sufficiently large to approach (or lie outside) the radius of convergence of the multivariate Taylor series that represents the response in the phase space of model parameters, the contributions stemming from the 3rd- and even 4th-order sensitivities are necessary to ensure consistency between the computed and measured response. In such cases, using only the 1st-order sensitivities erroneously indicates that the computed results are inconsistent with the respective measured response. Ongoing research aims at extending the 2nd-BERRU-PM methodology to fourth order, thus enabling the computation of third-order response correlations (skewness) and fourth-order response correlations (kurtosis).
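The radius-of-convergence argument above can be illustrated with a deliberately simple one-parameter toy, not the PERP benchmark or the BERRU machinery: for a response r(x) = 1/(1 − x), whose Taylor series about x = 0 converges only for |x| < 1, low-order truncations are accurate well inside the convergence radius but fail badly near it.

```python
# Toy response r(x) = 1/(1 - x); its Taylor series about x = 0 is
# 1 + x + x^2 + ..., with radius of convergence 1.
def taylor(x, order):
    """Partial sum of the Taylor series of 1/(1-x) up to x**order."""
    return sum(x**j for j in range(order + 1))

def exact(x):
    return 1.0 / (1.0 - x)

# Well inside the radius of convergence (x = 0.3): errors shrink fast
# with the truncation order, so low-order "sensitivities" suffice.
err_small = [abs(taylor(0.3, k) - exact(0.3)) for k in (1, 2, 4)]

# Near the radius of convergence (x = 0.9): even the 4th-order
# truncation remains far from the exact response.
err_large = [abs(taylor(0.9, k) - exact(0.9)) for k in (1, 2, 4)]
```

At x = 0.3 the 4th-order truncation error is below one percent of the response, while at x = 0.9 it is still of order unity, mirroring the claim that large parameter uncertainties force the inclusion of higher-order terms.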
Funding: Project supported by the Publication Foundation of National Science and Technology Academic Books of China.
Abstract: A universal thermodynamic model for calculating the mass action concentrations of structural units or ion couples in ternary and binary strong electrolyte aqueous solutions was developed based on the ion and molecule coexistence theory and verified in four binary and two ternary aqueous solutions. The calculated mass action concentrations of structural units or ion couples in the four binary and two ternary solutions at 298.15 K are in good agreement with activity data reported in the literature after shifting the standard state and concentration unit. Therefore, the mass action concentrations calculated from the developed universal thermodynamic model can be applied to predict the reaction ability of components in ternary and binary strong electrolyte aqueous solutions. It is also proved that the assumptions applied in the developed thermodynamic model are correct and reasonable, i.e., a strong electrolyte aqueous solution is composed of cations and anions as simple ions, H2O as a simple molecule, and other hydrous salt compounds as complex molecules. The calculated mass action concentrations of structural units or ion couples in ternary and binary strong electrolyte aqueous solutions strictly follow the mass action law.
Funding: Sponsored by the Foundation for Doctorate Thesis of Tsinghua University and the National Key Project (1999-2004) sponsored by the Ministry of Science and Technology of China.
Abstract: A new second-order moment model for turbulent combustion is applied in the simulation of a methane-air turbulent jet flame. The predicted results are compared with experimental results and with those predicted using the well-known EBU-Arrhenius model and the original second-order moment model. The comparison shows the advantage of the new model: it requires almost the same computational storage and time as the original second-order moment model, but its modeling results are in better agreement with experiments than those of the other models. Hence, the new second-order moment model is promising for modeling turbulent combustion with NOx formation at finite reaction rates in engineering applications.
Funding: Supported by the Special Funds for Major State Basic Research Projects, PRC (G1999-0222-08) and the National Natural Science Foundation of China (No. 19872039).
Abstract: A full second-order moment (FSM) model and an algebraic stress (ASM) two-phase turbulence model are proposed and applied to predict turbulent bubble-liquid flows in a 2D rectangular bubble column. The prediction gives the bubble and liquid velocities, bubble volume fraction, bubble and liquid Reynolds stresses, and bubble-liquid velocity correlation. For the predicted two-phase velocities and bubble volume fraction there is only a slight difference between the two models, and the simulation results of both models are in good agreement with the particle image velocimetry (PIV) measurements. Although the two-phase Reynolds stresses predicted by the FSM are in somewhat better agreement with the PIV measurements than those predicted by the ASM, the Reynolds stresses predicted by both models are in general agreement with the experiments. Therefore, it is suggested to use the ASM two-phase turbulence model in engineering applications to save computation time.
Funding: Project supported by the National Natural Science Foundation of China (No. 12272323).
Abstract: Second-order axially moving systems are common models in the field of dynamics, such as axially moving strings, cables, and belts. In traditional research work, it is difficult to obtain closed-form solutions for the forced vibration when the damping effect and the coupling effect of multiple second-order models are considered. In this paper, Green's function method based on the Laplace transform is used to obtain closed-form solutions for the forced vibration of second-order axially moving systems. Taking an axially moving damped string system and a multi-string system connected by springs as examples, the detailed solution methods and the analytical Green's functions of these second-order systems are given. The mode functions and frequency equations are also obtained from the derived Green's functions. The reliability and convenience of the results are verified by several examples. This paper provides a systematic analytical method for the dynamic analysis of second-order axially moving systems, and the obtained Green's functions are applicable to different second-order systems rather than just string systems. In addition, this work also has positive significance for the study of forced vibration in high-order systems.
Abstract: The products of archival culture in colleges and universities are the final result of developing archival cultural resources, and developing archival culture in colleges and universities should be an important part of improving the cultural level of libraries. The existing RippleNet model does not consider the influence of key nodes on recommendation results, so its recommendation accuracy is not high. Therefore, based on the RippleNet model, this paper introduces the influence of complex network nodes into the model and proposes the Cn-RippleNet model. The performance of the model is verified by experiments, which provide a theoretical basis for the promotion and recommendation of the cultural products of university archives, solve the problem that RippleNet does not consider the influence of key nodes on recommendation results, and improve recommendation accuracy. This paper also reviews the development course of archival cultural products in detail. Finally, based on the Cn-RippleNet model, the cultural products of university archives are recommended and popularized.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 10975127) and the Specialized Research Fund for the Doctoral Program of Higher Education, China (Grant No. 20113402110040).
Abstract: We investigate the area distribution of clusters (loops) in the honeycomb O(n) loop model by means of the worm algorithm with n = 0.5, 1, 1.5, and 2. At the critical point, the number of clusters whose enclosed area is greater than A is proportional to A^(-1), with a proportionality constant C. We confirm numerically that C is universal, and its value agrees well with predictions based on the Coulomb gas method.
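The scaling relation N(>A) = C·A^(-1) quoted above can be illustrated with synthetic data. This sketch does not implement the worm algorithm or the O(n) loop model; it simply draws hypothetical "cluster areas" from a distribution with the same A^(-1) tail and checks that the exceedance count times A stays approximately constant.

```python
import random

random.seed(7)
n = 100_000
# Synthetic areas with tail P(area > A) = 1/A for A >= 1: if u is uniform
# on (0, 1], then 1/u has exactly this Pareto tail.
areas = [1.0 / (1.0 - random.random()) for _ in range(n)]

def exceedance(A):
    """Number of sampled clusters with area greater than A."""
    return sum(a > A for a in areas)

# N(>A) * A / n should be close to the constant C = 1 for every cutoff A,
# which is the A^(-1) scaling the abstract describes.
ratios = [exceedance(A) * A / n for A in (2, 5, 10)]
```

In a real study, C would be estimated from the measured cluster areas at criticality rather than from a known synthetic distribution; the point here is only how the constancy of N(>A)·A diagnoses the power law.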
Funding: Supported by the Special Funds for Major State Basic Research, China (G-1999-0222-08) and the Postdoctoral Science Foundation (No. 2004036239).
Abstract: A two-scale second-order moment two-phase turbulence model accounting for inter-particle collision is developed, based on the concepts of particle large-scale fluctuation due to turbulence and particle small-scale fluctuation due to collision, and through a unified treatment of these two kinds of fluctuations. The proposed model is used to simulate gas-particle flows in a channel and in a downer. Simulation results are in agreement with the experimental results reported in the references and are close to the results obtained using the single-scale second-order moment two-phase turbulence model superposed with a particle collision model (USM-θ model) in most regions.
Funding: Project supported by the National Natural Science Foundation of China (Nos. 11971303 and 11871330).
Abstract: In this paper, the static output feedback stabilization of large-scale unstable second-order singular systems is investigated. First, an upper bound on all unstable eigenvalues of second-order singular systems is derived. Then, by using the argument principle, a computable stability criterion is proposed to check the stability of second-order singular systems. Furthermore, by applying model reduction methods to the original systems, a static output feedback design algorithm for stabilizing second-order singular systems is presented. A simulation example is provided to illustrate the effectiveness of the design algorithm.
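As a small illustration of the stability question the abstract addresses (not the paper's argument-principle criterion, and restricted to the regular case where the mass matrix is invertible rather than singular), the eigenvalues of a second-order system M q'' + D q' + K q = 0 can be obtained from the standard first-order companion linearization and inspected for negative real parts:

```python
import numpy as np

def second_order_eigs(M, D, K):
    """Eigenvalues of the quadratic pencil det(s^2 M + s D + K) = 0 for an
    invertible mass matrix M, via the companion linearization
    z' = A z with z = (q, q')."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K, -Minv @ D]])
    return np.linalg.eigvals(A)

def is_stable(M, D, K, tol=1e-9):
    """Asymptotic stability: every eigenvalue lies strictly in the
    open left half-plane."""
    return bool(np.all(second_order_eigs(M, D, K).real < -tol))

# Hypothetical 2-DOF example: light damping with a positive-definite
# stiffness (stable) versus an indefinite stiffness (unstable).
M = np.eye(2)
D = 0.2 * np.eye(2)
K_stable = np.array([[2.0, -1.0], [-1.0, 2.0]])
K_unstable = np.array([[-1.0, 0.0], [0.0, 2.0]])
```

For a genuinely singular mass matrix one would instead solve the generalized eigenvalue problem for the pencil without inverting M, which is part of what makes the singular-system setting of the paper harder.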