Funding: Supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions in order to optimize the binary variables Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal correlations between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent itemsets of multiple particles with better objective values to find mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
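The mask-mining idea above can be illustrated with a minimal sketch: collect the binary Mask vectors of the best-ranked particles and count which non-zero positions frequently appear together. The top fraction, support threshold, and itemset size below are illustrative assumptions, not the published TELSO procedure.

```python
# Minimal sketch of mining frequent non-zero positions from the masks of
# well-performing particles (illustrative only; not the published TELSO code).
from itertools import combinations
from collections import Counter

import numpy as np

def frequent_mask_items(masks, fitness, top_fraction=0.3, min_support=0.5):
    """Return variable-position pairs that co-occur in the best masks.

    masks   : (n_particles, n_vars) array of 0/1 mask vectors
    fitness : (n_particles,) array, smaller is better (assumed)
    """
    masks = np.asarray(masks)
    order = np.argsort(fitness)
    n_top = max(1, int(len(order) * top_fraction))
    elite = masks[order[:n_top]]                      # masks of the best particles

    pair_counts = Counter()
    for row in elite:
        nonzero = np.flatnonzero(row)
        for pair in combinations(nonzero, 2):         # itemsets of size 2
            pair_counts[pair] += 1

    support = {p: c / n_top for p, c in pair_counts.items()}
    return {p: s for p, s in support.items() if s >= min_support}

# Tiny usage example with random data.
rng = np.random.default_rng(0)
masks = (rng.random((20, 10)) < 0.3).astype(int)
fitness = rng.random(20)
print(frequent_mask_items(masks, fitness))
```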
Abstract: Assessment of past-climate simulations of regional climate models (RCMs) is important for understanding the reliability of RCMs when used to project future regional climate. Here, we assess the performance of, and discuss possible causes of biases in, a WRF-based RCM with a grid spacing of 50 km, named WRFG, from the North American Regional Climate Change Assessment Program (NARCCAP) in simulating wet-season precipitation over the Central United States for a period when observational data are available. The RCM reproduces key features of the precipitation distribution during late spring to early summer, although it tends to underestimate the magnitude of precipitation. This dry bias is partially due to the model's lack of skill in simulating nocturnal precipitation, related to the lack of eastward-propagating convective systems in the simulation. Inaccuracy in reproducing the large-scale circulation and environmental conditions is another contributing factor. The simulated pressure gradient between the Rocky Mountains and the Gulf of Mexico is too weak, resulting in weaker southerly winds in between and a reduction of warm, moist air transport from the Gulf to the Central Great Plains. The simulated low-level horizontal convergence fields are less favorable for upward motion than in the NARR and, hence, for the development of moist convection as well. Therefore, a careful examination of an RCM's deficiencies and identification of the sources of error are important when using the RCM to project precipitation changes in future climate scenarios.
Funding: Supported by the Open Project of Xiangjiang Laboratory (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28, ZK21-07), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (CX20230074), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJZ03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of the problem implies a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, in the sparse setting, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify the non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This enhancement dramatically improves the efficacy of frequent pattern mining: selecting candidate sets is no longer based on the number of non-zero variables they contain but on a higher proportion of non-zero variables within specific dimensions. Additionally, we present a novel approach to association rule mining that delves into the relationships between non-zero variables. This methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
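As a companion to the rule-mining step, the sketch below estimates simple association rules of the form "if variable i is non-zero then variable j is non-zero" from a set of candidate solutions. The confidence threshold and data are invented for illustration and are not the paper's operators.

```python
# Sketch of simple association rules between non-zero decision variables:
# confidence(i -> j) = P(x_j != 0 | x_i != 0), estimated from a candidate set.
# Names and thresholds are illustrative assumptions, not the paper's method.
import numpy as np

def association_rules(candidates, min_confidence=0.8):
    """candidates: (n_solutions, n_vars) array; returns {(i, j): confidence}."""
    nz = (np.asarray(candidates) != 0).astype(float)
    n_vars = nz.shape[1]
    occur = nz.sum(axis=0)                       # how often each variable is non-zero
    co_occur = nz.T @ nz                         # pairwise co-occurrence counts
    rules = {}
    for i in range(n_vars):
        if occur[i] == 0:
            continue
        for j in range(n_vars):
            if i == j:
                continue
            conf = co_occur[i, j] / occur[i]
            if conf >= min_confidence:
                rules[(i, j)] = conf
    return rules

rng = np.random.default_rng(1)
cand = rng.random((30, 8)) * (rng.random((30, 8)) < 0.25)   # sparse solutions
print(len(association_rules(cand)))
```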
Abstract: The Einstein ring is usually explained in the framework of the gravitational lens. Conversely, here we apply the framework of the expansion of a superbubble (SB) in order to explain the spherical appearance of the ring. Two classical equations of motion for SBs are derived in the presence of a linear and a trigonometric decrease for the density. A relativistic equation of motion with an inverse-square dependence for the density is derived. The angular distance, adopting the minimax approximation, is derived for three relativistic cosmologies: the standard, the flat, and the wCDM. We derive the relation between redshift and Euclidean distance, which allows fixing the radius of the Einstein ring. The details of the ring are explained by a simple version of the theory of images.
Funding: Supported by the Fujian Science Foundation for Outstanding Youth (Grant No. 2023J06039), the National Natural Science Foundation of China (Grant Nos. 41977259 and U2005205), and the Fujian Province Natural Resources Science and Technology Innovation Project (Grant No. KY-090000-04-2022-019).
Abstract: A bedding slope is a typical heterogeneous slope consisting of different soil/rock layers and is likely to slide along the weakest interface. Conventional slope protection methods for bedding slopes, such as retaining walls, stabilizing piles, and anchors, are time-consuming and labor- and energy-intensive. This study proposes an innovative polymer grout method to improve the bearing capacity and reduce the displacement of bedding slopes. A series of large-scale model tests were carried out to verify the effectiveness of polymer grout in protecting bedding slopes. Specifically, load-displacement relationships and failure patterns were analyzed for different testing slopes with various dosages of polymer. Results show the great potential of polymer grout in improving bearing capacity, reducing settlement, and protecting slopes from being crushed under shearing. The polymer-treated slopes remained structurally intact, while the untreated slope exhibited considerable damage when subjected to loads surpassing the bearing capacity. It is also found that polymer-cemented soils concentrate around the injection pipe, forming a fan-shaped, sheet-like structure. This study proves the improvement of polymer grouting for bedding slope treatment and will contribute to the development of a fast method to protect bedding slopes from landslides.
Funding: Supported by the State Grid Science & Technology Project (5100-202114296A-0-0-00).
Abstract: This article introduces the concept of load aggregation, which involves a comprehensive analysis of loads to acquire their external characteristics for the purpose of modeling and analyzing power systems. Online identification is a computer-based approach to data collection, processing, and system identification, commonly used for adaptive control and prediction. This paper proposes a method for dynamically aggregating large-scale adjustable loads to support high proportions of new-energy integration, aiming to study the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction methods. The experiment selected 300 central air conditioners as the research subject and analyzed their regulation characteristics, economic efficiency, and comfort. The experimental results show that as the adjustment time of the air conditioners increases from 5 minutes to 35 minutes, the stable adjustment quantity during the adjustment period decreases from 28.46 to 3.57, indicating that air-conditioning loads can be controlled over a long period and have better adjustment effects in the short term. Overall, the experimental results of this paper demonstrate that analyzing the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction algorithms is effective.
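Online identification in settings like this is often implemented with recursive least squares (RLS); the sketch below identifies an assumed first-order aggregate load model from streaming data. The model structure, forgetting factor, and data are illustrative assumptions, not the paper's identification scheme.

```python
# Minimal recursive-least-squares sketch for online identification of an
# assumed first-order aggregate load model y_k = a*y_{k-1} + b*u_k.
import numpy as np

def rls_identify(y, u, lam=0.98):
    theta = np.zeros(2)                     # parameter estimate [a, b]
    P = np.eye(2) * 1e3                     # large initial covariance
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], u[k]])    # regressor
        K = P @ phi / (lam + phi @ P @ phi) # gain
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# Synthetic data: true a = 0.9, b = 0.5 plus a little noise.
rng = np.random.default_rng(2)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k] + 0.01 * rng.standard_normal()
print(rls_identify(y, u))                   # should be close to [0.9, 0.5]
```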
Abstract: Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large range that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce computing expenses with a huge number of initial particles, and a credibility indicator is designed to avoid location mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km^2 using 0.09 km^2 aerial images without any prior information. The absolute localization error is less than 10 m.
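The particle filter stage relies on resampling; a generic systematic-resampling step is sketched below for illustration. It does not reproduce the paper's improved resampling strategy or credibility indicator, and the candidate positions and weights are invented.

```python
# Generic systematic resampling for a particle filter (illustrative; the
# paper's improved resampling strategy and credibility indicator are not shown).
import numpy as np

def systematic_resample(particles, weights, rng):
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one stratified draw per particle
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    idx = np.searchsorted(cumulative, positions)
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(3)
particles = rng.uniform(0, 100, size=(1000, 2))     # candidate positions (x, y)
weights = rng.random(1000)
weights /= weights.sum()
resampled, new_w = systematic_resample(particles, weights, rng)
print(resampled.shape, new_w[0])
```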
Abstract: This article concerns the integral related to the transverse comoving distance and, in turn, to the luminosity distance, both in standard non-flat and flat cosmology. The purpose is to determine a straightforward mathematical formulation for the luminosity distance as a function of the transverse comoving distance for all cosmology cases with a non-zero cosmological constant by adopting a different mindset. The applied method deals with incomplete elliptic integrals of the first kind associated with the polynomial roots admitted in the comoving distance integral according to the scientific literature. The outcome shows that the luminosity distance can be obtained by the combination of an analytical solution followed by a numerical integration in order to account for the redshift. This solution is compared solely with the current Gaussian quadrature method used as the basic recognized algorithm in standard cosmology.
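For reference, the baseline the abstract compares against evaluates the comoving-distance integral numerically and then applies D_L = (1 + z) D_M in the flat case. The sketch below does this with adaptive Gauss-Kronrod quadrature from SciPy; the cosmological parameter values are illustrative, not the paper's.

```python
# Flat-LambdaCDM comoving and luminosity distance by numerical quadrature
# (the baseline approach mentioned above); parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light, km/s

def luminosity_distance(z, H0=70.0, Om=0.3, OL=0.7):
    E = lambda zp: np.sqrt(Om * (1 + zp) ** 3 + OL)      # flat case, no curvature term
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)         # dimensionless comoving integral
    dc *= C_KM_S / H0                                     # comoving distance, Mpc
    return (1 + z) * dc                                   # D_L = (1 + z) * D_M (flat)

print(round(luminosity_distance(1.0), 1), "Mpc")
```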
Abstract: Three mechanisms for an alternative to the Doppler effect as an explanation for the redshift are reviewed. A fourth mechanism is the attenuation of the light as given by the Beer-Lambert law. The average value of the Hubble constant is therefore derived by processing the galaxies of the NED-D catalog, in which the distances are independent of the redshift. The observed anisotropy of the Hubble constant is reproduced by adopting a rim model, a chord model, and both 2D and 3D Voronoi diagrams.
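The averaging step amounts to estimating H0 as the mean of cz/d over galaxies whose distances do not come from their redshifts. A toy version is sketched below; the catalog entries are invented, not NED-D data.

```python
# Toy estimate of the Hubble constant from redshift-independent distances,
# H0 ~ mean of c*z/d over galaxies (the catalog values below are invented).
import numpy as np

C_KM_S = 299792.458

distances_mpc = np.array([10.5, 35.2, 62.0, 81.3])      # hypothetical NED-D-style distances
redshifts = np.array([0.0025, 0.0081, 0.0142, 0.0190])  # matching hypothetical redshifts

h0_samples = C_KM_S * redshifts / distances_mpc          # km/s/Mpc, low-z approximation v ~ c*z
print("mean H0 =", h0_samples.mean(), "km/s/Mpc")
```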
Funding: Supported in part by the Central Government Guides Local Science and Technology Development Funds (Grant No. YDZJSX2021A038), in part by the National Natural Science Foundation of China (Grant No. 61806138), and in part by the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) (Grant No. 2021FNA04014).
Abstract: The large-scale multi-objective optimization algorithm (LSMOA), based on the grouping of decision variables, is an advanced method for handling high-dimensional decision variables. However, in practical problems the interactions among decision variables are intricate, leading to large group sizes and suboptimal optimization effects; hence, a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups based on the interactions among different decision variables. If the size of a group surpasses the set threshold, that group undergoes a process of weighting and overlapping grouping. Specifically, the interaction strength is evaluated based on the interaction frequency and the number of objectives involved among the various decision variables. The decision variable with the highest interaction in the group is identified and set aside, and the remaining variables are then reclassified into subgroups. Finally, the decision variable with the strongest interaction is added to each subgroup. MOEAWOD minimizes the interactivity between different groups and maximizes the interactivity of decision variables within groups, which contributes to well-directed convergence and diversity exploration within the different groups. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of our method. Compared with the other algorithms, our method remains at an advantage.
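Grouping by interaction is commonly based on a pairwise test in the style of differential grouping: two variables interact if the effect of perturbing one changes when the other is also perturbed. The sketch below shows this generic test on a toy function; it is not the MOEAWOD weighting or overlapping procedure itself.

```python
# Generic pairwise interaction check between decision variables (differential-
# grouping style): x_i and x_j interact if the effect of perturbing x_i changes
# when x_j is also perturbed.  Illustrative only; not the MOEAWOD grouping itself.
import numpy as np

def interacts(f, x, i, j, delta=1.0, eps=1e-6):
    x = np.asarray(x, dtype=float)
    xi, xj, xij = x.copy(), x.copy(), x.copy()
    xi[i] += delta
    xj[j] += delta
    xij[i] += delta
    xij[j] += delta
    d1 = f(xi) - f(x)          # effect of perturbing x_i alone
    d2 = f(xij) - f(xj)        # effect of perturbing x_i after perturbing x_j
    return abs(d1 - d2) > eps

# Example: the term x0*x1 couples variables 0 and 1, while x2 is separable.
f = lambda x: x[0] * x[1] + x[2] ** 2
x0 = np.zeros(3)
print(interacts(f, x0, 0, 1), interacts(f, x0, 0, 2))   # True False
```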
Funding: The work was supported by the Humanities and Social Sciences Fund of the Ministry of Education (No. 22YJA630119), the National Natural Science Foundation of China (No. 71971051), and the Natural Science Foundation of Hebei Province (No. G2021501004).
Abstract: With the development of big data and social computing, large-scale group decision making (LGDM) is now merging with social networks. Using social network analysis (SNA), this study proposes an LGDM consensus model that considers the trust relationships among decision makers (DMs). In the consensus measurement process, the social network is constructed according to the social relationships among DMs, and the Louvain method is introduced to partition the social network into subgroups. In this study, the weights of each decision maker and each subgroup are computed from comprehensive network weights and trust weights. In the consensus improvement process, a feedback mechanism with four identification rules and two direction rules is designed to guide the improvement process. Based on the trust relationships among DMs, the preferences are modified and the corresponding social network is updated to accelerate consensus. Compared with previous research, the proposed model not only allows the subgroups to be reconstructed and updated during the adjustment process but also improves the accuracy of the adjustment through the feedback mechanism. Finally, an example analysis is conducted to verify the effectiveness and flexibility of the proposed method. Moreover, compared with previous studies, the superiority of the proposed method in solving the LGDM problem is highlighted.
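The subgrouping step can be reproduced with any Louvain implementation; below is a small sketch using the louvain_communities function available in networkx (version 2.8 or later), with an invented trust network as an edge-weighted graph.

```python
# Sketch of forming decision-maker subgroups from a trust network with the
# Louvain method, using networkx (edges and weights below are invented).
import networkx as nx

G = nx.Graph()
trust_edges = [("dm1", "dm2", 0.9), ("dm2", "dm3", 0.8), ("dm4", "dm5", 0.7),
               ("dm5", "dm6", 0.6), ("dm3", "dm4", 0.1)]
G.add_weighted_edges_from(trust_edges)

# Louvain community detection (networkx >= 2.8 provides louvain_communities).
subgroups = nx.community.louvain_communities(G, weight="weight", seed=42)
print(subgroups)
```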
Abstract: Given the pending completion and publication of the final Dark Energy Spectroscopic Instrument (DESI) results, this letter presents the corresponding predictions of the Haug-Tatum cosmology (HTC) model. In particular, we show in tabular and graphic form the “dark energy decay” curve which the HTC model predicts for cosmological redshifts covering the range z = 0 - 2.0. Furthermore, we present the HTC model distance-vs-redshift curve in comparison to the three very different curves (for luminosity distance, angular diameter distance, and comoving distance) calculated within the Lambda-CDM model. Whether the expansion of our universe is actually undergoing slight acceleration, or the finely-tuned cosmic coasting at constant velocity of Rh = ct models (including HTC), will hopefully soon be answered by the many pending observational studies.
Abstract: We analyze a simple model for tired light in a cosmological environment, a generalized model, and a spectroscopic model. The three models are tested on different compilations for the distance modulus of supernovae. The tests are negative for the simple tired light and spectroscopic models, but positive for the generalized tired light model. The percentage error of the distance modulus for the generalized tired light model, compared with the distance modulus of standard cosmology, is less than one percent over the considered ranges in redshift.
Funding: Supported by the National Natural Science Foundation of China (52074046, 52122403, 51834003, and 52274073), the Graduate Research and Innovation Foundation of Chongqing (CYB22023), the Chongqing Talents Plan for Young Talents (cstc2022ycjh-bgzxm0035), the Hunan Institute of Engineering (21RC025 and XJ2005), and the Hunan Province Education Department (21B0664).
Abstract: Underground salt cavern CO2 storage (SCCS) offers the dual benefits of enabling extensive CO2 storage and facilitating the utilization of CO2 resources, while contributing to the regulation of the carbon market. Its economic and operational advantages over traditional carbon capture, utilization, and storage (CCUS) projects make SCCS a more cost-effective and flexible option. Despite the widespread use of salt caverns for storing various substances, differences exist between SCCS and traditional salt cavern energy storage in terms of gas-tightness, carbon injection, brine extraction control, long-term carbon storage stability, and site selection criteria. These distinctions stem from the unique phase-change characteristics of CO2 and the application scenarios of SCCS. Therefore, targeted and forward-looking scientific research on SCCS is imperative. This paper introduces the implementation principles and application scenarios of SCCS, emphasizing its connections with carbon emissions, carbon utilization, and renewable energy peak shaving. It delves into the operational characteristics and economic advantages of SCCS compared with other CCUS methods, and addresses associated scientific challenges. In this paper, we establish a pressure equation for carbon injection and brine extraction that considers the phase-change characteristics of CO2, and we analyze the pressure during carbon injection. By comparing the viscosities of CO2 and other gases, SCCS's excellent sealing performance is demonstrated. Building on this, we develop a long-term stability evaluation model and associated indices, which analyze the impact of the injection speed and minimum operating pressure on stability. Field countermeasures to ensure stability are proposed. Site selection criteria for SCCS are established, preliminary salt mine sites suitable for SCCS are identified in China, and an initial estimate of the achievable carbon storage scale in China is made at over 51.8-77.7 million tons, utilizing only 20%-30% of the volume of abandoned salt caverns. This paper addresses key scientific and engineering challenges facing SCCS, determines crucial technical parameters such as the operating pressure, burial depth, and storage scale, and offers essential guidance for implementing SCCS projects in China.
Abstract: This paper introduces the two Upsilon constants to the reader. Their usefulness is described with respect to acting as coupling constants between the CMB temperature and the Hubble constant. In addition, this paper summarizes the current state of quantum cosmology with respect to the Flat Space Cosmology (FSC) model. Although the FSC quantum cosmology formulae were published in 2018, they are only rearrangements and substitutions of the other assumptions into the original FSC Hubble temperature formula. In a real sense, this temperature formula was the first quantum cosmology formula developed since Hawking's black hole temperature formula. A recent development in the last month proves that the FSC Hubble temperature formula can be derived from the Stefan-Boltzmann law. Thus, this Hubble temperature formula effectively unites some quantum developments with the general relativity model inherent in FSC. More progress towards unification in the near future is expected.
Abstract: This paper shows how the Flat Space Cosmology model correlates the recombination-epoch CMB temperature of 3000 K with a cosmological redshift of 1100. This proof is given in support of the recent publication showing that the Tatum and Seshavatharam Hubble temperature formulae can be derived using the Stefan-Boltzmann dispersion law. Thus, as explained herein, the era of high-precision Planck-scale quantum cosmology has arrived.
Abstract: Here, using the Scale-Symmetric Theory (SST), we explain the cosmological tension and the origin of the largest cosmic structures. We show that a change in the value of the strong coupling constant for cold baryonic matter leads to the disagreement in the galaxy clustering amplitude, quantified by the parameter S8. Within the same model we describe the Hubble tension. We also describe the mechanism that transforms gravitational collapse into an explosion; it concerns the dynamics of virtual fields that lead to dark energy. Our calculations concern the Type Ia supernovae and the core-collapse supernovae. We calculated the quantized masses of the progenitors of supernovae and the total energy emitted during the explosion, and we calculated how much of the released energy was transferred to neutrinos. The value of the speed of sound in strongly interacting matter measured at the LHC confirms that the model presented here is correct. Our calculations show that the Universe is cyclic.
Funding: Supported partly by the NSF of China (Grant No. 11801163), the NSF of Hunan Province (Grant Nos. 2021JJ50032, 2023JJ50164, and 2023JJ50165), and the Degree & Postgraduate Reform Project of Hunan University of Technology and Hunan Province (Grant Nos. JGYB23009 and 2024JGYB210).
Abstract: We introduce a factorized Smith method (FSM) for solving large-scale, high-ranked T-Stein equations within the banded-plus-low-rank structure framework. To effectively reduce both computational complexity and storage requirements, we develop techniques including deflation and shift, partial truncation and compression, as well as a redesign of the residual computation and termination condition. Numerical examples demonstrate that the FSM outperforms the Smith method implemented with a hierarchical HODLR-structured toolkit in terms of CPU time.
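For orientation, the plain (unfactorized) Smith iteration solves a standard Stein equation X = A X B + C by repeated squaring of the coefficient matrices. The sketch below shows only that basic fixed-point scheme, not the factorized, banded-plus-low-rank T-Stein solver of the paper; the test matrices are random and scaled so the iteration converges.

```python
# Basic (unfactorized) squared Smith iteration for the standard Stein equation
#     X = A @ X @ B + C,
# shown only to illustrate the Smith-type fixed point that the FSM builds on.
import numpy as np

def smith_stein(A, B, C, tol=1e-12, max_iter=50):
    X, Ak, Bk = C.copy(), A.copy(), B.copy()
    for _ in range(max_iter):
        step = Ak @ X @ Bk              # adds the next block of terms A^j C B^j
        X = X + step
        if np.linalg.norm(step) <= tol * np.linalg.norm(X):
            break
        Ak, Bk = Ak @ Ak, Bk @ Bk       # squaring doubles the number of summed terms
    return X

rng = np.random.default_rng(4)
n = 50
A = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)   # scaled so the iteration converges
B = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)
C = rng.standard_normal((n, n))
X = smith_stein(A, B, C)
print(np.linalg.norm(X - (A @ X @ B + C)))            # residual should be near machine precision
```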
Abstract: Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization of detection data of complex types, long time spans, and uneven spatial distributions were achieved. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
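The tone-mapping step described above is essentially histogram equalization of a scalar attribute field; a minimal NumPy sketch is given below, with a synthetic skewed field standing in for the detection attributes.

```python
# Minimal histogram-equalization tone mapping for a scalar attribute field
# (illustrative of the statistical tone-mapping step described above).
import numpy as np

def equalize(values, n_bins=256):
    """Map values into [0, 1] so the output histogram is approximately flat."""
    hist, edges = np.histogram(values, bins=n_bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                   # normalize the CDF to [0, 1]
    return np.interp(values, edges[1:], cdf)

rng = np.random.default_rng(5)
field = rng.exponential(scale=3.0, size=10_000)      # skewed synthetic detection attribute
mapped = equalize(field)
print(mapped.min(), mapped.max())
```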
Abstract: Gravitational wave detection has ushered in a new era of observing the universe, providing humanity with a novel window for cosmic cognition. This theoretical study systematically traces the developmental trajectory of gravitational wave detection technology and delves into its profound impact on cosmological research. From Einstein's prediction in general relativity to LIGO's groundbreaking discovery, the article meticulously delineates the key theoretical and technological milestones in gravitational wave detection, with particular emphasis on elucidating the principles and evolution of core detection technologies such as laser interferometers. The research thoroughly explores the theoretical application value of gravitational waves in verifying general relativity, studying the physics of compact celestial bodies like black holes and neutron stars, and precisely measuring cosmological parameters. The article postulates that gravitational wave observations may offer new research perspectives for addressing cosmological conundrums such as dark matter, dark energy, and early universe evolution. The study also discusses the scientific prospects of combining gravitational wave observations with electromagnetic waves, neutrinos, and other multi-messenger observations, analyzing the potential value of this multi-messenger astronomy in deepening cosmic cognition. Looking ahead, the article examines cutting-edge concepts such as space-based gravitational wave detectors and predicts potential developmental directions for gravitational wave astronomy. This research not only elucidates the theoretical foundations of gravitational wave detection technology but also provides a comprehensive theoretical framework for understanding the far-reaching impact of gravitational waves on modern cosmology.