Beamspace super-resolution methods for elevation estimation in multipath environments have attracted significant attention, especially the beamspace maximum likelihood (BML) algorithm. However, the difference beam is rarely used in super-resolution methods, particularly for low-elevation estimation. The target spatial information carried by the difference beam differs from that carried by the sum beam, and using difference beams does not significantly increase the complexity of the system or the algorithms. This paper therefore applies the difference beam in the beamformer to improve the elevation estimation performance of the BML algorithm, and the direction and number of beams can be adjusted according to actual needs. The theoretical root-mean-square error (RMSE) of the target elevation angle and the computational complexity of the proposed algorithms are analyzed. Finally, computer simulations and real-data processing results demonstrate the effectiveness of the proposed algorithms.
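A minimal sketch (not the paper's BML implementation) of why the difference beam carries extra angle information: for a uniform linear array, the sum/difference monopulse ratio varies monotonically with the arrival angle near boresight. The array size and spacing below are assumed values for illustration only.

```python
import numpy as np

N = 16                      # number of elements (assumed)
d = 0.5                     # element spacing in wavelengths (assumed)

def steering(theta_deg):
    """Array steering vector for arrival angle theta (degrees)."""
    n = np.arange(N)
    return np.exp(2j * np.pi * d * n * np.sin(np.radians(theta_deg)))

w_sum = np.ones(N)                          # sum beam: uniform weights
w_dif = np.concatenate([np.ones(N // 2),    # difference beam: sign flip
                        -np.ones(N // 2)])  # between the two halves

def monopulse_ratio(theta_deg):
    a = steering(theta_deg)
    return np.imag((w_dif @ a) / (w_sum @ a))

# Near boresight the ratio is zero at 0 degrees and changes monotonically
# with angle, which is the information a difference beam adds to beamspace.
angles = np.linspace(-2, 2, 5)
ratios = [monopulse_ratio(t) for t in angles]
print(ratios)
```

For this half-array sign-flip weighting the ratio reduces analytically to -tan(4 * pi * d * sin(theta)), so it is invertible over the main lobe.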
The neutron spectrum unfolding by a Bonner sphere spectrometer (BSS) is a complex multidimensional problem that requires solving a Fredholm integral equation of the first kind. The maximum likelihood expectation maximization (MLEM) algorithm is prone to local optima, while the particle swarm optimization (PSO) algorithm can produce unreasonable flight directions and step lengths for the particles, leading to invalid iterations that degrade efficiency and accuracy. To address these problems, an improved PSO-MLEM algorithm, combining PSO and MLEM, is proposed for neutron spectrum unfolding. A dynamic acceleration factor balances global and local search and improves the convergence speed and accuracy of the algorithm. First, the Monte Carlo method was used to simulate the BSS and obtain its response function and count rates. In the count-rate simulation, four reference spectra from IAEA Technical Report Series No. 403 were used as inputs. The PSO-MLEM algorithm was used to unfold the neutron spectrum from the simulated data and was verified by comparing the unfolded spectrum with the reference spectrum. Finally, a 252Cf neutron source was measured with the BSS, and the PSO-MLEM algorithm was used to unfold the experimental neutron spectrum. Compared with maximum entropy deconvolution (MAXED), PSO, and MLEM, the PSO-MLEM algorithm has fewer parameters and automatically adjusts the dynamic acceleration factor to avoid local optima. Its convergence speed is 1.4 times that of MLEM and 3.1 times that of PSO.
Compared with PSO, MLEM, and MAXED, the correlation coefficients of the PSO-MLEM algorithm are increased by 33.1%, 33.5%, and 1.9%, respectively, and the relative mean errors are decreased by 98.2%, 97.8%, and 67.4%.
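A sketch of the MLEM half of the unfolding problem on toy data (the PSO hybridization and dynamic acceleration factor from the paper are omitted; the response matrix and spectrum below are synthetic assumptions): the multiplicative MLEM update drives the forward-folded counts R @ phi toward the measured count rates c while keeping the spectrum nonnegative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spheres, n_bins = 8, 20
R = rng.uniform(0.1, 1.0, (n_spheres, n_bins))   # toy response matrix
phi_true = rng.uniform(0.5, 2.0, n_bins)         # toy "true" spectrum
c = R @ phi_true                                 # noiseless count rates

phi = np.ones(n_bins)                            # flat initial guess
for _ in range(2000):
    ratio = c / (R @ phi)                        # measured / predicted
    phi *= (R.T @ ratio) / R.sum(axis=0)         # multiplicative MLEM update

residual = np.linalg.norm(R @ phi - c) / np.linalg.norm(c)
print(residual)   # relative forward-model residual
```

Because the update is multiplicative, a nonnegative initial guess stays nonnegative throughout, which is one reason MLEM is popular for spectra and images.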
Maximum likelihood estimation (MLE) is an effective method for localizing radioactive sources in a given area. However, it requires an exhaustive search for parameter estimation, which is time-consuming. In this study, heuristic techniques were employed to search for the radiation source parameters that maximize the likelihood computed from a network of sensors, effectively reducing the time consumption of MLE. First, the radiation source was detected using the k-sigma method. Subsequently, MLE was applied for parameter estimation using the readings and positions of the detectors that had detected the source. A comparative study evaluated the estimation accuracy and time consumption of MLE for traditional methods and heuristic techniques. The traditional MLE was performed via a grid search using fixed and multiple resolutions. Additionally, four commonly used heuristic algorithms were applied: the firefly algorithm (FFA), particle swarm optimization (PSO), ant colony optimization (ACO), and artificial bee colony (ABC). The experiment was conducted using real data collected by the Low Scatter Irradiator facility at the Savannah River National Laboratory as part of the Intelligent Radiation Sensing System program. The comparative study showed that the estimation time was 3.27 s using fixed-resolution MLE and 0.59 s using multi-resolution MLE. The time consumption for the heuristic-based MLE was 0.75, 0.03, 0.02, and 0.059 s for FFA, PSO, ACO, and ABC, respectively. The location estimation error was approximately 0.4 m using either the grid-search-based MLE or the heuristic-based MLE. Hence, heuristic-based MLE can provide comparable estimation accuracy through a less time-consuming process than traditional MLE.
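A hedged sketch of the exhaustive grid-search MLE that the heuristics replace: a point source with an idealized inverse-square count model (no background or detector efficiency), Poisson likelihood, and a fixed-resolution sweep over candidate positions. All geometry and numbers are illustrative, and the source strength is assumed known for brevity; the heuristic searches (FFA/PSO/ACO/ABC) would replace the double loop below.

```python
import numpy as np

rng = np.random.default_rng(1)
detectors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
src, strength = np.array([6., 4.]), 500.0        # assumed ground truth

def expected_counts(pos, A):
    d2 = np.sum((detectors - pos) ** 2, axis=1)
    return A / d2                                 # inverse-square model

counts = rng.poisson(expected_counts(src, strength))

def log_likelihood(pos, A):
    mu = expected_counts(pos, A)
    return np.sum(counts * np.log(mu) - mu)       # Poisson log-likelihood

# Exhaustive fixed-resolution grid search -- the time-consuming step.
xs = ys = np.arange(0.5, 10.0, 0.25)
best = max(((log_likelihood(np.array([x, y]), strength), x, y)
            for x in xs for y in ys), key=lambda t: t[0])
print(best[1], best[2])   # estimated source position
```

A multi-resolution variant would rerun the sweep on a shrinking window around the current best cell, which is how the 3.27 s vs 0.59 s gap in the abstract arises.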
The conformal array can make full use of the aperture, save space, meet aerodynamic requirements, and is sensitive to polarization information. It has broad application prospects in the military, aerospace, and communication fields. Joint polarization and direction-of-arrival (DOA) estimation based on the conformal array, together with theoretical analysis of its parameter estimation performance, is key to promoting the engineering application of conformal arrays. To address these problems, this paper establishes the wave-field signal model of the conformal array. Then, for the case of a single target, the cost function of the maximum likelihood (ML) estimator is rewritten with the Rayleigh quotient, converting a problem of maximizing a ratio of quadratic forms into one of minimizing quadratic forms. On this basis, rapid parameter estimation is achieved with the idea of the manifold separation technique (MST). Compared with the modified variable projection (MVP) algorithm, it reduces the computational complexity and improves the parameter estimation performance. Meanwhile, the MST is used to compute the partial derivatives of the steering vector. The theoretical performance of the ML estimator, the multiple signal classification (MUSIC) estimator, and the Cramer-Rao bound (CRB) based on the conformal array are then derived, providing a theoretical foundation for the engineering application of conformal arrays. Finally, simulation experiments verify the effectiveness of the proposed method.
In this paper, a weighted maximum likelihood technique (WMLT) for the logistic regression model is presented. The method depends on a weight function that adapts continuously using Mahalanobis distances of the predictor variables. Under the model, the asymptotic consistency of the suggested estimator is demonstrated, and its finite-sample properties are investigated via simulation. In simulation studies and on real data sets, the newly proposed technique shows the best performance among all compared estimators.
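A sketch of one plausible weighting scheme in this spirit (the paper's exact weight function is not reproduced here): downweight observations whose predictors have large Mahalanobis distance, then fit logistic regression by maximizing the weighted log-likelihood. The data, the chi-square cutoff, and the `min(1, c/d2)` weight are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, p = 200, 2
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)
X[:3] = 8.0                                   # inject leverage outliers

# Mahalanobis distances of the predictors, mapped to weights in (0, 1].
mu, S = X.mean(axis=0), np.cov(X.T)
Xc = X - mu
d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc)
w = np.minimum(1.0, chi2.ppf(0.95, df=p) / d2)   # one simple choice

def neg_wll(beta):
    eta = X @ beta
    # np.logaddexp(0, eta) is a numerically safe log(1 + exp(eta)).
    return -np.sum(w * (y * eta - np.logaddexp(0, eta)))

beta_hat = minimize(neg_wll, np.zeros(p)).x
print(beta_hat)
```

Setting all weights to 1 recovers the ordinary (non-robust) logistic MLE, which the injected leverage points would pull toward zero.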
In this paper, we study spatial cross-sectional data models in the form of the matrix exponential spatial specification (MESS), where MESS appears in both the dependent variable and the error term. Empirical likelihood (EL) ratio statistics are established for the parameters of the MESS model. It is shown that the limiting distributions of the EL ratio statistics follow chi-square distributions, which are used to construct confidence regions for the model parameters. Simulation experiments are conducted to compare the performance of confidence regions based on the EL method and the normal approximation method.
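The chi-square calibration behind EL confidence regions, shown in the simplest setting (EL for a scalar mean) rather than for the MESS model's parameters: the region is { mu : -2 log R(mu) <= chi-square quantile }, where the inner Lagrange multiplier is found by root-finding. The exponential sample below is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

rng = np.random.default_rng(6)
x = rng.exponential(2.0, 300)     # toy sample, true mean 2.0

def neg2logR(mu):
    """-2 log empirical likelihood ratio for the mean mu."""
    d = x - mu
    # Bracket for the multiplier keeping all 1 + lam*d positive.
    lo = (-1 + 1e-10) / d.max()
    hi = (-1 + 1e-10) / d.min()
    lam = brentq(lambda l: np.sum(d / (1 + l * d)), lo, hi)
    return 2 * np.sum(np.log1p(lam * d))

inside = neg2logR(x.mean())       # statistic at the sample mean is ~0
print(inside, neg2logR(1.2) > chi2.ppf(0.95, 1))
```

Values of mu with statistic below `chi2.ppf(0.95, 1)` form the 95% EL confidence interval; the same recipe generalizes to vector parameters with a chi-square of matching degrees of freedom, as in the paper.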
Count data are almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed, but the problem of over-dispersion remains unresolved, especially in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in a set of count data assumed to follow the negative binomial distribution. Discrete change point procedures discussed in the literature work well for equi-dispersed data. The new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, hence its advantage over other count-data change point techniques. The negative binomial multiple change point algorithm was tested using simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on several partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The change point algorithm works best for large datasets, though it also performs well for small and medium-sized datasets with little to no error in the location of change points. The algorithm correctly detects changes when they are present and does not signal change when none is present. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting.
Sensitivity analysis of the test power showed that the likelihood ratio test is most powerful when the simulated change points are located midway through the sample data, as opposed to in the periphery. Further, the test is more powerful when the change is located three-quarters of the way through the sample data than when it is closer (a quarter of the way) to the first observation.
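A sketch of single change-point detection by likelihood ratio on negative binomial counts (the paper uses full MLE and recursive binary segmentation for multiple changes; here segment parameters come from moment matching for brevity, and the simulated data are illustrative).

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(3)
x = np.concatenate([rng.negative_binomial(5, 0.5, 100),    # mean 5
                    rng.negative_binomial(5, 0.2, 100)])   # mean 20

def nb_loglik(seg):
    """NB log-likelihood with moment-matched (r, p); enforce var > mean."""
    m, v = seg.mean(), max(seg.var(), seg.mean() * 1.01)
    p = m / v
    r = m * m / (v - m)
    return nbinom.logpmf(seg, r, p).sum()

full = nb_loglik(x)
# Likelihood ratio statistic for every candidate split (10-point margin).
stats = [2 * (nb_loglik(x[:t]) + nb_loglik(x[t:]) - full)
         for t in range(10, len(x) - 10)]
tau = 10 + int(np.argmax(stats))
print(tau)   # estimated change-point location
```

Binary segmentation would now recurse on `x[:tau]` and `x[tau:]`, keeping each split whose maximum statistic exceeds the critical value.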
Laser-induced fluorescence (LIF) spectroscopy is employed for plasma diagnosis, necessitating deconvolution algorithms to isolate the Doppler effect from the raw spectral signal. However, direct deconvolution becomes invalid in the presence of noise, as it leads to infinite amplification of high-frequency noise components. To address this issue, we propose a deconvolution algorithm based on the maximum entropy principle. We validate the effectiveness of the proposed algorithm using simulated LIF spectra at various noise levels (signal-to-noise ratio, SNR = 20-80 dB) and measured LIF spectra with Xe as the working fluid. In the typical measured spectrum (SNR = 26.23 dB) experiment, compared with the Gaussian filter and the Richardson-Lucy (R-L) algorithm, the proposed algorithm demonstrates an increase in SNR of 1.39 dB and 4.66 dB, respectively, along with a reduction in the root-mean-square error (RMSE) of 35% and 64%, respectively. Additionally, there is a decrease in the spectral angle (SA) of 0.05 and 0.11, respectively. In the high-quality spectrum (SNR = 43.96 dB) experiment, the running time of the proposed algorithm is reduced by about 98% compared with the R-L iterative algorithm. Moreover, the maximum entropy algorithm avoids parameter tuning and is more suitable for automatic implementation. In conclusion, the proposed algorithm can accurately resolve Doppler spectrum details while effectively suppressing noise, highlighting its advantage in LIF spectral deconvolution applications.
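A sketch of the Richardson-Lucy baseline the paper compares against, on a toy 1-D spectrum (the proposed maximum-entropy algorithm itself is more involved and not reproduced here; line shapes and widths are illustrative assumptions).

```python
import numpy as np

def richardson_lucy(observed, kernel, n_iter=200):
    """Iterative R-L deconvolution of a 1-D nonnegative signal."""
    kernel = kernel / kernel.sum()
    est = np.full_like(observed, observed.mean())   # flat positive start
    flipped = kernel[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, kernel, mode='same')
        est *= np.convolve(observed / np.maximum(conv, 1e-12),
                           flipped, mode='same')
    return est

x = np.linspace(-5, 5, 201)
truth = np.exp(-x**2 / 0.1)                   # narrow Doppler line
kernel = np.exp(-x**2 / 0.5)                  # broad instrument function
observed = np.convolve(truth, kernel / kernel.sum(), mode='same')
est = richardson_lucy(observed, kernel)
print(np.argmax(est) == np.argmax(truth))     # peak recovered at same bin
```

The multiplicative update keeps the estimate nonnegative, but on noisy data its iteration count becomes a tuning parameter, which is the drawback the maximum-entropy approach in the abstract avoids.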
Recently, a Schwarz crystal structure with curved grain boundaries (GBs) constrained by twin-boundary (TB) networks was discovered in nanocrystalline Cu through experiments and atomistic simulations. Nanocrystalline Cu with nanosized Schwarz crystals exhibited high strength and excellent thermal stability. However, the grain-size effect and associated deformation mechanisms of Schwarz nanocrystals remain unknown. Here, we performed large-scale atomistic simulations to investigate the deformation behaviors and grain-size effect of nanocrystalline Cu with Schwarz crystals. Our simulations showed that, like regular nanocrystals, Schwarz nanocrystals exhibit a strengthening-softening transition with decreasing grain size. The critical grain size in Schwarz nanocrystals is smaller than that in regular nanocrystals, leading to a maximum strength higher than that of regular nanocrystals. Our simulations revealed that the softening in Schwarz nanocrystals mainly originates from TB migration (or detwinning) and annihilation of GBs, rather than the GB-mediated processes (including GB migration, sliding, and diffusion) that dominate softening in regular nanocrystals. Quantitative analyses of the simulation data further showed that, compared with those in regular nanocrystals, the GB-mediated processes in Schwarz nanocrystals are suppressed, which is related to the low volume fraction of amorphous-like GBs and the constraints of TB networks. The smaller critical grain size arises from the suppression of GB-mediated processes.
In this paper we study optimal advertising problems that model the introduction of a new product into the market in the presence of carryover effects of the advertisement and memory effects in the level of goodwill. In particular, we let the dynamics of the product goodwill depend on the past, and also on past advertising efforts. We treat the problem by means of the stochastic Pontryagin maximum principle, considered here for a class of problems where, in the state equation, either the state or the control depends on the past. Moreover, the control acts on the martingale term, and the space of controls U can be chosen to be non-convex. The maximum principle is thus formulated using first-order adjoint backward stochastic differential equations (BSDEs), which can be explicitly computed due to the specific characteristics of the model, and a second-order adjoint relation.
Spatial variability of soil properties poses a challenge for practical analysis and design in geotechnical engineering. This is particularly true for slope stability assessment, where the effects of uncertainty are synthesized in the so-called probability of failure. This probability quantifies the reliability of a slope, and its calculation is usually quite involved from a numerical viewpoint. In view of this issue, this paper proposes an approach for failure probability assessment based on Latinized partially stratified sampling and the maximum entropy distribution with fractional moments. The spatial variability of geotechnical properties is represented by means of random fields and the Karhunen-Loève expansion. Failure probabilities are then estimated employing the maximum entropy distribution with fractional moments. The application of the proposed approach is examined with two examples: a case study of an undrained slope, and a case study of a slope with cross-correlated random fields of strength parameters under drained conditions. The results show that the proposed approach has excellent accuracy and high efficiency, and it can be applied straightforwardly to similar geotechnical engineering problems.
Indoor positioning is a key technology in today's intelligent environments, and it plays a crucial role in many application areas. This paper proposes an unscented Kalman filter (UKF) based on the maximum correntropy criterion (MCC) instead of the minimum mean square error (MMSE) criterion. This approach is applied to the loose coupling of the Inertial Navigation System (INS) and Ultra-Wideband (UWB). By introducing the maximum correntropy criterion, the MCCUKF algorithm dynamically adjusts the covariance matrices of the system noise and the measurement noise, thus enhancing its adaptability to diverse environmental localization requirements. Particularly in the presence of non-Gaussian noise, especially heavy-tailed noise, the MCCUKF exhibits superior accuracy and robustness compared to the traditional UKF. The method first generates an estimate of the predicted state and covariance matrix through the unscented transform (UT) and then recharacterizes the measurement information using a nonlinear regression method under the MCC cost. Subsequently, the state and covariance matrices of the filter are updated by applying the unscented transform to the measurement equations. Moreover, to mitigate the influence of non-line-of-sight (NLOS) errors on positioning accuracy, this paper proposes a k-medoid clustering algorithm based on bisecting k-means (Bikmeans). This algorithm preprocesses the UWB distance measurements to yield a more precise position estimate. Simulation results demonstrate that the MCCUKF is robust to UWB uncertainty and realizes stable integration of the INS and UWB systems.
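Why the MCC helps with heavy-tailed noise, in the simplest possible setting: estimating a constant from noisy readings. The sample mean is the MMSE answer; the MCC answer re-weights residuals with a Gaussian kernel, suppressing outliers. The paper embeds this idea in a full UKF; this sketch shows only the criterion itself, with an assumed kernel bandwidth and synthetic NLOS-like outliers.

```python
import numpy as np

rng = np.random.default_rng(4)
true_value = 3.0
z = true_value + rng.normal(0, 0.1, 200)
z[:10] += 15.0                       # NLOS-like heavy-tailed outliers

def mcc_estimate(z, sigma=1.0, n_iter=50):
    x = np.median(z)                 # robust initialization
    for _ in range(n_iter):
        w = np.exp(-(z - x) ** 2 / (2 * sigma ** 2))   # correntropy weights
        x = np.sum(w * z) / np.sum(w)                  # fixed-point update
    return x

mmse, mcc = z.mean(), mcc_estimate(z)
print(abs(mmse - true_value), abs(mcc - true_value))
```

The kernel width `sigma` plays the role of the adjustable parameter in the filter: large values recover MMSE behavior, small values reject outliers more aggressively.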
In this paper, an effective algorithm for optimizing the subarrays of conformal arrays is proposed. The method first divides the conformal array into several first-level subarrays. It uses the X algorithm to find feasible solutions for first-level subarray tiling and employs the particle swarm algorithm to optimize the conformal-array subarray tiling scheme, with the maximum entropy of the planar mapping as the fitness function. Subsequently, convex optimization is applied to optimize the subarray amplitudes and phases. The results verify that the method can effectively find the optimal conformal-array tiling scheme.
BACKGROUND Adolescent major depressive disorder (MDD) is a significant mental health concern that often leads to recurrent depression in adulthood. Resting-state functional magnetic resonance imaging (rs-fMRI) offers unique insights into the neural mechanisms underlying this condition. However, despite previous research, the specific vulnerable brain regions affected in adolescent MDD patients have not been fully elucidated. AIM To identify consistently vulnerable brain regions in adolescent MDD patients using rs-fMRI and activation likelihood estimation (ALE) meta-analysis. METHODS We performed a comprehensive literature search through July 12, 2023, for studies investigating brain functional changes in adolescent MDD patients. We utilized regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), and fractional ALFF (fALFF) analyses, and compared regions of aberrant spontaneous neural activity in adolescents with MDD vs healthy controls (HCs) using ALE. RESULTS Ten studies (369 adolescent MDD patients and 313 HCs) were included. Combining the ReHo and ALFF/fALFF data, activity in the right cuneus and left precuneus was lower in adolescent MDD patients than in HCs (voxel size: 648 mm³, P < 0.05), and no brain region exhibited increased activity. Based on the ALFF data alone, we found decreased activity in the right cuneus and left precuneus in adolescent MDD patients (voxel size: 736 mm³, P < 0.05), again with no regions exhibiting increased activity. CONCLUSION Through ALE meta-analysis, we consistently identified the right cuneus and left precuneus as vulnerable brain regions in adolescent MDD patients, advancing our understanding of the neuropathology of affected adolescents.
BACKGROUND Major depressive disorder (MDD) in adolescents and young adults contributes significantly to global morbidity, with inconsistent findings on brain structural changes from structural magnetic resonance imaging studies. Activation likelihood estimation (ALE) offers a method to synthesize these diverse findings and identify consistent brain anomalies. METHODS We performed a comprehensive literature search in the PubMed, Web of Science, Embase, and Chinese National Knowledge Infrastructure databases for neuroimaging studies on MDD among adolescents and young adults published up to November 19, 2023. Two independent researchers performed the study selection, quality assessment, and data extraction. The ALE technique was employed to synthesize findings on localized brain function anomalies in MDD patients, supplemented by sensitivity analyses. RESULTS Twenty-two studies were included, comprising fourteen diffusion tensor imaging (DTI) studies and eight voxel-based morphometry (VBM) studies, involving 451 MDD patients and 465 healthy controls (HCs) for DTI and 664 MDD patients and 946 HCs for VBM. DTI-based ALE demonstrated significant reductions in fractional anisotropy (FA) values in the right caudate head, right insula, and right lentiform nucleus putamen in adolescents and young adults with MDD compared to HCs, with no regions exhibiting increased FA values. VBM-based ALE did not demonstrate significant alterations in gray matter volume. Sensitivity analyses highlighted consistent findings in the right caudate head (11 of 14 analyses), right insula (10 of 14 analyses), and right lentiform nucleus putamen (11 of 14 analyses). CONCLUSION Structural alterations in the right caudate head, right insula, and right lentiform nucleus putamen in young MDD patients may contribute to the recurrent nature of the disorder, offering insights for targeted therapies.
A photovoltaic (PV) string with multiple bypass-diode-equipped modules, frequently deployed on a variety of autonomous PV systems, may present multiple power peaks under uneven shading. For optimal solar harvesting, a control scheme is needed to force the PV string to operate at the global maximum power point (GMPP). While many tracking methods have been proposed in the literature, they are usually complex and do not fully take advantage of the available characteristics of the PV array. This work highlights how the voltage at the operating point and the forward voltage of the bypass diode can be used to design a global maximum power point tracking (GMPPT) algorithm with a very limited global search phase, called Fast GMPPT. This algorithm successfully tracks the GMPP between 94% and 98% of the time in a theoretical evaluation. It is then compared against Perturb and Observe, Deterministic Particle Swarm Optimization, and Grey Wolf Optimization under a sequence of irradiance steps, as well as under a power-versus-voltage characteristic profile that mimics the electrical behavior of a PV string under varying partial shading conditions. Overall, the simulation with the sequence of irradiance steps shows that, while Fast GMPPT does not have the best convergence time, it has an excellent convergence rate and causes the least power loss during the global search phase. Experimental tests under varying partial shading conditions show that, while the GMPPT proposal is simple and lightweight, it performs well under a wide range of dynamically varying partial shading conditions and boasts the best energy efficiency (94.74%) of the four tested algorithms.
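A sketch of the baseline perturb-and-observe hill climbing that Fast GMPPT augments with its limited global search (the bypass-diode-voltage logic itself is not shown; the two-peak P-V curve below is a synthetic stand-in for a partially shaded string).

```python
import numpy as np

def pv_power(v):
    """Toy two-peak P-V curve of a partially shaded string (illustrative)."""
    return (40 * np.exp(-((v - 12) ** 2) / 8)
            + 60 * np.exp(-((v - 30) ** 2) / 18))

def perturb_and_observe(v0, step=0.2, n_iter=300):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(n_iter):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction       # reverse on power drop
        v, p = v_new, p_new
    return v

# P&O converges to whichever peak is nearest its start, the classic
# failure mode under partial shading that motivates a global search phase.
print(perturb_and_observe(10.0), perturb_and_observe(26.0))
```

Started near 10 V it settles on the local 40 W peak; started near 26 V it finds the global 60 W peak. A GMPPT scheme exists precisely to avoid this dependence on the starting point.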
Beyond-5G (B5G) aims to meet the growing demands of mobile traffic and expand the communication space. Considering that intelligent applications of B5G wireless communications will involve security issues regarding user data and operational data, this paper analyzes the maximum capacity of a multi-watermarking method for multimedia signal hiding as a means of alleviating the information security problem of B5G. The multi-watermarking process employs spread transform dither modulation. During the watermarking procedure, Gram-Schmidt orthogonalization is used to obtain the multiple spreading vectors, so that multiple watermarks can be simultaneously embedded into the same position of a multimedia signal and extracted without affecting one another. We analyze the effect of the size of the spreading vector on the unit maximum capacity and derive the theoretical relationship between the two. A number of experiments are conducted to determine the optimal parameter values for maximum robustness on the premise of high capacity and good imperceptibility.
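How Gram-Schmidt orthogonalization lets several watermarks share the same host samples without mutual interference. For brevity this sketch uses plain spread-spectrum embedding with informed extraction; the paper uses spread transform dither modulation (which is blind), but the role of the orthogonal spreading vectors is the same. Sizes and strength are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)
L, K = 1024, 4                     # host length and number of watermarks
host = rng.normal(0, 1, L)

# Orthonormal spreading vectors via QR decomposition (Gram-Schmidt).
S = np.linalg.qr(rng.normal(size=(L, K)))[0].T   # K x L, orthonormal rows
bits = np.array([1, -1, 1, 1])
alpha = 0.5                                      # embedding strength

marked = host + alpha * (bits @ S)   # K marks embedded at the same samples

# Because S @ S.T = I, each correlation isolates exactly its own watermark.
recovered = np.sign(S @ (marked - host))
print(recovered)
```

The orthogonality is what makes the per-watermark channels independent; the capacity analysis in the abstract then asks how many such vectors of a given size the host can support.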
The noise that comes from finite element simulation often causes the model to fall into local optima and overfit during generator optimization. Thus, this paper proposes a Gaussian process regression (GPR) model based on conditional likelihood lower bound search (CLLBS) to optimize the design of the generator; the approach can filter the noise in the data and search for the global optimum by combining the conditional likelihood lower bound search method. The efficiency optimization of a 15 kW permanent magnet synchronous motor is taken as an example. First, elementary effect analysis is used to choose the sensitive variables, combined with an evolutionary algorithm to design the super Latin cube sampling plan; the generator-converter system is then simulated on a co-simulation platform to obtain data. A Gaussian process regression model combined with the conditional likelihood lower bound search is established, and a chi-square test is used to optimize the accuracy of the model globally. Second, once the model reaches the required accuracy, the Pareto frontier is obtained through the NSGA-II algorithm, with the maximum output torque as a constraint. Finally, the constrained optimization is transformed into an unconstrained problem by introducing a constrained expected improvement (CEI) optimization method based on the re-interpolation model, which cross-validates the optimization results of the Gaussian process regression model. The method increases the efficiency of the generator by 0.76% and 0.5%, respectively, and it can be used for rapid modeling and multi-objective optimization of generator systems.
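Why GPR suits noisy finite-element objectives, in a minimal sketch: a white-noise kernel term lets the surrogate absorb simulation noise instead of overfitting it. The paper's conditional-likelihood-lower-bound search, chi-square validation, and NSGA-II stages are not reproduced here; the 1-D toy objective and kernel settings are assumptions, and scikit-learn is used for brevity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
X = np.linspace(0, 1, 40)[:, None]            # one design variable
y_true = np.sin(6 * X[:, 0])                  # smooth efficiency trend
y = y_true + rng.normal(0, 0.1, 40)           # FE-simulation-like noise

# RBF models the smooth trend; WhiteKernel soaks up the simulation noise.
gpr = GaussianProcessRegressor(
    kernel=RBF(0.2) + WhiteKernel(0.01), normalize_y=True
).fit(X, y)
y_hat = gpr.predict(X)

# The de-noised surrogate tracks the trend better than the raw samples.
print(np.abs(y_hat - y_true).mean() < np.abs(y - y_true).mean())
```

The fitted `WhiteKernel` noise level is itself an estimate of the simulation noise, which is the quantity a noise-aware acquisition step would exploit.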
The occurrence of earthquakes is closely related to crustal geotectonic movement and the migration of mass, which consequently cause changes in gravity. Gravity Recovery And Climate Experiment (GRACE) satellite data can be used to detect gravity changes associated with large earthquakes. However, previous GRACE-based seismic gravity-change studies have focused more on coseismic than on preseismic gravity changes. Moreover, the north-south stripe noise in GRACE data is difficult to eliminate, resulting in the loss of some gravity information related to tectonic activity. To explore preseismic gravity anomalies in a more refined way, we first propose a method of characterizing gravity variation based on the maximum shear strain of gravity, inspired by the concept of crustal strain. The offset index method is then adopted to describe the gravity anomalies, and the spatial and temporal characteristics of gravity anomalies before earthquakes are analyzed at the scales of the fault zone and the plate, respectively. Experiments are carried out on the Tibetan Plateau and its surrounding areas, with the following findings. First, at the fault-zone scale, we detect large-area gravity anomalies near the epicenter, often about half a year before an earthquake, distributed along the fault zone. Second, at the plate scale, we find that when an earthquake occurred on the Tibetan Plateau, a large number of gravity anomalies also occurred at the boundary of the Tibetan Plateau and the Indian Plate. Moreover, these experiments confirm that the proposed method can successfully capture the preseismic gravity anomalies of large earthquakes with a magnitude below 8, suggesting a new avenue for applying gravity satellite data to earthquake research.
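One plausible numerical reading of "maximum shear strain of gravity" (the paper's exact definition may differ): treat the two horizontal gradients of a gridded gravity-change field as a displacement-like vector field and compute the classic 2-D maximum shear strain, gamma_max = sqrt(((e_xx - e_yy)/2)^2 + e_xy^2), at every grid node. The Gaussian anomaly below is synthetic.

```python
import numpy as np

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
g = np.exp(-((x - 40) ** 2 + (y - 20) ** 2) / 200.0)   # toy gravity anomaly

gy, gx = np.gradient(g)                  # "displacement" components
e_xx = np.gradient(gx, axis=1)
e_yy = np.gradient(gy, axis=0)
e_xy = 0.5 * (np.gradient(gx, axis=0) + np.gradient(gy, axis=1))

gamma_max = np.sqrt(((e_xx - e_yy) / 2) ** 2 + e_xy ** 2)
iy, ix = np.unravel_index(np.argmax(gamma_max), gamma_max.shape)
print(iy, ix)   # the shear maximum rings the anomaly, not its center
```

For a radially symmetric anomaly the shear vanishes at the center and peaks on a surrounding ring, which is why a shear-based indicator highlights the flanks of a mass-change feature, such as a fault zone, rather than its bullseye.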
The main aim of the paper is to present (and at the same time offer) a different perspective for the analysis of the accelerated expansion of the Universe: a perspective that can be considered as being "in parallel" to traditional ones, such as those based, for example, on the hypotheses of "Dark Matter" and "Dark Energy", or better as a "compossible" perspective, because it is not understood as being "exclusive". In fact, it is an approach that, when confirmed by experimental results, always keeps its validity from an "operative" point of view. This is because, in analogy to the traditional perspectives, on the basis of Popper's falsification principle the corresponding "generative" logic on which it is based does not have the property of perfect induction. The basic difference then consists only in the fact that the evolution of the Universe is now modeled by considering the Universe as a self-organizing system, which is analyzed in the light of the Maximum Ordinality Principle.
Funding: Supported by the Fund for Foreign Scholars in University Research and Teaching Programs (B18039).
Abstract: Beamspace super-resolution methods for elevation estimation in multipath environments have attracted significant attention, especially the beamspace maximum likelihood (BML) algorithm. However, the difference beam is rarely used in super-resolution methods, especially in low-elevation estimation. The target airspace information carried by the difference beam differs from that in the sum beam, and the use of difference beams does not significantly increase the complexity of the system or the algorithms. This paper therefore applies the difference beam in the beamformer to improve the elevation estimation performance of the BML algorithm; the direction and number of beams can be adjusted according to actual needs. The theoretical root-mean-square error (RMSE) of the target elevation angle and the computational complexity of the proposed algorithms are analyzed. Finally, computer simulations and real-data processing results demonstrate the effectiveness of the proposed algorithms.
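The sum/difference beamforming idea in the abstract above can be illustrated numerically. The sketch below is only a generic illustration (not the paper's BML implementation): it assumes a 16-element half-wavelength uniform linear array and forms a difference beam by sign-flipping one half of the aperture, showing that the difference beam carries a null where the sum beam has its maximum.

```python
import cmath
import math

def steering(n, theta_deg):
    """Steering vector of an n-element, half-wavelength-spaced uniform linear array."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * math.pi * k * s) for k in range(n)]

def beam_output(w, a):
    """Magnitude of the beamformer response |w^H a|."""
    return abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a)))

N, look_deg = 16, 0.0                        # invented array size and look direction
a0 = steering(N, look_deg)
w_sum = a0                                   # sum beam: in-phase combination
w_dif = [(1 if k < N // 2 else -1) * ai      # difference beam: sign-flipped half-apertures
         for k, ai in enumerate(a0)]

print(beam_output(w_sum, a0))                # → 16.0 (maximum at the look direction)
print(beam_output(w_dif, a0))                # → 0.0  (null at the look direction)
```

Off boresight the difference beam responds strongly, which is exactly the complementary spatial information the abstract refers to.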
Funding: Supported by the National Natural Science Foundation of China (No. 42127807), the Sichuan Science and Technology Program (No. 2020YJ0334), and the Sichuan Science and Technology Breeding Program (No. 2022041).
Abstract: Neutron spectrum unfolding with a Bonner sphere spectrometer (BSS) is a complex multidimensional problem that requires solving a Fredholm integral equation of the first kind. The maximum likelihood expectation maximization (MLEM) algorithm is prone to local optima, while the particle swarm optimization (PSO) algorithm can produce unreasonable flight directions and step lengths for the particles, leading to invalid iterations that degrade efficiency and accuracy. To address these problems, an improved PSO-MLEM algorithm, combining the PSO and MLEM algorithms, is proposed for neutron spectrum unfolding. A dynamic acceleration factor balances the global and local search abilities and improves the convergence speed and accuracy of the algorithm. First, the Monte Carlo method was used to simulate the BSS and obtain its response function and count rates. In the count-rate simulation, four reference spectra from IAEA Technical Report Series No. 403 were used as inputs to the Monte Carlo method. The PSO-MLEM algorithm was used to unfold the neutron spectrum of the simulated data and was verified by the deviation of the unfolded spectrum from the reference spectrum. Finally, a 252Cf neutron source was measured by the BSS, and the PSO-MLEM algorithm was used to unfold the experimental neutron spectrum. Compared with maximum entropy deconvolution (MAXED) and the PSO and MLEM algorithms, the PSO-MLEM algorithm has fewer parameters and automatically adjusts the dynamic acceleration factor to escape local optima. The convergence speed of the PSO-MLEM algorithm is 1.4 times and 3.1 times those of the MLEM and PSO algorithms, respectively. Compared with PSO, MLEM, and MAXED, the correlation coefficients of the PSO-MLEM algorithm are increased by 33.1%, 33.5%, and 1.9%, and the relative mean errors are decreased by 98.2%, 97.8%, and 67.4%.
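For context, the plain MLEM core that the hybrid above builds on can be sketched in isolation. This is a generic MLEM unfolding iteration on an invented 2×2 response matrix with noiseless counts, not the paper's PSO-MLEM code or the real BSS response:

```python
def mlem_unfold(R, counts, n_iter=2000):
    """Plain MLEM update: phi_j <- phi_j * sum_i R_ij * c_i / (R phi)_i / sum_i R_ij."""
    m, n = len(R), len(R[0])
    phi = [1.0] * n                       # flat starting spectrum
    col = [sum(R[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        est = [sum(R[i][j] * phi[j] for j in range(n)) for i in range(m)]
        phi = [phi[j] * sum(R[i][j] * counts[i] / est[i] for i in range(m)) / col[j]
               for j in range(n)]
    return phi

# invented 2x2 response matrix and noiseless count rates for a two-bin "spectrum"
R = [[0.8, 0.2],
     [0.3, 0.7]]
true_phi = [2.0, 5.0]
counts = [sum(R[i][j] * true_phi[j] for j in range(2)) for i in range(2)]
phi = mlem_unfold(R, counts)              # converges toward [2.0, 5.0]
```

With noisy counts and a near-singular response, this fixed-point iteration is exactly where local-optimum problems appear, motivating the PSO hybridization described in the abstract.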
Abstract: Maximum likelihood estimation (MLE) is an effective method for localizing radioactive sources in a given area. However, it requires an exhaustive search for parameter estimation, which is time-consuming. In this study, heuristic techniques were employed to search for the radiation source parameters that maximize the likelihood using a network of sensors, thereby effectively reducing the time consumed by MLE. First, the radiation source was detected using the k-sigma method. Subsequently, MLE was applied for parameter estimation using the readings and positions of the detectors that had detected the radiation source. A comparative study was performed in which the estimation accuracy and time consumption of the MLE were evaluated for traditional methods and heuristic techniques. The traditional MLE was performed via a grid search using fixed and multiple resolutions. Additionally, four commonly used heuristic algorithms were applied: the firefly algorithm (FFA), particle swarm optimization (PSO), ant colony optimization (ACO), and artificial bee colony (ABC). The experiment was conducted using real data collected by the Low Scatter Irradiator facility at the Savannah River National Laboratory as part of the Intelligent Radiation Sensing System program. The comparative study showed that the estimation time was 3.27 s using fixed-resolution MLE and 0.59 s using multi-resolution MLE. The time consumption for heuristic-based MLE was 0.75, 0.03, 0.02, and 0.059 s for FFA, PSO, ACO, and ABC, respectively. The location estimation error was approximately 0.4 m using either the grid-search-based MLE or the heuristic-based MLE. Hence, heuristic-based MLE can provide comparable estimation accuracy through a less time-consuming process than traditional MLE.
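The fixed-resolution grid-search MLE baseline described above can be sketched for a 2-D area under a Poisson count model. The sensor layout, source strength, and inverse-square response below are invented for illustration and are not the experiment's actual geometry; with noiseless readings the grid maximum lands exactly on the true source location.

```python
import math

def expected_count(src, det, strength):
    """Inverse-square response model (an assumption for this sketch)."""
    d2 = (src[0] - det[0]) ** 2 + (src[1] - det[1]) ** 2
    return strength / d2

def log_lik(src, strength, dets, counts):
    """Poisson log-likelihood (dropping the data-only log c! term)."""
    ll = 0.0
    for det, c in zip(dets, counts):
        lam = expected_count(src, det, strength)
        ll += c * math.log(lam) - lam
    return ll

dets = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]     # invented sensor layout
true_src, strength = (3.0, 4.0), 100.0
counts = [expected_count(true_src, d, strength) for d in dets]  # noiseless readings

# fixed-resolution (0.1 m) grid search over the 10 m x 10 m area
grid = ((x / 10.0, y / 10.0) for x in range(1, 100) for y in range(1, 100))
best = max(grid, key=lambda s: log_lik(s, strength, dets, counts))
```

The heuristic variants in the abstract replace the exhaustive `max` over the grid with FFA/PSO/ACO/ABC search over the same likelihood surface.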
Funding: Supported by the National Natural Science Foundation of China (62071144, 61971159, 61871149).
Abstract: The conformal array can make full use of the aperture, save space, meet aerodynamic requirements, and is sensitive to polarization information. It has broad application prospects in the military, aerospace, and communication fields. Joint polarization and direction-of-arrival (DOA) estimation based on the conformal array, together with theoretical analysis of its parameter estimation performance, is key to promoting the engineering application of conformal arrays. To address these problems, this paper establishes the wave-field signal model of the conformal array. Then, for the case of a single target, the cost function of the maximum likelihood (ML) estimator is rewritten with the Rayleigh quotient, turning the maximization of a ratio of quadratic forms into the minimization of quadratic forms. On this basis, rapid parameter estimation is achieved with the idea of manifold separation technology (MST). Compared with the modified variable projection (MVP) algorithm, this reduces the computational complexity and improves the parameter estimation performance. Meanwhile, the MST is used to obtain the partial derivative of the steering vector. Then, the theoretical performance of the ML and multiple signal classification (MUSIC) estimators and the Cramer-Rao bound (CRB) for the conformal array are derived, providing a theoretical foundation for the engineering application of conformal arrays. Finally, simulation experiments verify the effectiveness of the proposed method.
Abstract: In this paper, a weighted maximum likelihood technique (WMLT) for the logistic regression model is presented. The method depends on a weight function that adapts continuously using Mahalanobis distances of the predictor variables. The asymptotic consistency of the suggested estimator is demonstrated under the model, and its finite-sample properties are investigated via simulation. In simulation studies and on real data sets, the newly proposed technique demonstrated the best performance among all compared estimators.
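One way such a weighted ML fit can be realized is sketched below. The data, the weight map, and the plain gradient ascent are all invented stand-ins (the paper's actual weight function and optimizer are not reproduced here); in one dimension the Mahalanobis distance reduces to the standardized distance from the predictor mean, so a high-leverage point automatically receives the smallest weight.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mahalanobis_weights(xs):
    """1-D Mahalanobis distance to the predictor mean, mapped to a weight in (0, 1]."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return [1.0 / (1.0 + (x - mu) ** 2 / var) for x in xs]

def weighted_logit_fit(xs, ys, ws, lr=0.2, n_iter=5000):
    """Gradient ascent on sum_i w_i [y_i log p_i + (1 - y_i) log(1 - p_i)]."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(n_iter):
        g0 = g1 = 0.0
        for x, y, w in zip(xs, ys, ws):
            r = w * (y - sigmoid(b0 + b1 * x))   # weighted residual
            g0 += r
            g1 += r * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

xs = [0.0, 1.0, 2.0, 3.0, 8.0]   # x = 8 is a high-leverage point...
ys = [0, 0, 1, 1, 0]             # ...with a contaminated response
ws = mahalanobis_weights(xs)     # it receives the smallest weight
b0, b1 = weighted_logit_fit(xs, ys, ws)
```

Downweighting the leverage point keeps the fitted slope positive, which an unweighted fit would be dragged away from.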
Funding: Supported by the National Natural Science Foundation of China (12061017, 12161009) and the Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security (22-A-01-01).
Abstract: In this paper, we study spatial cross-sectional data models in the form of the matrix exponential spatial specification (MESS), where MESS appears in both the dependent and error terms. Empirical likelihood (EL) ratio statistics are established for the parameters of the MESS model. It is shown that the limiting distributions of the EL ratio statistics follow chi-square distributions, which are used to construct confidence regions for the model parameters. Simulation experiments are conducted to compare the performance of confidence regions based on the EL method and the normal approximation method.
Abstract: Count data are almost always over-dispersed, with the variance exceeding the mean. Several count data models have been proposed, but the problem of over-dispersion remains unresolved, more so in the context of change point analysis. This study develops a likelihood-based algorithm that detects and estimates multiple change points in a set of count data assumed to follow the Negative Binomial distribution. Discrete change point procedures discussed in the literature work well for equi-dispersed data. The new algorithm produces reliable estimates of change points for both equi-dispersed and over-dispersed count data, hence its advantage over other count data change point techniques. The Negative Binomial Multiple Change Point Algorithm was tested using simulated data for different sample sizes and varying positions of change. Changes in the distribution parameters were detected and estimated by conducting a likelihood ratio test on several partitions of the data obtained through step-wise recursive binary segmentation. Critical values for the likelihood ratio test were developed and used to check the significance of the maximum likelihood estimates of the change points. The change point algorithm was found to work best for large datasets, though it also works well for small and medium-sized datasets with little to no error in the location of change points. The algorithm correctly detects changes when they are present and reports no change when change is absent. Power analysis of the likelihood ratio test for change was performed through Monte Carlo simulation in the single change point setting. Sensitivity analysis showed that the likelihood ratio test is most powerful when the change point is located mid-way through the sample data as opposed to in the periphery, and more powerful when the change is located three-quarters of the way through the sample than when it lies closer (a quarter of the way) to the first observation.
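The single-change-point building block of such an algorithm can be sketched as follows. This is a simplified illustration, not the paper's procedure: the Negative Binomial dispersion r is assumed known, the segment mean is profiled out at its MLE (the sample mean), data-only terms that cancel across candidate splits are dropped, and the counts are invented with a deliberate mean shift after index 30.

```python
import math

def nb_loglik(seg, r=5.0):
    """Profile NB log-likelihood of a segment with known dispersion r and the
    mean set to its MLE (the segment sample mean); per-observation lgamma
    terms are omitted because they cancel across candidate splits."""
    mu = sum(seg) / len(seg)
    return sum(r * math.log(r / (r + mu)) + k * math.log(mu / (r + mu)) for k in seg)

def best_split(xs, min_seg=5):
    """Likelihood-ratio scan: the split maximizing the two-segment log-likelihood."""
    return max(range(min_seg, len(xs) - min_seg),
               key=lambda t: nb_loglik(xs[:t]) + nb_loglik(xs[t:]))

# invented over-dispersed counts with a mean shift after index 30
xs = [2, 3, 1, 2, 4, 2, 3, 1, 2, 2] * 3 + [10, 12, 9, 11, 10, 8, 13, 10, 9, 11] * 3
tau = best_split(xs)
lr_stat = 2 * (nb_loglik(xs[:tau]) + nb_loglik(xs[tau:]) - nb_loglik(xs))
```

Recursive binary segmentation, as described in the abstract, simply reapplies `best_split` to each half whenever `lr_stat` exceeds the critical value.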
Abstract: Laser-induced fluorescence (LIF) spectroscopy is employed for plasma diagnosis, necessitating deconvolution algorithms to isolate the Doppler effect from the raw spectral signal. However, direct deconvolution becomes invalid in the presence of noise, as it leads to infinite amplification of high-frequency noise components. To address this issue, we propose a deconvolution algorithm based on the maximum entropy principle. We validate the effectiveness of the proposed algorithm using simulated LIF spectra at various noise levels (signal-to-noise ratio, SNR = 20–80 dB) and measured LIF spectra with Xe as the working fluid. In the typical measured-spectrum (SNR = 26.23 dB) experiment, compared with the Gaussian filter and the Richardson–Lucy (R-L) algorithm, the proposed algorithm demonstrates an increase in SNR of 1.39 dB and 4.66 dB, respectively, along with a reduction in the root-mean-square error (RMSE) of 35% and 64%, respectively, and a decrease in the spectral angle (SA) of 0.05 and 0.11, respectively. In the high-quality spectrum (SNR = 43.96 dB) experiment, the results show that the running time of the proposed algorithm is reduced by about 98% compared with the R-L iterative algorithm. Moreover, the maximum entropy algorithm avoids parameter optimization settings and is more suitable for automatic implementation. In conclusion, the proposed algorithm can accurately resolve Doppler spectrum details while effectively suppressing noise, highlighting its advantage in LIF spectral deconvolution applications.
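For context, the Richardson–Lucy baseline that the abstract compares against can be sketched for a 1-D spectrum. The three-tap blur kernel and the single-line signal below are invented for illustration; the paper's maximum entropy method itself is not reproduced here.

```python
def convolve_same(f, k):
    """Zero-padded 'same' convolution of signal f with a short kernel k."""
    h = len(k) // 2
    return [sum(k[j] * (f[i + j - h] if 0 <= i + j - h < len(f) else 0.0)
                for j in range(len(k))) for i in range(len(f))]

def richardson_lucy(g, k, n_iter=500):
    """R-L iteration f <- f * K^T(g / K f); kernel assumed symmetric, so K^T = K."""
    f = [1.0] * len(g)
    for _ in range(n_iter):
        est = convolve_same(f, k)
        ratio = [gi / max(ei, 1e-12) for gi, ei in zip(g, est)]
        corr = convolve_same(ratio, k)
        f = [fi * ci for fi, ci in zip(f, corr)]
    return f

kernel = [0.25, 0.5, 0.25]       # invented instrument-broadening kernel
truth = [0.0] * 11
truth[5] = 1.0                   # a single sharp spectral line at bin 5
blurred = convolve_same(truth, kernel)
restored = richardson_lucy(blurred, kernel)   # re-sharpens the line at bin 5
```

On noisy data this multiplicative iteration eventually amplifies noise, which is the failure mode the maximum entropy regularization in the abstract is designed to avoid.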
Funding: Financial support from the National Natural Science Foundation of China (Grant Nos. 12325203, 91963117, and 11921002).
Abstract: Recently, a Schwarz crystal structure with curved grain boundaries (GBs) constrained by twin-boundary (TB) networks was discovered in nanocrystalline Cu through experiments and atomistic simulations. Nanocrystalline Cu with nanosized Schwarz crystals exhibited high strength and excellent thermal stability. However, the grain-size effect and associated deformation mechanisms of Schwarz nanocrystals remain unknown. Here, we performed large-scale atomistic simulations to investigate the deformation behaviors and grain-size effect of nanocrystalline Cu with Schwarz crystals. Our simulations showed that, similar to regular nanocrystals, Schwarz nanocrystals exhibit a strengthening-softening transition with decreasing grain size. The critical grain size in Schwarz nanocrystals is smaller than that in regular nanocrystals, leading to a maximum strength higher than that of regular nanocrystals. Our simulations revealed that the softening in Schwarz nanocrystals mainly originates from TB migration (or detwinning) and annihilation of GBs, rather than the GB-mediated processes (including GB migration, sliding, and diffusion) that dominate softening in regular nanocrystals. Quantitative analyses of the simulation data further showed that, compared with those in regular nanocrystals, the GB-mediated processes in Schwarz nanocrystals are suppressed, which is related to the low volume fraction of amorphous-like GBs and the constraints of TB networks. The smaller critical grain size arises from the suppression of GB-mediated processes.
Abstract: In this paper we study optimal advertising problems that model the introduction of a new product into the market in the presence of carryover effects of the advertisement and memory effects in the level of goodwill. In particular, we let the dynamics of the product goodwill depend on the past, and also on past advertising efforts. We treat the problem by means of the stochastic Pontryagin maximum principle, considered here for a class of problems where, in the state equation, either the state or the control depends on the past. Moreover, the control acts on the martingale term, and the space of controls U can be chosen to be non-convex. The maximum principle is thus formulated using a first-order adjoint backward stochastic differential equation (BSDE), which can be explicitly computed due to the specific characteristics of the model, and a second-order adjoint relation.
Funding: Funding support from the China Scholarship Council (CSC).
Abstract: The spatial variability of soil properties poses a challenge for practical analysis and design in geotechnical engineering. This is particularly true for slope stability assessment, where the effects of uncertainty are synthesized in the so-called probability of failure. This probability quantifies the reliability of a slope, and its calculation is usually quite involved from a numerical viewpoint. In view of this issue, this paper proposes an approach for failure probability assessment based on Latinized partially stratified sampling and the maximum entropy distribution with fractional moments. The spatial variability of geotechnical properties is represented by means of random fields and the Karhunen-Loève expansion. Failure probabilities are then estimated employing the maximum entropy distribution with fractional moments. The application of the proposed approach is examined with two examples: a case study of an undrained slope, and a case study of a slope with cross-correlated random fields of strength parameters under drained conditions. The results show that the proposed approach has excellent accuracy and high efficiency, and that it can be applied straightforwardly to similar geotechnical engineering problems.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 62273083 and 61803077, and the Natural Science Foundation of Hebei Province under Grant No. F2020501012.
Abstract: Indoor positioning is a key technology in today's intelligent environments and plays a crucial role in many application areas. This paper proposes an unscented Kalman filter (UKF) based on the maximum correntropy criterion (MCC) instead of the minimum mean square error (MMSE) criterion. This approach is applied to the loose coupling of the Inertial Navigation System (INS) and Ultra-Wideband (UWB). By introducing the maximum correntropy criterion, the MCCUKF algorithm dynamically adjusts the covariance matrices of the system and measurement noise, enhancing its adaptability to diverse environmental localization requirements. Particularly in the presence of non-Gaussian noise, especially heavy-tailed noise, the MCCUKF exhibits superior accuracy and robustness compared to the traditional UKF. The method first generates an estimate of the predicted state and covariance matrix through the unscented transform (UT), and then re-characterizes the measurement information using a nonlinear regression method under the MCC cost. Subsequently, the state and covariance matrices of the filter are updated by employing the unscented transform on the measurement equations. Moreover, to mitigate the influence of non-line-of-sight (NLOS) errors on positioning accuracy, this paper proposes a k-medoid clustering algorithm based on bisecting k-means (Bikmeans). This algorithm preprocesses the UWB distance measurements to yield a more precise position estimate. Simulation results demonstrate that MCCUKF is robust to the uncertainty of UWB and realizes a stable integration of the INS and UWB systems.
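The maximum correntropy idea, replacing a squared-error cost with a Gaussian-kernel similarity so that heavy-tailed outliers are exponentially downweighted, can be illustrated on the simplest possible estimate: a robust location estimate for scalar data. The kernel bandwidth and data below are invented, and this fixed-point iteration is only an illustration of the criterion, not the MCCUKF itself.

```python
import math

def mcc_mean(xs, sigma=2.0, n_iter=50):
    """Fixed-point MCC estimate: maximize sum_i exp(-(x_i - m)^2 / (2 sigma^2)).
    Each step re-weights residuals with a Gaussian kernel, so outliers get
    exponentially small weight (the bandwidth sigma is an invented choice)."""
    m = sum(xs) / len(xs)                 # start from the MMSE answer
    for _ in range(n_iter):
        w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for x in xs]
        m = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return m

data = [0.9, 1.1, 1.0, 0.8, 1.2, 100.0]   # heavy-tailed contamination at 100
print(sum(data) / len(data))              # → 17.5 (MMSE mean, ruined by the outlier)
```

`mcc_mean(data)` instead settles near 1.0, which is the same robustness mechanism the MCCUKF exploits inside the measurement update.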
Funding: Supported by the Advanced Functional Composites Technology Key Laboratory Fund under Grant No. 6142906220404, and the Sichuan Province Centralized Guided Local Science and Technology Development Special Project under Grant No. 2022ZYD0121.
Abstract: In this paper, an effective algorithm for optimizing the subarrays of conformal arrays is proposed. The method first divides the conformal array into several first-level subarrays. It uses the X algorithm to solve for feasible first-level subarray tilings and employs the particle swarm algorithm to optimize the conformal-array subarray tiling scheme, with the maximum entropy of the planar mapping as the fitness function. Subsequently, convex optimization is applied to optimize the subarray amplitudes and phases. The results verify that the method can effectively find the optimal conformal array tiling scheme.
Funding: Supported by the 2024 Guizhou Provincial Health Commission Science and Technology Fund Project (No. gzwkj2024-4750) and the 2022 Provincial Clinical Key Specialty Construction Project.
Abstract: BACKGROUND: Adolescent major depressive disorder (MDD) is a significant mental health concern that often leads to recurrent depression in adulthood. Resting-state functional magnetic resonance imaging (rs-fMRI) offers unique insights into the neural mechanisms underlying this condition. However, despite previous research, the specific vulnerable brain regions affected in adolescent MDD patients have not been fully elucidated. AIM: To identify consistently vulnerable brain regions in adolescent MDD patients using rs-fMRI and activation likelihood estimation (ALE) meta-analysis. METHODS: We performed a comprehensive literature search through July 12, 2023, for studies investigating brain functional changes in adolescent MDD patients. We utilized regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), and fractional ALFF (fALFF) analyses, and compared regions of aberrant spontaneous neural activity in adolescents with MDD vs healthy controls (HCs) using ALE. RESULTS: Ten studies (369 adolescent MDD patients and 313 HCs) were included. Combining the ReHo and ALFF/fALFF data, the activity in the right cuneus and left precuneus was lower in adolescent MDD patients than in HCs (voxel size: 648 mm3, P<0.05), and no brain region exhibited increased activity. Based on the ALFF data alone, we found decreased activity in the right cuneus and left precuneus in adolescent MDD patients (voxel size: 736 mm3, P<0.05), again with no regions of increased activity. CONCLUSION: Through ALE meta-analysis, we consistently identified the right cuneus and left precuneus as vulnerable brain regions in adolescent MDD patients, increasing our understanding of the neuropathology of affected adolescents.
Funding: Supported by the Guizhou Province Science and Technology Plan Project (No. ZK-2023-195) and the 2021 Health Commission of Guizhou Province Project (No. gzwkj2021-150).
Abstract: BACKGROUND: Major depressive disorder (MDD) in adolescents and young adults contributes significantly to global morbidity, with inconsistent findings on brain structural changes from structural magnetic resonance imaging studies. Activation likelihood estimation (ALE) offers a method to synthesize these diverse findings and identify consistent brain anomalies. METHODS: We performed a comprehensive literature search in the PubMed, Web of Science, Embase, and Chinese National Knowledge Infrastructure databases for neuroimaging studies on MDD among adolescents and young adults published up to November 19, 2023. Two independent researchers performed the study selection, quality assessment, and data extraction. The ALE technique was employed to synthesize findings on localized brain function anomalies in MDD patients, supplemented by sensitivity analyses. RESULTS: Twenty-two studies comprising fourteen diffusion tensor imaging (DTI) studies and eight voxel-based morphometry (VBM) studies, involving 451 MDD patients and 465 healthy controls (HCs) for DTI and 664 MDD patients and 946 HCs for VBM, were included. DTI-based ALE demonstrated significant reductions in fractional anisotropy (FA) values in the right caudate head, right insula, and right lentiform nucleus putamen in adolescents and young adults with MDD compared to HCs, with no regions exhibiting increased FA values. VBM-based ALE did not demonstrate significant alterations in gray matter volume. Sensitivity analyses highlighted consistent findings in the right caudate head (11 of 14 analyses), right insula (10 of 14 analyses), and right lentiform nucleus putamen (11 of 14 analyses). CONCLUSION: Structural alterations in the right caudate head, right insula, and right lentiform nucleus putamen in young MDD patients may contribute to the recurrent nature of MDD, offering insights for targeted therapies.
Abstract: A photovoltaic (PV) string of multiple modules with bypass diodes, frequently deployed on a variety of autonomous PV systems, may present multiple power peaks under uneven shading. For optimal solar harvesting, a control scheme is needed to force the PV string to operate at the global maximum power point (GMPP). While many tracking methods have been proposed in the literature, they are usually complex and do not fully exploit the available characteristics of the PV array. This work highlights how the voltage at the operating point and the forward voltage of the bypass diode can be used to design a global maximum power point tracking (GMPPT) algorithm with a very limited global search phase, called Fast GMPPT. This algorithm successfully tracks the GMPP between 94% and 98% of the time in a theoretical evaluation. It is then compared against Perturb and Observe, Deterministic Particle Swarm Optimization, and Grey Wolf Optimization under a sequence of irradiance steps, as well as a power-over-voltage characteristics profile that mimics the electrical characteristics of a PV string under varying partial shading conditions. Overall, the simulation with the sequence of irradiance steps shows that while Fast GMPPT does not have the best convergence time, it has an excellent convergence rate and causes the least power loss during the global search phase. Experimental tests under varying partial shading conditions show that, while the GMPPT proposal is simple and lightweight, it performs well under a wide range of dynamically varying partial shading conditions and achieves the best energy efficiency (94.74%) of the four tested algorithms.
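For contrast with the global methods discussed above, the conventional Perturb and Observe hill-climb can be sketched on a toy unimodal power-voltage curve. The curve and step size are invented; a real PV string under partial shading is multimodal, which is precisely why P&O alone gets trapped on a local peak and a GMPPT scheme is needed.

```python
def perturb_observe(pv_power, v_start=10.0, dv=0.2, n_steps=200):
    """Classic P&O hill-climb: keep perturbing in the direction that raised power."""
    v, step = v_start, dv
    p = pv_power(v)
    for _ in range(n_steps):
        v_next = v + step
        p_next = pv_power(v_next)
        if p_next < p:
            step = -step          # power dropped, so reverse the perturbation
        v, p = v_next, p_next
    return v

# invented unimodal P-V curve with its (single) maximum power point at 17 V
pv = lambda v: 100.0 - (v - 17.0) ** 2
v_mpp = perturb_observe(pv)       # settles into a small oscillation around 17 V
```

Fast GMPPT, as the abstract describes, adds a short global phase informed by the bypass-diode forward voltage before handing over to a local tracker like this one.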
Funding: Funded by the National Natural Science Foundation of China under Grants No. 62273108 and No. 62306081, the Youth Project of Guangdong Artificial Intelligence and Digital Economy Laboratory (Guangzhou) (PZL2022KF0006), the National Key Research and Development Program of China (2022YFB3604502), the Special Fund Project of Guangzhou Science and Technology Innovation Development (202201011307), the Guangdong Province Industrial Internet Identity Analysis and Construction Guidance Fund Secondary Node Project (1746312), and the Special Projects in Key Fields of General Colleges and Universities in Guangdong Province (2021ZDZX1016).
Abstract: Beyond-5G (B5G) aims to meet the growing demands of mobile traffic and expand the communication space. Considering that intelligent applications of B5G wireless communications will involve security issues regarding user data and operational data, this paper analyzes the maximum capacity of the multi-watermarking method for multimedia signal hiding as a means of alleviating the information security problem of B5G. The multi-watermarking process employs spread transform dither modulation. During the watermarking procedure, Gram-Schmidt orthogonalization is used to obtain the multiple spreading vectors, so that multiple watermarks can be simultaneously embedded at the same position of a multimedia signal and extracted without affecting one another. We analyze the effect of the size of the spreading vector on the unit maximum capacity, and derive the theoretical relationship between the size of the spreading vector and the unit maximum capacity. A number of experiments are conducted to determine the optimal parameter values for maximum robustness on the premise of high capacity and good imperceptibility.
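Why Gram-Schmidt orthogonalization lets several marks share the same host position can be sketched directly. The sketch below uses a simple additive embed and a non-blind correlation detector as stand-ins for the paper's spread transform dither modulation (the dimensions, embedding strength, and random vectors are invented); the point is that orthonormal spreading vectors make each correlation isolate its own watermark.

```python
import random

def gram_schmidt(vectors):
    """Orthonormalize a set of spreading vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = sum(x * y for x, y in zip(w, b))
            w = [x - c * y for x, y in zip(w, b)]
        norm = sum(x * x for x in w) ** 0.5
        basis.append([x / norm for x in w])
    return basis

random.seed(0)
L, K, alpha = 64, 3, 0.5                       # invented: signal length, marks, strength
U = gram_schmidt([[random.gauss(0, 1) for _ in range(L)] for _ in range(K)])
host = [random.gauss(0, 1) for _ in range(L)]

bits = [1, -1, 1]
marked = list(host)
for b, u in zip(bits, U):                      # embed all marks at the same position
    marked = [m + alpha * b * ui for m, ui in zip(marked, u)]

# orthonormality means each correlation recovers exactly alpha * b_k
recovered = [1 if sum((m - h) * ui for m, h, ui in zip(marked, host, u)) > 0 else -1
             for u in U]
```

Because `<u_j, u_k> = 0` for `j != k`, each of the K correlations sees only its own mark, which is the mechanism behind the capacity analysis in the abstract.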
Funding: Supported in part by the National Key Research and Development Program of China (2019YFB1503700) and the Hunan Natural Science Foundation Science and Education Joint Project (2019JJ70063).
Abstract: The noise that comes from finite element simulation often causes the model to fall into local optima and to overfit during generator optimization. This paper therefore proposes a Gaussian Process Regression (GPR) model based on Conditional Likelihood Lower Bound Search (CLLBS) to optimize the generator design; it filters the noise in the data and searches for the global optimum by combining the conditional likelihood lower bound search method. The efficiency optimization of a 15 kW permanent magnet synchronous motor is taken as an example. First, elementary effect analysis is used to choose the sensitive variables, combined with an evolutionary algorithm to design the super Latin cube sampling plan; the generator-converter system is then simulated on a co-simulation platform to obtain data. A Gaussian process regression model combined with the conditional likelihood lower bound search is established, and the chi-square test is used to optimize the accuracy of the model globally. Second, once the model reaches the required accuracy, the Pareto front is obtained through the NSGA-II algorithm, taking the maximum output torque as a constraint. Finally, the constrained optimization is transformed into an unconstrained problem by introducing the maximum constrained expected improvement (CEI) optimization method based on the re-interpolation model, which cross-validates the optimization results of the Gaussian process regression model. The above methods increase the efficiency of the generator by 0.76% and 0.5%, respectively, and the approach can be used for rapid modeling and multi-objective optimization of generator systems.
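The GPR surrogate at the core of such a workflow can be sketched in a few lines. This is a generic GP posterior mean with an RBF kernel and a tiny dense solver, not the paper's CLLBS model; the training data, length scale, and noise level are invented for illustration.

```python
import math

def rbf(a, b, length_scale=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-((a - b) ** 2) / (2 * length_scale ** 2))

def solve(A, y):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_mean(xs, ys, x_star, noise=1e-6):
    """GP posterior mean k_*^T (K + noise*I)^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(ai * rbf(x_star, xi) for ai, xi in zip(alpha, xs))

xs = [0.0, 1.0, 2.0, 3.0]            # invented design points
ys = [math.sin(x) for x in xs]       # invented noise-free responses
```

The `noise` term is what lets the surrogate absorb simulation noise instead of interpolating it exactly, which is the overfitting problem the abstract targets.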
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2019YFC1509202) and the National Natural Science Foundation of China (Grant Nos. 41772350, 61371189, and 41701513).
Abstract: The occurrence of earthquakes is closely related to crustal geotectonic movement and the migration of mass, which consequently cause changes in gravity. Gravity Recovery And Climate Experiment (GRACE) satellite data can be used to detect gravity changes associated with large earthquakes. However, previous GRACE-based seismic gravity-change studies have focused more on coseismic than on preseismic gravity changes. Moreover, the north-south stripe noise in GRACE data is difficult to eliminate, resulting in the loss of some gravity information related to tectonic activities. To explore preseismic gravity anomalies in a more refined way, we first propose a method of characterizing gravity variation based on the maximum shear strain of gravity, inspired by the concept of crustal strain. The offset index method is then adopted to describe the gravity anomalies, and the spatial and temporal characteristics of gravity anomalies before earthquakes are analyzed at the scales of the fault zone and the plate, respectively. In this work, experiments are carried out on the Tibetan Plateau and its surrounding areas, with the following findings. First, at the observation scale of the fault zone, we detect the occurrence of large-area gravity anomalies near the epicenter, often about half a year before an earthquake, and these anomalies were distributed along the fault zone. Second, at the observation scale of the plate, we find that when an earthquake occurred on the Tibetan Plateau, a large number of gravity anomalies also occurred at the boundary of the Tibetan Plateau and the Indian Plate. Moreover, the aforementioned experiments confirm that the proposed method can successfully capture the preseismic gravity anomalies of large earthquakes with a magnitude of less than 8, which suggests a new idea for the application of gravity satellite data to earthquake research.
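The strain analogy above can be made concrete: in 2-D crustal strain, the maximum shear strain is built from spatial derivatives of two displacement components, and the abstract applies the same construction to gravity-variation components. The finite-difference sketch below uses an invented analytic field so the result can be checked; it illustrates only the construction, not the paper's GRACE processing chain.

```python
def max_shear(u, v, x, y, h=1e-5):
    """Maximum shear, strain-style: gamma_max = 0.5*sqrt(g1^2 + g2^2), with
    g1 = du/dx - dv/dy (normal-difference) and g2 = du/dy + dv/dx (shear),
    evaluated by central differences."""
    du_dx = (u(x + h, y) - u(x - h, y)) / (2 * h)
    du_dy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    dv_dx = (v(x + h, y) - v(x - h, y)) / (2 * h)
    dv_dy = (v(x, y + h) - v(x, y - h)) / (2 * h)
    g1 = du_dx - dv_dy
    g2 = du_dy + dv_dx
    return 0.5 * (g1 ** 2 + g2 ** 2) ** 0.5

# invented analytic field: u = x*y, v = 0, so g1 = y and g2 = x analytically
u = lambda x, y: x * y
v = lambda x, y: 0.0
gamma = max_shear(u, v, 3.0, 4.0)    # 0.5 * sqrt(4^2 + 3^2) = 2.5
```

On gridded GRACE-derived gravity variations the same central differences would be taken between neighboring grid cells, and the anomaly screening (the offset index in the abstract) is then applied to the resulting shear field.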
Abstract: The main aim of the paper is to present (and at the same time offer) a different perspective for the analysis of the accelerated expansion of the Universe: a perspective that can be considered as being "in parallel" to the traditional ones, such as those based, for example, on the hypotheses of "Dark Matter" and "Dark Energy", or better as a "com-possible" perspective, because it is not understood as being "exclusive". In fact, it is an approach that, when confirmed by experimental results, always keeps its validity from an "operative" point of view. This is because, in analogy to the traditional perspectives, on the basis of Popper's Falsification Principle, the corresponding "Generative" Logic on which it is based does not have the property of perfect induction. The basic difference then consists only in the fact that the evolution of the Universe is now modeled by considering the Universe as a Self-Organizing System, which is thus analyzed in the light of the Maximum Ordinality Principle.