Beamspace super-resolution methods for elevation estimation in multipath environments have attracted significant attention, especially the beamspace maximum likelihood (BML) algorithm. However, the difference beam is rarely used in super-resolution methods, especially in low-elevation estimation. The target airspace information carried by the difference beam differs from that carried by the sum beam, and using difference beams does not significantly increase the complexity of the system or the algorithms. Thus, this paper applies the difference beam to the beamformer to improve the elevation estimation performance of the BML algorithm, and the direction and number of beams can be adjusted according to actual needs. The theoretical root mean square error (RMSE) of the target elevation angle and the computational complexity of the proposed algorithms are analyzed. Finally, computer simulations and real data processing results demonstrate the effectiveness of the proposed algorithms.
This paper proposes to apply the genetic algorithm and the firefly algorithm to enhance the estimation of the direction of arrival (DOA) angle of electromagnetic signals at a smart antenna array. This estimation is essential for beamforming, where the antenna array radiating pattern is steered to provide faster and more reliable data transmission with increased coverage. This work proposes using metaheuristics to improve a maximum likelihood DOA estimator for an antenna array arranged in a uniform cuboidal geometry. The DOA estimation performance of the proposed algorithm was compared to that of MUSIC in different two-dimensional scenarios. The metaheuristic algorithms present better performance than the well-known MUSIC algorithm.
Discrete choice models are widely used in multiple sectors such as transportation, health, energy, and marketing, where model estimation is usually carried out with commercial software. Nonetheless, tailored computer codes offer modellers greater flexibility and control over unique modelling situations. Aligned with an empirically tailored computing environment, this research discusses the relative performance of six different algorithms for a discrete choice model using three key performance measures: convergence time, number of iterations, and iteration time. The computer codes are developed using Visual Basic for Applications (VBA). The maximum likelihood function (MLF) is formulated, and the mathematical relationships of the gradient and Hessian matrix are analytically derived to carry out the estimation process. The estimated parameter values clearly suggest that the convergence criterion and the initial guess of the parameters are the two critical factors determining the overall estimation performance of a custom-built discrete choice model.
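The estimation loop described above — maximizing the log-likelihood with an analytically derived gradient and Hessian — can be sketched for the simplest case, a binary logit model. This is an illustrative Newton-Raphson implementation in Python, not the authors' VBA code, and the toy data are invented:

```python
import math

def fit_logit(xs, ys, iters=25, tol=1e-10):
    """Newton-Raphson MLE for a binary logit P(y=1) = 1/(1+exp(-(b0+b1*x))),
    using the analytic gradient and 2x2 Hessian of the log-likelihood."""
    b0, b1 = 0.0, 0.0                       # initial guess of parameters
    for _ in range(iters):
        g0 = g1 = 0.0                       # gradient components
        h00 = h01 = h11 = 0.0               # entries of -Hessian (positive definite)
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            r = y - p                       # residual
            g0 += r
            g1 += r * x
            w = p * (1.0 - p)               # logistic weight
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        s0 = ( h11 * g0 - h01 * g1) / det   # Newton step: (-H)^{-1} g
        s1 = (-h01 * g0 + h00 * g1) / det
        b0 += s0
        b1 += s1
        if abs(s0) + abs(s1) < tol:         # convergence criterion
            break
    return b0, b1

# toy data: choices tend toward y = 1 as x grows
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
ys = [0,   0,   0,   1,   0,   1,   1,   1]
b0, b1 = fit_logit(xs, ys)
```

As the abstract notes, the stopping tolerance and the initial guess (here zeros) drive how many iterations this loop takes.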
The neutron spectrum unfolding by a Bonner sphere spectrometer (BSS) is considered a complex multidimensional model, which requires complex mathematical methods to solve the Fredholm integral equation of the first kind. The maximum likelihood expectation maximization (MLEM) algorithm easily falls into local optima, while the particle swarm optimization (PSO) algorithm easily produces unreasonable flight directions and step lengths for particles, leading to invalid iterations that degrade efficiency and accuracy. To address these problems, an improved PSO-MLEM algorithm, combining the PSO and MLEM algorithms, is proposed for neutron spectrum unfolding. A dynamic acceleration factor is used to balance the global and local search abilities and improves the convergence speed and accuracy of the algorithm. Firstly, the Monte Carlo method was used to simulate the BSS to obtain its response function and count rates. In the simulation of count rates, four reference spectra from IAEA Technical Report Series No. 403 were used as input parameters of the Monte Carlo method. The PSO-MLEM algorithm was used to unfold the neutron spectrum of the simulated data and was verified by the difference between the unfolded spectrum and the reference spectrum. Finally, a 252Cf neutron source was measured by the BSS, and the PSO-MLEM algorithm was used to unfold the experimental neutron spectrum. Compared with maximum entropy deconvolution (MAXED), PSO, and MLEM, the PSO-MLEM algorithm has fewer parameters and automatically adjusts the dynamic acceleration factor to avoid local optima. The convergence speed of the PSO-MLEM algorithm is 1.4 times and 3.1 times that of the MLEM and PSO algorithms. Compared with PSO, MLEM, and MAXED, the correlation coefficients of the PSO-MLEM algorithm are increased by 33.1%, 33.5%, and 1.9%, and the relative mean errors are decreased by 98.2%, 97.8%, and 67.4%.
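As context for the MLEM half of the hybrid, the standard multiplicative MLEM update for unfolding y ≈ Ax can be sketched as follows. The 3×2 response matrix and spectrum are a made-up toy, not BSS data:

```python
def mlem_unfold(A, y, n_iter=2000):
    """MLEM iteration for y ≈ A x with nonnegative x (e.g. spectrum unfolding):
    x_j <- (x_j / sum_i A_ij) * sum_i A_ij * y_i / (A x)_i."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                    # flat starting spectrum
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        yhat = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            x[j] *= sum(A[i][j] * y[i] / yhat[i] for i in range(m)) / col_sum[j]
    return x

# toy 3-detector / 2-bin response; counts generated from a "true" spectrum [2, 1]
A = [[1.0, 0.2], [0.5, 0.5], [0.1, 1.0]]
true_x = [2.0, 1.0]
y = [sum(A[i][j] * true_x[j] for j in range(2)) for i in range(3)]
x = mlem_unfold(A, y)
```

The update is multiplicative, so iterates stay nonnegative; the slow convergence and local-optimum issues the abstract describes are what the PSO component is meant to mitigate.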
In this article, we consider a lifetime distribution, the Weibull-Logarithmic distribution introduced by [6]. We investigate some new statistical characterizations and properties and develop maximum likelihood inference using the EM algorithm. Asymptotic properties of the MLEs are obtained, and extensive simulations are conducted to assess the performance of parameter estimation. A numerical example is used to illustrate the application.
Maximum likelihood estimation (MLE) is an effective method for localizing radioactive sources in a given area. However, it requires an exhaustive search for parameter estimation, which is time-consuming. In this study, heuristic techniques were employed to search for the radiation source parameters that provide the maximum likelihood, using a network of sensors, thereby effectively reducing the time consumption of MLE. First, the radiation source was detected using the k-sigma method. Subsequently, MLE was applied for parameter estimation using the readings and positions of the detectors that had detected the radiation source. A comparative study was performed in which the estimation accuracy and time consumption of MLE were evaluated for traditional methods and heuristic techniques. The traditional MLE was performed via a grid search using fixed and multiple resolutions. Additionally, four commonly used heuristic algorithms were applied: the firefly algorithm (FFA), particle swarm optimization (PSO), ant colony optimization (ACO), and artificial bee colony (ABC). The experiment was conducted using real data collected by the Low Scatter Irradiator facility at the Savannah River National Laboratory as part of the Intelligent Radiation Sensing System program. The comparative study showed that the estimation time was 3.27 s using fixed-resolution MLE and 0.59 s using multi-resolution MLE. The time consumption for the heuristic-based MLE was 0.75, 0.03, 0.02, and 0.059 s for FFA, PSO, ACO, and ABC, respectively. The location estimation error was approximately 0.4 m using either the grid-search-based MLE or the heuristic-based MLE. Hence, heuristic-based MLE can provide comparable estimation accuracy through a less time-consuming process than traditional MLE.
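A fixed-resolution grid search of this kind can be sketched as below. The inverse-square Poisson count model, known source strength, and detector layout are simplifying assumptions for illustration, not the paper's actual setup:

```python
import math

def grid_mle(detectors, counts, strength, extent, step):
    """Scan a regular grid over [0, extent]^2 and keep the position with the
    highest Poisson log-likelihood, assuming expected counts ~ strength / r^2."""
    best, best_ll = None, -float("inf")
    steps = int(extent / step) + 1
    for ix in range(steps):
        for iy in range(steps):
            x, y = ix * step, iy * step
            ll = 0.0
            for (dx, dy), c in zip(detectors, counts):
                lam = strength / ((dx - x) ** 2 + (dy - y) ** 2 + 1e-9)
                ll += c * math.log(lam) - lam    # Poisson log-likelihood, log(c!) dropped
            if ll > best_ll:
                best, best_ll = (x, y), ll
    return best

# noiseless toy check: counts generated from a hypothetical source at (6, 4)
detectors = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 0)]
src, strength = (6.0, 4.0), 500.0
counts = [strength / ((dx - src[0]) ** 2 + (dy - src[1]) ** 2) for dx, dy in detectors]
est = grid_mle(detectors, counts, strength, extent=10.0, step=0.5)
```

The nested loops make the cost quadratic in grid resolution, which is exactly the expense that the multi-resolution and heuristic searches in the paper avoid.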
The conformal array can make full use of the aperture, save space, meet aerodynamic requirements, and is sensitive to polarization information. It has broad application prospects in the military, aerospace, and communication fields. Joint polarization and direction-of-arrival (DOA) estimation based on the conformal array, together with the theoretical analysis of its parameter estimation performance, are the key factors for promoting the engineering application of the conformal array. To solve these problems, this paper establishes the wave field signal model of the conformal array. Then, for the case of a single target, the cost function of the maximum likelihood (ML) estimator is rewritten with the Rayleigh quotient, from a problem of maximizing a ratio of quadratic forms into one of minimizing quadratic forms. On this basis, rapid parameter estimation is achieved with the idea of manifold separation technology (MST). Compared with the modified variable projection (MVP) algorithm, this reduces the computational complexity and improves the parameter estimation performance. Meanwhile, the MST is used to solve the partial derivative of the steering vector. Then, the theoretical performance of the ML and multiple signal classification (MUSIC) estimators and the Cramer-Rao bound (CRB) based on the conformal array are derived, respectively, which provides a theoretical foundation for the engineering application of the conformal array. Finally, a simulation experiment verifies the effectiveness of the proposed method.
In this paper, a weighted maximum likelihood technique (WMLT) for the logistic regression model is presented. The method depends on a weight function that is continuously adaptable using Mahalanobis distances of the predictor variables. Under the model, the asymptotic consistency of the suggested estimator is demonstrated, and its finite-sample properties are also investigated via simulation. In simulation studies and real data sets, the newly proposed technique demonstrates the best performance among all compared estimators.
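The Mahalanobis-based weighting idea can be illustrated with a minimal sketch. The cutoff-style weight function w = min(1, c/d²) and the cutoff value c are assumptions chosen for illustration and need not match the paper's exact weight function:

```python
def mahalanobis_weights(X, c=4.0):
    """Downweight rows of a two-column predictor matrix X whose squared
    Mahalanobis distance d2 from the column means exceeds a cutoff c,
    via w = min(1, c / d2).  Covariance is estimated from the sample."""
    n = len(X)
    mx = sum(r[0] for r in X) / n
    my = sum(r[1] for r in X) / n
    sxx = sum((r[0] - mx) ** 2 for r in X) / (n - 1)
    syy = sum((r[1] - my) ** 2 for r in X) / (n - 1)
    sxy = sum((r[0] - mx) * (r[1] - my) for r in X) / (n - 1)
    det = sxx * syy - sxy * sxy                 # determinant of the 2x2 covariance
    ws = []
    for x, y in X:
        dx, dy = x - mx, y - my
        d2 = (syy * dx * dx - 2.0 * sxy * dx * dy + sxx * dy * dy) / det
        ws.append(min(1.0, c / d2) if d2 > 0.0 else 1.0)
    return ws

# nine well-behaved predictor rows plus one gross outlier
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5),
     (1.0, 0.5), (0.5, 1.0), (0.0, 0.5), (0.5, 0.0), (10.0, 10.0)]
ws = mahalanobis_weights(X)
```

In a weighted MLE, these weights would multiply each observation's contribution to the log-likelihood, bounding the influence of leverage points like the last row.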
Low elevation estimation, which has attracted wide attention due to the presence of specular multipath, is essential for tracking radars. Frequency agility not only has the advantage of strong anti-interference ability, but can also enhance the performance of tracking radars. A frequency-agile refined maximum likelihood (RML) algorithm based on optimal fusion is proposed. The algorithm constructs an optimization problem that minimizes the mean square error (MSE) of the angle estimate, thereby obtaining the optimal weight at each frequency point for fusing the angle estimates. Theoretical analysis and simulation show that the frequency-agile RML algorithm based on optimal fusion can effectively improve the accuracy of angle estimation.
Indoor positioning is a key technology in today's intelligent environments, and it plays a crucial role in many application areas. This paper proposes an unscented Kalman filter (UKF) based on the maximum correntropy criterion (MCC) instead of the minimum mean square error (MMSE) criterion. This approach is applied to the loose coupling of the Inertial Navigation System (INS) and Ultra-Wideband (UWB). By introducing the maximum correntropy criterion, the MCCUKF algorithm dynamically adjusts the covariance matrices of the system noise and the measurement noise, thus enhancing its adaptability to diverse environmental localization requirements. Particularly in the presence of non-Gaussian noise, especially heavy-tailed noise, the MCCUKF exhibits superior accuracy and robustness compared to the traditional UKF. The method first generates an estimate of the predicted state and covariance matrix through the unscented transform (UT) and then recharacterizes the measurement information using a nonlinear regression method under the MCC cost. Subsequently, the state and covariance matrices of the filter are updated by applying the unscented transformation to the measurement equations. Moreover, to mitigate the influence of non-line-of-sight (NLOS) errors on positioning accuracy, this paper proposes a k-medoid clustering algorithm based on bisecting k-means (Bikmeans). This algorithm preprocesses the UWB distance measurements to yield a more precise position estimate. Simulation results demonstrate that the MCCUKF is robust to the uncertainty of UWB and realizes stable integration of the INS and UWB systems.
In this paper, an effective algorithm for optimizing the subarrays of conformal arrays is proposed. The method first divides the conformal array into several first-level subarrays. It uses the X algorithm to find feasible solutions for first-level subarray tiling and employs the particle swarm algorithm to optimize the conformal array subarray tiling scheme, with the maximum entropy of the planar mapping as the fitness function. Subsequently, convex optimization is applied to optimize the subarray amplitudes and phases. Numerical results verify that the method can effectively find the optimal conformal array tiling scheme.
Wide-band direction finding is one of the hot and difficult tasks in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wideband case, constructs the corresponding objective function, and then applies a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimation on the final estimate. The algorithm is applied to a uniform linear array, and extensive simulation results prove its efficacy. In the simulations, we also obtain the relation between the estimation error and the parameters of the genetic algorithm.
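A genetic-algorithm global search over a nonlinear objective can be sketched in miniature. The one-dimensional quadratic objective below is only a stand-in for the wideband ML cost function, and all operator choices and parameter values are illustrative:

```python
import random

def ga_maximize(f, lo, hi, pop_size=40, gens=60, pm=0.2, seed=1):
    """Minimal real-coded genetic algorithm: elitism, tournament selection,
    blend crossover, and Gaussian mutation; maximizes f over [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = [max(pop, key=f)]                       # elitism: carry over the best
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=f)        # tournament selection
            b = max(rng.sample(pop, 3), key=f)
            w = rng.random()
            child = w * a + (1.0 - w) * b             # blend crossover
            if rng.random() < pm:
                child += rng.gauss(0.0, 0.1 * (hi - lo))  # Gaussian mutation
            nxt.append(min(max(child, lo), hi))       # clip to the search interval
        pop = nxt
    return max(pop, key=f)

# stand-in objective with its maximum at x = 2
objective = lambda x: -(x - 2.0) ** 2
best = ga_maximize(objective, -10.0, 10.0)
```

In the paper's setting, the chromosome would encode the candidate DOAs and f would be the wideband deterministic ML objective; the GA's population size, mutation rate, and generation count are the parameters whose effect on estimation error the simulations examine.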
As modern weapons and equipment undergo increasing levels of informatization, intelligence, and networking, the topology and traffic characteristics of battlefield data networks built with tactical data links are becoming progressively more complex. In this paper, we employ a traffic matrix to model the tactical data link network. We propose a method that utilizes the Maximum Variance Unfolding (MVU) algorithm to conduct nonlinear dimensionality reduction analysis on high-dimensional open network traffic matrix datasets. This approach introduces novel ideas and methods for future applications, including traffic prediction and anomaly analysis in real battlefield network environments.
The flexible job shop scheduling problem (FJSP) is the core decision-making problem in intelligent manufacturing production management. The Harris hawk optimization (HHO) algorithm, a typical metaheuristic, has been widely employed to solve scheduling problems. However, HHO suffers from premature convergence when solving NP-hard problems. Therefore, this paper proposes an improved HHO algorithm (GNHHO) to solve the FJSP. GNHHO introduces an elitism strategy, a chaotic mechanism, a nonlinear escaping-energy update strategy, and a Gaussian random walk strategy to prevent premature convergence. A flexible job shop scheduling model is constructed, and both the static and dynamic FJSP are investigated to minimize the makespan. This paper chooses a two-segment encoding based on the jobs and machines of the FJSP. To verify the effectiveness of GNHHO, this study tests it on 23 benchmark functions, 10 standard job shop scheduling problems (JSPs), and 5 standard FJSPs. In addition, this study collects data from an agricultural company and uses the GNHHO algorithm to optimize the company's FJSP. The optimized scheduling scheme demonstrates significant improvements in makespan, with gains of 28.16% for static scheduling and 35.63% for dynamic scheduling, and achieves an average increase of 21.50% in the on-time order delivery rate. The results demonstrate that the performance of the GNHHO algorithm in solving the FJSP is superior to some existing algorithms.
The wireless communication environment in a coal mine has unique characteristics: heavy noise and strong multipath interference. Underground orthogonal frequency division multiplexing (OFDM) communication is sensitive to the frequency selectivity of the multipath fading channel, and its decoding is separated from the traditional channel estimation algorithm. To increase accuracy and reliability, a new iterative channel estimation algorithm is proposed in this paper, which combines logarithm likelihood ratio (LLR) decoding iterations with maximum likelihood (ML) channel estimation. Without estimating the channel noise power, it exchanges information between the ML channel estimator and the LLR decoder using the feedback from LLR decoding. The decoding is fast, and a satisfactory result is obtained after a few iterations. Simulation results on a shortwave broadband channel in a coal mine show that the error rate of the system essentially converges after two iterations.
Compositional data, such as relative information, is a crucial aspect of machine learning and related fields. It is typically recorded as closed data that sum to a constant, such as 100%. The statistical linear model is the most widely used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data are present. When estimating linear regression parameters, which are useful for tasks like prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice; yet many datasets contain missing observations, and data recovery can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize this expected log-likelihood. This study examined how well the EM algorithm performed on a simulated compositional dataset with missing observations, using both robust least squares and ordinary least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
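The E- and M-steps described above can be sketched for the simplest case: a bivariate normal in which some values of the second variable are missing. The data are made up, and a real compositional dataset would first need a log-ratio transform, which is omitted here:

```python
def em_bivariate(x1, x2, n_iter=500):
    """EM for a bivariate normal when some x2 entries are None (missing).
    E-step: impute E[x2 | x1] and the conditional variance under the current
    parameters.  M-step: re-estimate the mean vector and covariance."""
    n = len(x1)
    obs = {i for i in range(n) if x2[i] is not None}
    n_mis = n - len(obs)
    mu1 = sum(x1) / n                                  # x1 fully observed: fixed
    s11 = sum((v - mu1) ** 2 for v in x1) / n
    mu2 = sum(x2[i] for i in obs) / len(obs)           # start from complete cases
    s22 = sum((x2[i] - mu2) ** 2 for i in obs) / len(obs)
    s12 = 0.0
    for _ in range(n_iter):
        beta = s12 / s11                               # regression slope of x2 on x1
        cvar = s22 - beta * s12                        # Var(x2 | x1)
        # E-step: expected x2 for the missing entries
        e2 = [x2[i] if i in obs else mu2 + beta * (x1[i] - mu1) for i in range(n)]
        # M-step: update mean and covariance (conditional variance feeds s22)
        mu2 = sum(e2) / n
        s12 = sum((x1[i] - mu1) * (e2[i] - mu2) for i in range(n)) / n
        s22 = (sum((v - mu2) ** 2 for v in e2) + n_mis * cvar) / n
    return mu1, mu2, s11, s12, s22

# toy data: x2 roughly 2 * x1, with the last three x2 values missing
x1 = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
x2 = [0.1, 2.0, 3.9, 6.1, 8.0, None, None, None]
mu1, mu2, s11, s12, s22 = em_bivariate(x1, x2)
```

A design note: unlike mean imputation, the M-step adds the conditional variance for each imputed value to s22, so the covariance estimate is not artificially shrunk by the imputation.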
Aimed at evaluating the qualification rate in weaponry tests, this article first discusses how field test information is accompanied by a large amount of prior information. A fast Bayesian evaluation algorithm is then presented, based on a careful analysis of the reliability of the prior information and the second category of maximum likelihood. An example demonstrates that the presented algorithm is better and more robust than the classical evaluation algorithm for safe-or-failure tests and the normal Bayesian method, as it makes the best use of prior information.
In order to obtain the life information of the vacuum fluorescent display (VFD) in a short time, a model of constant stress accelerated life tests (CSALT) is established with increased filament temperature, and four constant stress tests are conducted. The Weibull function is applied to describe the life distribution of the VFD, and maximum likelihood estimation (MLE) with its iterative flow chart is used to calculate the shape and scale parameters. Furthermore, the accelerated life equation is determined by the least squares method, the Kolmogorov-Smirnov test is performed to verify whether the VFD life follows the Weibull distribution, and self-developed software is employed to predict the average life and the reliable life. Statistical data analysis results demonstrate that the test plans are feasible and versatile, that the VFD life follows the Weibull distribution, and that the VFD acceleration model satisfies the linear Arrhenius equation. The proposed method and the estimated life information of the VFD can provide significant guidance to its manufacturers and customers.
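The MLE step for the two-parameter Weibull can be sketched via the profile-likelihood equation for the shape parameter. The bisection solver and the synthetic quantile-based sample below are illustrative, not the paper's iterative flow chart:

```python
import math

def weibull_mle(t, iters=200):
    """Two-parameter Weibull MLE.  The shape k solves
    sum(t^k ln t)/sum(t^k) - 1/k - mean(ln t) = 0, which is monotone
    increasing in k, so bisection works; the scale is then
    lam = (sum(t^k)/n)^(1/k)."""
    n = len(t)
    mean_log = sum(math.log(v) for v in t) / n
    def g(k):
        num = sum(v ** k * math.log(v) for v in t)
        den = sum(v ** k for v in t)
        return num / den - 1.0 / k - mean_log
    lo, hi = 1e-3, 50.0                       # bracket for the shape parameter
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = (sum(v ** k for v in t) / n) ** (1.0 / k)
    return k, lam

# synthetic lifetimes from Weibull(shape=2, scale=3) via the quantile function
n = 200
sample = [3.0 * (-math.log(1.0 - (i + 0.5) / n)) ** 0.5 for i in range(n)]
k, lam = weibull_mle(sample)
```

Profiling out the scale reduces the two-parameter likelihood to a one-dimensional root-find, which is why simple iteration schemes like the paper's flow chart suffice.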
Using maximum likelihood classification, several landscape indexes were adopted to evaluate the landscape structure of the irrigated area of Hongsibao Town, and the landscape pattern and dynamic change of Hongsibao in 1989, 1999, 2003, and 2008 were analyzed based on landscape patches, landscape types, and transfer matrices. The results show that the landscape pattern changed markedly in the irrigated area of Hongsibao Town from 1989 to 2008: patch number, fragmentation, and dominance increased, evenness decreased, and landscape shapes became more regular. The primary landscape type was grassland in 1989 and sand in 2008, directly influenced by human activities.
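Maximum likelihood classification assigns each pixel to the class whose Gaussian likelihood is highest. A one-band, two-class sketch follows; the class means, standard deviations, and pixel values are invented for illustration:

```python
import math

def ml_classify(value, classes):
    """Assign a pixel value to the class with the largest one-dimensional
    Gaussian likelihood (per-class mean and standard deviation given)."""
    best_name, best_density = None, -1.0
    for name, (mu, sigma) in classes.items():
        d = math.exp(-0.5 * ((value - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        if d > best_density:
            best_name, best_density = name, d
    return best_name

# invented spectral statistics for two land-cover classes
classes = {"grassland": (60.0, 10.0), "sand": (120.0, 15.0)}
labels = [ml_classify(v, classes) for v in (55.0, 130.0)]
```

In practice this is done per band vector with full covariance matrices estimated from training polygons, but the decision rule, maximizing the class-conditional density, is the same.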
Funding: supported by the Fund for Foreign Scholars in University Research and Teaching Programs (B18039).
Funding: supported by the National Natural Science Foundation of China (No. 42127807), the Sichuan Science and Technology Program (No. 2020YJ0334), and the Sichuan Science and Technology Breeding Program (No. 2022041).
Funding: supported by the program for the Fundamental Research Funds for the Central Universities (2014RC042, 2015JBM109).
Funding: the National Natural Science Foundation of China (62071144, 61971159, 61871149).
文摘The conformal array can make full use of the aperture,save space,meet the requirements of aerodynamics,and is sensitive to polarization information.It has broad application prospects in military,aerospace,and communication fields.The joint polarization and direction-of-arrival(DOA)estimation based on the conformal array and the theoretical analysis of its parameter estimation performance are the key factors to promote the engineering application of the conformal array.To solve these problems,this paper establishes the wave field signal model of the conformal array.Then,for the case of a single target,the cost function of the maximum likelihood(ML)estimator is rewritten with Rayleigh quotient from a problem of maximizing the ratio of quadratic forms into those of minimizing quadratic forms.On this basis,rapid parameter estimation is achieved with the idea of manifold separation technology(MST).Compared with the modified variable projection(MVP)algorithm,it reduces the computational complexity and improves the parameter estimation performance.Meanwhile,the MST is used to solve the partial derivative of the steering vector.Then,the theoretical performance of ML,the multiple signal classification(MUSIC)estimator and Cramer-Rao bound(CRB)based on the conformal array are derived respectively,which provides theoretical foundation for the engineering application of the conformal array.Finally,the simulation experiment verifies the effectiveness of the proposed method.
Abstract: In this paper, a weighted maximum likelihood technique (WMLT) for the logistic regression model is presented. The method depends on a weight function that adapts continuously using Mahalanobis distances of the predictor variables. Under the model, the asymptotic consistency of the suggested estimator is demonstrated, and its finite-sample properties are investigated via simulation. In simulation studies and on real data sets, the newly proposed technique demonstrated the best performance among all estimators compared.
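A minimal sketch of the weighted-MLE idea follows. The particular weight function below, a hard cutoff on squared Mahalanobis distance with a `c^2/d^2` decay, is an illustrative choice and not necessarily the paper's adaptive weight function; the optimizer is plain gradient ascent rather than IRLS:

```python
import numpy as np

def mahalanobis_weights(X, c=2.0):
    # Downweight high-leverage rows: weight 1 inside the cutoff,
    # decaying as c^2 / d^2 outside it (illustrative choice)
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
    return np.minimum(1.0, c ** 2 / np.maximum(d2, 1e-12))

def weighted_logistic_mle(X, y, w, iters=500, lr=0.2):
    # Gradient ascent on the weighted log-likelihood
    Xb = np.hstack([np.ones((len(X), 1)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta += lr * Xb.T @ (w * (y - p)) / len(y)
    return beta

# Toy one-predictor problem: y = 1 exactly when x > 0
X = np.linspace(-2.0, 2.0, 40).reshape(-1, 1)
y = (X[:, 0] > 0).astype(float)
beta = weighted_logistic_mle(X, y, mahalanobis_weights(X))
```

Because outlying predictor rows receive weights below one, a few gross leverage points cannot dominate the fitted coefficients the way they can in the unweighted MLE.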
Funding: supported by the Fund for Foreign Scholars in University Research and Teaching Programs (the 111 Project) (B18039).
Abstract: Low-elevation estimation, which has attracted wide attention due to the presence of specular multipath, is essential for tracking radars. Frequency agility not only offers strong anti-interference ability but can also enhance the performance of tracking radars. A frequency-agile refined maximum likelihood (RML) algorithm based on optimal fusion is proposed. The algorithm constructs an optimization problem that minimizes the mean square error (MSE) of the angle estimate, thereby obtaining the optimal weight at each frequency point for fusing the angle estimates. Theoretical analysis and simulation show that the frequency-agile RML algorithm based on optimal fusion can effectively improve the accuracy of angle estimation.
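The optimal-fusion step admits a compact closed form: for unbiased, independent per-frequency angle estimates, the MSE-minimizing linear fusion weights are proportional to the inverse MSEs. This is the standard inverse-variance result; the paper's derivation of the per-frequency MSEs themselves is not reproduced here:

```python
import numpy as np

def fuse_estimates(theta_hats, mses):
    # Minimum-MSE fusion of unbiased, independent estimates:
    # weights proportional to 1/MSE, normalized to sum to one
    mses = np.asarray(mses, dtype=float)
    w = 1.0 / mses
    w /= w.sum()
    fused = float(w @ np.asarray(theta_hats, dtype=float))
    fused_mse = 1.0 / np.sum(1.0 / mses)  # MSE of the fused estimate
    return fused, w, fused_mse
```

The fused MSE, 1 / sum(1/MSE_k), never exceeds the best single-frequency MSE, which is the source of the accuracy gain from fusing across agile frequency points.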
Funding: supported by the National Natural Science Foundation of China under Grant Nos. 62273083 and 61803077, and the Natural Science Foundation of Hebei Province under Grant No. F2020501012.
Abstract: Indoor positioning is a key technology in today's intelligent environments and plays a crucial role in many application areas. This paper proposes an unscented Kalman filter (UKF) based on the maximum correntropy criterion (MCC) instead of the minimum mean square error (MMSE) criterion. This approach is applied to the loose coupling of the Inertial Navigation System (INS) and Ultra-Wideband (UWB) positioning. By introducing the maximum correntropy criterion, the MCCUKF algorithm dynamically adjusts the covariance matrices of the system noise and the measurement noise, enhancing its adaptability to diverse environmental localization requirements. In the presence of non-Gaussian noise, especially heavy-tailed noise, the MCCUKF exhibits superior accuracy and robustness compared with the traditional UKF. The method first generates an estimate of the predicted state and covariance matrix through the unscented transform (UT) and then re-characterizes the measurement information using a nonlinear regression method under the MCC cost. Subsequently, the state and covariance matrices of the filter are updated by employing the unscented transformation on the measurement equations. Moreover, to mitigate the influence of non-line-of-sight (NLOS) errors on positioning accuracy, this paper proposes a k-medoid clustering algorithm based on bisecting k-means (Bikmeans). This algorithm preprocesses the UWB distance measurements to yield a more precise position estimate. Simulation results demonstrate that MCCUKF is robust to the uncertainty of UWB measurements and realizes stable integration of the INS and UWB systems.
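The core of the MCC idea can be seen in one function: a Gaussian kernel turns large (heavy-tailed) residuals into small weights, where an MMSE criterion would penalize them quadratically. This sketch shows only the kernel weighting, not the full MCCUKF recursion, and the kernel bandwidth is an illustrative value:

```python
import numpy as np

def correntropy_weights(residuals, sigma=2.0):
    # Gaussian-kernel weights: ~1 for small residuals, -> 0 for outliers,
    # so heavy-tailed measurement errors are effectively downweighted
    r = np.asarray(residuals, dtype=float)
    return np.exp(-r ** 2 / (2.0 * sigma ** 2))
```

Inside the filter, such weights rescale the effective measurement-noise covariance, which is how the MCCUKF adapts to NLOS-contaminated UWB ranges.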
Funding: supported by the Advanced Functional Composites Technology Key Laboratory Fund under Grant No. 6142906220404 and the Sichuan Province Centralized Guided Local Science and Technology Development Special Project under Grant No. 2022ZYD0121.
Abstract: In this paper, an effective algorithm for optimizing the subarrays of conformal arrays is proposed. The method first divides the conformal array into several first-level subarrays. It uses the X algorithm to solve for feasible first-level subarray tilings and employs the particle swarm algorithm to optimize the conformal-array subarray tiling scheme, with the maximum entropy of the planar mapping as the fitness function. Subsequently, convex optimization is applied to optimize the subarray amplitudes and phases. The results verify that the method can effectively find the optimal conformal-array tiling scheme.
Funding: This project was supported by the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of MOE (2001226).
Abstract: Wideband direction finding is one of the hot and difficult problems in array signal processing. This paper generalizes the narrowband deterministic maximum likelihood direction-finding algorithm to the wideband case, constructs the corresponding objective function, and then uses a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimates on the final estimation. The algorithm is applied to a uniform linear array, and extensive simulation results prove its efficacy. In the course of the simulations, we obtain the relation between the estimation error and the parameters of the genetic algorithm.
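A toy version of the idea for the narrowband single-source case: a uniform linear array, the deterministic ML objective, and a bare-bones real-coded GA with grid initialization, truncation selection, and Gaussian mutation. The paper's wideband objective and GA operators are more elaborate; everything below is an illustrative sketch:

```python
import numpy as np

def steering(theta, n, d=0.5):
    # ULA steering vector, element spacing d wavelengths, angle off broadside
    return np.exp(-2j * np.pi * d * np.arange(n) * np.sin(theta))

def dml_cost(theta, R):
    # Single-source deterministic ML objective: a(theta)^H R a(theta) (maximize)
    a = steering(theta, R.shape[0])
    return np.real(np.conj(a) @ R @ a)

def ga_doa(R, pop=60, gens=80, sigma=0.03, seed=0):
    rng = np.random.default_rng(seed)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, pop)  # coarse-grid initial population
    for _ in range(gens):
        fit = np.array([dml_cost(t, R) for t in thetas])
        elite = thetas[np.argsort(fit)[-pop // 4:]]   # truncation selection
        kids = rng.choice(elite, pop - elite.size) + rng.normal(0.0, sigma, pop - elite.size)
        thetas = np.clip(np.concatenate([elite, kids]), -np.pi / 2, np.pi / 2)
    fit = np.array([dml_cost(t, R) for t in thetas])
    return float(thetas[int(np.argmax(fit))])

# Noise-free covariance for a single source at 20 degrees, 8-element ULA
theta0 = np.deg2rad(20.0)
a0 = steering(theta0, 8)
R = np.outer(a0, np.conj(a0)) + 0.01 * np.eye(8)
est = ga_doa(R)
```

Because the objective is evaluated directly on the array covariance, no focusing or pre-estimation step is needed, matching the abstract's claim about eliminating pre-estimate effects.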
Abstract: As modern weapons and equipment undergo increasing levels of informatization, intelligence, and networking, the topology and traffic characteristics of battlefield data networks built with tactical data links are becoming progressively complex. In this paper, we employ a traffic matrix to model the tactical data link network. We propose a method that uses the Maximum Variance Unfolding (MVU) algorithm to perform nonlinear dimensionality reduction on high-dimensional open network traffic matrix datasets. This approach introduces novel ideas and methods for future applications, including traffic prediction and anomaly analysis in real battlefield network environments.
Abstract: The flexible job shop scheduling problem (FJSP) is the core decision-making problem of intelligent manufacturing production management. The Harris hawk optimization (HHO) algorithm, a typical metaheuristic, has been widely employed to solve scheduling problems. However, HHO suffers from premature convergence when solving NP-hard problems. Therefore, this paper proposes an improved HHO algorithm (GNHHO) to solve the FJSP. GNHHO introduces an elitism strategy, a chaotic mechanism, a nonlinear escaping-energy update strategy, and a Gaussian random walk strategy to prevent premature convergence. A flexible job shop scheduling model is constructed, and both the static and dynamic FJSP are investigated to minimize the makespan. This paper chooses a two-segment encoding based on the jobs and machines of the FJSP. To verify the effectiveness of GNHHO, this study tests it on 23 benchmark functions, 10 standard job shop scheduling problems (JSPs), and 5 standard FJSPs. In addition, this study collects data from an agricultural company and uses the GNHHO algorithm to optimize the company's FJSP. The optimized scheduling scheme demonstrates significant improvements in makespan, with an advancement of 28.16% for static scheduling and 35.63% for dynamic scheduling. Moreover, it achieves an average increase of 21.50% in the on-time order delivery rate. The results demonstrate that the performance of the GNHHO algorithm in solving the FJSP is superior to that of some existing algorithms.
Abstract: The wireless communication environment in coal mines has unique characteristics: strong noise and severe multipath interference. Underground orthogonal frequency division multiplexing (OFDM) communication is sensitive to the frequency-selective multipath fading channel, and in traditional schemes the decoding is separated from the channel estimation. To increase accuracy and reliability, this paper proposes a new iterative channel estimation algorithm that combines log-likelihood ratio (LLR) decoding iterations with maximum likelihood (ML) channel estimation. Without estimating the channel noise power, it exchanges information between the ML channel estimator and the LLR decoder using the decoder's feedback. The decoding is fast, and a satisfactory result is obtained after a few iterations. Simulation results for a shortwave broadband coal mine channel show that the system's error rate essentially converges after two iterations.
Abstract: Compositional data, such as relative information, is a crucial aspect of machine learning and related fields. It is typically recorded as closed data, summing to a constant such as 100%. The linear regression model is the most widely used statistical technique for identifying relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when observations are missing. When estimating linear regression parameters, which are useful for prediction and for analyzing the partial effects of independent variables, maximum likelihood estimation (MLE) is the method of choice. Many datasets, however, contain missing observations, whose recovery can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood (or maximum a posteriori, MAP) estimates of parameters in statistical models that depend on unobserved variables. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize the expected log-likelihood determined in the E step. This study examined how well the EM algorithm performed on a synthetic compositional dataset with missing observations, using both ordinary least squares and robust least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-nearest neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
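The alternating E- and M-steps can be sketched compactly for missing values under a multivariate-normal working model: the E-step fills each row's missing entries with their conditional mean given the observed entries, and the M-step refits the mean and covariance. The conditional-covariance correction term of full EM is omitted for brevity, so this is an approximation, and all names are illustrative:

```python
import numpy as np

def em_impute(X, iters=50):
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])  # mean-fill start
    for _ in range(iters):
        mu = X.mean(axis=0)                                   # M-step: refit moments
        S = np.cov(X, rowvar=False) + 1e-9 * np.eye(X.shape[1])
        for i in range(X.shape[0]):                           # E-step: conditional means
            m = miss[i]
            if m.any():
                o = ~m
                # E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o)
                X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                    S[np.ix_(o, o)], X[i, o] - mu[o])
    return X

# y = 2x exactly, with y missing in one row: EM should recover it from the fit
data = np.column_stack([np.arange(10.0), 2.0 * np.arange(10.0)])
data[3, 1] = np.nan
completed = em_impute(data)
```

Unlike plain mean imputation, the conditional-mean fill exploits the correlation between variables, which is why EM tends to win on Aitchison-distance comparisons when variables are strongly related.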
Funding: the National Defense Research Foundation of China (No. 4010203010401).
Abstract: Aiming at the evaluation of the qualification rate in weaponry tests, this article first discusses how field test information is supplemented by a large amount of prior information. A fast Bayesian evaluation algorithm is then presented, based on a careful analysis of prior-information reliability and the second category of maximum likelihood. An example demonstrates that the algorithm presented in this article is better and more robust than the classical evaluation algorithm for success-or-failure tests and the normal Bayesian method, as it makes the best use of prior information.
Funding: the Undergraduate Education Highland Construction Project of Shanghai, the Key Course Construction Project of the Shanghai Education Committee (No. 20075302), and the Key Technology R&D Program of Shanghai Municipality (No. 08160510600).
Abstract: In order to obtain the life information of the vacuum fluorescent display (VFD) in a short time, a constant-stress accelerated life test (CSALT) model is established with increased filament temperature, and four constant-stress tests are conducted. The Weibull function is applied to describe the life distribution of the VFD, and maximum likelihood estimation (MLE) with its iterative flow chart is used to calculate the shape and scale parameters. Furthermore, the accelerated life equation is determined by the least squares method, the Kolmogorov-Smirnov test is performed to verify whether the VFD life follows the Weibull distribution, and self-developed software is employed to predict the average life and the reliable life. Statistical analysis of the data demonstrates that the test plans are feasible and versatile, that the VFD life follows the Weibull distribution, and that the VFD acceleration model satisfies the linear Arrhenius equation. The proposed method and the estimated life information of the VFD can provide significant guidance to its manufacturers and customers.
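The Weibull MLE has a standard iterative form: the shape parameter k solves a one-dimensional score equation (solved below by bisection, standing in for the paper's iterative flow chart), after which the scale parameter follows in closed form. The data here are simulated, not the VFD measurements:

```python
import numpy as np

def weibull_mle(x, iters=200):
    x = np.asarray(x, dtype=float)
    lx = np.log(x)
    def score(k):
        # Profile score for the shape: zero at the MLE, increasing in k
        xk = x ** k
        return np.sum(xk * lx) / np.sum(xk) - 1.0 / k - lx.mean()
    lo, hi = 1e-3, 100.0
    for _ in range(iters):                     # bisection on the monotone score
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if score(mid) > 0 else (mid, hi)
    k = 0.5 * (lo + hi)
    scale = np.mean(x ** k) ** (1.0 / k)       # closed-form scale given the shape
    return k, scale

rng = np.random.default_rng(1)
samples = 3.0 * rng.weibull(2.0, size=5000)    # true shape 2, scale 3
k_hat, scale_hat = weibull_mle(samples)
```

With censored accelerated-test data the likelihood gains survival terms and the score equation changes accordingly; the uncensored case above shows the shape of the iteration.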
Abstract: Using maximum likelihood classification, several landscape indexes were adopted to evaluate the landscape structure of the irrigated area of Hongsibao Town, and the landscape pattern and its dynamic change in 1989, 1999, 2003, and 2008 were analyzed based on landscape patches, landscape types, and transfer matrices. The results show that from 1989 to 2008 the landscape pattern of the irrigated area changed markedly: patch number, fragmentation, and dominance increased, evenness decreased, and landscape shapes became more regular. The primary landscape type was grassland in 1989 and sand in 2008, directly influenced by human activities.
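Maximum likelihood classification of this kind fits a Gaussian per land-cover class and assigns each pixel to the class with the highest log-likelihood. The sketch below uses synthetic two-band data standing in for the study's imagery; the class labels are illustrative:

```python
import numpy as np

def fit_classes(X, y):
    # Per-class mean and covariance estimated from training pixels
    return {c: (X[y == c].mean(axis=0),
                np.cov(X[y == c], rowvar=False) + 1e-6 * np.eye(X.shape[1]))
            for c in np.unique(y)}

def ml_classify(X, params):
    # Assign each pixel vector to the class maximizing the Gaussian log-likelihood
    def ll(x, mu, S):
        d = x - mu
        return -0.5 * (np.log(np.linalg.det(S)) + d @ np.linalg.solve(S, d))
    labels = list(params)
    return np.array([labels[int(np.argmax([ll(x, *params[c]) for c in labels]))]
                     for x in X])

# Two synthetic "classes" (e.g. grassland vs sand) in a 2-band feature space
rng = np.random.default_rng(0)
Xtr = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(5.0, 0.5, (50, 2))])
ytr = np.array([0] * 50 + [1] * 50)
params = fit_classes(Xtr, ytr)
pred = ml_classify(np.array([[0.1, -0.2], [4.8, 5.1]]), params)
```

Classifying each year's imagery this way yields the per-year class maps from which patch counts, fragmentation, and the transfer matrices between years are computed.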