To improve the classification performance of the kernel minimum squared error (KMSE) algorithm, an enhanced KMSE algorithm (EKMSE) is proposed. It redefines the regular objective function by introducing a novel class label definition, and the relative class label matrix can be adaptively adjusted to the kernel matrix. Compared with common methods, the new objective function enlarges the distance between different classes, which yields better recognition rates. In addition, an iterative parameter-searching technique is adopted to improve computational efficiency. Extensive experiments on the FERET and GT face databases illustrate the feasibility and efficiency of the proposed EKMSE. It outperforms the original MSE, KMSE, some KMSE improvements, and even sparse representation-based techniques in face recognition, such as collaborative representation classification (CRC).
The minimum squared error (MSE) algorithm is one of the classical pattern recognition and regression analysis methods, whose objective is to minimize the summed squared error between the output of a linear function and the desired output. In this paper, the MSE algorithm is modified by using kernel functions satisfying the Mercer condition together with a regularization technique, and nonlinear MSE algorithms based on kernels and a regularization term, that is, regularized kernel forms of the MSE algorithm, are proposed. Their objective functions include the summed squared error between the output of a kernel-based nonlinear function and the desired output, plus a proper regularization term. The regularization technique can handle ill-posed problems, reduce the solution space, and control generalization. Three squared regularization terms are utilized in this paper, and in accordance with the probabilistic interpretation of regularization terms, the differences among the three are given in detail. Synthetic and real data are used to analyze the algorithm performance.
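A regularized kernel MSE objective with the common squared-norm penalty reduces to solving a linear system in the kernel matrix, (K + λI)α = y. The sketch below illustrates that reduction; the RBF (Mercer) kernel, the synthetic data, and the `lam`/`gamma` values are assumptions for illustration, not the paper's choices.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel, which satisfies the Mercer condition.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_regularized_kmse(X, y, lam=1e-2, gamma=1.0):
    # Minimizing sum of squared errors plus lam * ||f||^2 in the
    # kernel-induced feature space gives (K + lam*I) alpha = y.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(40)
alpha = fit_regularized_kmse(X, y, lam=1e-3, gamma=5.0)
y_hat = predict(X, alpha, X, gamma=5.0)
```

Larger `lam` shrinks the solution space and trades training error for smoother, better-generalizing fits, which is the role the abstract assigns to the regularization term.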
In this paper, we propose a log-normal linear model whose errors are first-order correlated, and suggest a two-stage method for the efficient estimation of the conditional mean of the response variable on the original scale. We obtain two estimators which minimize the asymptotic mean squared error (MM) and the asymptotic bias (MB), respectively. Both estimators are very easy to implement, and simulation studies show that they perform better.
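To see why estimating the conditional mean on the original scale is delicate, the sketch below contrasts the naive back-transform exp(x'b) with the classical lognormal correction exp(x'b + s^2/2). It assumes i.i.d. normal errors on the log scale; the paper's first-order correlated errors and its MM/MB estimators are not reproduced.

```python
import numpy as np

# For log(Y) = b0 + b1*x + e with e ~ N(0, s^2) i.i.d. (independence is
# an assumption here; the paper treats correlated errors),
# E[Y | x] = exp(b0 + b1*x + s^2/2), not exp(b0 + b1*x).
rng = np.random.default_rng(1)
b0, b1, s = 1.0, 0.5, 0.8
x = rng.uniform(0, 2, 20000)
y = np.exp(b0 + b1 * x + s * rng.standard_normal(x.size))

# Fit the linear model on the log scale.
A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
resid = np.log(y) - A @ coef
s2_hat = resid @ resid / (len(x) - 2)

x0 = 1.0
naive = np.exp(coef[0] + coef[1] * x0)                 # biased downward
corrected = np.exp(coef[0] + coef[1] * x0 + s2_hat / 2)
true_mean = np.exp(b0 + b1 * x0 + s ** 2 / 2)
```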
In this paper, a regression method of estimation has been used to derive a mean estimate of the survey variable using simple random sampling without replacement in the presence of observational errors. Two covariates were used, and a case where the observational errors were in both the survey variable and the covariates was considered. Observational errors were included because data collected through surveys are often not free from errors that occur during observation, whether through over-reporting, under-reporting, memory failure by the respondents, or the use of imprecise data collection tools. The expression for the mean squared error (MSE) of the obtained estimator has been derived to the first degree of approximation. The results of a simulation study show that the derived modified regression mean estimator under observational errors is more efficient than the mean-per-unit estimator and some other existing estimators. The proposed estimator can therefore be used to estimate a finite population mean while accounting for observational errors that may occur during a study.
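A toy Monte Carlo comparison of the mean-per-unit estimator with the classical regression estimator under SRSWOR illustrates the efficiency gain being claimed. One error-free covariate with a known population mean is assumed here for brevity, whereas the paper uses two covariates and adds observational errors.

```python
import numpy as np

# Empirical MSE of two finite-population mean estimators under SRSWOR:
#   mean per unit:  ybar
#   regression:     ybar + b * (Xbar - xbar), b from the sample
rng = np.random.default_rng(2)
N, n, reps = 10_000, 100, 2000
x = rng.gamma(4.0, 1.0, N)
y = 2.0 + 1.5 * x + rng.standard_normal(N)   # y strongly related to x
Y_bar, X_bar = y.mean(), x.mean()            # population means

err_mpu, err_reg = [], []
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)    # SRSWOR draw
    ys, xs = y[idx], x[idx]
    b = np.cov(xs, ys)[0, 1] / np.var(xs, ddof=1)
    err_mpu.append(ys.mean() - Y_bar)
    err_reg.append(ys.mean() + b * (X_bar - xs.mean()) - Y_bar)

mse_mpu = float(np.mean(np.square(err_mpu)))
mse_reg = float(np.mean(np.square(err_reg)))
```

The stronger the y-x relationship, the larger the efficiency gain of the regression estimator over the sample mean.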
Adaptive digital filtering has traditionally been developed based on the minimum mean square error (MMSE) criterion and has found ever-increasing applications in communications. This paper presents an alternative adaptive filtering design based on the minimum symbol error rate (MSER) criterion for communication applications. It is shown that MSER filtering is smarter, as it effectively exploits the non-Gaussian distribution of the filter output. Consequently, it provides significant performance gain, in terms of smaller symbol error rate, over the MMSE approach. Adopting Parzen window (kernel density) estimation for the probability density function, a block-data gradient adaptive MSER algorithm is derived. A stochastic gradient adaptive MSER algorithm, referred to as least symbol error rate, is further developed for sample-by-sample adaptive implementation of MSER filtering. Two applications, involving single-user channel equalization and a beamforming-assisted receiver, are included to demonstrate the effectiveness and generality of the proposed adaptive MSER filtering approach.
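The MSER criterion requires a density estimate of the filter output, for which the abstract cites Parzen windowing. Below is a minimal Parzen-window (Gaussian-kernel) density estimator, checked on synthetic standard-normal samples; the bandwidth and the data are illustrative assumptions, not tied to the paper's equalizer or beamformer.

```python
import numpy as np

def parzen_pdf(samples, points, h):
    # Parzen-window density estimate with a Gaussian kernel of width h:
    # average of kernels centered at each observed sample.
    z = (points[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
samples = rng.standard_normal(5000)          # stand-in for filter outputs
grid = np.linspace(-3, 3, 61)
pdf_hat = parzen_pdf(samples, grid, h=0.3)   # smoothed pdf estimate
```

In the MSER setting this smoothed density is what makes the symbol-error-rate objective differentiable, so gradient adaptation can be applied to it.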
The uncertainty of observers' positions can significantly degrade source localization accuracy. This paper proposes a method of using self-location to calibrate the positions of observer stations in source localization, reducing the errors in the observer positions and improving the accuracy of the source localization. The relative distance measurements of two cooperative observers are used in a linear minimum mean square error (LMMSE) estimator. Computer simulations prove the feasibility and effectiveness of the proposed method. Under typical estimation errors of the observers' positions, the MSE of source localization with self-location calibration is significantly lower than that without calibration and approaches the Cramer-Rao lower bound (CRLB).
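The LMMSE estimator at the core of such calibration follows the textbook form x_hat = m_x + C_x H^T (H C_x H^T + C_v)^{-1} (y - H m_x). The sketch below applies it to a generic linear measurement model with assumed covariances; the paper's relative-distance geometry between cooperative observers is not reproduced.

```python
import numpy as np

# LMMSE estimation of x from y = H x + v, compared against plain
# inversion of the measurement model (no prior used).
rng = np.random.default_rng(4)
Cx = np.diag([4.0, 1.0])          # assumed prior covariance of x
Cv = 0.25 * np.eye(2)             # assumed measurement-noise covariance
H = np.array([[1.0, 0.5], [0.0, 1.0]])
m_x = np.zeros(2)

x = rng.multivariate_normal(m_x, Cx, size=4000)
v = rng.multivariate_normal(np.zeros(2), Cv, size=4000)
y = x @ H.T + v

G = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cv)   # LMMSE gain
x_hat = (y - m_x @ H.T) @ G.T + m_x

mse_lmmse = float(np.mean(np.sum((x_hat - x) ** 2, axis=1)))
mse_raw = float(np.mean(np.sum((y @ np.linalg.inv(H).T - x) ** 2, axis=1)))
```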
The performance of Adaptive Coding and Modulation (ACM) strongly depends on the retrieved Channel State Information (CSI), which can be obtained using channel estimation techniques relying on pilot symbol transmission. Earlier analyses of pilot-aided channel estimation methods for ACM systems were relatively scarce. In this paper, we investigate the performance of CSI prediction using the Minimum Mean Square Error (MMSE) channel estimator for an ACM system. To address the two drawbacks of MMSE estimation, high computational cost and oversimplified assumptions, we propose two Low-Complexity schemes, LC-MMSE and Recursive LC-MMSE (R-LC-MMSE). Computational complexity and Mean Square Error (MSE) are used to evaluate the efficiency of the proposed algorithms. Both analysis and numerical results show that LC-MMSE performs close to the well-known MMSE estimator with much lower complexity, and R-LC-MMSE extends the applicability of MMSE estimation to specific circumstances.
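The full-complexity baseline being approximated here is the classical MMSE filtering of a least-squares pilot estimate, h_mmse = R (R + sigma^2 I)^{-1} h_ls. A sketch under assumed unit-power orthogonal pilots, a known channel covariance R with an exponential power-delay profile, and synthetic Rayleigh taps; the LC-MMSE/R-LC-MMSE reductions themselves are not shown.

```python
import numpy as np

rng = np.random.default_rng(5)
L, sigma2, trials = 8, 0.1, 3000

# Exponentially decaying power-delay profile as the channel covariance.
R = np.diag(np.exp(-0.5 * np.arange(L)))
W = R @ np.linalg.inv(R + sigma2 * np.eye(L))     # MMSE filter

se_ls, se_mmse = 0.0, 0.0
for _ in range(trials):
    # Complex Gaussian taps with per-tap variance diag(R).
    h = np.sqrt(np.diag(R) / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    # LS pilot estimate = true channel + white noise of variance sigma2.
    h_ls = h + np.sqrt(sigma2 / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    se_ls += float(np.sum(np.abs(h_ls - h) ** 2))
    se_mmse += float(np.sum(np.abs(W @ h_ls - h) ** 2))

mse_ls, mse_mmse = se_ls / trials, se_mmse / trials
```

The matrix inverse in `W` is exactly the expensive step that low-complexity variants replace with cheaper recursions.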
Combining information entropy and wavelet analysis with a neural network, an adaptive control system and an adaptive control algorithm are presented for the machining process based on an extended entropy square error (EESE) criterion and a wavelet neural network (WNN). The extended entropy square error function is defined and its availability is proved theoretically. Replacing the mean square error criterion of the BP algorithm with the EESE criterion, the proposed system is applied to on-line control of the cutting force under variable cutting parameters by adaptively searching the wavelet basis function and self-adjusting the scaling parameter, the translation parameter of the wavelet, and the neural network weights. Simulation results show that the designed system has fast response without overshoot and is more effective than conventional neural network-based adaptive control of the machining process. The suggested algorithm can adaptively adjust the feed rate on-line until achieving a constant cutting force approaching the reference force under varied cutting conditions, thus improving machining efficiency and protecting the tool.
The turbo equalization approach is studied for an Orthogonal Frequency Division Multiplexing (OFDM) system with combined error control coding and linear precoding. While previous literature employed a linear precoder of small size for complexity reasons, this paper proposes to use a linear precoder of size larger than or equal to the maximum length of the equivalent discrete-time channel in order to achieve full frequency diversity and reduce the complexity of the error control coder/decoder. A low-complexity Linear Minimum Mean Square Error (LMMSE) turbo equalizer is also derived for the receiver. Simulation and performance analysis show that the performance of the proposed scheme over a frequency-selective fading channel reaches the matched filter bound; compared with the same coded OFDM without linear precoding, the proposed scheme shows a Signal-to-Noise Ratio (SNR) improvement of at least 6 dB at a bit error rate of 10^(-6) over a multipath channel with an exponential power delay profile. The convergence behavior of the proposed scheme with turbo equalization is also investigated for various types of linear precoders/transformers, various interleaver sizes, and error control coders of various constraint lengths.
Compared with the rank reduction estimator (RARE) based on second-order statistics (SOS-RARE), the RARE based on fourth-order cumulants (FOC-RARE) can handle more sources and restrain the negative impacts of Gaussian colored noise. However, the unexpected modeling errors that appear in practice are known to significantly degrade the performance of the RARE. Therefore, the direction-of-arrival (DOA) estimation performance of the FOC-RARE is quantitatively derived. The explicit expression for the direction-finding (DF) error is derived via first-order perturbation analysis, and the theoretical formula for the mean square error (MSE) is then given. Simulation results validate the theoretical analysis and reveal that the FOC-RARE is more robust to unexpected modeling errors than the SOS-RARE.
In regression, despite both being aimed at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross-Validation (GCV) selection criteria are usually derived from two quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it, valid for any linear model (LM) fitter, under either random or non-random designs. Specializing these MSPE expressions, we derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full column rank design matrix; and Ordinary and Generalized Ridge regression, the latter embedding smoothing spline fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly obtain a class of MSPE estimates coinciding with the classical GCV formula for those same LM fitters.
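For plain OLS the two criteria have well-known computable forms, FPE = (RSS/n)(n+p)/(n-p) and GCV = (RSS/n)/(1-p/n)^2, which nearly coincide when p is much smaller than n. A quick numerical check on synthetic data, using the standard textbook formulas rather than the paper's general LM-fitter derivation:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta = np.arange(1, p + 1, dtype=float)
y = X @ beta + rng.standard_normal(n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(np.sum((y - X @ beta_hat) ** 2))

# Both criteria inflate the optimistic training MSE (RSS/n) to
# approximate the out-of-sample MSPE.
fpe = (rss / n) * (n + p) / (n - p)
gcv = (rss / n) / (1 - p / n) ** 2
```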
Industrial data often contain anomalies caused by technical failures and human factors. Existing constraint-based repair methods produce repair errors when the constraint thresholds are set too loosely or too strictly, while statistics-based methods, because of their smoothing repair mechanism, repair anomalies at distant time steps with low accuracy. To address these problems, a time-series data repair method is proposed that combines reward-based minimal iterative repair with an improved WGAN hybrid model. First, in the preprocessing stage, anomalous data are retained and annotated, so as to fully mine the feature constraints between anomalous and true values. Second, a nearest-neighbor parameter clipping rule is proposed in the noise module to correct the noise vector generated by the minimal iterative repair formula. The corrected vector is passed to the generator of the distribution simulation module, where a dynamic temporal attention network layer is designed to extract time-series feature weights; combined in series with gated recurrent units, it captures feature dependencies across different step lengths, and a recursive multi-step prediction principle is introduced to further improve the model's expressive power. In the discriminator, an Abnormal and Truth reward mechanism and a Weighted Mean Square Error loss function are designed to jointly back-optimize the detail and quality of the data repaired by the generator. Finally, experimental results on public and real-world datasets show that the repair accuracy and model stability of this method are significantly better than those of existing methods.
This research focuses on improving predictive accuracy in the financial sector through the exploration of machine learning algorithms for stock price prediction. It follows an organized process combining Agile Scrum and the Obtain, Scrub, Explore, Model, and iNterpret (OSEMN) methodology. Six machine learning models, namely Linear Forecast, Naive Forecast, Simple Moving Average with weekly window (SMA 5), Simple Moving Average with monthly window (SMA 20), Autoregressive Integrated Moving Average (ARIMA), and Long Short-Term Memory (LSTM), are compared and evaluated through Mean Absolute Error (MAE), with the LSTM model performing best, showcasing its potential for practical financial applications. A Django web application, "Predict It", is developed to implement the LSTM model. Ethical concerns related to predictive modeling in finance are addressed. Data quality, algorithm choice, feature engineering, and preprocessing techniques are emphasized for better model performance. The research acknowledges limitations and suggests future research directions, aiming to equip investors and financial professionals with reliable predictive models for dynamic markets.
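The walk-forward MAE evaluation used to rank such forecasters can be sketched as follows. Synthetic random-walk prices and only the Naive and SMA 5 baselines are assumed here, not the study's data or its ARIMA/LSTM models.

```python
import numpy as np

rng = np.random.default_rng(7)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, 300))   # synthetic "prices"
train, test = prices[:250], prices[250:]

history = list(train)
err_naive, err_sma5 = [], []
for actual in test:
    err_naive.append(abs(history[-1] - actual))           # naive: last value
    err_sma5.append(abs(np.mean(history[-5:]) - actual))  # SMA with window 5
    history.append(actual)                                # walk forward

mae_naive = float(np.mean(err_naive))
mae_sma5 = float(np.mean(err_sma5))
```

Each model forecasts one step ahead and then sees the realized value, so the comparison reflects out-of-sample accuracy rather than in-sample fit.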
Due to the rapid development of the logistics industry, transportation costs are also increasing, and finding trends in transportation activities will positively impact investment in transportation infrastructure. There is limited literature and data-driven analysis about trends in transportation modes. This thesis delves into the operational challenges of vehicle performance management within logistics clusters, a critical aspect of efficient supply chain operations. It aims to address the issues faced by logistics organizations in optimizing their vehicle fleets' performance, essential for seamless logistics operations. The study's core design involves the development of a regression-based predictive logistics model focused on forecasting and evaluating vehicle performance in logistics clusters. It encompasses a comprehensive literature review, research methodology, data sources, variables, feature engineering, and model training and evaluation; an F-test analysis was done to identify and verify the relationships between the attributes and the target variable. The findings highlight the model's efficacy, with a low mean squared error (MSE) value of 3.42, indicating its accuracy in predicting performance metrics. The high R-squared (R2) score of 0.921 emphasizes its ability to capture relationships between input characteristics and performance metrics. The model's training and testing accuracy further attest to its reliability and generalization capabilities. In interpretation, this research underscores the practical significance of the findings. The regression-based model provides a practical solution for the logistics industry, enabling informed decisions regarding resource allocation, maintenance planning, and delivery route optimization. This contributes to enhanced overall logistics performance and customer service. By addressing performance gaps and embracing modern logistics technologies, the study supports the ongoing evolution of vehicle performance management in logistics clusters, fostering increased competitiveness and sustainability in the logistics sector.
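The two headline metrics above, MSE and R-squared, are computed as below; the performance scores are hypothetical placeholders, not the study's data.

```python
import numpy as np

def mse_and_r2(y_true, y_pred):
    # MSE penalizes squared deviations; R^2 = 1 - RSS/TSS measures the
    # fraction of variance in y_true explained by the predictions.
    resid = y_true - y_pred
    rss = np.sum(resid ** 2)
    tss = np.sum((y_true - y_true.mean()) ** 2)
    return rss / len(y_true), 1 - rss / tss

# Hypothetical fleet data: observed vs. predicted performance scores.
y_true = np.array([10.0, 12.5, 9.0, 14.0, 11.0, 13.5])
y_pred = np.array([10.4, 12.0, 9.5, 13.4, 11.2, 13.0])

mse, r2 = mse_and_r2(y_true, y_pred)
```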
The purpose of this research work is to investigate the numerical solutions of the fractional dengue transmission model (FDTM) in the presence of Wolbachia using the stochastic-based Levenberg-Marquardt neural network (LM-NN) technique. The FDTM consists of 12 compartments. The human population is divided into four compartments: susceptible humans (S_(h)), exposed humans (E_(h)), infectious humans (I_(h)), and recovered humans (R_(h)). The Wolbachia-infected and Wolbachia-uninfected mosquito populations are each divided into four compartments: aquatic (eggs, larvae, pupae), susceptible, exposed, and infectious. We investigated three different cases of vertical transmission probability (η): when Wolbachia-free mosquitoes persist only (η=0.6), when both types of mosquitoes persist (η=0.8), and when Wolbachia-carrying mosquitoes persist only (η=1). The objective of this study is to investigate the effectiveness of Wolbachia in reducing dengue, presenting numerical results obtained using the stochastic LM-NN approach with 10 hidden layers of neurons for three different cases of the fractional-order derivative (α=0.4, 0.6, 0.8). The LM-NN approach includes training, validation, and testing procedures to minimize the mean square error (MSE) against a reference dataset obtained by solving the model with the Adams-Bashforth-Moulton (ABM) method; the data are split 80% for training, 10% for validation, and 10% for testing. A comprehensive investigation of the competence, precision, capacity, and efficiency of the suggested LM-NN approach is provided through the MSE values, state transition findings, and regression analysis. The effectiveness of the LM-NN approach for solving the FDTM is demonstrated by the overlap of its findings with trustworthy measures, achieving a precision of up to 10^(-4).
The large number of antennas and higher bandwidth usage in massive multiple-input multiple-output (MIMO) systems place an immense burden on the receiver in terms of power consumption. The power consumption of the receiver's radio-frequency (RF) circuits can be significantly reduced by applying analog-to-digital converters (ADCs) of low resolution. In this paper we investigate the bandwidth efficiency (BE) of massive MIMO with perfect channel state information (CSI) under Rician fading when low-resolution ADCs are applied. We start our analysis by deriving the additive quantization noise model, which helps in understanding the effect of ADC resolution on BE while keeping the power constraint at the receiver. We also investigate in depth the effects of higher bit rates and of the number of BS antennas on the bandwidth efficiency of the system. We show that good bandwidth efficiency can be achieved even with low-resolution ADCs by using the regularized zero-forcing (RZF) combining algorithm. We also provide a generic analysis of energy efficiency (EE) for different bit options, calculating the energy efficiency from the achievable rates. We emphasize that satisfactory BE can be achieved even with low-resolution ADCs/DACs in massive MIMO.
Linear minimum mean square error (MMSE) detection has been shown to achieve near-optimal performance for massive multiple-input multiple-output (MIMO) systems, but it inevitably involves complicated matrix inversion, which entails high complexity. To avoid exact matrix inversion, a considerable number of implicit and explicit approximate matrix inversion based detection methods have been proposed. By combining the advantages of both explicit and implicit matrix inversion, this paper introduces a new low-complexity signal detection algorithm. First, the relationship between implicit and explicit techniques is analyzed. Then, an enhanced Newton iteration method is introduced to realize approximate MMSE detection for massive MIMO uplink systems. The proposed improved Newton iteration significantly reduces the complexity of the conventional Newton iteration; however, its complexity remains high for later iterations, so it is applied only for the first two. For subsequent iterations, we propose a novel trace iterative method (TIM) based low-complexity algorithm, which has significantly lower complexity than higher-order Newton iterations. Convergence guarantees of the proposed detector are also provided. Numerical simulations verify that the proposed detector exhibits significant performance enhancement over recently reported iterative detectors and achieves close-to-MMSE performance while retaining its low-complexity advantage for systems with hundreds of antennas.
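The conventional Newton (Newton-Schulz) iteration that such detectors build on, X_{k+1} = X_k (2I - A X_k), can be sketched directly. The paper's enhanced iteration and trace iterative method (TIM) are not reproduced; the Gram-like matrix and the classical scaled-transpose initialization below are generic assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 6
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)   # SPD, like the MMSE matrix H^H H + sigma^2 I

# Classical safe initialization: X0 = A^T / (||A||_1 * ||A||_inf),
# which guarantees convergence of the Newton-Schulz iteration.
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(12):
    # Each step roughly squares the residual ||I - A X|| (quadratic
    # convergence), at the cost of two matrix products per iteration.
    X = X @ (2 * np.eye(n) - A @ X)

err = float(np.linalg.norm(A @ X - np.eye(n)))
```

In detection, each iterate X multiplies the matched-filter output, so a few iterations replace the exact inverse at much lower cost per symbol vector.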
In this paper, we define a new class of biased linear estimators of the vector of unknown parameters in the deficient-rank linear model, based on the spectral decomposition expression of the best linear minimum bias estimator. Some important properties are discussed. By appropriate choices of bias parameters, we construct many interesting and useful biased linear estimators, which extend the ordinary biased linear estimators of the full-rank linear model to the deficient-rank linear model. Finally, we give a numerical example from geodetic adjustment.
This study explores the influence of infill patterns on machine acceleration prediction in three-dimensional (3D) printing, particularly focusing on extrusion technology. Our primary objective was to develop a long short-term memory (LSTM) network capable of assessing this impact. We conducted an extensive analysis involving 12 distinct infill patterns, collecting time-series data to examine their effects on the acceleration of the printer's bed. The LSTM network was trained using acceleration data from the adaptive cubic infill pattern, while the Archimedean chords infill pattern provided data for evaluating the network's prediction accuracy. Offline time-series acceleration data served as the training and testing datasets for the LSTM model. Specifically, the LSTM model was devised to predict the acceleration of a fused deposition modeling (FDM) printer using data from the adaptive cubic infill pattern. Rigorous testing yielded a root mean square error (RMSE) of 0.007144, reflecting the model's precision. Further refinement and testing of the LSTM model with acceleration data from the Archimedean chords infill pattern resulted in an RMSE of 0.007328. Notably, the developed LSTM model demonstrated superior performance compared to an optimized recurrent neural network (RNN) in predicting machine acceleration data. The empirical findings highlight that the adaptive cubic infill pattern considerably influences the dimensional accuracy of parts printed using FDM technology.
In recent years there has been increasing interest in developing spatial statistical models for data sets that are seemingly spatially independent. This lack of spatial structure makes it difficult, if not impossible, to use optimal predictors such as ordinary kriging for modeling the spatial variability in the data. In many instances, the data still contain a wealth of information that could be used to gain flexibility and precision in estimation. In this paper we propose using a combination of regression analysis, to describe the large-scale spatial variability in a set of survey data, and a tree-based stratification design, to enhance the estimation of the small-scale spatial variability. With this approach, sample units (i.e., pixels of a satellite image) are classified with respect to predictions of error attributes into homogeneous classes, and the classes are then used as strata in the stratified analysis. Independent variables used as a basis of stratification included terrain data and satellite imagery. A decision rule was used to identify a tree size that minimized the error in estimating the variance of the mean response and prediction uncertainties at new spatial locations. This approach was applied to a set of n=937 forested plots from a state-wide inventory conducted in 2006 in the Mexican state of Jalisco. The final models accounted for 62% to 82% of the variability observed in canopy closure (%), basal area (m2·ha-1), cubic volume (m3·ha-1), and biomass (t·ha-1) on the sample plots. The spatial models provided unbiased estimates, and when averaged over all sample units in the population, estimates of forest structure were very close to those obtained using classical estimates based on the sampling strategy used in the state-wide inventory. The spatial models also provided unbiased estimates of model variances, leading to confidence and prediction coverage rates close to the 0.95 nominal rate.
Funding: The Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD); the National Natural Science Foundation of China (No. 61572258, 61103141, 51405241); the Natural Science Foundation of Jiangsu Province (No. BK20151530); Overseas Training Programs for Outstanding Young Scholars of Universities in Jiangsu Province.
Funding: supported by the National Natural Science Foundation of China (No. 60275007 and No. 698885004).
Funding: The NSF (11271155) of China; the Research Fund (20070183023) for the Doctoral Program of Higher Education.
Funding: Supported by the Fundamental Research Funds for the Central Universities (ZYGX2009J016).
Abstract: Uncertainty in observers' positions can significantly degrade source localization accuracy. This paper proposes a method that uses self-location to calibrate the positions of observer stations in source localization, reducing observer position errors and improving localization accuracy. The relative distance measurements between the two cooperating observers are used in a linear minimum mean square error (LMMSE) estimator. Computer simulation results demonstrate the feasibility and effectiveness of the proposed method. Under typical observer position estimation errors, the MSE of source localization with self-location calibration is significantly lower than without it and approaches the Cramer-Rao lower bound (CRLB).
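A minimal sketch of the LMMSE estimator at the core of the calibration, assuming a simple additive-noise model with hypothetical prior and noise covariances (the paper's actual measurement model involves relative distances between observers):

```python
import numpy as np

# LMMSE estimate of x from a noisy measurement y = x + v,
# with prior x ~ (mu, P) and noise v ~ (0, R)
def lmmse(y, mu, P, R):
    K = P @ np.linalg.inv(P + R)  # LMMSE gain matrix
    return mu + K @ (y - mu)      # shrink the measurement toward the prior

mu = np.array([0.0, 0.0])
P = np.diag([4.0, 4.0])   # prior uncertainty of the observer position
R = np.diag([1.0, 1.0])   # measurement noise covariance
y = np.array([2.0, -1.0])
x_hat = lmmse(y, mu, P, R)
```

With these values the gain is 0.8 on each axis, so the estimate pulls the measurement 20% of the way back toward the prior mean, trading measurement noise against prior uncertainty.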
Funding: Supported by the 2011 China Aerospace Science and Technology Foundation and the Certain Ministry Foundation under Grant No. 20212HK03010.
Abstract: The performance of Adaptive Coding and Modulation (ACM) strongly depends on the retrieved Channel State Information (CSI), which can be obtained using channel estimation techniques that rely on pilot symbol transmission. Earlier analyses of pilot-aided channel estimation methods for ACM systems are relatively scarce. In this paper, we investigate the performance of CSI prediction using the Minimum Mean Square Error (MMSE) channel estimator for an ACM system. To address the two drawbacks of MMSE estimation, namely its high computational cost and its oversimplified assumptions, we propose two low-complexity schemes, LC-MMSE and Recursion LC-MMSE (R-LC-MMSE). Computational complexity and Mean Square Error (MSE) are used to evaluate the efficiency of the proposed algorithms. Both analysis and numerical results show that LC-MMSE performs close to the well-known MMSE estimator with much lower complexity, and that R-LC-MMSE extends the applicability of MMSE estimation to specific circumstances.
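The baseline MMSE (Wiener) channel estimator that the low-complexity schemes approximate can be sketched as follows; the tap count, exponential power delay profile, noise level, and unit pilot power are illustrative assumptions:

```python
import numpy as np

# Pilot-aided MMSE channel estimation sketch (unit pilot power assumed):
# h_mmse = R_hh (R_hh + sigma^2 I)^{-1} h_ls
rng = np.random.default_rng(1)
L, trials, noise_var = 8, 200, 0.1
R_hh = np.diag(np.exp(-0.5 * np.arange(L)))             # exponential PDP
W = R_hh @ np.linalg.inv(R_hh + noise_var * np.eye(L))  # Wiener filter

mse_ls = mse_mmse = 0.0
for _ in range(trials):
    # draw a Rayleigh channel matching the assumed profile, then a noisy LS estimate
    h = np.sqrt(np.diag(R_hh) / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    h_ls = h + np.sqrt(noise_var / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    mse_ls += np.mean(np.abs(h_ls - h) ** 2) / trials
    mse_mmse += np.mean(np.abs(W @ h_ls - h) ** 2) / trials
```

Averaged over realizations, the Wiener-filtered estimate has lower MSE than the raw least-squares estimate, since weak taps are shrunk heavily toward zero; the matrix inversion in `W` is precisely the cost that LC-MMSE-style schemes try to avoid.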
Funding: Sponsored by the Natural Science Foundation of Guangdong Province (Grant No. 06025546) and the National Natural Science Foundation of China (Grant No. 50305005).
Abstract: Combining information entropy and wavelet analysis with a neural network, an adaptive control system and an adaptive control algorithm are presented for the machining process, based on an extended entropy square error (EESE) criterion and a wavelet neural network (WNN). The extended entropy square error function is defined and its validity is proved theoretically. Replacing the mean square error criterion of the BP algorithm with the EESE criterion, the proposed system is applied to on-line control of the cutting force under variable cutting parameters by adaptively searching the wavelet basis function and self-adjusting the scaling parameter, the translation parameter of the wavelet, and the neural network weights. Simulation results show that the designed system responds quickly, does not overshoot, and is more effective than conventional neural-network-based adaptive control of the machining process. The suggested algorithm can adaptively adjust the feed rate on-line until a constant cutting force approaching the reference force is achieved under varied cutting conditions, thus improving machining efficiency and protecting the tool.
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (No. 2001AA123014).
Abstract: The turbo equalization approach is studied for an Orthogonal Frequency Division Multiplexing (OFDM) system with combined error control coding and linear precoding. While previous literature employed linear precoders of small size for complexity reasons, this paper proposes to use a linear precoder of size larger than or equal to the maximum length of the equivalent discrete-time channel, in order to achieve full frequency diversity and reduce the complexity of the error control coder/decoder. A low-complexity Linear Minimum Mean Square Error (LMMSE) turbo equalizer is also derived for the receiver. Simulation and performance analysis show that the performance of the proposed scheme over a frequency-selective fading channel reaches the matched filter bound; compared with the same coded OFDM without linear precoding, the proposed scheme shows a Signal-to-Noise Ratio (SNR) improvement of at least 6 dB at a bit error rate of 10^(-6) over a multipath channel with an exponential power delay profile. The convergence behavior of the proposed scheme with turbo equalization is also investigated for various types of linear precoders/transformers, various interleaver sizes, and error control coders of various constraint lengths.
Funding: Project (61201381) supported by the National Natural Science Foundation of China; Project (YP12JJ202057) supported by the Future Development Foundation of Zhengzhou Information Science and Technology College, China.
Abstract: Compared with the rank reduction estimator (RARE) based on second-order statistics (called SOS-RARE), the RARE based on fourth-order cumulants (referred to as FOC-RARE) can handle more sources and restrain the negative impact of Gaussian colored noise. However, the unexpected modeling errors that appear in practice are known to significantly degrade the performance of the RARE. Therefore, the direction-of-arrival (DOA) estimation performance of the FOC-RARE is quantitatively derived. The explicit expression for the direction-finding (DF) error is derived via first-order perturbation analysis, and the theoretical formula for the mean square error (MSE) is then given. Simulation results validate the theoretical analysis and reveal that the FOC-RARE is more robust to unexpected modeling errors than the SOS-RARE.
Abstract: In regression, despite both being aimed at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross Validation (GCV) selection criteria are usually derived from two quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it, valid for any linear model (LM) fitter, under either random or non-random designs. Specializing these MSPE expressions, we derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full column rank design matrix; and Ordinary and Generalized Ridge regression, the latter embedding smoothing-spline fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE, which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly obtain a class of MSPE estimates coinciding with the classical GCV formula for the same LM fitters.
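For the OLS case, the FPE and GCV estimates of the MSPE reduce to simple closed formulas in the residual sum of squares; a sketch with simulated data (the design and coefficients are hypothetical) shows how close the two criteria are when n is much larger than p:

```python
import numpy as np

# FPE and GCV for an OLS fit: both inflate the in-sample error RSS/n
# to account for the p fitted parameters.
rng = np.random.default_rng(2)
n, p = 60, 4
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.standard_normal(n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
rss = np.sum((y - X @ beta) ** 2)

fpe = (rss / n) * (n + p) / (n - p)   # Akaike's Final Prediction Error
gcv = (rss / n) / (1.0 - p / n) ** 2  # Generalized Cross Validation (tr(H) = p for OLS)
```

The ratio FPE/GCV equals 1 - p^2/n^2, so the two criteria agree to second order in p/n, which is one way to see why they coincide in the limit.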
Abstract: Industrial data often contain anomalies caused by technical faults and human factors. Existing constraint-based repair methods make repair errors when the constraint thresholds are set too loosely or too strictly, while statistics-based methods, owing to their smoothing repair mechanism, repair outliers at distant time steps with low accuracy. To address these problems, a time-series data repair method is proposed that combines reward-driven minimal iterative repair with an improved WGAN hybrid model. First, in the preprocessing stage, anomalous data are retained and annotated, so that the feature constraints between outliers and true values can be fully mined. Second, a nearest-neighbor parameter clipping rule is proposed in the noise module to correct the noise vectors generated by the minimal iterative repair formula, which are then passed to the generator of the distribution-simulation module. A dynamic temporal attention network layer is designed to extract time-series feature weights; it is combined in series with gated recurrent units to capture feature dependencies at different step lengths, and the principle of recursive multi-step prediction is introduced to jointly improve the model's expressive power. In the discriminator, an Abnormal and Truth reward mechanism and a Weighted Mean Square Error loss function are designed to jointly back-optimize the detail and quality of the data repaired by the generator. Finally, experimental results on public and real-world datasets show that the repair accuracy and model stability of the proposed method are significantly better than those of existing methods.
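The Weighted Mean Square Error loss mentioned above can be sketched as follows; the series values and weights are hypothetical, and the paper's actual per-step weighting scheme may differ:

```python
import numpy as np

# Weighted MSE: per-step weights let the loss emphasize the repair quality
# at labeled anomalous time steps over ordinary ones
def weighted_mse(y_true, y_pred, weights):
    w = np.asarray(weights, dtype=float)
    err = np.asarray(y_true) - np.asarray(y_pred)
    return np.sum(w * err ** 2) / np.sum(w)

y_true = [1.0, 2.0, 8.0, 4.0]   # reference series
y_pred = [1.1, 2.2, 6.0, 4.1]   # repaired series
w = [1.0, 1.0, 3.0, 1.0]        # larger weight on the anomalous third step
loss = weighted_mse(y_true, y_pred, w)
```

Here the large residual at the up-weighted third step dominates the loss, which is the intended behavior: the generator is penalized most where the repair matters most.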
Abstract: This research focuses on improving predictive accuracy in the financial sector through the exploration of machine learning algorithms for stock price prediction. The research follows an organized process combining Agile Scrum and the Obtain, Scrub, Explore, Model, and iNterpret (OSEMN) methodology. Six machine learning models, namely Linear Forecast, Naive Forecast, Simple Moving Average with weekly window (SMA 5), Simple Moving Average with monthly window (SMA 20), Autoregressive Integrated Moving Average (ARIMA), and Long Short-Term Memory (LSTM), are compared and evaluated by Mean Absolute Error (MAE), with the LSTM model performing best, showcasing its potential for practical financial applications. A Django web application, "Predict It", is developed to implement the LSTM model. Ethical concerns related to predictive modeling in finance are addressed. Data quality, algorithm choice, feature engineering, and preprocessing techniques are emphasized for better model performance. The research acknowledges its limitations and suggests future research directions, aiming to equip investors and financial professionals with reliable predictive models for dynamic markets.
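Two of the simpler baselines, the Naive Forecast and SMA 5, can be compared by MAE in the way the study describes; the price series below is illustrative, not the study's data:

```python
import numpy as np

# One-step-ahead baseline forecasts scored by Mean Absolute Error
prices = np.array([100.0, 101.0, 103.0, 102.0, 105.0, 107.0, 106.0, 109.0])

naive = prices[:-1]  # Naive Forecast: tomorrow's price = today's price
# SMA 5: forecast = mean of the previous 5 observations
sma5 = np.convolve(prices, np.ones(5) / 5, mode="valid")[:-1]

mae_naive = np.mean(np.abs(prices[1:] - naive))
mae_sma5 = np.mean(np.abs(prices[5:] - sma5))
```

On a trending series like this one, the moving average lags the trend and the naive forecast wins; which baseline an LSTM must beat therefore depends heavily on the series' dynamics.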
Abstract: Due to the rapid development of the logistics industry, transportation costs are also increasing, and identifying trends in transportation activities will positively impact investment in transportation infrastructure. There is limited literature and data-driven analysis on trends in transportation modes. This thesis examines the operational challenges of vehicle performance management within logistics clusters, a critical aspect of efficient supply chain operations. It aims to address the issues faced by logistics organizations in optimizing their vehicle fleets' performance, which is essential for seamless logistics operations. The study's core design involves the development of a regression-based predictive logistics model focused on forecasting and evaluating vehicle performance in logistics clusters. It encompasses a comprehensive literature review, research methodology, data sources, variables, feature engineering, and model training and evaluation; F-test analysis was performed to identify and verify the relationships between attributes and the target variable. The findings highlight the model's efficacy, with a low mean squared error (MSE) of 3.42, indicating its accuracy in predicting performance metrics. The high R-squared (R2) score of 0.921 emphasizes its ability to capture the relationships between input characteristics and performance metrics. The model's training and testing accuracy further attest to its reliability and generalization capability. In interpretation, this research underscores the practical significance of the findings. The regression-based model provides a practical solution for the logistics industry, enabling informed decisions regarding resource allocation, maintenance planning, and delivery route optimization, contributing to enhanced overall logistics performance and customer service. By addressing performance gaps and embracing modern logistics technologies, the study supports the ongoing evolution of vehicle performance management in logistics clusters, fostering increased competitiveness and sustainability in the logistics sector.
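The two fit metrics reported above, MSE and R-squared, are computed as follows; the sample values are illustrative, not the thesis's data:

```python
import numpy as np

# Mean squared error: average squared residual
def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

# R^2: fraction of the target's variance explained by the model
def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([10.0, 12.0, 9.0, 15.0, 11.0])   # observed performance metric
y_pred = np.array([10.5, 11.5, 9.5, 14.0, 11.0])   # model predictions
```

Note that MSE is scale-dependent (a "low" 3.42 only means something relative to the target's units and spread), whereas R^2 is dimensionless, which is why the two are usually reported together.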
Abstract: The purpose of this research is to investigate numerical solutions of the fractional dengue transmission model (FDTM) in the presence of Wolbachia using the stochastic Levenberg-Marquardt neural network (LM-NN) technique. The FDTM consists of 12 compartments. The human population is divided into four compartments: susceptible humans (S_(h)), exposed humans (E_(h)), infectious humans (I_(h)), and recovered humans (R_(h)). The Wolbachia-infected and Wolbachia-uninfected mosquito populations are each divided into four compartments: aquatic (eggs, larvae, pupae), susceptible, exposed, and infectious. We investigated three cases of the vertical transmission probability (η): when only Wolbachia-free mosquitoes persist (η = 0.6), when both types of mosquitoes persist (η = 0.8), and when only Wolbachia-carrying mosquitoes persist (η = 1). The objective of this study is to investigate the effectiveness of Wolbachia in reducing dengue, presenting numerical results obtained with the stochastic LM-NN approach using 10 hidden layers of neurons for three cases of the fractional-order derivative (α = 0.4, 0.6, 0.8). The LM-NN approach includes training, validation, and testing procedures to minimize the mean square error (MSE) against a reference dataset obtained by solving the model with the Adams-Bashforth-Moulton (ABM) method; the data are split 80% for training, 10% for validation, and 10% for testing. A comprehensive investigation of the MSE, state-transition findings, and regression analysis demonstrates the competence, precision, capacity, and efficiency of the suggested LM-NN approach. The effectiveness of the LM-NN approach for solving the FDTM is demonstrated by the overlap of its findings with trustworthy measures, achieving a precision of up to 10^(-4).
Abstract: The large number of antennas and higher bandwidth usage in massive multiple-input multiple-output (MIMO) systems place an immense burden on the receiver in terms of power consumption. The power consumed by the receiver's radio frequency (RF) circuits can be significantly reduced by applying analog-to-digital converters (ADCs) of low resolution. In this paper we investigate the bandwidth efficiency (BE) of massive MIMO with perfect channel state information (CSI) using low-resolution ADCs under Rician fading. We begin our analysis by deriving the additive quantization noise model, which helps explain the effect of ADC resolution on BE under the power constraint at the receiver. We also investigate in depth the effects of higher bit rates and of the number of BS antennas on BE. We show that good BE can be achieved even with low-resolution ADCs by using the regularized zero-forcing (RZF) combining algorithm. We also provide a generic analysis of energy efficiency (EE) for different bit choices, computing the EE from the achievable rates. We conclude that satisfactory BE can be achieved even with low-resolution ADCs/DACs in massive MIMO.
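The additive quantization noise model (AQNM) can be sketched as follows; the distortion-factor table is the standard one for a scalar quantizer with b ≤ 5 bits, and the effective-SINR expression is a common single-antenna simplification rather than the paper's exact Rician-fading analysis:

```python
import numpy as np

# AQNM: the b-bit ADC output is modeled as y = alpha * x + q,
# with alpha = 1 - rho(b) and q the (uncorrelated) quantization noise.
def rho(b):
    # distortion factor: exact table for b <= 5, asymptotic formula beyond
    table = {1: 0.3634, 2: 0.1175, 3: 0.03454, 4: 0.009497, 5: 0.002499}
    return table.get(b, (np.pi * np.sqrt(3) / 2) * 2.0 ** (-2 * b))

def rate_per_antenna(snr, b):
    r = rho(b)
    sinr = (1.0 - r) * snr / (r * snr + 1.0)  # quantization noise as extra interference
    return np.log2(1.0 + sinr)
```

As the resolution b grows, rho(b) vanishes and the rate approaches the unquantized log2(1 + SNR), while at 1-3 bits the quantization penalty is visible but bounded, which is the qualitative basis for the paper's low-resolution BE claims.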
Funding: Supported by the National Natural Science Foundation of China (62371225, 62371227).
Abstract: Linear minimum mean square error (MMSE) detection has been shown to achieve near-optimal performance for massive multiple-input multiple-output (MIMO) systems, but it inevitably involves a complicated matrix inversion that entails high complexity. To avoid exact matrix inversion, a considerable number of implicit and explicit approximate matrix inversion based detection methods have been proposed. By combining the advantages of both explicit and implicit matrix inversion, this paper introduces a new low-complexity signal detection algorithm. First, the relationship between implicit and explicit techniques is analyzed. Then, an enhanced Newton iteration method is introduced to realize approximate MMSE detection for massive MIMO uplink systems. The proposed improved Newton iteration significantly reduces the complexity of the conventional Newton iteration. However, its complexity is still high for later iterations, so it is applied only to the first two. For subsequent iterations, we propose a novel trace iterative method (TIM) based low-complexity algorithm, which has significantly lower complexity than higher Newton iterations. Convergence guarantees for the proposed detector are also provided. Numerical simulations verify that the proposed detector exhibits significant performance enhancement over recently reported iterative detectors and achieves close-to-MMSE performance while retaining the low-complexity advantage for systems with hundreds of antennas.
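The conventional Newton iteration for approximate inversion of the MMSE filtering matrix, which the paper's enhanced scheme builds on, can be sketched as follows; the system dimensions, channel normalization, and diagonal initialization are illustrative assumptions:

```python
import numpy as np

# Newton iteration toward A^{-1}, where A = H^H H + sigma^2 I is the MMSE
# filtering matrix of an uplink with M receive antennas and K users:
#   X_{k+1} = X_k (2I - A X_k)   (quadratic convergence once ||I - X A|| < 1)
rng = np.random.default_rng(3)
M, K, sigma2 = 64, 8, 0.1
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2 * M)
A = H.conj().T @ H + sigma2 * np.eye(K)

X = np.diag(1.0 / np.real(np.diag(A)))   # cheap diagonal initialization
for _ in range(5):
    X = X @ (2 * np.eye(K) - A @ X)

err = np.linalg.norm(X @ A - np.eye(K))  # residual of the approximate inverse
```

For M much larger than K, A is strongly diagonally dominant, so the diagonal start already lies in the convergence region; each iteration roughly squares the residual, but also costs two K-by-K matrix products, which is the complexity the paper's TIM step avoids in later iterations.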
Abstract: In this paper, we define a new class of biased linear estimators of the vector of unknown parameters in the deficient-rank linear model, based on the spectral decomposition expression of the best linear minimum bias estimator. Some important properties are discussed. By appropriate choices of the bias parameters, we construct many interesting and useful biased linear estimators, which extend the ordinary biased linear estimators of the full-rank linear model to the deficient-rank linear model. Finally, we give a numerical example in geodetic adjustment.
Abstract: This study explores the influence of infill patterns on machine acceleration prediction in three-dimensional (3D) printing, focusing on extrusion technology. Our primary objective was to develop a long short-term memory (LSTM) network capable of assessing this influence. We conducted an extensive analysis involving 12 distinct infill patterns, collecting time-series data to examine their effects on the acceleration of the printer's bed. The LSTM network was trained using acceleration data from the adaptive cubic infill pattern, while the Archimedean chords infill pattern provided data for evaluating the network's prediction accuracy. Offline time-series acceleration data served as the training and testing datasets for the LSTM model. Specifically, the LSTM model was devised to predict the acceleration of a fused deposition modeling (FDM) printer using data from the adaptive cubic infill pattern. Rigorous testing yielded a root mean square error (RMSE) of 0.007144, reflecting the model's precision. Further refinement and testing of the LSTM model were conducted using acceleration data from the Archimedean chords infill pattern, resulting in an RMSE of 0.007328. Notably, the developed LSTM model outperformed an optimized recurrent neural network (RNN) in predicting machine acceleration data. The empirical findings highlight that the adaptive cubic infill pattern considerably influences the dimensional accuracy of parts printed using FDM technology.
Abstract: In recent years there has been increasing interest in developing spatial statistical models for data sets that are seemingly spatially independent. This lack of spatial structure makes it difficult, if not impossible, to use optimal predictors such as ordinary kriging to model the spatial variability in the data. In many instances, the data still contain a wealth of information that could be used to gain flexibility and precision in estimation. In this paper we propose using a combination of regression analysis, to describe the large-scale spatial variability in a set of survey data, and a tree-based stratification design, to enhance the estimation of the small-scale spatial variability. With this approach, sample units (i.e., pixels of a satellite image) are classified into homogeneous classes with respect to predicted error attributes, and the classes are then used as strata in a stratified analysis. Independent variables used as a basis for stratification included terrain data and satellite imagery. A decision rule was used to identify a tree size that minimized the error in estimating the variance of the mean response and the prediction uncertainties at new spatial locations. This approach was applied to a set of n = 937 forested plots from a state-wide inventory conducted in 2006 in the Mexican state of Jalisco. The final models accounted for 62% to 82% of the variability observed in canopy closure (%), basal area (m2·ha-1), cubic volume (m3·ha-1), and biomass (t·ha-1) on the sample plots. The spatial models provided unbiased estimates, and when averaged over all sample units in the population, the estimates of forest structure were very close to those obtained using classical estimators based on the sampling strategy of the state-wide inventory. The spatial models also provided unbiased estimates of model variances, leading to confidence and prediction coverage rates close to the nominal 0.95 rate.
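The stratified estimation step can be sketched as follows, with the tree-derived classes standing in as strata; all sample values and stratum weights are hypothetical:

```python
import numpy as np

# Stratified estimator of a population mean and its variance:
#   mean_st = sum_h W_h * ybar_h,   var_st = sum_h W_h^2 * s_h^2 / n_h
# where W_h is the population share of stratum h (here, a tree-derived class)
strata = {
    "low":  (np.array([2.1, 2.4, 1.9, 2.2]), 0.5),  # (sample values, weight W_h)
    "mid":  (np.array([4.0, 4.3, 3.8]), 0.3),
    "high": (np.array([7.9, 8.2, 8.0]), 0.2),
}
mean_st = sum(W * y.mean() for y, W in strata.values())
var_st = sum(W**2 * y.var(ddof=1) / len(y) for y, W in strata.values())
```

Because the variance only accumulates within-stratum spread, homogeneous strata (which the tree is grown to produce) drive the estimator's variance well below that of a simple random sample of the same size.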