Abstract: Industrial data frequently contain anomalies caused by technical faults and human factors. Existing constraint-based repair methods produce erroneous repairs when constraint thresholds are set too loosely or too strictly, and statistics-based methods, because of their smoothing repair mechanism, repair outliers that lie many time steps away with low accuracy. To address these problems, a time-series data repair method is proposed that combines reward-driven minimum-iteration repair with an improved WGAN hybrid model. First, in the preprocessing stage, anomalous data are retained and annotated so that the feature constraints between outliers and true values can be fully exploited. Second, a nearest-neighbor parameter clipping rule is proposed in the noise module to correct the noise vectors generated by the minimum-iteration repair formula, and the corrected vectors are passed to the generator of the distribution-simulation module. A dynamic temporal attention network layer is designed to extract time-series feature weights and is cascaded with gated recurrent units to capture feature dependencies across different step lengths, while the principle of recursive multi-step prediction is introduced to further strengthen the model's expressive power. In the discriminator, an Abnormal and Truth reward mechanism and a Weighted Mean Square Error loss function are designed to jointly optimize, via back-propagation, the detail and quality of the data repaired by the generator. Finally, experiments on public and real-world datasets show that the repair accuracy and model stability of the proposed method are significantly better than those of existing methods.
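The abstract names a Weighted Mean Square Error loss that penalizes the generator more heavily at annotated anomaly positions, but does not give the weighting scheme. The following is only a minimal numpy sketch in which the per-sample weights are a hypothetical input, not the paper's formulation.

    import numpy as np

    def weighted_mse(y_true, y_pred, weights):
        # Weighted Mean Square Error: annotated anomaly positions can be given
        # larger weights so mis-repairing them costs the generator more.
        weights = np.asarray(weights, dtype=float)
        err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
        return np.sum(weights * err ** 2) / np.sum(weights)

    # Example: double the weight at the two annotated anomaly positions.
    y_true = np.array([1.0, 2.0, 3.0, 4.0])
    y_pred = np.array([1.1, 2.5, 2.9, 6.0])
    w      = np.array([1.0, 2.0, 1.0, 2.0])
    print(weighted_mse(y_true, y_pred, w))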
Abstract: The purpose of this research work is to investigate the numerical solutions of the fractional dengue transmission model (FDTM) in the presence of Wolbachia using the stochastic-based Levenberg-Marquardt neural network (LM-NN) technique. The FDTM consists of 12 compartments. The human population is divided into four compartments: susceptible humans (S_(h)), exposed humans (E_(h)), infectious humans (I_(h)), and recovered humans (R_(h)). The Wolbachia-infected and Wolbachia-uninfected mosquito populations are each divided into four compartments: aquatic (eggs, larvae, pupae), susceptible, exposed, and infectious. We investigated three different cases of the vertical transmission probability (η), namely when only Wolbachia-free mosquitoes persist (η=0.6), when both types of mosquitoes persist (η=0.8), and when only Wolbachia-carrying mosquitoes persist (η=1). The objective of this study is to investigate the effectiveness of Wolbachia in reducing dengue and to present numerical results obtained with the stochastic LM-NN approach with 10 hidden layers of neurons for three different cases of the fractional-order derivative (α=0.4, 0.6, 0.8). The LM-NN approach includes a training, validation, and testing procedure that minimizes the mean square error (MSE) against a reference dataset obtained by solving the model with the Adams-Bashforth-Moulton (ABM) method; the data are split into 80% for training, 10% for validation, and 10% for testing. A comprehensive investigation of the competence, precision, capacity, and efficiency of the suggested LM-NN approach is provided through the MSE, state-transition findings, and regression analysis. The effectiveness of the LM-NN approach for solving the FDTM is demonstrated by the overlap of its findings with trustworthy reference measures, achieving a precision of up to 10^(-4).
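As a minimal illustration of the 80/10/10 split and the MSE criterion described above, the sketch below assumes the ABM reference solution is already available as a numpy array; the trajectory and the surrogate network output are placeholders, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    reference = np.exp(-2.0 * t)                                  # stand-in for an ABM reference trajectory
    predicted = reference + 0.01 * rng.standard_normal(t.size)    # stand-in for network output

    # 80% training, 10% validation, 10% testing split of the sample indices.
    idx = rng.permutation(t.size)
    n_train = int(0.8 * t.size)
    n_val = int(0.1 * t.size)
    train, val, test = np.split(idx, [n_train, n_train + n_val])

    mse = lambda a, b: np.mean((a - b) ** 2)
    for name, part in (("train", train), ("val", val), ("test", test)):
        print(name, mse(reference[part], predicted[part]))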
Abstract: The large number of antennas and the higher bandwidth usage in massive multiple-input multiple-output (MIMO) systems place an immense burden on the receiver in terms of power consumption. The power consumed by the receiver radio frequency (RF) circuits can be significantly reduced by applying low-resolution analog-to-digital converters (ADCs). In this paper we investigate the bandwidth efficiency (BE) of massive MIMO with perfect channel state information (CSI) when low-resolution ADCs are applied under Rician fading. We start our analysis by deriving the additive quantization noise model, which helps to understand the effect of ADC resolution on BE while keeping the power constraint at the receiver. We also investigate in depth the effects of higher bit resolutions and larger numbers of BS antennas on the BE of the system. We show that good BE can be achieved even with low-resolution ADCs when the regularized zero-forcing (RZF) combining algorithm is used. We also provide a generic analysis of energy efficiency (EE) for different bit choices by computing the EE from the achievable rates, and conclude that satisfactory BE can be achieved even with low-resolution ADCs/DACs in massive MIMO.
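The additive quantization noise model mentioned above replaces the quantizer by a linear gain plus uncorrelated noise. The sketch below is a toy scalar (single-antenna) illustration, assuming the common approximation that the distortion factor of a b-bit ADC is roughly (pi*sqrt(3)/2)*2^(-2b); it is not the paper's multi-antenna Rician derivation.

    import numpy as np

    def aqnm_rate(snr_linear, bits):
        # Distortion factor of a b-bit uniform quantizer (approximation),
        # and the AQNM gain alpha = 1 - rho.
        rho = (np.pi * np.sqrt(3) / 2.0) * 2.0 ** (-2.0 * bits)
        alpha = 1.0 - rho
        # Scalar AQNM: y = alpha*(h*x + n) + q, with quantization noise power
        # alpha*(1 - alpha)*(signal power + noise power).
        sndr = alpha * snr_linear / (alpha + (1.0 - alpha) * (snr_linear + 1.0))
        return np.log2(1.0 + sndr)

    snr_db = 10.0
    snr = 10.0 ** (snr_db / 10.0)
    for b in (1, 2, 3, 4, 8, 12):
        print(f"{b}-bit ADC: {aqnm_rate(snr, b):.3f} bit/s/Hz")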
Funding: supported by the National Natural Science Foundation of China (62371225, 62371227).
Abstract: Linear minimum mean square error (MMSE) detection has been shown to achieve near-optimal performance for massive multiple-input multiple-output (MIMO) systems, but it inevitably involves complicated matrix inversion, which entails high complexity. To avoid exact matrix inversion, a considerable number of implicit and explicit approximate matrix-inversion-based detection methods have been proposed. By combining the advantages of both explicit and implicit matrix inversion, this paper introduces a new low-complexity signal detection algorithm. Firstly, the relationship between implicit and explicit techniques is analyzed. Then, an enhanced Newton iteration method is introduced to realize approximate MMSE detection for massive MIMO uplink systems. The proposed improved Newton iteration significantly reduces the complexity of the conventional Newton iteration; however, its complexity is still high for later iterations, so it is applied only for the first two iterations. For subsequent iterations, we propose a novel trace iterative method (TIM) based low-complexity algorithm, which has significantly lower complexity than higher Newton iterations. Convergence guarantees for the proposed detector are also provided. Numerical simulations verify that the proposed detector exhibits significant performance enhancement over recently reported iterative detectors and achieves close-to-MMSE performance while retaining its low-complexity advantage for systems with hundreds of antennas.
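For context, the classical Newton (Newton-Schulz) iteration approximates the inverse of the MMSE filtering matrix A = H^H H + sigma^2 I without an explicit inversion. The sketch below shows only that baseline iteration applied to MMSE detection; the paper's enhanced Newton iteration and trace iterative method are not reproduced here.

    import numpy as np

    def newton_mmse_detect(H, y, sigma2, iters=3):
        # A is the Gram matrix regularized by the noise variance (MMSE filter).
        A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
        b = H.conj().T @ y
        # Diagonal initialization, a common choice when A is diagonally dominant.
        X = np.diag(1.0 / np.diag(A).real).astype(A.dtype)
        I = np.eye(A.shape[0], dtype=A.dtype)
        for _ in range(iters):
            X = X @ (2.0 * I - A @ X)      # Newton-Schulz update toward A^{-1}
        return X @ b                        # approximate MMSE estimate

    rng = np.random.default_rng(1)
    K, M = 8, 64                            # users, BS antennas
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), K)
    sigma2 = 0.01
    y = H @ x + np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    print(np.round(newton_mmse_detect(H, y, sigma2), 2))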
Abstract: The research focuses on improving predictive accuracy in the financial sector through the exploration of machine learning algorithms for stock price prediction. The research follows an organized process combining Agile Scrum and the Obtain, Scrub, Explore, Model, and iNterpret (OSEMN) methodology. Six machine learning models, namely Linear Forecast, Naive Forecast, Simple Moving Average with weekly window (SMA 5), Simple Moving Average with monthly window (SMA 20), Autoregressive Integrated Moving Average (ARIMA), and Long Short-Term Memory (LSTM), are compared and evaluated using Mean Absolute Error (MAE), with the LSTM model performing best, showcasing its potential for practical financial applications. A Django web application, “Predict It”, is developed to implement the LSTM model. Ethical concerns related to predictive modeling in finance are addressed. Data quality, algorithm choice, feature engineering, and preprocessing techniques are emphasized for better model performance. The research acknowledges limitations and suggests future research directions, aiming to equip investors and financial professionals with reliable predictive models for dynamic markets.
Abstract: Due to the rapid development of the logistics industry, transportation costs are increasing, and finding trends in transportation activities will positively impact investment in transportation infrastructure. There is limited literature and data-driven analysis about trends in transportation modes. This thesis delves into the operational challenges of vehicle performance management within logistics clusters, a critical aspect of efficient supply chain operations. It aims to address the issues faced by logistics organizations in optimizing their vehicle fleets' performance, essential for seamless logistics operations. The study's core design involves the development of a regression-based predictive logistics model focused on forecasting and evaluating vehicle performance in logistics clusters. It encompasses a comprehensive literature review, the research methodology, data sources, variables, feature engineering, and model training and evaluation; an F-test analysis was performed to identify and verify the relationships between the attributes and the target variable. The findings highlight the model's efficacy, with a low mean squared error (MSE) value of 3.42, indicating its accuracy in predicting performance metrics. The high R-squared (R2) score of 0.921 emphasizes its ability to capture relationships between input characteristics and performance metrics. The model's training and testing accuracy further attest to its reliability and generalization capabilities. In interpretation, this research underscores the practical significance of the findings. The regression-based model provides a practical solution for the logistics industry, enabling informed decisions regarding resource allocation, maintenance planning, and delivery route optimization. This contributes to enhanced overall logistics performance and customer service. By addressing performance gaps and embracing modern logistics technologies, the study supports the ongoing evolution of vehicle performance management in logistics clusters, fostering increased competitiveness and sustainability in the logistics sector.
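The MSE, R-squared, and F-test quantities quoted above can be reproduced for any regression model with scikit-learn; the sketch below uses synthetic placeholder data rather than the study's dataset.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, r2_score
    from sklearn.feature_selection import f_regression

    rng = np.random.default_rng(42)
    X = rng.standard_normal((500, 4))                      # placeholder vehicle-performance features
    y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.3 * rng.standard_normal(500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    pred = model.predict(X_te)

    print("MSE:", mean_squared_error(y_te, pred))
    print("R2 :", r2_score(y_te, pred))
    F, p = f_regression(X_tr, y_tr)                        # per-feature F-test against the target
    print("F-statistics:", np.round(F, 2), "p-values:", np.round(p, 4))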
Funding: supported by the Fundamental Research Funds for the Central Universities (ZYGX2009J016).
Abstract: The uncertainty of observers' positions can significantly degrade source localization accuracy. This paper proposes a method that uses self-location to calibrate the positions of the observer stations in source localization, so as to reduce the observer position errors and improve the accuracy of the source localization. The relative distance measurements of the two cooperating observers are used in a linear minimum mean square error (LMMSE) estimator. The results of computer simulations prove the feasibility and effectiveness of the proposed method. With typical estimation errors of the observers' positions, the MSE of source localization with self-location calibration is significantly lower than that without self-location calibration and approaches the Cramer-Rao lower bound (CRLB).
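The LMMSE estimator referred to above has the standard closed form x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y). The sketch below applies that generic formula to a toy position-calibration example with assumed prior and noise covariances; it is not the paper's specific measurement model.

    import numpy as np

    def lmmse(y, mu_x, mu_y, Cxy, Cyy):
        # Generic linear MMSE estimate of x from the measurement y.
        return mu_x + Cxy @ np.linalg.solve(Cyy, y - mu_y)

    rng = np.random.default_rng(3)
    true_offset = np.array([1.5, -0.8])          # unknown observer position offset
    mu_x = np.zeros(2)                           # prior mean of the offset
    Cxx = 4.0 * np.eye(2)                        # prior covariance (assumed)
    R = 0.25 * np.eye(2)                         # measurement noise covariance (assumed)

    # Measurement model y = x + v (direct noisy observation of the offset).
    y = true_offset + rng.multivariate_normal(np.zeros(2), R)
    Cxy = Cxx                                    # cov(x, y) = Cxx when y = x + v
    Cyy = Cxx + R                                # cov(y, y)
    print("LMMSE estimate:", lmmse(y, mu_x, mu_x, Cxy, Cyy))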
Funding: supported by the NSF of China (11271155) and the Research Fund for the Doctoral Program of Higher Education (20070183023).
Abstract: In this paper, we propose a log-normal linear model whose errors are first-order correlated, and suggest a two-stage method for the efficient estimation of the conditional mean of the response variable on the original scale. We obtain two estimators which minimize the asymptotic mean squared error (MM) and the asymptotic bias (MB), respectively. Both estimators are very easy to implement, and simulation studies show that they perform better.
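The two-stage MM/MB estimators are specific to the paper, but the underlying back-transformation problem is standard: if log Y = x'beta + eps with eps ~ N(0, sigma^2), then E[Y | x] = exp(x'beta + sigma^2/2), so simply exponentiating the fitted log-scale mean is biased. A minimal numpy illustration of that correction (not the paper's correlated-error estimators) follows.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 2000
    x = rng.uniform(0, 2, n)
    beta0, beta1, sigma = 0.5, 1.2, 0.6
    y = np.exp(beta0 + beta1 * x + sigma * rng.standard_normal(n))

    # Stage 1: ordinary least squares on the log scale.
    X = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    resid = np.log(y) - X @ coef
    s2 = resid @ resid / (n - 2)

    x0 = 1.0
    naive = np.exp(coef[0] + coef[1] * x0)               # biased back-transformation
    corrected = naive * np.exp(s2 / 2.0)                 # lognormal conditional-mean correction
    true_mean = np.exp(beta0 + beta1 * x0 + sigma**2 / 2.0)
    print(naive, corrected, true_mean)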
Abstract: In this paper, a regression method of estimation has been used to derive the mean estimate of the survey variable using simple random sampling without replacement in the presence of observational errors. Two covariates were used, and a case where the observational errors were in both the survey variable and the covariates was considered. The inclusion of observational errors was due to the fact that data collected through surveys are often not free from errors that occur during observation. These errors can occur due to over-reporting, under-reporting, memory failure by the respondents, or the use of imprecise tools of data collection. The expression for the mean squared error (MSE) of the obtained estimator has been derived to the first degree of approximation. The results of a simulation study show that the derived modified regression mean estimator under observational errors is more efficient than the mean per unit estimator and some other existing estimators. The proposed estimator can therefore be used in estimating a finite population mean while considering observational errors that may occur during a study.
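For reference, the classical single-covariate regression estimator of a finite population mean is y_reg = y_bar + b (X_bar - x_bar), where X_bar is the known population mean of the covariate and b is the sample regression slope; the paper's modified two-covariate, error-contaminated version is not reproduced here. A minimal sketch with made-up population values:

    import numpy as np

    rng = np.random.default_rng(11)
    N, n = 10000, 200
    X_pop = rng.gamma(4.0, 2.0, N)                      # auxiliary variable with known population mean
    Y_pop = 3.0 + 1.5 * X_pop + rng.standard_normal(N)  # survey variable

    idx = rng.choice(N, size=n, replace=False)          # SRSWOR sample
    x, y = X_pop[idx], Y_pop[idx]

    b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)          # sample regression slope
    y_reg = y.mean() + b * (X_pop.mean() - x.mean())    # regression mean estimator
    print("mean-per-unit:", y.mean(), "regression estimator:", y_reg,
          "true mean:", Y_pop.mean())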
Funding: Project (61201381) supported by the National Natural Science Foundation of China; Project (YP12JJ202057) supported by the Future Development Foundation of Zhengzhou Information Science and Technology College, China.
Abstract: Compared with the rank reduction estimator (RARE) based on second-order statistics (SOS-RARE), the RARE based on fourth-order cumulants (FOC-RARE) can handle more sources and suppress the negative impact of Gaussian colored noise. However, the unexpected modeling errors appearing in practice are known to significantly degrade the performance of the RARE. Therefore, the direction-of-arrival (DOA) estimation performance of the FOC-RARE is quantitatively analyzed. An explicit expression for the direction-finding (DF) error is derived via first-order perturbation analysis, and then a theoretical formula for the mean square error (MSE) is given. Simulation results validate the theoretical analysis and reveal that the FOC-RARE is more robust to unexpected modeling errors than the SOS-RARE.
Funding: supported by the National Natural Science Foundation of China (62033010) and the Aeronautical Science Foundation of China (2019460T5001).
Abstract: Quite often, the theoretical model used in Kalman filtering is not sufficiently accurate for practical applications because the covariances of the noises are not exactly known. Our previous work reveals that in such a scenario the filter-calculated mean square errors (FMSE) and the true mean square errors (TMSE) become inconsistent, whereas FMSE and TMSE are consistent in a Kalman filter with accurate models. This can lead to low credibility of the state estimation regardless of whether Kalman filters or adaptive Kalman filters are used. It is therefore important to study this inconsistency, since it is vital to understand the quantitative influence induced by inaccurate models. Aiming at this, the concept of credibility is adopted to discuss the inconsistency problem in this paper. In order to quantify the degree of credibility, a trust factor is constructed based on the FMSE and the TMSE. However, the trust factor cannot be computed directly, since the TMSE is unavailable in practical applications. Based on the definition of the trust factor, its estimation is therefore recast as online estimation of the TMSE. More importantly, a necessary and sufficient condition is found, which turns out to be the basis for better design of high-performance Kalman filters. Accordingly, beyond trust factor estimation with the Sage-Husa technique (TFE-SHT), three novel trust factor estimation methods are proposed: a direct numerical solving method (TFE-DNS), a particle swarm optimization method (PSO), and an expectation maximization-particle swarm optimization method (EM-PSO). The analysis and simulation results both show that the proposed TFE-DNS is better than the TFE-SHT for the case of a single unknown noise covariance. Meanwhile, the proposed EM-PSO clearly outperforms the EM and PSO methods in estimating the credibility degree and the state when both noise covariances must be estimated online.
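To make the FMSE/TMSE distinction concrete, the sketch below runs a scalar Kalman filter whose assumed measurement noise variance is wrong, and compares the filter-calculated error variance (FMSE) with the Monte-Carlo true error (TMSE); the trust-factor ratio printed at the end is only an illustrative definition, not necessarily the paper's exact formula.

    import numpy as np

    rng = np.random.default_rng(5)
    a, q_true, r_true = 0.95, 0.1, 1.0      # true scalar model
    r_assumed = 0.2                          # filter uses a wrong measurement variance
    n_steps, n_runs = 200, 1000

    sq_err = np.zeros(n_steps)
    for _ in range(n_runs):
        x, xhat, P = 0.0, 0.0, 1.0
        for k in range(n_steps):
            x = a * x + np.sqrt(q_true) * rng.standard_normal()
            z = x + np.sqrt(r_true) * rng.standard_normal()
            # Kalman filter using the assumed (wrong) measurement variance.
            xpred, Ppred = a * xhat, a * a * P + q_true
            K = Ppred / (Ppred + r_assumed)
            xhat, P = xpred + K * (z - xpred), (1.0 - K) * Ppred
            sq_err[k] += (x - xhat) ** 2

    fmse = P                                  # filter-calculated (steady-state) MSE
    tmse = sq_err[-1] / n_runs                # Monte-Carlo true MSE at the last step
    print("FMSE:", round(fmse, 4), "TMSE:", round(tmse, 4),
          "trust factor (FMSE/TMSE):", round(fmse / tmse, 3))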
Funding: National Key Research and Development Program of the Ministry of Science (2018YFB1502801); Hubei Provincial Natural Science Foundation (2022CFD017); Innovation and Development Project of China Meteorological Administration (CXFZ2023J044).
Abstract: This study assesses the predictive capability of the CMA-GD model for wind speed prediction at two wind farms located in Hubei Province, China. The wind speeds observed at a height of 70 m at the wind turbines of two wind farms in Suizhou serve as the actual observation data for comparison and testing. At the same time, the wind speed predicted by the EC model is included for comparative analysis. The results indicate that the CMA-GD model performs better than the EC model at Wind Farm A, where the CMA-GD model exhibits a monthly average correlation coefficient of 0.56, a root mean square error of 2.72 m s^(-1), and an average absolute error of 2.11 m s^(-1), while the EC model shows a monthly average correlation coefficient of 0.51, a root mean square error of 2.83 m s^(-1), and an average absolute error of 2.21 m s^(-1). Conversely, at Wind Farm B the EC model outperforms the CMA-GD model: the CMA-GD model achieves a monthly average correlation coefficient of 0.55, a root mean square error of 2.61 m s^(-1), and an average absolute error of 2.13 m s^(-1), whereas the EC model displays a monthly average correlation coefficient of 0.63, a root mean square error of 2.04 m s^(-1), and an average absolute error of 1.67 m s^(-1).
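The three verification scores quoted above (correlation coefficient, root mean square error, and average absolute error) can be computed directly from paired forecast/observation series, as in the short numpy sketch below with placeholder arrays.

    import numpy as np

    obs = np.array([6.1, 5.4, 7.8, 9.2, 4.3, 6.6])        # observed 70 m wind speed (m/s), placeholder
    fcst = np.array([5.0, 6.2, 7.1, 10.4, 5.1, 6.0])      # model forecast (m/s), placeholder

    corr = np.corrcoef(obs, fcst)[0, 1]                    # correlation coefficient
    rmse = np.sqrt(np.mean((fcst - obs) ** 2))             # root mean square error
    mae = np.mean(np.abs(fcst - obs))                      # average absolute error
    print(f"corr={corr:.2f}  RMSE={rmse:.2f} m/s  MAE={mae:.2f} m/s")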
Abstract: Phasor Measurement Units (PMUs) provide Global Positioning System (GPS) time-stamped synchronized measurements of voltage and current, together with the phase angle of the system, at certain points along the grid. These synchronized measurements are extracted in the form of amplitude and phase from various locations in the power grid in order to monitor and control the power system condition. A PMU is a crucial piece of power equipment from both the cost and the operational point of view, so continued development and improvement of the PMU's principal functions is essential for network operators to enhance grid quality and reduce operating expenses. This paper introduces a proposed method that leads to a low-cost and less complex technique for optimizing the performance of a PMU using a Second-Order Kalman Filter. It is based on the asynchrophasor technique, which minimizes the phase error when receiving the signal from an access point or from the main access point. A MATLAB model has been created to implement the proposed method in the presence of Gaussian and non-Gaussian noise. The results show that the proposed Second-Order Kalman Filter outperforms the existing model; the results were evaluated using the Mean Square Error (MSE). The proposed Second-Order Kalman Filter replaces the synchronization unit in the PMU structure, which clarifies the significance of the proposed new PMU.
Funding: The researcher would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project.
Abstract: Most remote systems require user authentication to access resources. Text-based passwords are still widely used as a standard method of user authentication. Although conventional text-based passwords are rather hard to remember, users often write their passwords down, which compromises security. One of the most complex challenges users may face is posting sensitive data on external data centers that are accessible to others and are not controlled directly by the users. Graphical user authentication methods have recently been proposed to verify user identity. However, a fundamental limitation of a graphical password is that it must use a colorful and rich image to provide an adequate password space and maintain security, and when the user clicks and inputs a password between two possible grids, the fault tolerance must be adjusted to avoid errors. This paper proposes an enhanced graphical authentication scheme which combines the benefits of both recognition- and recall-based graphical techniques with image steganography. The combination of graphical authentication and steganography technologies reduces the amount of sensitive data shared between users and service providers and improves the security of user accounts. To evaluate the effectiveness of the proposed scheme, the peak signal-to-noise ratio and mean squared error parameters have been used.
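Peak signal-to-noise ratio is derived from the MSE between the cover image and the stego image as PSNR = 10*log10(MAX^2 / MSE). A minimal numpy sketch on random 8-bit placeholder images:

    import numpy as np

    def psnr_and_mse(cover, stego, max_val=255.0):
        mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
        psnr = float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
        return psnr, mse

    rng = np.random.default_rng(9)
    cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    stego = cover.copy()
    stego ^= rng.integers(0, 2, size=cover.shape, dtype=np.uint8)   # flip some least significant bits
    value, mse = psnr_and_mse(cover, stego)
    print(f"MSE={mse:.4f}  PSNR={value:.2f} dB")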
Abstract: Mosquitoes are of great concern because they occasionally carry noxious diseases (dengue, malaria, zika, and yellow fever). To control mosquitoes, it is crucial to effectively monitor their behavioral trends and presence. A traditional mosquito repellent works by heating small pads soaked in repellent, which then diffuses over a protected area around the user, a convenient alternative to spraying oneself with insecticide. But such devices have limitations, including their range, the need to turn them on manually, and the wait for the protection to take effect while the mosquitoes may already have found you. This research aims to design a fuzzy-based controller that addresses these issues by automatically determining a mosquito repellent's speed and active time. The speed and active time depend on the repellent cartridge and the number of mosquitoes. The Mamdani model is used in the proposed fuzzy system (FS). The FS consists of identifying crisp (unambiguous) inputs, a fuzzification process, rule evaluation, and a defuzzification process to produce crisp outputs. The input variables are the repellent cartridge and the number of mosquitoes, and the speed of the mosquito repellent is used as the output variable. The whole FS is designed and simulated using MATLAB Simulink R2016b. The proposed FS is implemented and verified on a microcontroller using its pulse-width-modulation capability. Different simulations of the proposed model are performed for several nonlinear processes, and a comparative analysis of the outcomes under similar conditions confirms the higher accuracy of the FS, yielding a maximum relative error of 10%. The experimental outcomes show that the root mean square error is reduced by 67.68% and the mean absolute percentage error by 52.46%. Using a fuzzy-based mosquito repellent can help maintain the speed of the mosquito repellent and control the energy it uses.
Abstract: In order to research brain problems using MRI, PET, and CT neuroimaging, a correct understanding of brain function is required. This has previously been attempted with the support of traditional algorithms, and deep learning has also been widely considered for processing such genomics data. In this research, brain disorders including Alzheimer's disease, schizophrenia, and Parkinson's disease are analyzed, motivated by the misdetection of disorders in neuroimaging data examined by means of traditional methods. Moreover, a deep learning approach is incorporated here for the classification of brain disorders with the aid of Deep Belief Networks (DBN). Images are stored in a secured manner by using a DNA sequence based JPEG Zig Zag Encryption algorithm (DBNJZZ). The suggested approach is executed and tested using performance metrics such as accuracy, root mean square error, mean absolute error, and mean absolute percentage error. The proposed DBNJZZ gives better performance than previously available methods.
Abstract: As the scale of software systems expands, maintaining their stable operation has become an extraordinary challenge. System logs are semi-structured text generated by the recording functions in the source code and have important research significance for software service anomaly detection. Existing log anomaly detection methods mainly focus on the statistical characteristics of logs, making it difficult to distinguish the semantic differences between normal and abnormal logs, and they perform poorly on real-world industrial log data. In this paper, we propose an unsupervised framework for log anomaly detection based on generative pre-training-2 (GPT-2). We apply our approach to two industrial systems. The experimental results on two datasets show that our approach outperforms state-of-the-art approaches for log anomaly detection.
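The paper's framework is not reproduced here, but a common unsupervised baseline in the same spirit is to score each log line by its language-model loss (a higher loss means the line is less expected and thus more likely anomalous). The sketch below assumes the Hugging Face transformers package and a pretrained GPT-2 checkpoint; thresholding the scores is left out.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def anomaly_score(log_line: str) -> float:
        # Average next-token cross-entropy of the line under GPT-2.
        ids = tokenizer(log_line, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return loss.item()

    logs = [
        "INFO Received block blk_123 of size 67108864 from /10.0.0.1",
        "ERROR Exception in receiveBlock for block blk_123 java.io.IOException",
    ]
    for line in logs:
        print(f"{anomaly_score(line):.3f}  {line}")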
Funding: supported by ZTE Industry-University-Institute Cooperation Funds, the Natural Science Foundation of Shanghai under Grant No. 23ZR1407300, and the National Natural Science Foundation of China under Grant No. 61771147.
Abstract: Hybrid beamforming (HBF) has become an attractive and important technology in massive multiple-input multiple-output (MIMO) millimeter-wave (mmWave) systems. Different hybrid architectures arise in HBF depending on the connection strategy of the phase shifter network between the antennas and the radio frequency chains. This paper investigates HBF optimization with different hybrid architectures in broadband point-to-point mmWave MIMO systems. The joint hybrid architecture and beamforming optimization problem is divided into two sub-problems. First, we transform the spectral efficiency maximization problem into an equivalent weighted mean squared error minimization problem and propose an algorithm based on the manifold optimization method for the hybrid beamformer with a fixed hybrid architecture. The overlapped subarray architecture, which balances well between hardware cost and system performance, is investigated. We further propose an algorithm to dynamically partition the antenna subarrays and combine it with the HBF optimization algorithm. Simulation results are presented to demonstrate the performance improvement of the proposed algorithms.
Abstract: The study explores the asymptotic consistency of the James-Stein shrinkage estimator obtained by shrinking a maximum likelihood estimator. We use Hansen's approach to show that the James-Stein shrinkage estimator converges asymptotically to a multivariate normal distribution with shrinkage effect values. We establish the order and rate of convergence, from which it follows that the James-Stein shrinkage estimator is √n-consistent. We then visualize its consistency by studying the asymptotic behaviour through simulation plots in R of the mean squared error of the maximum likelihood estimator and of the shrinkage estimator; the latter graphically shows a lower mean squared error than that of the maximum likelihood estimator.
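For a concrete illustration of the MSE comparison described above, the classical James-Stein estimator of a p-dimensional normal mean (p >= 3, unit variance) shrinks the MLE X toward zero by the factor 1 - (p-2)/||X||^2. The numpy sketch below (Python rather than the R used in the study) compares empirical MSEs:

    import numpy as np

    rng = np.random.default_rng(2024)
    p, n_rep = 10, 20000
    theta = np.full(p, 0.5)                        # true mean vector

    X = theta + rng.standard_normal((n_rep, p))    # one observation per replication, X ~ N(theta, I)
    norm2 = np.sum(X ** 2, axis=1, keepdims=True)
    js = (1.0 - (p - 2) / norm2) * X               # James-Stein shrinkage toward the origin

    mse_mle = np.mean(np.sum((X - theta) ** 2, axis=1))
    mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))
    print("MSE of MLE:", round(mse_mle, 3), " MSE of James-Stein:", round(mse_js, 3))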
Abstract: In regression, despite both being aimed at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross Validation (GCV) selection criteria are usually derived from two quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it, valid for any linear model (LM) fitter under either random or non-random designs. Specializing these MSPE expressions for each of them, we derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full-column-rank design matrix, and Ordinary and Generalized Ridge regression, the latter embedding smoothing-spline fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly obtain a class of MSPE estimates coinciding with the classical GCV formula for those same LM fitters.
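As a concrete reference point, for full-rank OLS with n observations and p parameters the classical formulas are FPE = (RSS/n)*(n + p)/(n - p) and GCV = (RSS/n)/(1 - p/n)^2; the paper's generalized expressions for other LM fitters are not reproduced here. A short numpy sketch computing both on synthetic data:

    import numpy as np

    rng = np.random.default_rng(13)
    n, p = 200, 5
    X = rng.standard_normal((n, p))
    beta = rng.standard_normal(p)
    y = X @ beta + rng.standard_normal(n)

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
    rss = np.sum((y - X @ beta_hat) ** 2)

    fpe = (rss / n) * (n + p) / (n - p)                # Akaike's Final Prediction Error
    gcv = (rss / n) / (1.0 - p / n) ** 2               # Generalized Cross Validation
    print("FPE:", round(fpe, 4), "GCV:", round(gcv, 4))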