The prediction of bathymetry has advanced significantly with the development of satellite altimetry. However, the majority of its data originate from marine gravity anomalies. In this study, based on the expression for the vertical gravity gradient (VGG) of a rectangular prism, we establish the governing equations for determining sea depths and thus inverting bathymetry. The governing equation is solved by linearization through an iterative process, and numerical simulations verify the algorithm and its stability. We also study processing methods for different interference errors. The regularization method improves the stability of the inversion process against errors. A piecewise bilinear interpolation function roughly replaces the low-frequency error, and numerical simulations show that the accuracy can be improved by 41.2% after this treatment. For variable ocean crust density, numerical simulations verify that the root-mean-square (RMS) error of prediction is approximately 5 m for a sea depth of 6 km if the density is chosen as the average value. Finally, bathymetry in two test regions in the South China Sea is predicted and compared with ship sounding data; the RMS errors of the predictions are 71.1 m and 91.4 m, respectively.
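As a sketch of the building blocks (our own illustration, not code from the paper): the closed-form VGG of a rectangular prism after Nagy et al. (2000), plus a generic regularized Gauss-Newton loop for the iterative linearization. The `forward` and `jacobian` callables are hypothetical stand-ins for the paper's forward model and its sensitivity to the depth grid.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_vgg(x, y, z, rho):
    """T_zz of a rectangular prism at the origin (closed form after Nagy et
    al., 2000). x, y, z are corner-coordinate pairs (x1, x2) etc. relative
    to the observation point, in metres; rho in kg/m^3; result in s^-2
    (1 Eotvos = 1e-9 s^-2). The overall sign depends on the axis
    orientation -- verify against a known case; corners with z == 0 need
    special handling."""
    tzz = 0.0
    for i, xi in enumerate(x, 1):
        for j, yj in enumerate(y, 1):
            for k, zk in enumerate(z, 1):
                r = np.sqrt(xi**2 + yj**2 + zk**2)
                tzz += (-1.0) ** (i + j + k) * np.arctan(xi * yj / (zk * r))
    return G * rho * tzz

def invert_depths(vgg_obs, forward, jacobian, d0, alpha=1e-2, n_iter=20):
    """Iterative linearization with Tikhonov regularization: at each step
    solve (J^T J + alpha I) delta = J^T (vgg_obs - forward(d))."""
    d = d0.copy()
    for _ in range(n_iter):
        J = jacobian(d)                     # sensitivity of VGG to depths
        r = vgg_obs - forward(d)            # data residual
        delta = np.linalg.solve(J.T @ J + alpha * np.eye(d.size), J.T @ r)
        d = d + delta
    return d
```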
The accuracy of acquired channel state information (CSI) for beamforming design is essential for the achievable performance of multiple-input multiple-output (MIMO) systems. However, in a high-speed moving scene with time-division duplex (TDD) mode, CSI acquired through channel reciprocity is inevitably outdated, leading to outdated beamforming designs and thus performance degradation. In this paper, a robust beamforming design under channel prediction errors is proposed for a time-varying MIMO system, based on channel prediction techniques, to further combat this degradation. Specifically, the statistical characteristics of historical channel prediction errors are exploited and modeled. Moreover, to deal with the random error terms, deterministic equivalents are adopted to further explore the potential beamforming gain through this statistical information and ultimately derive a robust design that maximizes the weighted sum rate. Simulation results show that the proposed beamforming design outperforms the traditional design throughout the downlink transmission time, even when channels vary fast.
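For orientation only (this is a common robust heuristic, not the paper's deterministic-equivalent, weighted-sum-rate design): a regularized zero-forcing precoder whose regularizer is inflated by the prediction-error variance, for a K-user downlink with predicted channel `H_hat`.

```python
import numpy as np

def robust_rzf(H_hat, err_var, noise_var, power):
    """Regularized zero-forcing on the predicted channel H_hat (K x M):
    inflating the regularizer by the prediction-error variance err_var
    hedges against outdated CSI. Returns an M x K precoder scaled to the
    total power budget."""
    K, M = H_hat.shape
    reg = K * (noise_var + err_var) / power
    W = H_hat.conj().T @ np.linalg.inv(H_hat @ H_hat.conj().T + reg * np.eye(K))
    return np.sqrt(power) * W / np.linalg.norm(W)
```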
Unlike the height-diameter equations for standing trees commonly used in forest resources modelling, tree height models for cut-to-length (CTL) stems tend to produce prediction errors whose distributions are not conditionally normal but are rather leptokurtic and heavy-tailed. This feature was merely noticed in previous studies but never thoroughly investigated. This study characterized the prediction error distribution of one such newly developed tree height model for Pinus radiata (D. Don) through the three-parameter Burr Type XII (BXII) distribution. The model's prediction errors (ε) exhibited heteroskedasticity conditional mainly on the small-end relative diameter of the top log and also on DBH to a minor extent. Structured serial correlations were also present in the data. A total of 14 candidate weighting functions were compared to select the best two for weighting ε in order to reduce its conditional heteroskedasticity. The weighted prediction errors (εw) were shifted by a constant into the positive range supported by the BXII distribution. The distribution of the weighted and shifted prediction errors (εw+) was then characterized by the BXII distribution using maximum likelihood estimation through 1000 repetitions of random sampling, fitting and goodness-of-fit testing, each time taking only one observation per tree at random to circumvent the potential adverse impact of the serial correlation in the data on parameter estimation and inference. The nonparametric two-sample Kolmogorov-Smirnov (KS) goodness-of-fit test and the closely related Kuiper's (KU) test showed that the fitted BXII distributions provided a good fit to the highly leptokurtic and heavy-tailed distribution of ε. Random samples generated from the fitted BXII distributions of εw+ derived from the best two weighting functions, when back-shifted and unweighted, exhibited distributions that were, in about 97% and 95% of the 1000 cases respectively, not statistically different from the distribution of ε. Our results for cut-to-length P. radiata stems represent the first case for any tree species in which a non-normal error distribution in tree height prediction has been described by an underlying probability distribution. The fitted BXII prediction error distribution will help to unlock the full potential of the new tree height model in forest resources modelling of P. radiata plantations, particularly when uncertainty assessments, statistical inferences and error propagation are needed in research and practical applications through harvester data analytics.
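A minimal SciPy sketch of the fit-and-test cycle, using a synthetic heavy-tailed stand-in for the weighted and shifted errors (the real εw+ comes from the paper's weighting functions and harvester data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
eps_w = 0.3 * rng.standard_t(df=4, size=500)   # heavy-tailed stand-in errors
eps_ws = eps_w - eps_w.min() + 1e-6            # shift into the positive range

# MLE fit of the three-parameter Burr XII (shapes c, d and scale; loc fixed)
c, d, loc, scale = stats.burr12.fit(eps_ws, floc=0)

# Goodness of fit: two-sample KS between data and draws from the fitted law
draws = stats.burr12.rvs(c, d, loc=loc, scale=scale,
                         size=eps_ws.size, random_state=42)
ks = stats.ks_2samp(eps_ws, draws)
print(f"c={c:.3f} d={d:.3f} scale={scale:.3f} KS p-value={ks.pvalue:.3f}")
```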
In regression, despite both being aimed at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross-Validation (GCV) selection criteria are usually derived from two quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it, valid for any linear model (LM) fitter, under random or non-random designs. Specializing these MSPE expressions for each of them, we derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full-column-rank design matrix; and Ordinary and Generalized Ridge regression, the latter embedding smoothing spline fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly obtain a class of MSPE estimates coinciding with the classical GCV formula for those same LM fitters.
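For OLS with an n × p full-column-rank design, the two criteria have simple closed forms, FPE = (RSS/n)(n+p)/(n−p) and GCV = (RSS/n)/(1−p/n)², which agree to first order in p/n; a small sketch:

```python
import numpy as np

def fpe_gcv(y, X):
    """Akaike's FPE and the GCV estimate of the MSPE for OLS with a
    full-column-rank design X of shape (n, p)."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    fpe = rss / n * (n + p) / (n - p)
    gcv = rss / n / (1.0 - p / n) ** 2
    return fpe, gcv
```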
Low-weight, high-toughness thin plate parts are widely used in modern industry, but their flexibility seriously impacts machinability. Plenty of studies focus on the influence of the machine tool and cutting tool on machining errors; however, few focus on compensating machining errors through the fixture. In order to improve the machining accuracy of thin plate-shaped parts in face milling, this paper presents a novel method for compensating surface errors by prebending the workpiece during the milling process. First, a machining error prediction model using the finite element method is formulated, which simplifies the contacts between the workpiece and fixture with spring constraints. Milling forces calculated by a micro-unit cutting force model are loaded onto the error prediction model to predict the machining error. The error prediction results are substituted into the given formulas to obtain the prebending clamping forces and clamping positions. Consequently, the workpiece is prebent according to the calculated clamping forces and positions during the face milling operation to reduce the machining error. Finally, simulation and experimental tests are carried out to validate the correctness and efficiency of the proposed error compensation method. The experimentally measured flatness results show that flatness improves by approximately 30 percent with this error compensation method. The proposed method not only predicts the machining errors in face milling of thin plate-shaped parts but also reduces them by taking full advantage of the workpiece prebending afforded by the fixture; meanwhile, it provides a novel idea and theoretical basis for reducing milling errors and improving milling accuracy.
Extended-range (10-30 d) heavy rain forecasting is difficult but performs an important function in disaster prevention and mitigation. In this paper, a nonlinear cross prediction error (NCPE) algorithm that combines nonlinear dynamics and statistical methods is proposed. The method is based on phase-space reconstruction of chaotic single-variable time series of precipitable water and is tested on 100 global cases of heavy rain. First, the nonlinear relative dynamic error for local attractor pairs is calculated at different stages of the heavy rain process, after which the local change characteristics of the attractors are analyzed. Second, the eigen-peak is defined as a prediction indicator based on an error threshold of about 1.5 and is then used to analyze the forecasting validity period. The results reveal that the prediction indicator features regarded as eigen-peaks for heavy rain extreme weather are all reflected consistently, without failure, by the NCPE model; prediction validity periods of 1-2 d, 3-9 d and 10-30 d were obtained in 4, 22 and 74 cases, respectively, without false alarm or omission. The NCPE model developed allows accurate forecasting of heavy rain over an extended range of 10-30 d and has the potential to be used to explore the mechanisms involved in the development of heavy rain according to a segmentation scale. This novel method provides new insights into extended-range forecasting and atmospheric predictability, and also allows the creation of multi-variable chaotic extreme weather prediction models based on high spatiotemporal resolution data.
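The reconstruction step underlying NCPE is standard delay embedding (Takens); a minimal sketch, leaving out the choice of embedding dimension m and delay τ (e.g., by false nearest neighbours and mutual information):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Delay-embed a scalar series x into m-dimensional phase-space vectors
    [x(t), x(t + tau), ..., x(t + (m - 1) tau)], one vector per row."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
```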
Using observational wind data and the Zebiak-Cane model, the impact of the Madden-Julian Oscillation (MJO) as external forcing on El Niño-Southern Oscillation (ENSO) predictability is studied. The observational data are analyzed with the Continuous Wavelet Transform (CWT) and then used to extract MJO signals, which are added into the model to obtain a new model. After the Conditional Nonlinear Optimal Perturbation (CNOP) method has been applied, the initial errors that can evolve into the maximum prediction error, the model errors, and their joint errors are obtained, and the Niño 3 indices and spatial structures of the three kinds of errors are investigated. The results mainly show that the observational MJO has little impact on the maximum prediction error of ENSO events and that the initial error has a much greater effect than the model error caused by MJO forcing. This demonstrates that the initial error might be the main error source producing uncertainty in ENSO prediction, which could provide a theoretical foundation for adaptive data assimilation in ENSO forecasting and contribute to ENSO target observation.
With the Zebiak-Cane (ZC) model, the initial error that has the largest effect on ENSO prediction is explored by conditional nonlinear optimal perturbation (CNOP). The results demonstrate that CNOP-type errors cause the largest prediction error of ENSO in the ZC model. By analyzing the behavior of CNOP-type errors, we find that for the normal states and the relatively weak El Niño events in the ZC model, the predictions tend to yield false alarms due to the uncertainties caused by CNOP. For the relatively strong El Niño events, the ZC model largely underestimates their intensities. Our results also suggest that the error growth of El Niño in the ZC model depends on the phases of both the annual cycle and ENSO. Conditions during northern spring and summer are most favorable for error growth, so ENSO predictions spanning these two seasons may be the most difficult. A linear singular vector (LSV) approach is also used to estimate the error growth of ENSO, but it underestimates the prediction uncertainties of ENSO in the ZC model. This result indicates that different initial errors of the same magnitude cause prediction errors of different amplitudes, with CNOP yielding the severest prediction uncertainty. That is to say, the prediction skill of ENSO is closely related to the type of initial error. This finding illustrates a theoretical basis for data assimilation: it is expected that a data assimilation method can filter the initial errors related to CNOP and improve ENSO forecast skill.
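For reference, the CNOP is the initial perturbation that maximizes the nonlinearly evolved prediction error under an amplitude constraint, whereas the LSV maximizes only the linearized growth:

$$
\delta x_0^{*}=\arg\max_{\|\delta x_0\|\le\beta}\left\|M_T\!\left(x_0+\delta x_0\right)-M_T\!\left(x_0\right)\right\|,
\qquad
\delta x_0^{\mathrm{LSV}}=\arg\max_{\delta x_0\neq 0}\frac{\left\|\mathbf{L}_T\,\delta x_0\right\|}{\left\|\delta x_0\right\|},
$$

where $M_T$ is the nonlinear propagator of the model over lead time $T$, $\mathbf{L}_T$ is its tangent linear operator along the basic state $x_0$, and $\beta$ is the initial-error constraint radius. The gap between the two formulations is why LSV underestimates the uncertainties that CNOP captures.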
Vector-to-raster conversion is a process accompanied by errors. The errors are classified into errors predicted before rasterization and actual errors measured after it. Accurate prediction of the errors is beneficial for developing reasonable rasterization technical schemes and for making products of high quality. Analyzing and establishing a quantitative relationship between the error and its affecting factors is the key to error prediction. In this study, land cover data of China at a scale of 1:250 000 were taken as an example for analyzing the relationship between rasterization errors and the density of arc length (DA), the density of polygons (DP) and the size of grid cells (SG). Significant correlations were found between the errors and DA, DP and SG. The coefficient of determination (R²) of a model established from samples collected in a small region (Beijing) reaches 0.95, and R² equals 0.91 when the model is validated with samples from the whole nation. Conversely, the R² of a model established from nationwide samples reaches 0.96, and R² equals 0.91 when it is validated with the samples from Beijing. These models depict well the relationships between rasterization errors and their affecting factors (DA, DP and SG). The analysis method established in this study can be applied to effectively predict rasterization errors in other cases as well.
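As an illustration of the fitting step only (the abstract does not reproduce the paper's model form), the sketch below regresses rasterization error on DA, DP and SG with one plausible log-linear form and synthetic stand-in samples:

```python
import numpy as np

rng = np.random.default_rng(0)
DA, DP, SG, E = rng.random((4, 200)) + 0.1   # stand-ins for measured samples

# One plausible form: E = a * DA^b1 * DP^b2 * SG^b3, linear in log space
X = np.column_stack([np.ones_like(DA), np.log(DA), np.log(DP), np.log(SG)])
coef, *_ = np.linalg.lstsq(X, np.log(E), rcond=None)

fit = X @ coef
r2 = 1 - np.sum((np.log(E) - fit) ** 2) / np.sum((np.log(E) - np.log(E).mean()) ** 2)
print(f"coefficients={coef.round(3)}, R^2={r2:.3f}")
```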
Previous studies indicate that ENSO predictions are particularly sensitive to the initial conditions in some key areas (so-called "sensitive areas"). And yet, few studies have quantified improvements in prediction skill in the context of an optimal observing system. In this study, the impact on prediction skill is explored using an intermediate coupled model in which the errors in the initial conditions used to make ENSO predictions are removed in certain areas. Based on ideal observing system simulation experiments, the importance of various observational networks for improving El Niño prediction skill is examined. The results indicate that the initial states in the central and eastern equatorial Pacific are important for improving El Niño prediction skill effectively. When the initial condition errors in the central equatorial Pacific are removed, ENSO prediction errors can be reduced by 25%. Furthermore, combinations of various subregions are considered to demonstrate their effect on ENSO prediction skill. In particular, seasonally varying observational networks are suggested to improve the prediction skill more effectively. For example, in addition to observing in the central equatorial Pacific and the region to its north throughout the year, increasing observations in the eastern equatorial Pacific from April to October is crucially important and can improve the prediction accuracy by 62%. These results also demonstrate the effectiveness of the conditional nonlinear optimal perturbation approach for detecting sensitive areas for target observations.
Background and Purpose: To investigate which functional independence measure (FIM) items to target in order to achieve the prediction goal, in terms of the causal relationships between prognostic prediction error and FIM among stroke patients in the convalescent phase, using structural equation modeling (SEM). Methods: A total of 2992 stroke patients registered in the Japanese Rehabilitation Database were analyzed retrospectively. The prediction error was calculated based on a prognostic prediction formula proposed in a previous study. An exploratory factor analysis (EFA) was performed, and the factor structure was then confirmed using confirmatory factor analysis (CFA). Finally, multivariate analyses were performed using SEM. Results: The fit indices of the model hypothesized on the basis of the EFA were confirmed by CFA. The factors estimated by EFA were applied and interpreted as follows: "Transferring (T-factor)," "Dressing (D-factor)," and "Cognitive function (C-factor)." The fit of the structural model based on the three factors and the prediction errors was supported by the SEM analysis. The D- and C-factors showed similar causal relationships with the prediction error, whereas the effect between the prediction error and the T-factor was low. The observed FIM items were related to their domains in the structural model, except for dressing of the upper body and memory (p < 0.01). Conclusions: Transfer, which was not heavily weighted in the previous prediction formula, was found to have a causal relationship with the prediction error. It is suggested that interventions target transfer, together with factors positive for recovery, to achieve the prediction goal.
Image encryption (IE) is a very useful and popular technology for protecting the privacy of users. Most algorithms encrypt the original image into something resembling texture or noise, but texture and noise are an obvious visual indication that the image has been encrypted, which is more likely to attract attacks. To overcome this shortcoming, many image encryption systems that convert the original image into a carrier image with visual significance have been proposed. However, the generated cryptographic image still has texture features. In line with the idea of improving the visual quality of the final cipher images, we propose a meaningful image hiding algorithm based on prediction error and the discrete wavelet transform. Extensive experimental results and security analysis show that the proposed algorithm achieves high visual quality while ensuring security.
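For background on the prediction-error side (a generic histogram-shifting sketch, not the paper's DWT-based algorithm): shift the prediction-error histogram above a peak bin and hide one bit per pixel whose error sits at the peak. A decoder that knows the payload length inverts the same left-to-right scan, since each original left neighbour has already been restored when it is needed; the overflow map for pixels at 255 is omitted here.

```python
import numpy as np

def pe_hist_shift_embed(img, bits, peak=0):
    """Reversible embedding by histogram shifting of prediction errors.
    Predictor: left neighbour. Errors > peak are shifted by +1 to make
    room; errors == peak carry one payload bit each. Returns the marked
    image and the number of bits embedded."""
    img = img.astype(np.int32)
    out = img.copy()
    k = 0
    for r in range(img.shape[0]):
        for c in range(1, img.shape[1]):
            e = img[r, c] - img[r, c - 1]          # prediction error
            if e > peak:
                out[r, c] = img[r, c] + 1          # shift, no payload
            elif e == peak and k < len(bits):
                out[r, c] = img[r, c] + bits[k]    # embed one bit
                k += 1
    return out, k
```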
Renewable and nonrenewable energy sources are widely used, and solar and wind energy produce electricity without increasing carbon dioxide emissions. Energy industries worldwide are trying hard to predict future energy consumption, which could eliminate over- or under-contracting of energy resources and unnecessary financing. Machine learning techniques for predicting energy are the trending solution to the challenges faced by energy companies. Machine learning algorithms require a considerable amount of training data for accurate prediction; another critical factor is balancing the data for enhanced prediction. Data augmentation is a technique for increasing the data available for training, and synthetic data are newly generated data that can be used to improve the accuracy of prediction models. In this paper, we propose a model that takes time series energy consumption data as input, pre-processes the data, and then uses multiple augmentation techniques and generative adversarial networks to generate synthetic data which, when combined with the original data, reduce the energy consumption prediction error. We propose TGAN-skip-Improved-WGAN-GP to generate synthetic energy consumption time series tabular data. We modify TGAN with skip connections, then improve WGAN-GP by defining a consistency term, and finally use the architecture of the improved WGAN-GP for training TGAN-skip. We used various evaluation metrics and visual representations to compare the performance of our proposed model. We also measured prediction accuracy, along with the mean and maximum error generated when predicting with different variations of augmented and synthetic data combined with the original data. The mode collapse problem could be handled by the TGAN-skip-Improved-WGAN-GP model, and it also converged faster than existing GAN models for synthetic data generation. The experimental results show that our proposed technique of combining synthetic data with original data can significantly reduce the prediction error rate and increase the prediction accuracy of energy consumption.
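For context, the heart of WGAN-GP training is a gradient penalty evaluated at random interpolates between real and generated batches (Gulrajani et al., 2017); a minimal PyTorch sketch for tabular data (the TGAN-skip architecture and the added consistency term are beyond this snippet):

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP penalty E[(||grad_x critic(x_hat)||_2 - 1)^2] at interpolates
    x_hat = eps * real + (1 - eps) * fake, for (batch, features) tensors."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads, = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```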
Machine learning (ML) algorithms have been widely used for financial time series prediction and for trading through bots. In this work, we propose a Predictive Error Compensated Wavelet Neural Network (PEC-WNN) ML model that improves the prediction of next-day closing prices. In the proposed model we use multiple neural networks, where the first one uses the closing stock prices from multiple-scale time-domain inputs. An additional network is used for error estimation to compensate for and reduce the prediction error of the main network, instead of using recurrence. The performance of the proposed model is evaluated using six different stock data samples from the New York Stock Exchange. The results demonstrate significant improvement in forecasting accuracy in all cases when the second network is used together with the first one by adding their outputs. The RMSE is improved by 33% when the proposed PEC-WNN model is used, compared to the Long Short-Term Memory (LSTM) model. Furthermore, through the analysis of training mechanisms, we found that the performance of the proposed model improves with updated training. The contribution of this study is the applicability of using different time frames simultaneously as inputs. Cascading the predictive error compensation not only reduces the error rate but also helps avoid overfitting problems.
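The compensation idea generalizes beyond wavelet networks; a minimal scikit-learn sketch of the two-stage scheme (a stand-in for the PEC-WNN architecture, with hypothetical layer sizes): the second network is trained on the first one's residuals, and the two outputs are summed at prediction time.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_error_compensated(X, y, seed=0):
    """Train a main regressor, then a compensator on its residuals; the
    final prediction adds the two outputs instead of using recurrence."""
    main = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                        random_state=seed).fit(X, y)
    comp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=seed).fit(X, y - main.predict(X))
    return lambda X_new: main.predict(X_new) + comp.predict(X_new)
```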
Human error, an important factor, may lead to serious consequences in various operational fields. The human factor plays a critical role in the risks and hazards of the maritime industry: a ship can achieve safe navigation only when all operations in the engine room are conducted vigilantly. This paper presents a systematic evaluation of 20 failures in auxiliary systems of marine diesel engines that may be caused by human error. The Cognitive Reliability and Error Analysis Method (CREAM) is used to determine the potential for human error in the identified failures, based on the answers of experts. Using this method, the probabilities of human error for the failures were evaluated and the critical ones were highlighted. The measures to be taken in response to these results will make significant contributions not only for seafarers but also for ship owners.
Hiding secret data in digital images is one of the major research fields in information security. Recently, reversible data hiding in encrypted images has attracted extensive attention due to the emergence of cloud services. This paper proposes a novel reversible data hiding method for encrypted images based on an optimal multi-threshold block labeling technique (OMTBL-RDHEI). In our scheme, the content owner encrypts the cover image with block permutation, pixel permutation, and a stream cipher, which preserve the in-block correlation of pixel values. After upload to the cloud service, the data hider applies prediction error rearrangement (PER), optimal threshold selection (OTS), and multi-threshold labeling (MTL) to obtain a compressed version of the encrypted image and embeds secret data into the vacated room. The receiver can extract the secret, restore the cover image, or do both, according to his or her granted authority. The proposed MTL labels blocks of the encrypted image with a list of threshold values that is optimized by OTS based on the features of the current image. Experimental results show that labeling image blocks with the optimized threshold list can efficiently enlarge the vacated room and thus improve the embedding capacity of an encrypted cover image. The security level of the proposed scheme is analyzed, and the embedding capacity is compared with state-of-the-art schemes; both show satisfactory performance.
Background: Premium intraocular lenses (IOLs), including toric, multifocal, and EDOF lenses, have become very sophisticated and now demand highly accurate biometric measurements. The Pentacam AXL and IOL Master 700 are often used for optical biometry and are available on the market today. They can also measure the parameters needed for IOL calculation using the latest-generation formulas, such as the Barrett Universal II. Therefore, this study aims to compare the accuracy of refractive outcomes between the Pentacam AXL and the IOL Master 700 after cataract surgery with the Barrett Universal II formula. Method: A total of 64 eyes from 64 patients who had a preoperative examination with the IOL Master 700 and Pentacam AXL were included in this study. Parameters such as K, ACD, LT, WTW, and AL were compared between the two instruments. Prediction error values were also calculated and compared, based on the difference between the spherical equivalent (SE) of subjective refraction four weeks after surgery and the refractive prediction target. Results: There was no statistically significant difference in the parameters measured by the two instruments except ACD and WTW. Furthermore, LT was difficult to obtain on the Pentacam AXL due to penetration problems, as well as in patients with significant lens opacities. The percentages of prediction errors within ±0.50 D for the Pentacam AXL and IOL Master 700 were 70.3% and 73.5%, respectively. However, the average prediction error was closer to emmetropia with the IOL Master 700 than with the Pentacam AXL. Conclusion: The Pentacam AXL has fairly good accuracy for refractive prediction compared to the IOL Master 700. However, it is still necessary to optimize its constants to obtain optimal results.
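The underlying metric is simply the achieved refraction minus the formula's predicted refraction; a small sketch with hypothetical values:

```python
import numpy as np

# Hypothetical eyes: achieved spherical equivalent (D) at 4 weeks vs. the
# refraction predicted preoperatively by the formula.
achieved = np.array([-0.25, 0.50, -0.75, 0.00])
predicted = np.array([-0.10, 0.20, -0.20, -0.10])

pe = achieved - predicted                    # prediction error per eye
share = 100 * np.mean(np.abs(pe) <= 0.50)    # share within +/-0.50 D
print(f"mean PE {pe.mean():+.2f} D; {share:.1f}% of eyes within +/-0.50 D")
```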
Because of global climate change, it is necessary to add forest biomass estimation to national forest resource monitoring. The biomass equations developed for forest biomass estimation should be compatible with volume equations. Based on tree volume and aboveground biomass data for Masson pine (Pinus massoniana Lamb.) in southern China, we constructed one-, two- and three-variable aboveground biomass equations and biomass conversion functions compatible with tree volume equations by using error-in-variable simultaneous equations. The prediction precision of aboveground biomass estimates from the one-variable equation exceeded 95%. The regressions of the aboveground biomass equations improved slightly when tree height and crown width were used together with diameter at breast height, although their contributions to the regressions were statistically insignificant. For the one-variable biomass conversion function, the conversion factor decreased with increasing diameter; for the two-variable conversion function, the conversion factor increased with increasing diameter but decreased with increasing tree height.
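Schematically (our notation, not the paper's fitted models), compatibility ties biomass $M$ to the volume $V$ of the existing volume equation through a conversion function, for example

$$
M = c(D)\,V \quad\text{with}\quad c(D)=a\,D^{b},
\qquad\text{or}\qquad
M = c(D,H)\,V \quad\text{with}\quad c(D,H)=a\,D^{b}H^{k},
$$

fitted jointly with the biomass equation in an error-in-variable simultaneous system so that biomass and volume estimates remain consistent by construction; the reported trends correspond to $b<0$ in the one-variable case and $b>0$, $k<0$ in the two-variable case.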
Statistical properties of stock market time series and the implications of their Hurst exponents are discussed. Hurst exponents of the DJIA (Dow Jones Industrial Average) components are tested using rescaled range analysis. In addition to the original stock return series, the linear prediction errors of the daily returns are also tested. Numerical results show that Hurst exponent analysis can provide some information about the statistical properties of financial time series.
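A minimal sketch of rescaled range analysis (window sizes are illustrative): average R/S over non-overlapping windows of each size n and fit the slope of log(R/S) against log(n); H ≈ 0.5 indicates uncorrelated increments, H > 0.5 persistence.

```python
import numpy as np

def hurst_rs(x, sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of series x by rescaled range analysis."""
    x = np.asarray(x, dtype=float)
    rs = []
    for n in sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())     # cumulative deviations
            s = w.std(ddof=1)               # window standard deviation
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```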
Currently, simultaneously ensuring the machining accuracy and efficiency of thin-walled structures, especially high-performance parts, remains a challenge. Existing compensation methods mainly focus on 3-axis machining and sometimes take only one given point as the compensation point at each cutter location. This paper presents a redesigned-surface-based machining strategy for peripheral milling of thin-walled parts. Based on an improved cutting force/heat model and a finite element method (FEM) simulation environment, a deflection error prediction model is established that takes sequences of cutter contact lines as compensation targets, and an iterative algorithm is presented to determine feasible cutter axis positions. The final redesigned surface is then generated by skinning all discrete cutter axis vectors after compensation with the proposed algorithm. The proposed machining strategy incorporates the thermo-mechanical coupling effect in deflection prediction and is validated with a flank milling experiment on a five-axis machine tool, with the deformation error measured on a coordinate measuring machine. The error prediction values and experimental results show good consistency, and the proposed approach significantly reduces the dimensional error under the same machining conditions compared with conventional methods. The proposed machining strategy has potential for high-efficiency precision machining of thin-walled parts.
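The compensation loop at the heart of such strategies can be sketched as a fixed-point "mirror" iteration on the commanded geometry; `predict_deflection` below is a hypothetical stand-in for the FEM-based thermo-mechanical error model, and the paper's method operates on cutter contact lines and axis vectors rather than a generic path array.

```python
import numpy as np

def mirror_compensate(nominal, predict_deflection, n_iter=5):
    """Iteratively offset the commanded path so that the machined surface
    (commanded + predicted deflection) converges to the nominal one:
    solve target + err(target) = nominal by fixed-point iteration."""
    target = nominal.copy()
    for _ in range(n_iter):
        target = nominal - predict_deflection(target)
    return target
```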