In the existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to LSP results. This study aims to reveal how different proportions of random error in conditioning factors influence LSP uncertainties, and further to explore a method that can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors using a low-pass filter. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed and the results show that: (1) The low-pass filter can effectively reduce the random errors in conditioning factors and thus decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence degrees of the two uncertainty issues, machine learning models and different proportions of random errors, on LSP modeling are large and basically the same. (5) The Shapley values effectively explain the internal mechanism by which machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
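The abstract does not give the filter's design; the sketch below (Python, with a simple moving-average kernel standing in for the paper's low-pass filter, and a synthetic factor profile in place of the real conditioning factors) illustrates how low-pass filtering suppresses proportional random error:

```python
import numpy as np

def lowpass_moving_average(x, window=15):
    """Simple low-pass filter: centered moving average with edge padding."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(xp, kernel, mode="valid")

rng = np.random.default_rng(0)
# Synthetic smooth conditioning factor (e.g. an elevation profile)
t = np.linspace(0.0, 1.0, 500)
factor = 800 + 200 * np.sin(2 * np.pi * t)
# Add 10% proportional random error, as in the error-based models
noisy = factor * (1 + 0.10 * rng.standard_normal(factor.size))
filtered = lowpass_moving_average(noisy)

rmse_noisy = np.sqrt(np.mean((noisy - factor) ** 2))
rmse_filtered = np.sqrt(np.mean((filtered - factor) ** 2))
print(rmse_noisy, rmse_filtered)   # filtering reduces the random error
```

Averaging over a window attenuates the high-frequency random component while preserving the slowly varying factor signal, which is the mechanism the study relies on.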
The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of the landslide spatial position. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of landslide spatial position errors on LSP uncertainties, and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. This paper collected 16 environmental factors and 337 landslides with accurate spatial positions, taking Shangyou County of China as an example. The 30–110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m. The LSP uncertainties are analyzed through the LSP accuracy and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. Results show that: (1) The LSP accuracies of error-based RF/MLP models decrease with increasing landslide position error, and are lower than those of the original data-based models; (2) The 70 m error-based models can still reflect the overall distribution characteristics of landslide susceptibility indices, thus original landslides with certain position errors are acceptable for LSP; (3) The semi-supervised machine learning model can efficiently reduce the landslide position errors and thus improve the LSP accuracies.
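As a minimal illustration of how the error-based samples might be generated, the following Python sketch (synthetic coordinates; the displacement scheme is an assumption, since the abstract only states the offset distances) shifts each landslide point by a fixed distance in a uniformly random direction:

```python
import numpy as np

def offset_points(xy, distance, rng):
    """Shift each point by `distance` metres in a uniformly random direction."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(xy))
    shift = np.column_stack((np.cos(theta), np.sin(theta))) * distance
    return xy + shift

rng = np.random.default_rng(42)
landslides = rng.uniform(0, 10_000, size=(337, 2))   # projected coords in metres
for d in (30, 50, 70, 90, 110):
    shifted = offset_points(landslides, d, rng)
    moved = np.linalg.norm(shifted - landslides, axis=1)
    print(d, np.allclose(moved, d))   # every point moved exactly d metres
```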
With the continuous advancement of China's "peak carbon dioxide emissions and carbon neutrality" process, the proportion of wind power is increasing. Aiming at the problem that forecasting models become outdated as wind power data are continuously updated, a short-term wind power forecasting algorithm based on Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine (IL-Bagging-DHKELM) with error affinity propagation cluster analysis is proposed. The algorithm effectively combines the deep hybrid kernel extreme learning machine (DHKELM) with incremental learning (IL). Firstly, an initial wind power prediction model is trained using the Bagging-DHKELM model. Secondly, an affinity propagation (AP) clustering algorithm based on Euclidean morphological distance is used to cluster and analyze the prediction errors of the initially trained model. Finally, the correlation between wind power prediction errors and Numerical Weather Prediction (NWP) data is introduced as incremental updates to the initial wind power prediction model. During the incremental learning process, multiple error performance indicators are used to measure the overall model performance, thereby enabling incremental updates of the wind power models. Practical examples show that the proposed method reduces the root mean square error of the initial model by 1.9 percentage points, indicating that it can better adapt to the continuing increase in wind power penetration. The method effectively improves the accuracy and precision of wind power generation prediction.
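The DHKELM architecture is not detailed in the abstract; as a rough sketch of the kernel-ELM building block it stacks, the Python snippet below trains a single-layer kernel ELM with a hypothetical hybrid (RBF + polynomial) kernel on toy data — the kernel mix, hyperparameters and data are all illustrative assumptions:

```python
import numpy as np

def hybrid_kernel(A, B, gamma=0.5, degree=2, w=0.5):
    """Weighted mix of an RBF kernel and a polynomial kernel (a 'hybrid' kernel)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * d2)
    poly = (A @ B.T + 1.0) ** degree
    return w * rbf + (1 - w) * poly

def kelm_fit(X, y, C=100.0):
    """Kernel ELM training: solve (K + I/C) beta = y in closed form."""
    K = hybrid_kernel(X, X)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kelm_predict(Xtr, beta, Xte):
    return hybrid_kernel(Xte, Xtr) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))     # toy NWP-style features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)
beta = kelm_fit(X[:150], y[:150])
pred = kelm_predict(X[:150], beta, X[150:])
rmse = np.sqrt(np.mean((pred - y[150:]) ** 2))
print(rmse)
```

Because training is a single linear solve, retraining on newly arrived data is cheap, which is what makes ELM-family models attractive for incremental updating.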
The accuracy of the channel state information (CSI) acquired for beamforming design is essential to the achievable performance of multiple-input multiple-output (MIMO) systems. However, in a high-speed moving scene with time-division duplex (TDD) mode, the CSI acquired through channel reciprocity is inevitably outdated, leading to outdated beamforming designs and performance degradation. In this paper, a robust beamforming design under channel prediction errors is proposed for time-varying MIMO systems to combat this degradation, based on channel prediction techniques. Specifically, the statistical characteristics of historical channel prediction errors are exploited and modeled. Moreover, to deal with the random error terms, deterministic equivalents are adopted to further explore the potential beamforming gain through the statistical information, ultimately yielding a robust design that maximizes the weighted sum-rate performance. Simulation results show that, compared with traditional beamforming designs, the proposed design maintains its advantage throughout the downlink transmission time even when channels vary fast.
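The paper's deterministic-equivalent, weighted sum-rate design is more involved than can be shown here; the Python sketch below illustrates only the core idea of exploiting prediction-error statistics, using a single-user beamformer that maximizes the expected gain E|h^H w|^2 under an assumed (non-isotropic, known) error covariance:

```python
import numpy as np

rng = np.random.default_rng(7)
Nt = 8                                    # transmit antennas
h_pred = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
# Assumed per-antenna variance of the channel prediction error
err_var = np.linspace(0.05, 1.0, Nt)
C_e = np.diag(err_var)

# Statistically robust beamformer: E|h^H w|^2 = w^H (h_pred h_pred^H + C_e) w,
# maximized by the dominant eigenvector of R.
R = np.outer(h_pred, h_pred.conj()) + C_e
_, eigvec = np.linalg.eigh(R)
w_robust = eigvec[:, -1]                          # unit-norm
w_naive = h_pred / np.linalg.norm(h_pred)         # matched filter on outdated CSI

g_rob, g_nai = [], []
for _ in range(5000):                             # Monte-Carlo comparison
    e = np.sqrt(err_var / 2) * (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt))
    h_true = h_pred + e
    g_rob.append(abs(h_true.conj() @ w_robust) ** 2)
    g_nai.append(abs(h_true.conj() @ w_naive) ** 2)
print(np.mean(g_rob) / np.mean(g_nai))            # >= 1 up to Monte-Carlo noise
```

The robust beamformer accounts for where the prediction error is strongest instead of trusting the outdated estimate alone, which is the same principle the statistical design above exploits.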
The research focuses on improving predictive accuracy in the financial sector through the exploration of machine learning algorithms for stock price prediction. The research follows an organized process combining Agile Scrum and the Obtain, Scrub, Explore, Model, and iNterpret (OSEMN) methodology. Six machine learning models, namely Linear Forecast, Naive Forecast, Simple Moving Average with weekly window (SMA 5), Simple Moving Average with monthly window (SMA 20), Autoregressive Integrated Moving Average (ARIMA), and Long Short-Term Memory (LSTM), are compared and evaluated through Mean Absolute Error (MAE), with the LSTM model performing the best, showcasing its potential for practical financial applications. A Django web application "Predict It" is developed to implement the LSTM model. Ethical concerns related to predictive modeling in finance are addressed. Data quality, algorithm choice, feature engineering, and preprocessing techniques are emphasized for better model performance. The research acknowledges its limitations and suggests future research directions, aiming to equip investors and financial professionals with reliable predictive models for dynamic markets.
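As a small illustration of the MAE-based comparison, the Python sketch below evaluates two of the six models (the Naive Forecast and SMA 5) on a synthetic random-walk price series — the data and window alignment are illustrative assumptions:

```python
import numpy as np

def mae(actual, forecast):
    """Mean Absolute Error between realized values and forecasts."""
    return np.mean(np.abs(actual - forecast))

rng = np.random.default_rng(3)
# Synthetic daily closing prices: a random walk with mild drift
price = 100 + np.cumsum(0.1 + rng.standard_normal(300))

naive = price[:-1]                                  # naive: tomorrow = today
sma5 = np.convolve(price, np.ones(5) / 5, "valid")[:-1]   # SMA 5 forecast
actual_naive = price[1:]
actual_sma = price[5:]                              # aligned with each SMA window
print("Naive MAE:", round(mae(actual_naive, naive), 3))
print("SMA-5 MAE:", round(mae(actual_sma, sma5), 3))
```

On a pure random walk the naive forecast is hard to beat, which is why MAE comparisons against it are a useful sanity check before trusting a more complex model such as an LSTM.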
Virtual machine (VM) consolidation is an effective way to improve resource utilization and reduce energy consumption in cloud data centers. Most existing studies have treated VM consolidation as a bin-packing problem, but current schemes commonly ignore the long-term relationship between VMs and hosts. In addition, resource optimization in VM consolidation lacks long-term consideration, which results in unnecessary VM migration and increased energy consumption. To address these limitations, a VM consolidation method based on multi-step prediction and an affinity-aware technique for energy-efficient cloud data centers (MPaAF-VMC) is proposed. The proposed method uses an improved linear regression algorithm to predict the next-moment resource utilization of hosts and VMs, and obtains the staged resource demand over a future period through multi-step prediction, realized by iterative prediction. Then, based on the multi-step prediction, an affinity model between VM and host is designed using the first-order correlation coefficient and Euclidean distance. During consolidation, the affinity value is used to select the VM to migrate and the host for placement. The proposed method is compared with existing consolidation algorithms on PlanetLab and Google cluster real workload data using the CloudSim simulation platform. Experimental results show that the proposed method achieves significant improvements in reducing energy consumption, VM migration costs, and service level agreement (SLA) violations.
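The exact prediction and affinity formulas are not given in the abstract; the Python sketch below shows the general shape of the two ingredients — iterative multi-step linear-regression forecasting, and an affinity score built from a correlation coefficient and a Euclidean distance (the particular combination used here is an assumption, rewarding hosts whose predicted load is complementary to the VM's):

```python
import numpy as np

def multi_step_forecast(history, steps, window=6):
    """Iterative multi-step prediction: fit a linear trend on the last
    `window` points, predict one step ahead, append, repeat."""
    h = list(history)
    for _ in range(steps):
        y = np.array(h[-window:], dtype=float)
        slope, intercept = np.polyfit(np.arange(window), y, 1)
        h.append(slope * window + intercept)
    return np.array(h[len(history):])

def affinity(vm_util, host_util):
    """Illustrative affinity: high when the VM's and host's predicted demand
    profiles are anti-correlated (complementary) and far apart (headroom)."""
    r = np.corrcoef(vm_util, host_util)[0, 1]
    d = np.linalg.norm(vm_util - host_util)
    return (1 - r) * d

vm = multi_step_forecast([20, 22, 24, 27, 29, 31, 33, 36], steps=4)  # rising demand
host_a = np.array([80.0, 78.0, 75.0, 71.0])   # utilisation falling: complementary
host_b = np.array([60.0, 63.0, 66.0, 70.0])   # utilisation rising: contended
print(affinity(vm, host_a) > affinity(vm, host_b))   # prefer the complementary host
```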
Unlike the height-diameter equations for standing trees commonly used in forest resources modelling, tree height models for cut-to-length (CTL) stems tend to produce prediction errors whose distributions are not conditionally normal but rather leptokurtic and heavy-tailed. This feature was merely noticed in previous studies but never thoroughly investigated. This study characterized the prediction error distribution of a newly developed tree height model of this kind for Pinus radiata (D. Don) through the three-parameter Burr Type XII (BXII) distribution. The model's prediction errors (ε) exhibited heteroskedasticity conditional mainly on the small-end relative diameter of the top log and, to a minor extent, on DBH. Structured serial correlations were also present in the data. A total of 14 candidate weighting functions were compared to select the best two for weighting ε in order to reduce its conditional heteroskedasticity. The weighted prediction errors (εw) were shifted by a constant into the positive range supported by the BXII distribution. The distribution of the weighted and shifted prediction errors (εw+) was then characterized by the BXII distribution using maximum likelihood estimation through 1000 repetitions of random sampling, fitting and goodness-of-fit testing, each time randomly taking only one observation from each tree to circumvent the potential adverse impact of serial correlation in the data on parameter estimation and inference. The nonparametric two-sample Kolmogorov-Smirnov (KS) goodness-of-fit test and the closely related Kuiper's (KU) test showed that the fitted BXII distributions provided a good fit to the highly leptokurtic and heavy-tailed distribution of ε. Random samples generated from the fitted BXII distributions of εw+ derived from the best two weighting functions, when back-shifted and unweighted, exhibited distributions that were, in about 97% and 95% of the 1000 cases respectively, not statistically different from the distribution of ε. Our results for cut-to-length P. radiata stems represent the first case for any tree species in which a non-normal error distribution in tree height prediction has been described by an underlying probability distribution. The fitted BXII prediction error distribution will help unlock the full potential of the new tree height model in forest resources modelling of P. radiata plantations, particularly when uncertainty assessments, statistical inferences and error propagation are needed in research and practical applications through harvester data analytics.
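As a self-contained illustration of the BXII machinery used above, the Python sketch below samples from a Burr Type XII distribution by inverse transform and checks the fit with a one-sample KS statistic (the parameter values are arbitrary; the study's actual procedure fits by MLE to the weighted, shifted errors and uses two-sample tests):

```python
import numpy as np

def burr12_cdf(x, c, k, scale=1.0):
    """CDF of the Burr Type XII distribution: F(x) = 1 - (1 + (x/s)^c)^(-k)."""
    return 1.0 - (1.0 + (x / scale) ** c) ** (-k)

def burr12_rvs(c, k, scale, size, rng):
    """Inverse-transform sampling from Burr XII (supported on x > 0)."""
    u = rng.uniform(size=size)
    return scale * ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def ks_statistic(sample, cdf):
    """Two-sided Kolmogorov-Smirnov statistic against a reference CDF."""
    x = np.sort(sample)
    n = len(x)
    F = cdf(x)
    return max(np.max(np.arange(1, n + 1) / n - F), np.max(F - np.arange(n) / n))

rng = np.random.default_rng(5)
c, k, scale = 2.0, 1.5, 1.0           # heavy right tail, leptokurtic shape
sample = burr12_rvs(c, k, scale, size=1000, rng=rng)
D = ks_statistic(sample, lambda x: burr12_cdf(x, c, k, scale))
print(D)   # for n = 1000 the 5% KS critical value is about 1.36/sqrt(n) ≈ 0.043
```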
In regression, despite both being aimed at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross Validation (GCV) selection criteria are usually derived from two quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it, valid for any linear model (LM) fitter, under random or non-random designs. Specializing these MSPE expressions for each case, we derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full column rank design matrix; and Ordinary and Generalized Ridge regression, the latter embedding smoothing splines fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly obtain a class of MSPE estimates coinciding with the classical GCV formula for those same LM fitters.
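For the full-column-rank OLS case, the FPE and GCV estimates discussed above have simple closed forms, FPE = (RSS/n)·(n+p)/(n−p) and GCV = (RSS/n)/(1−p/n)², which are numerically very close for p ≪ n; a minimal Python sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(11)
n, p = 200, 4
X = rng.standard_normal((n, p))
beta = np.array([1.5, -2.0, 0.0, 0.5])
y = X @ beta + rng.standard_normal(n)

# OLS fit and residual sum of squares
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = np.sum((y - X @ coef) ** 2)

# For full-column-rank OLS, the trace of the hat matrix equals p
fpe = (rss / n) * (n + p) / (n - p)     # Akaike's Final Prediction Error
gcv = (rss / n) / (1.0 - p / n) ** 2    # Generalized Cross Validation
print(round(fpe, 4), round(gcv, 4))
```

Both criteria inflate the in-sample error RSS/n to compensate for the optimism of fitting and predicting on the same data, which is why they estimate the same MSPE target.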
In south China, warm-sector rainstorms differ significantly from traditional frontal rainstorms because of their complex mechanisms, which brings great challenges to their forecasting. In this study, based on ensemble forecasting, the high-resolution mesoscale numerical forecast model WRF was used to investigate the effect of initial errors on a warm-sector rainstorm and a frontal rainstorm occurring under the same circulation in south China. We analyzed the sensitivity of the forecast errors to the initial errors and their evolution characteristics for the warm-sector and the frontal rainstorm. Additionally, the difference in predictability was compared by adjusting the initial values of the GOOD member and the BAD member. Compared with the frontal rainstorm, the warm-sector rainstorm was more sensitive to initial errors, which grew faster in the warm sector. Furthermore, the magnitude of the error in the warm-sector rainstorm was obviously larger than that of the frontal rainstorm, while the spatial scale of the error was smaller. Both types of rainstorm were limited by practical predictability and inherent predictability, while the nonlinear error growth was more distinct in the warm-sector rainstorm, resulting in lower inherent predictability. The comparison between the two cases revealed that more accurate initial conditions brought the forecast field closer to the real situation, but the error growth rate was evidently restrained only in the frontal rainstorm.
For product degradation processes with random effects (RE), measurement errors (ME) and nonlinearity in step-stress accelerated degradation tests (SSADT), a nonlinear Wiener-based degradation model with RE and ME is built. An analytical approximation to the probability density function (PDF) of the product's lifetime is derived in closed form. The process and data of SSADT are analyzed to obtain the relational model of the observed data under each accelerated stress. The likelihood function for the population-based observed data is constructed, and the population-based model parameters and the prior values of the random coefficient are estimated. According to newly observed data of the target product in SSADT, an analytical approximation to the PDF of its residual lifetime (RL) is derived in accordance with its individual degradation characteristics. A parameter updating method based on Bayesian inference is applied to obtain the posterior value of the random coefficient of the RL model. A numerical example by simulation is analyzed to verify the accuracy and advantage of the proposed model.
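A minimal Python simulation of the model's three ingredients — a nonlinear Wiener process with a unit-specific random drift (RE) plus additive measurement error (ME); the power-law time transformation and all parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_unit(t, mu_lam=1.0, sd_lam=0.15, sigma_b=0.2, sigma_me=0.1, b=1.3):
    """One unit's observed degradation path:
       X(t) = lam * t^b + sigma_b * B(t^b)   (nonlinear Wiener, random drift lam)
       Y(t) = X(t) + measurement error."""
    lam = rng.normal(mu_lam, sd_lam)          # random effect: unit-to-unit drift
    tau = t ** b                              # nonlinear time transformation
    dB = rng.standard_normal(len(t)) * np.sqrt(np.diff(tau, prepend=0.0))
    X = lam * tau + sigma_b * np.cumsum(dB)   # Brownian motion in transformed time
    return X + sigma_me * rng.standard_normal(len(t))

t = np.linspace(0.1, 10.0, 100)
paths = np.array([simulate_unit(t) for _ in range(500)])
# Sample mean at the final time should be close to E[Y(t)] = mu_lam * t^b
print(paths[:, -1].mean(), t[-1] ** 1.3)
```

Simulating many such paths is also how one would sanity-check an analytical lifetime PDF approximation against the empirical first-passage times.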
Initial errors constitute one of the main factors limiting the ability to predict the El Niño-Southern Oscillation (ENSO) in ocean-atmosphere coupled models. The conditional nonlinear optimal perturbation (CNOP) approach was employed to study the largest initial error growth in the El Niño predictions of an intermediate coupled model (ICM). The optimal initial errors (represented by CNOPs) in sea surface temperature anomalies (SSTAs) and sea level anomalies (SLAs) were obtained with seasonal variation. The CNOP-induced perturbations, which tend to evolve into the La Niña mode, were found to have the same dynamics as ENSO itself. This indicates that, if CNOP-type errors are present in the initial conditions used to make an El Niño prediction, the El Niño event tends to be under-predicted. In particular, compared with other seasonal CNOPs, the CNOPs in winter induce the largest error growth, giving rise to an ENSO amplitude that is hardly ever predicted accurately. Additionally, the CNOP-induced perturbations exhibit a strong spring predictability barrier (SPB) phenomenon for ENSO prediction. These results offer a way to enhance ICM prediction skill and, particularly, to weaken the SPB phenomenon by filtering CNOP-type errors from the initial state. The characteristic distributions of the CNOPs derived from the ICM also provide useful information for targeted observations through data assimilation. Given that the derived CNOPs are season-dependent, seasonally varying targeted observations should be implemented to accurately predict ENSO events.
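CNOPs are normally computed with adjoint-based optimization in the full model; as a conceptual toy only, the Python sketch below approximates a CNOP by brute-force search over norm-bounded initial perturbations of a chaotic logistic map (the model, lead time and constraint radius are illustrative assumptions):

```python
import numpy as np

def propagate(x0, steps=8, r=3.7):
    """Toy nonlinear 'forecast model': iterate the logistic map."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

def cnop_random_search(x0, delta, n_trials=20_000, seed=0):
    """Approximate the CNOP: among perturbations with |p| <= delta, find the
    one maximising the nonlinear forecast error at the final time."""
    rng = np.random.default_rng(seed)
    base = propagate(x0)
    best_p, best_growth = 0.0, -1.0
    for p in rng.uniform(-delta, delta, n_trials):
        growth = abs(propagate(x0 + p) - base)
        if growth > best_growth:
            best_p, best_growth = p, growth
    return best_p, best_growth

x0, delta = 0.32, 0.01
p_star, growth = cnop_random_search(x0, delta)
print(p_star, growth)   # the worst-case initial error and its nonlinear growth
```

The defining property carries over from the toy to the real problem: the CNOP is the constrained initial perturbation whose fully nonlinear evolution maximizes the forecast error, which is why filtering such structures from the initial state is expected to improve prediction skill.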
In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information contained in large amounts of historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the differences in similarity between the various historical analogues and the current initial state are used as the weights for estimating model errors. The results of daily, ten-day and monthly prediction experiments on a complicated T63 atmospheric model show that the ACE scheme, which corrects model errors based on the estimated errors of four historical analogue predictions, performs better both than the scheme that introduces only the error correction of each single analogue prediction and than the T63 model itself.
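A minimal Python sketch of the ACE weighting idea — the known errors of historical analogue predictions averaged with weights given by each analogue's similarity to the current initial state; the Gaussian similarity kernel, the 2-D state space and the toy error function are illustrative assumptions:

```python
import numpy as np

def analogue_correction(current_state, hist_states, hist_errors, bandwidth=0.3):
    """ACE sketch: estimate the current model error as a similarity-weighted
    average of the known errors of historical analogue predictions."""
    dists = np.linalg.norm(hist_states - current_state, axis=1)
    weights = np.exp(-(dists / bandwidth) ** 2)     # closer analogues weigh more
    return weights @ hist_errors / weights.sum()

rng = np.random.default_rng(2)
# Toy setup: the model's error varies smoothly with the (2-D) climate state
def true_error(s):
    return 0.3 * s[..., 0] - 0.1 * s[..., 1]

hist_states = rng.standard_normal((200, 2))
hist_errors = true_error(hist_states) + 0.02 * rng.standard_normal(200)

current = np.array([0.5, -0.4])
est = analogue_correction(current, hist_states, hist_errors)
print(est)   # true flow-dependent error at `current` is 0.3*0.5 - 0.1*(-0.4) = 0.19
```

Subtracting such an estimate from the dynamical forecast is the correction step; the scheme works exactly when the model error is indeed flow-dependent, i.e. similar under similar states.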
This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). In Part I, it was shown that the model error of GRAPES may be the main cause of poor forecasts of landfalling TCs; a further examination of the model error is therefore the focus of Part II. Considering model error as a type of forcing, it can be represented by the combination of good forecasts and bad forecasts. Results show that there are systematic model errors. The model error of the geopotential height component has periodic features, with a period of 24 h and a global pattern of wavenumber 2 from west to east located between 60°S and 60°N. This periodic model error presents features similar to the atmospheric semidiurnal tide, which reflects signals from tropical diabatic heating, indicating that parameter errors related to tropical diabatic heating may be the source of the periodic model error. These model errors are subtracted from the forecast equation and a series of new forecasts are made. The average forecasting capability of the rectified model is improved compared with simply improving the initial conditions of the original GRAPES model, confirming the strong impact of the periodic model error on landfalling TC track forecasts. Moreover, if the model error used to rectify the model is obtained from an examination of additional TCs, the forecasting capability of the corresponding rectified model improves further.
Based on the high-resolution Regional Ocean Modeling System (ROMS) and the conditional nonlinear optimal perturbation (CNOP) method, this study explored the effects of optimal initial errors on the prediction of the Kuroshio large meander (LM) path, and revealed the growth mechanism of optimal initial errors. For each LM event, two types of initial error (denoted CNOP1 and CNOP2) were obtained. Their large amplitudes were located mainly in the upper 2500 m of the upstream region of the LM, i.e., southeast of Kyushu. Furthermore, we analyzed the patterns and nonlinear evolution of the two types of CNOP. CNOP1 tends to strengthen the LM path through southwestward extension, whereas CNOP2, which has almost the opposite pattern, tends to weaken the LM path through northeastward contraction. The growth mechanism of the optimal initial errors was clarified through eddy-energetics analysis. The results indicate that energy is transferred from the background field to the error field because of barotropic and baroclinic instabilities. It is thus inferred that both barotropic and baroclinic processes play important roles in the growth of CNOP-type optimal initial errors.
Extended-range (10-30 d) heavy rain forecasting is difficult but performs an important function in disaster prevention and mitigation. In this paper, a nonlinear cross prediction error (NCPE) algorithm combining nonlinear dynamics and statistical methods is proposed. The method is based on phase-space reconstruction of chaotic single-variable time series of precipitable water and is tested on 100 global cases of heavy rain. First, the nonlinear relative dynamic error for local attractor pairs is calculated at different stages of the heavy rain process, after which the local change characteristics of the attractors are analyzed. Second, the eigen-peak is defined as a prediction indicator based on an error threshold of about 1.5, and is used to analyze the forecasting validity period. The results reveal that the eigen-peak prediction indicators for heavy rain extreme weather are all reflected consistently, without failure, by the NCPE model; the prediction validity periods of 1-2 d, 3-9 d and 10-30 d cover 4, 22 and 74 cases, respectively, without false alarms or omissions. The NCPE model allows accurate forecasting of heavy rain over an extended range of 10-30 d and can potentially be used to explore the mechanisms involved in the development of heavy rain according to a segmentation scale. This novel method provides new insights into extended-range forecasting and atmospheric predictability, and also allows the creation of multi-variable chaotic extreme weather prediction models based on high spatiotemporal resolution data.
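The first step of such methods, phase-space reconstruction of a scalar series, can be sketched in a few lines of Python (time-delay embedding; the synthetic precipitable-water-like series and the embedding parameters are illustrative assumptions):

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Phase-space reconstruction of a scalar series by time-delay embedding:
    row i of the result is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Toy precipitable-water-like series (the paper uses observed PW data)
t = np.arange(500)
pw = 40 + 8 * np.sin(0.12 * t) + 3 * np.sin(0.05 * t)

X = delay_embed(pw, dim=3, tau=10)
print(X.shape)   # (480, 3): 480 reconstructed attractor points in 3-D
# Distances between reconstructed points are what attractor-pair errors compare
d = np.linalg.norm(X[0] - X[1:], axis=1)
print(np.argmin(d) + 1)   # index of the nearest attractor neighbour of X[0]
```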
Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in that accurate model, so NWP can be treated as an inverse problem of uncovering this unknown error term. Inverse problem models can absorb long periods of observed data to generate model error correction procedures, thereby resolving the deficiency of NWP schemes that employ only the initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and spatially varying model errors in both the historical and forecast periods, using recent observations and analogous phenomena of the atmosphere. A numerical experiment on Burgers' equation illustrates the substantial forecast improvement obtained with the inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high-accuracy applications of NWP.
This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). The GRAPES forecasts were made for 16 landfalling TCs in the western North Pacific basin during the 2008 and 2009 seasons, with a forecast length of 72 hours, using both the default initial conditions ("initials", hereafter), which are from the NCEP-FNL dataset, and ECMWF initials; the forecasts are compared with ECMWF forecasts. The results show that for most TCs, the GRAPES forecasts are improved when using the ECMWF initials rather than the default initials. Compared with the ECMWF initials, the default initials produce lower-intensity TCs and a lower-intensity subtropical high, but a higher-intensity South Asia high and monsoon trough, as well as a higher temperature but lower specific humidity at the TC center. Replacing the geopotential height and wind fields with the ECMWF initials in and around the TC center at the initial time was found to be the most efficient way to improve the forecasts. In addition, the TCs showing the greatest improvement in forecast accuracy usually had the largest initial uncertainties in TC intensity and were usually in the intensifying phase. The results demonstrate the importance of the initial intensity for TC track forecasts made using GRAPES, and indicate that the model describes the intensifying phase better than the decaying phase of TCs. Finally, the limit of the improvement indicates that the model error associated with GRAPES may be the main cause of poor forecasts of landfalling TCs; thus, further examination of the model errors is required.
In this study, a method of analogue-based correction of errors (ACE) was introduced to improve El Niño-Southern Oscillation (ENSO) predictions produced by climate models. The ACE method is based on the hypothesis that flow-dependent model prediction errors are to some degree similar under analogous historical climate states, so the historical errors can be used to effectively reduce such flow-dependent errors. With this method, the unknown errors in current ENSO predictions can be empirically estimated from the known prediction errors diagnosed by the same model for historical analogue states. The authors first propose the basic idea of applying the ACE method to ENSO prediction and then establish an analogue-dynamical ENSO prediction system based on an operational climate prediction model. Experimental results clearly show the possibility of correcting the flow-dependent errors in ENSO prediction, and thus the potential of applying the ACE method to operational ENSO prediction based on climate models.
The initial value error and the imperfect numerical model are usually considered the main error sources of numerical weather prediction (NWP). Using past multi-time observations and model output, this study proposes a method to estimate the error of an imperfect numerical model. The model error is expressed as a Lagrange interpolation polynomial whose coefficients are determined by past model performance, and it is then estimated by solving an inverse problem. However, for practical application in a full NWP model, the following issues must be settled: (1) the length of past data sufficient for estimating the model errors, (2) a proper method of estimating the term "model integration with the exact solution" when solving the inverse problem, and (3) the extent to which the scheme is sensitive to observational errors. In this study, these issues are investigated using a simple linear model, and an advection-diffusion model is applied to discuss the sensitivity of the method to an artificial error source. The results indicate that the forecast errors can be largely reduced by the proposed method if a proper length of past data is chosen. Regarding the three issues, it is found that (1) a few data points, limited by the order of the corrector, suffice; (2) the trapezoidal approximation can be employed to estimate the term in this study, although a more accurate method should be explored for an operational NWP model; and (3) the correction is sensitive to observational error.
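A minimal Python sketch of the core device — expressing the model error as a Lagrange interpolation polynomial through past diagnosed errors and evaluating it at the next forecast time (the quadratic toy error evolution and the three-point stencil are illustrative assumptions):

```python
import numpy as np

def lagrange_extrapolate(t_past, e_past, t_new):
    """Evaluate the Lagrange interpolation polynomial through the past model
    errors (t_past, e_past) at a new time t_new."""
    total = 0.0
    for i, (ti, ei) in enumerate(zip(t_past, e_past)):
        L = 1.0
        for j, tj in enumerate(t_past):
            if j != i:
                L *= (t_new - tj) / (ti - tj)   # Lagrange basis polynomial
        total += ei * L
    return total

# Suppose the diagnosed model errors at the last three analysis times follow a
# smooth (here exactly quadratic) evolution; extrapolate to the next time.
t_past = [0.0, 1.0, 2.0]
e_past = [0.02 * t * t + 0.01 * t for t in t_past]   # 0.0, 0.03, 0.10
e_next = lagrange_extrapolate(t_past, e_past, 3.0)
print(round(e_next, 4))   # a 3-point Lagrange polynomial recovers the quadratic: 0.21
```

This also makes issue (1) above concrete: the number of usable past data points is tied to the polynomial order, since a degree-k corrector needs exactly k+1 past error values.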
Video prediction is the problem of generating future frames by exploiting the spatiotemporal correlation from the past frame sequence.It is one of the crucial issues in computer vision and has many real-world applicat...Video prediction is the problem of generating future frames by exploiting the spatiotemporal correlation from the past frame sequence.It is one of the crucial issues in computer vision and has many real-world applications,mainly focused on predicting future scenarios to avoid undesirable outcomes.However,modeling future image content and object is challenging due to the dynamic evolution and complexity of the scene,such as occlusions,camera movements,delay and illumination.Direct frame synthesis or optical-flow estimation are common approaches used by researchers.However,researchers mainly focused on video prediction using one of the approaches.Both methods have limitations,such as direct frame synthesis,usually face blurry prediction due to complex pixel distributions in the scene,and optical-flow estimation,usually produce artifacts due to large object displacements or obstructions in the clip.In this paper,we constructed a deep neural network Frame Prediction Network(FPNet-OF)with multiplebranch inputs(optical flow and original frame)to predict the future video frame by adaptively fusing the future object-motion with the future frame generator.The key idea is to jointly optimize direct RGB frame synthesis and dense optical flow estimation to generate a superior video prediction network.Using various real-world datasets,we experimentally verify that our proposed framework can produce high-level video frame compared to other state-ofthe-art framework.展开更多
Funding: This work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: In the existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors influence LSP uncertainties, and further to explore a method that can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors using a low-pass filter method. Thirdly, Ruijin County of China, with 370 landslides and 16 conditioning factors, is used as the case study. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed and the results show that: (1) The low-pass filter can effectively reduce the random errors in conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence degrees of the two uncertainty sources, machine learning models and different proportions of random errors, on LSP modeling are large and basically the same. (5) Shapley values effectively explain the internal mechanism by which machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in conditioning factors results in higher LSP uncertainty, and the low-pass filter can effectively reduce these random errors.
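The noise-injection and filtering step described above can be sketched as follows. This is a minimal illustration with a synthetic 1-D profile; the paper's factors are 2-D rasters and its exact filter design is not given here, so a simple moving-average low-pass filter is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conditioning factor sampled along a 1-D profile (e.g. slope angle).
factor = np.sin(np.linspace(0, 4 * np.pi, 200)) * 10 + 25

def add_proportional_error(x, proportion, rng):
    """Perturb each value by zero-mean noise scaled to `proportion` of its magnitude."""
    return x + rng.normal(0.0, proportion * np.abs(x))

def low_pass(x, window=5):
    """Moving-average low-pass filter (one possible design, assumed here)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

noisy = add_proportional_error(factor, 0.20, rng)   # 20% random error
filtered = low_pass(noisy)

rmse_noisy = np.sqrt(np.mean((noisy - factor) ** 2))
rmse_filtered = np.sqrt(np.mean((filtered - factor) ** 2))
```

Filtering the perturbed factor brings it closer to the original, which is the mechanism the abstract credits for the reduced LSP uncertainty.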
Funding: The National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the Interdisciplinary Innovation Fund of Natural Science, Nanchang University (Grant No. 9167-28220007-YB2107).
Abstract: The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of landslide spatial positions. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of positional errors of landslide spatial position on LSP uncertainties, and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. This paper collected 16 environmental factors and 337 landslides with accurate spatial positions, taking Shangyou County of China as an example. The 30–110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m. The LSP uncertainties are analyzed through the LSP accuracy and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. Results show that: (1) The LSP accuracies of error-based RF/MLP models decrease with increasing landslide position errors, and are lower than those of the original data-based models; (2) 70 m error-based models can still reflect the overall distribution characteristics of landslide susceptibility indices, so original landslides with certain position errors are acceptable for LSP; (3) The semi-supervised machine learning model can efficiently reduce the landslide position errors and thus improve the LSP accuracies.
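The random-offset scheme used to build the error-based models can be sketched like this (synthetic coordinates; a uniformly random offset direction is assumed, which the abstract does not state explicitly):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical landslide coordinates in metres (projected CRS).
landslides = rng.uniform(0, 10_000, size=(337, 2))

def offset_points(points, distance, rng):
    """Shift every point by `distance` metres in a uniformly random direction,
    mimicking a fixed-magnitude spatial position error."""
    theta = rng.uniform(0, 2 * np.pi, size=len(points))
    shift = np.stack([np.cos(theta), np.sin(theta)], axis=1) * distance
    return points + shift

offset_70 = offset_points(landslides, 70.0, rng)      # the 70 m error scenario
errors = np.linalg.norm(offset_70 - landslides, axis=1)
```

Repeating this for 30, 50, 90 and 110 m yields the training sets for the family of error-based models.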
Funding: Funded by the Liaoning Provincial Department of Science and Technology (2023JH2/101600058).
Abstract: With the continuous advancement of China's "peak carbon dioxide emissions and carbon neutrality" process, the proportion of wind power is increasing. Aiming at the problem that forecasting models become outdated as wind power data are continuously updated, a short-term wind power forecasting algorithm based on Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine (IL-Bagging-DHKELM) with error affinity propagation cluster analysis is proposed. The algorithm effectively combines the deep hybrid kernel extreme learning machine (DHKELM) with incremental learning (IL). First, an initial wind power prediction model is trained using the Bagging-DHKELM model. Second, a Euclidean morphological-distance affinity propagation (AP) clustering algorithm is used to cluster and analyze the prediction errors of the initially trained model. Finally, the correlation between wind power prediction errors and numerical weather prediction (NWP) data is introduced as incremental updates to the initial wind power prediction model. During the incremental learning process, multiple error performance indicators are used to measure overall model performance, thereby enabling incremental updates of the wind power models. Practical examples show that the proposed method reduces the root mean square error of the initial model by 1.9 percentage points, indicating that it can better adapt to the continuous increase in wind power penetration. The accuracy and precision of wind power generation prediction are effectively improved through this method.
Funding: Supported by the ZTE Industry-University-Institute Cooperation Funds under Grant No. 2021ZTE01-03.
Abstract: The accuracy of the acquired channel state information (CSI) used for beamforming design is essential to the achievable performance of multiple-input multiple-output (MIMO) systems. However, in high-speed moving scenes with time-division duplex (TDD) mode, the CSI acquired via channel reciprocity is inevitably outdated, leading to an outdated beamforming design and consequent performance degradation. In this paper, a robust beamforming design under channel prediction errors is proposed for time-varying MIMO systems to combat this degradation, based on channel prediction techniques. Specifically, the statistical characteristics of historical channel prediction errors are exploited and modeled. Moreover, to deal with the random error terms, deterministic equivalents are adopted to further explore potential beamforming gain through the statistical information, ultimately yielding a robust design that maximizes weighted sum-rate performance. Simulation results show that, compared with the traditional beamforming design, the proposed design maintains its performance advantage during the downlink transmission time even when channels vary fast.
Abstract: The research focuses on improving predictive accuracy in the financial sector through the exploration of machine learning algorithms for stock price prediction. The research follows an organized process combining Agile Scrum and the Obtain, Scrub, Explore, Model, and iNterpret (OSEMN) methodology. Six machine learning models, namely Linear Forecast, Naive Forecast, Simple Moving Average with weekly window (SMA 5), Simple Moving Average with monthly window (SMA 20), Autoregressive Integrated Moving Average (ARIMA), and Long Short-Term Memory (LSTM), are compared and evaluated through Mean Absolute Error (MAE), with the LSTM model performing the best, showcasing its potential for practical financial applications. A Django web application "Predict It" is developed to implement the LSTM model. Ethical concerns related to predictive modeling in finance are addressed. Data quality, algorithm choice, feature engineering, and preprocessing techniques are emphasized for better model performance. The research acknowledges limitations and suggests future research directions, aiming to equip investors and financial professionals with reliable predictive models for dynamic markets.
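The MAE comparison for the simpler baselines above can be sketched as follows (synthetic prices; `naive_forecast` and `sma_forecast` are illustrative helpers, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily closing prices: a gentle upward trend plus noise.
prices = 100 + 0.1 * np.arange(300) + rng.normal(0, 1, 300)

def naive_forecast(series):
    """Tomorrow's prediction = today's value."""
    return series[:-1]

def sma_forecast(series, window):
    """Tomorrow's prediction = mean of the last `window` values."""
    return np.convolve(series, np.ones(window) / window, mode="valid")[:-1]

# Align each forecast with the values it predicts, then score with MAE.
mae_naive = np.mean(np.abs(naive_forecast(prices) - prices[1:]))
mae_sma20 = np.mean(np.abs(sma_forecast(prices, 20) - prices[20:]))
```

The same MAE scoring extends directly to the ARIMA and LSTM models evaluated in the study.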
Funding: Supported by the National Natural Science Foundation of China (62172089, 61972087, 62172090).
Abstract: Virtual machine (VM) consolidation is an effective way to improve resource utilization and reduce energy consumption in cloud data centers. Most existing studies have treated VM consolidation as a bin-packing problem, but current schemes commonly ignore the long-term relationship between VMs and hosts. In addition, the lack of long-term consideration of resource optimization in VM consolidation results in unnecessary VM migrations and increased energy consumption. To address these limitations, a VM consolidation method based on multi-step prediction and an affinity-aware technique for energy-efficient cloud data centers (MPaAF-VMC) is proposed. The proposed method uses an improved linear regression prediction algorithm to predict the next-moment resource utilization of hosts and VMs, and obtains the staged resource demand over a future period through multi-step prediction, realized by iterative prediction. Then, based on the multi-step prediction, an affinity model between VM and host is designed using the first-order correlation coefficient and Euclidean distance. During consolidation, the affinity value is used to select the VM to migrate and the placement host. The proposed method is compared with existing consolidation algorithms on PlanetLab and Google cluster real workload data using the CloudSim simulation platform. Experimental results show that the proposed method achieves significant improvements in reducing energy consumption, VM migration costs, and service level agreement (SLA) violations.
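The iterative multi-step prediction idea can be sketched as below, with a plain least-squares line standing in for the paper's improved linear regression (the utilization values are hypothetical):

```python
import numpy as np

def fit_linear(history):
    """Least-squares slope/intercept over the recent utilization history."""
    slope, intercept = np.polyfit(np.arange(len(history)), history, 1)
    return slope, intercept

def multi_step_predict(history, steps):
    """Iterative multi-step prediction: each one-step forecast is appended
    to the history and the model is refitted before the next step."""
    h = list(history)
    out = []
    for _ in range(steps):
        slope, intercept = fit_linear(np.array(h))
        nxt = slope * len(h) + intercept   # forecast for the next interval
        out.append(nxt)
        h.append(nxt)
    return out

# Hypothetical host CPU utilization (%) over the last 10 intervals.
cpu = [40, 42, 41, 44, 45, 47, 46, 49, 50, 52]
forecast = multi_step_predict(cpu, steps=3)
```

The resulting multi-step forecast is what feeds the affinity model when pairing VMs with hosts.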
Abstract: Unlike the height-diameter equations for standing trees commonly used in forest resources modelling, tree height models for cut-to-length (CTL) stems tend to produce prediction errors whose distributions are not conditionally normal but rather leptokurtic and heavy-tailed. This feature was merely noticed in previous studies but never thoroughly investigated. This study characterized the prediction error distribution of a newly developed tree height model for Pinus radiata (D. Don) through the three-parameter Burr Type XII (BXII) distribution. The model's prediction errors (ε) exhibited heteroskedasticity conditional mainly on the small-end relative diameter of the top log and, to a minor extent, on DBH. Structured serial correlations were also present in the data. A total of 14 candidate weighting functions were compared to select the best two for weighting ε in order to reduce its conditional heteroskedasticity. The weighted prediction errors (εw) were shifted by a constant into the positive range supported by the BXII distribution. The distribution of the weighted and shifted prediction errors (εw+) was then characterized by the BXII distribution using maximum likelihood estimation through 1000 rounds of repeated random sampling, fitting and goodness-of-fit testing, each time randomly taking only one observation from each tree to circumvent the potential adverse impact of serial correlation on parameter estimation and inference. The nonparametric two-sample Kolmogorov-Smirnov (KS) goodness-of-fit test and the closely related Kuiper's (KU) test showed that the fitted BXII distributions provided a good fit to the highly leptokurtic and heavy-tailed distribution of ε. Random samples generated from the fitted BXII distributions of εw+ derived from the best two weighting functions, when back-shifted and unweighted, exhibited distributions that were, in about 97% and 95% of the 1000 cases respectively, not statistically different from the distribution of ε. Our results for cut-to-length P. radiata stems represent the first case for any tree species in which a non-normal error distribution in tree height prediction has been described by an underlying probability distribution. The fitted BXII prediction error distribution will help unlock the full potential of the new tree height model in forest resources modelling of P. radiata plantations, particularly when uncertainty assessments, statistical inferences and error propagation are needed in research and practical applications through harvester data analytics.
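The MLE-fit-then-test pipeline can be sketched with SciPy (synthetic errors drawn from a known Burr XII; note the paper uses a two-sample KS test, whereas this sketch tests directly against the fitted distribution):

```python
import numpy as np
from scipy import stats

# Hypothetical weighted-and-shifted prediction errors: heavy-tailed, positive,
# drawn here from a Burr Type XII with known shape parameters.
errors = stats.burr12.rvs(2.0, 1.5, scale=1.0, size=500, random_state=0)

# Maximum-likelihood fit of the Burr Type XII distribution; location is fixed
# at zero, mirroring the paper's shift of errors into the supported range.
c_hat, d_hat, loc_hat, scale_hat = stats.burr12.fit(errors, floc=0)

# Kolmogorov-Smirnov goodness-of-fit statistic against the fitted distribution.
ks_stat, p_value = stats.kstest(errors, "burr12",
                                args=(c_hat, d_hat, loc_hat, scale_hat))
```

Repeating this over many one-observation-per-tree resamples gives the 1000-round procedure described in the abstract.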
Abstract: In regression, despite both being aimed at estimating the Mean Squared Prediction Error (MSPE), Akaike's Final Prediction Error (FPE) and the Generalized Cross Validation (GCV) selection criteria are usually derived from two quite different perspectives. Here, settling on the most commonly accepted definition of the MSPE as the expectation of the squared prediction error loss, we provide theoretical expressions for it, valid for any linear model (LM) fitter, be it under random or non-random designs. Specializing these MSPE expressions for each of them, we are able to derive closed formulas of the MSPE for some of the most popular LM fitters: Ordinary Least Squares (OLS), with or without a full column rank design matrix; and Ordinary and Generalized Ridge regression, the latter embedding smoothing spline fitting. For each of these LM fitters, we then deduce a computable estimate of the MSPE which turns out to coincide with Akaike's FPE. Using a slight variation, we similarly obtain a class of MSPE estimates coinciding with the classical GCV formula for those same LM fitters.
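For OLS with $n$ observations and $p$ parameters, the classical closed forms that such derivations recover can be written as follows (standard textbook expressions, with RSS the residual sum of squares; the paper's generalized versions for arbitrary LM fitters are not reproduced here):

```latex
\mathrm{FPE} \;=\; \frac{\mathrm{RSS}}{n}\cdot\frac{n+p}{n-p},
\qquad
\mathrm{GCV} \;=\; \frac{\mathrm{RSS}/n}{\bigl(1 - p/n\bigr)^{2}} .
```

Both quantities inflate the in-sample error $\mathrm{RSS}/n$ by a factor that grows with the ratio $p/n$, which is how each acts as an estimate of the out-of-sample MSPE.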
Funding: National Key Research and Development Program of China (2017YFC1502000).
Abstract: In south China, warm-sector rainstorms differ significantly from traditional frontal rainstorms due to their complex mechanisms, which brings great challenges to their forecasting. In this study, based on ensemble forecasting, the high-resolution mesoscale numerical forecast model WRF was used to investigate the effect of initial errors on a warm-sector rainstorm and a frontal rainstorm under the same circulation in south China. We analyzed the sensitivity of forecast errors to the initial errors and their evolution characteristics for the warm-sector and the frontal rainstorm. Additionally, the difference in predictability was compared by adjusting the initial values of the GOOD member and the BAD member. Compared with the frontal rainstorm, the warm-sector rainstorm was more sensitive to initial errors, which grew faster in the warm sector. Furthermore, the magnitude of the error in the warm-sector rainstorm was obviously larger than that of the frontal rainstorm, while the spatial scale of the error was smaller. Both types of rainstorm were limited by practical predictability and inherent predictability, while the nonlinear error growth was more distinct in the warm-sector rainstorm, resulting in lower inherent predictability. The comparison between the two rainstorm types revealed that more accurate initial conditions brought the forecast field closer to the real situation, but the error growth rate was evidently restrained only in the frontal rainstorm.
Funding: Supported by the National Defense Foundation of China (71601183).
Abstract: For a product degradation process with random effects (RE), measurement error (ME) and nonlinearity in a step-stress accelerated degradation test (SSADT), a nonlinear Wiener-based degradation model with RE and ME is built. An analytical approximation to the probability density function (PDF) of the product's lifetime is derived in closed form. The process and data of SSADT are analyzed to obtain the relational model of the observed data under each accelerated stress. The likelihood function for the population-based observed data is constructed, and the population-based model parameters and the prior values of their random coefficients are estimated. According to newly observed data of the target product in SSADT, an analytical approximation to the PDF of its residual lifetime (RL) is derived in accordance with its individual degradation characteristics. A parameter updating method based on Bayesian inference is applied to obtain the posterior value of the random coefficient of the RL model. A numerical example by simulation is analyzed to verify the accuracy and advantage of the proposed model.
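A single path of such a degradation model can be simulated as follows. A power-law time transformation t**b is assumed for the nonlinearity and all symbols are illustrative, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_degradation(t, lam_mean, lam_sd, sigma_b, sigma_me, rng, b=1.2):
    """One path of a nonlinear Wiener degradation model:
         X(t) = lam * t**b + sigma_b * B(t**b),   Y(t) = X(t) + ME,
    where the drift `lam` is random across units (random effect) and
    B is standard Brownian motion in transformed time."""
    lam = rng.normal(lam_mean, lam_sd)                  # random effect
    tau = t ** b                                        # nonlinear time scale
    increments = rng.normal(0, sigma_b * np.sqrt(np.diff(tau, prepend=0.0)))
    x = lam * tau + np.cumsum(increments)               # true degradation
    y = x + rng.normal(0, sigma_me, size=len(t))        # add measurement error
    return x, y

t = np.linspace(0.1, 10, 100)
x, y = simulate_degradation(t, lam_mean=0.5, lam_sd=0.05,
                            sigma_b=0.2, sigma_me=0.1, rng=rng)
```

Lifetime is then the first passage of X(t) across a failure threshold, whose PDF the paper approximates analytically.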
Funding: Supported by the National Natural Science Foundation of China (NSFC Grant Nos. 41690122, 41690120, 41490644, 41490640 and 41475101); the Ao Shan Talents Program supported by the Qingdao National Laboratory for Marine Science and Technology (Grant No. 2015ASTP); a Chinese Academy of Sciences Strategic Priority Project, the Western Pacific Ocean System (Grant Nos. XDA11010105 and XDA11020306); the NSFC–Shandong Joint Fund for Marine Science Research Centers (Grant No. U1406401); the National Natural Science Foundation of China Innovative Group Grant (Grant No. 41421005); and the Taishan Scholarship and Qingdao Innovative Program (Grant No. 2014GJJS0101).
Abstract: The initial errors constitute one of the main limiting factors in the ability to predict the El Niño-Southern Oscillation (ENSO) in ocean-atmosphere coupled models. The conditional nonlinear optimal perturbation (CNOP) approach was employed to study the largest initial error growth in the El Niño predictions of an intermediate coupled model (ICM). The optimal initial errors (as represented by CNOPs) in sea surface temperature anomalies (SSTAs) and sea level anomalies (SLAs) were obtained with seasonal variation. The CNOP-induced perturbations, which tend to evolve into the La Niña mode, were found to have the same dynamics as ENSO itself. This indicates that, if CNOP-type errors are present in the initial conditions used to make a prediction of El Niño, the El Niño event tends to be under-predicted. In particular, compared with other seasonal CNOPs, the CNOPs in winter can induce the largest error growth, giving rise to an ENSO amplitude that is hardly ever predicted accurately. Additionally, the CNOP-induced perturbations exhibit a strong spring predictability barrier (SPB) phenomenon for ENSO prediction. These results offer a way to enhance ICM prediction skill and, in particular, to weaken the SPB phenomenon by filtering CNOP-type errors from the initial state. The characteristic distributions of the CNOPs derived from the ICM also provide useful information for targeted observations through data assimilation. Given that the derived CNOPs are season-dependent, it is suggested that seasonally varying targeted observations should be implemented to accurately predict ENSO events.
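In generic notation (not the ICM-specific formulation), the CNOP is the initial perturbation that maximizes nonlinear prediction-error growth under an amplitude constraint:

```latex
\delta x_{0}^{*} \;=\; \arg\max_{\|\delta x_{0}\|\le \beta}\;
\bigl\| M_{\tau}\!\left(x_{0}+\delta x_{0}\right) - M_{\tau}\!\left(x_{0}\right) \bigr\| ,
```

where $M_{\tau}$ is the nonlinear model propagator to lead time $\tau$, $x_{0}$ the initial state, and $\beta$ the constraint radius. The season dependence reported above arises because the solution $\delta x_{0}^{*}$ changes with the initial month of $x_{0}$.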
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 40575036 and 40325015). Acknowledgement: The authors thank Drs Zhang Pei-Qun and Bao Ming for their valuable comments on the present paper.
Abstract: In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information contained in abundant historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the similarities between different historical analogues and the current initial state are used as weights for estimating model errors. The results of daily, ten-day and monthly prediction experiments on a complicated T63 atmospheric model show that the performance of the ACE, when correcting model errors based on the error estimates of four historical analogue predictions, is better both than the scheme that introduces only the error correction of each single analogue prediction and than the T63 model itself.
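The similarity-weighted error estimate at the heart of the ACE can be sketched as below (toy state vectors; inverse-distance weights are assumed, since the exact similarity measure is not given in the abstract):

```python
import numpy as np

def ace_correction(current_state, analog_states, analog_errors):
    """Estimate the current model error as a similarity-weighted average of the
    known errors of historical analogues (weights from inverse state distance)."""
    d = np.linalg.norm(analog_states - current_state, axis=1)
    w = 1.0 / (d + 1e-12)          # closer analogue -> larger weight
    w /= w.sum()
    return w @ analog_errors

# Hypothetical toy fields: four historical analogues of a 5-point state vector.
analog_states = np.array([[1.00, 2.00, 0.50, 1.50, 2.50],
                          [1.10, 1.90, 0.60, 1.40, 2.40],
                          [3.00, 0.50, 2.00, 0.20, 1.00],   # poor analogue
                          [1.05, 2.05, 0.55, 1.45, 2.45]])
analog_errors = np.array([0.20, 0.22, -0.50, 0.21])
current = np.array([1.02, 2.01, 0.52, 1.48, 2.48])

estimated_error = ace_correction(current, analog_states, analog_errors)
```

The poor analogue contributes almost nothing, so the estimate lands near the errors of the three close analogues; subtracting it from the raw forecast gives the corrected prediction.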
Funding: Jointly supported by the National Key Research and Development Program of China (Grant No. 2017YFC1501601); the National Natural Science Foundation of China (Grant No. 41475100); the National Science and Technology Support Program (Grant No. 2012BAC22B03); and the Youth Innovation Promotion Association of the Chinese Academy of Sciences.
Abstract: This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). In Part I, it is shown that the model error of GRAPES may be the main cause of poor forecasts of landfalling TCs; thus, a further examination of the model error is the focus of Part II. Treating model error as a type of forcing, it can be represented by the combination of good forecasts and bad forecasts. Results show that there are systematic model errors. The model error of the geopotential height component has periodic features, with a period of 24 h and a global pattern of wavenumber 2 from west to east located between 60°S and 60°N. This periodic model error presents features similar to the atmospheric semidiurnal tide, which reflects signals from tropical diabatic heating, indicating that parameter errors related to tropical diabatic heating may be the source of the periodic model error. The above model errors are subtracted from the forecast equation and a series of new forecasts are made. The average forecasting capability of the rectified model is improved compared with simply improving the initial conditions of the original GRAPES model. This confirms the strong impact of the periodic model error on landfalling TC track forecasts. Moreover, if the model error used to rectify the model is obtained from an examination of additional TCs, the forecasting capability of the corresponding rectified model improves further.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41230420 and 41576015); the Qingdao National Laboratory for Marine Science and Technology (Grant No. QNLM2016ORP0107); the NSFC Innovative Group (Grant No. 41421005); the NSFC–Shandong Joint Fund for Marine Science Research Centers (Grant No. U1606402); and the National Programme on Global Change and Air–Sea Interaction (Grant No. GASI-IPOVAI-06).
Abstract: Based on the high-resolution Regional Ocean Modeling System (ROMS) and the conditional nonlinear optimal perturbation (CNOP) method, this study explored the effects of optimal initial errors on the prediction of the Kuroshio large meander (LM) path, and the growth mechanism of these optimal initial errors was revealed. For each LM event, two types of initial error (denoted CNOP1 and CNOP2) were obtained. Their large amplitudes were found to be located mainly in the upper 2500 m of the upstream region of the LM, i.e., southeast of Kyushu. Furthermore, we analyzed the patterns and nonlinear evolution of the two types of CNOP. We found that CNOP1 tends to strengthen the LM path through southwestward extension. Conversely, CNOP2 has almost the opposite pattern to CNOP1 and tends to weaken the LM path through northeastward contraction. The growth mechanism of the optimal initial errors was clarified through eddy-energetics analysis. The results indicated that energy is transferred from the background field to the error field because of barotropic and baroclinic instabilities. It is thus inferred that both barotropic and baroclinic processes play important roles in the growth of CNOP-type optimal initial errors.
Funding: Provided by the National Natural Science Foundation of China (Grant Nos. 41275039 and 41471305) and the Preeminence Youth Cultivation Project of Sichuan (Grant No. 2015JQ0037).
Abstract: Extended-range (10-30 d) heavy rain forecasting is difficult but performs an important function in disaster prevention and mitigation. In this paper, a nonlinear cross prediction error (NCPE) algorithm combining nonlinear dynamics and statistical methods is proposed. The method is based on phase-space reconstruction of chaotic single-variable time series of precipitable water and is tested on 100 global cases of heavy rain. First, the nonlinear relative dynamic error for local attractor pairs is calculated at different stages of the heavy rain process, after which the local change characteristics of the attractors are analyzed. Second, the eigen-peak is defined as a prediction indicator based on an error threshold of about 1.5 and is then used to analyze the forecast validity period. The results reveal that the prediction indicators regarded as eigen-peaks for heavy rain extreme weather are all reflected consistently, without failure, by the NCPE model; the prediction validity periods of 1-2 d, 3-9 d and 10-30 d cover 4, 22 and 74 cases, respectively, with no false alarms or omissions. The NCPE model allows accurate forecasting of heavy rain over an extended range of 10-30 d and has the potential to be used to explore the mechanisms involved in the development of heavy rain on a segmentation scale. This novel method provides new insights into extended-range forecasting and atmospheric predictability, and also enables the creation of multi-variable chaotic extreme weather prediction models based on high-spatiotemporal-resolution data.
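The phase-space reconstruction underlying the NCPE can be sketched via Takens delay embedding (the embedding dimension and delay below are illustrative choices, and the series is synthetic):

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Takens delay embedding: reconstruct a `dim`-dimensional phase space
    from a single-variable time series using delay `tau`."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Hypothetical precipitable-water series (a noisy oscillation for illustration).
rng = np.random.default_rng(3)
pw = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.05 * rng.normal(size=1000)

attractor = delay_embed(pw, dim=3, tau=8)
```

Distances between points of this reconstructed attractor at different stages of the rain process are what the nonlinear relative dynamic error is computed from.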
Funding: Project supported by the Special Scientific Research Project for Public Interest (Grant No. GYHY201206009); the Fundamental Research Funds for the Central Universities, China (Grant Nos. lzujbky-2012-13 and lzujbky-2013-11); and the National Basic Research Program of China (Grant Nos. 2012CB955902 and 2013CB430204).
Abstract: Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in that accurate model, so NWP can be considered an inverse problem of uncovering the unknown error term. Inverse problem models can absorb long periods of observed data to generate model error correction procedures, thereby remedying the deficiency of NWP schemes that employ only initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and space-varying model errors in both the historical and forecast periods, using recent observations and analogous phenomena of the atmosphere. Numerical experiments on Burgers' equation illustrate the substantial forecast improvement achieved by the inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high-accuracy applications of NWP.
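The inverse-problem view sketched above can be written generically as

```latex
\frac{\partial u}{\partial t} \;=\; F(u) \;+\; w(x,t),
```

where $F$ is the imperfect model operator and $w(x,t)$ is the unknown model-error term (notation assumed, not taken verbatim from the paper). Requiring the trajectory $u$ to pass through past observations turns the forward prediction problem into an inverse problem for $w$, which can then be extrapolated into the forecast period.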
Funding: Supported by the National Science and Technology Support Program (Grant No. 2012BAC22B03); the National Natural Science Foundation of China (Grant No. 41475100); the Youth Innovation Promotion Association of the Chinese Academy of Sciences; and the Japan Society for the Promotion of Science KAKENHI (Grant No. 26282111).
Abstract: This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). The GRAPES forecasts were made for 16 landfalling TCs in the western North Pacific basin during the 2008 and 2009 seasons, with a forecast length of 72 hours, using both the default initial conditions ("initials", hereafter), which are from the NCEP-FNL dataset, and ECMWF initials. The forecasts are compared with ECMWF forecasts. The results show that for most TCs, the GRAPES forecasts are improved when using the ECMWF initials compared with the default initials. Compared with the ECMWF initials, the default initials produce lower-intensity TCs and a lower-intensity subtropical high, but a higher-intensity South Asia high and monsoon trough, as well as a higher temperature but lower specific humidity at the TC center. Replacing the geopotential height and wind fields with the ECMWF initials in and around the TC center at the initial time was found to be the most efficient way to improve the forecasts. In addition, the TCs showing the greatest improvement in forecast accuracy usually had the largest initial uncertainties in TC intensity and were usually in the intensifying phase. The results demonstrate the importance of the initial intensity for TC track forecasts made using GRAPES, and indicate that the model is better at describing the intensifying phase than the decaying phase of TCs. Finally, the limit of the improvement indicates that the model error associated with GRAPES may be the main cause of poor forecasts of landfalling TCs. Thus, further examination of the model errors is required.
Funding: Supported by the Integration and Application Project for Key Meteorology Techniques of the China Meteorological Administration (Grant No. CMAGJ2014M64); the China Meteorological Special Project (Grant No. GYHY201206016); and the National Basic Research Program of China (973 Program, Grant No. 2010CB950404).
Abstract: In this study, a method of analogue-based correction of errors (ACE) was introduced to improve El Niño-Southern Oscillation (ENSO) predictions produced by climate models. The ACE method is based on the hypothesis that flow-dependent model prediction errors are, to some degree, similar under analogous historical climate states, so the historical errors can be used to effectively reduce such flow-dependent errors. With this method, the unknown errors in current ENSO predictions can be empirically estimated from the known prediction errors diagnosed by the same model for historical analogue states. The authors first propose the basic idea of applying the ACE method to ENSO prediction and then establish an analogue-dynamical ENSO prediction system based on an operational climate prediction model. Experimental results clearly show the possibility of correcting the flow-dependent errors in ENSO prediction, and thus the potential of applying the ACE method to operational ENSO prediction based on climate models.
Funding: Funded by the Special Scientific Research Project for Public Interest (GYHY201206009); the National Key Technologies Research and Development Program (Grant No. 2012BAC22B02); the National Natural Science Foundation Science Fund for Creative Research Groups (Grant No. 41221064); the Special Scientific Research Project for Public Interest (Grant No. GYHY201006013); and the National Natural Science Foundation of China (Grant No. 41105070).
Abstract: The initial value error and the imperfect numerical model are usually considered the error sources of numerical weather prediction (NWP). Using past multi-time observations and model output, this study proposes a method to estimate the error of an imperfect numerical model. The model error can be inversely estimated by expressing it as a Lagrange interpolation polynomial, with the polynomial coefficients determined by past model performance. However, for practical application in a full NWP model, the following criteria must be determined: (1) the length of past data sufficient for estimation of the model errors, (2) a proper method of estimating the term "model integration with the exact solution" when solving the inverse problem, and (3) the extent to which the scheme is sensitive to observational errors. In this study, these issues are addressed using a simple linear model, and an advection-diffusion model is applied to discuss the sensitivity of the method to an artificial error source. The results indicate that the forecast errors can be largely reduced by the proposed method if a proper length of past data is chosen. Regarding the three problems, it is determined that (1) only a few data points, limited by the order of the corrector, are needed, (2) trapezoidal approximation can be employed to estimate the "term" in this study, although a more accurate method should be explored for an operational NWP model, and (3) the correction is sensitive to observational error.
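The Lagrange-interpolation representation of the model error can be sketched as follows (generic form; the notation is assumed, with the coefficients $w(t_{j})$ at past times $t_{0},\dots,t_{n}$ fixed by past model performance):

```latex
w(t) \;\approx\; \sum_{j=0}^{n} w(t_{j})
\prod_{\substack{k=0 \\ k\neq j}}^{n} \frac{t - t_{k}}{t_{j} - t_{k}} ,
```

so that once the past errors $w(t_{j})$ are recovered from observations, the polynomial extrapolates the model error to the forecast time $t$. The number of usable past times $n+1$ is what limits the order of the corrector noted in point (1).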
Funding: Supported by an Incheon National University Research Grant in 2017.
Abstract: Video prediction is the problem of generating future frames by exploiting the spatiotemporal correlations of the past frame sequence. It is one of the crucial issues in computer vision and has many real-world applications, mainly focused on predicting future scenarios to avoid undesirable outcomes. However, modeling future image content and objects is challenging due to the dynamic evolution and complexity of the scene, such as occlusions, camera movements, delay and illumination. Direct frame synthesis and optical-flow estimation are the common approaches, but researchers have mainly focused on one or the other. Both have limitations: direct frame synthesis usually faces blurry predictions due to complex pixel distributions in the scene, while optical-flow estimation usually produces artifacts due to large object displacements or obstructions in the clip. In this paper, we construct a deep neural network, the Frame Prediction Network (FPNet-OF), with multiple-branch inputs (optical flow and original frame) to predict the future video frame by adaptively fusing the future object motion with the future frame generator. The key idea is to jointly optimize direct RGB frame synthesis and dense optical-flow estimation to generate a superior video prediction network. Using various real-world datasets, we experimentally verify that our proposed framework produces higher-quality video frames than other state-of-the-art frameworks.