Journal Articles
9 articles found
1. Water storage changes in North America retrieved from GRACE gravity and GPS data (Cited: 2)
Authors: Wang Hansheng, Xiang Longwei, Jia Lulu, Wu Patrick, Steffen Holger, Jiang Liming, Shen Qiang. Geodesy and Geodynamics, 2015, Issue 4, pp. 267-273.
As global warming continues, the monitoring of changes in terrestrial water storage becomes increasingly important, since it plays a critical role in understanding global change and water resource management. In North America, as elsewhere in the world, changes in water resources strongly impact agriculture and animal husbandry. From a combination of Gravity Recovery and Climate Experiment (GRACE) gravity and Global Positioning System (GPS) data, it was recently found that water storage from August 2002 to March 2011 recovered after the extreme Canadian Prairies drought between 1999 and 2005. In this paper, we use GRACE Release 5 monthly gravity data to track the water storage change from August 2002 to June 2014. In the Canadian Prairies and Great Lakes areas, the total water storage is found to have increased during the last decade at a rate of 73.8 ± 14.5 Gt/a, which is larger than that found in the previous study, owing to the longer time span of GRACE observations used and the reduction of the leakage error. We also find a long-term decrease of water storage at a rate of −12.0 ± 4.2 Gt/a in the Ungava Peninsula, possibly due to permafrost degradation and less snow accumulation during winter in the region. In addition, the effect of the total mass gain in the surveyed area on present-day sea level amounts to −0.18 mm/a, and thus should be taken into account in studies of global sea level change.
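The decadal rates quoted above come from fitting a trend to the monthly GRACE storage series. A minimal sketch of such a trend fit, using synthetic numbers rather than the actual GRACE Release-5 data (the series, amplitudes, and the helper name `linear_trend` are illustrative assumptions):

```python
import math

def linear_trend(t, y):
    """Ordinary least-squares fit of y = a + b*t; returns (a, b)."""
    n = len(t)
    tm, ym = sum(t) / n, sum(y) / n
    sxy = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y))
    sxx = sum((ti - tm) ** 2 for ti in t)
    b = sxy / sxx
    return ym - b * tm, b

# Synthetic stand-in for a GRACE series: 12 years of monthly storage
# anomalies (Gt) with a 73.8 Gt/a trend plus an annual cycle.
t = [i / 12.0 for i in range(144)]                    # time in years
y = [73.8 * ti + 50.0 * math.sin(2 * math.pi * ti) for ti in t]

a, b = linear_trend(t, y)
print(round(b, 1))                                    # recovered rate in Gt/a
```

A real analysis would typically co-estimate annual and semi-annual terms and propagate the formal error to obtain an uncertainty such as the ±14.5 Gt/a quoted above.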
Keywords: Canadian Prairies; Great Lakes; Ungava Peninsula; water storage changes; Gravity Recovery and Climate Experiment (GRACE) data; Global Positioning System (GPS) data; glacial isostatic adjustment; separation approach
2. Fourth-Order Predictive Modelling: I. General-Purpose Closed-Form Fourth-Order Moments-Constrained MaxEnt Distribution
Author: Dan Gabriel Cacuci. American Journal of Computational Mathematics, 2023, Issue 4, pp. 413-438.
This work (in two parts) will present a novel predictive modeling methodology aimed at obtaining "best-estimate results with reduced uncertainties" for the first four moments (mean values, covariance, skewness and kurtosis) of the optimally predicted distribution of model results and calibrated model parameters, by combining fourth-order experimental and computational information, including fourth (and higher) order sensitivities of computed model responses to model parameters. Underlying the construction of this fourth-order predictive modeling methodology is the "maximum entropy principle", which is initially used to obtain a novel closed-form expression of the (moments-constrained) fourth-order Maximum Entropy (MaxEnt) probability distribution constructed from the first four moments (means, covariances, skewness, kurtosis), which are assumed to be known, of an otherwise unknown distribution of a high-dimensional multivariate uncertain quantity of interest. This fourth-order MaxEnt distribution provides optimal compatibility with the available information while simultaneously ensuring minimal spurious information content, yielding an estimate of a probability density with the highest uncertainty among all densities satisfying the known moment constraints. Since this novel generic fourth-order MaxEnt distribution is of interest in its own right for applications in addition to predictive modeling, its construction is presented separately, in this first part of a two-part work. The fourth-order predictive modeling methodology that will be constructed by particularizing this generic fourth-order MaxEnt distribution will be presented in the accompanying work (Part 2).
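For intuition, the closed-form structure of a moments-constrained MaxEnt density can be sketched in the univariate case (an illustration only; the paper derives the high-dimensional multivariate expression):

```latex
% MaxEnt density constrained by the first four moments E[x^k] = m_k:
p(x) \;=\; \frac{1}{Z(\lambda_1,\dots,\lambda_4)}
  \exp\!\Big(-\sum_{k=1}^{4}\lambda_k\,x^{k}\Big),
\qquad
Z \;=\; \int_{-\infty}^{\infty}
  \exp\!\Big(-\sum_{k=1}^{4}\lambda_k\,x^{k}\Big)\,dx,
% with \lambda_4 > 0 for normalizability, and the Lagrange multipliers
% fixed by the moment constraints:
-\,\frac{\partial \ln Z}{\partial \lambda_k} \;=\; m_k,
\qquad k = 1,\dots,4.
```

The density is exponential in a quartic polynomial; maximizing entropy subject to the four moment constraints yields exactly this family, with the multipliers solved from the constraint equations.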
Keywords: maximum entropy principle; fourth-order predictive modeling; data assimilation; data adjustment; reduced predicted uncertainties; model parameter calibration
3. Fourth-Order Predictive Modelling: II. 4th-BERRU-PM Methodology for Combining Measurements with Computations to Obtain Best-Estimate Results with Reduced Uncertainties
Author: Dan Gabriel Cacuci. American Journal of Computational Mathematics, 2023, Issue 4, pp. 439-475.
This work presents a comprehensive fourth-order predictive modeling (PM) methodology that uses the MaxEnt principle to incorporate fourth-order moments (means, covariances, skewness, kurtosis) of model parameters, computed and measured model responses, as well as fourth (and higher) order sensitivities of computed model responses to model parameters. This new methodology is designated by the acronym 4th-BERRU-PM, which stands for "fourth-order best-estimate results with reduced uncertainties." The results predicted by the 4th-BERRU-PM methodology incorporate, as particular cases, the results previously predicted by the second-order predictive modeling methodology 2nd-BERRU-PM, and vastly generalize the results produced by extant data assimilation and data adjustment procedures.
Keywords: fourth-order predictive modeling; data assimilation; data adjustment; uncertainty quantification; reduced predicted uncertainties
4. Second-Order MaxEnt Predictive Modelling Methodology. I: Deterministically Incorporated Computational Model (2nd-BERRU-PMD)
Author: Dan Gabriel Cacuci. American Journal of Computational Mathematics, 2023, Issue 2, pp. 236-266.
This work presents a comprehensive second-order predictive modeling (PM) methodology designated by the acronym 2nd-BERRU-PMD. The attribute "2nd" indicates that this methodology incorporates second-order uncertainties (means and covariances) and second-order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties", and the last letter ("D") in the acronym indicates "deterministic," referring to the deterministic inclusion of the computational model responses. The 2nd-BERRU-PMD methodology is fundamentally based on the maximum entropy (MaxEnt) principle. This principle is in contradistinction to the fundamental principle underlying the extant data assimilation and/or adjustment procedures, which minimize, in a least-squares sense, a subjective user-defined functional meant to represent the discrepancies between measured and computed model responses. It is shown that the 2nd-BERRU-PMD methodology generalizes and extends current data assimilation and/or data adjustment procedures while overcoming their fundamental limitations. In the accompanying work (Part II), the alternative framework for developing the "second-order MaxEnt predictive modelling methodology" is presented by incorporating the computed model responses probabilistically (as opposed to deterministically).
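As a point of reference, the linear-Gaussian special case that such best-estimate procedures encompass can be sketched for a single scalar response. This is an illustrative toy (the helper name `best_estimate` is mine), not the 2nd-BERRU-PMD formulas themselves, which are matrix-valued and involve second-order sensitivities:

```python
def best_estimate(r_c, v_c, r_m, v_m):
    """Inverse-variance weighting of a computed and a measured response."""
    k = v_c / (v_c + v_m)              # gain applied to the residual
    r_be = r_c + k * (r_m - r_c)       # best-estimate response
    v_be = v_c * v_m / (v_c + v_m)     # reduced (smaller) variance
    return r_be, v_be

# Computed response 10.0 with variance 4.0; measurement 12.0 with variance 1.0.
r_be, v_be = best_estimate(10.0, 4.0, 12.0, 1.0)
print(r_be, v_be)
```

The predicted variance `v_be` is smaller than both input variances, which is the "reduced uncertainties" property the BERRU acronym refers to.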
Keywords: second-order predictive modeling; data assimilation; data adjustment; uncertainty quantification; reduced predicted uncertainties
5. Second-Order MaxEnt Predictive Modelling Methodology. II: Probabilistically Incorporated Computational Model (2nd-BERRU-PMP)
Author: Dan Gabriel Cacuci. American Journal of Computational Mathematics, 2023, Issue 2, pp. 267-294.
This work presents a comprehensive second-order predictive modeling (PM) methodology based on the maximum entropy (MaxEnt) principle for obtaining best-estimate mean values and correlations for model responses and parameters. This methodology is designated by the acronym 2nd-BERRU-PMP, where the attribute "2nd" indicates that this methodology incorporates second-order uncertainties (means and covariances) and second (and higher) order sensitivities of computed model responses to model parameters. The acronym BERRU stands for "Best-Estimate Results with Reduced Uncertainties", and the last letter ("P") in the acronym indicates "probabilistic," referring to the MaxEnt probabilistic inclusion of the computational model responses. This is in contradistinction to the 2nd-BERRU-PMD methodology, which deterministically combines the computed model responses with the experimental information, as presented in the accompanying work (Part I). Although both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies yield expressions that include second (and higher) order sensitivities of responses to model parameters, the respective expressions for the predicted responses, for the calibrated predicted parameters, and for their predicted uncertainties (covariances) are not identical to each other. Nevertheless, the results predicted by both methodologies encompass, as particular cases, the results produced by the extant data assimilation and data adjustment procedures, which rely on the minimization, in a least-squares sense, of a user-defined functional meant to represent the discrepancies between measured and computed model responses.
Keywords: second-order predictive modeling; data assimilation; data adjustment; uncertainty quantification; reduced predicted uncertainties
6. Ensemble Strategy for Insider Threat Detection from User Activity Logs (Cited: 2)
Authors: Shihong Zou, Huizhong Sun, Guosheng Xu, Ruijie Quan. Computers, Materials & Continua, 2020, Issue 11, pp. 1321-1334.
In the information era, the core business and confidential information of enterprises and organizations is stored in information systems. However, certain malicious inside users exist hidden within the organization; these users intentionally or unintentionally misuse their privileges to obtain sensitive information from the company. Existing approaches to insider threat detection mostly focus on monitoring, detecting, and preventing malicious behavior generated by users within an organization's system, while ignoring the impact of imbalanced ground-truth insider threat data on security. To detect insider threats more effectively, a data processing tool was developed to transform the detected user activity into information-use events, and a Data Adjustment (DA) strategy was formulated to adjust the weights of the minority and majority samples. An efficient ensemble strategy was then applied, combining the extreme gradient boosting (XGBoost) model with the DA strategy to detect anomalous behavior. The CERT dataset, a real-world dataset with artificially injected insider threat events, was used to evaluate the approach. The results demonstrate that the proposed approach can effectively detect insider threats, with an accuracy of 99.51% and an average recall of 98.16%; compared with other classifiers, the detection performance is improved by 8.76%.
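The abstract does not spell out the DA weighting formula, so the following is a generic inverse-frequency reweighting sketch of the idea (the function name and toy label counts are assumptions); the resulting per-class weights could then be turned into per-sample weights for a booster such as XGBoost:

```python
from collections import Counter

def class_weights(labels):
    """Weight each class by n_samples / (n_classes * class_count),
    so rare (minority) classes receive proportionally larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Toy ground truth: 98 benign events vs. 2 insider-threat events.
y = [0] * 98 + [1] * 2
w = class_weights(y)
print(w)   # the minority class gets a much larger weight
```

This is the standard "balanced" heuristic; the paper's actual DA strategy may differ in detail, but the goal is the same: keep the overwhelming majority class from dominating the training loss.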
Keywords: insider threat; data adjustment; imbalanced data; ensemble strategy
7. Calibration of the Infiltration Rate Curve from the LN Trend Line at Eight Sites of Interest, Based on Infiltration Tests Carried Out
Author: Victor Rogelio Tirado Picado. Open Journal of Applied Sciences, 2022, Issue 7, pp. 1098-1115.
The purpose of this research is to demonstrate that a calibration curve can be obtained that can be used for any infiltration test with the double ring method, as well as an equation that helps speed up data processing. The experiments were carried out at eight points in Nicaragua, five distributed in Managua and three in Rivas-Nandaime. These results can be used for other studies of interest. As a result, a calibration curve is obtained, and an expression equal to … is deduced, which is the equation for determining the average infiltration of a field test using the double ring, for a total of 7 hours. From this result, the texture of the soil can be determined by means of the indicator table. The basic methodology allowed the data to be obtained, processed, and analyzed, resulting in the calibration curve for infiltration tests. Finally, an equation was determined from the averages of the processed data, yielding a correlation of 0.9976, well above 0.5, which indicates a very high and reliable fit.
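Fitting an LN (natural-log) trend line of the form f(t) = a·ln(t) + b, and checking its correlation, can be sketched as follows. The readings below are hypothetical stand-ins, not the Nicaragua measurements, and the helper name `fit_ln` is mine:

```python
import math

def fit_ln(t, f):
    """Least-squares fit of f = a*ln(t) + b; also returns the Pearson r
    between ln(t) and f, the figure of merit quoted in the paper."""
    x = [math.log(ti) for ti in t]
    n = len(x)
    xm, fm = sum(x) / n, sum(f) / n
    sxf = sum((xi - xm) * (fi - fm) for xi, fi in zip(x, f))
    sxx = sum((xi - xm) ** 2 for xi in x)
    sff = sum((fi - fm) ** 2 for fi in f)
    a = sxf / sxx
    b = fm - a * xm
    r = sxf / math.sqrt(sxx * sff)
    return a, b, r

# Hypothetical double-ring readings over a 7-hour test: elapsed minutes
# and infiltration rate (cm/h), decaying as infiltration tests typically do.
t = [5, 10, 20, 45, 90, 180, 300, 420]
f = [12.1, 9.8, 8.0, 5.9, 4.2, 2.6, 1.4, 0.7]
a, b, r = fit_ln(t, f)
print(round(a, 2), round(b, 2), round(r, 4))
```

For decaying infiltration the slope `a` is negative and |r| close to 1 indicates the logarithmic model describes the test well, which is the sense in which the paper's 0.9976 correlation is read.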
Keywords: double ring; infiltration; calibration; logarithmic curve; data adjustment
8. The AME2012 atomic mass evaluation (I). Evaluation of input data, adjustment procedures (Cited: 16)
Authors: G. Audi, M. Wang, A. H. Wapstra, F. G. Kondev, M. MacCormick, X. Xu, B. Pfeiffer. Chinese Physics C, 2012, Issue 12, pp. 1603-2014.
This paper is the second part of the new evaluation of atomic masses, AME2012. From the results of a least-squares calculation, described in Part I, for all accepted experimental data, we derive here tables and graphs to replace those of AME2003. The first table lists atomic masses. It is followed by a table of the influences of data on primary nuclides, a table of separation energies and reaction energies, and, finally, a series of graphs of separation and decay energies. The last section of this paper lists all references to the input data used in Part I of this AME2012 and also to the data included in the NUBASE2012 evaluation (the first paper in this issue).
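The least-squares adjustment referred to here can be illustrated on a toy system: two unknown masses constrained by three measurements of differing precision. The numbers and the helper `wls_adjust` are made up for illustration; the actual AME adjustment handles thousands of correlated data:

```python
def wls_adjust(A, y, sigma):
    """Solve the 2-unknown weighted least-squares normal equations
    (A^T W A) x = A^T W y, with W = diag(1/sigma^2)."""
    w = [1.0 / s ** 2 for s in sigma]
    m = len(A)
    N = [[sum(w[k] * A[k][i] * A[k][j] for k in range(m))
          for j in range(2)] for i in range(2)]
    b = [sum(w[k] * A[k][i] * y[k] for k in range(m)) for i in range(2)]
    det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
    x0 = (N[1][1] * b[0] - N[0][1] * b[1]) / det
    x1 = (N[0][0] * b[1] - N[1][0] * b[0]) / det
    return x0, x1

# Toy input: two direct mass measurements and one (more precise) difference.
A     = [[1, 0], [0, 1], [-1, 1]]    # design matrix: m1, m2, m2 - m1
y     = [1.0, 2.1, 1.0]              # measured values
sigma = [0.1, 0.1, 0.05]             # one-standard-deviation errors
m1, m2 = wls_adjust(A, y, sigma)
print(round(m1, 4), round(m2, 4))
```

The adjusted masses are pulled away from the direct measurements toward consistency with the precise difference, which is the essence of an evaluation-wide adjustment.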
Keywords: AME; AME2012 atomic mass evaluation; evaluation of input data; adjustment procedures
9. Comparison and combination of EAKF and SIR-PF in the Bayesian filter framework (Cited: 3)
Authors: Shen Zheqi, Zhang Xiangming, Tang Youmin. Acta Oceanologica Sinica, 2016, Issue 3, pp. 69-78.
Bayesian estimation theory provides a general approach to state estimation for linear or nonlinear and Gaussian or non-Gaussian systems. In this study, we first explore two Bayesian-based methods: the ensemble adjustment Kalman filter (EAKF) and the sequential importance resampling particle filter (SIR-PF), using a well-known nonlinear and non-Gaussian model (the Lorenz '63 model). The EAKF, which is a deterministic scheme of the ensemble Kalman filter (EnKF), performs better than the classical (stochastic) EnKF in a general framework. Comparison between the SIR-PF and the EAKF reveals that the former outperforms the latter if the ensemble size is large enough to avoid filter degeneracy, and vice versa. The impact of the probability density functions and effective ensemble sizes on assimilation performance is also explored. On the basis of these comparisons, a mixture filter, called the ensemble adjustment Kalman particle filter (EAKPF), is proposed to combine the merits of both. Similar to the ensemble Kalman particle filter, which combines the stochastic EnKF and SIR-PF analysis schemes with a tuning parameter, the new mixture filter essentially provides a continuous interpolation between the EAKF and the SIR-PF. The same Lorenz '63 model is used as a testbed, showing that the EAKPF is able to overcome filter degeneracy while maintaining the non-Gaussian nature, and performs better than the EAKF given limited ensemble size.
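Two SIR-PF ingredients mentioned above, the effective ensemble size used to diagnose filter degeneracy and the resampling step, can be sketched as follows. This is a toy pure-Python illustration, not the paper's Lorenz '63 experiment; systematic resampling is one common choice of resampler, and the offset `u` would normally be drawn uniformly at random:

```python
def effective_size(weights):
    """N_eff = 1 / sum(w_i^2); a small value signals filter degeneracy."""
    return 1.0 / sum(w * w for w in weights)

def systematic_resample(particles, weights, u):
    """Systematic resampling with a single uniform offset u in [0, 1)."""
    n = len(particles)
    cumulative, s = [], 0.0
    for w in weights:
        s += w
        cumulative.append(s)
    out, j = [], 0
    for i in range(n):
        p = (i + u) / n                # n evenly spaced pointers
        while cumulative[j] < p:
            j += 1
        out.append(particles[j])       # duplicate high-weight particles
    return out

particles = ['a', 'b', 'c', 'd']
weights = [0.7, 0.1, 0.1, 0.1]         # one dominant particle
neff = effective_size(weights)
resampled = systematic_resample(particles, weights, u=0.5)
print(neff, resampled)
```

When N_eff drops well below the ensemble size, resampling replaces low-weight particles with copies of high-weight ones, which is exactly the degeneracy trade-off the comparison between the SIR-PF and the EAKF turns on.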
Keywords: data assimilation; ensemble adjustment Kalman filter; particle filter; Bayesian estimation; ensemble adjustment Kalman particle filter