As global warming continues, monitoring changes in terrestrial water storage becomes increasingly important, since such changes play a critical role in understanding global change and in managing water resources. In North America, as elsewhere in the world, changes in water resources strongly affect agriculture and animal husbandry. From a combination of Gravity Recovery and Climate Experiment (GRACE) gravity and Global Positioning System (GPS) data, it was recently found that water storage recovered between August 2002 and March 2011, after the extreme Canadian Prairies drought of 1999-2005. In this paper, we use GRACE Release 5 monthly gravity data to track the water storage change from August 2002 to June 2014. In the Canadian Prairies and the Great Lakes area, the total water storage is found to have increased over the last decade at a rate of 73.8 ± 14.5 Gt/a, larger than the rate found in the previous study owing to the longer time span of GRACE observations and the reduction of leakage error. We also find a long-term decrease of water storage, at a rate of -12.0 ± 4.2 Gt/a, in the Ungava Peninsula, possibly due to permafrost degradation and reduced winter snow accumulation in the region. In addition, the effect of the total mass gain in the surveyed area on present-day sea level amounts to -0.18 mm/a, and should therefore be taken into account in studies of global sea-level change.
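The quoted sea-level contribution can be checked with a back-of-envelope conversion: spreading a mass change over the global ocean surface (about 3.61 × 10⁸ km²) corresponds to roughly 361 Gt of water per millimetre of global-mean sea level. A minimal sketch, assuming the net gain is approximately the Prairies/Great Lakes increase minus the Ungava Peninsula loss:

```latex
\dot{M}_{\mathrm{net}} \approx 73.8 - 12.0 = 61.8~\mathrm{Gt/a}, \qquad
\dot{h}_{\mathrm{SL}} \approx -\frac{\dot{M}_{\mathrm{net}}}{361~\mathrm{Gt/mm}} \approx -0.17~\mathrm{mm/a},
```

close to the -0.18 mm/a reported above; the small residual difference would reflect the exact regional totals and ocean area adopted in the study.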
This work presents a comprehensive second-order predictive modeling (PM) methodology designated by the acronym 2nd-BERRU-PMD. The attribute “2nd” indicates that this methodology incorporates second-order uncertainties (means and covariances) and second-order sensitivities of computed model responses to model parameters. The acronym BERRU stands for “Best-Estimate Results with Reduced Uncertainties” and the last letter (“D”) in the acronym indicates “deterministic,” referring to the deterministic inclusion of the computational model responses. The 2nd-BERRU-PMD methodology is fundamentally based on the maximum entropy (MaxEnt) principle. This principle stands in contradistinction to the fundamental principle underlying the extant data assimilation and/or adjustment procedures, which minimize, in a least-squares sense, a subjective user-defined functional meant to represent the discrepancies between measured and computed model responses. It is shown that the 2nd-BERRU-PMD methodology generalizes and extends current data assimilation and/or data adjustment procedures while overcoming their fundamental limitations. In the accompanying work (Part II), the alternative framework for developing the “second-order MaxEnt predictive modeling methodology” is presented, in which the computed model responses are incorporated probabilistically (as opposed to “deterministically”).
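For context, the quadratic functional minimized by the extant data-adjustment/assimilation procedures mentioned above can be written, in generic notation (not the paper's own), as a weighted sum of response and parameter discrepancies:

```latex
Q(\boldsymbol{\alpha}) =
\left[\mathbf{r}_m - \mathbf{r}_c(\boldsymbol{\alpha})\right]^{\top} \mathbf{C}_r^{-1}
\left[\mathbf{r}_m - \mathbf{r}_c(\boldsymbol{\alpha})\right]
+ \left(\boldsymbol{\alpha} - \boldsymbol{\alpha}^{0}\right)^{\top} \mathbf{C}_{\alpha}^{-1}
\left(\boldsymbol{\alpha} - \boldsymbol{\alpha}^{0}\right),
```

where r_m and r_c(α) denote measured and computed responses, α⁰ the nominal parameter values, and C_r, C_α the corresponding covariance matrices. The 2nd-BERRU-PMD methodology is based on a MaxEnt construction rather than on the minimization of such a user-chosen functional.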
This work presents a comprehensive fourth-order predictive modeling (PM) methodology that uses the MaxEnt principle to incorporate fourth-order moments (means, covariances, skewness, kurtosis) of model parameters, computed and measured model responses, as well as fourth (and higher) order sensitivities of computed model responses to model parameters. This new methodology is designated by the acronym 4th-BERRU-PM, which stands for “fourth-order best-estimate results with reduced uncertainties.” The results predicted by the 4th-BERRU-PM methodology incorporate, as particular cases, the results previously predicted by the second-order predictive modeling methodology 2nd-BERRU-PM, and vastly generalize the results produced by extant data assimilation and data adjustment procedures.
This work presents a comprehensive second-order predictive modeling (PM) methodology based on the maximum entropy (MaxEnt) principle for obtaining best-estimate mean values and correlations for model responses and parameters. This methodology is designated by the acronym 2nd-BERRU-PMP, where the attribute “2nd” indicates that this methodology incorporates second-order uncertainties (means and covariances) and second (and higher) order sensitivities of computed model responses to model parameters. The acronym BERRU stands for “Best-Estimate Results with Reduced Uncertainties” and the last letter (“P”) in the acronym indicates “probabilistic,” referring to the MaxEnt probabilistic inclusion of the computational model responses. This is in contradistinction to the 2nd-BERRU-PMD methodology, which deterministically combines the computed model responses with the experimental information, as presented in the accompanying work (Part I). Although both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies yield expressions that include second (and higher) order sensitivities of responses to model parameters, the respective expressions for the predicted responses, for the calibrated predicted parameters and for their predicted uncertainties (covariances), are not identical to each other. Nevertheless, the results predicted by both the 2nd-BERRU-PMP and the 2nd-BERRU-PMD methodologies encompass, as particular cases, the results produced by the extant data assimilation and data adjustment procedures, which rely on the minimization, in a least-squares sense, of a user-defined functional meant to represent the discrepancies between measured and computed model responses.
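As general background (not the paper's derivation), the qualifier “second-order” can be related to a standard MaxEnt fact: when only the mean vector and covariance matrix of an n-dimensional uncertain quantity z are imposed as constraints, the MaxEnt distribution is the multivariate Gaussian

```latex
p(\mathbf{z}) = (2\pi)^{-n/2}\,|\mathbf{C}|^{-1/2}
\exp\!\left[-\tfrac{1}{2}\,(\mathbf{z}-\bar{\mathbf{z}})^{\top}\mathbf{C}^{-1}(\mathbf{z}-\bar{\mathbf{z}})\right],
```

which is the distribution implicitly underlying second-order (mean-and-covariance) information.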
This work (in two parts) will present a novel predictive modeling methodology aimed at obtaining “best-estimate results with reduced uncertainties” for the first four moments (mean values, covariance, skewness and kurtosis) of the optimally predicted distribution of model results and calibrated model parameters, by combining fourth-order experimental and computational information, including fourth (and higher) order sensitivities of computed model responses to model parameters. Underlying the construction of this fourth-order predictive modeling methodology is the “maximum entropy principle,” which is initially used to obtain a novel closed-form expression of the (moments-constrained) fourth-order Maximum Entropy (MaxEnt) probability distribution constructed from the first four moments (means, covariances, skewness, kurtosis), which are assumed to be known, of an otherwise unknown distribution of a high-dimensional multivariate uncertain quantity of interest. This fourth-order MaxEnt distribution provides optimal compatibility of the available information while simultaneously ensuring minimal spurious information content, yielding an estimate of a probability density with the highest uncertainty among all densities satisfying the known moment constraints. Since this novel generic fourth-order MaxEnt distribution is of interest in its own right for applications in addition to predictive modeling, its construction is presented separately, in this first part of a two-part work. The fourth-order predictive modeling methodology that will be constructed by particularizing this generic fourth-order MaxEnt distribution will be presented in the accompanying work (Part 2).
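In the simplest (scalar) setting, and using generic notation rather than the closed-form multivariate expression derived in this work, a MaxEnt density constrained by the first four moments μ_k = E[x^k] has an exponential-polynomial form whose Lagrange multipliers are fixed by the moment constraints:

```latex
p(x) = \exp\!\left(-\lambda_0 - \lambda_1 x - \lambda_2 x^{2} - \lambda_3 x^{3} - \lambda_4 x^{4}\right),
\qquad \int x^{k}\, p(x)\, dx = \mu_k, \quad k = 0,\dots,4 \;(\mu_0 = 1),
```

with λ_4 > 0 required for normalizability. The contribution described above is an explicit closed-form construction of the corresponding high-dimensional multivariate distribution.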
This paper is the second part of the new evaluation of atomic masses, AME2012. From the results of a least-squares calculation, described in Part I, for all accepted experimental data, we derive here tables and graphs to replace those of AME2003. The first table lists atomic masses. It is followed by a table of the influences of data on primary nuclides, a table of separation energies and reaction energies, and, finally, a series of graphs of separation and decay energies. The last section of this paper lists all references to the input data used in Part I of this AME2012, as well as to the data included in the NUBASE2012 evaluation (the first paper in this issue).
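As a reminder of how the tabulated separation energies follow from the atomic masses in the first table (standard definitions, not specific to the AME2012 notation), the one- and two-neutron separation energies of a nuclide with Z protons and N neutrons are

```latex
S_n(Z,N) = \left[M(Z,N-1) + m_n - M(Z,N)\right]c^{2}, \qquad
S_{2n}(Z,N) = \left[M(Z,N-2) + 2m_n - M(Z,N)\right]c^{2},
```

with analogous expressions for proton separation energies; reaction energies (Q-values) are likewise mass differences between initial and final states.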
In the information era, the core business and confidential information of enterprises and organizations is stored in information systems. However, malicious insiders may be hidden within an organization; these users intentionally or unintentionally misuse their privileges to obtain sensitive information from the company. Existing approaches to insider threat detection mostly focus on monitoring, detecting, and preventing malicious behavior generated by users within an organization's system, while ignoring the impact of imbalanced ground-truth insider threat data on security. To detect insider threats more effectively, a data processing tool was developed to convert detected user activity into information-use events, and a Data Adjustment (DA) strategy was formulated to adjust the weights of the minority and majority samples. Then, an efficient ensemble strategy was utilized, applying the extreme gradient boosting (XGBoost) model combined with the DA strategy to detect anomalous behavior. The CERT insider threat dataset, a real-world dataset with artificially injected insider threat events, was used to evaluate our approach. The results demonstrate that the proposed approach can effectively detect insider threats, with an accuracy of 99.51% and an average recall of 98.16%. Compared with other classifiers, the detection performance is improved by 8.76%.
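The paper's exact feature pipeline and Data Adjustment (DA) strategy are not reproduced here, but the following minimal sketch illustrates the general pattern of re-weighting an imbalanced insider-threat dataset before training an XGBoost classifier; the data variables and split settings are hypothetical placeholders.

```python
# Minimal sketch (not the paper's pipeline): class re-weighting for an
# imbalanced insider-threat dataset, followed by XGBoost training.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report


def train_weighted_xgb(X, y):
    """X: per-event feature matrix; y: 0 = benign, 1 = insider threat (minority)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    # Up-weight the minority class in proportion to the imbalance ratio,
    # a simple stand-in for the Data Adjustment (DA) idea described above.
    pos_weight = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)
    model = XGBClassifier(n_estimators=300, max_depth=6,
                          scale_pos_weight=pos_weight, eval_metric="logloss")
    model.fit(X_tr, y_tr)
    print(classification_report(y_te, model.predict(X_te)))
    return model
```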
Using the Ensemble Adjustment Kalman Filter (EAKF), two types of ocean satellite datasets were assimilated into the First Institute of Oceanography Earth System Model (FIO-ESM), v1.0. One control experiment without data assimilation and four assimilation experiments were conducted. All experiments were ensemble runs over a 1-year period, and each ensemble started from different initial conditions. One assimilation experiment was designed to assimilate sea level anomaly (SLA); another, to assimilate sea surface temperature (SST); and the other two assimilation experiments were designed to assimilate both SLA and SST, but in different orders. To examine the effects of data assimilation, all results were compared with the EN3 objective analysis dataset. Unlike an uncoupled ocean model, FIO-ESM calculates the momentum and heat fluxes via air-sea coupling, which brings the relations among variables closer to reality. The outputs after the assimilation of satellite data were improved on the whole, especially at depths shallower than 1000 m. The effects of assimilating the different kinds of satellite datasets differed somewhat: the improvement due to SST assimilation was greater near the surface, while the improvement due to SLA assimilation was relatively large in the subsurface. The results after assimilating both SLA and SST were much better than those from assimilating only one kind of dataset, but the difference due to the assimilation order of the two kinds of datasets was not significant.
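For reference, the deterministic observation update at the heart of the EAKF can be written, in its standard scalar form (following the usual ensemble-adjustment formulation, not necessarily the exact implementation coupled to FIO-ESM), as follows: given a prior ensemble with mean x̄_p and variance σ_p² in observation space, and an observation y with error variance σ_o²,

```latex
\sigma_u^{2} = \left(\sigma_p^{-2} + \sigma_o^{-2}\right)^{-1}, \qquad
\bar{x}_u = \sigma_u^{2}\left(\frac{\bar{x}_p}{\sigma_p^{2}} + \frac{y}{\sigma_o^{2}}\right), \qquad
x_i^{u} = \bar{x}_u + \sqrt{\frac{\sigma_u^{2}}{\sigma_p^{2}}}\,\bigl(x_i^{p} - \bar{x}_p\bigr),
```

so the ensemble is shifted and contracted deterministically (no perturbed observations), and the resulting increments are regressed onto the model state variables.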
Bayesian estimation theory provides a general approach to state estimation for linear or nonlinear and Gaussian or non-Gaussian systems. In this study, we first explore two Bayesian-based methods: the ensemble adjustment Kalman filter (EAKF) and the sequential importance resampling particle filter (SIR-PF), using the well-known nonlinear and non-Gaussian Lorenz '63 model. The EAKF, which is a deterministic scheme of the ensemble Kalman filter (EnKF), performs better than the classical (stochastic) EnKF in a general framework. Comparison between the SIR-PF and the EAKF reveals that the former outperforms the latter if the ensemble size is large enough to avoid filter degeneracy, and vice versa. The impacts of the probability density functions and effective ensemble sizes on assimilation performance are also explored. On the basis of the comparisons between the SIR-PF and the EAKF, a mixture filter, called the ensemble adjustment Kalman particle filter (EAKPF), is proposed to combine the merits of both. Similar to the ensemble Kalman particle filter, which combines the stochastic EnKF and SIR-PF analysis schemes with a tuning parameter, the new mixture filter essentially provides a continuous interpolation between the EAKF and the SIR-PF. The same Lorenz '63 model is used as a testbed, showing that the EAKPF is able to overcome filter degeneracy while maintaining the non-Gaussian nature, and performs better than the EAKF for limited ensemble sizes.
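The Lorenz '63 testbed used above is small enough to state explicitly; the sketch below integrates it with a fourth-order Runge-Kutta scheme using the classical parameter values (the observation network, ensemble sizes and filter settings of the study are not reproduced).

```python
# Lorenz '63 model (sigma = 10, rho = 28, beta = 8/3): the standard chaotic,
# non-Gaussian benchmark used to compare the EAKF, SIR-PF and mixture filters.
import numpy as np


def lorenz63_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])


def integrate_rk4(state, dt=0.01, n_steps=1):
    """Advance the Lorenz '63 state with fourth-order Runge-Kutta steps."""
    for _ in range(n_steps):
        k1 = lorenz63_rhs(state)
        k2 = lorenz63_rhs(state + 0.5 * dt * k1)
        k3 = lorenz63_rhs(state + 0.5 * dt * k2)
        k4 = lorenz63_rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state


# Example: spin up a small ensemble of states, as a filter testbed would.
rng = np.random.default_rng(0)
ensemble = rng.normal(loc=[1.0, 1.0, 25.0], scale=1.0, size=(20, 3))
ensemble = np.array([integrate_rk4(m, dt=0.01, n_steps=500) for m in ensemble])
```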
The purpose of this research is to demonstrate that a calibration curve can be obtained that can be used for any infiltration test with the double-ring method, as well as an equation that helps speed up data processing. The experiments were carried out at eight points in Nicaragua, of which five were distributed in Managua and three in Rivas-Nandaime. These results can be used for other studies of interest. As a result, a calibration curve is obtained and an expression is deduced, which serves as the equation to determine the average infiltration of a field test using the double ring over a total of 7 hours. From this result, the soil texture can be determined by means of the indicator table. The methodology allowed the data to be analyzed as they were obtained and processed, resulting in the calibration curve for infiltration tests. Finally, an equation was determined from the averages of the processed data, yielding a correlation coefficient of 0.9976, well above 0.5, which indicates a very high and reliable fit.