The capability of accurately predicting mineralogical brittleness index (BI) from basic suites of well logs is desirable as it provides a useful indicator of the fracability of tight formations. Measuring mineralogical components in rocks is expensive and time consuming. However, the basic well log curves are not well correlated with BI, so correlation-based machine-learning methods are unable to derive highly accurate BI predictions from such data. A correlation-free, optimized data-matching algorithm is configured to predict BI on a supervised basis from well log and core data available from two published wells in the Lower Barnett Shale Formation (Texas). This transparent open box (TOB) algorithm matches data records by calculating the sum of squared errors between their variables and selecting the best matches as those with the minimum squared errors. It then applies optimizers to adjust the weights applied to individual variable errors to minimize the root mean square error (RMSE) between calculated and predicted BI. The prediction accuracy achieved by TOB using just five well logs (Gr, ρb, Ns, Rs, Dt) to predict BI depends on the density of the data records sampled. At a sampling density of about one sample per 0.5 ft, BI is predicted with RMSE ~0.056 and R^2 ~0.790; at about one sample per 0.1 ft, with RMSE ~0.008 and R^2 ~0.995. Adding a stratigraphic height index as an additional (sixth) input variable improves BI prediction accuracy to RMSE ~0.003 and R^2 ~0.999 for the two wells, with only 1 record in 10,000 yielding a BI prediction error greater than ±0.1. The model has the potential to be applied on an unsupervised basis to predict BI from basic well log data in surrounding wells that lack mineralogical measurements but have similar lithofacies and burial histories. The method could also be extended to predict elastic rock properties and seismic attributes from well and seismic data, improving the precision with which brittleness index and fracability are mapped spatially.
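As a rough illustration of the matching-and-weighting loop described above, the following Python sketch implements a TOB-style predictor. The variable names, the choice of k best matches, the inverse-error match weighting, and the use of SciPy's Nelder-Mead optimizer are all illustrative assumptions rather than the authors' implementation.

```python
# A minimal sketch of TOB-style weighted data matching, assuming
# min-max-normalized inputs; not the authors' code.
import numpy as np
from scipy.optimize import minimize

def tob_predict(weights, X_train, y_train, X_test, k=3):
    """Predict BI for each test record from its k best-matching training records."""
    preds = []
    for x in X_test:
        # Weighted sum of squared errors against every training record.
        sse = (weights * (X_train - x) ** 2).sum(axis=1)
        best = np.argsort(sse)[:k]               # k minimum-error matches
        inv = 1.0 / (sse[best] + 1e-12)          # closer matches weigh more
        preds.append(np.dot(inv, y_train[best]) / inv.sum())
    return np.array(preds)

def rmse_objective(weights, X_train, y_train, X_val, y_val):
    y_hat = tob_predict(np.abs(weights), X_train, y_train, X_val)
    return np.sqrt(np.mean((y_hat - y_val) ** 2))

# Five log curves (Gr, ρb, Ns, Rs, Dt) -> columns of X; BI -> y (synthetic here).
rng = np.random.default_rng(0)
X = rng.random((200, 5)); y = rng.random(200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]
res = minimize(rmse_objective, x0=np.ones(5), args=(X_tr, y_tr, X_va, y_va),
               method="Nelder-Mead")
print("optimized variable weights:", np.abs(res.x))
```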
Most existing domain adaptation (DA) methods aim to achieve favorable performance under complicated environments by sampling. However, three unsolved problems limit their efficiency: (i) they adopt global sampling but neglect to exploit global and local sampling simultaneously; (ii) they transfer knowledge from either a global or a local perspective, overlooking the transmission of confident knowledge from both; and (iii) they apply repeated sampling during iteration, which takes considerable time. To address these problems, knowledge transfer learning via dual density sampling (KTL-DDS) is proposed in this study, which consists of three parts: (i) dual density sampling (DDS), which jointly leverages two sampling methods with different views, namely global density sampling to extract representative samples with the most common features and local density sampling to select representative samples with critical boundary information; (ii) consistent maximum mean discrepancy (CMMD), which reduces intra- and cross-domain risks and guarantees high consistency of knowledge by shortening the distances between every two of the four subsets collected by DDS; and (iii) knowledge dissemination (KD), which transmits confident and consistent knowledge from the representative target samples with global and local properties to the whole target domain by preserving the neighboring relationships of the target domain. Mathematical analyses show that DDS avoids repeated sampling during iteration. With these three actions, confident knowledge with both global and local properties is transferred, and memory use and running time are greatly reduced. In addition, a general framework named dual density sampling approximation (DDSA) is developed, which can be easily applied to other DA algorithms. Extensive experiments on five datasets in clean, label corruption (LC), feature missing (FM), and LC&FM environments demonstrate the encouraging performance of KTL-DDS.
Funding: supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B010166006), the National Natural Science Foundation of China (61972102), the Guangzhou Science and Technology Plan Project (023A04J1729), and the Science and Technology Development Fund (FDCT), Macao SAR (015/2020/AMJ).
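For concreteness, the sketch below computes the biased RBF-kernel maximum mean discrepancy that CMMD-style distance terms build on. It is a minimal sketch under assumed data shapes and kernel width; the subset pairing and consistency weighting specific to KTL-DDS are not reproduced.

```python
# Biased estimate of squared MMD between two sample sets; only the core
# distance is shown, not the full CMMD construction.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, (100, 8))   # e.g. a source-domain subset from DDS
tgt = rng.normal(0.5, 1.0, (120, 8))   # e.g. a target-domain subset
print("MMD^2 between subsets:", mmd2(src, tgt))
```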
Non-agricultural lands are generally surveyed sparsely. Meanwhile, soils in these areas usually exhibit strong spatial variability, which requires more samples to produce acceptable estimates. Capulin Volcano National Monument, a typical sparsely surveyed area, was chosen to assess the spatial variability of a variety of soil properties and to investigate its implications for sampling design. One hundred and forty-one composited soil samples were collected across the Monument and the surrounding areas. Soil properties including pH, organic matter content, extractable elements such as calcium (Ca), magnesium (Mg), potassium (K), sodium (Na), phosphorus (P), sulfur (S), zinc (Zn), and copper (Cu), as well as sand, silt, and clay percentages, were analyzed for each sample. Semivariograms of all properties were constructed, standardized, and compared to estimate the spatial variability of the soil properties in the area. Based on the similarity among the standardized semivariograms, we found that they could be generalized for physical and chemical properties, respectively. The generalized semivariogram for physical properties had a much greater sill (2.635) and effective range (7 500 m) than that for chemical properties. Optimal sampling density (OSD), derived from the generalized semivariogram and defining the relationship between sampling density and expected error percentage, was proposed to represent, interpret, and compare soil spatial variability and to guide sampling scheme design. The OSDs showed that chemical properties exhibit stronger local spatial variability than soil texture parameters, implying that more samples or analyses are required to achieve a similar level of precision.
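The analysis above rests on the empirical semivariogram, gamma(h) = (1/2N(h)) * sum over pairs of (z_i - z_j)^2, standardized by the sample variance so that curves for different properties can be compared. A minimal sketch, with synthetic locations and pH values standing in for the Monument data:

```python
# Empirical, variance-standardized semivariogram over distance-lag bins.
import numpy as np

def empirical_semivariogram(coords, z, lags):
    dists = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sqdiff = (z[:, None] - z[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (dists > lo) & (dists <= hi)   # pairs falling in this lag bin
        gamma.append(0.5 * sqdiff[mask].mean() if mask.any() else np.nan)
    return np.array(gamma) / z.var()          # standardized semivariogram

rng = np.random.default_rng(2)
coords = rng.uniform(0, 7500, (141, 2))       # 141 sample locations (m), synthetic
z = rng.normal(6.5, 0.8, 141)                 # e.g. soil pH at each location
lags = np.linspace(0, 7500, 16)
print(empirical_semivariogram(coords, z, lags))
```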
Incorporating auxiliary data can improve the spatial estimation of soil properties and reduce the required sampling intensity. In this study, two interpolation methods that make use of auxiliary variables, cokriging and regression-kriging, were compared with ordinary kriging; using the salinity data from the first two survey stages as auxiliary variables, both methods improved the interpolation of soil salinity in coastal saline land. The prediction accuracy of the three methods was assessed under different sampling densities of the target variable by comparison with a separate group of 80 validation sample points, from which the root-mean-square error (RMSE) and correlation coefficient (r) between predicted and measured values were calculated. The results showed that, with the help of auxiliary data, whatever the sample size of the target variable, cokriging and regression-kriging performed better than ordinary kriging, and regression-kriging produced on average more accurate predictions than cokriging. Compared with the ordinary kriging results, cokriging improved the estimations by reducing RMSE by 23.3–29% and increasing r by 16.6–25.5%, while regression-kriging reduced RMSE by 25–41.5% and increased r by 16.8–27.2%. Regression-kriging therefore shows promise for improving the prediction of soil salinity and for considerably reducing soil sampling intensity while maintaining high prediction accuracy. Moreover, in regression-kriging the regression model can take any form, such as generalized linear models, non-linear models, or tree-based models, which makes it possible to include more ancillary variables.
Funding: the National Natural Science Foundation of China (40571066, 40001008), the Postdoctoral Science Foundation of China (20060401048), and the Key Program of the Science and Technology Bureau of Zhejiang Province, China (030523).
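Regression-kriging, as used above, fits a regression on the auxiliary variables and then ordinary-kriges the regression residuals. The sketch below assumes an exponential variogram with illustrative sill and range values and fully synthetic data; it is not the study's fitted model.

```python
# Regression-kriging sketch: OLS on auxiliary variables + ordinary kriging
# of residuals with an assumed exponential variogram.
import numpy as np

def variogram(h, sill, rng_):
    return sill * (1.0 - np.exp(-3.0 * h / rng_))

def ordinary_krige(coords, resid, targets, sill, rng_):
    n = len(resid)
    d = np.sqrt(((coords[:, None] - coords[None, :]) ** 2).sum(-1))
    # Kriging system in covariance form, plus Lagrange-multiplier row/column.
    A = np.ones((n + 1, n + 1)); A[-1, -1] = 0.0
    A[:n, :n] = sill - variogram(d, sill, rng_)
    out = []
    for t in targets:
        h = np.sqrt(((coords - t) ** 2).sum(-1))
        b = np.append(sill - variogram(h, sill, rng_), 1.0)
        w = np.linalg.solve(A, b)[:n]
        out.append(w @ resid)
    return np.array(out)

rng = np.random.default_rng(3)
coords = rng.uniform(0, 1000, (80, 2))         # sampled locations (m), synthetic
aux = rng.normal(size=(80, 2))                 # auxiliary variables (e.g. earlier salinity stages)
z = 2.0 + aux @ np.array([1.5, -0.7]) + rng.normal(0, 0.3, 80)
beta, *_ = np.linalg.lstsq(np.c_[np.ones(80), aux], z, rcond=None)
resid = z - np.c_[np.ones(80), aux] @ beta
new = rng.uniform(0, 1000, (5, 2))             # prediction locations
new_aux = rng.normal(size=(5, 2))
pred = np.c_[np.ones(5), new_aux] @ beta + ordinary_krige(coords, resid, new, 0.09, 300.0)
print("regression-kriging predictions:", pred)
```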
China's continental deposition basins are characterized by complex geological structures and diverse reservoir lithologies, so high-precision exploration methods are needed. High-density spatial sampling is a new technology for increasing the accuracy of seismic exploration. We briefly discuss point-source and point-receiver technology, analyze in-situ methods for high-density spatial sampling, introduce the symmetric sampling principles presented by Gijs J. O. Vermeer, and discuss high-density spatial sampling from the point of view of wavefield continuity. We emphasize the characteristics of high-density spatial sampling, including the advantages of high-density first breaks for investigating near-surface structure and improving static-correction precision, and the use of dense receiver spacing at short offsets to increase the effective coverage at shallow depth and the accuracy of reflection imaging. Coherent noise is not aliased, so the precision of noise analysis and suppression increases as a result. High-density spatial sampling enhances wavefield continuity and the accuracy of various mathematical transforms, which benefits wavefield separation. Finally, we point out that the difficult part of high-density spatial sampling technology is data processing; more research is needed on methods for analyzing and processing the huge amounts of seismic data involved.
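One way to make the aliasing argument concrete is the spatial Nyquist condition, dx <= v_app / (2 * f_max): the slowest apparent velocity and the highest frequency to preserve set the maximum unaliased trace spacing. The numbers below are assumed for illustration, not taken from the paper.

```python
# Back-of-envelope spatial Nyquist check for unaliased coherent noise.
v_app_min = 800.0   # slowest apparent velocity of ground roll, m/s (assumed)
f_max = 60.0        # highest frequency to preserve, Hz (assumed)
dx_max = v_app_min / (2.0 * f_max)
print(f"max unaliased trace spacing: {dx_max:.1f} m")  # -> 6.7 m
```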
There are limitations when conventional methods are applied to analyze the massive amounts of seismic data acquired with high-density spatial sampling, since processors usually obtain the properties of raw data from common shot gathers or other datasets located at certain points or along lines. In this paper we propose a novel method for observing seismic data on time slices from spatial subsets. The composition of a spatial subset and the distinctive character of orthogonal and oblique subsets are described, and pre-stack subsets are shown by 3D visualization. In seismic data processing, spatial subsets can be used: (1) to check the uniformity and regularity of the trace distribution; (2) to observe the main features of ground roll and linear noise; (3) to find abnormal traces on slices of datasets; and (4) to QC the results of pre-stack noise attenuation. The field data application shows that seismic data analysis in spatial subsets is an effective method that may lead to better discrimination among various wavefields and help us obtain more information.
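A time slice from a spatial subset is just one sample index taken across every trace in the subset; the sketch below shows this and a simple abnormal-trace check on the slice, with all array shapes and thresholds assumed for illustration.

```python
# Extract a time slice from a binned pre-stack volume and flag outlier traces.
import numpy as np

rng = np.random.default_rng(4)
volume = rng.normal(size=(50, 60, 1000))   # (inline, crossline, time samples), synthetic
dt_ms = 2.0
t_slice_ms = 800.0
k = int(t_slice_ms / dt_ms)                # sample index of the time slice
time_slice = volume[:, :, k]               # 2D spatial view at one time
# Abnormal traces stand out as spatial outliers on the slice:
amp = np.abs(time_slice)
bad = np.argwhere(amp > amp.mean() + 4 * amp.std())
print("suspect (inline, crossline) bins:", bad[:5])
```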
More than 40 national and regional geochemical mapping projects carried out worldwide from 1973 to 1988 do not conform to common standards; in particular, they have many analytical deficiencies. In the period 1988 to 1992, the International Geochemical Mapping project (Project 259 of UNESCO's IGCP Program) prepared recommendations designed to standardize geochemical mapping methods. The analytical requirements are an essential component of the overall recommendations. They include the following: 71 elements should be analyzed in future mapping projects; the detection limits of trace and ultra-trace elements must be lower than the corresponding crustal abundances; and the Chinese GSD and Canadian STSD standard sample series should be used for the correlation of global data. A proposal was also made to collect 5000 composite samples at very low sampling densities to cover the whole of the Earth's land surface. In 1997, an IUGS Working Group on Global Geochemical Baselines was formed to continue the work begun under IGCP 259. Since 1997, new progress has been made under the aegis of this working group, especially in China and the FOREGS countries, including the study of suitable sampling media, the development of a multi-element analytical system, a new proficiency test for the selection of competent laboratories, and the role of wide-spaced mapping in mineral exploration. One of the major problems awaiting solution has been the inability of many laboratories to meet the IGCP recommendations for generating high-quality geochemical maps. Fortunately, several laboratories in China and Europe have demonstrated an ability to meet the requirements, and they will be well placed to render technical assistance to other countries.
Detection and tracking of multiple targets of unknown and varying number is a challenging issue, especially under low signal-to-noise ratio (SNR) conditions. A modified multi-target track-before-detect (TBD) method was proposed to tackle this issue using a nonstandard point observation model. The method was developed from the sequential Monte Carlo (SMC)-based probability hypothesis density (PHD) filter and was implemented by modifying the original calculation of the particle update weights and by adopting an adaptive particle sampling strategy. To execute the SMC-PHD based TBD method efficiently, a fast implementation approach was also presented in which the particles are partitioned into multiple subsets according to their position coordinates in the 2D resolution cells of the sensor. Simulation results show the effectiveness of the proposed method for time-varying multi-target tracking using raw observation data.
Funding: Projects (61002022, 61471370) supported by the National Natural Science Foundation of China.
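The core SMC update that PHD-style TBD methods modify rescales each particle's weight by the likelihood of the raw amplitude data. The sketch below shows that generic step under an assumed Gaussian amplitude model and assumed resolution-cell geometry; the paper's nonstandard observation model, birth process, and PHD pruning are not reproduced.

```python
# Generic SMC weight update against raw (unthresholded) sensor amplitudes.
import numpy as np

rng = np.random.default_rng(5)
n = 500
particles = rng.uniform(0, 100, (n, 2))        # candidate (x, y) positions
weights = np.full(n, 1.0 / n)

def amplitude_likelihood(p, frame, snr=3.0):
    """Likelihood ratio of raw cell amplitude given a target at particle p."""
    cell = tuple(np.clip(p.astype(int) // 10, 0, 9))   # 10x10 resolution cells (assumed)
    a = frame[cell]
    # Target-present vs noise-only Gaussian density ratio (assumed model).
    return np.exp(snr * a - 0.5 * snr ** 2)

frame = rng.normal(size=(10, 10))              # one frame of raw sensor data
frame[3, 7] += 3.0                             # a weak target below any hard threshold
weights *= np.array([amplitude_likelihood(p, frame) for p in particles])
weights /= weights.sum()
print("posterior mass near cell (3, 7):",
      weights[(particles[:, 0] // 10 == 3) & (particles[:, 1] // 10 == 7)].sum())
```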
The optimal sampling density of the standard particle probability hypothesis density (P-PHD) filter is difficult to obtain in analytical form, and its tracking performance declines when a clustering algorithm is used to extract target states. To address these problems, a free-clustering optimal P-PHD (FCO-P-PHD) filter is proposed. This method yields an analytical form of the optimal sampling density of the P-PHD filter and realizes the optimal P-PHD filter without using clustering algorithms to extract target states. Moreover, because the state extraction method in the FCO-P-PHD filter is coupled with the derivation of the analytical optimal sampling density, a decoupling process yields a new single-sensor free-clustering state extraction method. Combining this method with the standard P-PHD filter gives the FC-P-PHD filter, which significantly improves the tracking performance of the P-PHD filter. Finally, the effectiveness of the proposed algorithms and their advantages over other algorithms are validated through several simulation experiments.
Based on previous studies, the research methods and influencing factors of the spatial variation of soil nutrients are summarized. The spatial variation of soil nutrients is generally studied with geostatistical methods, and the spatial distribution of nutrients is visualized using Kriging interpolation. The influencing factors mainly include topography, sampling method, sampling spacing, sampling density, and sampling scale. The influence of random sampling and grid sampling on interpolation is analyzed with respect to the specific conditions of the actual study area. The influence of sampling density and topography on the spatial variation of soil nutrients cannot be ignored, especially for available nutrients. When samples are collected over a large area (i.e., at a small or medium scale), the spatial variation of soil nutrients is large and shows strong spatial autocorrelation; over a small area (i.e., at a large scale), the spatial variability of soil nutrients is small but still shows clear spatial autocorrelation. This review provides intuitive and convenient reference material for subsequent researchers.
An improved method using kernel density estimation (KDE) and confidence levels is presented for model validation with small samples. Decision making is challenging because of input uncertainty, and only small samples can be used due to the high cost of experimental measurements; model validation nevertheless gives decision makers more confidence while improving prediction accuracy. The confidence level method is introduced, and the optimum sample variance is determined using a new method in kernel density estimation to increase the credibility of the model validation. As a numerical example, the static frame model validation challenge problem presented by Sandia National Laboratories is chosen. The optimum bandwidth is selected in kernel density estimation in order to build the probability model based on the calibration data. Model assessment is then performed against the validation and accreditation experimental data, respectively, based on this probability model. Finally, the target structure prediction is performed using the validated model, and the results are consistent with those obtained by other researchers. The results demonstrate that the method combining the improved confidence level with kernel density estimation is an effective approach to the model validation problem with small samples.
Funding: the Jiangsu Innovation Program for Graduate Education (CXZZ11_0193) and NUAA Research Funding (NJ2010009).
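As a minimal sketch of building a probability model from small-sample calibration data with a Gaussian KDE (Silverman's rule stands in for the paper's optimized bandwidth selection, and the sample values and threshold are assumed):

```python
# Gaussian KDE over a small calibration sample, then a tail probability.
import numpy as np
from scipy.stats import gaussian_kde

calibration = np.array([12.1, 11.4, 13.0, 12.6, 11.9])  # assumed small sample
kde = gaussian_kde(calibration, bw_method="silverman")  # rule-of-thumb bandwidth
# Probability mass above an assumed acceptance threshold of 12.5:
p_exceed = kde.integrate_box_1d(12.5, np.inf)
print(f"P(response > 12.5) = {p_exceed:.3f}")
```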