Abstract: In recent years there has been increasing interest in developing spatial statistical models for data sets that appear spatially independent. This lack of spatial structure makes it difficult, if not impossible, to use optimal predictors such as ordinary kriging to model the spatial variability in the data. In many instances, however, the data still contain a wealth of information that could be used to gain flexibility and precision in estimation. In this paper we propose combining regression analysis, to describe the large-scale spatial variability in a set of survey data, with a tree-based stratification design to enhance estimation of the small-scale spatial variability. With this approach, sample units (i.e., pixels of a satellite image) are classified into homogeneous classes with respect to predictions of error attributes, and the classes are then used as strata in a stratified analysis. Independent variables used as the basis of stratification included terrain data and satellite imagery. A decision rule was used to identify a tree size that minimized the error in estimating the variance of the mean response and the prediction uncertainty at new spatial locations. The approach was applied to a set of n = 937 forested plots from a state-wide inventory conducted in 2006 in the Mexican state of Jalisco. The final models accounted for 62% to 82% of the variability observed in canopy closure (%), basal area (m²·ha⁻¹), cubic volume (m³·ha⁻¹), and biomass (t·ha⁻¹) on the sample plots. The spatial models provided unbiased estimates, and when averaged over all sample units in the population, estimates of forest structure were very close to those obtained with classical estimators based on the sampling strategy used in the state-wide inventory. The spatial models also provided unbiased estimates of model variances, leading to confidence- and prediction-interval coverage rates close to the nominal 0.95 rate.
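As a rough illustration of this estimation strategy (not the authors' exact estimator), the sketch below fits a linear trend to synthetic survey data, grows a small regression tree on a squared-residual "error attribute" to define strata, and then computes a stratified estimate of the mean residual and of its variance. The covariates, tree size, and data are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the survey data: X holds terrain and spectral
# covariates for n sampled plots, y is a response such as basal area.
n, p = 937, 5
X = rng.normal(size=(n, p))
y = 40 + X @ rng.normal(size=p) + rng.normal(scale=5.0, size=n)

# 1. Regression model for the large-scale spatial trend.
trend = LinearRegression().fit(X, y)
residuals = y - trend.predict(X)

# 2. Tree-based stratification: a small regression tree grown on an
#    "error attribute" (here the squared residual) groups plots into
#    homogeneous classes; max_leaf_nodes stands in for the tree size
#    that the paper's decision rule would select.
strat_tree = DecisionTreeRegressor(max_leaf_nodes=4, min_samples_leaf=50,
                                   random_state=0).fit(X, residuals**2)
stratum = strat_tree.apply(X)  # leaf id = stratum label

# 3. Stratified estimate of the mean residual and its variance,
#    with strata weighted by their share of the sample.
strata, counts = np.unique(stratum, return_counts=True)
weights = counts / n
means = np.array([residuals[stratum == s].mean() for s in strata])
vars_ = np.array([residuals[stratum == s].var(ddof=1) for s in strata])

strat_mean = float(np.sum(weights * means))
strat_var_of_mean = float(np.sum(weights**2 * vars_ / counts))
print(f"stratified mean residual: {strat_mean:.3f} "
      f"(SE {np.sqrt(strat_var_of_mean):.3f})")
```

The point of the stratification step is that the variance of the stratified mean, sum over strata of W_h² s_h²/n_h, can be much smaller than the unstratified variance when the tree succeeds in separating plots with large and small errors.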
Abstract: The rise of historicism at the turn of the 19th century displaced the poetic and rhetorical standards that had long been held since Aristotle, seeking to integrate literary studies into a larger historical context; but in the first half of the 20th century several schools of literary criticism, including formalism, the New Criticism, structuralism, and poststructuralism, emerged to emphasize once again the importance of form and rhetoric. Since the 1980s and 1990s, a new inclination to seek out and promote the relationship between literature and the real world has encouraged literary research that interprets classical and contemporary works in light of current political issues. The influence of formalism and of its opponents, together with the emphasis placed on literary aesthetics along the way, has relaxed the standards by which literary significance is sought. We must therefore pay renewed attention to what literary works themselves are intended to tell the reader.
Funding: BZ's research was supported, in part, by National Institutes of Health grant U24 AA026968 and by University of Massachusetts Center for Clinical and Translational Science grants UL1TR001453, TL1TR01454, and KL2TR01455.
Abstract: Standard deviation (SD) and standard error of the mean (SEM) have been applied widely as error bars in scientific plots. Unfortunately, there is no universally accepted principle addressing which of these two measures should be used. Here we seek to fill this gap by outlining the reasoning for choosing SEM over SD, and we hope to shed light on this unsettled disagreement within the biomedical community. The utility of SEM and SD as error bars is further discussed by examining the figures and plots published in two research articles on pancreatic disease.
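The distinction the abstract turns on is that SD describes the spread of individual observations, whereas SEM = SD/√n describes the precision of the estimated mean and therefore shrinks as the sample grows. The minimal sketch below, using made-up data rather than anything from the cited articles, computes both for two hypothetical groups and plots the same means with SD and SEM error bars side by side.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Hypothetical measurements from two groups (e.g. control vs. treated).
groups = {"control": rng.normal(10.0, 2.0, size=30),
          "treated": rng.normal(12.5, 2.0, size=30)}

labels, means, sds, sems = [], [], [], []
for name, values in groups.items():
    labels.append(name)
    means.append(values.mean())
    sd = values.std(ddof=1)                 # SD: spread of the observations
    sds.append(sd)
    sems.append(sd / np.sqrt(len(values)))  # SEM: uncertainty of the mean

fig, axes = plt.subplots(1, 2, sharey=True, figsize=(7, 3))
for ax, err, title in zip(axes, (sds, sems), ("SD error bars", "SEM error bars")):
    ax.bar(labels, means, yerr=err, capsize=4)
    ax.set_title(title)
plt.tight_layout()
plt.show()
```

With n = 30 per group the SEM bars are about 5.5 times shorter than the SD bars, which is why the choice between the two changes how readers judge group differences.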
Funding: Supported by a soft science research grant (No. 2013-R2-43) from the Science and Technology Program funded by the Ministry of Housing and Urban-Rural Development of China.
Abstract: Many scholars have conducted visual analyses of urban skylines, but little attention has been paid to quantitative measures of specific design elements within the skyline. This article aims to help urban designers and regulators improve skylines and to investigate which factors make urban skylines more pleasant for people. Computer-generated images of skylines are tested for three factors: greenery, layering, and landmarks. A questionnaire, a simple and effective method of gathering responses, was used for data collection, and the degree of people's preferences was measured statistically. The study finds that the proportion of landmarks in the overall skyline, the height of layers, and the percentage of greenery deserve special attention. The authors also discuss the current problems of skyline design in typical Chinese cities in light of these findings.
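The abstract does not specify which statistical procedure was used, so the fragment below is only a generic sketch of how questionnaire preference ratings for one of the factors (greenery) might be summarized and compared across levels; the rating scale, sample sizes, and the one-way ANOVA are all assumptions, not the authors' method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical 1-5 preference ratings for skyline images that differ only
# in the percentage of greenery; one array of respondent ratings per level.
greenery_levels = {"10%": rng.integers(1, 6, size=40),
                   "30%": rng.integers(2, 6, size=40),
                   "50%": rng.integers(2, 6, size=40)}

for level, ratings in greenery_levels.items():
    print(f"greenery {level}: mean rating {ratings.mean():.2f}")

# One-way ANOVA as a simple test of whether mean preference differs by level.
f_stat, p_value = stats.f_oneway(*greenery_levels.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```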