Using natural limestone samples taken from the field, with dimensions of 500 mm × 500 mm × 1000 mm, the D-D (dilatancy-diffusion) seismogenic pattern was modeled under the condition of water injection, in order to observe the spatio-temporal evolution of the relevant physical fields of the loaded samples, from deformation and the formation of microcracks to the occurrence of the main rupture. The apparent-resistivity observations show that: ① the deformation process of the loaded rock sample, from microcracking to main rupture, can be characterized by precursory spatio-temporal changes in the observed apparent resistivity; ② the precursory temporal changes in apparent resistivity can be divided into several stages, and their spatial distribution differs between different parts of the rock sample; ③ before the main rupture of the rock sample, obvious "trend anomalies" and "short-term anomalies" were observed, some of which could be regarded as "impending-earthquake" anomaly precursors in apparent resistivity. The changes and distribution features of apparent resistivity are intrinsically related to the dilatancy of the loaded rock sample. Finally, the paper discusses the mechanism of the resistivity change of the loaded rock sample theoretically.
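The abstract closes with a theoretical discussion of the mechanism of resistivity change. One common, if simplified, way to connect crack porosity and water saturation to bulk resistivity is Archie's empirical law, sketched below. This is a generic illustration, not the paper's model: the exponents, the pore-water resistivity, and the porosity/saturation path are all assumed values.

```python
# Hedged sketch: Archie's empirical law as one common way to relate crack porosity
# and water saturation to bulk resistivity during dilatancy. The exponents and the
# porosity/saturation path below are illustrative assumptions, not values from the paper.
import numpy as np

def archie_resistivity(rho_w, phi, s_w, a=1.0, m=2.0, n=2.0):
    """Bulk resistivity from Archie's law: rho = a * rho_w * phi**-m * s_w**-n."""
    return a * rho_w * phi**(-m) * s_w**(-n)

rho_w = 10.0                          # pore-water resistivity (ohm*m), assumed
phi = np.linspace(0.01, 0.03, 5)      # crack porosity grows as the sample dilates
s_w = np.linspace(1.0, 0.7, 5)        # saturation drops while new cracks are still dry

for p, s in zip(phi, s_w):
    rho = archie_resistivity(rho_w, p, s)
    print(f"porosity={p:.3f}  saturation={s:.2f}  resistivity={rho:9.0f} ohm*m")
```

Whether the bulk resistivity rises or falls during dilatancy then depends on the balance between the newly created crack volume and how quickly water diffuses into it, which is the trade-off the D-D discussion concerns.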
Traditional models for semantic segmentation in point clouds primarily focus on smaller scales. However, in real-world applications, point clouds often exhibit larger scales, leading to heavy computational and memory requirements. The key to handling large-scale point clouds lies in leveraging random sampling, which offers higher computational efficiency and lower memory consumption compared to other sampling methods. Nevertheless, the use of random sampling can potentially result in the loss of crucial points during the encoding stage. To address these issues, this paper proposes the cross-fusion self-attention network (CFSA-Net), a lightweight and efficient network architecture specifically designed for directly processing large-scale point clouds. At the core of this network is the incorporation of random sampling alongside a local feature extraction module based on cross-fusion self-attention (CFSA). This module effectively integrates long-range contextual dependencies between points by employing hierarchical position encoding (HPC). Furthermore, it enhances the interaction between each point's coordinates and feature information through cross-fusion self-attention pooling, enabling the acquisition of more comprehensive geometric information. Finally, a residual optimization (RO) structure is introduced to extend the receptive field of individual points by stacking hierarchical position encoding and cross-fusion self-attention pooling, thereby reducing the impact of information loss caused by random sampling. Experimental results on the Stanford Large-Scale 3D Indoor Spaces (S3DIS), Semantic3D, and SemanticKITTI datasets demonstrate the superiority of this algorithm over advanced approaches such as RandLA-Net and KPConv. These findings underscore the excellent performance of CFSA-Net in large-scale 3D semantic segmentation.
Funding: National Natural Science Foundation of China Youth Project (61603127).
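As a small companion to the abstract above, here is a minimal sketch of the random downsampling step that the abstract identifies as the key to handling large-scale point clouds. The synthetic cloud, the array shapes, and the 25% keep ratio are assumptions for illustration; this is not the CFSA-Net encoder or its HPC/CFSA modules.

```python
# Minimal sketch of random downsampling of a large point cloud. The synthetic cloud,
# feature width and keep ratio are assumptions; this is not the CFSA-Net architecture.
import numpy as np

rng = np.random.default_rng(0)
N, C = 1_000_000, 8                        # number of points and feature channels (assumed)
xyz = rng.uniform(0.0, 50.0, size=(N, 3))  # synthetic coordinates
feat = rng.normal(size=(N, C))             # synthetic per-point features

def random_downsample(xyz, feat, ratio=0.25, rng=rng):
    """Keep a uniform random subset of points; cheap compared to farthest-point sampling."""
    keep = rng.choice(xyz.shape[0], size=int(xyz.shape[0] * ratio), replace=False)
    return xyz[keep], feat[keep]

xyz_ds, feat_ds = random_downsample(xyz, feat)
print(xyz_ds.shape, feat_ds.shape)         # (250000, 3) (250000, 8)
```

In an architecture like the one described, it is the neighborhood-aggregating modules (HPC, CFSA pooling, and the RO structure) that compensate for the points dropped by such a step.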
Research surveys are believed to have originated in antiquity, with evidence of them being performed in ancient Egypt and Greece. In the past century, their use has grown significantly, and they are now one of the most frequently employed research methods, including in the field of healthcare. Modern validation techniques and processes have allowed researchers to broaden the scope of qualitative data they can gather through these surveys, ranging from an individual's views on service quality to nationwide surveys that are undertaken regularly to follow healthcare trends. This article focuses on the evolution and current utility of research surveys, the different methodologies employed in their creation, the advantages and disadvantages of their different forms, and their future use in healthcare research. We also review the role of artificial intelligence and the importance of increased patient participation in the development of these surveys in order to obtain more accurate and clinically relevant data.
Data from the 2013 Canadian Tobacco, Alcohol and Drugs Survey and two other surveys are used to determine the effects of cannabis use on self-reported physical and mental health. Daily or almost daily marijuana use is shown to be detrimental to both measures of health for some age groups, but not all, and the age-group-specific effects depend on gender: males and females respond differently to cannabis use. The health costs of regularly using cannabis are significant, but they are much smaller than those associated with tobacco use. These costs are attributed both to the presence of delta-9-tetrahydrocannabinol and to the fact that smoking cannabis is itself a health hazard because of the toxic properties of the smoke ingested. Cannabis use is costlier for regular smokers, and first use below the age of 15 or 20, as well as being a former user, leads to reduced physical and mental capacities which are permanent. These results strongly suggest that the legalization of marijuana be accompanied by educational programs, counseling services, and a delivery system which minimizes juvenile and young adult usage.
In order to assess the school attendance status of children aged 7-14, to determine the causes of non-attendance, and to formulate appropriate policies for the implementation of the nine-year compulsory education programme, a sample survey of school-age children was carried out in Jianhe, Leishan and Taijiang, Guizhou Province, in October 1993.
We conducted a large-scale survey of extremely cold infrared sources (ECISs) along the Galactic plane. A total of 1912 IRAS sources were selected on the basis of their color indices and their association with recent star formation. A quick survey was made toward 724 sources, and significant CO emission above the detection limit of 0.9 K was detected toward 251 of them. Among the detected sources, 147 were found to have broad CO wing emission, including 116 newly detected sources. These sources comprise a new database for future studies of star formation in our Galaxy. Using the known outflow sources as an indicator, we find that the outflow detection rate of the quick survey is 62%, so the survey is reasonably sensitive for finding new outflow sources. Results from limited follow-up studies are introduced.
In the field work of population-based research, three groups of eyes were graded by two observers using LOCS II. The reproducibility of LOCS II was evaluated by the agreements (85%-100%) and kappa values (0.661-1) obtained in our study. These satisfactory results show that LOCS II is not only easy to learn and to apply consistently by different observers, but also has good reproducibility in field work. A longitudinal cataract study is planned.
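For readers unfamiliar with the reported statistics, the sketch below shows how percent agreement and Cohen's kappa are computed for two observers grading the same eyes. The toy grades are invented and do not come from the study.

```python
# Minimal sketch of percent agreement and Cohen's kappa for two observers grading the
# same eyes. The LOCS II grades below are hypothetical, not data from the study.
import numpy as np

def cohen_kappa(g1, g2):
    g1, g2 = np.asarray(g1), np.asarray(g2)
    labels = np.union1d(g1, g2)
    # confusion matrix of observer 1 (rows) vs observer 2 (columns)
    conf = np.array([[np.sum((g1 == a) & (g2 == b)) for b in labels] for a in labels], float)
    n = conf.sum()
    p_obs = np.trace(conf) / n                        # observed agreement
    p_exp = np.sum(conf.sum(0) * conf.sum(1)) / n**2  # agreement expected by chance
    return p_obs, (p_obs - p_exp) / (1.0 - p_exp)

obs1 = [0, 1, 1, 2, 2, 2, 3, 1, 0, 2]   # hypothetical grades, observer 1
obs2 = [0, 1, 2, 2, 2, 2, 3, 1, 0, 1]   # hypothetical grades, observer 2
agree, kappa = cohen_kappa(obs1, obs2)
print(f"agreement = {agree:.0%}, kappa = {kappa:.3f}")
```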
In this paper, the problem of non-response with significant travel costs in multivariate stratified sample surveys has been formulated as a Multi-Objective Geometric Programming Problem (MOGPP). The fuzzy programming approach is described for solving the formulated MOGPP. The formulated MOGPP has been solved with the help of the LINGO software, and the dual solution is obtained. The optimum allocations of sample sizes of respondents and non-respondents are obtained with the help of the dual solution and the primal-dual relationship theorem. A numerical example is given to illustrate the procedure.
Background: The importance of structurally diverse forests for the conservation of biodiversity and the provision of a wide range of ecosystem services has been widely recognised. However, tools to quantify the structural diversity of forests in an objective and quantitative way across many forest types and sites are still needed, for example to support biodiversity monitoring. Existing approaches to quantify forest structural diversity are based on small geographical regions or single forest types, typically using only small data sets.
Results: Here we developed an index of structural diversity based on National Forest Inventory (NFI) data of Baden-Württemberg, Germany, a state with 1.3 million ha of diverse forest types in different ownerships. Based on a literature review, 11 aspects of structural diversity were identified a priori as crucially important for describing structural diversity. An initial comprehensive list of 52 variables derived from NFI data related to structural diversity was reduced by applying five selection criteria to arrive at one variable for each aspect of structural diversity. These variables comprise 1) quadratic mean diameter at breast height (DBH), 2) standard deviation of DBH, 3) standard deviation of stand height, 4) number of decay classes, 5) bark-diversity index, 6) trees with DBH ≥ 40 cm, 7) diversity of flowering and fructification, 8) average mean diameter of downed deadwood, 9) mean DBH of standing deadwood, 10) tree species richness and 11) tree species richness in the regeneration layer. These variables were combined into a simple, additive index to quantify the level of structural diversity, which assumes values between 0 and 1. We applied this index in an exemplary way to broad forest categories and ownerships to assess its feasibility for analysing structural diversity in large-scale forest inventories.
Conclusions: The forest structure index presented here can be derived in a similar way from standard inventory variables for most other large-scale forest inventories, providing important information about biodiversity-relevant forest conditions and thus an evidence base for forest management and planning as well as reporting.
Funding: supported by a grant from the Ministry of Science, Research and the Arts of Baden-Württemberg (7533-10-5-78) to Jürgen Bauhus; Felix Storch received additional support through the BBW ForWerts Graduate Program.
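The abstract describes a simple additive index on [0, 1]; the sketch below illustrates how such an index can be assembled by min-max normalizing structural variables and averaging them. Only five of the eleven variables are used, and the reference ranges and example plot values are assumptions, not the normalization actually applied to the NFI data.

```python
# Minimal sketch of a simple additive [0, 1] index built from normalized structural
# variables, in the spirit of the abstract. The reference ranges used for min-max
# normalization and the example plot values are assumptions, not the NFI scaling.
import numpy as np

# (variable name, assumed reference range used for min-max normalization)
reference_ranges = {
    "sd_dbh_cm":              (0.0, 30.0),
    "sd_height_m":            (0.0, 15.0),
    "n_decay_classes":        (0.0, 4.0),
    "trees_dbh_ge_40_per_ha": (0.0, 60.0),
    "tree_species_richness":  (1.0, 10.0),
}

def structural_diversity_index(plot):
    """Average of min-max normalized variables, clipped to [0, 1]."""
    scores = []
    for name, (lo, hi) in reference_ranges.items():
        scores.append(np.clip((plot[name] - lo) / (hi - lo), 0.0, 1.0))
    return float(np.mean(scores))

example_plot = {"sd_dbh_cm": 18.0, "sd_height_m": 7.5, "n_decay_classes": 3,
                "trees_dbh_ge_40_per_ha": 25.0, "tree_species_richness": 4}
print(f"structural diversity index = {structural_diversity_index(example_plot):.2f}")
```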
This paper proposes a new method for increasing precision in survey sampling, namely a method combining sampling with prediction. The two cases in which auxiliary information is or is not available are both considered. A numerical example is given.
Funding: supported by the National Natural Science Foundation of China.
This paper develops a sampling method to estimate the integral of a function over an area, with a strategy that covers the area with parallel lines of observation. This sampling strategy is special in that lines very close to each other are selected much more seldom than under a uniformly random design for the positions of the parallel lines. It is also special in that the positions of some of the lines are deterministic. Two different variance estimators are derived and investigated by sampling different man-made signal functions. They show different properties: the estimator that estimates the larger variance gives an error interval that, in some situations, may be more than ten times the error interval computed from the other estimator, and the second estimator clearly underestimates the variance. The author has not succeeded in deriving an expression for the expectation of this estimator. This work is motivated by the problem of finding the variance of acoustic abundance estimates.
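The sketch below illustrates only the general idea of estimating an area integral from parallel observation lines, using an ordinary systematic design with a random start. It does not reproduce the paper's special line-placement rule (which avoids closely spaced lines and fixes some positions deterministically) or its two variance estimators, and the test signal function is an assumption.

```python
# Generic sketch of estimating an area integral from parallel transect lines with a
# random start (simple systematic design). The paper's special design and its variance
# estimators are not reproduced; the signal function below is an assumed test function.
import numpy as np

rng = np.random.default_rng(1)

def f(x, y):
    """Synthetic signal density over the unit square."""
    return 1.0 + np.sin(6 * np.pi * x) * np.cos(2 * np.pi * y)

def line_integral(f, x, n_grid=2001):
    """Trapezoid-rule integral of f along the vertical line at position x."""
    y = np.linspace(0.0, 1.0, n_grid)
    vals = f(x, y)
    dy = y[1] - y[0]
    return dy * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

def parallel_line_estimate(f, n_lines=10):
    spacing = 1.0 / n_lines
    start = rng.uniform(0.0, spacing)          # random start, then equally spaced lines
    xs = start + spacing * np.arange(n_lines)
    return spacing * sum(line_integral(f, x) for x in xs)

true_value = 1.0                               # exact integral of f over the unit square
estimates = [parallel_line_estimate(f) for _ in range(200)]
print(f"mean estimate = {np.mean(estimates):.4f}, empirical SD = {np.std(estimates):.4f}")
```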
In stratified survey sampling, we sometimes have complete auxiliary information. One of the fundamental questions is how to use the complete auxiliary information effectively at the estimation stage. In this paper, we extend the model-calibration method to obtain estimators of the finite population mean by using complete auxiliary information from stratified sampling survey data. We show that the resulting estimators effectively use auxiliary information at the estimation stage and possess a number of attractive features, such as being asymptotically design-unbiased irrespective of the working model and approximately model-unbiased under the model. When a linear working model is used, the resulting estimators reduce to the usual calibration estimator (or GREG).
Funding: supported by the National Natural Science Foundation of China (10571093).
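Since the abstract notes that, with a linear working model, the proposed estimators reduce to the usual calibration (GREG) estimator, the sketch below shows that reduced case: a combined GREG estimate of a finite-population mean from a stratified simple random sample with one fully known auxiliary variable. The synthetic population and sample sizes are assumptions; the paper's general model-calibration estimator is not implemented.

```python
# Minimal sketch of the combined GREG (calibration) estimator of a finite-population
# mean from a stratified simple random sample, with one completely known auxiliary
# variable x. Synthetic data; only the linear-working-model (GREG) case is shown.
import numpy as np

rng = np.random.default_rng(7)

# --- synthetic finite population with 3 strata and known auxiliary variable x ---
N_h = np.array([2000, 3000, 5000])
strata = np.repeat(np.arange(3), N_h)
x = rng.gamma(shape=2.0 + strata, scale=5.0)        # known for every population unit
y = 10.0 + 1.8 * x + rng.normal(scale=8.0, size=x.size)
N, t_x = x.size, x.sum()

# --- stratified simple random sample with design weights N_h / n_h ---
n_h = np.array([100, 100, 200])
sample = np.concatenate([rng.choice(np.flatnonzero(strata == h), n_h[h], replace=False)
                         for h in range(3)])
w = np.repeat(N_h / n_h, n_h)
xs, ys = x[sample], y[sample]

# --- expansion (Horvitz-Thompson) totals and weighted least-squares fit of y on x ---
t_x_ht, t_y_ht = np.sum(w * xs), np.sum(w * ys)
X = np.column_stack([np.ones_like(xs), xs])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * ys))

# --- GREG total: HT total corrected by the known auxiliary totals (N, t_x) ---
t_y_greg = t_y_ht + beta[0] * (N - np.sum(w)) + beta[1] * (t_x - t_x_ht)
print(f"true mean = {y.mean():.3f}")
print(f"HT mean   = {t_y_ht / N:.3f}")
print(f"GREG mean = {t_y_greg / N:.3f}")
```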
An innovative use of spatial sampling designs is presented here. Sampling methods that consider the spatial locations of statistical units are already used in agricultural and environmental contexts, while they have never been exploited for establishment surveys. However, the rapidly increasing availability of geo-referenced information about business units makes that possible. In business studies, it may indeed be important to take into account the presence of spatial autocorrelation or spatial trends in the variables of interest, in order to obtain more precise and efficient estimates. The opportunity of using the most innovative spatial sampling designs in business surveys, in order to produce samples that are well spread in space, is tested here by means of Monte Carlo experiments. For all designs, the Horvitz-Thompson estimator of the population total is used with both equal and unequal inclusion probabilities. The efficiency of the sampling designs is evaluated in terms of relative RMSE and efficiency gain compared with designs that ignore the spatial information. Furthermore, an evaluation of spatially balanced samples is also conducted.
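All designs in the abstract are compared through the Horvitz-Thompson estimator under equal and unequal inclusion probabilities. The sketch below is a minimal Monte Carlo check of the HT total under Poisson sampling with size-proportional inclusion probabilities, reporting relative RMSE as in the abstract. The synthetic business-like population is an assumption, and the spatially balanced designs themselves are not implemented.

```python
# Minimal Monte Carlo sketch of the Horvitz-Thompson estimator of a population total
# under Poisson sampling with inclusion probabilities proportional to a size variable.
# Synthetic data only; the spatially balanced designs of the abstract are not implemented.
import numpy as np

rng = np.random.default_rng(42)

N, n_expected = 5000, 400
size = rng.lognormal(mean=2.0, sigma=1.0, size=N)   # auxiliary "size" variable
y = 5.0 * size * rng.lognormal(sigma=0.3, size=N)   # variable of interest
pi = np.minimum(n_expected * size / size.sum(), 1.0)  # inclusion probabilities, capped at 1
t_true = y.sum()

def ht_total(y, pi, rng):
    selected = rng.uniform(size=pi.size) < pi       # Poisson sampling
    return np.sum(y[selected] / pi[selected])       # HT: weight each unit by 1/pi

estimates = np.array([ht_total(y, pi, rng) for _ in range(2000)])
rel_bias = (estimates.mean() - t_true) / t_true
rel_rmse = np.sqrt(np.mean((estimates - t_true) ** 2)) / t_true
print(f"relative bias = {rel_bias:+.3%}, relative RMSE = {rel_rmse:.3%}")
```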
The size distributions of 2D and 3D Voronoi cells and of cells of Vp(2, 3), the 2D cut of a 3D Voronoi diagram, are explored, with the single-parameter (re-scaled) gamma distribution playing a central role in the analytical fitting. Observational evidence for a cellular universe is briefly reviewed. A simulated Vp(2, 3) map with galaxies lying on the cell boundaries is constructed to compare, as regards general appearance, with the observed CfA map of galaxies and voids, the parameters of the simulation being chosen so as to reproduce the largest observed void size.
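As a small companion to the gamma fitting described above, the sketch below generates a 2D Poisson-Voronoi tessellation, measures the areas of interior cells, and fits a (re-scaled) gamma distribution to the normalized areas. It covers only the plain 2D case, not the Vp(2, 3) cuts or the galaxy-map simulation; the window, point count and edge-trimming margin are assumptions.

```python
# Minimal sketch: areas of interior cells of a 2D Poisson-Voronoi tessellation, fitted
# with a re-scaled gamma distribution. Window, seed count and trimming are assumptions.
import numpy as np
from scipy.spatial import Voronoi
from scipy import stats

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(5000, 2))
vor = Voronoi(pts)

def polygon_area(xy):
    """Area of a convex cell: order vertices by angle about the centroid, then shoelace."""
    c = xy.mean(axis=0)
    order = np.argsort(np.arctan2(xy[:, 1] - c[1], xy[:, 0] - c[0]))
    x, y = xy[order, 0], xy[order, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

areas = []
for region_index in vor.point_region:
    region = vor.regions[region_index]
    if len(region) == 0 or -1 in region:
        continue                               # open cell touching the boundary
    verts = vor.vertices[region]
    if verts.min() < 0.05 or verts.max() > 0.95:
        continue                               # trim cells near the window edge
    areas.append(polygon_area(verts))

a_norm = np.asarray(areas) / np.mean(areas)     # re-scale to unit mean cell size
shape, loc, scale = stats.gamma.fit(a_norm, floc=0)
print(f"interior cells used: {a_norm.size}")
print(f"fitted gamma shape : {shape:.2f} (values near 3.5-3.6 are commonly reported in 2D)")
```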
The Jiao Tong University Spectroscopic Telescope (JUST) is a 4.4-meter f/6.0 segmented-mirror telescope dedicated to spectroscopic observations. The JUST primary mirror is composed of 18 hexagonal segments, each with a diameter of 1.1 m. JUST provides two Nasmyth platforms for placing science instruments. One Nasmyth focus fits a field of view of 10′, and the other has an extended field of view of 1.2° with correction optics. A tertiary mirror is used to switch between the two Nasmyth foci. JUST will be installed at a site at Lenghu in Qinghai Province, China, and will conduct spectroscopic observations with three types of instruments to explore the dark universe, trace the dynamic universe, and search for exoplanets: (1) a multi-fiber (2000 fibers) medium-resolution spectrometer (R = 4000-5000) to spectroscopically map galaxies and large-scale structure; (2) an integral field unit (IFU) array of 500 optical fibers and/or a long-slit spectrograph dedicated to fast follow-ups of transient sources for multi-messenger astronomy; (3) a high-resolution spectrometer (R ~ 100000) designed to identify Jupiter analogs and Earth-like planets, with the capability to characterize the atmospheres of hot exoplanets.
Funding: supported by the Fundamental Research Funds for the Central Universities, 111 Project No. B20019, and Shanghai Natural Science Foundation grant No. 19ZR1466800.
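The quoted aperture and focal ratio fix the focal length and plate scale, so a rough linear size of the two fields of view at the Nasmyth focal plane can be worked out as below. This is simple small-angle arithmetic that ignores the field-correction optics mentioned in the abstract.

```python
# Rough arithmetic from the quoted specifications (4.4 m aperture, f/6.0, 10' and 1.2°
# fields). Standard small-angle plate-scale relation; the corrector optics are ignored.
D = 4.4                                   # aperture diameter in metres (from the abstract)
f_ratio = 6.0                             # focal ratio (from the abstract)
f_mm = D * f_ratio * 1000.0               # focal length: 26.4 m = 26400 mm
plate_scale = 206265.0 / f_mm             # arcsec per mm at the focal plane

fov_small_arcsec = 10 * 60.0              # 10 arcminutes
fov_large_arcsec = 1.2 * 3600.0           # 1.2 degrees

print(f"focal length : {D * f_ratio:.1f} m")
print(f"plate scale  : {plate_scale:.2f} arcsec/mm")
print(f"10' field    : about {fov_small_arcsec / plate_scale:.0f} mm across the focal plane")
print(f"1.2° field   : about {fov_large_arcsec / plate_scale / 1000.0:.2f} m across the focal plane")
```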
Fishery-independent surveys are often used for collecting high-quality biological and ecological data to support fisheries management. Careful optimization of a fishery-independent survey design is necessary to improve the precision of survey estimates with cost-effective sampling effort. We developed a simulation approach to evaluate and optimize the stratification scheme for a fishery-independent survey with multiple goals, including estimation of abundance indices of individual species and species diversity indices. We compared the performance of sampling designs with different stratification schemes for different goals over different months. For most indices, the stratification schemes yielded gains in precision of survey estimates compared with a simple random sampling design, and the stratification scheme with five strata performed best. This study showed that the loss of precision in survey estimates due to reduced sampling effort can be compensated by improved stratification schemes, which would reduce the cost and the negative impacts of survey trawling on species with low abundance in the fishery-independent survey. This study also suggests that the optimization of a survey design differs with different survey objectives, and that a post-survey analysis can improve the stratification scheme of fishery-independent survey designs.
Funding: the Public Science and Technology Research Funds Projects of Ocean under contract No. 201305030 and the Specialized Research Fund for the Doctoral Program of Higher Education under contract No. 20120132130001.
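The sketch below illustrates the kind of design-comparison simulation described in the abstract: the same sampling effort is allocated either by simple random sampling or by stratified sampling with proportional allocation, and the relative RMSE of the estimated mean abundance index is compared. The synthetic abundance field and the five-stratum layout are assumptions, not the survey data or the stratification schemes evaluated in the paper.

```python
# Minimal sketch of a design-comparison simulation: equal total effort allocated either
# by simple random sampling or by stratified sampling, comparing precision of the mean
# abundance estimate. The synthetic abundance field and 5 strata are assumptions.
import numpy as np

rng = np.random.default_rng(3)

strata_means = np.array([2.0, 5.0, 12.0, 30.0, 80.0])    # mean abundance per stratum
strata_sds   = np.array([1.0, 2.0,  5.0, 10.0, 25.0])
N_h = np.array([4000, 3000, 1500, 1000, 500])             # stations per stratum
pop = np.concatenate([rng.gamma((m / s) ** 2, s ** 2 / m, size=n)
                      for m, s, n in zip(strata_means, strata_sds, N_h)])
labels = np.repeat(np.arange(5), N_h)
true_mean = pop.mean()

n_total = 200
n_h = np.maximum(1, np.round(n_total * N_h / N_h.sum()).astype(int))  # proportional allocation

def srs_estimate():
    return pop[rng.choice(pop.size, n_total, replace=False)].mean()

def stratified_estimate():
    means = [pop[rng.choice(np.flatnonzero(labels == h), n_h[h], replace=False)].mean()
             for h in range(5)]
    return np.average(means, weights=N_h)

def rel_rmse(estimator, reps=2000):
    est = np.array([estimator() for _ in range(reps)])
    return np.sqrt(np.mean((est - true_mean) ** 2)) / true_mean

print(f"relative RMSE, simple random sampling: {rel_rmse(srs_estimate):.3%}")
print(f"relative RMSE, stratified sampling:    {rel_rmse(stratified_estimate):.3%}")
```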
A composite random variable is a product (or sum of products) of statistically distributed quantities. Such a variable can represent the solution to a multi-factor quantitative problem submitted to a large, diverse, independent, anonymous group of non-expert respondents (the "crowd"). The objective of this research is to examine the statistical distribution of solutions from a large crowd to a quantitative problem involving image analysis and object counting. Theoretical analysis by the author, covering a range of conditions and types of factor variables, predicts that composite random variables are distributed log-normally to an excellent approximation. If the factors in a problem are themselves distributed log-normally, then their product is rigorously log-normal. A crowdsourcing experiment devised by the author and implemented with the assistance of a BBC (British Broadcasting Corporation) television show, yielded a sample of approximately 2000 responses consistent with a log-normal distribution. The sample mean was within ~12% of the true count. However, a Monte Carlo simulation (MCS) of the experiment, employing either normal or log-normal random variables as factors to model the processes by which a crowd of 1 million might arrive at their estimates, resulted in a visually perfect log-normal distribution with a mean response within ~5% of the true count. The results of this research suggest that a well-modeled MCS, by simulating a sample of responses from a large, rational, and incentivized crowd, can provide a more accurate solution to a quantitative problem than might be attainable by direct sampling of a smaller crowd or an uninformed crowd, irrespective of size, that guesses randomly.
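The central claim that a product of log-normally distributed factors is rigorously log-normal is easy to check numerically. The sketch below simulates one million respondents, each multiplying three guessed factors; the factor distributions are assumptions and do not reproduce the BBC experiment or its counting task.

```python
# Minimal Monte Carlo sketch: a product of independent log-normal factors is itself
# log-normal, so the log of the composite estimate should be symmetric (skewness near 0).
# The three factor distributions below are assumptions, not data from the experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_respondents = 1_000_000

# Each simulated respondent multiplies a few guessed factors (e.g. rows x columns x
# fill fraction in an object-counting task).
rows = rng.lognormal(mean=np.log(40.0), sigma=0.20, size=n_respondents)
cols = rng.lognormal(mean=np.log(25.0), sigma=0.20, size=n_respondents)
fill = rng.lognormal(mean=np.log(0.7),  sigma=0.10, size=n_respondents)
estimates = rows * cols * fill

print(f"sample mean of estimates  : {estimates.mean():.1f}")
print(f"median (geometric centre) : {np.median(estimates):.1f}")
print(f"skewness of estimates     : {stats.skew(estimates):.2f}")
print(f"skewness of log(estimates): {stats.skew(np.log(estimates)):.2f}  (close to 0 for a log-normal)")
```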
Complex survey designs often involve unequal selection probabilities of clusters or units within clusters. When estimating models for complex survey data, scaled weights are incorporated into the likelihood, producing a pseudo-likelihood. In a 3-level weighted analysis for a binary outcome, we implemented two methods for scaling the sampling weights in the National Health Survey of Pakistan (NHSP). For the NHSP, with health care utilization as a binary outcome, we found age, gender, household (HH) goods, urban/rural status, community development index, province and marital status to be significant predictors of health care utilization (p-value < 0.05). The variance of the random intercepts using scaling method 1 is estimated as 0.0961 (standard error 0.0339) at the PSU level and 0.2726 (standard error 0.0995) at the household level. Both estimates are significantly different from zero (p-value < 0.05) and indicate considerable heterogeneity in health care utilization with respect to households and PSUs. The results of the NHSP data analysis showed that all three analyses, weighted (two scaling methods) and unweighted, converged to almost identical results with few exceptions. This may have occurred because of the large number of 3rd- and 2nd-level clusters and the relatively small ICC. We performed a simulation study to assess the effect of varying prevalence and intra-class correlation coefficients (ICCs) on the bias of fixed-effect parameters and variance components of a multilevel pseudo maximum likelihood (weighted) analysis. The simulation results showed that the performance of the scaled weighted estimators is satisfactory for both scaling methods. Incorporating simulation into the analysis of complex multilevel surveys allows the integrity of the results to be tested and is recommended as good practice.
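The abstract refers to two weight-scaling methods without defining them. The sketch below shows the two scalings most commonly used before multilevel pseudo-likelihood estimation: rescaling level-1 weights within each cluster so they sum to the cluster sample size, or so they sum to the effective cluster sample size. Whether these correspond exactly to the paper's "method 1" and "method 2" is an assumption, and the data frame is invented.

```python
# Minimal sketch of two common ways to scale level-1 sampling weights within clusters
# before a multilevel pseudo-likelihood analysis: (A) scale so weights sum to the
# cluster sample size, (B) scale so they sum to the effective cluster sample size.
# Whether A/B match the NHSP paper's "method 1"/"method 2" is an assumption; toy data.
import pandas as pd

df = pd.DataFrame({
    "household": [1, 1, 1, 2, 2, 3, 3, 3, 3],                    # level-2 cluster id
    "w":         [1.2, 0.8, 2.0, 3.0, 1.0, 0.5, 0.5, 1.5, 2.5],  # raw level-1 weights
})

g = df.groupby("household")["w"]
n_j = g.transform("size")                                         # cluster sample size
sum_w = g.transform("sum")
sum_w2 = df["w"].pow(2).groupby(df["household"]).transform("sum")

df["w_scaled_A"] = df["w"] * n_j / sum_w       # sums to n_j within each cluster
df["w_scaled_B"] = df["w"] * sum_w / sum_w2    # sums to (sum w)^2 / sum w^2 (effective n)

print(df)
print(df.groupby("household")[["w_scaled_A", "w_scaled_B"]].sum())
```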