Funding: Supported by the Project of Business Management Cultivation Discipline in the Commerce Department of Rongchang Campus, Southwest University.
Abstract: With the constant deepening of reform and opening-up, the national economic system has shifted from a planned economy to a market economy, and rural survey and statistics remain in a difficult transition period. In this period, China needs to transform its original statistical mode to fit the market economic system, and governments at all levels are required to report and submit a large and growing volume of statistical information. In addition, townships, villages and counties face both old and new conflicts. These conflicts hamper the implementation of rural surveys and statistics and the development of the rural statistical undertaking, and they have also prompted research on, and rethinking of, the reform of rural statistical and survey methods.
Abstract: In this study, geochemical anomaly separation was carried out with methods based on the distribution model, namely the probability diagram (MPD), fractal (concentration-area technique), and U-statistic methods. The main objective is to evaluate the efficiency and accuracy of these methods in separating anomalies related to shear-zone gold mineralization. For this purpose, samples were taken from the secondary lithogeochemical environment (stream sediment samples) over the gold mineralization in Saqqez, NW Iran. Interpretation of the histograms and diagrams showed that the MPD is capable of identifying two phases of mineralization. The fractal method could separate only one phase of change, based on the fractal dimension, associated with high-concentration areas of Au. The spatial analysis showed two mixed subpopulations after U=0 and another subpopulation with very high U values. The MPD analysis followed the spatial analysis and shows the details of the variations. Six mineralized zones detected from local geochemical exploration results were used to validate the methods mentioned above. The MPD method was able to identify more than 90% of the anomalous areas, whereas the other two methods identified at most 60% of them. The MPD method used the raw data, without any estimation of the concentrations, and a minimum of calculations to determine the threshold values. Therefore, the MPD method is more robust than the other methods. The spatial analysis identified the details of the geological and mineralization events that affected the study area. MPD is recommended as the best method, and spatial U-analysis as the next most reliable. The fractal method could show more detail of the events and variations in the area with a symmetrical grid net and a higher sampling density, or at the detailed exploration stage.
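The concentration-area (C-A) fractal step referenced above fits straight-line segments to a log-log plot of concentration against the area (here approximated by the count of samples) enclosed above each concentration, and reads a threshold off the break between segments. A minimal sketch on synthetic stream-sediment values; the two-segment breakpoint search is an illustrative simplification, not the authors' exact procedure:

```python
import numpy as np

def ca_fractal_threshold(conc):
    """C-A fractal threshold sketch: fit two line segments to
    log(concentration) vs log(area above concentration) and return the
    breakpoint with the lowest total squared error."""
    c = np.sort(np.asarray(conc))            # ascending concentrations
    area = np.arange(len(c), 0, -1)          # area proxy: samples >= c
    x, y = np.log10(c), np.log10(area)
    best_err, best_c = np.inf, None
    for i in range(3, len(c) - 3):           # candidate breakpoints
        e1 = np.polyfit(x[:i], y[:i], 1, full=True)[1].sum()
        e2 = np.polyfit(x[i:], y[i:], 1, full=True)[1].sum()
        if e1 + e2 < best_err:
            best_err, best_c = e1 + e2, c[i]
    return best_c                            # concentration at the break

rng = np.random.default_rng(0)
background = rng.lognormal(mean=1.0, sigma=0.4, size=300)  # ppb, background
anomaly = rng.lognormal(mean=2.5, sigma=0.3, size=30)      # mineralized subset
print("C-A threshold (ppb):", ca_fractal_threshold(np.r_[background, anomaly]))
```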
Abstract: A multi-objective linear programming problem is formed from a fuzzy linear programming problem; this is because the fuzzy programming method is used during the solution. A multi-objective linear programming problem can be converted into a single objective function by various methods, such as Chandra Sen's method, the weighted sum method, the ranking function method, and the statistical averaging method. In this paper, Chandra Sen's method and the statistical averaging method are both used to construct a single objective function from the multi-objective one. Two multi-objective programming problems are solved to verify the result: one is a numerical example and the other a real-life example. The problems are then solved by the ordinary simplex method and by the fuzzy programming method. It can be seen that the fuzzy programming method gives better optimal values than the ordinary simplex method.
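For concreteness, a small sketch of the two combination rules named above, on a hypothetical two-objective LP. Chandra Sen's method divides each maximization objective by the absolute value of its individual optimum before summing; the statistical averaging variant shown divides the summed objectives by the mean of those optima (my reading of the method, stated as an assumption):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative problem (hypothetical data): maximize Z1 = 5x1 + 3x2 and
# Z2 = 2x1 + 7x2 subject to x1 + x2 <= 10, 2x1 + x2 <= 15, x >= 0.
C = np.array([[5.0, 3.0], [2.0, 7.0]])
A_ub = np.array([[1.0, 1.0], [2.0, 1.0]])
b_ub = np.array([10.0, 15.0])

# Step 1: optimize each objective separately (linprog minimizes, so negate).
phi = []
for c in C:
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    phi.append(-res.fun)                      # individual optimum of Z_i

# Step 2 (Chandra Sen): single objective Z = sum_i Z_i / |phi_i|.
sen_c = sum(c / abs(p) for c, p in zip(C, phi))
# Step 2' (statistical averaging variant): divide by the mean of |phi_i|.
avg_c = C.sum(axis=0) / np.mean(np.abs(phi))

for name, cc in [("Chandra Sen", sen_c), ("statistical averaging", avg_c)]:
    res = linprog(-cc, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    print(name, "->", res.x, "objectives:", [c @ res.x for c in C])
```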
Abstract: In the present paper, we mostly focus on P_(p)^(2)-statistical convergence. We look into uniform integrability via the power series method and its characterizations for double sequences. In addition to these findings, the notions of P_(p)^(2)-statistically Cauchy sequence, P_(p)^(2)-statistical boundedness, and core for double sequences are described.
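For orientation, one standard formulation of the underlying notion, stated with hedging since the paper's exact notation is not reproduced here:

```latex
% A common formulation (an assumption about the paper's exact setup):
% let p_{jk} \ge 0 with p(s,t) = \sum_{j,k} p_{jk} s^j t^k finite for
% 0 < s < R_1, 0 < t < R_2 and unbounded near the boundary. A double
% sequence x = (x_{jk}) is P_p^2-statistically convergent to L if, for
% every \varepsilon > 0,
\[
\lim_{(s,t)\to(R_1^-,\,R_2^-)} \frac{1}{p(s,t)}
\sum_{(j,k)\,:\,|x_{jk}-L|\ge\varepsilon} p_{jk}\, s^j t^k = 0 .
\]
```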
Abstract: The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects, but timely and accurate prediction of software bugs remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson-Gower scaling technique to identify the relevant software metrics, measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software faults are predicted with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient, and the softstep activation function provides the final fault prediction results. To minimize the error, the Nelder-Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with accuracy, sensitivity and specificity higher by 3%, 3%, 2% and 3%, and time and space lower by 13% and 15%, when compared with two state-of-the-art methods.
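The Dice coefficient used in the metric-selection step is the standard similarity 2|A∩B|/(|A|+|B|); a minimal sketch on hypothetical binary metric-indicator vectors (illustrative only, not the SQADEN pipeline itself):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity 2|A∩B| / (|A|+|B|) for binary vectors a, b."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two hypothetical module-metric indicator vectors (1 = metric flagged).
m1 = [1, 0, 1, 1, 0, 1]
m2 = [1, 0, 0, 1, 0, 1]
print(dice_coefficient(m1, m2))   # 2*3 / (4+3) ≈ 0.857
```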
Abstract: Statistical approaches for evaluating causal effects and for discovering causal networks are discussed in this paper. A causal relation between two variables is different from an association or correlation between them. An association measurement between two variables may even change dramatically from positive to negative when a third variable is omitted, a phenomenon called the Yule-Simpson paradox. We discuss how to evaluate the causal effect of a treatment or exposure on an outcome so as to avoid this paradox. Surrogates and intermediate variables are often used to reduce measurement costs or duration when measurement of endpoint variables is expensive, inconvenient, infeasible or impossible in practice. Many criteria for surrogates have been proposed. However, it is possible that, for a surrogate satisfying these criteria, a treatment has a positive effect on the surrogate, which in turn has a positive effect on the outcome, and yet the treatment has a negative effect on the outcome; this is called the surrogate paradox. We discuss criteria for surrogates that avoid the surrogate paradox. Causal networks, which describe the causal relationships among a large number of variables, have been applied in many research fields, and it is important to discover their structures from observed data. We propose a recursive approach for discovering a causal network in which the structural learning of a large network is decomposed recursively into the learning of small networks. Further, to discover causal relationships, we present an active learning approach based on external interventions on some variables. When we focus on the causes of an outcome of interest, instead of discovering a whole network, we propose a local learning approach to discover the causes that affect the outcome.
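A worked instance of the Yule-Simpson reversal, using the well-known kidney-stone treatment data: treatment A has the higher success rate in each stratum yet the lower rate in aggregate:

```python
# Success counts (successes, totals) from the classic kidney-stone study.
data = {
    ("A", "small"): (81, 87),   ("A", "large"): (192, 263),
    ("B", "small"): (234, 270), ("B", "large"): (55, 80),
}
for t in ("A", "B"):
    s = sum(data[(t, g)][0] for g in ("small", "large"))
    n = sum(data[(t, g)][1] for g in ("small", "large"))
    rates = {g: data[(t, g)][0] / data[(t, g)][1] for g in ("small", "large")}
    print(t, {g: round(r, 2) for g, r in rates.items()}, "overall", round(s / n, 2))
# A wins in both strata (0.93 > 0.87 and 0.73 > 0.69) but loses overall
# (0.78 < 0.83): stone size is a confounder that must be adjusted for,
# not marginalized away.
```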
Abstract: The development of adaptation measures to climate change relies on data from climate models or impact models. In order to analyze these large data sets, or an ensemble of such data sets, the use of statistical methods is required. In this paper, the methodological approach to collecting, structuring and publishing the methods that have been used or developed by former or present adaptation initiatives is described. The intention is to communicate the knowledge achieved and thus support future users. A key component is the participation of users in the development process. The main elements of the approach are standardized, template-based descriptions of the methods, including the specific applications, references, and a method assessment. All contributions have been quality checked, sorted, and placed in a larger context. The result is a report on statistical methods that is freely available in printed and online versions. Examples of how to use the methods are presented in this paper and are also included in the brochure.
Abstract: Data on traffic flow, speed and density are required for planning, designing, and modelling the traffic stream for all parts of the road system. Specialized equipment such as stationary counters is used to record volume and speed, but it is expensive, difficult to set up, and requires periodic maintenance. The moving observer method was proposed in 1954 by Wardrop and Charlesworth to estimate these variables inexpensively. Basically, the observer counts the number of vehicles overtaken, the number of vehicles passed, and the number of vehicles encountered while traveling in the opposite direction; the trip time is recorded for both travel directions, and the length of the road segment is measured. These variables are then used to estimate speeds and volumes. Using a westbound section of Interstate Highway 30 (I-30) in the DFW area, this study examined the accuracy and feasibility of the method by comparing it with the stationary observer method as the standard for such counts, with statistical tests used to assess accuracy. Results show that the method provides accurate volume and speed estimates when compared to the stationary method for a road segment with three lanes per direction, especially when several runs are taken.
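Wardrop and Charlesworth's estimators can be written down directly: with m_a vehicles met while driving against the stream in time t_a, and m_w the net count (vehicles overtaking the observer minus vehicles overtaken) while driving with the stream in time t_w, the flow is q = (m_a + m_w)/(t_a + t_w), the mean stream travel time is t_w - m_w/q, and the space-mean speed is the segment length divided by that time. A minimal sketch with hypothetical counts (not the I-30 data):

```python
def moving_observer(m_a, m_w, t_a, t_w, length_mi):
    """Wardrop & Charlesworth (1954) moving observer estimates.
    m_a: vehicles met while driving against the stream
    m_w: vehicles overtaking the observer minus vehicles overtaken (net)
    t_a, t_w: travel times against/with the stream (hours)
    length_mi: segment length (miles)"""
    q = (m_a + m_w) / (t_a + t_w)      # flow, veh/h
    t_bar = t_w - m_w / q              # mean stream travel time, h
    v = length_mi / t_bar              # space-mean speed, mi/h
    return q, t_bar, v

# Illustrative numbers (hypothetical):
q, t_bar, v = moving_observer(m_a=107, m_w=-3, t_a=0.050, t_w=0.045, length_mi=3.0)
print(f"flow={q:.0f} veh/h, travel time={t_bar*60:.2f} min, speed={v:.1f} mi/h")
```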
Funding: Supported by the European Union-NextGenerationEU, through the National Recovery and Resilience Plan of the Republic of Bulgaria, No. BG-RRP-2.004-0008.
Abstract: In public health, simulation modeling stands as an invaluable asset, enabling the evaluation of new systems without their physical implementation, experimentation with existing systems without operational adjustments, and testing of system limits without real-world repercussions. In simulation modeling, the Monte Carlo method emerges as a powerful yet underutilized tool. Although the Monte Carlo method has not yet gained widespread prominence in healthcare, its technological capabilities hold promise for substantial cost reduction and risk mitigation. In this review article, we aimed to explore the transformative potential of the Monte Carlo method in healthcare contexts. We underscore the significance of experiential insights derived from simulated experimentation, especially in resource-constrained scenarios where time, financial constraints, and limited resources necessitate innovative and efficient approaches. As public health faces increasing challenges, incorporating the Monte Carlo method presents an opportunity for enhanced system construction, analysis, and evaluation.
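As one concrete shape such a simulation can take, a minimal Monte Carlo sketch (hypothetical parameters) that estimates mean patient waiting time in a single-server clinic via the Lindley recursion:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_clinic(n_patients, mean_interarrival=6.0, mean_service=5.0):
    """One Monte Carlo replication: waiting times from the Lindley recursion
    W[i+1] = max(0, W[i] + S[i] - A[i+1]), exponential arrivals/services."""
    inter = rng.exponential(mean_interarrival, n_patients)   # minutes
    service = rng.exponential(mean_service, n_patients)
    w = np.zeros(n_patients)
    for i in range(n_patients - 1):
        w[i + 1] = max(0.0, w[i] + service[i] - inter[i + 1])
    return w.mean()

reps = np.array([simulate_clinic(500) for _ in range(1000)])
print(f"mean wait ≈ {reps.mean():.1f} min "
      f"(95% CI ±{1.96 * reps.std(ddof=1) / np.sqrt(len(reps)):.1f})")
```

Repeating the experiment many times, as here, is what turns a single what-if run into a distributional statement with a quantifiable margin of error.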
Abstract: During the evolution of the global economy, the high and new technology industry has become the main driver of global economic growth through technological advancement, and the main means of guaranteeing the sustainable development of the global economy. In view of China's situation, this article analyzes the experience of the OECD in the high and new technology industry and proposes a statistical index system, along with an evaluation method, to assess the development of the high and new technology industry.
Abstract: Government credibility is an important asset of contemporary national governance, an important criterion for evaluating government legitimacy, and a key factor in measuring the effectiveness of government governance. In recent years, research on government credibility has mostly focused on exploring theories and mechanisms, with little empirical work on the topic. This article applies variable selection models from the field of social statistics to the issue of government credibility, in order to study it empirically and explore its core influencing factors from a statistical perspective. Specifically, the article uses four regression-analysis-based methods and three random-forest-based methods to study the influencing factors of government credibility in the provinces of China, and compares the performance of these seven variable selection methods along different dimensions. The results show that the variable selection methods differ in simplicity, accuracy, and variable importance ranking, and thus carry different weight in the study of government credibility. This study provides a methodological reference for variable selection models in social science research, and also offers a multidimensional comparative perspective for analyzing the influencing factors of government credibility.
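The abstract does not name the seven methods, so as an illustrative sketch of the two families it compares, here is one regression-based selector (the lasso, an assumption) and one random-forest importance ranking on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=n)  # only x0, x3 matter

# Regression-based selection: lasso shrinks irrelevant coefficients to zero.
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)

# Random-forest-based selection: rank variables by impurity importance.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]

print("lasso keeps:", selected)          # expect {0, 3}
print("RF top-3:", ranking[:3])          # expect 0 and 3 leading
```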
Abstract: In this study, methods based on the distribution model (with and without personal opinion) were used for the separation of anomalous zones: two different methods, U-spatial statistics and the mean plus multiples of the standard deviation (X+nS). The primary purpose is to compare the results of these methods with each other. To increase the accuracy of the comparison, regional geochemical data were used from an area where occurrences and mineralization zones of epithermal gold have been reported. The study area is part of the Hashtjin geological map, which structurally belongs to the fold-and-thrust belt and to the Alborz Tertiary magmatic complex. Samples were taken from secondary lithogeochemical environments, and Au data associated with epithermal gold reserves were used to investigate the efficacy of the two methods. In the U-spatial statistics method, threshold criteria were used to determine the threshold; in the X+nS method, the element enrichment index of the region's rock units was obtained by grouping these units, and the anomalous areas were identified by the corresponding criteria. The methods were compared with respect to the positions of the discovered occurrences versus those obtained from the methods, the flexibility of the methods in separating anomalous zones, and the two-dimensional spatial correlation of the three elements As, Pb, and Ag with Au. The ability of both methods to identify potential areas is acceptable; of the two, the U-spatial statistics method with its criteria appears to have a high degree of flexibility in separating anomalous regions for epithermal-type gold deposits.
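The X+nS rule is directly expressible in code; a minimal sketch on synthetic Au values (the choices n = 1, 2, 3 are conventional and assumed, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
au = rng.lognormal(mean=1.2, sigma=0.5, size=500)   # synthetic Au, ppb

x_bar, s = au.mean(), au.std(ddof=1)
for n in (1, 2, 3):
    threshold = x_bar + n * s                        # the X + nS criterion
    print(f"X+{n}S = {threshold:.1f} ppb ->",
          f"{(au >= threshold).sum()} samples flagged anomalous")
```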
Abstract: Quantitative description of geochemical patterns and provision of geochemical anomaly maps are important in applied geochemistry. Several statistical methodologies have been presented for identifying and separating geochemical anomalies. The U-statistic method is one of the most important structural methods; it is a kind of weighted mean in which the points surrounding a sample are considered when determining its U value. However, it can separate anomalies based on only one variable. The main aim of the present study is to develop this method in a multivariate mode. For this purpose, the U-statistic method is combined with a multivariate method that assigns a new value to each sample based on several variables. At the first step, the optimum p is calculated for the p-norm distance, and the U-statistic method is then applied to the p-norm distance values of the samples, since the p-norm distance is calculated from several variables. This combination of the efficient U-statistic method with the p-norm distance is used for the first time in this research. Results show that the p-norm distance with p=2 (the Euclidean distance), computed for Au and As, can be considered the optimal p-norm distance, with the lowest error. The samples indicated as anomalous by the combination of these methods are more regular, less dispersed and more accurate than those obtained using the U-statistic alone or nonstructural methods such as the Mahalanobis distance. It was also observed that the combined results are closely associated with the known Au ore indication within the studied area. Finally, univariate and bivariate geochemical anomaly maps are provided for Au and As, prepared respectively using the U-statistic and its combination with the Euclidean distance.
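A heavily simplified sketch of the combination idea: standardize Au and As, take the p=2 (Euclidean) norm so each sample carries one multivariate value, then compute a local, neighborhood-based U-like score. The weighting below is an assumed simplification, not the paper's exact U-statistic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
xy = rng.uniform(0, 100, size=(n, 2))                 # sample coordinates
au = rng.lognormal(1.0, 0.5, n)
as_ = rng.lognormal(2.0, 0.6, n)

# Step 1: p-norm (p=2, Euclidean) distance in standardized (Au, As) space,
# giving each sample one multivariate value, as in the combined method.
Z = np.column_stack([(au - au.mean()) / au.std(), (as_ - as_.mean()) / as_.std()])
d = np.linalg.norm(Z, ord=2, axis=1)

# Step 2: a simplified local U-like statistic (assumed form): neighborhood
# mean of d within radius r, contrasted with the global mean and scaled by
# the global std over the neighborhood size.
def u_value(i, r=10.0):
    nbr = np.linalg.norm(xy - xy[i], axis=1) <= r
    return (d[nbr].mean() - d.mean()) * np.sqrt(nbr.sum()) / d.std()

u = np.array([u_value(i) for i in range(n)])
print("samples with U > 1.96:", (u > 1.96).sum())
```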
Abstract: Aim: To improve the efficiency of fatigue material tests and the associated statistical treatment of test data. Methods: A least-squares approach and other special treatments were used. Results and Conclusion: The concepts of each phase in fatigue tests and their statistical treatment are clarified. The method proposed leads to three important properties. A reduced number of specimens brings the advantage of lower test expenditures, and the whole test procedure has more flexibility, since there is no need to conduct many tests at the same stress level as in traditional cases.
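The least-squares step underlying such treatment is typically a fit of the Basquin-type relation log N = a + b log S to stress-life data; a minimal sketch with synthetic values (the paper's specific treatments are not reproduced):

```python
import numpy as np

# Synthetic (stress amplitude MPa, cycles to failure) pairs.
S = np.array([300, 300, 250, 250, 200, 200, 160, 160], float)
N = np.array([4.1e4, 5.3e4, 1.4e5, 1.1e5, 6.0e5, 7.9e5, 2.8e6, 3.5e6])

# Least-squares fit of log10(N) = a + b*log10(S); polyfit returns the
# slope first, then the intercept.
b, a = np.polyfit(np.log10(S), np.log10(N), 1)
print(f"log10(N) = {a:.2f} + {b:.2f} log10(S)")
print("predicted life at 180 MPa:", f"{10**(a + b*np.log10(180)):.2e} cycles")
```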
Abstract: Identification of the modal parameters of a linear structure from output-only measurements has received much attention over the past decades. In this paper, the Natural Excitation Technique (NExT) is used to acquire impulse-response-like signals from the structural responses, and the Eigensystem Realization Algorithm (ERA) is then utilized for modal identification. To discard the fictitious 'computational modes', a procedure called the Statistically Averaging Modal Frequency Method (SAMFM) is developed to distinguish the true modes from noise modes and to improve the precision of the identified modal frequencies of the structure. An offshore platform is modeled with the finite element method, and the theoretical modal parameters are obtained for comparison with the identified values. The dynamic responses of the platform under random wave loading are computed to provide the output signals used for identification with ERA. Simulation results demonstrate that the proposed method can determine the system modal frequencies with high precision.
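The NExT step can be illustrated compactly: under broadband ambient excitation, the correlation of a response channel with a reference channel at positive lags behaves like a free decay of the same modes, which ERA can then realize. A minimal sketch of that correlation step only, on a synthetic single-mode response (ERA itself is omitted):

```python
import numpy as np

fs, T = 100.0, 60.0                        # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f_n, zeta = 1.5, 0.02                      # natural frequency (Hz), damping

# Synthetic ambient response: white noise through a lightly damped SDOF
# (discrete convolution with the unit impulse response).
h = np.exp(-zeta * 2 * np.pi * f_n * t) * np.sin(2 * np.pi * f_n * t)
x = np.convolve(np.random.default_rng(3).normal(size=t.size), h)[: t.size] / fs

# NExT: correlation at positive lags (x serving as its own reference here)
# resembles a free decay carrying the same frequency and damping.
max_lag = int(5 * fs)
r = np.array([np.mean(x[: x.size - k] * x[k:]) for k in range(max_lag)])
print("lag-0 value:", r[0], " tail RMS:", np.sqrt(np.mean(r[-100:] ** 2)))
```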
Funding: National Natural Science Foundation of China under Grants No. 50778077 and No. 50608036, and the Graduate Innovation Fund of Huazhong University of Science and Technology under Grant No. HF-06-028.
Abstract: A novel damage detection method is applied to a 3-story frame structure to obtain statistical quantification control criteria for the existence, location and identification of damage. The mean, standard deviation, and exponentially weighted moving average (EWMA) are applied to detect damage information according to statistical process control (SPC) theory. It is concluded that detection with the mean and the EWMA is insignificant, because the structural response is neither independent nor normally distributed. On the other hand, the damage information is detected well with the standard deviation, because the influence of the data distribution is not pronounced for this parameter. A suitably moderate confidence level is explored for more significant damage location and quantification detection, and the impact of noise is investigated to illustrate the robustness of the method.
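The EWMA statistic and its time-varying control limits used in SPC have a standard form; a minimal sketch (λ = 0.2 and L = 3 are conventional choices, assumed here):

```python
import numpy as np

def ewma_chart(x, mu0, sigma0, lam=0.2, L=3.0):
    """EWMA control chart: z_t = lam*x_t + (1-lam)*z_{t-1}; flag points
    outside mu0 ± L*sigma0*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))."""
    z = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    t = np.arange(1, len(x) + 1)
    half = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 - half, mu0 + half

rng = np.random.default_rng(4)
x = np.r_[rng.normal(0, 1, 80), rng.normal(0.8, 1, 40)]   # small shift at t=80
z, lcl, ucl = ewma_chart(x, mu0=0.0, sigma0=1.0)
print("first out-of-control index:", int(np.argmax((z < lcl) | (z > ucl))))
```

The abstract's caveat applies here too: these limits presume independent, near-normal observations, which structural responses generally violate.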
Abstract: A statistical monitoring method has been developed for accurate safety surveillance of γ-BHC residue and other harmful substances in foods or feeds, which is very important for safety monitoring and arbitration inspections. This paper introduces a calculation formula for a six-point calibration method, with an example for the detection of γ-BHC in corn. The method can guarantee the accuracy of the results, and it substantially reduces the probability of error compared with one-point calibration.
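A minimal sketch of a six-point calibration of the kind described: fit a least-squares line of detector response against the six standard concentrations, then invert it for an unknown sample (synthetic numbers, not the paper's γ-BHC data):

```python
import numpy as np

# Six calibration standards (concentration in µg/kg) and detector responses.
conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 160.0])
resp = np.array([0.02, 0.98, 2.05, 4.03, 8.10, 16.02])   # synthetic, linear

slope, intercept = np.polyfit(conc, resp, 1)              # least-squares line
r = np.corrcoef(conc, resp)[0, 1]
print(f"response = {intercept:.3f} + {slope:.4f} * conc,  r = {r:.4f}")

# Invert the calibration for an unknown corn-extract response.
unknown_resp = 5.4
print("estimated concentration:", (unknown_resp - intercept) / slope, "µg/kg")
```

Compared with one-point calibration, the six-point fit averages out random error in any single standard, which is the error-reduction property the abstract emphasizes.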
Abstract: It is well known that nonparametric estimation of the regression function is highly sensitive to the presence of even a small proportion of outliers in the data. To handle atypical observations when the covariates of the nonparametric component are functional, robust estimates for the regression parameter and regression operator are introduced. The main purpose of the paper is to consider data-driven methods of selecting the number of neighbors, in order to make the proposed procedures fully automatic. We use the k Nearest Neighbors (kNN) procedure to construct the kernel estimator of the proposed robust model. Under some regularity conditions, we state consistency results for the kNN functional estimators, which are uniform in the number of neighbors (UINN). Furthermore, a simulation study and an empirical application to real data on octane gasoline predictions are carried out to illustrate the higher predictive performance and the usefulness of the kNN approach.
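The data-driven choice of the number of neighbors can be illustrated with an ordinary kNN regression and cross-validation over k; scalar covariates stand in here for the paper's functional setting, and the MAE criterion gestures at the robustness theme:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=300)
y[::25] += 5.0                                    # a few outliers

# Data-driven k: pick the neighbor count with the best cross-validated score.
scores = {k: cross_val_score(KNeighborsRegressor(n_neighbors=k), X, y,
                             cv=5, scoring="neg_mean_absolute_error").mean()
          for k in range(1, 40)}
k_best = max(scores, key=scores.get)
print("selected k:", k_best)
```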
Abstract: It is a matter of course that Kolmogorov's probability theory is a very useful mathematical tool for the analysis of statistics. However, this fact by no means implies that statistics is based on Kolmogorov's probability theory, since it is not guaranteed that mathematics and our world are connected. In order for mathematics to assert statements about our world, a certain theory (a so-called "world view") must mediate between mathematics and our world. Recently we proposed measurement theory (i.e., the theory of the quantum mechanical world view), which is characterized as the linguistic turn of quantum mechanics. In this paper, we assert that statistics is based on measurement theory. For example, we show, from the purely theoretical (i.e., measurement-theoretical) point of view, that regression analysis cannot be justified without Bayes' theorem. This may imply that even the conventional division between (Fisher's) statistics and Bayesian statistics should be reconsidered.
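For reference, the Bayes' theorem invoked above, written for a parameter θ given data x, as it enters a Bayesian justification of regression:

```latex
\[
p(\theta \mid x) \;=\;
\frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, d\theta'}
\;\propto\; p(x \mid \theta)\, p(\theta).
\]
```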
Funding: Provided through the U.S. Department of the Interior, Bureau of Ocean Energy Management.
Abstract: The offshore pipeline network in the U.S. Gulf of Mexico is the largest and most transparent system in the world. A review of deepwater projects in the region provides insight into construction costs, installation methods, and the evolution of contract strategies. Pipeline projects are identified as export systems, infield flowline systems, and combined export and infield systems, and three dozen deepwater pipeline installations from 1980–2014 are described based on Offshore Technology Conference (OTC) and Society of Petroleum Engineers (SPE) industry publications and press release data. Export lines and infield flowlines are equally represented, and many projects used a combination of J-lay, S-lay and reel methods with rigid steel, flexible line, and pipe-in-pipe systems. The average 2014 inflation-adjusted cost for pipeline projects based on OTC/SPE publications was $2.76 million/mi, ranging from $520,000/mi to $12.94 million/mi. High-cost pipelines tend to be short segments or specialized pipelines. Excluding the two cost endpoints, the majority of projects ranged from $1 to $6 million/mi. The average inflation-adjusted cost to install deepwater pipelines in the U.S. Gulf of Mexico based on available public data is estimated at $3.1 million/mi.