To estimate percentiles of a response distribution, the transformed response rule of Wetherill and the Robbins-Monro sequential design were proposed under a log-logistic model. Based on the response data, a necessary and sufficient condition for the existence of maximum likelihood estimators was derived, followed by the corresponding calculating formula. After a simulation study, the proposed approach was applied to the 65# detonator. Numerical results showed that the percentile estimators from the proposed approach are robust to misspecification of the parametric model when information on the original response distribution is lacking.
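As a rough illustration of the sequential idea (not the authors' exact transformed-response rule), the sketch below runs a Robbins-Monro recursion targeting the p-th percentile of a binary sensitivity experiment; the log-logistic response parameters `mu` and `s` and the step constant `c` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_response(x, mu=2.0, s=0.5):
    """Simulated Bernoulli outcome: P(response at stimulus level x) under a
    log-logistic-style model (hypothetical parameters mu, s)."""
    p = 1.0 / (1.0 + np.exp(-(np.log(x) - mu) / s))
    return rng.random() < p

def robbins_monro(p=0.95, x0=5.0, n=200, c=1.0):
    """Robbins-Monro recursion targeting the p-th percentile:
    x_{n+1} = x_n - (c/n) * (y_n - p), with binary outcome y_n."""
    x = x0
    for i in range(1, n + 1):
        y = binary_response(x)
        x = x - (c / i) * (y - p)
        x = max(x, 1e-6)          # keep the stimulus level positive
    return x

print(robbins_monro())            # estimate of the 95th percentile
```

At the target level the expected update is zero, since E[y] = p exactly at the p-th percentile, which is what drives the convergence of the recursion.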
There are diverse products related to the human buttocks that need to be designed, manufactured, and evaluated with a 3D buttock model. The 3D buttock models used in current research are merely rough approximations of the human form, so a 3D percentile buttock model is highly desirable for the ergonomic design and evaluation of these products. So far, there has been no research on a percentile sizing system for 3D human buttock models. The purpose of this paper is therefore to develop a new method for building three-dimensional percentile buttock models in a computer system. After the 3D shape of the buttocks is scanned, the point-cloud data are imported into reverse-engineering software (Geomagic) to reconstruct the buttock surface model. Five characteristic dimensions of the buttocks are measured through mark-points after the models are imported into the engineering software CATIA. A series of space points is obtained by intersecting cutting slices with the 3D buttock surface model and then ordered by the sequence numbers of the horizontal and vertical slices. The 1st, 5th, 50th, 95th, and 99th percentile values of the five dimensions, together with the spatial coordinates of the space points, are then used to reconstruct percentile buttock models. This research proposes a method for establishing a percentile sizing system for 3D buttock models based on the percentile values of the ischial tuberosity diameter, the distances from the margins to the ischial tuberosities, and the coordinates of the space points, so that the Nth percentile 3D buttock model and models for special buttock types can be established. The proposed method also serves as useful guidance for establishing 3D percentile models of other parts of the human body with characteristic points.
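A minimal sketch of the percentile step, assuming a hypothetical table of measured dimensions (one row per scanned subject, one column per characteristic dimension); the location and scale values are invented for illustration.

```python
import numpy as np

# Hypothetical measurements (mm): one row per subject, one column per
# characteristic buttock dimension (e.g., ischial tuberosity diameter).
dims = np.random.default_rng(1).normal(loc=[120, 95, 80, 60, 45],
                                       scale=[12, 9, 8, 6, 5],
                                       size=(500, 5))

# Percentile values that drive the Nth-percentile model reconstruction.
levels = [1, 5, 50, 95, 99]
table = np.percentile(dims, levels, axis=0)   # shape: (5 levels, 5 dims)
for lv, row in zip(levels, table):
    print(f"P{lv:>2}: " + "  ".join(f"{v:7.1f}" for v in row))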
The advantages and disadvantages of three rainfall indices were demonstrated. The results indicate that the gamma distribution provides a good fit to precipitation data and enables precipitation amounts to be accurately expressed in terms of probability in rainfall analyses of large-scale regions. The relationship between SST in the eastern equatorial Pacific and precipitation in China and India was also studied using gamma percentile series.
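A brief sketch of the gamma-fitting step, using synthetic rainfall totals in place of the station data; the fitted distribution converts amounts into probabilities and percentiles as the abstract describes.

```python
import numpy as np
from scipy import stats

# Hypothetical monthly rainfall totals (mm); zero amounts are typically
# excluded before fitting a gamma distribution to precipitation.
rain = np.random.default_rng(2).gamma(shape=2.0, scale=40.0, size=600)

shape, loc, scale = stats.gamma.fit(rain, floc=0)   # fix location at zero
p90 = stats.gamma.ppf(0.90, shape, loc=loc, scale=scale)
prob_wet = 1 - stats.gamma.cdf(150.0, shape, loc=loc, scale=scale)
print(f"90th percentile: {p90:.1f} mm; P(rain > 150 mm) = {prob_wet:.3f}")
```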
In large-sample studies where distributions may be skewed and not readily transformed to symmetry, it may be of greater interest to compare distributions in terms of percentiles rather than means. For example, it may be more informative to compare two or more populations with respect to their within-population distributions by testing the hypothesis that their corresponding 10th, 50th, and 90th percentiles are equal. As a generalization of the median test, the proposed test statistic is asymptotically distributed as chi-square, with degrees of freedom dependent on the number of percentiles tested and the constraints of the null hypothesis. Results from simulation studies are used to validate the nominal 0.05 significance level under the null hypothesis and to establish asymptotic power properties suitable for testing the equality of percentile profiles against selected profile discrepancies for a variety of underlying distributions. A pragmatic example illustrates the comparison of the percentile profiles of four body mass index distributions.
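To make the "generalization of the median test" concrete, here is a hedged sketch of the basic idea for a single percentile: classify each observation as above or below the pooled p-th percentile and apply a chi-square test of independence. This is the median-test mechanism, not the paper's exact profile statistic.

```python
import numpy as np
from scipy.stats import chi2_contingency

def percentile_test(samples, p):
    """Median-test-style comparison at the p-th percentile: classify each
    observation as above/below the pooled percentile, then test the
    resulting contingency table for independence."""
    pooled = np.percentile(np.concatenate(samples), p)
    table = np.array([[np.sum(s <= pooled), np.sum(s > pooled)]
                      for s in samples])
    stat, pval, dof, _ = chi2_contingency(table)
    return stat, pval

rng = np.random.default_rng(3)
groups = [rng.lognormal(0.0, 0.5, 400), rng.lognormal(0.1, 0.5, 400)]
for p in (10, 50, 90):
    stat, pval = percentile_test(groups, p)
    print(f"P{p}: chi2 = {stat:.2f}, p-value = {pval:.3f}")
```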
Testing the equality of percentiles (quantiles) between populations is an effective method for robust, nonparametric comparison, especially when the distributions are asymmetric or irregularly shaped. Unlike global nonparametric tests of homogeneity such as the Kolmogorov-Smirnov test, testing the equality of a set of percentiles (i.e., a percentile profile) yields an estimate of the location and extent of the differences between the populations along the entire domain. The Wald test, using bootstrap estimates of the variance of the order statistics, provides a unified method for hypothesis testing of functions of the population percentiles. Simulation studies are conducted to show the performance of the method under various scenarios and to give suggestions on its use. Several examples illustrate useful applications to real data.
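A simplified sketch of the bootstrap-Wald idea: estimate the variance of each sample percentile by resampling, then form a Wald chi-square for the profile difference. For brevity the sketch treats percentile estimates as independent across levels, whereas the full test would use their covariance matrix.

```python
import numpy as np
from scipy.stats import chi2

def boot_var(x, p, B=2000, rng=None):
    """Bootstrap estimate of the variance of the sample p-th percentile."""
    rng = rng or np.random.default_rng()
    est = np.percentile(rng.choice(x, (B, x.size), replace=True), p, axis=1)
    return est.var(ddof=1)

def wald_profile_test(x, y, percentiles=(10, 50, 90)):
    """Wald chi-square for H0: the two percentile profiles are equal
    (diagonal covariance assumed -- a simplification)."""
    rng = np.random.default_rng(4)
    diff = np.array([np.percentile(x, p) - np.percentile(y, p)
                     for p in percentiles])
    var = np.array([boot_var(x, p, rng=rng) + boot_var(y, p, rng=rng)
                    for p in percentiles])
    W = np.sum(diff**2 / var)
    return W, chi2.sf(W, df=len(percentiles))

rng = np.random.default_rng(5)
W, pval = wald_profile_test(rng.gamma(2, 2, 500), rng.gamma(2, 2.3, 500))
print(f"W = {W:.2f}, p-value = {pval:.4f}")
```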
Extreme weather and climate phenomena, such as heatwaves, cold waves, floods, and droughts, are expected to become more common and to have significant impacts on ecosystems, biodiversity, and society. Devastating disasters are mostly caused by record-breaking extreme events, which are becoming more frequent throughout the world, including in Tanzania. A clear global signal of an increase in warm days and nights and a decrease in cold days and nights has been observed. The present study assessed trends in annual extreme temperature indices over the period 1982 to 2022 at 29 meteorological stations, for which the daily minimum and maximum data were obtained from NASA/POWER. The Mann-Kendall test and Sen's slope estimator were employed for the trend analysis over the study area. The analysis indicates that, for the most part, the country shows an increase in warm days and nights and extreme warm days and nights, and a decrease in cold days and nights and extreme cold days and nights. The number of warm nights and days is on the rise, with warm nights trending significantly faster than warm days. The percentile-based extreme temperature indices exhibited more noticeable changes than the absolute extreme temperature indices. Specifically, 66% and 97% of stations demonstrated increasing trends in warm days (TX90p) and warm nights (TN90p), respectively. Conversely, 41% and 97% of stations demonstrated decreasing trends in the cold indices TX10p and TN10p, respectively. The results are consistent with the observed temperature extreme trends in various parts of the world as indicated in IPCC reports.
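A minimal sketch of a percentile-based index such as TX90p: count the share of days whose maximum temperature exceeds the 90th percentile of a base period. The ETCCDI definition uses calendar-day percentiles from a 5-day window; this sketch simplifies to one global threshold, and the data are synthetic.

```python
import numpy as np

def tx90p(tmax, base):
    """Percentage of days with Tmax above the 90th percentile of a
    base period (simplified, single-threshold version of TX90p)."""
    thr = np.percentile(base, 90)
    return 100.0 * np.mean(tmax > thr)

rng = np.random.default_rng(6)
base = rng.normal(30, 3, 365 * 30)   # hypothetical multi-decade base period
year = rng.normal(31, 3, 365)        # a warmer test year
print(f"TX90p = {tx90p(year, base):.1f}%")
```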
This study addresses the challenges of big data visualization by using data reduction methods based on feature selection, with the goals of reducing the volume of big data and minimizing model training time (Tt) while maintaining data quality. We tackle these challenges using the embedded "Select from model (SFM)" method with the random forest importance (RFI) algorithm, and compare it with the filter-based "Select percentile (SP)" method using the chi-square (Chi2) statistic, to select the most important features; these are then fed into a classification process using the logistic regression (LR) algorithm and the k-nearest neighbor (KNN) algorithm. The classification accuracy (AC) of LR is also compared to that of the KNN approach, in Python, on eight data sets to see which method produces the best results when feature selection is applied. The study concludes that feature selection methods have a significant impact on the analysis and visualization of the data once redundant features and features that do not affect the goal are removed. After several comparisons, the study recommends SFMLR: SFM based on the RFI algorithm for feature selection, combined with the LR algorithm for classification. The proposal proved its efficacy when its results were compared with recent literature.
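A hedged sketch of the two pipelines being compared, using a standard scikit-learn dataset as a stand-in for the paper's eight data sets; the percentile and forest-size settings are illustrative, not the paper's.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, SelectPercentile, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Embedded method: SelectFromModel driven by random-forest importances (RFI).
sfm = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0))
Xtr_sfm, Xte_sfm = sfm.fit_transform(Xtr, ytr), sfm.transform(Xte)

# Filter method: SelectPercentile with the chi-square score (needs X >= 0).
sp = SelectPercentile(chi2, percentile=30)
Xtr_sp, Xte_sp = sp.fit_transform(Xtr, ytr), sp.transform(Xte)

for name, (a, b) in {"SFM+LR": (Xtr_sfm, Xte_sfm),
                     "SP+LR": (Xtr_sp, Xte_sp)}.items():
    acc = LogisticRegression(max_iter=5000).fit(a, ytr).score(b, yte)
    print(f"{name}: accuracy = {acc:.3f}")
```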
The study assessed changes in rainfall variability and the frequency of extreme events (very wet and very dry) in the state of São Paulo, Brazil, over a 40-year period divided into two sub-periods: 1973-1992 (P1) and 1993-2012 (P2). Data from 79 rain gauge stations were selected to represent the different climatic and geomorphological domains of the state. The annual pattern was evaluated through the scale and shape parameters of the gamma distribution and the 95th and 5th percentile thresholds, the latter also employed to evaluate the seasonal spatial patterns (rainy season, Oct.-Mar., and sub-humid to dry season, Apr.-Sep.). Results showed that average precipitation was similar in P1 and P2, but São Paulo evolved toward a more irregular rainfall distribution, with a rise of approximately 10% in the number of extremes between 1973 and 2012, especially in very dry occurrences and in the north and west of the state, which are the least rainy regions. Moreover, while 55% of the evaluated rain gauges recorded more extreme wet episodes in P2, 76% registered more extreme dry episodes in the same period. Some very dry or very wet events recorded after the 40-year evaluation period are discussed in terms of the associated weather patterns and their impacts on society, and they attest to the validity of the quantitative assessment. The qualitative analysis indicates that if the trends toward a more irregular distribution of rain and an increase in extreme events persist, as pointed out by the gamma and percentile analyses, they will continue to have serious effects on the natural and social systems of the state, which is the most populous in Brazil and has its strongest and most diversified economy.
Using observed daily temperatures from 756 stations in China during the period from 1951 to 2009, extensive and persistent extreme cold events (EPECEs) were defined according to the following three steps: 1) a station was defined as an extreme cold station (ECS) if its observed temperature was lower than its 10th percentile threshold; 2) an extensive extreme cold event was determined to be present if the approximate area occupied by ECSs was more than 10% of the total area of China (83rd percentile) on its starting day and the maximum area occupied by ECSs was at least 20% of the total area of China (96th percentile); and 3) an EPECE was determined to be present if the extensive extreme cold event lasted for at least eight days. 52 EPECEs were identified in this manner, and the identification results were verified against other reliable data. On the basis of cluster analysis, five types of EPECEs were classified according to the spatial distribution of ECSs at the most extensive time over the course of each EPECE.
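A simplified sketch of the three-step detection on a synthetic (days × stations) temperature array. The area fractions, event-continuation rule, and injected cold spell are assumptions for illustration; the paper's exact area approximation is not reproduced here.

```python
import numpy as np

def find_epeces(temp, area_frac, start=0.10, peak=0.20, min_days=8):
    """Flag EPECE-like events: per-station 10th-percentile thresholds
    (step 1), area-fraction conditions (step 2), duration (step 3)."""
    thr = np.percentile(temp, 10, axis=0)              # per-station threshold
    cold_area = (temp < thr).astype(float) @ area_frac  # daily ECS area share
    events, i = [], 0
    while i < len(cold_area):
        if cold_area[i] > start:                       # candidate starting day
            j = i
            while j < len(cold_area) and cold_area[j] > start:
                j += 1
            if j - i >= min_days and cold_area[i:j].max() >= peak:
                events.append((i, j - 1))              # steps 2 and 3 hold
            i = j
        else:
            i += 1
    return events

rng = np.random.default_rng(7)
temp = rng.normal(0, 5, (1000, 50))
temp[300:312] -= 12                                    # inject a cold spell
print(find_epeces(temp, np.full(50, 1 / 50)))
```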
The Statistical Priority-based Multiple Access (SPMA) protocol is the de facto standard for Tactical Targeting Network Technology (TTNT) and has also been implemented in ad hoc networks. In this paper, we present a non-preemptive M/M/1/K queuing model to analyze the performance of the different priorities in SPMA in terms of average packet loss rate and delay. Based on this queuing model, we design a percentile scoring system combined with a Q-learning algorithm to optimize the protocol parameters. Simulation results show that our theoretical model closely matches reality and that the proposed algorithm improves the efficiency and accuracy of finding the optimal parameter set of the SPMA protocol.
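For reference, the standard steady-state formulas for a single-class M/M/1/K queue, which underlie loss-and-delay analyses of this kind (the paper's model additionally handles multiple priorities); the arrival and service rates below are illustrative.

```python
import numpy as np

def mm1k_metrics(lam, mu, K):
    """Steady-state M/M/1/K metrics: blocking (loss) probability,
    mean number in system, and mean delay via Little's law."""
    rho = lam / mu
    if np.isclose(rho, 1.0):
        probs = np.full(K + 1, 1.0 / (K + 1))
    else:
        probs = (1 - rho) * rho**np.arange(K + 1) / (1 - rho**(K + 1))
    p_block = probs[K]                      # arrival finds the buffer full
    L = np.sum(np.arange(K + 1) * probs)    # mean number in system
    W = L / (lam * (1 - p_block))           # mean delay of accepted packets
    return p_block, L, W

p_block, L, W = mm1k_metrics(lam=8.0, mu=10.0, K=20)
print(f"loss = {p_block:.4f}, L = {L:.2f}, W = {W:.4f}")
```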
Two approaches to statistical downscaling were applied to indices of temperature extremes based on percentiles of daily maximum and minimum temperature observations at Beijing station in summer during 1960-2008. The first was to downscale daily maximum and minimum temperatures using EOF analysis and stepwise linear regression and then calculate the indices of extremes; the second was to directly downscale the percentile-based indices using seasonal large-scale temperature and geopotential height records. Cross-validation showed that the latter approach performs better than the former. The latter approach was then applied to 48 meteorological stations in northern China, where cross-validation showed close correlation between the percentile-based indices and the seasonal large-scale variables. Finally, future scenarios of the indices of temperature extremes in northern China were projected by applying the statistical downscaling to Hadley Centre Coupled Model Version 3 (HadCM3) simulations under the Representative Concentration Pathway 4.5 (RCP 4.5) scenario of the fifth Coupled Model Intercomparison Project (CMIP5). The results showed that the 90th percentile of daily maximum temperatures will increase by about 1.5℃, and the 10th percentile of daily minimum temperatures will increase by about 2℃, during 2011-35 relative to 1980-99.
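A toy sketch of the second (direct) approach: regress a percentile-based index on seasonal large-scale predictors and judge skill by cross-validation. The predictors, coefficients, and noise level are all synthetic stand-ins for the large-scale records.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_years = 49                                   # e.g., summers 1960-2008

# Hypothetical predictors: seasonal large-scale temperature and geopotential
# height anomalies; predictand: a percentile-based extreme index.
X = rng.normal(size=(n_years, 2))
y = 1.2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n_years)

r2 = cross_val_score(LinearRegression(), X, y, cv=7, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```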
This study designed an approach to derive land cover in South Africa with insufficient ground samples and demonstrated it in the Nzhelele and Levhuvu catchments, South Africa. The method integrates Landsat 8, Sentinel-1, and Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM) data on the Google Earth Engine (GEE) platform. A random forest classifier with 300 trees is employed as the land-cover classification model. To overcome the lack of ground data, stratified sampling was used to generate the training and validation samples from an existing land-cover product. Likewise, to distinguish the different land-cover categories, percentile and monthly median composites were employed to expand the input metrics of the random forest classifier. Results showed that the overall accuracy of the land-cover map of the Nzhelele and Levhuvu catchments, South Africa, in 2017-2018 reached 76.43%. Three important conclusions can be drawn from our research. 1) Sentinel-1 data can slightly improve the overall accuracy of the land-cover map, although their contribution varies with land type. 2) Under-fitting was observed when non-dominant land-cover categories were trained using random sampling; the stratified sampling method is recommended to ensure the classification accuracy of non-dominant classes. 3) When the related reflectance bands participate in training, the individual Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Soil-Adjusted Vegetation Index (SAVI), and Normalized Difference Built-up Index (NDBI) have little effect on the final land-cover classification result.
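A minimal sketch of the sampling-and-classification step with scikit-learn standing in for GEE: a stratified split preserves the share of each (rare) class in training and validation, and a 300-tree random forest matches the classifier size named above. The pixel table and class frequencies are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)

# Hypothetical pixel table: spectral/percentile-composite metrics as columns,
# labels drawn from an existing land-cover product (class 3 is non-dominant).
X = rng.normal(size=(5000, 12))
y = rng.choice(4, size=5000, p=[0.5, 0.3, 0.17, 0.03])

# Stratified split preserves the proportion of every class in both subsets.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)  # 300 trees
clf.fit(Xtr, ytr)
print(f"overall accuracy: {accuracy_score(yte, clf.predict(Xte)):.3f}")
```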
This work focuses on the evaluation of the seismic hazard for Romania using earthquake catalogues generated by a Monte Carlo approach. The seismicity of Romania can be attributed to the Vrancea intermediate-depth seismic source and to 13 other crustal seismic sources. The recurrence times of large-magnitude seismic events (both crustal and subcrustal), as well as the moment release rates, are computed using simulated earthquake catalogues. The results show that the largest contribution to the overall moment release among the crustal seismic sources comes from the seismic regions in Bulgaria, while the seismic regions in Romania contribute less than 5% of the overall moment release. In addition, the computations show that the moment release rate for the Vrancea subcrustal seismic source is about ten times larger than that of all the crustal seismic sources. Finally, the Monte Carlo approach is used to evaluate the seismic hazard for the 20 cities in Romania with populations larger than 100,000 inhabitants. The results show some differences between the seismic hazard values obtained through Monte Carlo simulation and those in the Romanian seismic design code P100-1/2013, notably for cities situated in the western part of Romania that are influenced by local crustal seismic sources.
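A hedged sketch of catalogue simulation and moment-release computation: magnitudes are drawn from a doubly truncated Gutenberg-Richter law by inverse transform, and moments follow the Hanks-Kanamori relation log10 M0 = 1.5 Mw + 9.05 (N·m). The rate, b-value, and magnitude bounds are illustrative, not the Romanian source parameters.

```python
import numpy as np

def simulate_catalogue(rate, b, m_min, m_max, years, rng):
    """Monte Carlo catalogue: event count ~ Poisson(rate * years);
    magnitudes from a truncated Gutenberg-Richter distribution."""
    n = rng.poisson(rate * years)
    u = rng.random(n)
    beta = b * np.log(10)
    # inverse CDF of the doubly truncated exponential magnitude law
    m = m_min - np.log(1 - u * (1 - np.exp(-beta * (m_max - m_min)))) / beta
    return m

def moment_release(mags):
    """Total seismic moment (N*m) via log10 M0 = 1.5*Mw + 9.05."""
    return np.sum(10 ** (1.5 * mags + 9.05))

rng = np.random.default_rng(10)
mags = simulate_catalogue(rate=2.5, b=1.0, m_min=4.0, m_max=7.8,
                          years=10_000, rng=rng)
rate_Nm = moment_release(mags) / 10_000
print(f"{mags.size} events, moment release rate = {rate_Nm:.3e} N*m/yr")
```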
We study a general framework for assessing the injury probability corresponding to an input dose quantity. In many applications, the true value of the input dose may not be directly measurable; instead, it is estimated from measurable/controllable quantities via numerical simulations using assumed representative parameter values. We aim to develop a simple modeling framework that accommodates all uncertainties, including the discrepancy between the estimated input dose and the true input dose. We first interpret the widely used logistic dose-injury model as the result of dose propagation uncertainty from the input dose to the target dose at the active site for injury, where the binary outcome is completely determined by the target dose. We specify the symmetric logistic dose-injury function using two shape parameters: the median injury dose and the 10-90 percentile width. We relate these two shape parameters to the mean and standard deviation of the dose propagation uncertainty. We find that 1) a larger total uncertainty spreads the dose-response function, increasing the 10-90 percentile width, and 2) a systematic overestimate of the input dose shifts the injury probability to the right along the estimated input dose axis. This framework provides a way of revising an injury model established for a particular test population to predict the injury model for a new population with different distributions of the parameters that affect dose propagation and dose estimation. In addition to modeling dose propagation uncertainty, we propose a new three-parameter model that includes the skewness of the injury function. The proposed three-parameter functional form is based on a shifted log-normal distribution of the dose propagation uncertainty and is approximately invariant when other uncertainties are added. It provides a framework for extending a skewed injury model from a test population to a target population in applications.
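A small worked sketch of the two-parameter logistic form: for a logistic curve the 10th and 90th percentile doses satisfy d90 - d10 = 2·ln(9)·s, so the median d50 and the 10-90 width w determine the scale s = w / (2 ln 9). The numeric values below are illustrative.

```python
import numpy as np

def injury_probability(dose, d50, w1090):
    """Symmetric logistic dose-injury function parameterized by the median
    injury dose d50 and the 10-90 percentile width w1090; the logistic
    scale is s = w1090 / (2*ln(9))."""
    s = w1090 / (2.0 * np.log(9.0))
    return 1.0 / (1.0 + np.exp(-(dose - d50) / s))

d = np.array([2.0, 3.0, 4.0, 5.0])
print(injury_probability(d, d50=3.5, w1090=2.0))

# sanity check: the 10th and 90th percentile doses differ by exactly w1090
s = 2.0 / (2 * np.log(9))
print(3.5 - s * np.log(9), 3.5 + s * np.log(9))   # -> 2.5 and 4.5
```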
The paper focuses on measuring self-similarity with several techniques via the Hurst index, a self-similarity parameter. It is well established that Internet traffic exhibits self-similarity. Motivated by this fact, real-time web users at various centers are treated here as traffic and examined by various methods to test for self-similarity. The experiments verify that the traffic examined in the present study is self-similar, using a new method based on descriptive measures; for example, percentiles are applied to compute the Hurst parameter, which gives the intensity of the self-similarity. The numerical results and analysis discussed and presented here can play a significant role in improving services at web centers from the standpoint of quality of service (QoS).
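For context, a sketch of one standard Hurst estimator (the aggregated-variance method), not the paper's percentile-based method: for self-similar traffic the variance of block means scales as m^(2H-2), so H is recovered from a log-log regression.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64, 128)):
    """Aggregated-variance Hurst estimator: slope of
    log Var(X^(m)) vs log m equals 2H - 2."""
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var(ddof=1)))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(11)
iid = rng.normal(size=100_000)          # no long-range dependence
print(f"H (iid noise) ~ {hurst_aggvar(iid):.2f}")   # expect about 0.5
```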
In practice, control charts for monitoring the process mean are based on the normality assumption, but their performance is seriously affected if the distribution of the quality characteristic departs from normality. For such situations, we modify existing control charts, namely the Shewhart control chart, the exponentially weighted moving average (EWMA) control chart, and the hybrid exponentially weighted moving average (HEWMA) control chart, by assuming that the underlying process follows a power function distribution (PFD). Considering the situation in which the parameters of the PFD are unknown, we estimate them using three classical estimation methods: the percentile estimator (P.E), the maximum likelihood estimator (MLE), and the modified maximum likelihood estimator (MMLE). We construct Shewhart, EWMA, and HEWMA control charts based on the P.E, MLE, and MMLE, compare all of these charts using Monte Carlo simulation studies, and conclude that the HEWMA control chart under the MLE is more sensitive in detecting an early shift in the shape parameter when the underlying process follows a power function distribution.
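A minimal sketch of the EWMA chart mechanics (with normal data for clarity; the paper plugs in PFD-based parameter estimates instead): the statistic z_i = λx_i + (1-λ)z_{i-1} is compared against time-varying limits. λ and L below are common textbook choices, not the paper's settings.

```python
import numpy as np

def ewma_chart(x, mu0, sigma0, lam=0.2, L=2.7):
    """EWMA statistic with exact time-varying control limits
    mu0 +/- L*sigma0*sqrt(lam/(2-lam)*(1-(1-lam)^(2i)))."""
    z = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    i = np.arange(1, len(x) + 1)
    half = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam)**(2 * i)))
    return z, mu0 - half, mu0 + half

rng = np.random.default_rng(12)
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(0.8, 1, 20)])  # shift
z, lcl, ucl = ewma_chart(x, mu0=0.0, sigma0=1.0)
print("out-of-control samples:", np.flatnonzero((z < lcl) | (z > ucl)))
```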
The continuous monitoring of a machine is beneficial in improving its process reliability through the reflected power function distribution. It is essential for identifying and removing errors at the early stages of production, which ultimately benefits firms through cost savings and quality improvement. The current study introduces control charts that help manufacturing concerns keep the production process in control. It presents an exponentially weighted moving average (EWMA) control chart and an extended exponentially weighted moving average control chart and compares their performance. The percentile estimator and the modified maximum likelihood estimator are used to construct the control charts. The findings suggest that the extended exponentially weighted moving average control chart based on the percentile estimator performs better than the EWMA control charts based on the percentile estimator and the modified maximum likelihood estimator. Further, these results will help firms detect errors early, enhancing process reliability in the telecommunications and financing industries.
Purpose: A new point of view in the study of impact is introduced. Design/methodology/approach: Using fundamental theorems in real analysis, we study the convergence of well-known impact measures. Findings: We show that pointwise convergence is maintained by all well-known impact bundles (such as the h-, g-, and R-bundle) and that the μ-bundle even maintains uniform convergence. Based on these results, a classification of impact bundles is given. Research limitations: As with all impact studies, it is impossible to study all measures in depth. Practical implications: It is proposed to include convergence properties in the study of impact measures. Originality/value: This article is the first to present a bundle classification based on the convergence properties of impact bundles.
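For readers outside real analysis, the two convergence notions behind the Findings, stated for a sequence of impact measures f_n on a set S of rank-frequency functions (a generic formulation; the paper's precise domain may differ):

```latex
% Pointwise convergence of impact measures f_n to f on S:
\[
  f_n \to f \ \text{pointwise on } S
  \iff \forall Z \in S:\ \lim_{n\to\infty} f_n(Z) = f(Z).
\]
% Uniform convergence, the stronger property the mu-bundle maintains:
\[
  f_n \to f \ \text{uniformly on } S
  \iff \lim_{n\to\infty} \sup_{Z \in S} \lvert f_n(Z) - f(Z) \rvert = 0.
\]
% Uniform convergence implies pointwise convergence, which is why
% maintaining the uniform notion is the stronger bundle property.
```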