Abstract: The p value has been widely used as a way to summarise significance in data analysis. However, misuse and misinterpretation of the p value are common in practice. Our result shows that if the model specification is wrong, the distribution of the p value may be inappropriate, which makes decisions based on the p value invalid.
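The failure mode this abstract describes can be illustrated with a small simulation (a hypothetical sketch of ours, not the paper's own example): a two-sided z-test that assumes a known variance yields approximately uniform p-values when the model is correctly specified, but badly miscalibrated ones when the assumed variance is wrong.

```python
import math
import random

def z_test_pvalue(sample, sigma=1.0):
    # Two-sided z-test of H0: mean = 0, assuming the standard deviation
    # sigma is known; p = 2 * (1 - Phi(|z|))
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
# Correct specification: the data really are N(0, 1)
ok = [z_test_pvalue([random.gauss(0, 1) for _ in range(30)]) for _ in range(2000)]
# Misspecified: the data are N(0, 2) but the test still assumes sigma = 1
bad = [z_test_pvalue([random.gauss(0, 2) for _ in range(30)]) for _ in range(2000)]

# Under the correct model the rejection rate at 0.05 is about 0.05;
# under the wrong one it is far larger, invalidating the decision rule.
print(sum(p < 0.05 for p in ok) / 2000, sum(p < 0.05 for p in bad) / 2000)
```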
Funding: Supported by the National Natural Science Foundation of China (Nos. 41974068 and 41574040) and the Key International S&T Cooperation Project of P.R. China (No. 2015DFA21260).
Abstract: In this study, we investigate how the stress variation generated by a fault that experiences transient postseismic slip (TPS) affects the rate of aftershocks. First, we show that the postseismic slip in the Rubin-Ampuero model is a TPS that can occur on the main fault under velocity-weakening friction, that the resulting slip function is similar to the generalized Jeffreys-Lomnitz creep law, and that the TPS can be explained by a continuous creep process undergoing reloading. Second, we obtain an approximate solution, based on the Helmstetter-Shaw seismicity model, relating the rate of aftershocks to such TPS. For the Wenchuan sequence, we perform a numerical fit of the cumulative number of aftershocks using the Modified Omori Law (MOL), the Dieterich model, and the specific TPS model. The fitted curves indicate that the data are better explained by the TPS model with a B/A ratio of approximately 1.12, where A and B are the parameters in the rate- and state-dependent friction law. Moreover, the p and c that appear in the MOL can be interpreted through B/A and the critical slip distance, respectively. Because the B/A ratio in the current model is always larger than 1, the model is a possible candidate to explain why aftershock rates commonly decay as a power law with a p-value larger than 1. Finally, the influence of the background seismicity rate r on the parameters is studied; the results show that, except for the apparent aftershock duration, the parameters are insensitive to r.
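For reference, the Modified Omori Law mentioned here has a simple closed form for both the rate and its cumulative count. The sketch below uses illustrative parameters of our own choosing; p = 1.12 merely echoes the value reported above and K and c are made up.

```python
import math

def mol_rate(t, K, c, p):
    # Modified Omori Law aftershock rate: n(t) = K / (t + c)**p
    return K / (t + c) ** p

def mol_cumulative(t, K, c, p):
    # Closed-form cumulative count N(t) = integral of n(s) ds from 0 to t
    if p == 1.0:
        return K * math.log((t + c) / c)
    return K * (c ** (1 - p) - (t + c) ** (1 - p)) / (p - 1)

# Illustrative parameters only (time in days, counts dimensionless)
K, c, p = 100.0, 0.05, 1.12
print(round(mol_cumulative(10.0, K, c, p), 1))
```

The `p == 1.0` branch handles the classic Omori case, where the integral is logarithmic rather than a power law.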
Abstract: We describe here a comprehensive framework for intelligent information management (IIM) of data collection and decision-making actions for reliable and robust event processing and recognition. It is driven by algorithmic information theory (AIT) in general, and algorithmic randomness and Kolmogorov complexity (KC) in particular. The processing and recognition tasks addressed include data discrimination and multilayer open-set data categorization, change detection, data aggregation, clustering and data segmentation, data selection and link analysis, data cleaning and data revision, and prediction and identification of critical states. The unifying theme throughout the paper is that of "compression entails comprehension", which is realized using the interrelated concepts of randomness vs. regularity and Kolmogorov complexity. The constructive and all-encompassing active learning (AL) methodology, which mediates and supports this theme, is context-driven and takes advantage of statistical learning in general, and semi-supervised learning and transduction in particular. Active learning employs explore and exploit actions, characteristic of closed-loop control, to accumulate evidence, revise its prediction models, and reduce uncertainty. The set-based similarity scores, driven by algorithmic randomness and Kolmogorov complexity, employ strangeness/typicality and p-values. We propose applying the IIM framework to critical-state prediction for complex physical systems; in particular, the prediction of cyclone genesis and intensification.
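Kolmogorov complexity itself is uncomputable, but the "compression entails comprehension" theme can be approximated in practice with an off-the-shelf compressor. The sketch below uses the Normalized Compression Distance, a well-known proxy for complexity-based similarity; it is our own illustration, not code from the paper.

```python
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance: a computable stand-in for
    # Kolmogorov-complexity-based similarity, using zlib as the compressor.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

random.seed(0)
a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy dog " * 20
noise = bytes(random.randrange(256) for _ in range(len(a)))
# Regular, similar strings compress well together; random noise does not.
print(ncd(a, b) < ncd(a, noise))   # True
```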
Abstract: We advance here a novel methodology for robust intelligent biometric information management with inferences and predictions made using randomness and complexity concepts. Intelligence refers to learning, adaptation, and functionality, and robustness refers to the ability to handle incomplete and/or corrupt adversarial information, on one side, and image and/or device variability, on the other side. The proposed methodology is model-free and non-parametric. It draws support from discriminative methods using likelihood ratios to link biometrics and forensics at the conceptual level. It further links, at the modeling and implementation level, the Bayesian framework, statistical learning theory (SLT) using transduction and semi-supervised learning, and Information Theory (IT) using mutual information. The key concepts supporting the proposed methodology are a) local estimation to facilitate learning and prediction using both labeled and unlabeled data; b) similarity metrics using regularity of patterns, randomness deficiency, and Kolmogorov complexity (similar to MDL) using strangeness/typicality and ranking p-values; and c) the Cover-Hart theorem on the asymptotic performance of k-nearest neighbors approaching the optimal Bayes error. Several topics on biometric inference and prediction related to 1) multi-level and multi-layer data fusion, including quality and multi-modal biometrics; 2) score normalization and revision theory; 3) face selection and tracking; and 4) identity management are described here using an integrated approach that includes transduction and boosting for ranking and sequential fusion/aggregation, respectively, on one side, and active learning and change/outlier/intrusion detection realized using information gain and martingale, respectively, on the other side. The methodology proposed can be mapped to additional types of information beyond biometrics.
Abstract: An AR(1) model with an ARCH(1) error structure is known as the first-order double autoregressive (DAR(1)) model. In this paper, a conditional-likelihood-based method is proposed to obtain inference for the two scalar parameters of interest of the DAR(1) model. Theoretically, the proposed method has rate of convergence O(n^(-3/2)). Applying the proposed method to a real-life data set shows that the results it produces can be quite different from those obtained by existing methods. Results from Monte Carlo simulation studies illustrate the superior accuracy of the proposed method even when the sample size is small.
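For concreteness, a DAR(1) process combines an AR(1) mean equation with an ARCH(1) conditional variance. Below is a minimal simulation sketch; the parameter values are ours, chosen inside the stationarity region, and are not taken from the paper.

```python
import math
import random

def simulate_dar1(n, phi, omega, alpha, seed=42):
    # DAR(1): x_t = phi * x_{t-1} + eta_t * sqrt(omega + alpha * x_{t-1}**2),
    # with eta_t iid standard normal, i.e. an AR(1) model whose
    # errors follow an ARCH(1) structure.
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1) * math.sqrt(omega + alpha * x * x)
        path.append(x)
    return path

# phi**2 + alpha < 1 keeps the process second-order stationary,
# with stationary variance omega / (1 - phi**2 - alpha)
series = simulate_dar1(2000, phi=0.3, omega=1.0, alpha=0.2)
print(len(series))   # 2000
```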
Abstract: Today, coronavirus appears as a serious challenge to the whole world. Epidemiological data on coronavirus are collected through media and web sources for the purpose of analysis. New data on COVID-19 are available daily, yet information about the biological aspects of SARS-CoV-2 and the epidemiological characteristics of COVID-19 remains limited, and uncertainty remains around nearly all of its parameter values. This research provides the scientific and public health communities with better resources, knowledge, and tools to improve their ability to control infectious diseases. Using the publicly available data on the ongoing pandemic, the present study investigates the incubation period and other time intervals that govern the epidemiological dynamics of COVID-19 infections. Testing hypotheses are formulated for different countries with a 95% level of confidence, and descriptive statistics are calculated to analyze into which region COVID-19 will fall according to the tested hypothesized means of the different countries. The results will be helpful in decision making as well as in further mathematical analysis and control strategy. Statistical tools are used to investigate this pandemic, which will be useful for further research. Hypothesis testing is carried out for the differences in various effects, including standard errors. Changes in state variables are observed over time. The rapid outbreak of coronavirus can be stopped by reducing its transmission. Susceptible individuals should maintain a safe distance and follow precautionary measures regarding COVID-19 transmission.
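As a generic illustration of the hypothesis-testing workflow described (not the study's actual computation), a 95% normal-approximation confidence interval for a country's mean can be computed as below; a hypothesized mean falling outside the interval would be rejected at the 5% level. The sample values are hypothetical.

```python
def mean_ci95(sample):
    # 95% normal-approximation confidence interval for the mean:
    # mean +/- 1.96 * (sample standard deviation) / sqrt(n)
    n = len(sample)
    m = sum(sample) / n
    s = (sum((x - m) ** 2 for x in sample) / (n - 1)) ** 0.5
    half = 1.96 * s / n ** 0.5
    return m - half, m + half

# Hypothetical incubation-period observations (days), for illustration only
obs = [4, 5, 6, 5, 5, 4, 6, 5, 5, 5]
lo, hi = mean_ci95(obs)
print(round(lo, 2), round(hi, 2))   # 4.59 5.41
```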
Abstract: A renewable component with exponential failure and repair times is considered, and its instantaneous availability at time t is denoted by A(t). This paper proposes two methods for constructing lower confidence limits for A(t), based on the Chebyshev inequality and on the generalized p-value approach.
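The quantity A(t) in this abstract has a standard closed form for the two-state Markov model with exponential failure rate λ and repair rate μ: A(t) = μ/(λ+μ) + (λ/(λ+μ))·e^(−(λ+μ)t). A sketch, with illustrative rates of our own:

```python
import math

def availability(t, lam, mu):
    # Instantaneous availability of a renewable component with exponential
    # failure rate lam and repair rate mu, starting in the working state:
    # A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu)*t)
    s = lam + mu
    return mu / s + (lam / s) * math.exp(-s * t)

# A(0) = 1; A(t) decays monotonically to the steady-state value mu/(lam+mu)
lam, mu = 0.1, 1.0
print(availability(0.0, lam, mu), round(availability(100.0, lam, mu), 4))   # 1.0 0.9091
```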
Abstract: The increasing volume of data in the environmental sciences needs analysis and interpretation. Among the challenges generated by this "data deluge", the development of efficient strategies for knowledge discovery is an important issue. Here, statistical tools and tools from computational intelligence are applied to analyze large data sets from meteorology and climate sciences. Our approach allows a geographical mapping of the statistical property that is easily interpreted by meteorologists. Our data analysis comprises two main steps of knowledge extraction, applied successively to reduce the complexity of the original data set. The goal is to identify a much smaller subset of climatic variables that might still describe, or even predict, the probability of occurrence of an extreme event. The first step applies a class-comparison technique: p-value estimation. The second step builds a decision tree (DT) configured from the available data and the p-value analysis. The DT is used as a predictive model, identifying the climate variables most statistically significant for precipitation intensity. The methodology is employed to study the climatic causes of the extreme precipitation events that occurred in the Alagoas and Pernambuco States (Brazil) in June 2010.
Abstract: This paper considers a Kullback-Leibler distance (KLD) which is asymptotically equivalent to the KLD of Goutis and Robert [1] when the reference model (in comparison to a competing fitted model) is correctly specified and certain regularity conditions hold (cf. Akaike [2]). We derive the asymptotic property of this Goutis-Robert-Akaike KLD under certain regularity conditions. We also examine the impact of this asymptotic property when the regularity conditions are only partially satisfied. Furthermore, the connection between the Goutis-Robert-Akaike KLD and a weighted posterior predictive p-value (WPPP) is established. Finally, both the Goutis-Robert-Akaike KLD and the WPPP are applied to compare models using various simulated examples as well as two cohort studies of diabetes.
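For readers unfamiliar with the base quantity, the discrete Kullback-Leibler divergence is easy to compute directly. This is a generic sketch of the textbook definition; the paper's Goutis-Robert-Akaike variant builds asymptotic refinements on top of it.

```python
import math

def kl_divergence(p, q):
    # KL(p || q) = sum_i p_i * log(p_i / q_i) for discrete distributions;
    # nonnegative, and exactly zero when p == q
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(round(kl_divergence(p, q), 4), kl_divergence(p, p))   # 0.0253 0.0
```

Note that KL is not symmetric: KL(p || q) and KL(q || p) generally differ, which is why "distance" in the abstract is in the loose sense.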
Abstract: This research work employed a simulation study to evaluate six outlier techniques: the t-Statistic, Modified Z-Statistic, Cancer Outlier Profile Analysis (COPA), Outlier Sum-Statistic (OS), Outlier Robust t-Statistic (ORT), and Truncated Outlier Robust t-Statistic (TORT), with the aim of determining which technique has the higher power for detecting and handling outliers in terms of P-values, true positives, false positives, False Discovery Rate (FDR), and the corresponding Receiver Operating Characteristic (ROC) curves. The analysis revealed that OS was the best technique in terms of P-values, followed by COPA, t, ORT, TORT, and Z, respectively. The FDR results likewise show that OS is the best technique, followed by COPA, t, ORT, TORT, and Z. In terms of ROC curves, the t-Statistic and OS have the largest Area Under the ROC Curve (AUC), indicating better sensitivity and specificity, followed by COPA and ORT with equal, significant AUCs, while Z and TORT have the smallest AUCs, which are not significant.
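Of the six techniques, the Outlier Sum statistic is simple enough to sketch in a few lines. This is our own illustrative implementation following the statistic's usual definition (standardize with the pooled median and MAD, then sum disease-group values beyond a q75 + IQR cutoff); details may differ from the variant used in the study.

```python
import statistics

def outlier_sum(control, disease):
    # Outlier Sum (OS) statistic, sketched after its usual definition:
    # standardize with the pooled median and MAD, then sum the
    # disease-group values that lie beyond q75 + IQR of the pooled data.
    pooled = list(control) + list(disease)
    med = statistics.median(pooled)
    mad = statistics.median([abs(x - med) for x in pooled]) or 1.0
    q25, _, q75 = statistics.quantiles(pooled, n=4)
    cutoff = (q75 - med) / mad + (q75 - q25) / mad  # standardized q75 + IQR
    return sum((x - med) / mad for x in disease if (x - med) / mad > cutoff)

ctrl = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2]
dis = [0.0, 0.1, 5.0, 6.0, -0.1, 0.2]   # two samples with outlying expression
print(outlier_sum(ctrl, dis) > 0, outlier_sum(dis, ctrl))   # True 0
```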
Abstract: Background: Fracture of the distal radius with involvement of the ulnar styloid process is a common clinical problem. It can be treated conservatively, usually involving wrist immobilization in a plaster cast, or surgically. A key method of surgical fixation is external fixation by distractor, which can be applied either only on the radial side or on both the ulnar and radial sides. Materials and Methods: A prospective, randomized, comparative study of 1-year duration was conducted on 32 patients admitted to the Department of Orthopaedics of BSMC & H, aged 20 to 75 years, with AO type B and C distal radius fractures involving the ulnar styloid process. The parameters studied were restoration of radial length, restoration of radial angle, intracarpal step-off, and palmar tilt, which were statistically evaluated; Fisher's exact test was performed, the two-tailed P-value was calculated, and the two groups were statistically compared. Results: In our study, 37.5% of patients in Group A and 81.25% in Group B had a radial difference (Table 1, Chart 1); 43.75% of patients in Group A and 87.5% in Group B had a radial angle (Table 2, Chart 2); 31.25% in Group A and 75% in Group B had an intracarpal step-off (Table 3, Chart 3); 62.5% had an abnormal palmar tilt in Group A while only 6.25% had an abnormal palmar tilt in Group B, which is extremely statistically significant. On average, 2 mm of distraction was required in 75% of patients in Group A, while only 30% of patients in Group B required distraction (Table 4, Chart 4). Conclusion: In our study, the radial difference, radial angle, intracarpal step-off, and palmar tilt returned significantly closer to normal in patients treated with a distractor on the radial side only, compared with distractor application on both the radial and ulnar sides, for distal radius fracture with ulnar styloid process involvement. Also, the post-operative distraction required under the image intensifier was higher in the group treated with a distractor on both sides than in those with a distractor only on the radial side.
Abstract: In the present study, the temporal behavior of the 2001 Bhuj aftershock sequence in the Kachchh region of western peninsular India is studied using the modified Omori law. The Omori law parameters p, c, and K are determined, with standard errors, by maximum likelihood estimation using the ZMAP algorithm in the MATLAB environment. The entire aftershock sequence is analyzed by dividing it into three separate series with respect to time, in order to evaluate the larger earthquake of magnitude M 5.7 that occurred on March 7, 2006 at the Gedi fault. This study helps to understand the cumulative effect of the aftershocks generated by this larger earthquake of the mainshock sequence. The results of this analysis are discussed alongside studies of different earthquake sequences from other parts of the world and suggest that all three series of the Bhuj aftershock sequence follow the Omori relation. Values of the parameter p vary significantly from series 1 to series 3, i.e., the p-value varies significantly with time. Similarly, the other two Omori law parameters, K and c, are also found to change significantly with time. These parameters are useful for describing the temporal behavior of aftershocks and for forecasting aftershock activity in the time domain. The aftershock decay rate provides insight into stress-release processes after the mainshock, helping to understand the heterogeneity of fault-zone properties and to evaluate time-dependent seismic hazard over the region.
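The maximum-likelihood fit described here can be sketched without ZMAP: profile out K in closed form, then search over (p, c). The implementation below is our own simplified illustration (a crude grid search on synthetic data), not the ZMAP algorithm, and all parameter values are made up.

```python
import math
import random

def omori_loglik(times, T, K, c, p):
    # Log-likelihood of the modified Omori law n(t) = K/(t + c)**p on (0, T]
    integ = (c ** (1 - p) - (T + c) ** (1 - p)) / (p - 1)
    return sum(math.log(K) - p * math.log(t + c) for t in times) - K * integ

def fit_omori(times, T):
    # Crude grid search over (p, c); for fixed (p, c) the MLE of K is
    # K = N / integral((t + c)**-p), so K is profiled out in closed form.
    best = None
    for p in (1.05 + 0.05 * i for i in range(18)):       # 1.05 .. 1.90
        for c in (0.02 * j for j in range(1, 26)):       # 0.02 .. 0.50
            integ = (c ** (1 - p) - (T + c) ** (1 - p)) / (p - 1)
            K = len(times) / integ
            ll = omori_loglik(times, T, K, c, p)
            if best is None or ll > best[0]:
                best = (ll, p, c, K)
    return best[1:]

# Synthetic aftershock times drawn from an Omori density by inverse transform
random.seed(1)
p0, c0, T = 1.2, 0.1, 100.0
a, b = c0 ** (1 - p0), (T + c0) ** (1 - p0)
times = [(a - random.random() * (a - b)) ** (1 / (1 - p0)) - c0
         for _ in range(500)]
p_hat, c_hat, K_hat = fit_omori(times, T)
print(round(p_hat, 2), round(c_hat, 2))
```

A real analysis would replace the grid with a proper optimizer and derive standard errors from the observed Fisher information, as the ZMAP-based fit in the study does.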
Funding: This work was partly supported by the National Natural Science Foundation of China (Grant Nos. 10271013, 30600119 and 90403130) and the Fund of Shandong University of Technology (Grant Nos. 2006KJZ01 and 2005KJM18).
Abstract: This paper provides a general method for constructing generalized p-values via fiducial inference. Furthermore, the power properties of the generalized test are discussed. As illustrations, the two-parameter exponential distribution and the unbalanced two-fold nested design are studied. It is shown that the resulting generalized p-values have good frequentist properties.