The value of a statistical life (VSL) is a crucial tool for monetizing health impacts. To explore the VSL in China, this study examines people’s willingness to pay (WTP) to reduce death risk from air pollution in six representative cities in China, based on face-to-face contingent valuation interviews (n = 3936) conducted from March 7, 2019 to September 30, 2019. The results reveal that the WTP varied from CNY 455 to 763 in 2019 (USD 66-111), corresponding to a VSL range of CNY 3.79-6.36 million (USD 549,395-921,940). The VSL in China in 2019 is estimated to be CNY 4.76 million (USD 689,659). The statistics indicate that monthly expenditure levels, environmental concerns, risk attitudes, and assumed market acceptance, which have seldom been discussed in previous studies, significantly impact WTP and VSL. These findings will serve as a reference for analyzing mortality risk reduction benefits in future research and for policymaking.
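The reported figures are internally consistent: dividing the WTP range by the VSL range implies that the surveyed mortality-risk reduction was roughly 1.2 in 10,000. A quick arithmetic sketch (the risk-reduction figure below is inferred from the reported numbers, not stated in the abstract):

```python
# Back-of-the-envelope VSL arithmetic: VSL = WTP / (reduction in mortality risk).
wtp_low, wtp_high = 455.0, 763.0    # CNY, reported WTP range
vsl_low, vsl_high = 3.79e6, 6.36e6  # CNY, reported VSL range

# Implied risk reduction (an inference from the reported ranges, not a quoted value):
implied_risk_reduction = wtp_low / vsl_low
print(round(implied_risk_reduction, 6))  # 0.00012, i.e. about 1.2 per 10,000

# Consistency check at the top of the range:
print(round(wtp_high / implied_risk_reduction / 1e6, 2))  # 6.36 (CNY million)
```

The same implied risk reduction reproduces both ends of the reported VSL range, which is the expected behavior when VSL is computed as mean WTP divided by the risk change offered in the survey scenario.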
Ore production is usually affected by multiple influencing inputs at open-pit mines. Nevertheless, the complex nonlinear relationships between these inputs and ore production remain unclear. This becomes even more challenging when the training data (e.g. truck haulage information and weather conditions) are massive. Among machine learning (ML) algorithms, the deep neural network (DNN) is a superior method for processing nonlinear and massive data, as its capacity can be tuned by adjusting the number of neurons and hidden layers. This study adopted DNNs to forecast ore production using truck haulage information and weather conditions at open-pit mines as training data. Before the prediction models were built, principal component analysis (PCA) was employed to reduce the data dimensionality and eliminate the multicollinearity among highly correlated input variables. To verify the superiority of the DNN, three ANNs containing only one hidden layer and six traditional ML models were established as benchmark models. The DNN model with multiple hidden layers performed better than the ANN models with a single hidden layer, and it outperformed the extensively applied benchmark models in predicting ore production. This provides engineers and researchers with an accurate method to forecast ore production, which supports sound budgetary decisions and mine planning at open-pit mines.
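The PCA-then-network pipeline described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data and hypothetical layer sizes, not the study's actual dataset or architecture:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical stand-ins for truck-haulage and weather inputs (not the study's data):
X = rng.normal(size=(500, 12))
y = 3.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=500)  # nonlinear target

# Standardize, decorrelate with PCA, then fit a multi-hidden-layer network.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=6),  # dimensionality reduction / multicollinearity removal
    MLPRegressor(hidden_layer_sizes=(32, 32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]).shape)  # (3,)
```

Swapping `MLPRegressor` for a single-hidden-layer network (e.g. `hidden_layer_sizes=(32,)`) reproduces the shallow-ANN benchmark comparison described in the abstract.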
Cone-disk systems find frequent use in applications such as conical diffusers, medical devices, and various rheometric and viscosimetry instruments. In this study, we investigate the three-dimensional flow of a water-based Ag-MgO hybrid nanofluid in a static cone-disk system while considering temperature-dependent fluid properties. How the variable fluid properties affect the dynamics and heat transfer features is studied using Reynolds's linearized model for variable viscosity and Chiam's model for variable thermal conductivity. The single-phase nanofluid model is utilized to describe convective heat transfer in hybrid nanofluids, incorporating the experimental data. This model is developed as a coupled system of convection-diffusion equations, encompassing the conservation of momentum and the conservation of thermal energy, in conjunction with an incompressibility condition. A self-similar model is developed via Lie-group scaling transformations, and the resulting self-similar equations are then solved numerically. The influence of the variable fluid parameters on both swirling and non-swirling flow cases is analyzed. Additionally, the Nusselt number for the disk surface is calculated. It is found that an increase in the temperature-dependent viscosity parameter enhances the heat transfer characteristics in the static cone-disk system, while the thermal conductivity parameter has the opposite effect.
Green supplier selection is an important debate in green supply chain management (GSCM), attracting global attention from scholars, companies, and policymakers. Companies frequently search for new ideas and strategies to assist them in realizing sustainable development. Because of the speculative character of human opinions, supplier selection frequently involves unreliable data, and the interval-valued Pythagorean fuzzy soft set (IVPFSS) provides an exceptional capacity to cope with excessive fuzziness, inconsistency, and inexactness throughout the decision-making procedure. The main goal of this study is to develop new operational laws for interval-valued Pythagorean fuzzy soft numbers (IVPFSNs), to create two interaction operators, the interval-valued Pythagorean fuzzy soft interaction weighted average (IVPFSIWA) and the interval-valued Pythagorean fuzzy soft interaction weighted geometric (IVPFSIWG) operators, and to analyze their properties. These operators are highly advantageous in addressing uncertain problems because they consider membership and non-membership values within intervals, providing a superior solution to other methods. Moreover, specialist judgments were aggregated by the multi-criteria group decision-making (MCGDM) technique, supporting the use of interaction aggregation operators (AOs) to regulate the interdependence and fundamental partiality of green supplier assessment aspects. Lastly, a statistical illustration of the proposed method for green supplier selection is presented.
Sample size determination typically relies on a power analysis based on a frequentist conditional approach. The latter can be seen as a particular case of the two-priors approach, which allows one to build four distinct power functions to select the optimal sample size. We revisit this approach when the focus is on testing a single binomial proportion. We consider exact methods and introduce a conservative criterion to account for the typically non-monotonic behavior of the power functions when dealing with discrete data. The main purpose of this paper is to present a Shiny app providing a user-friendly, interactive tool to apply these criteria. The app also provides specific tools to elicit the analysis and design prior distributions, which are the core of the two-priors approach.
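The non-monotonic (sawtooth) power behavior that motivates the conservative criterion is easy to reproduce for an exact one-sided binomial test. A minimal sketch (the parameter values are illustrative, not taken from the paper):

```python
from scipy.stats import binom

def exact_power(n, p0, p1, alpha=0.05):
    """Power of the exact one-sided binomial test of H0: p = p0 vs H1: p > p0.
    Rejects when X >= c, with c the smallest cutoff keeping size <= alpha."""
    c = next(k for k in range(n + 2) if binom.sf(k - 1, n, p0) <= alpha)
    return binom.sf(c - 1, n, p1)

# Power is not monotone in n: the attainable size jumps whenever the critical
# value changes, producing a sawtooth that a conservative criterion must handle.
powers = [round(exact_power(n, 0.5, 0.7), 3) for n in range(20, 26)]
print(powers)
```

Scanning `powers` shows at least one decrease as n grows, which is exactly why a "first n reaching the target power" rule can be misleading for discrete data.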
Background: Psychiatric comorbidities are common in patients with epilepsy, but the reasons for the co-occurrence of psychiatric conditions and epilepsy remain poorly understood. Aim: We aimed to triangulate the relationship between epilepsy and psychiatric conditions to determine the extent and possible origins of these conditions. Methods: Using nationwide Swedish health registries, we quantified the lifetime prevalence of psychiatric disorders in patients with epilepsy. We then used summary data from genome-wide association studies to investigate whether the identified observational associations could be attributed to a shared underlying genetic aetiology, using cross-trait linkage disequilibrium score regression. Finally, we assessed potential bidirectional relationships using two-sample Mendelian randomisation. Results: In a cohort of 7,628,495 individuals, we found that almost half of the 94,435 individuals diagnosed with epilepsy were also diagnosed with a psychiatric condition in their lifetime (adjusted lifetime prevalence, 44.09%; 95% confidence interval (CI) 43.78% to 44.39%). We found evidence for a genetic correlation between epilepsy and some neurodevelopmental and psychiatric conditions. For example, we observed a genetic correlation between epilepsy and attention-deficit/hyperactivity disorder (r_g = 0.18, 95% CI 0.09 to 0.27, p < 0.001), a correlation that was more pronounced in focal epilepsy (r_g = 0.23, 95% CI 0.09 to 0.36, p < 0.001). Findings from Mendelian randomisation using common genetic variants did not support bidirectional effects between epilepsy and neurodevelopmental or psychiatric conditions. Conclusions: Psychiatric comorbidities are common in patients with epilepsy. Genetic correlations may partially explain some comorbidities; however, there is little evidence of a bidirectional relationship between the genetic liability of epilepsy and psychiatric conditions. These findings highlight the need to understand the role of environmental factors or rare genetic variations in the origins of psychiatric comorbidities in epilepsy.
This study used the Topological Weighted Centroid (TWC) method to analyze the coronavirus outbreak in Brazil. The analysis uses only the latitude and longitude information of the capitals with confirmed cases on May 24, 2020 to illustrate the usefulness of TWC, though any date could have been used. There are three types of TWC analysis, TWC-Original, TWC-Frequency, and TWC-Windowing, each type having five associated algorithms, together producing fifteen maps. We focus on TWC-Original to illustrate our approach. Without using any transportation information, the TWC method predicts a network for the COVID-19 outbreak that matches very well with the main radial transportation route network in Brazil.
In this paper, we highlight some recent developments in a new route to evaluating macroeconomic policy effects, which are investigated under the potential outcomes framework. First, the paper begins with a brief introduction to the basic model setup in modern econometric analysis of program evaluation. Second, primary attention goes to causal effect estimation of macroeconomic policy with single time series data, together with some extensions to multiple time series data. Furthermore, we examine the connection of this new approach to traditional macroeconomic models for policy analysis and evaluation. Finally, we conclude by addressing some possible future research directions in statistics and econometrics.
Building a well-off society in an all-round way is the goal put forward at the 16th CPC National Congress for the first two decades of this century. According to the 'Statistical Monitoring Program on Building a Well-off Society' [1], the Institute of Statistical Science of the National Bureau of Statistics of China and local statistics research departments conducted statistical monitoring of the process of building a well-off society in an all-round way from 2000 to 2010, both nationwide and locally. The results show that, over the past decade, under the correct leadership of the CPC Central Committee and the State Council, China succeeded in overcoming the impacts of many unfavorable factors, including the serious international financial crisis, rising production costs, the SARS epidemic, rare snow disasters, earthquakes and landslides, and the European sovereign debt crisis.
The era of big data brings opportunities and challenges for developing new statistical methods and models to evaluate social programs, economic policies, and other interventions. This paper provides a comprehensive review of some recent advances in statistical methodologies and models for evaluating programs with high-dimensional data. In particular, four kinds of methods for making valid statistical inferences about treatment effects in high dimensions are addressed. The first is the so-called doubly robust estimation, which models the outcome regression and propensity score functions simultaneously. The second is the covariate balance method for constructing treatment effect estimators. The third is the sufficient dimension reduction approach for causal inference. The last is the use of machine learning procedures, directly or indirectly, to make statistical inferences about treatment effects. Some of these methods and models are closely related to the de-biased Lasso type methods for high-dimensional regression models in the statistical literature. Finally, some future research topics are also discussed.
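The doubly robust construction mentioned first combines an outcome-regression model and a propensity-score model in one estimator (the AIPW form). A minimal sketch on simulated confounded data; the data-generating process and model choices are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 3))
# Treatment assignment depends on covariates (confounding); true effect is 2.0.
p = 1 / (1 + np.exp(-X[:, 0]))
D = rng.binomial(1, p)
Y = 2.0 * D + X[:, 0] + rng.normal(size=n)

# Doubly robust (AIPW): model both the propensity score and the outcome
# regressions, then combine; consistent if either model is correct.
ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]
mu1 = LinearRegression().fit(X[D == 1], Y[D == 1]).predict(X)
mu0 = LinearRegression().fit(X[D == 0], Y[D == 0]).predict(X)
ate = np.mean(mu1 - mu0 + D * (Y - mu1) / ps - (1 - D) * (Y - mu0) / (1 - ps))
print(round(ate, 2))  # close to the true effect of 2.0
```

In the high-dimensional settings the review discusses, the two working models above would be replaced by regularized (e.g. Lasso-type) fits with cross-fitting.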
Proposing new statistical distributions that are more flexible than existing ones has become a recent trend in the practice of distribution theory. Actuaries often search for new and appropriate statistical models to address data related to financial and risk management problems. In the present study, an extension of the Lomax distribution is proposed using the approach of the weighted T-X family of distributions. The mathematical properties, along with a characterization of the new model via truncated moments, are derived. The model parameters are estimated via the maximum likelihood estimation method. A brief Monte Carlo simulation study is conducted to assess the performance of the parameter estimates. An application to medical care insurance data is provided to illustrate the potential of the newly proposed extension of the Lomax distribution. The proposed model is compared with (i) the two-parameter Lomax distribution, (ii) three-parameter models, namely the half logistic Lomax and exponentiated Lomax distributions, and (iii) a four-parameter model, the Kumaraswamy Lomax distribution. The statistical analysis indicates that the proposed model performs better than the competing models in analyzing data in the financial and actuarial sciences.
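Maximum likelihood estimation for the baseline two-parameter Lomax model can be sketched with scipy; the same route applies to the extended model once its density is coded. The true parameter values below are illustrative:

```python
import numpy as np
from scipy.stats import lomax

rng = np.random.default_rng(2)
# Simulate from a two-parameter Lomax (shape alpha, scale lambda) and refit
# by maximum likelihood -- the estimation route used for the extended model too.
alpha_true, scale_true = 3.0, 2.0
data = lomax.rvs(alpha_true, scale=scale_true, size=5000, random_state=rng)

# Fix the location at 0 so only shape and scale are estimated:
alpha_hat, loc_hat, scale_hat = lomax.fit(data, floc=0)
print(round(alpha_hat, 2), round(scale_hat, 2))
```

For a new T-X-type extension, one would instead write the extended log-likelihood explicitly and maximize it with `scipy.optimize.minimize`, but the fitting logic is the same.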
Although hierarchical correlated data are increasingly available and are being used in evidence-based medical practice and health policy decision making, there is a lack of information about the strengths and weaknesses of the methods for analyzing such data. In this paper, we describe the use of hierarchical data in a family study of alcohol abuse conducted in Edmonton, Canada, that attempted to determine whether alcohol abuse in probands is associated with abuse in their first-degree relatives. We review three methods of analyzing discrete hierarchical data that account for correlations among the relatives. We conclude that the best analytic choice for typical correlated discrete hierarchical data is nonlinear mixed-effects modeling using a likelihood-based approach or multilevel (hierarchical) modeling using a quasi-likelihood approach, especially when dealing with heterogeneous patient data.
We consider matrix integrable fifth-order mKdV equations via a kind of group reduction of the Ablowitz–Kaup–Newell–Segur matrix spectral problems. Based on properties of the eigenvalue and adjoint eigenvalue problems, we solve the corresponding Riemann–Hilbert problems, where eigenvalues may equal adjoint eigenvalues, and construct their soliton solutions in the case of zero reflection coefficients. Illustrative examples of scalar and two-component integrable fifth-order mKdV equations are given.
Dear Editor, this letter presents a coverage optimization algorithm for underwater acoustic sensor networks (UASNs) based on the Dijkstra method. Due to the particularities of the underwater environment, the multipath effect is pronounced and the channel is easily disturbed, resulting in greater node energy consumption. Once the energy is exhausted, the network's transmission stability and connectivity will be affected.
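The Dijkstra method underlying the algorithm is standard shortest-path computation. A minimal sketch on a toy network; the energy-cost reading of the edge weights and the four-node graph are illustrative assumptions, not the letter's actual network model:

```python
import heapq

def dijkstra(graph, source):
    """Least-cost paths from source; graph maps node -> [(neighbor, weight)].
    In a UASN setting, weights could model per-link energy cost (an assumption)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy network of four sensor nodes:
g = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 2.0), ("D", 5.0)], "C": [("D", 1.0)]}
print(dijkstra(g, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 3.0, 'D': 4.0}
```

Routing along such least-cost paths is one common way to limit node energy consumption, which is the failure mode the letter highlights.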
Decision-theoretic interval estimation requires the use of loss functions that, typically, take into account the size and the coverage of the sets. Here we consider the class of monotone loss functions that, under quite general conditions, guarantee Bayesian optimality of highest posterior probability sets. We focus on three specific families of monotone losses, namely the linear, the exponential, and the rational losses, which differ in the way the sizes of the sets are penalized. Within the standard yet important setup of a normal model we propose: 1) an optimality analysis, to compare the solutions yielded by the alternative classes of losses; and 2) a regret analysis, to evaluate the additional loss of standard non-optimal intervals of fixed credibility. The article uses an application to a clinical trial as an illustrative example.
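For a normal posterior, the highest posterior probability set of a given credibility is the symmetric interval about the posterior mean, which is the set the monotone losses above render Bayes-optimal. A small sketch with illustrative posterior parameters (the numbers are hypothetical, not from the clinical-trial example):

```python
from scipy.stats import norm

# Illustrative normal posterior and credibility level:
post_mean, post_sd, cred = 1.2, 0.5, 0.95

# For a unimodal symmetric density, the HPD set of credibility `cred`
# coincides with the equal-tailed interval, symmetric about the mean.
lo, hi = norm.interval(cred, loc=post_mean, scale=post_sd)
print(round(lo, 3), round(hi, 3))  # 0.22 2.18, symmetric about 1.2
```

A regret analysis in this setup compares the posterior expected loss of such a fixed-credibility interval against the interval that minimizes the chosen monotone loss directly.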
The purpose of this paper is to construct near-vector spaces using a result by Van der Walt, with Z_p for p a prime as the underlying near-field. There are two notions of near-vector spaces; we focus on those studied by André [1]. These near-vector spaces have recently proven to be very useful in finite linear games. We discuss the construction and its properties, give examples of these near-vector spaces, and give an application in finite linear games.
Background: To systematically summarize and categorize the Chinese herbal medicines in the domestic traditional Chinese medicine (TCM) literature on type 2 diabetes mellitus (T2DM), we mine TCM data for relationships among herbs and provide a reference for future practitioners and researchers. Methods: Taking randomized controlled trials on the treatment of T2DM in TCM as the research theme, we searched for full-text literature published between 1990 and 2020 in three major clinical databases, CNKI, Wan Fang, and VIP. We then conducted frequency statistics, cluster analysis, association rule extraction, and principal component analysis based on a corpus of medical academic terms extracted from 1116 research articles. Results: The most frequently used herb is Astragali Radix, and the most commonly used two-herb combination in T2DM treatment consisted of Coptidis Rhizoma and Moutan Cortex. Moutan Cortex, Alismatis Rhizoma, and Dioscoreae Rhizoma formed the most frequently used three-herb combination. We found a "lung-liver-kidney" model and confirmed the value of classical meridian tropism theory and pattern identification. The treatment mainly aims to tonify deficiency and clear heat, while also considering promoting water infiltration, draining dampness, activating blood circulation, and resolving stasis. Conclusion: This study provides an in-depth perspective on TCM medication rules for T2DM and offers practitioners and researchers valuable information about the current status and frontier trends of TCM research on T2DM in terms of diagnosis and treatment.
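The frequency statistics and association-rule quantities underlying such mining reduce to support and confidence counts over prescriptions. A minimal sketch on a toy, hypothetical set of prescriptions (not the study's corpus):

```python
from collections import Counter
from itertools import combinations

# Toy prescriptions (hypothetical herb lists, for illustration only):
prescriptions = [
    {"Astragali Radix", "Coptidis Rhizoma", "Moutan Cortex"},
    {"Astragali Radix", "Coptidis Rhizoma"},
    {"Coptidis Rhizoma", "Moutan Cortex", "Alismatis Rhizoma"},
    {"Astragali Radix", "Alismatis Rhizoma"},
]

n = len(prescriptions)
# Single-herb frequencies and pairwise co-occurrence counts:
freq = Counter(h for p in prescriptions for h in p)
pair = Counter(frozenset(c) for p in prescriptions for c in combinations(sorted(p), 2))

# Association rule "Coptidis Rhizoma -> Moutan Cortex":
a, b = "Coptidis Rhizoma", "Moutan Cortex"
support = pair[frozenset((a, b))] / n       # fraction of prescriptions with both
confidence = pair[frozenset((a, b))] / freq[a]  # of those with a, fraction with b
print(round(support, 3), round(confidence, 3))  # 0.5 0.667
```

Scaling the same counts over the 1116-article corpus, and thresholding on support and confidence, yields the herb-combination rules reported in the Results.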
Funding (VSL study): Supported by the National Natural Science Foundation of China [Grant No. 71773061].
Funding (ore production forecasting study): This work was supported by the Pilot Seed Grant (Grant No. RES0049944) and the Collaborative Research Project (Grant No. RES0043251) from the University of Alberta.
Funding (green supplier selection study): Funded by King Saud University, Riyadh, Saudi Arabia.
基金the National Institutes of Health(1R01NS107607-01A1)Erik and Edith Fernstrom Foundation for Medical Research(2020-00321)+5 种基金Karolinska Institutet(2020-00160,2020-01172)the Swedish Society for Medical Research(RM21-0005)This study was also supported by the NIHR Biomedical Research Centre at the University of Bristol and University Hospitals Bristol and the Weston NHS Foundation TrustThe Medical Research Council(MRC)and the University of Bristol supported the MRC Integrative Epidemiology Unit(MC_UU_00011/1)NMD was supported by the Norwegian Research Council(grant number 295989)The Swedish Research Council(523-2010-1052)supports the(Psychiatry Sweden)register linkage.
文摘Background Psychiatric comorbidities are common in patients with epilepsy.Reasons for the co-occurrence of psychiatric conditions and epilepsy remain poorly understood.Aim We aimed to triangulate the relationship between epilepsy and psychiatric conditions to determine the extent and possible origins of these conditions.Methods Using nationwide Swedish health registries,we quantified the lifetime prevalence of psychiatric disorders in patients with epilepsy.We then used summarydata from genome-wide association studies to investigate whether the identified observational associations could be attributed to a shared underlying genetic aetiology using cross-trait linkage disequilibrium score regression.Finally,we assessed the potential bidirectional relationships using two-sample Mendelian randomisation.Results In a cohort of 7628495 individuals,we found that almost half of the 94435 individuals diagnosed with epilepsy were also diagnosed with a psychiatric condition in their lifetime(adjusted lifetime prevalence,44.09%;95%confidence interval(Cl)43.78%to 44.39%).We found evidence for a genetic correlation between epilepsy and some neurodevelopmental and psychiatric conditions.For example,we observed a genetic correlation between epilepsy and attention-deficit/hyperactivity disorder(r,=0.18,95%Cl 0.09 to 0.27,p<0.001)—a correlation that was more pronounced in focal epilepsy(r=0.23,95%CI 0.09 to 0.36,p<0.001).Findings from Mendelian randomisation using common genetic variants did not support bidirectional effects between epilepsy and neurodevelopmental or psychiatric conditions.Conclusions Psychiatric comorbidities are common in patients with epilepsy.Genetic correlations may partially explain some comorbidities;however,there is little evidence of a bidirectional relationship between the genetic liability of epilepsy and psychiatric conditions.These findings highlight the need to understand the role of environmental factors or rare genetic variations in the origins of 
psychiatric comorbidities in epilepsy.
基金the National Natural Science Foundation of China(71631004,Key Project)the National Science Fund for Distinguished Young Scholars(71625001)+2 种基金the Basic Scientific Center Project of National Science Foundation of China:Econometrics and Quantitative Policy Evaluation(71988101)the Science Foundation of Ministry of Education of China(19YJA910003)China Scholarship Council Funded Project(201806315045).
文摘In this paper,we highlight some recent developments of a new route to evaluate macroeconomic policy effects,which are investigated under the framework with potential outcomes.First,this paper begins with a brief introduction of the basic model setup in modern econometric analysis of program evaluation.Secondly,primary attention goes to the focus on causal effect estimation of macroeconomic policy with single time series data together with some extensions to multiple time series data.Furthermore,we examine the connection of this new approach to traditional macroeconomic models for policy analysis and evaluation.Finally,we conclude by addressing some possible future research directions in statistics and econometrics.
Funding: Supported by the National Natural Science Foundation of China (71631004, 72033008); the National Science Foundation for Distinguished Young Scholars (71625001); and the Science Foundation of the Ministry of Education of China (19YJA910003).
Abstract: The era of big data brings opportunities and challenges for developing new statistical methods and models to evaluate social programs, economic policies, and interventions. This paper provides a comprehensive review of some recent advances in statistical methodologies and models for evaluating programs with high-dimensional data. In particular, four kinds of methods for making valid statistical inferences for treatment effects in high dimensions are addressed. The first is the so-called doubly robust type of estimation, which models the outcome regression and propensity score functions simultaneously. The second is the covariate balance method for constructing treatment effect estimators. The third is the sufficient dimension reduction approach for causal inference. The last is the use of machine learning procedures, directly or indirectly, to make statistical inferences about treatment effects. Some of these methods and models are closely related to the de-biased Lasso type methods for high-dimensional regression models in the statistical literature. Finally, some future research topics are also discussed.
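A minimal sketch of the doubly robust (AIPW) score mentioned first in the abstract. The data are simulated, and the nuisance functions are plugged in at their true values purely for clarity; in practice both would be fitted, possibly with the high-dimensional methods the paper reviews:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Simulated data: one confounder x drives both treatment and outcome.
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))             # true propensity score
d = rng.binomial(1, p)
y = 2.0 * d + x + rng.normal(size=n)     # true treatment effect = 2.0

# Stand-ins for fitted nuisance models (true functions used here for clarity):
e_hat = p                                 # propensity model
m1_hat = 2.0 + x                          # outcome regression under treatment
m0_hat = x                                # outcome regression under control

# AIPW score: consistent if EITHER nuisance model is correctly specified.
psi = (m1_hat - m0_hat
       + d * (y - m1_hat) / e_hat
       - (1 - d) * (y - m0_hat) / (1 - e_hat))
ate_dr = psi.mean()
print(round(ate_dr, 2))
```

The "double robustness" lies in the structure of `psi`: the augmentation terms correct the plug-in regression estimate using inverse-propensity-weighted residuals.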
Abstract: Proposing new statistical distributions that are more flexible than existing ones has become a recent trend in the practice of distribution theory. Actuaries often search for new and appropriate statistical models to address data related to financial and risk management problems. In the present study, an extension of the Lomax distribution is proposed using the approach of the weighted T-X family of distributions. The mathematical properties, along with a characterization of the new model via truncated moments, are derived. The model parameters are estimated via the maximum likelihood estimation method. A brief Monte Carlo simulation study is conducted to assess the performance of the parameter estimates. An application to medical care insurance data is provided to illustrate the potential of the newly proposed extension of the Lomax distribution. The proposed model is compared with (i) the two-parameter Lomax distribution, (ii) the three-parameter half logistic Lomax and exponentiated Lomax distributions, and (iii) the four-parameter Kumaraswamy Lomax distribution. The statistical analysis indicates that the proposed model performs better than the competing models in analyzing data in financial and actuarial sciences.
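The paper's weighted T-X extension is not reproduced here; as a hedged sketch of the maximum likelihood step on the baseline model, the following fits the ordinary two-parameter Lomax to simulated heavy-tailed "claims" with SciPy (the sample size and true parameters are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated heavy-tailed claim sizes from a two-parameter Lomax (shape c, scale s).
c_true, s_true = 3.0, 2.0
data = stats.lomax.rvs(c_true, scale=s_true, size=20_000, random_state=rng)

# Maximum likelihood fit; location fixed at 0, matching the standard parameterization.
c_hat, loc, s_hat = stats.lomax.fit(data, floc=0)
print(round(c_hat, 1), round(s_hat, 1))
```

The same workflow (simulate, fit, compare estimates to the truth) is essentially what the abstract's Monte Carlo study does for the extended model.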
Abstract: Although hierarchical correlated data are increasingly available and are being used in evidence-based medical practice and health policy decision making, there is a lack of information about the strengths and weaknesses of methods for analyzing such data. In this paper, we describe the use of hierarchical data in a family study of alcohol abuse conducted in Edmonton, Canada, that attempted to determine whether alcohol abuse in probands is associated with abuse in their first-degree relatives. We review three methods of analyzing discrete hierarchical data that account for correlations among the relatives. We conclude that the best analytic choice for typical correlated discrete hierarchical data is nonlinear mixed effects modeling using a likelihood-based approach, or multilevel (hierarchical) modeling using a quasi-likelihood approach, especially when dealing with heterogeneous patient data.
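Why the within-family correlation matters can be shown with a small simulation. The data below are entirely hypothetical: a shared family effect induces correlation among relatives' binary outcomes, and a classical ANOVA-type estimator recovers a clearly positive intraclass correlation (ICC), which naive independent-data analyses would ignore:

```python
import numpy as np

rng = np.random.default_rng(3)
n_families, k = 2_000, 4          # k first-degree relatives per family

# Shared family effect induces correlation among relatives' latent traits.
family_effect = rng.normal(scale=1.0, size=(n_families, 1))
latent = family_effect + rng.normal(size=(n_families, k))
abuse = (latent > 0.5).astype(float)   # binary "abuse" indicator per relative

# One-way ANOVA estimator of the intraclass correlation.
msw = abuse.var(axis=1, ddof=1).mean()             # within-family mean square
msb = k * abuse.mean(axis=1).var(ddof=1)           # between-family mean square
icc = (msb - msw) / (msb + (k - 1) * msw)
print(round(icc, 2))
```

A substantially positive ICC is precisely the situation in which mixed effects or quasi-likelihood multilevel models, rather than ordinary logistic regression, give valid standard errors.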
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 11975145, 11972291, and 51771083); the Ministry of Science and Technology of China (Grant No. G2021016032L); and the Natural Science Foundation for Colleges and Universities in Jiangsu Province, China (Grant No. 17KJB110020).
Abstract: We consider matrix integrable fifth-order mKdV equations via a kind of group reduction of the Ablowitz–Kaup–Newell–Segur matrix spectral problems. Based on properties of the eigenvalue and adjoint eigenvalue problems, we solve the corresponding Riemann–Hilbert problems, where eigenvalues may equal adjoint eigenvalues, and construct their soliton solutions when the reflection coefficients vanish. Illustrative examples of scalar and two-component integrable fifth-order mKdV equations are given.
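For orientation, one commonly used normalization of the scalar fifth-order mKdV equation is shown below; the coefficients depend on scaling conventions and may differ from those in the paper's matrix formulation:

```latex
\[
u_t = u_{xxxxx} + 10\,u^{2}u_{xxx} + 40\,u\,u_x u_{xx} + 10\,u_x^{3} + 30\,u^{4}u_x .
\]
```

The matrix versions studied in the paper reduce to equations of this type in the scalar case.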
Funding: Supported by the Natural Science Foundation of Shandong Province (ZR2022MF247).
Abstract: Dear Editor, this letter presents a coverage optimization algorithm for underwater acoustic sensor networks (UASNs) based on the Dijkstra method. Owing to the peculiarities of the underwater environment, the multipath effect is pronounced and the channel is easily disturbed, resulting in higher node energy consumption. Once a node's energy is exhausted, network transmission stability and network connectivity are affected.
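The Dijkstra method the letter builds on can be sketched in a few lines. The toy network, node names, and per-hop "energy" costs below are hypothetical stand-ins, not the letter's actual UASN model:

```python
import heapq

def dijkstra(adj, src):
    """Least-cost paths from src; adj maps node -> [(neighbor, cost), ...].
    In a UASN setting the edge cost could model per-hop transmission energy."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry, skip
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy sensor network with hypothetical energy costs per hop.
net = {"s": [("a", 2.0), ("b", 5.0)],
       "a": [("b", 1.0), ("t", 6.0)],
       "b": [("t", 2.0)]}
print(dijkstra(net, "s"))   # {'s': 0.0, 'a': 2.0, 'b': 3.0, 't': 5.0}
```

Routing along such least-energy paths is one way a coverage algorithm can delay node exhaustion and preserve connectivity.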
Abstract: Decision-theoretic interval estimation requires the use of loss functions that typically take into account the size and the coverage of the sets. We consider the class of monotone loss functions that, under quite general conditions, guarantee Bayesian optimality of highest posterior probability sets. We focus on three specific families of monotone losses, namely the linear, exponential, and rational losses, which differ in the way the sizes of the sets are penalized. Within the standard yet important setup of a normal model, we propose: 1) an optimality analysis, to compare the solutions yielded by the alternative classes of losses; and 2) a regret analysis, to evaluate the additional loss of standard non-optimal intervals of fixed credibility. The article uses an application to a clinical trial as an illustrative example.
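In the normal setup the abstract works in, the highest-posterior-probability set is particularly simple: the posterior is symmetric and unimodal, so the HPD set coincides with the central credible interval. A minimal sketch with hypothetical "trial" measurements and an arbitrary conjugate prior:

```python
import numpy as np
from scipy import stats

# Conjugate normal model: known data sd, normal prior on the mean.
y = np.array([1.2, 0.8, 1.5, 1.1, 0.9])    # hypothetical trial measurements
sigma, mu0, tau0 = 1.0, 0.0, 10.0           # data sd, prior mean, prior sd

# Standard conjugate update for the posterior of the mean.
prec = 1 / tau0**2 + len(y) / sigma**2
mu_post = (mu0 / tau0**2 + y.sum() / sigma**2) / prec
sd_post = prec ** -0.5

# For a symmetric unimodal posterior, the 95% HPD set IS the central interval.
lo, hi = stats.norm.interval(0.95, loc=mu_post, scale=sd_post)
print(round(lo, 2), round(hi, 2))
```

The paper's monotone losses then govern how such an interval's size is traded off against its posterior probability; the fixed-credibility interval above is exactly the kind of "standard" set whose regret the paper evaluates.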
Abstract: The purpose of this paper is to construct near-vector spaces using a result by Van der Walt, with Z_p for p a prime as the underlying near-field. There are two notions of near-vector spaces; we focus on those studied by André [1]. These near-vector spaces have recently proven to be very useful in finite linear games. We discuss the construction and properties, give examples of these near-vector spaces, and present an application in finite linear games.
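The full Van der Walt construction is not reproduced here; the sketch below only illustrates the arithmetic of the underlying scalar domain, showing that every nonzero element of Z_p is invertible (so Z_p is a field, and in particular a near-field):

```python
p = 7  # any prime gives the finite (near-)field Z_p

def inv(a, p=p):
    """Multiplicative inverse in Z_p via Fermat's little theorem: a^(p-2) mod p."""
    if a % p == 0:
        raise ZeroDivisionError("0 has no inverse in Z_p")
    return pow(a, p - 2, p)

# Every nonzero element of Z_7 has a multiplicative inverse.
inverses = {a: inv(a) for a in range(1, p)}
print(inverses)   # {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6}
```

These scalars are what act on the additive groups in the André-style near-vector spaces the paper constructs.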
Funding: Supported by China's National Key R&D Program (No. 2019YFC1709801).
Abstract: Background: To systematically summarize and categorize the Chinese herbal medicines in the domestic traditional Chinese medicine (TCM) literature on type 2 diabetes mellitus (T2DM), we mine TCM data for relationships among herbs and provide a reference for future practitioners and researchers. Methods: Taking randomized controlled trials on the treatment of T2DM with TCM as the research theme, we searched for full-text literature in three major clinical databases, CNKI, Wan Fang, and VIP, published between 1990 and 2020. We then conducted frequency statistics, cluster analysis, association rule extraction, and principal component analysis on a corpus of medical academic terms extracted from 1116 research articles. Results: The most frequently used herb was Astragali Radix, and the most commonly used two-herb combination in T2DM treatment consisted of Coptidis Rhizoma and Moutan Cortex. Moutan Cortex, Alismatis Rhizoma, and Dioscoreae Rhizoma formed the most frequently used three-herb combination. We found a 'lung', 'liver', and 'kidney' model and confirmed the value of classical meridian tropism theory and pattern identification. Treatment mainly aims to replenish deficiency and clear heat, while also considering water infiltration, dampness resolution, blood circulation, and stasis removal. Conclusion: This study provides an in-depth perspective on TCM medication rules for T2DM and offers practitioners and researchers valuable information about the current status and frontier trends of TCM research on T2DM in terms of diagnosis and treatment.
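The frequency-statistics step of such a mining pipeline is straightforward to sketch. The three toy "prescriptions" below are hypothetical and do not come from the paper's corpus; the herb names are borrowed from the abstract only to make the example readable:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-prescription herb lists (illustration only, not the study data).
prescriptions = [
    ["Astragali Radix", "Coptidis Rhizoma", "Moutan Cortex"],
    ["Coptidis Rhizoma", "Moutan Cortex", "Alismatis Rhizoma"],
    ["Astragali Radix", "Moutan Cortex", "Dioscoreae Rhizoma"],
]

# Frequency statistics: single-herb counts and co-occurring pair counts.
singles = Counter(h for rx in prescriptions for h in rx)
pairs = Counter(pair for rx in prescriptions
                for pair in combinations(sorted(rx), 2))

print(singles.most_common(1))
print(pairs.most_common(1))
```

Association rule mining then builds on exactly these counts: the support of a pair is its count divided by the number of prescriptions, and rules are ranked by support, confidence, and lift.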