In this paper, a deep learning-based method is proposed for crowd counting problems. Specifically, by utilizing the convolution kernel density map, the ground truth is generated dynamically to enhance the feature-extracting ability of the generator model. Meanwhile, the “cross stage partial” module is integrated into the congested scene recognition network (CSRNet) to obtain a lightweight network model. In addition, to compensate for the accuracy drop owing to the lightweight model, we take advantage of “structured knowledge transfer” to train the model in an end-to-end manner; this aims to accelerate the fitting speed and enhance the learning ability of the student model. A crowd-counting system solution for edge computing is also proposed and implemented on an embedded device equipped with a neural processing unit. Simulations demonstrate the performance improvement of the proposed solution in terms of model size, processing speed and accuracy. The performance on the Venice dataset shows that the mean absolute error (MAE) and the root mean squared error (RMSE) of our model drop by 32.63% and 39.18% compared with CSRNet. Meanwhile, the performance on the ShanghaiTech PartB dataset reveals that the MAE and the RMSE of our model are close to those of CSRNet. Therefore, we provide a novel embedded-platform system scheme for public safety pre-warning applications.
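For reference on how the reported error metrics are computed in crowd counting, here is a minimal Python sketch: the predicted count for an image is the sum of its density map, and MAE/RMSE are taken over the test set. The array sizes and count values are illustrative placeholders, not values from the paper.

```python
import numpy as np

def count_from_density(density_map):
    """A predicted crowd count is the integral (sum) of a density map."""
    return float(np.sum(density_map))

def mae_rmse(pred_counts, gt_counts):
    """Mean absolute error and root mean squared error over a test set."""
    pred = np.asarray(pred_counts, dtype=float)
    gt = np.asarray(gt_counts, dtype=float)
    mae = np.mean(np.abs(pred - gt))
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return mae, rmse

# Illustrative example: density maps for three test images.
rng = np.random.default_rng(0)
density_maps = [rng.random((64, 64)) * 0.01 for _ in range(3)]
gt_counts = [20.0, 20.5, 21.0]
pred_counts = [count_from_density(d) for d in density_maps]
print(mae_rmse(pred_counts, gt_counts))
```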
In the process of aquaculture, monitoring the number of fish bait particles is of great significance for improving the growth and welfare of fish. Although counting methods based on convolutional neural networks (CNNs) achieve good accuracy and applicability, they have a large number of parameters and a high computational cost, which limits their deployment on resource-constrained hardware devices. To solve these problems, this paper proposes a lightweight bait particle counting method based on shift quantization and model pruning strategies. Firstly, we apply corresponding lightweight strategies to different layers to flexibly balance the counting accuracy and performance of the model. To deeply lighten the counting model, the redundant and less informative weights are removed through a combination of model quantization and pruning. The experimental results show that the compression rate is nearly 9 times. Finally, the quantization candidate values are refined by introducing a power-of-two addition term, which improves the match with the weight distribution. Analysis of the experimental results shows that the counting loss at 3 bit is reduced by 35.31%. In summary, the lightweight bait particle counting model proposed in this paper achieves lossless counting accuracy and reduces the storage and computational overhead required for running convolutional neural networks.
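The pruning and shift-quantization steps can be sketched as follows. This is only an illustration of the general idea, assuming global magnitude pruning and interpreting the “power-of-two addition term” as a second additive power-of-two refinement of the residual, which may differ from the paper's exact scheme; all thresholds and exponent ranges are assumptions.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights (illustrative global pruning)."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def pow2_quantize(w, min_exp=-8, max_exp=0):
    """Round each nonzero weight to the nearest signed power of two (shift quantization)."""
    out = np.zeros_like(w)
    nz = w != 0
    exp = np.clip(np.round(np.log2(np.abs(w[nz]))), min_exp, max_exp)
    out[nz] = np.sign(w[nz]) * 2.0 ** exp
    return out

def pow2_quantize_two_term(w, min_exp=-8, max_exp=0):
    """Refine with an additive power-of-two term on the residual (illustrative)."""
    q1 = pow2_quantize(w, min_exp, max_exp)
    q2 = pow2_quantize(w - q1, min_exp, max_exp)
    return q1 + q2

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000)
wp = magnitude_prune(w, sparsity=0.5)
# The two-term refinement reduces quantization error relative to single-term shift quantization.
print(np.mean((pow2_quantize(wp) - wp) ** 2), np.mean((pow2_quantize_two_term(wp) - wp) ** 2))
```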
Machine learning, especially deep learning, has been highly successful in data-intensive applications; however, the performance of these models drops significantly when the amount of training data does not meet the requirement. This leads to the so-called few-shot learning (FSL) problem, which requires the model to generalize rapidly to new tasks containing only a few labeled samples. In this paper, we propose a new deep model, called deep convolutional meta-learning networks, to address the low generalization performance under limited data for bearing fault diagnosis. The essence of our approach is to learn a base model from multiple learning tasks using a support dataset and fine-tune the learnt parameters using few-shot tasks, so that the model can adapt to a new learning task based on limited training data. The proposed method was compared to several FSL methods, including methods with and without pre-training of the embedding mapping, and methods that fine-tune the classifier or the whole model using the few-shot data from the target domain. The comparisons are carried out on 1-shot and 10-shot tasks using the Case Western Reserve University bearing dataset and a cylindrical roller bearing dataset. The experimental results illustrate that our method performs well on bearing fault diagnosis across various few-shot conditions. In addition, we found that the pre-training process does not always improve the prediction accuracy.
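To make the N-way, k-shot task construction concrete, here is a minimal sketch of episode sampling from a labelled feature set; the class counts, feature dimension, and the helper name sample_episode are illustrative assumptions, not taken from the paper or the bearing datasets.

```python
import numpy as np

def sample_episode(features, labels, n_way=3, k_shot=1, q_queries=5, rng=None):
    """Sample one N-way, k-shot episode: a support set and a query set."""
    if rng is None:
        rng = np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.where(labels == c)[0])
        support_x.append(features[idx[:k_shot]])
        support_y += [new_label] * k_shot
        query_x.append(features[idx[k_shot:k_shot + q_queries]])
        query_y += [new_label] * q_queries
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))

# Illustrative data: 10 fault classes, 20 samples each, 8-dimensional features.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))
labels = np.repeat(np.arange(10), 20)
sx, sy, qx, qy = sample_episode(features, labels, n_way=3, k_shot=1, rng=rng)
print(sx.shape, qx.shape)
```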
Different from the usual full counting statistics theoretical work, which focuses on computing higher-order cumulants from the cumulant generating function in electrical structures, a Monte Carlo simulation of a single-barrier structure is performed to obtain time series for two types of widely applicable exclusion models: the counter-flows model and the tunnel model. Using high-order spectrum analysis in Matlab, the Monte Carlo method is validated by the first four cumulants extracted from the time series, which agree with those obtained from the cumulant generating function. After comparing the counter-flows model with the tunnel model in a single-barrier structure, it is found that the essential difference between them is that the Pauli principle holds strictly in the former, whereas it enters only statistically in the latter.
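The first four cumulants mentioned above can be extracted directly from a simulated count time series; below is a minimal numpy sketch using the standard relations k1 = mean, k2 = mu2, k3 = mu3, k4 = mu4 - 3*mu2^2, with a binomial series standing in for the simulated transport data.

```python
import numpy as np

def first_four_cumulants(counts):
    """First four cumulants of a count time series (kappa4 = mu4 - 3*mu2**2)."""
    x = np.asarray(counts, dtype=float)
    mu = x.mean()
    d = x - mu
    mu2, mu3, mu4 = (d ** 2).mean(), (d ** 3).mean(), (d ** 4).mean()
    return mu, mu2, mu3, mu4 - 3.0 * mu2 ** 2

# Illustrative time series: counts per observation window (stand-in for simulated transport data).
rng = np.random.default_rng(0)
counts = rng.binomial(n=100, p=0.3, size=100000)
print(first_four_cumulants(counts))
```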
In order to improve crash occurrence models to account for the influence of various contributing factors, a conditional autoregressive negative binomial (CAR-NB) model is employed to allow for overdispersion (tackled by the NB component), unobserved heterogeneity and spatial autocorrelation (captured by the CAR process), using Markov chain Monte Carlo methods and the Gibbs sampler. Statistical tests suggest that the CAR-NB model is preferred over the CAR-Poisson, NB, zero-inflated Poisson, and zero-inflated NB models, due to its lower prediction errors and more robust parameter inference. The study results show that crash frequency and fatalities are positively associated with the number of lanes, curve length, annual average daily traffic (AADT) per lane, as well as rainfall. Speed limit and the distances to the nearest hospitals have negative associations with segment-based crash counts but positive associations with fatality counts, presumably as a result of worsened collision impacts at higher speed and time lost while transporting crash victims.
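As context for the count-model comparison, the sketch below fits a plain negative binomial regression to simulated segment-level data with statsmodels; it is a baseline illustration only, not the CAR-NB specification, and the covariates, coefficients, and dispersion value are invented.

```python
import numpy as np
import statsmodels.api as sm

# Simulated road-segment data (illustrative only): AADT per lane, number of lanes, speed limit.
rng = np.random.default_rng(0)
n = 500
aadt = rng.uniform(2.0, 20.0, n)            # thousands of vehicles per day per lane
lanes = rng.integers(1, 5, n)
speed = rng.uniform(60.0, 120.0, n)
mu = np.exp(-1.0 + 0.08 * aadt + 0.3 * lanes - 0.01 * speed)
crashes = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))   # NB counts with mean mu

X = sm.add_constant(np.column_stack([aadt, lanes, speed]))
nb_fit = sm.GLM(crashes, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb_fit.params)   # the sign of each coefficient gives the direction of association
```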
Objective: In our previous work, we incorporated the complete blood count (CBC) into TNM stage to develop a new prognostic score model, which was validated to improve the prediction efficiency of TNM stage for nasopharyngeal carcinoma (NPC). The purpose of this study was to revalidate the accuracy of the model, and its superiority to TNM stage, using data from a prospective study. Methods: CBC of 249 eligible patients from the 863 Program No. 2006AA02Z4B4 was evaluated. The prognostic index (PI) of each patient was calculated according to the score model. Patients were then divided by PI into three categories: low-, intermediate- and high-risk. The 5-year disease-specific survival (DSS) of the three categories was compared by a log-rank test. The model and TNM stage (7th edition) were compared on efficiency for predicting 5-year DSS, through comparison of the area under the curve (AUC) of their receiver operating characteristic curves. Results: The 5-year DSS of the low-, intermediate- and high-risk patients was 96.0%, 79.1% and 62.2%, respectively. The low- and intermediate-risk patients had better DSS than the high-risk patients (P<0.001 and P<0.005, respectively), and there was a trend of better DSS in the low-risk patients compared with the intermediate-risk patients (P=0.049). The AUC of the model was larger than that of TNM stage (0.726 vs. 0.661, P=0.023). Conclusions: The CBC-based prognostic score model was revalidated to be accurate and superior to TNM stage in predicting 5-year DSS of NPC.
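To illustrate the kind of AUC comparison described (prognostic index versus TNM stage alone), here is a minimal sketch with scikit-learn's roc_auc_score; the cohort, the way the prognostic index combines stage with a CBC-derived score, and all weights are invented for illustration and are not the study's scoring rules.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative cohort: a prognostic index (PI) combining stage with a hypothetical CBC-derived
# score, versus stage alone, scored against 5-year disease-specific death (1 = died of disease).
rng = np.random.default_rng(0)
n = 249
stage = rng.integers(1, 5, n)          # TNM stage group 1-4 (illustrative)
cbc_score = rng.integers(0, 3, n)      # hypothetical CBC-derived score
risk = 0.05 + 0.08 * (stage - 1) + 0.07 * cbc_score
death = rng.random(n) < risk

pi = stage + cbc_score                 # hypothetical prognostic index
print("AUC, PI:   ", roc_auc_score(death, pi))
print("AUC, stage:", roc_auc_score(death, stage))
```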
Survival of HIV/AIDS patients is crucially dependent on comprehensive and targeted medical interventions such as the supply of antiretroviral therapy and monitoring disease progression with CD4 T-cell counts. Statistical modelling approaches are helpful towards this goal. This study aims at developing Bayesian joint models that assume a generalized error distribution (GED) for the longitudinal CD4 data and two accelerated failure time distributions, lognormal and log-logistic, for the survival time of HIV/AIDS patients. Data are obtained from patients under antiretroviral therapy follow-up at Shashemene referral hospital during January 2006-January 2012 and at Bale Robe general hospital during January 2008-March 2015. The Bayesian joint models are defined through latent variables and association parameters, with non-informative prior distributions specified for the model parameters. Simulations are conducted using the Gibbs sampler algorithm implemented in the WinBUGS software. The results of the analyses of the two data sets show that the distributions of the measurement errors of the longitudinal CD4 variable follow the generalized error distribution, with fatter tails than the normal distribution. The Bayesian joint GED log-logistic models fit the data sets better than the lognormal cases. Findings reveal that patients' health can improve over time. Compared to males, female patients gain more CD4 counts. Survival time of a patient is negatively affected by TB infection. Moreover, an increase in the number of opportunistic infections implies a decline in CD4 counts. Patients' age negatively affects the disease marker, with no effect on survival time. Improving weight may improve the survival time of patients. Bayesian joint models with GED and AFT distributions are found to be useful in modelling the longitudinal and survival processes. We therefore recommend generalized error distributions for the measurement errors of longitudinal data under Bayesian joint modelling. Further studies may investigate models with various types of shared random effects and more covariates, along with predictions.
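The "fatter tails than normal" finding refers to the GED shape parameter. The sketch below evaluates the GED log-density in the common textbook parametrization f(x) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|x-mu|/alpha)**beta), which is assumed here rather than taken from the paper, and compares tail mass for shape 2 (the normal case) and a smaller shape.

```python
import numpy as np
from scipy.special import gammaln

def ged_logpdf(x, mu=0.0, alpha=1.0, beta=2.0):
    """Log-density of the generalized error distribution.
    beta = 2 recovers a normal shape; beta < 2 gives fatter tails."""
    x = np.asarray(x, dtype=float)
    return (np.log(beta) - np.log(2.0 * alpha) - gammaln(1.0 / beta)
            - (np.abs(x - mu) / alpha) ** beta)

# Tail comparison at x = 4: the beta = 1.2 case assigns much more log-density far from the mean.
x = 4.0
print(ged_logpdf(x, beta=2.0), ged_logpdf(x, beta=1.2))
```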
Lifecycle data often occur as counts of vital events and are recorded as integers. The purpose of this article is to model fertility behavior based on religious, educational, economic, and occupational characteristics. The responses of groups classified according to these determinants are examined for significant influence on fertility using a Poisson regression model (PRM) based on the National Family Health Survey-3 dataset. The observed and predicted probabilities under the PRM indicate a modal value of two children for the Poisson-modeled data. The dominance of two-child outcomes in the data motivates the authors to adopt a multinomial regression model (MRM) in order to link fertility with various socioeconomic indicators responsible for fertility variation. The choice of explanatory factors is limited by the availability of data. Trends and patterns of preference for birth counts suggest that religion, caste, wealth, female education, and occupation are the dominant factors shaping the observed birth process. Empirical analysis suggests that both models perform similarly on the sample data. However, fitting the MRM with a birth count of two as the comparison category shows improved Akaike information criterion and consistent Akaike information criterion values. The current work contributes to the existing literature by providing more insight into the determinants of Indian fertility using Poisson and multinomial regression models.
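As a sketch of the Poisson-regression building block, the following fits a Poisson GLM with statsmodels and converts the fitted mean into a predicted probability of exactly two children; the covariates (years of female education, a wealth index) and coefficients are simulated stand-ins, not NFHS-3 data.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative household-level data generated from a Poisson model (not the NFHS-3 data).
rng = np.random.default_rng(0)
n = 1000
education = rng.integers(0, 16, n)      # years of female education
wealth = rng.integers(1, 6, n)          # wealth index quintile
mu = np.exp(1.2 - 0.05 * education - 0.05 * wealth)
children = rng.poisson(mu)

X = sm.add_constant(np.column_stack([education, wealth]))
poisson_fit = sm.GLM(children, X, family=sm.families.Poisson()).fit()
print(poisson_fit.params)

# Predicted probability of exactly two children for one covariate profile (constant, 10 years, quintile 3).
mu_hat = float(poisson_fit.predict(np.array([[1.0, 10.0, 3.0]]))[0])
print(mu_hat ** 2 * np.exp(-mu_hat) / 2.0)
```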
Several economists agree that adjustment was essential for African countries during the 1980s. An econometric analysis of a sample of 28 sub-Saharan African countries, based on variables regarded as “representative” of the adjustment objectives, shows that this assertion cannot be completely rejected.
Dark count is one of the inherent noise types in single-photon diodes and may restrict the performance of detectors based on these diodes. To formulate better designs for the peripheral circuits of such diodes, an accurate statistical behavioral model of the dark current must be established. Research has shown that four main mechanisms contribute to the dark count in single-photon avalanche diodes; however, existing dark count models consider only three of them, leading to inaccuracies. To resolve this shortcoming, the dark current caused by carrier diffusion in the neutral region is deduced by multiplying the carrier detection probability by the carrier particle current at the boundary of the depletion layer. A comprehensive dark current model is then constructed by adding the dark current caused by carrier diffusion to that caused by the other three mechanisms. To the best of our knowledge, this is the first dark count simulation model that simultaneously incorporates thermal generation, trap-assisted tunneling, band-to-band tunneling, and carrier diffusion in the neutral regions to evaluate dark count behavior. Comparison between measured data and the simulation results shows that the proposed model is more accurate than other existing models, and the maximum accuracy improvement reaches 31.48% when the excess bias voltage equals 3.5 V and the temperature is 50 °C.
Background: Daily paediatric asthma readmissions within 28 days are a good example of a low-count time series and are not easily amenable to common time series methods used in studies of asthma seasonality and time trends. We sought to model and predict daily trends of childhood asthma readmissions over time in Victoria, Australia. Methods: We used a database of 75,000 childhood asthma admissions from the Department of Health, Victoria, Australia, in 1997-2009. Daily admissions over time were modeled using a semi-parametric Generalized Additive Model (GAM), overall and by sex and age group. Predictions were also estimated using these models. Results: N = 2401 asthma readmissions within 28 days occurred during the study period. Of these, n = 1358 (57%) were boys. Overall, seasonal peaks occurred in winter (30.5%), followed by autumn (28.6%) and then spring (24.6%) (p …
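Seasonal patterns in low daily counts like these are often captured with a smooth or harmonic term inside a Poisson model. The sketch below uses a simple sin/cos harmonic Poisson regression as a stand-in for the semi-parametric GAM, fitted to simulated daily counts rather than the Victorian admissions data; all coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

# Simulated daily readmission counts with an annual cycle (illustrative only).
rng = np.random.default_rng(0)
days = np.arange(4745)                                   # roughly 13 years of daily data
season = 2.0 * np.pi * days / 365.25
mu = np.exp(-0.5 + 0.4 * np.cos(season) + 0.1 * np.sin(season))
y = rng.poisson(mu)

# Harmonic Poisson regression: a simple parametric stand-in for a seasonal smooth term.
X = sm.add_constant(np.column_stack([np.cos(season), np.sin(season)]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)          # recovers the seasonal amplitude terms
pred = fit.predict(X)      # fitted daily trend, usable for prediction
print(pred[:3])
```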
This paper proposes some additional moment conditions for the linear feedback model with predetermined explanatory variables, which was proposed by [1] for dealing with count panel data. The newly proposed moment conditions include those associated with equidispersion, the Negbin I-type model, and stationarity. GMM estimators are constructed incorporating the additional moment conditions. Monte Carlo experiments indicate that the GMM estimators incorporating the additional moment conditions perform well compared to those using only the conventional moment conditions proposed by [2,3].
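For readers unfamiliar with GMM estimation, the following is a minimal two-step GMM sketch on an over-identified toy problem (estimating a mean and variance from three moment conditions); it illustrates only the generic quadratic-form GMM objective and does not use the linear feedback model's moment conditions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy over-identified problem: estimate (mu, sigma2) from
# E[x - mu] = 0, E[(x - mu)^2 - sigma2] = 0, E[(x - mu)^3] = 0 (symmetry assumed).
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=2000)

def moments(theta):
    mu, sigma2 = theta
    return np.column_stack([x - mu, (x - mu) ** 2 - sigma2, (x - mu) ** 3])

def gmm_objective(theta, W):
    gbar = moments(theta).mean(axis=0)
    return gbar @ W @ gbar

# Step 1: identity weighting; Step 2: optimal weighting from the first-step moment covariance.
step1 = minimize(gmm_objective, x0=[0.0, 1.0], args=(np.eye(3),), method="Nelder-Mead")
W_opt = np.linalg.inv(np.cov(moments(step1.x), rowvar=False))
step2 = minimize(gmm_objective, x0=step1.x, args=(W_opt,), method="Nelder-Mead")
print(step2.x)   # close to the true mean 2.0 and variance 2.25
```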
Count data that exhibit over-dispersion (the variance of the counts is larger than the mean) are commonly analyzed using discrete distributions such as the negative binomial, the Poisson inverse Gaussian, and other models. The Poisson is characterized by equality of mean and variance, whereas the negative binomial and the Poisson inverse Gaussian have variance larger than the mean and are therefore more appropriate for over-dispersed count data. As an alternative to these two models, we use the generalized Poisson distribution for group comparisons in the presence of multiple covariates. This problem is known as ANCOVA and has been solved for continuous data. Our objectives were to develop ANCOVA using the generalized Poisson distribution and to compare its goodness of fit to that of nonparametric Generalized Additive Models. We use real-life data to show that the model performs quite satisfactorily when compared to the nonparametric Generalized Additive Models.
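The generalized Poisson distribution referred to above is usually written in Consul's parametrization. The sketch below evaluates its log-pmf and numerically checks the closed-form mean and variance, showing how a positive dispersion parameter produces variance larger than the mean; it is a sketch of the distributional building block, not the paper's ANCOVA fitting code.

```python
import numpy as np
from scipy.special import gammaln

def gen_poisson_logpmf(y, theta, lam):
    """Log-pmf of Consul's generalized Poisson:
    P(Y=y) = theta * (theta + lam*y)**(y-1) * exp(-theta - lam*y) / y!
    Mean = theta/(1-lam); variance = theta/(1-lam)**3, so lam > 0 gives over-dispersion."""
    y = np.asarray(y, dtype=float)
    return (np.log(theta) + (y - 1.0) * np.log(theta + lam * y)
            - theta - lam * y - gammaln(y + 1.0))

# Numerical check of over-dispersion against the closed-form moments.
theta, lam = 2.0, 0.3
y = np.arange(0, 200)
p = np.exp(gen_poisson_logpmf(y, theta, lam))
mean, var = np.sum(y * p), np.sum(y ** 2 * p) - np.sum(y * p) ** 2
print(mean, theta / (1 - lam))          # ~2.857
print(var, theta / (1 - lam) ** 3)      # ~5.831, larger than the mean
```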
Stochastic models are derived to estimate the level of coliform count, in terms of the MPN index, one of the most important water quality characteristics in groundwater, based on a set of water source location and soil characteristics. The study is based on about twenty location and soil characteristics, the majority of which are observed through laboratory analysis of soil and water samples collected from nearly three hundred locations of drinking water sources, wells and bore wells, selected at random from the district of Kasaragod. Water contamination is found to be relatively higher in wells than in bore wells. The study reveals that only 7% of the wells and 40% of the bore wells of the district are within the permissible limit of the WHO drinking water quality standard. The level of contamination is very high in hospital premises and very low in the forest area. Two separate multiple ordinal logistic regression models are developed to predict the level of coliform count, one for wells and the other for bore wells. A significant feature of this study is that, in addition to scientifically establishing the dependence of water quality on the distances from waste disposal areas and septic tanks, it highlights the dependence on two other very significant soil characteristics, soil organic carbon and soil porosity. The models enable prediction of the water quality at a location based on its set of soil and location characteristics. One of the important uses of the model is in fixing safe locations for waste dump areas, septic tanks, wells, etc. in town planning, designing residential layouts, industrial layouts, and hospital/hostel construction. This is the first study to describe groundwater quality in terms of location and soil characteristics.
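To illustrate the modelling approach, here is a minimal ordinal logistic regression sketch, assuming statsmodels' OrderedModel is available; the covariates (distance to septic tank, soil organic carbon, porosity), the three-level coliform class, and all coefficients are simulated placeholders, not the Kasaragod survey data.

```python
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Illustrative well data: distance to septic tank (m), soil organic carbon (%), porosity (%),
# and an ordered coliform-level class 0/1/2 (low/medium/high MPN); all values are simulated.
rng = np.random.default_rng(0)
n = 300
dist_septic = rng.uniform(2.0, 60.0, n)
org_carbon = rng.uniform(0.2, 3.0, n)
porosity = rng.uniform(30.0, 60.0, n)
latent = -0.06 * dist_septic + 0.8 * org_carbon + 0.05 * porosity + rng.logistic(size=n)
level = np.digitize(latent, np.quantile(latent, [0.4, 0.8]))   # ordered classes 0, 1, 2

X = np.column_stack([dist_septic, org_carbon, porosity])       # no constant: thresholds play that role
fit = OrderedModel(level, X, distr="logit").fit(method="bfgs", disp=False)
print(fit.params)   # covariate effects followed by the threshold parameters
```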