Decision forest is a well-known machine learning technique for detection and prediction problems on clinical data. However, traditional decision forest (DF) algorithms have lower classification accuracy and cannot handle high-dimensional feature spaces effectively. In this work, we propose a bootstrap decision forest using penalizing attributes (BFPA) algorithm to predict heart disease with higher accuracy. This work integrates a significance-based attribute selection (SAS) algorithm with the BFPA classifier to improve the performance of the diagnostic system in identifying cardiac illness. The proposed SAS algorithm determines the correlation among attributes and selects the optimal subset of the feature space for the learning and testing processes. BFPA selects the optimal number of learning and testing data points, as well as the density of trees in the forest, to achieve higher prediction accuracy when classifying imbalanced datasets. The effectiveness of the developed classifier is carefully verified on a real-world database (the Heart Disease dataset from the UCI repository) by comparing its performance with many state-of-the-art approaches with respect to accuracy, sensitivity, specificity, precision, and intersection over union (IoU). The empirical results demonstrate that the proposed classification approach outperforms other approaches, with an accuracy, precision, sensitivity, specificity, and IoU of 94.7%, 99.2%, 90.1%, 91.1%, and 90.4%, respectively. Additionally, we carry out Wilcoxon's rank-sum test to determine whether our proposed classifier with the feature selection method yields a noteworthy improvement over other classifiers. From the experimental results, we conclude that the integration of SAS and BFPA outperforms other classifiers recently reported in the literature.
The exact minimax penalty function method is used to solve a nonconvex differentiable optimization problem with both inequality and equality constraints. The conditions for exactness of the penalization for the exact minimax penalty function method are established by assuming that the functions constituting the considered constrained optimization problem are invex with respect to the same function η (with the exception of those equality constraints for which the associated Lagrange multipliers are negative; these functions should be assumed to be incave with respect to η). Thus, a threshold of the penalty parameter is given such that, for all penalty parameters exceeding this threshold, equivalence holds between the set of optimal solutions of the considered constrained optimization problem and the set of minimizers of its associated penalized problem with the exact minimax penalty function. It is shown that coercivity is not sufficient to prove the results.
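The exactness property described in the abstract above can be illustrated on a toy problem. The following sketch is my own minimal example, not taken from the paper; the objective, the constraint functions, and the penalty value c = 10 are illustrative assumptions. It minimizes the exact minimax penalty function f(x) + c·max(0, g(x), |h(x)|) with a derivative-free solver, since the penalty is nonsmooth:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (illustrative, not from the paper):
#   minimize f(x) = x0^2 + x1^2
#   subject to g(x) = 1 - x0 - x1 <= 0 and h(x) = x0 - x1 = 0,
# whose constrained optimum is x = (0.5, 0.5).

def f(x):
    return x[0] ** 2 + x[1] ** 2

def minimax_penalty(x, c):
    g = 1.0 - x[0] - x[1]              # inequality constraint g(x) <= 0
    h = x[0] - x[1]                    # equality constraint h(x) = 0
    return f(x) + c * max(0.0, g, abs(h))

# For a penalty parameter c above the exactness threshold, the unconstrained
# minimizer of the penalized problem coincides with the constrained optimum.
res = minimize(lambda x: minimax_penalty(x, 10.0), x0=np.zeros(2),
               method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9, "maxiter": 2000})
```

Nelder-Mead is used here only because the max(...) term makes the penalized objective nondifferentiable at the constraint boundary; any method tolerant of nonsmoothness would do.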
This paper considers penalized least squares estimators with convex penalties or regularization norms. We provide sparsity oracle inequalities for the prediction error for a general convex penalty and for the particular cases of the Lasso and Group Lasso estimators in a regression setting. The main contribution is that our oracle inequalities are established for the more general case where the observation noise is drawn from probability measures that satisfy a weak spectral gap (or Poincaré) inequality instead of Gaussian distributions. We illustrate our results on a heavy-tailed example and a sub-Gaussian one; in particular, we give the explicit bounds of the oracle inequalities for these two special examples.
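For the Lasso special case mentioned above, the penalized least squares estimator can be computed by proximal gradient descent (ISTA), whose proximal step is soft-thresholding. The sketch below is a generic illustration on synthetic data (the data, the regularization level, and the iteration count are my assumptions), not the estimator analysis of the paper:

```python
import numpy as np

# Lasso penalized least squares:
#   argmin_b ||y - X b||^2 / (2n) + lam * ||b||_1,
# solved by ISTA; soft-thresholding is the proximal operator of the l1 norm.

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

# Synthetic sparse regression problem (illustrative data).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta = np.zeros(10)
beta[:3] = [3.0, -2.0, 1.5]             # sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(200)
b_hat = lasso_ista(X, y, lam=0.1)
```

The l1 penalty zeroes out the seven inactive coefficients while shrinking the three active ones by roughly lam, which is the bias the oracle inequalities account for.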
The solution properties of the semiparametric model are analyzed; in particular, penalized least squares for the semiparametric model becomes invalid when the matrix B^T P B is ill-posed or singular. Following the principle of ridge estimation for the linear parametric model, generalized penalized least squares for the semiparametric model are put forward, and some formulae and statistical properties of the estimates are derived. Finally, some helpful conclusions are drawn from simulation examples.
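The ridge idea this abstract builds on can be sketched in a few lines. This is a generic illustration, not the paper's generalized formulae: the weight matrix P is specialized to the identity, the near-collinear design and the ridge parameter alpha are my assumptions, and the point is only that adding alpha·I stabilizes the inversion of an ill-posed normal matrix B^T B:

```python
import numpy as np

# A design matrix with a near-duplicate column makes B^T B ill-posed.
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 3))
B = np.column_stack([B, B[:, 0] + 1e-8 * rng.standard_normal(50)])
x_true = np.array([1.0, 2.0, -1.0, 0.0])
y = B @ x_true + 0.01 * rng.standard_normal(50)

# Plain least squares would divide by a near-zero singular value.
cond = np.linalg.cond(B.T @ B)

# Ridge-type (generalized penalized) estimate: (B^T B + alpha I)^{-1} B^T y.
alpha = 1e-3
x_ridge = np.linalg.solve(B.T @ B + alpha * np.eye(4), B.T @ y)
```

With the collinear pair, ridge cannot separate the first and fourth coefficients, but their sum (and all well-identified coefficients) is recovered stably, which is exactly the behavior one wants when B^T P B is singular.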
Disparities between the in situ and satellite values at the positions where in situ values are obtained have been the main obstacle to smooth modeling of the distribution of ocean chlorophyll. The blending technique and the thin plate regression spline have so far been the main methods used to calibrate ocean chlorophyll at positions where the in situ field provides no value. In this paper, a combination of the two techniques is used to provide improved and reliable estimates from the satellite field. The thin plate regression spline is applied to the blending technique by imposing a penalty on the differences between the satellite and in situ fields at positions where both have observations. The objective of maximizing the use of the satellite field for prediction was outstanding in a validation study, where the penalized blending method showed a remarkable improvement in its estimation potential. It is hoped that most analyses of primary productivity and management in the ocean environment will benefit greatly from this result, since chlorophyll is one of the most important components in the formation of the ocean life cycle.
The Thin Plate Regression Spline (TPRS) was introduced as a means of smoothing off the differences between the satellite and in-situ observations during the two-dimensional (2D) blending process in an attempt to calibrate ocean chlorophyll. The result was a remarkable improvement in the predictive capabilities of the penalized model making use of the satellite observations. In addition, the blending process has been extended to three dimensions (3D), since most physical systems exist in three dimensions. In this article, an attempt to obtain more reliable and accurate predictions of ocean chlorophyll by extending the penalization process to three-dimensional (3D) blending is presented. Penalty matrices were computed using the integrated least squares (ILS) and the integrated squared derivative (ISD). Results obtained using the integrated least squares were not encouraging, but those obtained using the integrated squared derivative showed a reasonable improvement in predicting ocean chlorophyll, especially where the validation datum was surrounded by available data from the satellite data set. However, the process appeared computationally expensive, and the results matched the other methods on a general scale. In both cases, the procedure for implementing the penalization process in three-dimensional blending with penalty matrices calculated using the two techniques has been well established and can be used in any similar three-dimensional problem when it becomes necessary.
Penalized splines have been applied widely in many research areas, including disease modeling and epidemiology. However, because a single smoothing parameter produces the same amount of smoothing in every region of spatially heterogeneous data, the penalized spline is not always appropriate for fitting such data. This study assessed the properties of a hierarchical penalized spline model; the hierarchical penalty improves the fit as well as the accuracy of inference. The simulation demonstrates the potential benefits of the hierarchical penalty, which is obtained by modelling the global smoothing parameter as another spline. The results showed that the mixed model with the hierarchical penalty fit better than the mixed model without the hierarchy, as demonstrated by the rapid convergence of the model's posterior parameters and its smallest DIC value. The hierarchical model with fifteen sub-knots therefore provides the better fit to the data.
When the total least squares (TLS) solution is used to estimate the parameters in the errors-in-variables (EIV) model, the obtained parameter estimates will be unreliable if the observations contain systematic errors. To solve this problem, we propose to add the nonparametric part (systematic errors) to the partial EIV model, building a partial EIV model that weakens the influence of systematic errors. Then, having rewritten the model as a nonlinear model, we derive the formula for the parameter estimates based on the penalized total least squares criterion. Furthermore, based on the second-order approximation method of precision estimation, we derive the second-order bias and covariance of the parameter estimates and calculate the mean square error (MSE). For the selection of the smoothing factor, we propose to use the U-curve method. The experiments show that, compared with the traditional method, the proposed method can mitigate the influence of systematic errors to a certain extent and obtain more reliable parameter estimates and precision information, which validates the feasibility and effectiveness of the proposed method.
In this article, we use penalized splines to estimate the hazard function from a set of censored failure time data. A new approach to estimating the amount of smoothing is provided. Under regularity conditions, we establish the consistency and the asymptotic normality of the penalized likelihood estimators. Numerical studies and an example are conducted to evaluate the performance of the new procedure.
AIM: To compare the efficacies of patching and penalization therapies for the treatment of amblyopia patients. METHODS: The records of 64 eyes of 50 patients aged 7 to 16 years who had presented to our clinics with a diagnosis of amblyopia were evaluated retrospectively. Forty eyes of 26 patients who had received patching therapy and 24 eyes of 24 patients who had received penalization therapy were included in this study. The latencies and amplitudes of visual evoked potential (VEP) records and the best corrected visual acuities (BCVA) of these two groups were compared before and six months after the treatment. RESULTS: In both the patching and the penalization groups, visual acuities increased significantly following the treatments (P<0.05). The latency measurements of the P100 wave obtained with 1.0° and 15 arc min patterns decreased significantly in both groups following the 6-month treatment, while the amplitude measurements increased (P<0.05). CONCLUSION: Patching and penalization, the main methods used in the treatment of amblyopia, were also effective over the age of 7 years, which has been accepted as the critical age for the treatment of amblyopia.
We give an existence result for the obstacle parabolic equation ∂b(x,u)/∂t − div(a(x,t,u,∇u)) + div(Φ(x,t,u)) = f in Q_T, where b(x,u) is a bounded function of u, the term −div(a(x,t,u,∇u)) is a Leray–Lions type operator, and the function Φ is a nonlinear lower-order term satisfying only a growth condition. The right-hand side f belongs to L^1(Q_T). The proof of the existence of a solution is based on penalization methods.
The penalized least squares (PLS) method with appropriate weights has proved to be a successful baseline estimation method for various spectral analyses. It can extract the baseline from a spectrum while retaining the signal peaks in the presence of random noise. The algorithm is implemented by iterating over the weights of the data points. In this study, we propose a new approach to assigning weights based on the Bayesian rule. The proposed method provides a self-consistent weighting formula and performs well, particularly for baselines with different curvature components. The method was applied to analyze Schottky spectra obtained in 86Kr projectile fragmentation measurements in the experimental Cooler Storage Ring (CSRe) at Lanzhou. It provides an accurate and reliable storage lifetime with a smaller error bar than existing PLS methods. It is also a universal baseline-subtraction algorithm that can be used for spectrum-related experiments, such as precision nuclear mass and lifetime measurements in storage rings.
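The iterative-weight PLS scheme the abstract refers to can be sketched with the classical asymmetric least squares weighting (in the spirit of Eilers and Boelens), not the Bayesian weighting the paper proposes; the synthetic spectrum and all parameter values below are illustrative assumptions:

```python
import numpy as np

# Penalized least squares baseline: minimize
#   sum_i w_i (y_i - z_i)^2 + lam * sum_i (Delta^2 z_i)^2,
# iterating the weights so that points above the fit (likely peaks)
# are strongly down-weighted.
def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)       # second-difference operator
    P = lam * D.T @ D                       # smoothness penalty matrix
    w = np.ones(n)
    for _ in range(n_iter):
        z = np.linalg.solve(np.diag(w) + P, w * y)
        w = np.where(y > z, p, 1.0 - p)     # asymmetric reweighting
    return z

# Synthetic spectrum: slowly varying baseline + two peaks + noise.
x = np.linspace(0.0, 1.0, 400)
baseline = 2.0 + x
peaks = 5.0 * np.exp(-(x - 0.3) ** 2 / 1e-3) + 3.0 * np.exp(-(x - 0.7) ** 2 / 5e-4)
rng = np.random.default_rng(2)
y = baseline + peaks + 0.02 * rng.standard_normal(400)
z = asls_baseline(y)
```

The second-difference penalty keeps z smooth while the asymmetric weights let it pass under the peaks rather than through them, which is the behavior any weighting rule for this family of methods must reproduce.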
Penalized splines have been a popular method for estimating an unknown function in nonparametric regression because their low-rank spline bases make computation tractable. However, their performance is poor when estimating functions that vary rapidly in some regions and are smooth in others. This is caused by the use of a global smoothing parameter that applies a constant amount of smoothing across the function. To make the spline spatially adaptive, we introduce hierarchical penalized splines, which are obtained by modelling the global smoothing parameter as another spline.
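The single-global-smoothing-parameter setup that the hierarchical extension relaxes can be sketched as an ordinary penalized spline. The truncated-line representation below (as in the Ruppert-Wand mixed-model literature) is an assumed choice of basis, and the data and smoothing value are illustrative:

```python
import numpy as np

# Penalized spline with ONE global smoothing parameter lam:
# basis [1, x, (x - k_1)_+, ..., (x - k_K)_+] with a ridge penalty
# applied only to the truncated-line (knot) coefficients.
def pspline_fit(x, y, n_knots=20, lam=1.0):
    knots = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
    X = np.column_stack([np.ones_like(x), x,
                         np.maximum(x[:, None] - knots[None, :], 0.0)])
    P = np.diag([0.0, 0.0] + [lam] * n_knots)   # intercept/slope unpenalized
    beta = np.linalg.solve(X.T @ X + P, X.T @ y)
    return X @ beta

# Illustrative data: noisy sine curve.
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 1.0, 300))
f_true = np.sin(2 * np.pi * x)
y = f_true + 0.1 * rng.standard_normal(300)
f_hat = pspline_fit(x, y, lam=0.1)
```

Because lam is a single scalar, the same amount of smoothing applies everywhere; the hierarchical approach instead lets lam itself vary over x as another spline.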
Background: To date, compliance with atropine penalization in amblyopic children has only been assessed through self-report. The goal of this pilot study is to measure compliance with atropine penalization objectively. Methods: Seven amblyopic children (3-8 years; 20/40-20/125 in the amblyopic eye) were enrolled. None had been treated with atropine previously. Children were prescribed either a twice-per-week or a daily atropine regimen by their physicians. Compliance was defined as the percentage of days on which the atropine eye drop was taken relative to the number of doses prescribed. We used medication event monitoring system (MEMS) caps to measure compliance objectively. The MEMS caps are designed to electronically record the time and date when the bottle is opened. The parents of the children were provided a calendar log to report compliance subjectively. Participants were scheduled for return visits at 4 and 12 weeks. Weekly compliance was analyzed. Results: At 4 weeks, objective compliance averaged 88% (range, 57-100%), while subjective compliance was 98% (range, 90-100%). The relationship between actual dose in grams and visual acuity (VA) response (r=0.79, P=0.03) was significantly better than the relationship between regimen and response (r=0.41, P>0.05) or the relationship between actual dose in drops and response (r=0.52, P>0.05). Conclusions: Objective compliance with atropine penalization instructions can be monitored with MEMS, which may facilitate our understanding of the dose-response relationship. Objective compliance with atropine penalization decreases over time and varies with regimen. On average, subjective parental reporting of compliance is an overestimate.
Improving the ability to assess potential stroke deficit may aid the selection of patients most likely to benefit from acute stroke therapies. Methods based only on 'at risk' volumes or initial neurological condition do predict eventual outcome, but not perfectly. Given the close relationship between anatomy and function in the brain, we propose the use of a modified version of partial least squares (PLS) regression to examine how well stroke outcome covaries with infarct location. The modified version of PLS incorporates penalized regression and can handle either binary or ordinal data. This version is known as partial least squares with penalized logistic regression (PLS-PLR) and has been adapted from its original use for high-dimensional microarray data. We have adapted this algorithm for use with imaging data and demonstrate its use in a set of patients with aphasia (a high-level language disorder) following stroke.
With the high-speed development of information technology, contemporary data from a variety of fields have become extremely large. Datasets in which the number of features is well above the sample size are called high-dimensional data. In statistics, variable selection approaches are required to extract the useful information from high-dimensional data. The most popular approach is to add a penalty function, coupled with a tuning parameter, to the log-likelihood function; this is called the penalized likelihood method. However, almost all penalized likelihood approaches consider only noise accumulation and spurious correlation while ignoring endogeneity, which also appears frequently in high-dimensional spaces. In this paper, we explore the cause of endogeneity and its influence on penalized likelihood approaches. Simulations based on five classical penalized approaches are provided to demonstrate their inconsistency under endogeneity. The results show that the positive selection rate of all five approaches increases gradually but the false selection rate does not consistently decrease when endogenous variables exist; that is, they do not satisfy selection consistency.
I. Getting help from lawyers and its realization according to international standards. The right to defense for a person involved in a lawsuit is a universal human right. Article 11 of the Universal Declaration of Human Rights provides: "Everyone charged with a penal offence has the right to be presumed innocent until proved guilty according to law in a public trial at which he has had all the guarantees necessary for his defence." This means that (1) the right to defense is a basic human right due to all persons charged with a penal offence; (2) it is a basic requirement of the principle of presumption of innocence and fair trial; and (3) the realization of the right needs practical and effective guarantees.
The paper is directly motivated by the pricing of vulnerable European and American options in a general hazard process setup and a related study of the corresponding pre-default backward stochastic differential equations (BSDEs) and pre-default reflected backward stochastic differential equations (RBSDEs). The goal of this work is twofold. First, we aim to establish well-posedness results and comparison theorems for a generalized BSDE and a reflected generalized BSDE with a continuous and nondecreasing driver A. Second, we study penalization schemes for a generalized BSDE and a reflected generalized BSDE in which we penalize against the driver in order to obtain, in the limit, either a constrained optimal stopping problem or a constrained Dynkin game in which the set of the minimizer's admissible exercise times is constrained to the right support of the measure generated by A.
The minimax concave penalty (MCP) has been demonstrated, theoretically and practically, to be effective in nonconvex penalization for variable selection and parameter estimation. In this paper, we develop an efficient alternating direction method of multipliers (ADMM) with a continuation algorithm for solving the MCP-penalized least squares problem in high dimensions. Under some mild conditions, we study the convergence properties and the Karush-Kuhn-Tucker (KKT) optimality conditions of the proposed method. A high-dimensional BIC is developed to select the optimal tuning parameters. Simulations and a real data example are presented to illustrate the efficiency and accuracy of the proposed method.
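A bare-bones ADMM iteration for the MCP-penalized least squares problem can be sketched as follows. This is a minimal illustration under assumed tuning values (lam, gamma, rho), with the paper's continuation strategy and high-dimensional BIC omitted; the z-update uses the firm-thresholding proximal operator of the MCP:

```python
import numpy as np

# Proximal operator of c * MCP(.; lam, gamma) (firm thresholding); needs gamma > c.
def mcp_prox(z, lam, gamma, c):
    a = np.abs(z)
    return np.where(a <= c * lam, 0.0,
                    np.where(a <= gamma * lam,
                             np.sign(z) * (a - c * lam) / (1.0 - c / gamma),
                             z))

# ADMM for  min_b ||y - X b||^2 / (2n) + MCP(b; lam, gamma),
# splitting b = z and alternating a ridge-type b-update with the MCP prox.
def mcp_admm(X, y, lam=0.1, gamma=3.0, rho=1.0, n_iter=500):
    n, p = X.shape
    Xty = X.T @ y / n
    M = np.linalg.inv(X.T @ X / n + rho * np.eye(p))
    b = np.zeros(p); z = np.zeros(p); u = np.zeros(p)
    for _ in range(n_iter):
        b = M @ (Xty + rho * (z - u))            # quadratic subproblem
        z = mcp_prox(b + u, lam, gamma, 1.0 / rho)  # nonconvex prox step
        u = u + b - z                            # dual update
    return z

# Illustrative sparse problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10); beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(200)
b_hat = mcp_admm(X, y)
```

Unlike the Lasso, large coefficients fall in the unbiased region of the MCP (|b| > gamma·lam) and are returned essentially without shrinkage, which is the nearly-unbiased behavior that motivates nonconvex penalization.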
Funding: This work has been (partially) supported by the Project EFI ANR-17-CE40-0030 of the French National Research Agency.
Funding: Funded by the National Natural Science Foundation of China (No. 40274005).
Funding: Supported by the National Natural Science Foundation of China (Nos. 41874001 and 41664001), the Support Program for Outstanding Youth Talents in Jiangxi Province (No. 20162BCB23050), and the National Key Research and Development Program (No. 2016YFB0501405).
Funding: supported by the Natural Science Foundation of China (10771017, 10971015, 10231030) and a Key Project of the Ministry of Education of the People's Republic of China (309007).
Abstract: In this article, we use a penalized spline to estimate the hazard function from a set of censored failure-time data. A new approach to estimating the amount of smoothing is provided. Under regularity conditions, we establish the consistency and asymptotic normality of the penalized likelihood estimators. Numerical studies and an example are conducted to evaluate the performance of the new procedure.
Abstract: AIM: To compare the efficacies of patching and penalization therapies for the treatment of amblyopia. METHODS: The records of 64 eyes of 50 patients aged 7 to 16 years who had presented to our clinics with a diagnosis of amblyopia were evaluated retrospectively. Forty eyes of 26 patients who had received patching therapy and 24 eyes of 24 patients who had received penalization therapy were included in the study. The latencies and amplitudes of visual evoked potential (VEP) recordings and the best corrected visual acuities (BCVA) of the two groups were compared before and six months after treatment. RESULTS: In both the patching and the penalization groups, visual acuities increased significantly following treatment (P<0.05). The latency measurements of the P100 wave obtained with 1.0° and 15 arc min patterns decreased significantly in both groups after the 6-month treatment, while the amplitude measurements increased (P<0.05). CONCLUSION: Patching and penalization, the main methods used in the treatment of amblyopia, were effective even beyond the age of 7 years, which has been regarded as the critical age for amblyopia treatment.
Abstract: We give an existence result for the obstacle parabolic equation ∂b(x,u)/∂t − div(a(x,t,u,∇u)) + div(Φ(x,t,u)) = f in Q_T, where b(x,u) is a bounded function of u, the term −div(a(x,t,u,∇u)) is a Leray-Lions type operator, and Φ is a nonlinear lower-order term satisfying only a growth condition. The right-hand side f belongs to L¹(Q_T). The proof of the existence of a solution is based on penalization methods.
Funding: supported by the National Key R&D Program of China (No. 2018YFA0404401), the CAS Project for Young Scientists in Basic Research (No. YSBR-002), and the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB34000000).
Abstract: The penalized least squares (PLS) method with appropriate weights has proved to be a successful baseline-estimation method for various spectral analyses. It can extract the baseline from a spectrum while retaining the signal peaks in the presence of random noise. The algorithm is implemented by iterating over the weights of the data points. In this study, we propose a new approach for assigning weights based on Bayes' rule. The proposed method provides a self-consistent weighting formula and performs well, particularly for baselines with different curvature components. The method was applied to analyze Schottky spectra obtained in 86Kr projectile-fragmentation measurements at the experimental Cooler Storage Ring (CSRe) at Lanzhou. It provides an accurate and reliable storage lifetime with a smaller error bar than existing PLS methods. It is also a universal baseline-subtraction algorithm that can be used in spectrum-related experiments, such as precision nuclear mass and lifetime measurements in storage rings.
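The iteratively reweighted PLS scheme that this abstract builds on can be sketched generically. The Python snippet below implements an Eilers-style asymmetric least squares baseline, a standard weight-update rule, not the Bayesian rule proposed in the paper; the function name, tuning values, and toy spectrum are all illustrative assumptions:

```python
import numpy as np

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Penalized least squares baseline with iteratively updated
    asymmetric weights (a generic stand-in for the PLS family,
    not the paper's Bayesian weighting rule)."""
    n = len(y)
    # Second-order difference matrix D; the penalty is lam * ||D z||^2.
    D = np.diff(np.eye(n), 2, axis=0)
    P = lam * D.T @ D
    w = np.ones(n)
    for _ in range(n_iter):
        # Solve (W + lam D'D) z = W y for the current weights.
        z = np.linalg.solve(np.diag(w) + P, w * y)
        # Points above the baseline are treated as signal (small weight).
        w = np.where(y > z, p, 1 - p)
    return z

# Toy spectrum: a linear baseline plus two narrow peaks and noise.
x = np.linspace(0, 10, 400)
baseline = 0.5 + 0.1 * x
signal = np.exp(-((x - 3) ** 2) / 0.05) + np.exp(-((x - 7) ** 2) / 0.05)
rng = np.random.default_rng(0)
y = baseline + signal + 0.01 * rng.standard_normal(x.size)
z = asls_baseline(y)
print(np.max(np.abs(z - baseline)))  # deviation from the true baseline
```

Subtracting `z` from `y` leaves the peaks on a flat background, which is the preprocessing step the abstract's lifetime analysis relies on.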
Abstract: The penalized spline has been a popular method for estimating an unknown function in nonparametric regression because its low-rank spline bases make computation tractable. However, its performance is poor when estimating functions that vary rapidly in some regions and are smooth in others. This is caused by the use of a global smoothing parameter that applies a constant amount of smoothing across the whole function. To make the spline spatially adaptive, we introduce hierarchical penalized splines, which are obtained by modelling the global smoothing parameter as another spline.
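A minimal sketch of the non-adaptive starting point, a penalized spline with one global smoothing parameter, may clarify what the hierarchical penalty generalizes (the hierarchy would replace the fixed `lam` below with another spline over `x`). The basis choice, knot placement, and all names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pspline_fit(x, y, n_knots=15, lam=1.0):
    """Penalized spline with a truncated-line basis and a single
    global smoothing parameter lam."""
    # Interior knots at equally spaced quantiles of x.
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    # Design matrix: intercept, linear term, truncated lines (x - k)_+.
    X = np.column_stack([np.ones_like(x), x]
                        + [np.maximum(x - k, 0) for k in knots])
    # Ridge penalty on the truncated-basis coefficients only.
    P = np.zeros((X.shape[1], X.shape[1]))
    P[2:, 2:] = lam * np.eye(n_knots)
    beta = np.linalg.solve(X.T @ X + P, X.T @ y)
    return X @ beta

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
f = np.sin(2 * np.pi * x)
y = f + 0.1 * rng.standard_normal(x.size)
fhat = pspline_fit(x, y, lam=0.1)
print(np.mean((fhat - f) ** 2))  # mean squared estimation error
```

Because `lam` is constant, the same amount of smoothing is applied everywhere; this is exactly the limitation the hierarchical penalty addresses.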
Funding: supported by a pilot grant from the Indiana Clinical and Translational Sciences Institute Project Development Teams (PDT) to J. Wang and a Research to Prevent Blindness (RPB) unrestricted grant to the Glick Eye Institute at Indiana University.
Abstract: Background: To date, compliance with atropine penalization in amblyopic children has only been assessed through self-report. The goal of this pilot study is to measure compliance with atropine penalization objectively. Methods: Seven amblyopic children (3-8 years; 20/40-20/125 in the amblyopic eye) were enrolled. None had been treated with atropine previously. Children were prescribed either a twice-per-week or a daily atropine regimen by their physicians. Compliance was defined as the percentage of days on which the atropine eye drop was taken relative to the number of doses prescribed. We used medication event monitoring system (MEMS) caps to measure compliance objectively; MEMS caps electronically record the time and date each time the bottle is opened. The parents of the children were provided a calendar log to report compliance subjectively. Participants were scheduled for return visits at 4 and 12 weeks, and weekly compliance was analyzed. Results: At 4 weeks, objective compliance averaged 88% (range, 57-100%), while subjective compliance averaged 98% (range, 90-100%). The relationship between actual dose in grams and visual acuity (VA) response (r=0.79, P=0.03) was significantly stronger than the relationship between regimen and response (r=0.41, P>0.05) or between actual dose in drops and response (r=0.52, P>0.05). Conclusions: Objective compliance with atropine penalization can be monitored with MEMS, which may advance our understanding of the dose-response relationship. Objective compliance with atropine penalization decreases over time and varies with the regimen. On average, subjective parental reports overestimate compliance.
Abstract: Improving the ability to assess potential stroke deficit may aid the selection of patients most likely to benefit from acute stroke therapies. Methods based only on 'at risk' volumes or initial neurological condition do predict eventual outcome, but not perfectly. Given the close relationship between anatomy and function in the brain, we propose using a modified version of partial least squares (PLS) regression to examine how well stroke outcome covaries with infarct location. The modified version of PLS incorporates penalized regression and can handle either binary or ordinal data. This version, known as partial least squares with penalized logistic regression (PLS-PLR), has been adapted from its original use on high-dimensional microarray data. We have adapted this algorithm for imaging data and demonstrate its use in a set of patients with aphasia (a high-level language disorder) following stroke.
Abstract: With the rapid development of information technology, contemporary data from a variety of fields have become extremely large. In many datasets the number of features far exceeds the sample size; such data are called high-dimensional. In statistics, variable selection approaches are required to extract the useful information from high-dimensional data. The most popular approach is to add a penalty function, coupled with a tuning parameter, to the log-likelihood function; this is called the penalized likelihood method. However, almost all penalized likelihood approaches consider only noise accumulation and spurious correlation while ignoring endogeneity, which also appears frequently in high-dimensional spaces. In this paper, we explore the causes of endogeneity and its influence on penalized likelihood approaches. Simulations based on five classical penalized approaches are provided to demonstrate their inconsistency under endogeneity. The results show that the positive selection rate of all five approaches increases gradually, but the false selection rate does not consistently decrease when endogenous variables exist; that is, the approaches do not satisfy selection consistency.
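To make the penalized likelihood idea concrete, here is a minimal lasso (L1-penalized least squares) fit by coordinate descent under a well-behaved exogenous design, the setting in which such selectors are consistent. This is a generic sketch, not one of the five approaches simulated in the paper; the function name, tuning value, and simulated data are all assumptions:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso by cyclic coordinate descent for
    0.5 * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # Soft-thresholding update.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0) / col_sq[j]
    return beta

rng = np.random.default_rng(2)
n, p = 200, 20
X = rng.standard_normal((n, p))          # exogenous design: X independent of noise
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta = lasso_cd(X, y, lam=40.0)
selected = np.flatnonzero(np.abs(beta) > 1e-6)
print(selected)  # the truly active coordinates should be among these
```

Under endogeneity (noise correlated with the regressors), the soft-thresholding step no longer separates signal from noise cleanly, which is the failure mode the paper's simulations document.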
Abstract: I. Getting help from lawyers and its realization according to international standards. The right to defense for a person involved in a lawsuit is a universal human right. Article 11 of the Universal Declaration of Human Rights provides: "Everyone charged with a penal offence has the right to be presumed innocent until proved guilty according to law in a public trial at which he has had all the guarantees necessary for his defence." This means (1) the right to defence is a basic human right due to all persons charged with a penal offence; (2) it is a basic requirement of the principles of presumption of innocence and fair trial; and (3) the realization of this right needs practical and effective guarantees.
Funding: supported by the Australian Research Council Discovery Project (Grant No. DP220103106).
Abstract: This paper is directly motivated by the pricing of vulnerable European and American options in a general hazard-process setup and a related study of the corresponding pre-default backward stochastic differential equations (BSDEs) and pre-default reflected backward stochastic differential equations (RBSDEs). The goal of this work is twofold. First, we aim to establish well-posedness results and comparison theorems for a generalized BSDE and a reflected generalized BSDE with a continuous and nondecreasing driver A. Second, we study penalization schemes for a generalized BSDE and a reflected generalized BSDE in which we penalize against the driver in order to obtain, in the limit, either a constrained optimal stopping problem or a constrained Dynkin game in which the set of the minimizer's admissible exercise times is constrained to the right support of the measure generated by A.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11571263, 11501579, 11701571 and 41572315) and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (Grant No. CUGW150809).
Abstract: The minimax concave penalty (MCP) has been demonstrated, theoretically and practically, to be effective in nonconvex penalization for variable selection and parameter estimation. In this paper, we develop an efficient alternating direction method of multipliers (ADMM) with a continuation algorithm for solving the MCP-penalized least squares problem in high dimensions. Under mild conditions, we study the convergence properties and the Karush-Kuhn-Tucker (KKT) optimality conditions of the proposed method. A high-dimensional BIC is developed to select the optimal tuning parameters. Simulations and a real-data example are presented to illustrate the efficiency and accuracy of the proposed method.
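For reference, the MCP itself and its univariate (firm) thresholding operator can be written in a few lines. This is a textbook sketch of the penalty that the paper solves with ADMM, not the paper's algorithm; the function names are illustrative:

```python
import numpy as np

def mcp_penalty(t, lam, gamma):
    """MCP value: lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    and the constant gamma*lam^2/2 beyond that (gamma > 1)."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2 * gamma),
                    gamma * lam ** 2 / 2)

def mcp_threshold(z, lam, gamma):
    """Solution of min_b 0.5*(z - b)^2 + mcp_penalty(b, lam, gamma):
    firm thresholding, which leaves large |z| unbiased (unlike the
    lasso's soft thresholding)."""
    a = np.abs(z)
    inner = np.sign(z) * np.maximum(a - lam, 0) / (1 - 1 / gamma)
    return np.where(a <= gamma * lam, inner, z)

z = np.array([-3.0, -0.5, 0.2, 1.2, 4.0])
print(mcp_threshold(z, lam=1.0, gamma=3.0))
# small inputs are zeroed, intermediate ones shrunk, large ones untouched
```

The flat tail of the penalty is what removes the estimation bias of the lasso for large coefficients, at the cost of nonconvexity, which is why the paper needs the KKT and continuation analysis.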