Most edge-detection methods rely on calculating gradient derivatives of the potential field, a process that is easily affected by noise and therefore has low stability. We propose a new edge-detection method, the correlation coefficient of multidirectional standard deviations (CCMS), that is based solely on statistics. First, we demonstrate the reliability of the proposed method using a single model and then a combination of models. The proposed method is evaluated by comparing its results with those obtained by other edge-detection methods. The CCMS method offers outstanding recognition, retains the sharpness of details, and has low sensitivity to noise. We also applied the CCMS method to Bouguer anomaly data of a potash deposit in Laos. The applicability of the CCMS method is shown by comparing the inferred tectonic framework to that inferred from remote sensing (RS) data.
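As a rough numerical illustration of the statistic named in the title, the sketch below computes standard deviations along four principal directions around each grid cell of a potential-field array. The choice of four directions, the profile length, and the edge padding are assumptions for illustration; the published CCMS definition (including its correlation step) may differ.

```python
import numpy as np

def directional_std(field, length=3):
    """Standard deviation of samples taken along four principal directions
    (E-W, N-S, and the two diagonals) through each cell of a 2-D field.
    A simplified illustration of 'multidirectional standard deviations';
    the published CCMS method may define its directions differently."""
    offsets = {
        "ew": [(0, d) for d in range(-length, length + 1)],
        "ns": [(d, 0) for d in range(-length, length + 1)],
        "ne": [(d, d) for d in range(-length, length + 1)],
        "nw": [(d, -d) for d in range(-length, length + 1)],
    }
    pad = length
    padded = np.pad(field, pad, mode="edge")
    rows, cols = field.shape
    stds = {}
    for name, offs in offsets.items():
        # stack the field shifted along each offset of this direction,
        # then take the std across the shifted copies
        stack = np.stack([
            padded[pad + dr:pad + dr + rows, pad + dc:pad + dc + cols]
            for dr, dc in offs
        ])
        stds[name] = stack.std(axis=0)
    return stds
```

A cell sitting on a N-S trending edge shows a large E-W directional standard deviation but a small N-S one, which is the anisotropy signal a correlation-based detector can exploit.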
AIM: To evaluate whether glaucomatous visual field defect, particularly the pattern standard deviation (PSD) of the Humphrey visual field, is associated with visual evoked potential (VEP) parameters in patients with primary open angle glaucoma (POAG). METHODS: Visual fields by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potential (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for VEP recordings consisted of the transient pattern reversal method, in which a black-and-white checkerboard pattern was generated (full field) and displayed on a VEP monitor (colour 14') by an electronic pattern regenerator built into an evoked potential recorder (RMS EMG EP MARK II). RESULTS: There was a highly significant (P<0.001) negative correlation of P100 amplitude and a statistically significant (P<0.05) positive correlation of N70, P100 and N155 latencies with the PSD of the Humphrey visual field in POAG subjects across age groups, as evaluated by Student's t-test. CONCLUSION: Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values; conversely, as PSD increased, the magnitude of VEP excursions diminished.
X-ray fluorescence (XRF) analysis relies on the particle size produced by milling a material; milling should yield a uniform, fine-grained powder. The finer and more uniform the particle size, the better the result and the easier material quality control becomes. To ensure uniform and fine powder, a comparative analysis was conducted with different grinding aids, and the pressed pellet method was used to obtain analysis results. Pressed pellets of a cement raw meal sample milled with different grinding aids (graphite, aspirin and lithium borate) were subjected to XRF. Graphite produced better particle size uniformity, with a corresponding standard deviation that made quality control of the raw meal easier and better than with aspirin or lithium borate.
The current attempt aims to outline the geometrical framework of a well-known statistical problem: the explicit expression of the arithmetic mean standard deviation distribution. To this respect, after a short exposition, three steps are performed: 1) formulation of the arithmetic mean standard deviation as a function of the errors, which are themselves statistically independent; 2) formulation of the arithmetic mean standard deviation distribution as a function of the errors; 3) formulation of the arithmetic mean standard deviation distribution as a function of the arithmetic mean standard deviation and the arithmetic mean rms error. The integration domain can be expressed in canonical form after a change of reference frame in the n-space, which is recognized as an infinitely thin n-cylindrical corona whose symmetry axis coincides with a coordinate axis. Finally, the solution is presented and a number of (well-known) related parameters are inferred for the sake of completeness.
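For orientation, the classical endpoint of this problem is well known: for n independent Gaussian errors with rms error σ and the mean-subtracted definition of the sample standard deviation s, the quantity ns²/σ² follows a χ² distribution with n−1 degrees of freedom, which gives the chi-type density below. This is the standard textbook result, stated here only as context for the geometrical derivation the paper presents.

```latex
% Density of the arithmetic mean standard deviation s, for n independent
% Gaussian errors with rms error \sigma (since n s^2/\sigma^2 \sim \chi^2_{n-1}):
f(s) \;=\; \frac{2}{\Gamma\!\left(\tfrac{n-1}{2}\right)}
\left(\frac{n}{2\sigma^{2}}\right)^{\!\frac{n-1}{2}}
s^{\,n-2}\,
\exp\!\left(-\frac{n\,s^{2}}{2\sigma^{2}}\right),
\qquad s \ge 0 .
```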
The size distribution of the broken top coal blocks is an important factor affecting the recovery ratio and the efficiency of drawing top coal in a longwall top coal caving (LTCC) mining panel. The standard deviation of top coal block size (dt) is one of the main parameters reflecting the size distribution of top coal. To find the effect of dt on the caving mechanism, this study simulates experiments with 9 different dt values using the discrete element software PFC. The dt values fall into two stages: a uniform distribution stage (UDS), where dt is less than 0.1 (Schemes 1-5), and a nonuniform distribution stage (NDS), where dt is more than 0.1 (Schemes 6-9). This research mainly investigates the variation of recovery ratio, drawing body shape, boundary of top coal, and contact force between particles in the two stages. The results show that, with increasing dt, the recovery ratio of the panel first increases and then decreases in UDS; it is largest in Scheme 3, which mainly increases the drawing volume at the side of the starting drawing end. In NDS, however, the recovery ratio first decreases and then increases quickly; it is largest in Scheme 9, where the drawing volume at the side of the finishing drawing end is relatively higher. In UDS, the major size of top coal is basically medium, while in NDS the size varies from medium to small, and then to large, with distinct differences in the shape and volume of the drawing body. When the major size of top coal is medium or small, the cross-section width of the initial boundary of top coal at each height is relatively small. Conversely, when the top coal size is large, the initial boundary of top coal has a larger opening range, and the rotating angle of the lower boundary is relatively small in the normal drawing stage, which is conducive to the development of the drawing body and reduces the residual top coal; the maximum particle velocity and the particle movement angle are both larger. This study lays a foundation for the prediction of recovery ratio, and suggests that uniform top coal is more manageable and has a larger recovery ratio.
The regularization method is an effective method for solving ill-posed equations. In this paper, the unbiased estimation formula of the unit weight standard deviation in the regularization solution is derived, and the formula is verified with a numerical case of 1000 sample data using a typical ill-posed equation, the Fredholm integral equation of the first kind.
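Once discretized, a Fredholm first-kind equation yields an ill-conditioned linear system Ax ≈ b, and the standard "regularization solution" referred to above is the Tikhonov solve sketched below. The unbiased unit-weight standard deviation formula derived in the paper is not reproduced here; the matrix in the usage note is illustrative only.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Tikhonov-regularized least-squares solution
        x = (A^T A + alpha * I)^{-1} A^T b
    of an ill-posed (or ill-conditioned) linear system A x ~ b.
    alpha > 0 is the regularization parameter."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

For a well-conditioned system and a tiny alpha, the result agrees with ordinary least squares; as alpha grows, the solution is damped toward zero, trading bias for stability.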
The current attempt aims to extend previous results concerning the explicit expression of the arithmetic mean standard deviation distribution to the general case of the weighted mean standard deviation distribution. To this respect, the integration domain is expressed in canonical form after a change of reference frame in the n-space, which is recognized as an infinitely thin n-cylindrical corona whose axis coincides with a coordinate axis and whose orthogonal section is an infinitely thin, homothetic (n-1)-elliptical corona. The semiaxes are formulated in two different ways, namely in terms of (1) eigenvalues, via the eigenvalue equation, and (2) leading principal minors of the matrix of a quadratic form, via the Jacobi formulae. The distribution and related parameters have the same formal expression as their counterparts in the special case where the weighted mean coincides with the arithmetic mean. The reduction of some results to ordinary geometry is also considered.
AIM: To compare four methods of approximating the mean and standard deviation (SD) when only medians and interquartile ranges are provided. METHODS: We performed simulated meta-analyses on six datasets of 15, 30, 50, 100, 500, and 1000 trials, respectively. Subjects were iteratively generated from one of seven scenarios: five theoretical continuous distributions [Normal, Normal (0, 1), Gamma, Exponential, and Bimodal] and two real-life distributions of intensive care unit stay and hospital stay. For each simulation, we calculated the pooled estimates assembling the study-specific medians and SD approximations: conservative SD, less conservative SD, mean SD, or interquartile range. We provided a graphical evaluation of the standardized differences. To show which imputation method produced the best estimate, we ranked those differences and calculated the rate at which each estimate appeared as the best, second-best, third-best, or fourth-best. RESULTS: The best pooled estimate for the overall mean and SD was provided by the median and interquartile range (mean standardized estimates: 4.5 ± 2.2, P = 0.14) or by the median and the conservative SD estimate (mean standardized estimates: 4.5 ± 3.5, P = 0.13). The less conservative approximation of SD appeared to be the worst method, exhibiting a significant difference from the reference method at the 90% confidence level. The method that ranked first most frequently was the interquartile range method (23/42 = 55%), particularly when data were generated according to the Standard Normal, Gamma, and Exponential distributions. The second-best was the conservative SD method (15/42 = 36%), particularly for data from a bimodal distribution and for the intensive care unit stay variable. CONCLUSION: Meta-analytic estimates are not significantly affected by approximating the missing values of mean and SD with the corresponding values for median and interquartile range.
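The commonly used quartile-based rules of this kind are sketched below: mean ≈ (q1 + median + q3)/3 and SD ≈ IQR/1.35, the latter valid for roughly normal data. These are the textbook approximations, not necessarily the exact four estimators compared in the study.

```python
import random
import statistics

def approx_from_quartiles(q1, median, q3):
    """Approximate mean and SD from the median and quartiles.
    mean ~ (q1 + median + q3) / 3; SD ~ IQR / 1.35, since for a normal
    distribution IQR = 2 * 0.6745 * sigma ~ 1.35 * sigma.
    A common rule of thumb, not the study's exact estimators."""
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd

# quick check against a known Normal(10, 2) sample
random.seed(0)
sample = [random.gauss(10, 2) for _ in range(10000)]
q1, med, q3 = statistics.quantiles(sample, n=4)
m, s = approx_from_quartiles(q1, med, q3)
```

On the simulated normal sample the recovered mean and SD land close to the true values 10 and 2; for heavily skewed data the approximation degrades, which is exactly the situation the simulated meta-analyses probe.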
Two additional features are particularly useful in pixelwise satellite data segmentation using neural networks: one results from local window averaging around each pixel (MWA) and another uses a standard deviation estimator (MWSD) instead of the average. While the former's complexity has already been reduced to a satisfying minimum, the latter's had not. This article proposes a new algorithm that can substitute a naive MWSD, making the complexity of the computational process fall from O(N²n²) to O(N²n), where N is the side of a square input array and n is the moving window's side length. The Numba Python compiler was used to make Python a competitive high-performance computing language in our optimizations. Our results show efficiency benchmarks.
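The complexity reduction rests on reusing window sums instead of recomputing each window from scratch. One compact way to see the idea uses 2-D cumulative sums of x and x², which brings the per-window cost to O(1) and the total to O(N²); this integral-image variant is a plain-NumPy sketch of the principle, not the article's Numba-compiled running-sum implementation.

```python
import numpy as np

def moving_window_std(a, n):
    """Moving-window standard deviation over all 'valid' n-by-n windows,
    computed from cumulative sums of x and x^2 so each window costs O(1).
    Illustrates the reuse-of-sums idea behind fast MWSD."""
    a = a.astype(float)
    # 2-D cumulative sums with a zero border for clean window differences
    c1 = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
    c2 = np.zeros_like(c1)
    c1[1:, 1:] = a.cumsum(0).cumsum(1)
    c2[1:, 1:] = (a * a).cumsum(0).cumsum(1)
    # sum of x and x^2 inside every window, via inclusion-exclusion
    s1 = c1[n:, n:] - c1[:-n, n:] - c1[n:, :-n] + c1[:-n, :-n]
    s2 = c2[n:, n:] - c2[:-n, n:] - c2[n:, :-n] + c2[:-n, :-n]
    cnt = n * n
    var = s2 / cnt - (s1 / cnt) ** 2
    return np.sqrt(np.maximum(var, 0.0))  # clamp tiny negative round-off
```

The x² accumulation can lose precision for large dynamic ranges, which is one reason a running-sum formulation per column (as the O(N²n) scheme implies) can be preferable in practice.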
An adaptive contrast enhancement (ACE) algorithm is presented in this paper, in which the contrast gain is determined by mapping the local standard deviation (LSD) histogram of an image to a Gaussian distribution function. The contrast gain is nonlinearly adjusted to avoid noise over-enhancement and ringing artifacts while improving detail contrast with less computational burden. The effectiveness of our method is demonstrated with radiological images and compared with other algorithms.
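The general ACE scheme amplifies each pixel's deviation from its local mean by a gain tied to the local standard deviation. The sketch below uses a simple clipped inverse-LSD gain purely for illustration; the paper's specific Gaussian-histogram gain mapping is not reproduced.

```python
import numpy as np

def adaptive_contrast_enhance(img, n=7, max_gain=3.0):
    """Unsharp-masking style ACE: out = m + gain * (x - m), where m is the
    local mean and the gain shrinks as the local standard deviation (LSD)
    grows, limiting noise over-enhancement in flat regions. The clipped
    inverse-LSD gain here is an assumption for illustration only."""
    img = img.astype(float)
    pad = n // 2
    p = np.pad(img, pad, mode="reflect")
    rows, cols = img.shape
    out = np.empty_like(img)
    for i in range(rows):          # naive loops: clear, not fast
        for j in range(cols):
            w = p[i:i + n, j:j + n]
            m, sd = w.mean(), w.std()
            gain = 1.0 if sd == 0 else min(max_gain, 1.0 + 1.0 / sd)
            out[i, j] = m + gain * (img[i, j] - m)
    return out
```

Clipping the gain at max_gain is what prevents near-flat (low-LSD) regions from having their noise blown up, the failure mode the abstract's nonlinear adjustment addresses.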
Perceptual image quality assessment (IQA) is one of the most indispensable yet challenging problems in image processing and computer vision. It is necessary to develop automatic and efficient approaches that can accurately predict perceptual image quality consistently with human subjective evaluation. To further improve the prediction accuracy for distortions of color images, in this paper we propose a novel, effective and efficient IQA model called perceptual gradient similarity deviation (PGSD). Based on the gradient magnitude similarity, we propose a gradient direction selection method to automatically determine the pixel-wise perceptual gradient. The luminance and chrominance channels are both taken into account to characterize the quality degradation caused by intensity and color distortions. Finally, a multi-scale strategy is utilized and pooled with different weights to incorporate image details at different resolutions. Experimental results on the LIVE, CSIQ and TID2013 databases demonstrate the superior performance of the proposed algorithm.
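The gradient-magnitude-similarity baseline that PGSD builds on can be sketched compactly: a per-pixel similarity map of the two images' gradient magnitudes, pooled by its standard deviation (lower deviation means higher quality). Prewitt gradients and the stabilizing constant follow common GMSD practice; PGSD's direction selection, chrominance handling and multi-scale pooling are not reproduced here.

```python
import numpy as np

def gradient_magnitude_similarity_deviation(ref, dist, c=170.0):
    """Baseline GMSD: per-pixel gradient magnitude similarity
    (2*g1*g2 + c) / (g1^2 + g2^2 + c), pooled by standard deviation.
    c is a stabilizing constant; a sketch, not the full PGSD model."""
    def grad_mag(x):
        kx = np.array([[1, 0, -1]] * 3) / 3.0   # Prewitt, horizontal
        ky = kx.T                               # Prewitt, vertical
        p = np.pad(x.astype(float), 1, mode="edge")
        rows, cols = x.shape
        gx = sum(kx[a, b] * p[a:a + rows, b:b + cols]
                 for a in range(3) for b in range(3))
        gy = sum(ky[a, b] * p[a:a + rows, b:b + cols]
                 for a in range(3) for b in range(3))
        return np.hypot(gx, gy)
    g1, g2 = grad_mag(ref), grad_mag(dist)
    gms = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return gms.std()
```

Identical images give a similarity map that is exactly 1 everywhere, hence a deviation of 0; any local structural distortion perturbs the map and raises the score.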
Recursive algorithms are very useful for computing M-estimators of regression coefficients and scatter parameters. In this article, it is shown that for a nondecreasing ul(t), under some mild conditions the recursive M-estimators of regression coefficients and scatter parameters are strongly consistent, and the recursive M-estimator of the regression coefficients is also asymptotically normally distributed. Furthermore, optimal recursive M-estimators, asymptotic efficiencies of recursive M-estimators, and asymptotic relative efficiencies between recursive M-estimators of regression coefficients are studied.
AIM: To develop a subset of simple outcome measures to quantify prosthetic gait deviation without needing three-dimensional gait analysis (3DGA). METHODS: Eight unilateral transfemoral amputees and 12 unilateral transtibial amputees were recruited, along with 28 able-bodied controls. All participants underwent 3DGA, the timed-up-and-go test and the six-minute walk test (6MWT). The lower-limb amputees also completed the Prosthesis Evaluation Questionnaire. Results from 3DGA were summarised using the gait deviation index (GDI), which was subsequently regressed, using stepwise regression, against the other measures. RESULTS: Step length (SL), self-selected walking speed (SSWS) and the distance walked during the 6MWT (6MWD) were significantly correlated with GDI. The 6MWD was the strongest single predictor of the GDI, followed by SL and SSWS. The predictive ability of the regression equations was improved following inclusion of self-report data related to mobility and prosthetic utility. CONCLUSION: This study offers a practicable alternative for quantifying kinematic deviation without the need to conduct complete 3DGA.
Fuzzy regression provides more approaches for dealing with imprecise or vague problems. Traditional fuzzy regression is established on triangular fuzzy numbers, which can be represented by trapezoidal numbers. The independent variables, the coefficients of the independent variables and the dependent variable in the regression model are fuzzy numbers at different times, and TW, the shape-preserving operator, is the only T-norm that induces a shape-preserving multiplication of LL-type fuzzy numbers. So, in this paper, we propose a new fuzzy regression model based on LL-type trapezoidal fuzzy numbers and TW. Firstly, we introduce the basic fuzzy set theories, the basic arithmetic propositions of the shape-preserving operator and a new distance measure between trapezoidal numbers. Secondly, we investigate the specific model algorithms for the FIFCFO model (fuzzy input-fuzzy coefficient-fuzzy output model) and introduce three goodness-of-fit criteria: Error Index, Similarity Measure and Distance Criterion. Thirdly, we use a design set and two reference sets to compare our proposed model with the reference models and assess their goodness with the above three criteria. Finally, we conclude that our proposed model is reasonable and has better prediction accuracy, though it is less robust than the reference models according to the three goodness-of-fit criteria. So, we can expand the traditional fuzzy regression model to our proposed new model.
Binary logistic regression models are commonly used to assess the association between outcomes and covariates. Many covariates are inherently continuous and have a variety of distributions, including some that are heavily skewed to the left or right. Existing theoretical formulas, criteria, and simulation programs cannot accurately estimate the sample size and power for non-standard distributions. Therefore, we have developed a simulation program that uses Monte Carlo methods to estimate the exact power of a binary logistic regression model. This power calculation can be used for distributions of any shape and covariates of any type (continuous, ordinal, and nominal), and can account for nonlinear relationships between covariates and outcomes. For illustrative purposes, the simulation program is applied to real data from a study on the influence of smoking on 90-day outcomes after acute atherothrombotic stroke. Our program is applicable to all effect sizes and, with some modifications, supports various statistical methods related to logistic regression, such as Bayesian inference.
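The Monte Carlo recipe is: repeatedly simulate covariates from the distribution of interest, generate binary outcomes from the assumed logistic model, refit, and record how often the Wald test rejects. The sketch below uses a skewed (exponential) covariate to stand in for a "non-standard" distribution; all names, defaults and the intercept value are illustrative, not the published program's interface.

```python
import math
import random

def fit_logistic(xs, ys, iters=25):
    """Newton-Raphson fit of logit P(y=1) = b0 + b1*x for a single
    covariate; returns (b1, se_b1) with the SE from the inverse
    Fisher information."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            z = max(-30.0, min(30.0, b0 + b1 * x))  # guard exp overflow
            p = 1.0 / (1.0 + math.exp(-z))
            w = p * (1.0 - p)
            g0 += y - p
            g1 += (y - p) * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b1, math.sqrt(h00 / det)

def power(beta1, n, sims=200, crit=1.96, seed=1):
    """Monte Carlo power: fraction of simulated datasets in which the
    Wald test rejects b1 = 0. Exponential covariate and intercept -1
    are illustrative assumptions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        xs = [rng.expovariate(1.0) for _ in range(n)]
        ys = [1 if rng.random() < 1.0 / (1.0 + math.exp(1.0 - beta1 * x))
              else 0 for x in xs]
        b1, se = fit_logistic(xs, ys)
        hits += abs(b1 / se) > crit
    return hits / sims
```

Under the null (beta1 = 0) the rejection rate should hover near the nominal 5%, which doubles as a sanity check on the simulation itself.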
Background: Basal ileal endogenous amino acid (AA) losses (IAAend) and standardized ileal digestibility (SID) values of cereal grains, such as barley, are apparently underestimated when determined according to the nitrogen (N)-free method. Regression analysis between the dietary apparent ileal digestible content (cAID) and total crude protein (CP) and AA can be considered an alternative approach to obtain more accurate values for IAAend and the SID of AA in cereal grains. Methods: Eight hulled barley genotypes were used, with barley being the only source of CP and AA in the assay diets. The diets contained 95% as-fed of each of these eight barley genotypes, ranging in CP content between 109.1 and 123.8 g/kg dry matter (DM). Nine ileally T-cannulated barrows, average body weight (BW) 30 ± 2 kg, were allotted to a row-column design comprising eight periods of 6 d each and nine pigs. On d 5 and the night of d 6 of every period, ileal digesta were collected for a total of 12 h. The IAAend and the SID were determined by linear regression analysis between cAID and total dietary CP and AA. Results: Linear relationships exist between cAID and total CP and AA (P < 0.001). The IAAend of CP, Lys, Met, Thr and Trp amounted to 35.34, 1.08, 0.25, 1.02 and 0.38 g/kg DM intake (DMI), respectively, which is greater than the average IAAend determined previously under N-free feeding conditions. The SID of CP, Lys, Met, Thr and Trp was 90, 79, 85, 79 and 86%, respectively, and was greater than tabulated values. Moreover, these SID values were greater than those reported in the literature based on correction of the apparent ileal digestibility (AID) of CP and AA for their IAAend values. Summarized, the results of the present regression analysis indicate greater IAAend in barley-based diets compared with those obtained by N-free feeding. Conclusions: For low-protein feed ingredients like barley, the regression method may be preferred over correction of AID values for IAAend determined under N-free feeding conditions, as the intercepts and slopes of the linear regression equations between cAID and total dietary CP and AA provide direct estimates of IAAend and the SID of CP and AA in the presence of the assay feed ingredient.
Funding: supported by the National Hi-Tech Research and Development Program of China (863 Program) (No. 2006AA06Z107) and the National Natural Science Foundation of China (No. 40930314).
Funding: supported by the National Key R&D Plan of China (Grant No. 2018YFC0604501), the Natural Science Foundation of China (Grant Nos. 51934008, 51674264, 51904305), and the Research Fund of the State Key Laboratory of Coal Resources and Safe Mining, CUMT, China (Grant No. SKLCRSM19KF023).
文摘The size distribution of the broken top coal blocks is an important factor,affecting the recovery ratio and the efficiency of drawing top coal in longwall top coal caving(LTCC)mining panel.The standard deviation of top coal block size(dt)is one of the main parameters to reflect the size distribution of top coal.To find the effect of dt on the caving mechanism,this study simulates experiments with 9 different dt by using discrete element software PFC.The dt is divided into two stages:uniform distribution stage(UDS)whose dt is less than 0.1(Schemes 1–5),and nonuniform distribution stage(NDS)whose dt is more than 0.1(Schemes 6–9).This research mainly investigates the variation of recovery ratio,drawing body shape,boundary of top coal,and contact force between particles in the two stages,respectively.The results showed that with the increasing dt,the recovery ratio of the panel increases first and then decreases in UDS.It is the largest in Scheme 3,which mainly increases the drawing volume at the side of starting drawing end.However,the recovery ratio decreases first and then increases quickly in NDS,and it is the largest in Scheme 9,where the drawing volume at the side of finishing drawing end are relatively higher.In UDS,the major size of top coal is basically medium,while in NDS,the size varies from medium to small,and then to large,with a distinct difference in shape and volume of the drawing body.When the major size of top coal is medium and small,the cross-section width of the initial boundary of top coal at each height is relatively small.Conversely,when the top coal size is large,the initial boundary of top coal has a larger opening range,the rotating angle of lower boundary is relatively small in the normal drawing stage,which is conducive to the development of drawing body and reduces the residual top coal,and the maximum particle velocity and the particles movement angle are both larger.This study lays a foundation for the prediction of recovery ratio,and 
suggests that the uniform top coal is more manageable and has a larger recovery ratio.
文摘Regularization method is an effective method for solving ill\|posed equation. In this paper the unbiased estimation formula of unit weight standard deviation in the regularization solution is derived and the formula is verified with numerical case of 1 000 sample data by use of the typical ill\|posed equation, i.e. the Fredholm integration equation of the first kind.
文摘The current attempt is aimed to extend previous results, concerning the explicit expression of the arithmetic mean standard deviation distribution, to the general case of the weighted mean standard deviation distribution. To this respect, the integration domain is expressed in canonical form after a change of reference frame in the n-space, which is recognized as an infinitely thin n-cylindrical corona where the axis coincides with a coordinate axis and the orthogonal section is an infinitely thin, homotetic (n-1)-elliptical corona. The semiaxes are formulated in two different ways, namely in terms of (1) eigenvalues, via the eigenvalue equation, and (2) leading principal minors of the matrix of a quadratic form, via the Jacobi formulae. The distribution and related parameters have the same formal expression with respect to their counterparts in the special case where the weighted mean coincides with the arithmetic mean. The reduction of some results to ordinary geometry is also considered.
文摘AIM: To compare four methods to approximate mean and standard deviation (SD) when only medians and interquartile ranges are provided.METHODS: We performed simulated meta-analyses on six datasets of 15, 30, 50, 100, 500, and 1000 trials, respectively. Subjects were iteratively generated from one of the following seven scenarios: five theoreti-cal continuous distributions [Normal, Normal (0, 1), Gamma, Exponential, and Bimodal] and two real-life distributions of intensive care unit stay and hospital stay. For each simulation, we calculated the pooled estimates assembling the study-specific medians and SD approximations: Conservative SD, less conservativeSD, mean SD, or interquartile range. We provided a graphical evaluation of the standardized differences.To show which imputation method produced the best estimate, we ranked those differences and calculated the rate at which each estimate appeared as the best, second-best, third-best, or fourth-best.RESULTS: Our results demonstrated that the best pooled estimate for the overall mean and SD was provided by the median and interquartile range (mean standardized estimates: 4.5 ± 2.2, P = 0.14) or by the median and the SD conservative estimate (mean standardized estimates: 4.5 ± 3.5, P = 0.13). The less conservative approximation of SD appeared to be the worst method, exhibiting a significant difference from the reference method at the 90% confidence level. The method that ranked first most frequently is the interquartile range method (23/42 = 55%), particularly when data were generated according to the Standard Normal, Gamma, and Exponential distributions. The second-best is the conservative SD method (15/42 = 36%), particularly for data from a bimodal distributionand for the intensive care unit stay variable. CONCLUSION: Meta-analytic estimates are not signi-fcantly affected by approximating the missing values ofmean and SD with the correspondent values for medianand interquartile range.
Abstract: Two additional features are particularly useful in pixelwise satellite data segmentation using neural networks: one results from local window averaging around each pixel (MWA) and another uses a standard deviation estimator (MWSD) instead of the average. While the former's complexity has already been reduced to a satisfying minimum, the latter's had not. This article proposes a new algorithm that can substitute a naive MWSD, making the complexity of the computational process fall from O(N²n²) to O(N²n), where N is the side of a square input array and n is the moving window's side length. The Numba Python compiler was used to make Python a competitive high-performance computing language in our optimizations. Our results show efficiency benchmarks.
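The standard way to remove a factor of n from a moving-window standard deviation is to keep running (prefix) sums of the values and their squares, so each window's variance is obtained in O(1). Whether this is the article's exact algorithm is an assumption; the 1-D sketch below illustrates the idea (the 2-D case applies the same update along rows and columns):

```python
import math

def sliding_std(a, n):
    """Sliding-window population standard deviation via prefix sums.

    Prefix sums of x and x^2 make each window's variance an O(1) lookup,
    so one pass over the data costs O(N) instead of the naive O(N*n).
    """
    ps = [0.0]   # prefix sums of values
    ps2 = [0.0]  # prefix sums of squared values
    for x in a:
        ps.append(ps[-1] + x)
        ps2.append(ps2[-1] + x * x)
    out = []
    for i in range(len(a) - n + 1):
        s = ps[i + n] - ps[i]
        s2 = ps2[i + n] - ps2[i]
        var = max(s2 / n - (s / n) ** 2, 0.0)  # clamp tiny negative round-off
        out.append(math.sqrt(var))
    return out
```

Compiling such a loop with Numba's `@njit` is what makes the pure-Python formulation competitive, as the abstract notes.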
Funding: Supported by the National Natural Science Foundation of China (No. 39630110) and the National Key Technologies R&D Programme under Contract 96-920-12-01.
Abstract: An adaptive contrast enhancement (ACE) algorithm is presented in this paper, in which the contrast gain is determined by mapping the local standard deviation (LSD) histogram of an image to a Gaussian distribution function. The contrast gain is nonlinearly adjusted to avoid noise over-enhancement and ringing artifacts while improving detail contrast with less computational burden. The effectiveness of our method is demonstrated on radiological images and compared with other algorithms.
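The core ACE step amplifies each pixel's deviation from its local mean by a spatially varying gain. The paper derives the gain by mapping the LSD histogram onto a Gaussian; the sketch below instead uses the conventional baseline of a gain inversely proportional to the LSD, clamped to limit noise over-enhancement in flat regions (an illustrative assumption, not the paper's mapping):

```python
def ace_gain(lsd, k=1.0, g_max=5.0, eps=1e-6):
    """Baseline ACE-style gain: inversely proportional to the local
    standard deviation, clamped so flat (low-LSD) regions do not have
    their noise amplified without bound."""
    return min(g_max, k / (lsd + eps))

def ace_pixel(value, local_mean, lsd, k=1.0):
    """Enhance one pixel: keep the local mean, amplify the deviation."""
    return local_mean + ace_gain(lsd, k) * (value - local_mean)
```

The nonlinear adjustment described in the abstract replaces this inverse rule with a histogram-derived mapping, precisely to tame the ringing artifacts the clamped baseline only crudely controls.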
Abstract: Perceptual image quality assessment (IQA) is one of the most indispensable yet challenging problems in image processing and computer vision. It is quite necessary to develop automatic and efficient approaches that can accurately predict perceptual image quality consistently with human subjective evaluation. To further improve the prediction accuracy for the distortion of color images, in this paper we propose a novel, effective and efficient IQA model, called perceptual gradient similarity deviation (PGSD). Based on the gradient magnitude similarity, we propose a gradient direction selection method to automatically determine the pixel-wise perceptual gradient. The luminance and chrominance channels are both taken into account to characterize the quality degradation caused by intensity and color distortions. Finally, a multi-scale strategy is utilized and pooled with different weights to incorporate image details at different resolutions. Experimental results on the LIVE, CSIQ and TID2013 databases demonstrate the superior performance of the proposed algorithm.
Funding: Supported by the Natural Sciences and Engineering Research Council of Canada, the National Natural Science Foundation of China, and the Doctoral Fund of the Education Ministry of China.
Abstract: Recursive algorithms are very useful for computing M-estimators of regression coefficients and scatter parameters. In this article, it is shown that, for a nondecreasing ul(t), under some mild conditions the recursive M-estimators of regression coefficients and scatter parameters are strongly consistent, and the recursive M-estimator of the regression coefficients is also asymptotically normally distributed. Furthermore, optimal recursive M-estimators, asymptotic efficiencies of recursive M-estimators, and asymptotic relative efficiencies between recursive M-estimators of regression coefficients are studied.
Abstract: AIM: To develop a subset of simple outcome measures to quantify prosthetic gait deviation without needing three-dimensional gait analysis (3DGA). METHODS: Eight unilateral transfemoral amputees and 12 unilateral transtibial amputees were recruited, along with 28 able-bodied controls. All participants underwent 3DGA, the timed-up-and-go test and the six-minute walk test (6MWT). The lower-limb amputees also completed the Prosthesis Evaluation Questionnaire. Results from 3DGA were summarised using the gait deviation index (GDI), which was subsequently regressed, using stepwise regression, against the other measures. RESULTS: Step length (SL), self-selected walking speed (SSWS) and the distance walked during the 6MWT (6MWD) were significantly correlated with the GDI. The 6MWD was the strongest single predictor of the GDI, followed by SL and SSWS. The predictive ability of the regression equations was improved following inclusion of self-report data related to mobility and prosthetic utility. CONCLUSION: This study offers a practicable alternative for quantifying kinematic deviation without the need to conduct a complete 3DGA.
Abstract: Fuzzy regression provides more approaches for dealing with imprecise or vague problems. Traditional fuzzy regression is established on triangular fuzzy numbers, which can be represented as trapezoidal numbers. The independent variables, the coefficients of the independent variables and the dependent variable in the regression model are all fuzzy numbers, and TW, the shape-preserving operator, is the only T-norm that induces a shape-preserving multiplication of LL-type fuzzy numbers. So, in this paper, we propose a new fuzzy regression model based on LL-type trapezoidal fuzzy numbers and TW. Firstly, we introduce the basic fuzzy set theory, the basic arithmetic propositions of the shape-preserving operator and a new distance measure between trapezoidal numbers. Secondly, we investigate the specific model algorithms for the FIFCFO model (fuzzy input-fuzzy coefficient-fuzzy output model) and introduce three goodness-of-fit criteria: Error Index, Similarity Measure and Distance Criterion. Thirdly, we use a design set and two reference sets to compare our proposed model with the reference models and assess their goodness with the above three criteria. Finally, we conclude that our proposed model is reasonable and has better prediction accuracy, but is less robust, compared with the reference models under the three goodness-of-fit criteria. So, we can expand our traditional fuzzy regression model to our proposed new model.
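The paper introduces its own distance measure between trapezoidal numbers; as a point of comparison, a common Euclidean-style distance on the four defining points of a trapezoidal fuzzy number (l, m1, m2, r) can be sketched as follows (an illustrative convention from the literature, not the paper's new measure):

```python
import math

def trapezoid_distance(a, b):
    """Euclidean-style distance between two trapezoidal fuzzy numbers,
    each given as its four defining points (l, m1, m2, r): the root mean
    square of the pointwise differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 4.0)

# Shifting a trapezoid by 1 in every defining point gives distance 1.
```

A crisp number is the degenerate case l = m1 = m2 = r, so the measure also covers comparisons between fuzzy and crisp quantities.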
Abstract: Binary logistic regression models are commonly used to assess the association between outcomes and covariates. Many covariates are inherently continuous and have a variety of distributions, including those that are heavily skewed to the left or right. Existing theoretical formulas, criteria, and simulation programs cannot accurately estimate the sample size and power for non-standard distributions. Therefore, we have developed a simulation program that uses Monte Carlo methods to estimate the exact power of a binary logistic regression model. This power calculation can be used for distributions of any shape and covariates of any type (continuous, ordinal, and nominal), and can account for nonlinear relationships between covariates and outcomes. For illustrative purposes, the simulation program is applied to real data obtained from a study on the influence of smoking on 90-day outcomes after acute atherothrombotic stroke. Our program is applicable to all effect sizes and, with some modifications, can also support related methods such as Bayesian inference.
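The Monte Carlo idea is: repeatedly simulate data under the assumed effect, fit the logistic model, and report the fraction of fits in which the slope's Wald test rejects the null. The minimal sketch below uses a single standard-normal covariate and a hand-rolled Newton-Raphson fit; the published program handles arbitrary covariate distributions and types, which this sketch does not:

```python
import math
import random

def fit_logistic_1d(x, y, iters=25):
    """Newton-Raphson fit of logit P(y = 1) = b0 + b1 * x.
    Returns the slope b1 and its standard error for a Wald test."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            t = max(-35.0, min(35.0, b0 + b1 * xi))  # guard exp overflow
            p = 1.0 / (1.0 + math.exp(-t))
            w = p * (1.0 - p)
            g0 += yi - p          # score for the intercept
            g1 += (yi - p) * xi   # score for the slope
            h00 += w              # observed information entries
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    var_b1 = h00 / (h00 * h11 - h01 * h01)  # [H^-1] entry for the slope
    return b1, math.sqrt(var_b1)

def mc_power(n, beta0, beta1, sims=200, seed=0):
    """Monte Carlo power: fraction of simulated datasets whose two-sided
    Wald test rejects H0: beta1 = 0 at the 5% level (|z| > 1.96)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(beta0 + beta1 * xi)))
             else 0 for xi in x]
        b1_hat, se = fit_logistic_1d(x, y)
        if abs(b1_hat / se) > 1.96:
            hits += 1
    return hits / sims
```

Because the data-generating step is just ordinary simulation code, swapping in a skewed or nominal covariate, or a nonlinear predictor, only changes the two lines that draw `x` and `y`.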
基金supported in the framework of Grain Up by funds of the Federal Ministry of Food,AgricultureConsumer Protection(BMELV)based on a decision of the Parliament of the Federal Republic of Germany via the Federal Office for Agriculture Food and(BLE)under the innovation support program
Abstract: Background: Basal ileal endogenous amino acid (AA) losses (IAAend) and standardized ileal digestibility (SID) values of cereal grains, such as barley, are apparently underestimated when determined according to the nitrogen (N)-free method. Regression analysis between the dietary apparent ileal digestible content (cAID) and total crude protein (CP) and AA content can be considered an alternative approach to obtain more accurate values for the IAAend and the SID of AA in cereal grains. Methods: Eight hulled barley genotypes were used, with barley being the only source of CP and AA in the assay diets. The diets each contained 95% as-fed of one of the eight barley genotypes, ranging in CP content between 109.1 and 123.8 g/kg dry matter (DM). Nine ileally T-cannulated barrows, with an average body weight (BW) of 30 ± 2 kg, were allotted to a row-column design comprising eight periods of 6 d each and nine pigs. On d 5 and the night of d 6 of every period, ileal digesta were collected for a total of 12 h. The IAAend and the SID were determined by linear regression analysis between cAID and total dietary CP and AA. Results: Linear relationships exist between cAID and total CP and AA (P < 0.001). The IAAend of CP, Lys, Met, Thr and Trp amounted to 35.34, 1.08, 0.25, 1.02 and 0.38 g/kg DM intake (DMI), respectively, which is greater than the average IAAend determined previously under N-free feeding conditions. The SID of CP, Lys, Met, Thr and Trp was 90, 79, 85, 79 and 86%, respectively, and was greater than tabulated values. Moreover, these SID values were greater than those reported in the literature based on correction of the apparent ileal digestibility (AID) of CP and AA for their IAAend values. In summary, the results of the present regression analysis indicate greater IAAend in barley-based diets than obtained by N-free feeding. Conclusions: For low-protein feed ingredients like barley, the regression method may be preferred over correcting AID values for IAAend determined under N-free feeding conditions, as the intercepts and slopes of the linear regression equations between cAID and total dietary CP and AA provide direct estimates of the IAAend and the SID of CP and AA in the presence of the assay feed ingredient.
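The regression method reads both quantities off one fitted line: regressing cAID on total dietary content gives the SID as the slope and the IAAend as the negated intercept. A minimal least-squares sketch (the numeric data below are made up for illustration, not the study's measurements):

```python
def regression_sid(total_content, caid):
    """Ordinary least squares for cAID = slope * content + intercept.

    Under the regression method, the slope estimates the standardized
    ileal digestibility (SID, as a fraction) and the negated intercept
    estimates the basal endogenous loss (IAAend, g/kg DMI)."""
    n = len(caid)
    mx = sum(total_content) / n
    my = sum(caid) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(total_content, caid))
    sxx = sum((x - mx) ** 2 for x in total_content)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, -intercept  # SID, IAAend

# Hypothetical CP data generated to satisfy cAID = 0.90 * CP - 35.34 exactly,
# so the fit recovers SID = 0.90 and IAAend = 35.34 g/kg DMI.
cp = [109.1, 112.0, 115.5, 118.2, 120.0, 123.8]
caid = [0.90 * x - 35.34 for x in cp]
sid, iaa_end = regression_sid(cp, caid)
```

This is why the method needs assay diets spanning a range of CP contents, as the eight barley genotypes above provide: the slope is only identifiable with spread in the regressor.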