Most edge-detection methods rely on calculating gradient derivatives of the potential field, a process that is easily affected by noise and is therefore unstable. We propose a new edge-detection method, the correlation coefficient of multidirectional standard deviations (CCMS), that is based solely on statistics. First, we demonstrate the reliability of the proposed method using a single model and then a combination of models. The method is evaluated by comparing its results with those of other edge-detection methods. CCMS offers outstanding edge recognition, retains the sharpness of details, and has low sensitivity to noise. We also applied CCMS to Bouguer anomaly data from a potash deposit in Laos. Its applicability is shown by comparing the inferred tectonic framework with that inferred from remote sensing (RS) data.
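The exact CCMS construction is given in the paper; the sketch below illustrates only its "multidirectional standard deviation" ingredient on a gridded field. The window half-width and the set of four directions are assumptions, and the correlation-coefficient step is not reproduced.

```python
import numpy as np

def directional_stds(field, half=2):
    """Standard deviations of field values sampled along four line
    directions through each grid node (E-W, N-S, and the two diagonals).
    Illustrative only: window half-width and direction set are assumed,
    not taken from the paper."""
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    N, M = field.shape
    out = np.zeros((4, N, M))
    for d, (di, dj) in enumerate(dirs):
        for i in range(N):
            for j in range(M):
                samples = [field[i + k * di, j + k * dj]
                           for k in range(-half, half + 1)
                           if 0 <= i + k * di < N and 0 <= j + k * dj < M]
                out[d, i, j] = np.std(samples)
    return out
```

A sharp geological boundary shows up as a direction-dependent pattern: the std along the profile crossing the edge is large, while the std along the edge strike stays near zero.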
AIM: To evaluate whether glaucomatous visual field defects, particularly the pattern standard deviation (PSD) of the Humphrey visual field, are associated with visual evoked potential (VEP) parameters in patients with primary open angle glaucoma (POAG). METHODS: Visual fields by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potentials (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for the VEP recordings used the transient pattern reversal method, in which a full-field black-and-white checkerboard pattern was generated and displayed on a VEP monitor (colour, 14') by an electronic pattern regenerator built into an evoked potential recorder (RMS EMG EP MARK II). RESULTS: There was a highly significant (P<0.001) negative correlation of P100 amplitude, and statistically significant (P<0.05) positive correlations of N70, P100, and N155 latencies, with the PSD of the Humphrey visual field in POAG subjects across age groups, as evaluated by Student's t-test. CONCLUSION: Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values; conversely, as PSD increased, the magnitude of VEP excursions diminished.
X-ray fluorescence (XRF) analysis depends on the particle size produced by milling a material: milling ensures a uniform, fine-grained powder, and the finer and more uniform the particle size, the better the result and the easier the material quality control. To obtain a uniform, finer powder, a comparative analysis was conducted with different grinding aids, and the pressed pellet method was used to obtain the analysis results. Pressed pellets of a cement raw meal sample milled with different grinding aids (graphite, aspirin, and lithium borate) were subjected to XRF. Graphite produced better particle size uniformity, with a corresponding standard deviation that made quality control of the raw meal easier and better than with aspirin or lithium borate.
The current work outlines the geometrical framework of a well-known statistical problem: the explicit expression of the distribution of the standard deviation of the arithmetic mean. After a short exposition, three steps are performed: 1) formulation of the arithmetic mean standard deviation as a function of the errors, which are themselves statistically independent; 2) formulation of the arithmetic mean standard deviation distribution as a function of the errors; and 3) formulation of the arithmetic mean standard deviation distribution as a function of the arithmetic mean standard deviation and the arithmetic mean rms error. The integration domain can be expressed in canonical form after a change of reference frame in n-space, and is recognized as an infinitely thin n-cylindrical corona whose symmetry axis coincides with a coordinate axis. Finally, the solution is presented and a number of (well-known) related parameters are inferred for completeness.
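For orientation, when the errors are Gaussian, mutually independent, and share a common rms error σ, the distribution in question reduces to the classical chi-type law for the sample standard deviation. With the biased sample variance S² = (1/n)Σᵢ(xᵢ − x̄)², equivalently nS²/σ² ~ χ²ₙ₋₁, the density reads:

```latex
f_S(s) \;=\; \frac{2}{\Gamma\!\left(\tfrac{n-1}{2}\right)}
\left(\frac{n}{2\sigma^{2}}\right)^{\frac{n-1}{2}}
s^{\,n-2}\,
\exp\!\left(-\frac{n s^{2}}{2\sigma^{2}}\right),
\qquad s \ge 0 .
```

This is a standard textbook result stated here only as a reference point; the paper's contribution is the geometric derivation of it in n-space.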
The size distribution of the broken top coal blocks is an important factor affecting the recovery ratio and the efficiency of drawing top coal in a longwall top coal caving (LTCC) mining panel. The standard deviation of the top coal block size (dt) is one of the main parameters reflecting this size distribution. To find the effect of dt on the caving mechanism, this study simulates drawing experiments with 9 different dt values using the discrete element software PFC. The dt values fall into two stages: a uniform distribution stage (UDS), with dt less than 0.1 (Schemes 1–5), and a nonuniform distribution stage (NDS), with dt more than 0.1 (Schemes 6–9). The study investigates the variation of recovery ratio, drawing body shape, top coal boundary, and inter-particle contact force in the two stages. The results show that, with increasing dt, the recovery ratio of the panel first increases and then decreases in UDS; it is largest in Scheme 3, which mainly increases the drawing volume at the side of the starting drawing end. In NDS, however, the recovery ratio first decreases and then increases quickly; it is largest in Scheme 9, where the drawing volume at the side of the finishing drawing end is relatively higher. In UDS the dominant top coal size is medium, while in NDS it varies from medium to small and then to large, with distinct differences in the shape and volume of the drawing body. When the dominant top coal size is medium or small, the cross-section width of the initial top coal boundary at each height is relatively small. Conversely, when the top coal size is large, the initial boundary has a larger opening range and the rotating angle of the lower boundary is relatively small in the normal drawing stage, which is conducive to the development of the drawing body and reduces residual top coal; the maximum particle velocity and particle movement angle are also larger. This study lays a foundation for predicting the recovery ratio, and suggests that uniform top coal is more manageable and yields a larger recovery ratio.
Regularization is an effective method for solving ill-posed equations. In this paper, the unbiased estimation formula for the unit weight standard deviation in the regularization solution is derived, and the formula is verified on a numerical case of 1000 samples using a typical ill-posed equation, the Fredholm integral equation of the first kind.
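The paper's exact unbiased formula is not reproduced in the abstract. As a hedged illustration of the idea, the sketch below estimates the unit weight standard deviation from Tikhonov residuals using the effective degrees of freedom n − tr(H), one common residual-based estimator (the function name and this particular estimator are assumptions, not the paper's derivation).

```python
import numpy as np

def tikhonov_sigma(A, y, alpha):
    """Solve min ||A x - y||^2 + alpha ||x||^2 and estimate the
    unit-weight standard deviation from the residuals, using the
    effective degrees of freedom n - tr(H), where H is the hat
    (influence) matrix of the regularized solution."""
    n, m = A.shape
    reg = A.T @ A + alpha * np.eye(m)
    x = np.linalg.solve(reg, A.T @ y)
    H = A @ np.linalg.solve(reg, A.T)      # hat matrix, shape (n, n)
    r = y - A @ x                          # residual vector
    dof = n - np.trace(H)                  # effective degrees of freedom
    sigma = np.sqrt(r @ r / dof)
    return x, sigma
```

For a well-conditioned system with noiseless data and vanishing alpha, the solution recovers the true parameters and sigma tends to zero, as expected.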
The current work extends previous results concerning the explicit expression of the arithmetic mean standard deviation distribution to the general case of the weighted mean standard deviation distribution. To this end, the integration domain is expressed in canonical form after a change of reference frame in n-space, and is recognized as an infinitely thin n-cylindrical corona whose axis coincides with a coordinate axis and whose orthogonal section is an infinitely thin, homothetic (n-1)-elliptical corona. The semiaxes are formulated in two different ways, namely in terms of (1) eigenvalues, via the eigenvalue equation, and (2) leading principal minors of the matrix of a quadratic form, via the Jacobi formulae. The distribution and related parameters have the same formal expression as their counterparts in the special case where the weighted mean coincides with the arithmetic mean. The reduction of some results to ordinary geometry is also considered.
AIM: To compare four methods for approximating the mean and standard deviation (SD) when only medians and interquartile ranges are provided. METHODS: We performed simulated meta-analyses on six datasets of 15, 30, 50, 100, 500, and 1000 trials, respectively. Subjects were iteratively generated from one of seven scenarios: five theoretical continuous distributions [Normal, Normal (0, 1), Gamma, Exponential, and Bimodal] and two real-life distributions, of intensive care unit stay and hospital stay. For each simulation, we calculated the pooled estimates assembling the study-specific medians and SD approximations: conservative SD, less conservative SD, mean SD, or interquartile range. We provided a graphical evaluation of the standardized differences. To show which imputation method produced the best estimate, we ranked those differences and calculated the rate at which each estimate appeared as the best, second-best, third-best, or fourth-best. RESULTS: The best pooled estimate for the overall mean and SD was provided by the median and interquartile range (mean standardized estimates: 4.5 ± 2.2, P = 0.14) or by the median and the conservative SD estimate (mean standardized estimates: 4.5 ± 3.5, P = 0.13). The less conservative approximation of SD was the worst method, differing significantly from the reference method at the 90% confidence level. The method that ranked first most frequently was the interquartile range method (23/42 = 55%), particularly when data were generated from the Standard Normal, Gamma, and Exponential distributions. The second best was the conservative SD method (15/42 = 36%), particularly for data from the bimodal distribution and for the intensive care unit stay variable. CONCLUSION: Meta-analytic estimates are not significantly affected by approximating the missing values of mean and SD with the corresponding median and interquartile range.
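The simplest normal-theory version of the median/IQR imputation endorsed by this study can be sketched as follows (this is the generic textbook conversion, not necessarily the exact "conservative" variant compared in the paper).

```python
import numpy as np

def approx_mean_sd_from_median_iqr(median, q1, q3):
    """Approximate mean and SD from the median and quartiles, assuming
    the data are roughly normal: mean ~ median, and SD ~ IQR / 1.35,
    since Q3 - Q1 = 2 * 0.6745 * sigma for a normal distribution."""
    mean = median
    sd = (q3 - q1) / 1.35
    return mean, sd
```

For markedly skewed outcomes (e.g. length of stay) the normal assumption is the weak point, which is exactly why the study benchmarks several competing approximations.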
Two additional features are particularly useful in pixelwise satellite data segmentation using neural networks: one results from local window averaging around each pixel (MWA), and another uses a standard deviation estimator (MWSD) instead of the average. While the former's complexity has already been reduced to a satisfying minimum, the latter's has not. This article proposes a new algorithm that can substitute a naive MWSD, reducing the computational complexity from O(N²n²) to O(N²n), where N is the side of a square input array and n is the moving window's side length. The Numba Python compiler was used to make Python a competitive high-performance computing language in our optimizations. Our results show efficiency benchmarks.
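To make the complexity discussion concrete, here is one standard way to avoid the naive O(N²n²) recomputation: summed-area tables of x and x² give each window's variance in O(1) per pixel (O(N²) overall). This is an illustrative alternative, not the paper's O(N²n) algorithm, and the edge padding mode is an assumption.

```python
import numpy as np

def moving_window_sd(img, n):
    """Moving-window standard deviation via summed-area (integral)
    images of x and x**2; cost per pixel is O(1), independent of the
    window side n (assumes odd n; edges handled by edge padding)."""
    pad = n // 2
    x = np.pad(img.astype(float), pad, mode="edge")
    # integral images with a leading zero row/column
    S1 = np.zeros((x.shape[0] + 1, x.shape[1] + 1))
    S2 = np.zeros_like(S1)
    S1[1:, 1:] = x.cumsum(0).cumsum(1)
    S2[1:, 1:] = (x * x).cumsum(0).cumsum(1)
    N, M = img.shape
    cnt = n * n
    def box(S):  # sum over each n x n window
        return S[n:n+N, n:n+M] - S[:N, n:n+M] - S[n:n+N, :M] + S[:N, :M]
    mean = box(S1) / cnt
    var = box(S2) / cnt - mean**2
    return np.sqrt(np.maximum(var, 0.0))  # clip tiny negative round-off
```

The x² accumulation can lose precision on large float32 rasters, one practical reason an O(N²n) sliding update such as the paper's can be preferable to integral images.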
An adaptive contrast enhancement (ACE) algorithm is presented in this paper, in which the contrast gain is determined by mapping the local standard deviation (LSD) histogram of an image to a Gaussian distribution function. The contrast gain is nonlinearly adjusted to avoid noise over-enhancement and ringing artifacts while improving the detail contrast with less computational burden. The effectiveness of our method is demonstrated with radiological images and compared with other algorithms.
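A generic ACE skeleton helps place the abstract's contribution: classic ACE amplifies the deviation of each pixel from its local mean by a gain tied to the local standard deviation. The sketch below uses a simple clipped inverse-LSD gain; the paper's specific Gaussian-histogram mapping of the LSD is not reproduced, and the window size and gain cap are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_contrast_enhance(img, n=7, max_gain=3.0, eps=1e-6):
    """Generic ACE skeleton: push each pixel away from its local mean
    by a gain inversely proportional to the local standard deviation,
    clipped at max_gain so flat (noisy) regions are not over-amplified."""
    x = img.astype(float)
    mean = uniform_filter(x, n)                      # local mean
    sq = uniform_filter(x * x, n)
    lsd = np.sqrt(np.maximum(sq - mean**2, 0.0))     # local std dev
    gain = np.minimum(x.std() / (lsd + eps), max_gain)
    return mean + gain * (x - mean)
```

The clipping is exactly where methods differ: the paper replaces this hard cap with a nonlinear gain curve derived from mapping the LSD histogram to a Gaussian.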
Perceptual image quality assessment (IQA) is one of the most indispensable yet challenging problems in image processing and computer vision. Automatic, efficient approaches are needed that predict perceptual image quality consistently with human subjective evaluation. To further improve prediction accuracy for distortions of color images, this paper proposes a novel, effective, and efficient IQA model called perceptual gradient similarity deviation (PGSD). Building on gradient magnitude similarity, a gradient direction selection method is proposed to automatically determine the pixel-wise perceptual gradient. Both the luminance and chrominance channels are taken into account to characterize quality degradation caused by intensity and color distortions. Finally, a multi-scale strategy is used, pooled with different weights, to incorporate image details at different resolutions. Experimental results on the LIVE, CSIQ, and TID2013 databases demonstrate the superior performance of the proposed algorithm.
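PGSD builds on the gradient magnitude similarity deviation idea, which can be sketched compactly (this is the grayscale single-scale baseline, not PGSD itself; the Prewitt kernels and the constant c are conventional choices, assumed here).

```python
import numpy as np
from scipy.ndimage import convolve

def gmsd(ref, dist, c=170.0):
    """Sketch of the gradient magnitude similarity deviation baseline:
    the quality score is the standard deviation of the pixel-wise
    gradient magnitude similarity map (0 for identical images; larger
    means more perceptual distortion)."""
    px = np.array([[1, 0, -1]] * 3) / 3.0   # Prewitt kernel, x direction
    py = px.T                               # Prewitt kernel, y direction
    def grad_mag(x):
        return np.hypot(convolve(x, px), convolve(x, py))
    g1 = grad_mag(ref.astype(float))
    g2 = grad_mag(dist.astype(float))
    gms = (2 * g1 * g2 + c) / (g1**2 + g2**2 + c)  # similarity map in (0, 1]
    return gms.std()
```

PGSD then departs from this baseline by selecting gradient directions per pixel, adding chrominance channels, and pooling over multiple scales.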
Clustering by fast search and find of density peaks (DPC) is a density-based clustering algorithm that can discover clusters of arbitrary shape and dimensionality, and is a landmark clustering algorithm. However, DPC's definition of local sample density is unsuited to discovering dense and sparse clusters in the same dataset; in addition, its one-step assignment strategy means that once one sample is assigned incorrectly, further samples are misassigned in turn, producing a "domino effect". To address these problems, a new definition of local sample density based on a local standard deviation exponent is proposed, overcoming the density-definition defect of DPC, and a two-step assignment strategy replaces DPC's one-step strategy, overcoming the domino effect; the result is the ESDTS-DPC algorithm. Experimental comparisons with DPC, its improved variants KNN-DPC, FKNN-DPC, and DPC-CE, and the classical density-based clustering algorithm DBSCAN show that the proposed ESDTS-DPC achieves better clustering accuracy.
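For readers unfamiliar with DPC, the two quantities it computes for every sample can be sketched as follows (this is the standard DPC with a Gaussian-kernel density, not the ESDTS-DPC density or its two-step assignment).

```python
import numpy as np

def dpc_rho_delta(X, dc):
    """Standard DPC quantities: local density rho_i (Gaussian kernel
    with cutoff distance dc) and delta_i, the distance to the nearest
    sample of higher density. Cluster centers are the samples with both
    large rho and large delta."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    rho = np.exp(-(D / dc) ** 2).sum(1) - 1.0   # exclude self-term
    order = np.argsort(-rho)                    # descending density
    delta = np.full(len(X), D.max())            # convention for the peak
    for rank, i in enumerate(order):
        if rank:
            delta[i] = D[i, order[:rank]].min()
    return rho, delta
```

ESDTS-DPC replaces the rho definition above with a local standard deviation exponent so that sparse clusters can still produce prominent peaks.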
In a line commutated converter based high voltage direct current (LCC-HVDC) system, subsequent commutation failures seriously threaten the safe and stable operation of the hybrid AC/DC grid. Addressing the problem that commutation failure easily recurs during the stage of fault recovery in which the current deviation control is active, this paper first analyzes theoretically the relationship between the current deviation control parameters and commutation failure. It finds that, for the system to avoid commutation failure in this stage, the inverter-side LCC DC voltage and the AC commutation voltage must satisfy a certain constraint, and that this constraint is directly affected by the current deviation control parameters. Based on the theoretical analysis, a tuning method for the current deviation control parameters is then proposed that relaxes the requirements on the speed and extent of DC voltage recovery during fault recovery, making it easier for the system to satisfy the stable-operation constraint between the DC voltage and the AC commutation voltage and thereby reducing the probability of subsequent commutation failures. Finally, the correctness of the theoretical analysis and the effectiveness of the tuning method are verified on the CIGRE benchmark model in the PSCAD/EMTDC simulation platform.
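The coupling between the inverter DC voltage and the AC commutation voltage that the constraint above rests on can be seen in the textbook quasi-steady-state inverter equation for a six-pulse bridge (U_L is the commutation voltage, X_c the commutation reactance, I_d the DC current, γ the extinction angle); the paper's specific constraint involving the current deviation control parameters is not reproduced here:

```latex
U_d \;=\; \frac{3\sqrt{2}}{\pi}\, U_L \cos\gamma \;-\; \frac{3 X_c}{\pi}\, I_d ,
\qquad \gamma \;\ge\; \gamma_{\min} .
```

Commutation failure occurs when γ falls below the minimum extinction angle γ_min required for valve deionization, so any recovery trajectory of U_d and U_L that forces γ below γ_min risks a subsequent failure.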
Funding: supported by the National Hi-Tech Research and Development Program of China (863 Program) (No. 2006AA06Z107) and the National Natural Science Foundation of China (No. 40930314).
Funding: supported by the National Key R&D Plan of China (Grant No. 2018YFC0604501), the Natural Science Foundation of China (Grant Nos. 51934008, 51674264, 51904305), and the Research Fund of the State Key Laboratory of Coal Resources and Safe Mining, CUMT, China (Grant No. SKLCRSM19KF023).
Funding: the National Natural Science Foundation of China (No. 39630110) and the National Key Technologies R&D Programme under Contract 96-920-12-01.