Abstract: A maximum test is proposed in lieu of forcing a choice between the two-dependent-samples t-test and the Wilcoxon signed-ranks test. The maximum test, which requires a new table of critical values, maintains nominal α while guaranteeing the maximum power of the two constituent tests. The critical values, obtained via Monte Carlo methods, are uniformly smaller than the Bonferroni-Dunn adjustment, giving the maximum test superior power for location-shift treatment alternatives when data are sampled from non-normal distributions.
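The abstract does not spell out the statistic or the tabulated critical values, so the sketch below only illustrates the general idea under stated assumptions: compute the paired t statistic and a standardized Wilcoxon signed-rank statistic, take the larger absolute value, and compare it against a critical value approximated by Monte Carlo simulation under a zero-shift null. The function names (`max_statistic`, `mc_critical_value`) and the normal null used for the simulation are illustrative choices, not the paper's construction.

```python
# Hedged sketch of a "maximum test" combining the paired t-test and the
# Wilcoxon signed-ranks test; the critical value is approximated by Monte
# Carlo simulation under a no-shift null (the paper supplies exact tables).
import numpy as np
from scipy import stats

def max_statistic(x, y):
    """Larger of |paired t| and |standardized Wilcoxon signed-rank z|."""
    d = np.asarray(x, float) - np.asarray(y, float)
    t_stat = stats.ttest_rel(x, y).statistic
    n = len(d)
    ranks = stats.rankdata(np.abs(d))          # rank the absolute differences
    w_plus = ranks[d > 0].sum()                # sum of positive ranks
    mu_w = n * (n + 1) / 4.0
    sigma_w = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z_w = (w_plus - mu_w) / sigma_w            # normal standardization of W+
    return max(abs(t_stat), abs(z_w))

def mc_critical_value(n, alpha=0.05, reps=20000, rng=None):
    """Monte Carlo critical value under H0 (illustrative normal null)."""
    rng = np.random.default_rng(rng)
    null_stats = np.empty(reps)
    for i in range(reps):
        d = rng.standard_normal(n)             # differences with zero shift
        null_stats[i] = max_statistic(d, np.zeros(n))
    return np.quantile(null_stats, 1 - alpha)

# usage: paired data with a 0.5 location shift
rng = np.random.default_rng(0)
x = rng.standard_normal(25) + 0.5
y = rng.standard_normal(25)
crit = mc_critical_value(len(x), alpha=0.05, rng=1)
print(f"max statistic = {max_statistic(x, y):.3f}, critical value = {crit:.3f}")
```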
Abstract: We propose a new nonparametric test, based on the rank differences between paired samples, for testing the equality of the marginal distributions of a bivariate distribution. We also consider a modification of this test based on the test proposed by Baumgartner, Weiß, and Schindler (1998). An extensive numerical power comparison of various parametric and nonparametric tests was conducted under a wide range of bivariate distributions for small sample sizes. The two new nonparametric tests have power comparable to the paired t-test for data simulated from bivariate normal distributions, and are generally more powerful than the paired t-test and other commonly used nonparametric tests under several important bivariate distributions.
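The exact statistic of the proposed test is not given in the abstract. As one plausible reading, the sketch below ranks the 2n paired observations jointly, sums the within-pair rank differences, and obtains a p-value by randomly swapping the members of each pair, which is a valid permutation scheme when the marginals are equal and the pairs are exchangeable. The function names and this particular statistic are assumptions for illustration only, not the authors' test.

```python
# Illustrative permutation test on within-pair rank differences (not the
# paper's exact statistic): rank all 2n values jointly, sum the per-pair
# rank differences, and permute by swapping members within each pair.
import numpy as np
from scipy import stats

def rank_difference_stat(x, y):
    """Sum of rank differences after jointly ranking the 2n paired values."""
    pooled = np.concatenate([x, y])
    ranks = stats.rankdata(pooled)
    n = len(x)
    return np.sum(ranks[:n] - ranks[n:])

def paired_rank_permutation_test(x, y, reps=10000, rng=None):
    """Two-sided permutation p-value obtained by swapping within pairs."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x, float); y = np.asarray(y, float)
    observed = abs(rank_difference_stat(x, y))
    n = len(x)
    count = 0
    for _ in range(reps):
        swap = rng.random(n) < 0.5             # swap each pair with prob 1/2
        xp = np.where(swap, y, x)
        yp = np.where(swap, x, y)
        if abs(rank_difference_stat(xp, yp)) >= observed:
            count += 1
    return (count + 1) / (reps + 1)            # add-one permutation p-value

# usage: paired samples whose marginals differ in location
rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, 30)
y = rng.normal(0.0, 1.0, 30)
print(f"p-value = {paired_rank_permutation_test(x, y, rng=1):.4f}")
```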
Abstract: Small-world neural networks have attracted wide attention in recent years for their fast convergence and strong fault tolerance. However, during network construction, random rewiring may discard important information and thereby degrade network accuracy. To address this problem, a feedforward small-world neural network based on synaptic consolidation (FSWNN-SC) is proposed on the basis of the Watts-Strogatz (WS) small-world neural network. First, a regular feedforward neural network is pre-trained with network regularization; guided by the synaptic consolidation mechanism, unimportant weight connections are disconnected while important ones are retained. Second, a rewiring rule is designed to construct the small-world network, sparsifying the network while preserving its small-world property, and the network is then trained with gradient descent. Finally, model performance is evaluated on four UCI benchmark datasets and two real-world datasets, and the Wilcoxon signed-rank test is used to check for significant differences against the comparison models. Experimental results show that the proposed FSWNN-SC obtains a compact network structure while achieving significantly better accuracy than the regular feedforward neural network and other WS-type small-world neural networks.
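A minimal sketch of the consolidate-then-rewire step described above, assuming weight magnitude as the importance measure and a WS-style rewiring probability; the paper's actual consolidation criterion and rewiring rules may differ, and the function name `consolidate_and_rewire` is hypothetical.

```python
# Sketch: keep the most important (largest-magnitude) pretrained weights,
# drop the rest, then rewire a fraction of the kept connections to random
# free positions to create small-world-like long-range shortcuts.
import numpy as np

def consolidate_and_rewire(W, keep_ratio=0.5, p_rewire=0.1, rng=None):
    rng = np.random.default_rng(rng)
    W = np.asarray(W, float)
    flat = np.abs(W).ravel()
    k = int(keep_ratio * flat.size)
    threshold = np.partition(flat, -k)[-k]      # magnitude cutoff for the top k
    mask = np.abs(W) >= threshold               # "consolidated" connections

    # WS-style rewiring: move a fraction of kept connections to random
    # currently-unconnected positions.
    kept_idx = np.argwhere(mask)
    free_idx = np.argwhere(~mask)
    n_rewire = int(p_rewire * len(kept_idx))
    chosen = rng.choice(len(kept_idx), size=n_rewire, replace=False)
    targets = rng.choice(len(free_idx), size=n_rewire, replace=False)
    for c, t in zip(chosen, targets):
        i, j = kept_idx[c]
        a, b = free_idx[t]
        mask[i, j] = False
        mask[a, b] = True                       # new shortcut connection
    # Rewired positions reuse the pretrained value stored there; a real
    # implementation might reinitialize these weights before fine-tuning.
    return W * mask, mask

# usage: sparsify a pretrained 64x32 weight matrix
W = np.random.default_rng(0).normal(size=(64, 32))
W_sparse, mask = consolidate_and_rewire(W, keep_ratio=0.3, p_rewire=0.1, rng=1)
print(f"kept {mask.sum()} of {mask.size} connections")
```

After rewiring, the sparse network would be fine-tuned with gradient descent, with the mask held fixed so that pruned connections stay disconnected.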
Abstract: Grey Wolf Optimization (GWO) is a recent swarm-intelligence optimization algorithm. Like other intelligent optimization algorithms, it still suffers from slow convergence and a tendency to become trapped in local minima. To address this problem, an improved algorithm with an adaptive search strategy is proposed. To raise convergence speed and optimization accuracy, individual positions are controlled according to their fitness values and a best-guided search equation is introduced; to increase the population diversity of GWO, the improved algorithm uses differences of position vectors to jump out of local optima at random. Finally, simulation experiments on 10 standard benchmark functions were conducted and compared with four other algorithms; both the statistical results and the Wilcoxon signed-rank test show that the proposed improvement has clear advantages in convergence speed and search accuracy.
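The sketch below shows the standard GWO update together with stand-ins for the two modifications the abstract mentions: a best-guided term scaled by fitness and an occasional jump built from the difference of two random positions. The specific forms of these terms, the jump probability, and the function name `improved_gwo` are assumptions for demonstration only, not the paper's equations.

```python
# Illustrative GWO with an assumed best-guided term and a random
# difference-vector jump; greedy selection keeps only improving moves.
import numpy as np

def improved_gwo(f, dim, n_wolves=30, iters=200, lb=-10.0, ub=10.0, rng=None):
    rng = np.random.default_rng(rng)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    fit = np.apply_along_axis(f, 1, X)
    for t in range(iters):
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / iters)                     # linearly decreasing coefficient
        for i in range(n_wolves):
            # standard GWO encircling update around the three leaders
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                X_new += leader - A * np.abs(C * leader - X[i])
            X_new /= 3.0
            # assumed best-guided term: pull toward alpha, scaled by relative fitness
            X_new += rng.random(dim) * (alpha - X[i]) * (fit[i] / (fit[order[0]] + 1e-12))
            # occasional jump using the difference of two random positions
            if rng.random() < 0.1:
                r1, r2 = rng.choice(n_wolves, 2, replace=False)
                X_new = X[i] + rng.random(dim) * (X[r1] - X[r2])
            X_new = np.clip(X_new, lb, ub)
            f_new = f(X_new)
            if f_new < fit[i]:                        # greedy selection
                X[i], fit[i] = X_new, f_new
    best = np.argmin(fit)
    return X[best], fit[best]

# usage: minimize the sphere function in 10 dimensions
x_best, f_best = improved_gwo(lambda x: np.sum(x**2), dim=10, rng=0)
print(f"best value: {f_best:.3e}")
```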