Abstract
To improve the performance of genetic algorithms on large-scale, multivariable problems, an implementation of a fine-grained parallel genetic algorithm (PGA) accelerated by the graphics processing unit (GPU) is proposed. The solution process of the parallel genetic algorithm is converted into a GPU texture-rendering process, so that the genetic algorithm executes with GPU acceleration. Experimental results show that the algorithm effectively suppresses premature convergence, enlarges the population size of the parallel genetic algorithm, speeds up its execution, and offers ordinary users a feasible way to study parallel genetic algorithms.
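The abstract's central idea is the fine-grained mapping: each individual of the population is handled by its own GPU processing element, with the GA stages expressed as texture-rendering passes over the population data. The paper's shader-based pipeline is not given in the abstract, so the sketch below uses CUDA as a stand-in to illustrate only that one-thread-per-individual mapping for the fitness stage; the population size, chromosome length, and sphere test function are assumptions, not details from the paper.

```cuda
// Hypothetical sketch of fine-grained parallel fitness evaluation on the GPU.
// The paper renders GA steps as texture passes; CUDA stands in for that
// shader pipeline here, with one thread per individual. All names, sizes,
// and the test function are illustrative assumptions.
#include <cstdio>
#include <cuda_runtime.h>

#define POP_SIZE 4096   // assumed population size: one individual per thread
#define GENES    32     // assumed chromosome length (genes per individual)

// Example fitness: sphere function, minimized at the origin (assumed benchmark).
__global__ void evaluateFitness(const float* population, float* fitness)
{
    int ind = blockIdx.x * blockDim.x + threadIdx.x;
    if (ind >= POP_SIZE) return;

    float sum = 0.0f;
    for (int g = 0; g < GENES; ++g) {
        float x = population[ind * GENES + g];
        sum += x * x;                 // accumulate over this individual's genes
    }
    fitness[ind] = sum;               // lower is better for this test function
}

int main()
{
    float *d_pop, *d_fit;
    cudaMalloc(&d_pop, POP_SIZE * GENES * sizeof(float));
    cudaMalloc(&d_fit, POP_SIZE * sizeof(float));
    cudaMemset(d_pop, 0, POP_SIZE * GENES * sizeof(float)); // placeholder init

    // A full GA generation would also launch selection, crossover, and
    // mutation kernels; only the fitness pass is sketched here.
    evaluateFitness<<<(POP_SIZE + 255) / 256, 256>>>(d_pop, d_fit);
    cudaDeviceSynchronize();

    float h_fit0 = 0.0f;
    cudaMemcpy(&h_fit0, d_fit, sizeof(float), cudaMemcpyDeviceToHost);
    printf("fitness of individual 0: %f\n", h_fit0);

    cudaFree(d_pop);
    cudaFree(d_fit);
    return 0;
}
```

Because every individual is evaluated independently, this mapping scales with the number of GPU processing elements, which is what allows the larger population sizes and speedups reported in the abstract.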
Source
Control and Decision (《控制与决策》)
EI
CSCD
Peking University Core Journals (北大核心)
2008, No. 6, pp. 697-700, 704 (5 pages)
Keywords
Genetic algorithm
Parallel processing
Graphics processing unit (GPU)
Fine-grained