Abstract
Neural networks are an important area of applied artificial intelligence research, offering strong fault tolerance, excellent adaptivity, and powerful nonlinear mapping capability. By incorporating time-delay units, the time-delay neural network (TDNN) gains a memory function and is better suited to processing sequential information, which gives it considerable application value. However, the time-delay units also make the classification process computationally expensive. To address this problem, a fast algorithm suitable for hardware implementation is proposed. It adopts a sequential processing flow and, through appropriate structural decomposition and storage of intermediate variables, minimizes repeated computation during implementation and effectively reduces the computational load. Simulation results show that, within a certain range of dimensions, the proposed fast algorithm outperforms the batch-processing implementation in both computational load and memory usage.
The artificial neural network (ANN) is one of the important research categories of artificial intelligence. It exhibits powerful error tolerance, excellent adaptivity, and outstanding nonlinear mapping capability. The time-delay neural network (TDNN) gains a memorizing ability by integrating time-delay units into the ANN, is better suited to sequential information processing, and is therefore of greater practical value. A fast TDNN algorithm suitable for hardware implementation is proposed to tackle the tremendous computational complexity of the TDNN. The algorithm adopts a sequential process: through appropriate structural decomposition and storage of intermediate variables, it reduces repeated computation to the utmost degree and thus lowers the computational complexity effectively. Simulation results show that, within a certain range of dimensions, the algorithm is advantageous over the batch-processing method in both computational load and memory space.
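As a rough illustration of the mechanism summarized in the abstract (a minimal sketch, not the paper's actual algorithm), the following Python fragment contrasts a batch-style evaluation of a hypothetical two-layer TDNN, which rebuilds the hidden frames of every overlapping output window, with a sequential evaluation that computes each hidden frame once, stores it as an intermediate variable, and reuses it for all later windows. All names and shapes (tdnn_batch, tdnn_sequential, W1, W2, D1, D2) are assumptions made for this sketch.

import numpy as np

# Hypothetical two-layer TDNN: each hidden frame h_t depends on input
# frames x_{t-D1}..x_t, and each output y_t depends on hidden frames
# h_{t-D2}..h_t.  Because consecutive output windows overlap, a batch-style
# evaluation recomputes the same hidden frames many times, while a
# sequential evaluation computes each hidden frame once and caches it.

def hidden_frame(x_window, W1, b1):
    # One hidden frame from a window of (D1 + 1) stacked input frames.
    return np.tanh(W1 @ x_window.ravel() + b1)

def output_frame(h_window, W2, b2):
    # One output frame from a window of (D2 + 1) stacked hidden frames.
    return W2 @ h_window.ravel() + b2

def tdnn_batch(x, W1, b1, W2, b2, D1, D2):
    # Batch style: every output rebuilds all hidden frames in its window.
    T = x.shape[0]
    outputs = []
    for t in range(D1 + D2, T):
        h_window = np.stack([hidden_frame(x[s - D1:s + 1], W1, b1)
                             for s in range(t - D2, t + 1)])
        outputs.append(output_frame(h_window, W2, b2))
    return np.stack(outputs)

def tdnn_sequential(x, W1, b1, W2, b2, D1, D2):
    # Sequential style: each hidden frame is computed once and cached.
    T = x.shape[0]
    h_cache = []   # stored intermediate variables
    outputs = []
    for t in range(D1, T):
        h_cache.append(hidden_frame(x[t - D1:t + 1], W1, b1))
        if len(h_cache) >= D2 + 1:
            h_window = np.stack(h_cache[-(D2 + 1):])
            outputs.append(output_frame(h_window, W2, b2))
    return np.stack(outputs)

Given weights of consistent shapes, the two functions return identical outputs, but the sequential version evaluates each hidden frame once instead of up to D2 + 1 times; this is the kind of saving the abstract attributes to storing intermediate variables during sequential processing.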
Source
《计算机仿真》 (Computer Simulation), 2009, No. 12, pp. 133-135, 140 (4 pages)
Indexed in CSCD and the Peking University Core Journals list (北大核心)
Funding
Supported by the National Natural Science Foundation of China (Grant No. 60572138)