
A New Discriminative Training Method for Language Modeling (Cited by: 2)
Abstract: Existing discriminative training methods for language modeling, such as boosting [1] and the perceptron, require a differentiable loss function so that training remains efficient. Such loss functions simplify the training procedure, but they cannot express the most direct optimization objective; a loss function defined directly from that objective typically takes the form of a non-differentiable step function. To resolve this, the paper proposes a new discriminative training method, Greedy Approximation Processing (GAP). GAP is highly general: it places only one constraint on the loss function, namely that it be a step function. Since minimizing such a function is NP-hard and its gradient cannot be computed, gradient descent is unusable; GAP instead takes a greedy approach. At each iteration it selects from the feature set the feature that most reduces the loss under the current model parameters, and fixes that feature's weight by exhaustive search over the loss function. Because searching this unsmoothed function is prohibitively expensive, the paper further proposes a faster variant, Fast Greedy Approximation Processing (FGAP), which assumes the features are independent and fixes the order in which they are updated according to each feature's gain, i.e., the reduction in the loss obtained by adding that feature. FGAP trains substantially faster than GAP while achieving almost the same performance. To demonstrate FGAP's effectiveness, the authors apply it to training language models for a Japanese input method, using kana-kanji conversion errors as the loss function, and compare the models on the task of Japanese kana-kanji conversion. Experimental results show that the language model trained with FGAP outperforms the model trained with boosting, and reduces the relative error rate by 15%-19% compared with the baseline n-gram model.
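The abstract describes GAP and FGAP only in outline. The Python sketch below is a minimal illustration of that outline under stated assumptions, not the paper's implementation: the names Candidate, score, step_loss, gap_train, fgap_train, and WEIGHT_GRID are invented for this example, the candidate-list data layout and the weight grid are assumptions, and the paper's actual gain computation and stopping criteria may differ.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    features: dict  # feature name -> feature value (layout assumed for the sketch)
    correct: bool   # True if this candidate is the reference kana-kanji conversion

def score(weights, cand):
    return sum(weights.get(f, 0.0) * v for f, v in cand.features.items())

def step_loss(weights, examples):
    # 0/1 "step" loss: number of examples whose top-ranked candidate is wrong.
    # Non-differentiable in the weights, so gradient descent is not applicable.
    errors = 0
    for candidates in examples:
        best = max(candidates, key=lambda c: score(weights, c))
        if not best.correct:
            errors += 1
    return errors

WEIGHT_GRID = [w / 4.0 for w in range(-8, 9)]  # assumed grid for exhaustive search

def gap_train(examples, feature_set, rounds=10):
    # GAP (as outlined): each round, greedily pick the (feature, weight) pair
    # that most reduces the step loss, scanning every feature and grid weight.
    weights = {}
    for _ in range(rounds):
        base = step_loss(weights, examples)
        best_gain, best_f, best_w = 0, None, None
        for f in feature_set:
            old = weights.get(f, 0.0)
            for w in WEIGHT_GRID:
                weights[f] = w
                gain = base - step_loss(weights, examples)
                if gain > best_gain:
                    best_gain, best_f, best_w = gain, f, w
            weights[f] = old  # undo the trial before testing the next feature
        if best_f is None:    # no single feature reduces the loss any further
            break
        weights[best_f] = best_w
    return weights

def fgap_train(examples, feature_set):
    # FGAP (as outlined): assume features are independent, rank them once by
    # the gain each achieves alone, then set each weight once in that order.
    def solo_gain(f):
        base = step_loss({}, examples)
        return max(base - step_loss({f: w}, examples) for w in WEIGHT_GRID)

    weights = {}
    for f in sorted(feature_set, key=solo_gain, reverse=True):
        current = step_loss(weights, examples)
        best_gain, best_w = 0, 0.0
        for w in WEIGHT_GRID:
            weights[f] = w
            gain = current - step_loss(weights, examples)
            if gain > best_gain:
                best_gain, best_w = gain, w
        weights[f] = best_w
    return weights
```

The sketch makes the cost trade-off visible: gap_train rescans every feature against the full weight grid in every round, while fgap_train orders the features once by their stand-alone gain and sets each weight a single time. That removed outer rescan is where FGAP's speedup over GAP comes from, at the price of the feature-independence assumption.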
Source: Chinese Journal of Computers (《计算机学报》), EI / CSCD / Peking University Core Journal, 2005, No. 10, pp. 1708-1715 (8 pages).
Keywords: language model; discriminative training; loss function; Japanese input method

References (13)

  • [1] Collins Michael, Koo Terry. Discriminative reranking for natural language parsing. 2002, to appear.
  • [2] Jelinek Fred. Self-organized language modeling for speech recognition. In: Waibel A., Lee K.F. eds., Speech Recognition. San Mateo, CA: Morgan Kaufmann, 1990, 450-506.
  • [3] Gao Jian-Feng, Suzuki Hisami, Yang Wen. Exploiting headword dependency and predictive clustering for language modeling. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, Philadelphia, PA, USA, 2002, 248-256.
  • [4] Collins Michael. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In: Harry Bunt, John Carroll, Giorgio Satta eds., New Developments in Parsing Technology. Kluwer, 2004.
  • [5] Gao Jian-Feng, Suzuki Hisami. Capturing long distance dependency in language modeling: An empirical study. In: Proceedings of IJCNLP-2004, Sanya, China, 2004, 200-208.
  • [6] Gao Jian-Feng, Goodman Joshua, Li Ming-Jing, Lee Kai-Fu. Toward a unified approach to statistical language modeling for Chinese. ACM Transactions on Asian Language Information Processing, 2002, 1(1): 3-33.
  • [7] Juang Biing-Hwang, Chou Wu, Lee Chin-Hui. Minimum classification error rate methods for speech recognition. IEEE Transactions on Speech and Audio Processing, 1997, 5(3): 257-265.
  • [8] Duda Richard O., Hart Peter E., Stork David G. Pattern Classification. John Wiley & Sons, Inc., 2001.
  • [9] Höffgen Klaus U., van Horn Kevin S., Simon Hans U. Robust trainability of single neurons. Journal of Computer and System Sciences, 1995, 50(1): 114-125.
  • [10] Och Franz. Minimum error rate training in statistical machine translation. In: Proceedings of ACL 2003, Sapporo, Japan, 2003.
