Abstract: An N-gram Chinese language model incorporating linguistic rules is presented. By constructing an element lattice, rule information is incorporated into the statistical framework. To facilitate the hybrid modeling, novel methods are presented, including MI-based rule evaluation, weighted rule quantification, and element-based n-gram probability approximation. A dynamic Viterbi algorithm is adopted to search for the best path in the lattice, and transformation-based error-driven rule learning is adopted to strengthen the model. Applied to Chinese Pinyin-to-character conversion, the proposed model achieves high accuracy, flexibility, and robustness simultaneously: tests show a correct rate of 94.81%, compared with 90.53% for a bi-gram Markov model alone. Many long-distance dependencies and recursive constructions in language can be processed effectively.
Funding: Supported by the National Natural Science Foundation of China as a key program (No. 60435020) and the High Technology Research and Development Programme of China (2002AA117010-09).
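The best-path search described in the abstract above can be sketched as a standard Viterbi dynamic program over a lattice of candidate characters scored by bigram probabilities. This is a minimal illustration, not the paper's implementation: the tiny bigram table, the candidate characters, and the floor probability for unseen bigrams are all made-up examples.

```python
import math

# Hypothetical bigram log-probabilities; a real model would estimate
# these from a training corpus. "<s>" marks the sentence start.
BIGRAM_LOGP = {
    ("<s>", "中"): math.log(0.6), ("<s>", "钟"): math.log(0.4),
    ("中", "国"): math.log(0.9), ("中", "果"): math.log(0.1),
    ("钟", "国"): math.log(0.2), ("钟", "果"): math.log(0.8),
}
UNSEEN_LOGP = math.log(1e-6)  # arbitrary floor for unseen bigrams


def viterbi(lattice):
    """Return the highest-probability character path through `lattice`,
    a list of candidate-character lists, one per input syllable."""
    # Each layer maps candidate -> (best log-prob so far, best predecessor).
    prev = {"<s>": (0.0, None)}
    history = []
    for candidates in lattice:
        cur = {}
        for c in candidates:
            cur[c] = max(
                (lp + BIGRAM_LOGP.get((p, c), UNSEEN_LOGP), p)
                for p, (lp, _) in prev.items()
            )
        history.append(cur)
        prev = cur
    # Backtrack from the best final candidate.
    c = max(prev, key=lambda k: prev[k][0])
    path = []
    for layer in reversed(history):
        path.append(c)
        c = layer[c][1]
    return list(reversed(path))


print(viterbi([["中", "钟"], ["国", "果"]]))  # → ['中', '国']
```

Because each layer keeps only the best predecessor per candidate, the search is linear in sentence length rather than exponential in the number of candidate paths.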
Abstract: This paper applies the Maximum Entropy (ME) model to Pinyin-to-Character (PTC) conversion instead of the Hidden Markov Model (HMM), which cannot incorporate complicated and long-distance lexical information. Two ME models were built, based on simple and complex templates respectively, and the complex one gave better conversion results. Furthermore, conversion trigger pairs of the form y_A → y_B c_B were proposed to extract long-distance constraint features from the corpus; Average Mutual Information (AMI) was then used to select the conversion trigger-pair features added to the ME model. Experiments show that the conversion error of the ME model with conversion trigger pairs is reduced by 4% on a small training corpus, compared with an HMM smoothed by absolute smoothing.
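The AMI-based feature selection mentioned in the abstract above can be sketched from co-occurrence counts: the mutual information between the binary events "trigger A occurs" and "target B occurs" is summed over all four joint outcomes. The counting interface and the example counts below are hypothetical, chosen only to show why a topical pair outranks a pair involving a high-frequency function word.

```python
import math


def average_mutual_information(n_ab, n_a, n_b, n_total):
    """AMI between trigger event A and target event B, computed from counts:
    sum over the four outcomes (A / not-A) x (B / not-B) of
    P(x, y) * log( P(x, y) / (P(x) * P(y)) )."""
    def term(joint, px_count, py_count):
        if joint == 0:
            return 0.0  # lim p->0 of p*log(...) is 0
        pj = joint / n_total
        return pj * math.log(pj / ((px_count / n_total) * (py_count / n_total)))

    return (
        term(n_ab, n_a, n_b)                                        # A and B
        + term(n_a - n_ab, n_a, n_total - n_b)                      # A, not B
        + term(n_b - n_ab, n_total - n_a, n_b)                      # not A, B
        + term(n_total - n_a - n_b + n_ab,
               n_total - n_a, n_total - n_b)                        # neither
    )


# Made-up counts: (co-occurrences, count of A, count of B) in 1000 windows.
candidates = {
    ("stock", "market"): (40, 50, 60),   # strongly associated pair
    ("stock", "the"): (45, 50, 900),     # frequent but uninformative pair
}
scores = {pair: average_mutual_information(nab, na, nb, 1000)
          for pair, (nab, na, nb) in candidates.items()}
```

Ranking candidate pairs by this score and keeping the top ones is the usual way such trigger features are pruned before being added to the model.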