Funding: This work was supported by the funds of the Beijing Advanced Innovation Center for Language Resources (TYZ19005) and the Research Program of the State Language Commission (ZDI135-105, YB135-89).
Abstract: Due to the lack of parallel data for the current grammatical error correction (GEC) task, models based on the sequence-to-sequence framework cannot be adequately trained to achieve high performance. We propose two data synthesis methods that can control the error rate and the ratio of error types in the synthetic data. The first approach corrupts each word in a monolingual corpus with a fixed probability, using replacement, insertion, and deletion operations. The second approach trains error generation models and then filters their decoding results. Experiments on different synthetic data show that an error rate of 40% with an equal ratio of error types improves model performance the most. Finally, we synthesize about 100 million training examples and achieve performance comparable to the state of the art, which uses twice as much data as we do.
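As a concrete illustration of the first synthesis method, the sketch below corrupts each word in a sentence with a fixed probability, choosing among replacement, insertion, and deletion. The 40% error rate and the equal split across the three error types follow the abstract's best-performing setting; the toy vocabulary, function names, and uniform random choices are illustrative assumptions, not the paper's actual configuration.

```python
import random

# Toy replacement vocabulary; a real system would sample from corpus
# statistics or confusion sets (assumption for illustration).
VOCAB = ["the", "a", "in", "on", "of", "to", "is", "are"]

def corrupt(tokens, error_rate=0.4, rng=random):
    """Return a noised copy of `tokens`; roughly `error_rate` of the
    words receive one of three edits: replace, insert, or delete."""
    out = []
    for tok in tokens:
        if rng.random() >= error_rate:
            out.append(tok)  # keep the word unchanged
            continue
        op = rng.choice(["replace", "insert", "delete"])  # equal ratio of error types
        if op == "replace":
            out.append(rng.choice(VOCAB))      # swap in a random word
        elif op == "insert":
            out.append(tok)
            out.append(rng.choice(VOCAB))      # add a spurious word after
        # "delete": emit nothing for this word

    return out

# Example: generate one noised source sentence from clean monolingual text.
print(corrupt("she goes to the school every day".split()))
```

Pairing each corrupted output with its clean original yields the synthetic (erroneous, corrected) sentence pairs used to train the sequence-to-sequence model.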
Abstract: For the article and preposition errors in the automatic grammatical error correction (GEC) of English essays, this paper proposes a sequence labeling GEC method based on LSTM (Long Short-Term Memory). For noun number errors, verb form errors, and subject-verb agreement errors, whose confusion sets are open sets, this paper proposes a GEC method based on an N-gram voting strategy over ESL (English as a Second Language) and news corpora. On the CoNLL 2013 GEC data, the proposed methods achieve an overall F1 of 33.87%, exceeding the 31.20% F1 of the first-place UIUC system. In particular, the F1 for article error correction is 38.05%, above UIUC's 33.40%, and the F1 for preposition error correction is 28.89%, above UIUC's 7.22%.
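The N-gram voting idea can be sketched as follows: for each candidate in a confusion set (e.g., alternative prepositions or noun forms), count how often the candidate's surrounding context occurs in reference corpora, and keep the most frequent candidate. The trigram-only simplification, the toy counts, and all names below are assumptions for illustration; the paper's actual method votes across ESL and news corpus statistics.

```python
from collections import Counter

# Hypothetical trigram counts standing in for statistics harvested from
# ESL and news corpora (assumption for illustration).
trigram_counts = Counter({
    ("interested", "in", "learning"): 120,
    ("interested", "on", "learning"): 2,
})

def vote(left, right, candidates, counts):
    """Pick the candidate whose context trigram (left, cand, right)
    is most frequent in the reference counts; Counter returns 0 for
    unseen trigrams, so unattested candidates lose the vote."""
    return max(candidates, key=lambda c: counts[(left, c, right)])

# Example: choose the preposition for "... interested ___ learning ...".
print(vote("interested", "learning", ["in", "on", "at"], trigram_counts))  # -> "in"
```

Because the confusion sets for number, verb form, and agreement errors are open, this corpus-count voting avoids having to enumerate a fixed label set the way the LSTM sequence labeler does for articles and prepositions.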