Abstract: Crowdsourcing has recently been used by many natural language processing groups as an alternative to traditional, costly annotation. In this paper, we explore the use of the WeChat Official Account Platform (WOAP) to build a speech corpus and to assess the feasibility of using WOAP followers (also known as contributors) to assemble a Mongolian speech corpus. A Mongolian language qualification test was used to filter out unqualified participants. We gathered natural speech recordings from daily life and constructed a Chinese-Mongolian Speech Corpus (CMSC) of 31,472 utterances from 296 native speakers fluent in Mongolian, totalling 30.8 hours of speech. An evaluation experiment was then performed in which contributors were asked to choose the correct sentence from a multiple-choice list to ensure the high quality of the corpus. The results obtained so far show that crowdsourcing with an evaluation mechanism can be more effective for constructing the CMSC than traditional approaches requiring expertise.