Abstract
Existing abstractive text summarisation models consider only the word-sequence correlations between the source document and the reference summary; because of this narrow perspective, the generated summaries often fail to cover the main subject of the source document. To address these shortcomings, a multi-domain attention pointer (MDA-Pointer) abstractive summarisation model is proposed in this work. First, the model uses bidirectional long short-term memory to encode the word and sentence sequences of the source document separately, obtaining semantic representations at both the word and sentence levels. A multi-domain attention mechanism is then established between these semantic representations and the summary words, so that the model generates each summary word conditioned on both words and sentences. Next, summary words are either generated from the vocabulary or copied from the original word sequence through a pointer network, and a coverage mechanism is introduced at both the word and sentence levels to reduce redundancy in the summary content. Finally, experiments are conducted on the CNN/Daily Mail dataset. The ROUGE scores of the model improve both without and with the coverage mechanism, and the results verify the effectiveness of the proposed model.
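As a rough illustration of the architecture sketched in the abstract, the following PyTorch snippet shows how word-level and sentence-level attention contexts might be combined with a pointer-generator switch for a single decoding step. All module names, tensor shapes, and dimensions here are hypothetical assumptions for illustration; this is a minimal sketch of the idea, not the authors' implementation, and it omits the coverage mechanism.

```python
# Minimal sketch of a multi-domain attention pointer decoding step.
# Hypothetical names and dimensions; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDAPointerStep(nn.Module):
    def __init__(self, emb_dim=128, hid_dim=256, vocab_size=50000):
        super().__init__()
        # Bidirectional LSTM encoders for word and sentence sequences.
        self.word_enc = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.sent_enc = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # Separate attention projections for each "domain" (word / sentence).
        self.attn_word = nn.Linear(2 * hid_dim + hid_dim, 1)
        self.attn_sent = nn.Linear(2 * hid_dim + hid_dim, 1)
        # Generation distribution over the vocabulary and pointer switch p_gen.
        self.out = nn.Linear(5 * hid_dim, vocab_size)
        self.p_gen = nn.Linear(5 * hid_dim, 1)

    def attend(self, states, dec_state, proj):
        # states: (B, T, 2H); dec_state: (B, H)
        T = states.size(1)
        dec = dec_state.unsqueeze(1).expand(-1, T, -1)
        scores = proj(torch.cat([states, dec], dim=-1)).squeeze(-1)   # (B, T)
        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights.unsqueeze(1), states).squeeze(1)  # (B, 2H)
        return context, weights

    def forward(self, word_emb, sent_emb, dec_state, src_ids):
        # Encode word- and sentence-level sequences of the source document.
        word_states, _ = self.word_enc(word_emb)   # (B, Tw, 2H)
        sent_states, _ = self.sent_enc(sent_emb)   # (B, Ts, 2H)
        # Multi-domain attention: one context per domain, then combined.
        ctx_w, attn_w = self.attend(word_states, dec_state, self.attn_word)
        ctx_s, _ = self.attend(sent_states, dec_state, self.attn_sent)
        features = torch.cat([ctx_w, ctx_s, dec_state], dim=-1)
        p_vocab = F.softmax(self.out(features), dim=-1)   # generate from vocabulary
        p_gen = torch.sigmoid(self.p_gen(features))       # pointer switch in [0, 1]
        # Copy distribution: scatter word-level attention onto source token ids.
        p_copy = torch.zeros_like(p_vocab).scatter_add(1, src_ids, attn_w)
        return p_gen * p_vocab + (1 - p_gen) * p_copy
```

The design point the sketch tries to make concrete is that the word-level and sentence-level contexts are computed with separate attention weights and only fused when forming the output distribution, while the pointer switch reuses the word-level attention as the copy distribution.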
Funding
supported by the National Social Science Foundation of China (2017CG29)
the Science and Technology Research Project of Chongqing Municipal Education Commission (2019CJ50)
the Natural Science Foundation of Chongqing (2017CC29).