
A visual-textual fused approach to automated tagging of flood-related tweets during a flood event (Cited by: 1)

Abstract: In recent years, social media such as Twitter have received much attention as a new data source for rapid flood awareness. The timely response and large coverage provided by citizen sensors significantly compensate for the limitations of non-timely remote sensing data and spatially isolated river gauges. However, automatically extracting flood tweets from a massive tweet pool remains a challenge. Taking the 2017 Houston Flood as a study case, this paper presents an automated flood-tweet extraction approach that mines both the visual and the textual information a tweet contains. A CNN architecture was designed to classify the visual content of flood pictures during the Houston Flood. A sensitivity test was then applied to extract flood-sensitive keywords, which were further used to refine the CNN classification results. A duplication test was finally performed to trim the database by removing duplicated pictures, creating the flood-tweet pool for the event. The results indicated that coupling the CNN classification results with flood-sensitive words in tweets significantly increases precision while keeping the recall rate at a high level. Eliminating tweets that contain duplicated pictures contributes greatly to higher spatio-temporal relevance to the flood.
Source: International Journal of Digital Earth (SCIE, EI), 2019, Issue 11, pp. 1248-1264, 17 pages.
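
The three-stage filter described in the abstract (CNN classification of tweet pictures, refinement of CNN positives with flood-sensitive keywords, and a duplication test on the pictures) can be sketched as below. This is a minimal illustration rather than the paper's implementation: `cnn_flood_probability` stands in for the trained CNN, `FLOOD_KEYWORDS` and the 0.5 threshold are placeholder values (the paper derives its keywords from a sensitivity test), and a simple average hash is assumed as the duplication test.

```python
# Minimal sketch of the visual-textual fused tagging pipeline:
# (1) a CNN scores each tweet picture for flood content,
# (2) CNN positives are refined with flood-sensitive keywords in the tweet text,
# (3) tweets carrying duplicated pictures are removed.
from dataclasses import dataclass
from typing import Callable, List, Set, Tuple

from PIL import Image  # pip install pillow

# Hypothetical flood-sensitive keywords; the paper's own list comes from its
# sensitivity test and is not reproduced here.
FLOOD_KEYWORDS: Set[str] = {"flood", "flooding", "flooded", "harvey", "rescue"}


@dataclass
class Tweet:
    tweet_id: str
    text: str
    image_path: str


def average_hash(image_path: str, hash_size: int = 8) -> Tuple[bool, ...]:
    """Perceptual average hash, used here as a stand-in duplication test."""
    img = Image.open(image_path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)


def contains_flood_keyword(text: str) -> bool:
    lowered = text.lower()
    return any(kw in lowered for kw in FLOOD_KEYWORDS)


def extract_flood_tweets(
    tweets: List[Tweet],
    cnn_flood_probability: Callable[[str], float],  # assumed trained CNN scorer
    threshold: float = 0.5,                          # illustrative cut-off only
) -> List[Tweet]:
    seen_hashes: Set[Tuple[bool, ...]] = set()
    flood_tweets: List[Tweet] = []
    for tweet in tweets:
        # Step 1: visual classification of the attached picture.
        if cnn_flood_probability(tweet.image_path) < threshold:
            continue
        # Step 2: textual refinement with flood-sensitive keywords.
        if not contains_flood_keyword(tweet.text):
            continue
        # Step 3: duplication test -- keep only the first copy of a picture.
        h = average_hash(tweet.image_path)
        if h in seen_hashes:
            continue
        seen_hashes.add(h)
        flood_tweets.append(tweet)
    return flood_tweets
```

The ordering mirrors the abstract: the keyword check only refines tweets the CNN already labels as flood pictures, and deduplication is applied last so that the retained pool keeps one representative of each picture.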