
Self-Supervised Time Series Classification Based on LSTM and Contrastive Transformer

Abstract: Time series data has attracted extensive attention as multi-domain data, but it is difficult to analyze due to its high dimensionality and scarcity of labels. Self-supervised representation learning provides an effective way to process such data. Considering the frequency-domain features of time series data and the contextual features needed in the classification task, this paper proposes an unsupervised Long Short-Term Memory (LSTM) and contrastive transformer-based time series representation model using contrastive learning. First, transforming the data with frequency domain-based augmentation increases the model's ability to represent features in the frequency domain. Second, an encoder module with three layers of LSTM and convolution maps the augmented data to the latent space, where a temporal loss is computed with a contrastive transformer module alongside a contextual loss. Finally, after self-supervised training, the representation vector of the original data can be obtained from the pre-trained encoder. Our model achieves satisfactory performance on the Human Activity Recognition (HAR) and sleepEDF real-life datasets.
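The abstract does not specify the exact augmentation scheme, but a generic frequency domain-based augmentation of the kind it describes can be sketched as follows: transform a time series to the frequency domain, perturb a random subset of frequency bins, and transform back. The function name `freq_augment` and the bin-dropping strategy are illustrative assumptions, not the paper's method.

```python
import numpy as np

def freq_augment(x, drop_ratio=0.1, seed=0):
    """Generic frequency domain-based augmentation (illustrative sketch):
    zero out a random subset of frequency bins and invert the transform.
    The paper's actual augmentation details are not given in the abstract."""
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(x)                     # real FFT: frequency-domain view
    n_bins = spec.shape[-1]
    n_drop = max(1, int(drop_ratio * n_bins)) # how many bins to perturb
    drop = rng.choice(n_bins, size=n_drop, replace=False)
    spec[..., drop] = 0                       # remove selected frequency content
    return np.fft.irfft(spec, n=x.shape[-1])  # back to the time domain

# Example: augment a noisy sine segment of length 128
t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(1).normal(size=128)
x_aug = freq_augment(x)
```

In a contrastive setup such as the one the abstract describes, the original and augmented views would both be passed through the LSTM-and-convolution encoder, and the losses would pull their latent representations together.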
Source: Wuhan University Journal of Natural Sciences (武汉大学学报(自然科学英文版)), CAS, CSCD, 2022, No. 6, pp. 521-530 (10 pages).
Funding: Supported by the National Key Research and Development Program of China (2019YFB1706401).