Abstract

Classification models for multivariate time series have drawn the interest of many researchers, with the objective of developing accurate and efficient models. However, limited research has been conducted on generating adversarial samples for multivariate time series classification models. Adversarial samples could become a security concern in systems with complex sets of sensors. This study proposes extending the existing gradient adversarial transformation network (GATN), in combination with adversarial autoencoders, to attack multivariate time series classification models. The proposed model attacks a classifier by using a distilled model to imitate the output of the target multivariate time series classification model. In addition, the adversarial generator function is replaced with a variational autoencoder to enhance the quality of the adversarial samples. The developed methodology is tested on two multivariate time series classification models: 1-nearest-neighbor dynamic time warping (1-NN DTW) and a fully convolutional network (FCN). The study uses 30 multivariate time series benchmarks provided by the University of East Anglia (UEA) and the University of California, Riverside (UCR). The use of adversarial autoencoders increases the fraction of successful adversaries generated on multivariate time series. To the best of our knowledge, this is the first study to explore adversarial attacks on multivariate time series. We also recommend that future research exploit the latent space produced by the variational autoencoders.
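
The following is a minimal sketch (assuming a PyTorch implementation) of the attack idea summarized above: a variational autoencoder is trained to emit adversarial versions of a multivariate time series so that a distilled student model, standing in for the black-box classifier, assigns a chosen target class while the output stays close to the original series. All names below (StudentFCN, TimeSeriesVAE, gatn_vae_loss, the hyperparameters) are illustrative assumptions, not the authors' code.

# Sketch only: a VAE generator attacking a distilled surrogate classifier.
# Class and function names here are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentFCN(nn.Module):
    """Distilled stand-in for the target classifier (FCN-style)."""
    def __init__(self, n_channels: int, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=8, padding=4), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.features(x).mean(dim=-1)      # global average pooling
        return self.head(h)                    # class logits

class TimeSeriesVAE(nn.Module):
    """VAE generator replacing the plain adversarial-transformation generator."""
    def __init__(self, n_channels: int, seq_len: int, latent_dim: int = 16):
        super().__init__()
        flat = n_channels * seq_len
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(flat, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, flat))
        self.shape = (n_channels, seq_len)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        x_adv = self.dec(z).view(-1, *self.shape)
        return x_adv, mu, logvar

def gatn_vae_loss(x, x_adv, student_logits, target_class, mu, logvar,
                  alpha=1.0, beta=0.1, gamma=1e-3):
    """Reconstruction keeps the adversary close to x; the classification term
    pushes the distilled model toward the attacker's target class; the KL term
    is the usual VAE regularizer on the latent space."""
    recon = F.mse_loss(x_adv, x)
    target = torch.full((x.size(0),), target_class, dtype=torch.long)
    attack = F.cross_entropy(student_logits, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return alpha * recon + beta * attack + gamma * kl

# One illustrative training step on random data (stand-in for a UEA/UCR set).
n_channels, seq_len, n_classes = 6, 100, 4
student = StudentFCN(n_channels, n_classes)    # assumed already distilled
vae = TimeSeriesVAE(n_channels, seq_len)
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)

x = torch.randn(8, n_channels, seq_len)
x_adv, mu, logvar = vae(x)
loss = gatn_vae_loss(x, x_adv, student(x_adv), target_class=0,
                     mu=mu, logvar=logvar)
opt.zero_grad()
loss.backward()
opt.step()

Only the VAE's parameters are updated; the distilled student is frozen and serves purely as a differentiable proxy for the 1-NN DTW or FCN model under attack, which matches the distillation role described in the abstract. The weighting of the three loss terms is an assumption for illustration.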