
Optimal Flame Detection of Fires in Videos Based on Deep Learning and the Use of Various Optimizers

Abstract: Deep learning has recently attracted much attention with the aim of developing fast, automatic and accurate systems for image identification and classification. This work focuses on transfer learning with the state-of-the-art VGG16 and VGG19 deep convolutional neural networks, evaluated for classifying fire images. Five optimization approaches based on gradient-descent parameter updates (Adagrad, Adam, AdaMax, Nadam and RMSprop) were studied. By selecting specific learning rates, training-set proportions and numbers of epochs, the advantages and disadvantages of each approach were compared with one another in order to minimize the cost function; the results of the comparison are presented in the tables. In our experiments, the Adam optimizer with the VGG16 architecture, trained for 300 and 500 epochs, steadily improved its accuracy as the number of epochs increased, without degrading performance. The optimizers were evaluated on the basis of the AUC of their ROC curves. The best configuration achieves a test accuracy of 96%, putting it ahead of the other architectures.
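The transfer-learning setup the abstract describes (a frozen VGG16 backbone with a small classification head, compiled with one of the compared optimizers) can be sketched in Keras as follows. This is a minimal illustration, not the authors' code: the head sizes, learning rate and input shape are assumptions, and `weights=None` is used here so the sketch runs without downloading pretrained ImageNet weights (the paper itself relies on pretrained transfer learning).

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_fire_classifier(input_shape=(224, 224, 3), learning_rate=1e-4):
    # VGG16 convolutional base; in the actual study this would be loaded
    # with pretrained weights (weights="imagenet") for transfer learning.
    base = tf.keras.applications.VGG16(include_top=False,
                                       weights=None,
                                       input_shape=input_shape)
    base.trainable = False  # freeze the backbone; only the head is trained

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # fire vs. no-fire
    ])
    # Swap Adam for Adagrad, Adamax, Nadam or RMSprop to reproduce the
    # optimizer comparison; AUC is tracked since the study ranks
    # optimizers by the AUC of the ROC curve.
    model.compile(optimizer=optimizers.Adam(learning_rate=learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

model = build_fire_classifier()
```

The same builder works for VGG19 by substituting `tf.keras.applications.VGG19`, which is how the two architectures in the study could be compared under identical heads and optimizers.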
Authors: Tidiane Fofana; Sié Ouattara; Alain Clement (Laboratoire des Sciences et Technologies de la Communication et de l'Information (LSTCI), Yamoussoukro, Côte d'Ivoire; Institut National Polytechnique Houphouët Boigny (INPHB), Yamoussoukro, Côte d'Ivoire; Ecole Supérieure des Technologies de l'Information et de la Communication (ESATIC), Abidjan, Côte d'Ivoire; LARIS, SFR MATHSTIC, Université d'Angers, Angers, France)
Source: Open Journal of Applied Sciences, 2021, No. 11, pp. 1240-1255 (16 pages)
Keywords: Image Classification; Optimizers; Transfer Learning; VGG16; VGG19