SmokerViT: A Transformer-Based Method for Smoker Recognition

Abstract: Smoking imposes economic and environmental costs on society due to the toxic substances it emits. Convolutional Neural Networks (CNNs) struggle to describe low-level features and can miss important information. Moreover, accurate smoker detection with a minimum of false alarms is vital. To address this issue, the authors turn to the self-attention mechanism of the Vision Transformer (ViT), which has delivered state-of-the-art performance on classification tasks. To help enforce smoking prohibitions in non-smoking locations, this work presents a Vision Transformer-inspired model, SmokerViT, for detecting smokers. The research uses a locally curated dataset of 1120 images evenly distributed between two classes (Smoking and NotSmoking), and applies augmentations to the dataset to obtain many images with varied representations, overcoming the dataset's size limitation. Unlike the convolutional operations used in most existing works, the proposed SmokerViT model employs a self-attention mechanism in its Transformer block, making it well suited to the smoker classification problem. The model also integrates a multi-layer perceptron (MLP) head block, which contains dense layers with rectified linear activation and an L2 kernel regularizer, for the recognition task. This work presents an exhaustive analysis to demonstrate the efficiency of the proposed model. SmokerViT is evaluated and compared against existing methods, achieving an overall classification accuracy of 97.77%, with 98.21% recall and 97.35% precision, outperforming state-of-the-art deep learning models, including CNNs and other vision transformer-based models.
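As an illustration of the MLP head block described in the abstract, the following is a minimal Keras sketch. The layer widths (512, 256), dropout rate, and L2 strength are hypothetical assumptions, since the abstract does not specify them; only the overall structure (dense layers with ReLU activation and an L2 kernel regularizer, ending in a two-class output for Smoking vs. NotSmoking) comes from the paper's description.

```python
# Minimal sketch of an MLP head applied to a Transformer encoder's output.
# Hyperparameters below (widths 512/256, dropout 0.3, L2 = 1e-4) are
# illustrative assumptions, not values taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def mlp_head(encoded_tokens: tf.Tensor, num_classes: int = 2) -> tf.Tensor:
    """Classification head on top of the Transformer encoder output."""
    x = layers.LayerNormalization(epsilon=1e-6)(encoded_tokens)
    x = layers.Flatten()(x)
    # Dense layers with ReLU activation and an L2 kernel regularizer,
    # as described in the abstract.
    x = layers.Dense(512, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4))(x)
    x = layers.Dropout(0.3)(x)
    x = layers.Dense(256, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4))(x)
    # Two-class output: Smoking vs. NotSmoking.
    return layers.Dense(num_classes, activation="softmax")(x)
```

The L2 kernel regularizer penalizes large weights in the dense layers, which helps the head generalize from a relatively small dataset such as the 1120-image collection used here.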
Source: Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 403-424 (22 pages).