Abstract: In recent years, early detection and warning of fires have posed a significant challenge to environmental protection and human safety. Deep learning models such as Faster R-CNN (Faster Region-based Convolutional Neural Network), YOLO (You Only Look Once), and their variants have demonstrated superiority in quickly detecting objects in images and videos, creating new opportunities for automatic and efficient fire detection. The YOLO model, especially newer versions such as YOLOv10, stands out for its fast processing capability, making it suitable for low-latency applications. However, when applied to real-world datasets, its fire prediction accuracy is still not high. This study improves the accuracy of YOLOv10 for real-time applications through model fine-tuning techniques and data augmentation. The core work of the research involves creating a diverse fire image dataset specifically suited to fire detection in buildings and factories, freezing the initial layers of the model to retain previously learned general features, applying the Squeeze-and-Excitation attention mechanism, and employing Stochastic Gradient Descent (SGD) with momentum as the optimization algorithm to enhance accuracy while ensuring real-time fire detection. Experimental results demonstrate the effectiveness of the proposed fire prediction approach, with the YOLOv10 small model exhibiting the best balance compared with the other YOLOv10 variants such as nano, medium, and balanced. Additionally, the study provides an experimental evaluation highlighting the effectiveness of model fine-tuning compared with the YOLOv10 baseline, YOLOv8, and Faster R-CNN on two criteria: accuracy and prediction time.
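A minimal sketch of the two ingredients named in the abstract: a standard Squeeze-and-Excitation (SE) block in PyTorch, followed by a fine-tuning run that freezes the early layers and uses SGD with momentum via the Ultralytics training API. The checkpoint name, dataset YAML, and hyperparameter values below are assumptions for illustration, not the authors' exact configuration.

```python
# Sketch only: SE channel attention plus a hedged YOLOv10 fine-tuning call.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weights feature channels by globally pooled statistics."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: per-channel gating
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # scale each channel


if __name__ == "__main__":
    # Hypothetical fine-tuning run: freeze the first layers and train with SGD + momentum.
    from ultralytics import YOLO

    model = YOLO("yolov10s.pt")                       # assumed small-variant checkpoint
    model.train(
        data="fire_dataset.yaml",                     # assumed dataset config file
        epochs=100,
        imgsz=640,
        freeze=10,                                    # freeze the first N layers
        optimizer="SGD",
        lr0=0.01,
        momentum=0.937,                               # SGD with momentum
    )
```

Freezing the early layers keeps the generic low-level features intact while the later layers and detection head adapt to the fire imagery; the SE block shown would be inserted into the backbone if the attention mechanism is added at the architecture level.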
Abstract: Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic cues than tasks such as dependency parsing, which rely more on syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing conversational progress, or comparing impacts. Here, an ensemble of pre-trained language models is used to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These label sequences are used to analyze the progress of a conversation and predict its pecking order. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out for better sentence-classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with fine-tuned parameters achieved an F1-score of 0.88.
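A minimal sketch of one common way such an ensemble can be combined at inference time: soft voting, i.e. averaging the class probabilities of several fine-tuned transformer classifiers over the four conversation-sentence classes. The checkpoint names are placeholders standing in for members already fine-tuned on the annotated corpus; the paper's exact ensemble composition and tuned hyperparameters are not reproduced here.

```python
# Sketch only: soft-voting ensemble over transformer sentence classifiers.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["information", "question", "directive", "commission"]

# Hypothetical checkpoints; in practice these would be the fine-tuned ensemble members.
CHECKPOINTS = [
    "bert-base-uncased",
    "roberta-base",
    "distilbert-base-uncased",
]


def ensemble_predict(sentence: str) -> str:
    """Average each member's class probabilities and return the top label (soft voting)."""
    probs = []
    for name in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(
            name, num_labels=len(LABELS)
        )
        model.eval()
        inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits           # shape: (1, num_labels)
        probs.append(torch.softmax(logits, dim=-1))
    mean_probs = torch.stack(probs).mean(dim=0)       # soft vote across members
    return LABELS[int(mean_probs.argmax(dim=-1))]


print(ensemble_predict("Could you send me the report by Friday?"))
```

Hyperparameter tuning in such a setup typically searches over learning rate, batch size, and epoch count for each member before the soft vote is applied.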