Abstract
As attention to the elderly and other groups with disabilities increases, video recognition technology plays an increasingly important role in fall prevention. This project aims to improve the accuracy and real-time performance of pedestrian fall prevention by optimizing the convolutional neural network structure, improving the preparation of training data, and constructing a more reasonable loss function, ultimately forming an intelligent monitoring system capable of precise recognition and alerting. This paper presents a fall detection method based on Faster R-CNN and the Mediapipe framework, combined with an improved spatiotemporal graph convolutional network (DCST-GCN) model for recognizing and classifying multiple behaviors. Experiments on the NTU-RGB+D60 dataset and the public Kinetics-400 database show that the proposed method achieves high accuracy and robustness in fall detection. This project lays a solid foundation for the development of fall prevention and control methods and provides new approaches for preventing fall accidents.
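The abstract describes a two-stage front end (Faster R-CNN person detection followed by Mediapipe pose estimation) feeding a skeleton-based classifier. The sketch below is a minimal illustration of that pipeline, not the authors' implementation: it uses torchvision's pretrained Faster R-CNN and Mediapipe Pose, and the final ST-GCN-style classifier call (`stgcn_classifier`) is a hypothetical placeholder, since the paper's DCST-GCN model is not publicly available.

```python
# Sketch of the detection + pose-extraction front end described in the abstract.
# Assumptions: torchvision's COCO-pretrained Faster R-CNN stands in for the
# paper's detector, and `stgcn_classifier` is a placeholder for DCST-GCN.
import cv2
import numpy as np
import torch
import torchvision
import mediapipe as mp

# Pretrained Faster R-CNN; in COCO, label 1 corresponds to "person".
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

mp_pose = mp.solutions.pose.Pose(static_image_mode=True)

def person_keypoints(frame_bgr, score_thresh=0.8):
    """Return a (33, 3) array of Mediapipe landmarks for the highest-scoring
    detected person in a frame, or None if no confident person is found."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        det = detector([tensor])[0]
    for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
        if label.item() == 1 and score.item() >= score_thresh:
            x1, y1, x2, y2 = box.int().tolist()
            crop = np.ascontiguousarray(rgb[y1:y2, x1:x2])
            result = mp_pose.process(crop)
            if result.pose_landmarks:
                return np.array([[lm.x, lm.y, lm.visibility]
                                 for lm in result.pose_landmarks.landmark])
    return None

# A clip becomes a (T, 33, 3) sequence of joint coordinates; a spatiotemporal
# graph convolutional classifier would then score it as "fall" vs. other actions.
# clip_keypoints = np.stack([person_keypoints(f) for f in frames])
# label = stgcn_classifier(clip_keypoints)   # hypothetical DCST-GCN stand-in
```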
Source
Computer Science and Application (《计算机科学与应用》)
2024, No. 9, pp. 121-129 (9 pages)