Abstract
Intelligent monitoring of fish behavior, with accurate quantification and identification of health status, has become an active research topic. To achieve accurate identification of the behavioral states of cultured eel (Anguilla), a detection method based on a DenseNet two-stream convolutional neural network is proposed. A Gaussian mixture background model is used for foreground extraction to construct the dataset. To address the limited ability of traditional convolutional neural networks to extract temporal dynamic information, a two-stream network structure that associates spatial and temporal features is built, in which DenseNet-121 replaces the original backbone. Compared with VGGNet, ResNet, and other networks, DenseNet reuses features through dense connections, so that a deeper network strengthens the propagation of motion features while reducing the number of parameters, extracting more representative behavioral features. In a traditional two-stream network, the softmax layers of the two streams perform only a simple average fusion at the decision level, which cannot deeply associate high-level spatio-temporal features. Therefore, after the convolutional layers extract the spatial and temporal features, an additional convolutional layer is added to fuse the spatio-temporal features and improve recognition accuracy. Experimental results show that the proposed method achieves 96.8% accuracy in detecting six eel behavioral states. Compared with the single-channel spatial-stream and temporal-stream networks, accuracy improves by 10.1% and 9.5%, respectively; compared with two-stream networks built on VGGNet and ResNet, accuracy improves by 12.4% and 4.2%, respectively; and compared with decision-level average fusion and feature-level concatenation fusion, the spatio-temporal convolutional fusion improves accuracy by 2.5% and 1.7%, respectively.
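The foreground-extraction step described above can be illustrated with a minimal sketch. The paper uses a mixture of Gaussians per pixel; the simplified single-Gaussian running model below (all function names and parameter values here are illustrative assumptions, not taken from the paper) conveys the core idea: a pixel is foreground when it deviates too far from its learned background statistics, and the background model is updated only where the pixel matched.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """Single-Gaussian per-pixel background model (simplified sketch of
    the mixture-of-Gaussians idea used for foreground extraction).
    frame, mean, var: float arrays of the same shape (one value per pixel).
    alpha: learning rate; k: deviation threshold in standard deviations.
    Returns (foreground_mask, new_mean, new_var)."""
    diff = frame - mean
    # a pixel is foreground when it deviates more than k standard
    # deviations from the modeled background
    fg = diff ** 2 > (k ** 2) * var
    bg = ~fg
    # update mean and variance only where the pixel matched the background
    new_mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    new_var = np.where(bg, (1 - alpha) * var + alpha * diff ** 2, var)
    new_var = np.maximum(new_var, 1e-4)  # keep variance strictly positive
    return fg, new_mean, new_var
```

Feeding video frames through this update sequentially leaves moving eels in the foreground mask while the static tank background is absorbed into the model; the masked regions can then be cropped to build the training dataset.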
Authors
XU Zhi-Yang, JIANG Xing-Long, LIN Qian, LI Kai
(Fisheries College, Jimei University, Xiamen 361021, China; Engineering Research Center of the Modern Technology for Eel Industry, Ministry of Education, Xiamen 361021, China)
Source
Oceanologia et Limnologia Sinica (《海洋与湖沼》)
Indexed in: CAS, CSCD, PKU Core (北大核心)
2023, No. 6, pp. 1746-1755 (10 pages)
Funding
National Key R&D Program of China, "Integration and Demonstration of Key Technologies for Precise and Efficient Aquaculture of Characteristic Fish" (No. 2020YFD0900102);
Industry-University Cooperation Project of the Fujian Provincial Department of Science and Technology (No. 2020N5009).