The separation of individual pigs from pigpen scenes is crucial for precision farming, and convolutional neural network technology can provide a low-cost, non-contact, non-invasive method of pig image segmentation. However, two factors limit the development of this field. On the one hand, individual pigs easily stick together, and occlusion by the pigpen and other objects can easily cause the model to misjudge. On the other hand, manual labeling of group-raised pig data is time-consuming, labor-intensive, and prone to labeling errors. There is therefore an urgent need for an individual-pig image segmentation model that performs well in individual scenarios and can be easily transferred to a group-raised environment. To address these problems, taking individual pigs as the research objects, an individual-pig image segmentation dataset containing 2066 images was constructed, and a series of algorithms based on fully convolutional networks was proposed to solve the pig image segmentation problem. To capture long-range dependencies and weaken background information such as pigpens while enhancing information on the individual parts of the pigs, channel and spatial attention blocks were introduced into the best-performing decoders, UNet and LinkNet. Experiments show that, using ResNeXt50 as the encoder and UNet as the decoder as the basic model, adding the two attention blocks simultaneously achieves 98.30% and 96.71% on the F1 and IoU metrics, respectively. Compared with the model adding the channel attention block alone, the two metrics are improved by 0.13% and 0.22%, respectively. Experiments introducing channel and spatial attention separately show that spatial attention is more effective than channel attention. Taking VGG16-LinkNet as an example, compared with channel attention, spatial attention improves the F1 and IoU metrics by 0.16% and 0.30%, respectively. Furthermore, heatmaps of the features at different layers of the decoder after adding different attention information show that, as the layers increase, the boundary of the pig image segmentation becomes clearer. To verify the effectiveness of the individual-pig image segmentation model in group-raised scenes, the transfer performance of the model was verified in three scenarios: high separation, deep adhesion, and pigpen occlusion. The experiments show that the segmentation results after adding attention information, especially the simultaneous fusion of the channel and spatial attention blocks, are more refined and complete. The attention-based individual-pig image segmentation model can be effectively transferred to group-raised pig scenes and can provide a reference for their pre-segmentation.
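As a rough illustration of the two attention mechanisms and the reported metrics, the sketch below uses simple sigmoid gates (a squeeze-and-excitation-style channel gate and a CBAM-style spatial gate) as stand-ins for the paper's learned attention blocks, together with the standard F1 and IoU formulas for binary segmentation masks. The function names and toy data are hypothetical, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat):
    """Channel gate sketch: weight each channel of a (C, H, W) feature map
    by a sigmoid of its global average response."""
    return feat * sigmoid(feat.mean(axis=(1, 2)))[:, None, None]

def spatial_attention(feat):
    """Spatial gate sketch: weight each pixel by a sigmoid of its
    cross-channel average, emphasizing pig regions over pen background."""
    return feat * sigmoid(feat.mean(axis=0))[None, :, :]

def f1_and_iou(pred, truth):
    """F1 and IoU for binary masks, the two metrics reported in the abstract."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))                    # toy decoder feature map
refined = spatial_attention(channel_attention(feat)) # both gates, shape-preserving
pred = np.array([[1, 1], [0, 1]], dtype=bool)        # toy predicted mask
truth = np.array([[1, 1], [1, 1]], dtype=bool)       # toy ground-truth mask
f1, iou = f1_and_iou(pred, truth)                    # f1 = 6/7, iou = 0.75
```

Both gates rescale the feature map without changing its shape, which is why they can be dropped into an existing decoder stage.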
The area of the pig's face contains rich biological information, such as the eyes, nose, and ears. High-precision detection of pig face postures is crucial to the identification of pigs, and it can also provide fundamental archival information for the study of abnormal behavioral characteristics and regularities. In this study, a series of attention blocks was embedded in a Feature Pyramid Network (FPN) for automatic detection of pig face posture in group-breeding environments. Firstly, the Channel Attention Block (CAB) and Position Attention Block (PAB) were proposed to capture channel dependencies and pixel-level long-range relationships, respectively. Secondly, a variety of attention modules were proposed to effectively combine the two kinds of attention information, specifically Parallel Channel Position (PCP), Cascade Position Channel (CPC), and Cascade Channel Position (CCP), which fuse the channel and position attention information in parallel or cascade fashion. Finally, verification experiments on three task networks with two backbone networks were conducted for the different attention blocks and modules. A total of 45 pigs in 8 pigpens were used as the research objects. Experimental results show that the attention-based models perform better. In particular, with Faster Region Convolutional Neural Network (Faster R-CNN) as the task network and ResNet101 as the backbone network, after the introduction of the PCP module, the Average Precision (AP) indicators of the face poses Downward with head-on face (D-O), Downward with lateral face (D-L), Level with head-on face (L-O), Level with lateral face (L-L), Upward with head-on face (U-O), and Upward with lateral face (U-L) achieve 91.55%, 90.36%, 90.10%, 90.05%, 85.96%, and 87.92%, respectively. Ablation experiments show that the PAB attention block is not as effective as the CAB attention block, and that the parallel combination method is better than the cascade manner. Taking Faster R-CNN as the task network and ResNet101 as the backbone network, heatmap visualization of the different FPN layers before and after adding PCP shows that, compared with the non-PCP module, the PCP module can more easily aggregate denser and richer contextual information, which in turn enhances long-range dependencies and improves feature representation. At the same time, the model based on PCP attention can effectively detect pig face postures across different ages, scenes, and light intensities, which can help lay the foundation for subsequent individual identification and behavior analysis of pigs.
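The three fusion orderings (PCP, CPC, CCP) can be sketched schematically. The sigmoid gates below are simplified, hypothetical stand-ins for the learned CAB and PAB transforms described above; the sketch only shows how the parallel and cascade combinations differ in structure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_gate(feat):
    # simplified CAB stand-in: gate channels by their global mean response
    return feat * sigmoid(feat.mean(axis=(1, 2)))[:, None, None]

def position_gate(feat):
    # simplified PAB stand-in: gate pixels by their cross-channel mean
    return feat * sigmoid(feat.mean(axis=0))[None, :, :]

def pcp(feat):
    """Parallel Channel Position: both gates see the same input; sum the results."""
    return channel_gate(feat) + position_gate(feat)

def cpc(feat):
    """Cascade Position Channel: position gate first, then channel gate."""
    return channel_gate(position_gate(feat))

def ccp(feat):
    """Cascade Channel Position: channel gate first, then position gate."""
    return position_gate(channel_gate(feat))

rng = np.random.default_rng(0)
fpn_level = rng.normal(size=(16, 8, 8))  # toy FPN feature map (C, H, W)
fused = pcp(fpn_level)                   # all three variants preserve shape
```

Because each variant keeps the feature-map shape, any of the three can be slotted into an FPN level; the abstract's ablation suggests the parallel form (PCP) works best.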
Funding: supported by the National Natural Science Foundation of China (Grant No. 31671571), the Shanxi Province Basic Research Program Project (Free Exploration) (Grant Nos. 20210302124523, 20210302123408, 202103021224149, and 202103021223141), and the Youth Agricultural Science and Technology Innovation Fund of Shanxi Agricultural University (Grant No. 2019027).