Accurate and continuous identification of individual cattle has become crucial to precision farming in recent years. It is also a prerequisite for monitoring the individual feed intake and feeding time of beef cattle at medium to long distances across different cameras. However, beef cattle tend to move frequently and change their feeding positions during feeding. Furthermore, large variations in head direction and complex environments (lighting, occlusion, and background), together with the strong biological similarity among individuals, make recognition difficult. Re-identification (ReID) models offer a way to address these difficulties. Among them, the AlignedReID++ model is characterized by exploiting both global and local information for image matching; in particular, its dynamically matching local information (DMLI) algorithm is introduced into the local branch to automatically align the horizontal local features. In this research, the AlignedReID++ model was adopted and improved to achieve better performance in cattle re-identification.

First, triplet attention (TA) modules were integrated into the bottleneck blocks of the ResNet50 backbone, enhancing feature extraction through cross-dimensional interactions at minimal computational overhead. Although the TA modules increased the model size and floating-point operations (FLOPs) of the AlignedReID++ baseline by only 0.005 M and 0.05 G, they improved the rank-1 accuracy and mean average precision (mAP) by 1.0 and 2.94 percentage points, respectively. Specifically, the rank-1 accuracy exceeded those obtained with the convolutional block attention module (CBAM) and the efficient channel attention (ECA) module by 0.86 and 0.12 percentage points, respectively, although it was 0.94 percentage points lower than that of the squeeze-and-excitation (SE) module; the mAP exceeded those of the SE, CBAM, and ECA modules by 0.22, 0.86, and 0.12 percentage points, respectively.
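The authors' implementation is not reproduced here. As a hedged, minimal PyTorch-style sketch of the kind of triplet attention block integrated into the ResNet50 bottlenecks, the snippet below follows the standard Z-pool formulation (two rotated cross-dimension branches plus a plain spatial branch, averaged); the class names, the 7x7 kernel, and the suggested placement inside the bottleneck are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn


class ZPool(nn.Module):
    """Concatenate max- and mean-pooling along dim 1 (the 'rotated' channel axis)."""
    def forward(self, x):
        return torch.cat(
            (x.max(dim=1, keepdim=True)[0], x.mean(dim=1, keepdim=True)), dim=1
        )


class AttentionGate(nn.Module):
    """Z-pool -> 7x7 conv -> sigmoid, producing a single-channel attention map."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(1),
        )

    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.pool(x)))


class TripletAttention(nn.Module):
    """Cross-dimensional attention over the (C,H), (C,W), and (H,W) interactions."""
    def __init__(self):
        super().__init__()
        self.ch_gate = AttentionGate()  # channel-height interaction
        self.cw_gate = AttentionGate()  # channel-width interaction
        self.hw_gate = AttentionGate()  # plain spatial attention

    def forward(self, x):
        # Branch 1: rotate so H takes the channel axis, attend, rotate back.
        x_ch = self.ch_gate(x.permute(0, 2, 1, 3).contiguous()).permute(0, 2, 1, 3)
        # Branch 2: rotate so W takes the channel axis, attend, rotate back.
        x_cw = self.cw_gate(x.permute(0, 3, 2, 1).contiguous()).permute(0, 3, 2, 1)
        # Branch 3: ordinary spatial attention over (H, W).
        x_hw = self.hw_gate(x)
        return (x_ch + x_cw + x_hw) / 3.0
```

In a typical integration, such a block would be applied to the output of the last convolution in each bottleneck before the residual addition; the exact insertion point used by the authors is not specified in the abstract.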
Second, the cross-entropy loss in the global branch of the baseline model was replaced with the CosFace loss, and the CosFace loss was employed jointly with the hard triplet loss to train the model for better discrimination of visually similar individuals (a sketch of this joint objective is given below, after the results). AlignedReID++ with the CosFace loss outperformed the baseline model by 0.24 and 0.92 percentage points in rank-1 accuracy and mAP, respectively, and exceeded AlignedReID++ with the ArcFace loss by 0.36 and 0.56 percentage points, respectively.

The improved model, combining the TA modules and the CosFace loss, achieved a rank-1 accuracy of 94.42%, rank-5 accuracy of 98.78%, rank-10 accuracy of 99.34%, mAP of 63.90%, FLOPs of 5.45 G, inference speed of 5.64 frames per second (FPS), and model size of 23.78 M. Its rank-1 accuracy exceeded those of the baseline model, the part-based convolutional baseline (PCB), the multiple granularity network (MGN), and relation-aware global attention (RGA) by 1.84, 4.72, 0.76, and 5.36 percentage points, respectively, and its mAP surpassed theirs by 6.42, 5.86, 4.30, and 7.38 percentage points, respectively. Its rank-1 accuracy was 0.98 percentage points lower than that of TransReID, but its mAP was 3.90 percentage points higher. Moreover, the FLOPs of the improved model were only 0.05 G larger than those of the baseline model, and smaller than those of PCB, MGN, RGA, and TransReID by 0.68, 6.51, 25.4, and 16.55 G, respectively. The model size of the improved model (23.78 M) was smaller than those of the baseline model, PCB, MGN, RGA, and TransReID by 0.03, 2.33, 45.06, 14.53, and 62.85 M, respectively. The inference speed of the improved model on a CPU was lower than those of PCB, MGN, and the baseline model, but higher than those of TransReID and RGA.

The t-SNE feature-embedding visualization demonstrated that the global and local features achieved better intra-class compactness and inter-class separability. Therefore, the improved model can be expected to effectively re-identify beef cattle in the natural environments of breeding farms, so as to monitor individual feed intake and feeding time.
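As a hedged illustration of the joint training objective described above, i.e., a CosFace (large-margin cosine) loss on the global branch combined with a batch-hard variant of the triplet loss (the abstract's "Hard Triplet Loss"), the following PyTorch-style sketch shows one common formulation. The scale s, the margins, the function and class names, and the equal weighting of the two terms are assumptions for illustration, not values reported in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CosFaceLoss(nn.Module):
    """Large-margin cosine loss: the true-class logit becomes s * (cos(theta) - m)."""
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.35):  # s, m are assumed values
        super().__init__()
        self.s, self.m = s, m
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, feats, labels):
        # Cosine similarity between L2-normalized features and class weights.
        cosine = F.linear(F.normalize(feats), F.normalize(self.weight))  # (B, num_classes)
        margin = torch.zeros_like(cosine).scatter_(1, labels.unsqueeze(1), self.m)
        return F.cross_entropy(self.s * (cosine - margin), labels)


def batch_hard_triplet_loss(feats, labels, margin=0.3):  # margin is an assumed value
    """Batch-hard triplet loss: hardest positive and hardest negative per anchor."""
    dist = torch.cdist(feats, feats)                         # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)        # (B, B) same-identity mask
    hardest_pos = (dist * same.float()).max(dim=1)[0]        # farthest same-ID sample
    hardest_neg = (dist + same.float() * 1e9).min(dim=1)[0]  # closest different-ID sample
    return F.relu(hardest_pos - hardest_neg + margin).mean()


# Joint objective (equal weighting is an assumption, not reported in the abstract):
# total_loss = cosface(global_feat, ids) + batch_hard_triplet_loss(global_feat, ids)
```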
Funding: National Key Research and Development Program (2023YFD1301801); National Natural Science Foundation of China (32272931); Shaanxi Province Agricultural Key Core Technology Project (2024NYGG005); Shaanxi Province Key R&D Program (2024NC-ZDCYL-05-12).