The Global Wheat Head Detection (GWHD) dataset was created in 2020 and assembled 193,634 labelled wheat heads from 4700 RGB images acquired from various acquisition platforms across 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset was reexamined, relabelled, and complemented with 1722 images from 5 additional countries, adding 81,553 wheat heads. We now release, in 2021, a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than GWHD_2020.
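A dataset like GWHD ships images together with per-image wheat-head bounding-box annotations. The sketch below parses a hypothetical GWHD-style annotation table and counts labelled heads per image; the column names (`image_name`, `boxes`, `domain`) and the semicolon-separated box encoding are illustrative assumptions, not the actual release format.

```python
import csv
import io

# Hypothetical GWHD-style annotation table: one row per image, wheat-head
# bounding boxes serialized as "xmin ymin xmax ymax" groups joined by ";".
# An empty boxes field means no labelled heads in that image.
SAMPLE = """image_name,boxes,domain
img_0001.png,10 20 60 80;100 40 160 110,ethz_1
img_0002.png,,arvalis_2
img_0003.png,5 5 50 50,ethz_1
"""

def heads_per_image(csv_text):
    """Return {image_name: number of labelled wheat heads}."""
    counts = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        boxes = row["boxes"].strip()
        counts[row["image_name"]] = len(boxes.split(";")) if boxes else 0
    return counts

counts = heads_per_image(SAMPLE)
print(counts)                 # {'img_0001.png': 2, 'img_0002.png': 0, 'img_0003.png': 1}
print(sum(counts.values()))   # 3 labelled heads in this toy sample
```

Summing the per-image counts over the full table is how a headline figure such as "193,634 labelled wheat heads" would be reproduced from the annotations.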
Pixel segmentation of high-resolution RGB images into chlorophyll-active or nonactive vegetation classes is a first step often required before estimating key traits of interest. We developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green vegetation, and senescent vegetation). This is achieved in two steps: a U-net model is first trained on a very large dataset to separate whole vegetation from the background; the green and senescent vegetation pixels are then separated using an SVM, a shallow machine learning technique, trained over a selection of pixels extracted from images. The performance of the SegVeg approach is then compared to a 3-class U-net model trained with weak supervision over RGB images segmented by SegVeg as ground-truth masks. Results show that the SegVeg approach segments the three classes accurately. However, some confusion is observed, mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performance, with a slight degradation over the green vegetation: the SVM pixel-based approach provides a more precise delineation of the green and senescent patches compared to the convolutional nature of U-net. Using the components of several color spaces improves the classification of vegetation pixels into green and senescent. Finally, the models are used to predict the fractions of the three classes over whole images or regularly spaced grid pixels. Results show that the green fraction is very well estimated (R² = 0.94) by the SegVeg model, while the senescent and background fractions show slightly degraded performance (R² = 0.70 and 0.73, respectively), with a mean 95% confidence error interval of 2.7% and 2.1% for the senescent vegetation and background, versus 1% for green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset. We thus hope to make segmentation accessible to a broad audience by requiring neither manual annotation nor specialist knowledge or, at least, by offering a pretrained model for more specific uses.
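The second step of the pipeline described above, separating vegetation pixels into green and senescent with a shallow SVM on color-space components, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the training pixels are synthetic, the feature set (RGB plus HSV) stands in for the several color spaces mentioned, and step 1 (the U-net vegetation mask) is assumed to have already run.

```python
import colorsys
import numpy as np
from sklearn.svm import SVC

def color_features(rgb):
    """RGB components in [0, 1] -> feature vector mixing RGB and HSV."""
    r, g, b = rgb
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return [r, g, b, h, s, v]

rng = np.random.default_rng(0)
# Synthetic vegetation pixels: "green" pixels are green-dominant,
# "senescent" pixels are yellow/brown. Real pixels would be sampled
# from images already masked as vegetation by the U-net step.
green = rng.uniform([0.0, 0.4, 0.0], [0.3, 0.9, 0.3], size=(200, 3))
senescent = rng.uniform([0.5, 0.3, 0.0], [0.9, 0.7, 0.2], size=(200, 3))

X = np.array([color_features(p) for p in np.vstack([green, senescent])])
y = np.array([0] * len(green) + [1] * len(senescent))  # 0 = green, 1 = senescent

clf = SVC(kernel="rbf").fit(X, y)

# Classify two held-out pixels: a vivid green one and a yellowish one.
preds = clf.predict([color_features((0.1, 0.8, 0.1)),
                     color_features((0.8, 0.6, 0.1))])
print(preds)
```

Because the SVM works pixel by pixel, it has no receptive field to blur class boundaries, which is consistent with the sharper patch delineation reported relative to the convolutional U-net.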
Funding: The French National Research Agency under the Investments for the Future Program, referred to as ANR-16-CONV-0004 PIA #Digitag (Institut Convergences Agriculture Numérique). Hiphen supported the organization of the competition. Japan: Kubota supported the organization of the competition. Australia: Grains Research and Development Corporation (UOQ2002-008RTX, machine learning applied to high-throughput feature extraction from imagery to map spatial variability, and UOQ2003-011RTX, INVITA, a technology and analytics platform for improving variety selection) supported the competition.
Funding: The study was partly supported by several projects, including ANR PHENOME (Programme d'investissement d'avenir), Digitag (PIA Institut Convergences Agriculture Numérique, ANR-16-CONV-0004), CASDAR LITERAL, and P2S2, funded by CNES.