Early-stage plant density is an essential trait that determines the fate of a genotype under given environmental conditions and management practices. The use of RGB images taken from UAVs may replace traditional visual counting in fields, with improved throughput, accuracy, and access to plant localization. However, high-resolution images are required to detect the small plants present at the early stages. This study explores the impact of image ground sampling distance (GSD) on the performance of maize plant detection at the three-to-five-leaves stage using the Faster-RCNN object detection algorithm. Data collected at high resolution (GSD ≈ 0.3 cm) over six contrasting sites were used for model training. Two additional sites, with images acquired at both high and low (GSD ≈ 0.6 cm) resolutions, were used to evaluate model performance. Results show that Faster-RCNN achieved very good plant detection and counting performance (rRMSE = 0.08) when native high-resolution images were used for both training and validation. Similarly, good performance was observed (rRMSE = 0.11) when the model was trained on synthetic low-resolution images, obtained by downsampling the native high-resolution training images, and applied to synthetic low-resolution validation images. Conversely, poor performance was obtained when the model was trained at one spatial resolution and applied to another. Training on a mix of high- and low-resolution images yielded very good performance on the native high-resolution (rRMSE = 0.06) and synthetic low-resolution (rRMSE = 0.10) images. However, very low performance was still observed on the native low-resolution images (rRMSE = 0.48), mainly due to their poor quality. Finally, an advanced super-resolution method based on a GAN (generative adversarial network), which introduces additional textural information derived from the native high-resolution images, was applied to the native low-resolution validation images. Results show a significant improvement (rRMSE = 0.22) over the bicubic upsampling approach, while still falling far below the performance achieved on the native high-resolution images.
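Two ingredients of the evaluation above can be sketched in plain Python: the relative RMSE used to score plant counting, and block-average downsampling as a crude stand-in for how synthetic low-resolution images are derived from high-resolution ones (the paper's exact downsampling and the sample counts below are assumptions, not taken from the source).

```python
# Sketch only (not code from the paper): rRMSE on per-plot plant counts, and
# a simple 2x2 block-average downsampler to simulate doubling the GSD.
import math

def rrmse(observed, predicted):
    """Relative RMSE: RMSE of count errors divided by the mean observed count."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return rmse / (sum(observed) / n)

def downsample2x(img):
    """Halve the resolution (e.g. GSD 0.3 cm -> 0.6 cm) by 2x2 block averaging,
    a crude stand-in for the downsampling used to build synthetic low-res images."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]) - 1, 2)]
            for y in range(0, len(img) - 1, 2)]

# Hypothetical per-plot plant counts (observed vs. detected).
obs, pred = [80, 95, 102, 110], [78, 97, 100, 114]
print(round(rrmse(obs, pred), 3))  # small relative error -> good counting performance
```

With this definition, an rRMSE of 0.08 means the typical counting error is about 8% of the mean observed plant count per plot.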
Selection of sugar beet (Beta vulgaris L.) cultivars that are resistant to Cercospora Leaf Spot (CLS) disease is critical to increasing yield. Such selection requires an automatic, fast, and objective method to assess CLS severity on thousands of cultivars in the field. For this purpose, we compare the use of submillimeter-scale RGB imagery acquired from an Unmanned Ground Vehicle (UGV) under active illumination and centimeter-scale multispectral imagery acquired from an Unmanned Aerial Vehicle (UAV) under passive illumination. Several variables are extracted from the images (spot density and spot size for the UGV; green fraction for the UGV and UAV) and related to visual scores assessed by an expert. Results show that spot density and green fraction are the critical variables for assessing low and high CLS severities, respectively, which emphasizes the importance of submillimeter images for the early detection of CLS in field conditions. Genotype sensitivity to CLS can then be accurately retrieved from time integrals of the UGV- and UAV-derived scores. While the UGV shows the best estimation performance, the UAV can provide accurate estimates of cultivar sensitivity if the data are properly acquired. Advantages and limitations of the UGV, UAV, and visual scoring methods are finally discussed from the perspective of high-throughput phenotyping.
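The "time integrals" of severity scores mentioned above can be read as an area under the disease progress curve (AUDPC), a standard summary in plant pathology. The abstract does not give the exact formula, so the trapezoidal version below, with hypothetical scoring dates and scores, is an assumption:

```python
# Sketch (assumption, not from the paper): time integral of severity scores
# computed as a trapezoidal area under the disease progress curve (AUDPC).
def audpc(days, scores):
    """Trapezoidal integral of severity scores over the scoring dates."""
    return sum((scores[i] + scores[i + 1]) / 2 * (days[i + 1] - days[i])
               for i in range(len(days) - 1))

days = [0, 7, 14, 21]          # days after first scoring (hypothetical)
scores = [0.0, 1.0, 3.0, 6.0]  # CLS severity scores on those dates (hypothetical)
print(audpc(days, scores))     # → 49.0
```

Integrating over time rewards cultivars whose severity stays low throughout the season, which is why such integrals are a natural basis for ranking genotype sensitivity.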
Funding: The authors would like to thank Catherine Zanotto and Mathieu Hemmerlé for their help with the experiments. This work was supported by the French National Research Agency in the framework of the "Investissements d'avenir" program AKER (ANR-11-BTBR-0007).