Journal Articles
6 articles found
1. High-Throughput Measurements of Stem Characteristics to Estimate Ear Density and Above-Ground Biomass (cited by 9)
Authors: Xiuliang Jin, Simon Madec, Dan Dutartre, Benoit de Solan, Alexis Comar, Frédéric Baret. Plant Phenomics, 2019, Issue 1, pp. 80–89 (10 pages).
Total above-ground biomass at harvest and ear density are two important traits that characterize wheat genotypes. Two experiments were carried out in two different sites where several genotypes were grown under contrasted irrigation and nitrogen treatments. A high spatial resolution RGB camera was used to capture the residual stems standing straight after the cutting by the combine machine during harvest. It provided a ground spatial resolution better than 0.2 mm. A Faster Regional Convolutional Neural Network (Faster-RCNN) deep-learning model was first trained to identify the stem cross sections. Results showed that the identification provided precision and recall close to 95%. Further, the balance between precision and recall allowed getting accurate estimates of the stem density, with a relative RMSE close to 7% and robustness across the two experimental sites. The estimated stem density was also compared with the ear density measured in the field with traditional methods. A very high correlation was found with almost no bias, indicating that the stem density could be a good proxy of the ear density. The heritability/repeatability evaluated over 16 genotypes in one of the two experiments was slightly higher (80%) than that of the ear density (78%). The diameter of each stem was computed from the profile of gray values in the extracts of the stem cross section. Results show that the stem diameters follow a gamma distribution over each microplot, with an average diameter close to 2.0 mm. Finally, the biovolume, computed as the product of the average stem diameter, the stem density, and plant height, is closely related to the above-ground biomass at harvest with a relative RMSE of 6%. Possible limitations of the findings and future applications are finally discussed.
Keywords: BIOMASS, STRAIGHT, finally
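The biovolume proxy described in this abstract is a simple product of three measurable quantities. A minimal sketch, assuming hypothetical per-microplot values (the function name, units, and numbers are illustrative, not taken from the paper):

```python
def biovolume(mean_stem_diameter_mm, stem_density_per_m2, plant_height_m):
    """Biovolume proxy: product of mean stem diameter, stem density, and
    plant height. The paper relates this product to above-ground biomass
    at harvest; units here are illustrative (mm * stems/m^2 * m)."""
    return mean_stem_diameter_mm * stem_density_per_m2 * plant_height_m

# Hypothetical microplot: 2.0 mm mean diameter, 450 stems/m^2, 0.85 m height
print(biovolume(2.0, 450, 0.85))  # 765.0
```

In practice the per-stem diameters would first be fitted (the abstract reports a gamma distribution per microplot) and averaged before entering this product.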
2. Estimates of Maize Plant Density from UAV RGB Images Using Faster-RCNN Detection Model: Impact of the Spatial Resolution (cited by 10)
Authors: K. Velumani, R. Lopez-Lozano, S. Madec, W. Guo, J. Gillet, A. Comar, F. Baret. Plant Phenomics (SCIE), 2021, Issue 1, pp. 181–196 (16 pages).
Early-stage plant density is an essential trait that determines the fate of a genotype under given environmental conditions and management practices. The use of RGB images taken from UAVs may replace the traditional visual counting in fields with improved throughput, accuracy, and access to plant localization. However, high-resolution images are required to detect the small plants present at the early stages. This study explores the impact of image ground sampling distance (GSD) on the performance of maize plant detection at the three-to-five-leaf stage using the Faster-RCNN object detection algorithm. Data collected at high resolution (GSD ≈ 0.3 cm) over six contrasted sites were used for model training. Two additional sites with images acquired at both high and low (GSD ≈ 0.6 cm) resolutions were used to evaluate the model performance. Results show that Faster-RCNN achieved very good plant detection and counting (rRMSE = 0.08) performance when native high-resolution images are used both for training and validation. Similarly, good performance was observed (rRMSE = 0.11) when the model is trained on synthetic low-resolution images obtained by downsampling the native high-resolution training images and applied to the synthetic low-resolution validation images. Conversely, poor performance is obtained when the model is trained on a given spatial resolution and applied to another spatial resolution. Training on a mix of high- and low-resolution images yields very good performance on the native high-resolution (rRMSE = 0.06) and synthetic low-resolution (rRMSE = 0.10) images. However, very low performance is still observed over the native low-resolution images (rRMSE = 0.48), mainly due to the poor quality of the native low-resolution images. Finally, an advanced super-resolution method based on a GAN (generative adversarial network) that introduces additional textural information derived from the native high-resolution images was applied to the native low-resolution validation images. Results show significant improvement (rRMSE = 0.22) compared to the bicubic upsampling approach, while still far below the performance achieved over the native high-resolution images.
Keywords: RCNN, FASTER, IMAGE
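The rRMSE values quoted throughout this abstract are relative root-mean-square errors, i.e. the RMSE normalized by the mean of the observed values. A minimal sketch (the plant counts are hypothetical, not data from the paper):

```python
import math

def rrmse(observed, predicted):
    """Relative RMSE: RMSE divided by the mean of the observed values."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return rmse / (sum(observed) / n)

# Hypothetical per-plot maize counts: field counts vs. detector output
obs = [100, 120, 80, 110]
pred = [95, 125, 78, 104]
print(round(rrmse(obs, pred), 3))  # 0.046
```

Normalizing by the observed mean is what makes scores comparable across sites with different plant densities.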
3. Scoring Cercospora Leaf Spot on Sugar Beet: Comparison of UGV and UAV Phenotyping Systems (cited by 3)
Authors: S. Jay, A. Comar, R. Benicio, J. Beauvois, D. Dutartre, G. Daubige, W. Li, J. Labrosse, S. Thomas, N. Henry, M. Weiss, F. Baret. Plant Phenomics, 2020, Issue 1, pp. 225–242 (18 pages).
Selection of sugar beet (Beta vulgaris L.) cultivars that are resistant to Cercospora Leaf Spot (CLS) disease is critical to increase yield. Such selection requires an automatic, fast, and objective method to assess CLS severity on thousands of cultivars in the field. For this purpose, we compare the use of submillimeter-scale RGB imagery acquired from an Unmanned Ground Vehicle (UGV) under active illumination and centimeter-scale multispectral imagery acquired from an Unmanned Aerial Vehicle (UAV) under passive illumination. Several variables are extracted from the images (spot density and spot size for UGV, green fraction for UGV and UAV) and related to visual scores assessed by an expert. Results show that spot density and green fraction are critical variables to assess low and high CLS severities, respectively, which emphasizes the importance of having submillimeter images to detect CLS early in field conditions. Genotype sensitivity to CLS can then be accurately retrieved based on time integrals of UGV- and UAV-derived scores. While the UGV shows the best estimation performance, the UAV can provide accurate estimates of cultivar sensitivity if the data are properly acquired. Advantages and limitations of UGV, UAV, and visual scoring methods are finally discussed in the perspective of high-throughput phenotyping.
Keywords: ILLUMINATION, MILLIMETER, critical
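The "time integrals of scores" used here to rank genotype sensitivity are typically computed like the standard area under the disease progress curve (AUDPC), i.e. trapezoidal integration of severity over the scoring dates. A sketch with hypothetical dates and scores (not data from the paper):

```python
def audpc(days, scores):
    """Trapezoidal time integral of disease severity scores (AUDPC-style).

    days   -- scoring dates, e.g. days after sowing, strictly increasing
    scores -- severity score at each date (same length as days)
    """
    total = 0.0
    for i in range(len(days) - 1):
        total += (scores[i] + scores[i + 1]) / 2 * (days[i + 1] - days[i])
    return total

# Hypothetical CLS severity (0-100 scale) scored four times in the season
days = [60, 75, 90, 105]
scores = [5, 20, 45, 80]
print(audpc(days, scores))  # 1612.5
```

A lower integral over the season indicates a less sensitive (more resistant) genotype.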
4. SegVeg: Segmenting RGB Images into Green and Senescent Vegetation by Combining Deep and Shallow Methods (cited by 3)
Authors: Mario Serouart, Simon Madec, Etienne David, Kaaviya Velumani, Raul Lopez-Lozano, Marie Weiss, Frederic Baret. Plant Phenomics (SCIE, EI), 2022, Issue 1, pp. 26–42 (17 pages).
Pixel segmentation of high-resolution RGB images into chlorophyll-active or nonactive vegetation classes is a first step often required before estimating key traits of interest. We have developed the SegVeg approach for semantic segmentation of RGB images into three classes (background, green, and senescent vegetation). This is achieved in two steps: a U-net model is first trained on a very large dataset to separate whole vegetation from background. The green and senescent vegetation pixels are then separated using an SVM, a shallow machine learning technique, trained over a selection of pixels extracted from images. The performance of the SegVeg approach is then compared to a 3-class U-net model trained using weak supervision over RGB images segmented with SegVeg as ground-truth masks. Results show that the SegVeg approach accurately segments the three classes. However, some confusion is observed, mainly between the background and senescent vegetation, particularly over the dark and bright regions of the images. The U-net model achieves similar performance, with slight degradation over the green vegetation: the SVM pixel-based approach provides more precise delineation of the green and senescent patches as compared to the convolutional nature of U-net. The use of the components of several color spaces allows the vegetation pixels to be better classified into green and senescent. Finally, the models are used to predict the fraction of the three classes over whole images or regularly spaced grid pixels. Results show that the green fraction is very well estimated (R² = 0.94) by the SegVeg model, while the senescent and background fractions show slightly degraded performance (R² = 0.70 and 0.73, respectively), with a mean 95% confidence error interval of 2.7% and 2.1% for the senescent vegetation and background, versus 1% for green vegetation. We have made SegVeg publicly available as a ready-to-use script and model, along with the entire annotated grid-pixel dataset. We thus hope to make segmentation accessible to a broad audience by requiring neither manual annotation nor expert knowledge, or at least by offering a pretrained model for more specific uses.
Keywords: DEEP, offering, RENDER
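The class fractions evaluated in this abstract (green, senescent, background) are simply the share of pixels assigned to each class over an image or a grid of sampled pixels. A toy sketch of that final step, assuming hypothetical per-pixel labels (this stands in for the output of the segmentation models, not for SegVeg itself):

```python
from collections import Counter

def class_fractions(labels):
    """Fraction of pixels per class from a flat list of predicted labels."""
    counts = Counter(labels)
    n = len(labels)
    return {cls: counts[cls] / n for cls in counts}

# Hypothetical predictions for 100 sampled grid pixels
labels = ["green"] * 70 + ["senescent"] * 20 + ["background"] * 10
print(class_fractions(labels))
```

Working on a regular grid of pixels rather than every pixel is what makes per-image fraction estimates cheap while keeping them unbiased.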
5. A Double Swath Configuration for Improving Throughput and Accuracy of Trait Estimate from UAV Images (cited by 1)
Authors: Wenjuan Li, Alexis Comar, Marie Weiss, Sylvain Jay, Gallian Colombeau, Raul Lopez-Lozano, Simon Madec, Frédéric Baret. Plant Phenomics (SCIE), 2021, Issue 1, pp. 378–388 (11 pages).
Multispectral observations from unmanned aerial vehicles (UAVs) are currently used for precision agriculture and crop phenotyping applications to monitor a series of traits allowing the characterization of the vegetation status. However, the limited autonomy of UAVs makes the completion of flights difficult when sampling large areas. Increasing the throughput of data acquisition while not degrading the ground sample distance (GSD) is, therefore, a critical issue to be solved. We propose here a new image acquisition configuration based on the combination of two focal-length (f) optics: an optics with f = 4.2 mm is added to the standard f = 8 mm (SS: single swath) of the multispectral camera (DS: double swath, double the standard one). Two flights were completed consecutively in 2018 over a maize field using the AIRPHEN multispectral camera at 52 m altitude. The DS flight plan was designed to get 80% overlap with the 4.2 mm optics, while the SS one was designed to get 80% overlap with the 8 mm optics. As a result, the time required to cover the same area is halved for the DS as compared to the SS. The georeferencing accuracy was improved for the DS configuration, particularly for the Z dimension, due to the larger view angles available with the small focal-length optics. Application to plant height estimates demonstrates that the DS configuration provides similar results to the SS one. However, for both the DS and SS configurations, degrading the quality level used to generate the 3D point cloud significantly decreases the plant height estimates.
Keywords: optics, OVERLAP, ALTITUDE
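The roughly doubled swath follows directly from pinhole-camera geometry: the ground swath width is altitude × sensor width / focal length, so reducing f from 8 mm to 4.2 mm widens the swath by a factor of about 1.9 at fixed altitude. A sketch of that arithmetic (the sensor width is a hypothetical value, not taken from the paper):

```python
def swath_width_m(altitude_m, sensor_width_mm, focal_length_mm):
    """Ground swath width for a nadir-looking camera (pinhole model).

    Millimeters cancel between sensor width and focal length, so the
    result is in meters.
    """
    return altitude_m * sensor_width_mm / focal_length_mm

# 52 m flight altitude (as in the paper), hypothetical 4.8 mm sensor width
print(swath_width_m(52, 4.8, 8.0))  # standard f = 8 mm optics
print(swath_width_m(52, 4.8, 4.2))  # wide f = 4.2 mm optics, ~1.9x wider
```

The same geometry explains the trade-off the abstract addresses: the wide optics covers more ground per flight line at the cost of a coarser GSD, which is why the camera keeps both optics rather than replacing one with the other.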
6. Global Wheat Head Detection 2021: An Improved Dataset for Benchmarking Wheat Head Detection Methods (cited by 8)
Authors: Etienne David, Mario Serouart, Daniel Smith, Simon Madec, Kaaviya Velumani, Shouyang Liu, Xu Wang, Francisco Pinto, Shahameh Shafiee, Izzat S. A. Tahir, Hisashi Tsujimoto, Shuhei Nasuda, Bangyou Zheng, Norbert Kirchgessner, Helge Aasen, Andreas Hund, Pouria Sadhegi-Tehran, Koichi Nagasawa, Goro Ishikawa, Sébastien Dandrifosse, Alexis Carlier, Benjamin Dumont, Benoit Mercatoris, Byron Evers, Ken Kuroki, Haozhou Wang, Masanori Ishii, Minhajul A. Badhon, Curtis Pozniak, David Shaner LeBauer, Morten Lillemo, Jesse Poland, Scott Chapman, Benoit de Solan, Frédéric Baret, Ian Stavness, Wei Guo. Plant Phenomics (SCIE), 2021, Issue 1, pp. 277–285 (9 pages).
The Global Wheat Head Detection (GWHD) dataset was created in 2020 and assembled 193,634 labelled wheat heads from 4,700 RGB images acquired from various acquisition platforms across 7 countries/institutions. With an associated competition hosted on Kaggle, GWHD_2020 successfully attracted attention from both the computer vision and agricultural science communities. From this first experience, a few avenues for improvement were identified regarding data size, head diversity, and label reliability. To address these issues, the 2020 dataset has been reexamined, relabeled, and complemented by adding 1,722 images from 5 additional countries, contributing 81,553 additional wheat heads. We now release in 2021 a new version of the Global Wheat Head Detection dataset, which is bigger, more diverse, and less noisy than the GWHD_2020 version.
Keywords: WHEAT, adding, RELEASE