Funding: Supported in part by the Overseas Expertise Introduction Project for Discipline Innovation (Grant No. B17007), the National Natural Science Foundation of China (Grant No. 81972248), the Natural Science Foundation of Beijing Municipality (Grant No. 7202056), and the Beijing Municipal Administration of Hospitals Incubating Program (Grant No. PX2021013).
Abstract: Background: The prevalence of thyroid cancer is growing rapidly. Early and precise diagnosis is critical in thyroid cancer care. An automatic thyroid cancer diagnostic tool can be valuable for achieving early detection and diagnostic consistency. Only the follicular areas in a sample contain information useful for thyroid cancer diagnosis based on fine needle aspiration (FNA). This study aimed to develop a highly efficient and accurate method for follicular cell area segmentation (FCAS) of thyroid cytopathological whole slide images (WSIs). Methods: A total of 96 cell samples were collected from July 2017 to July 2018 in one hospital in Beijing, China. Forty-three WSIs were selected and manually labeled, including 17 cases of papillary thyroid carcinoma and 26 benign cases. A total of 6,900 cropped typical image patches (available at https://github.com/bupt-ai-cz/Hybrid-Model-Enabling-Highly-Efficient-Follicular-Segmentation) of 1024×1024 pixels from 13 large WSIs were used for patch-level model training and testing; all 13 of these WSIs were papillary thyroid carcinoma samples. Thirty testing WSIs with an average size of 36,217×29,400 pixels (ranging from 10,240×10,240 to 81,920×61,440) were used to test the effectiveness of the hybrid model. Based on the traditional semantic segmentation model DeepLabv3, we constructed a hybrid segmentation architecture by adding a classification branch to the segmentation scheme to improve efficiency. Accuracy was used to measure the performance of the classification model; pixel accuracy (pAcc), mean accuracy (mAcc), mean intersection over union (mIoU), and frequency weighted intersection over union (fwIoU) were used to measure the performance of the segmentation model. Results: Using this method, up to 93% of the WSI segmentation time was saved by skipping the colloidal areas and the blank background areas. The average processing time of the 30 WSIs was 49.49 s. On the patch dataset, the hybrid model reached pAcc = 98.65%, mAcc = 85.60%, mIoU = 79.61%, and fwIoU = 97.54%. On the WSI dataset, it reached pAcc = 99.30%, mAcc = 68.94%, mIoU = 58.21%, and fwIoU = 99.50%. Conclusion: The proposed hybrid method may significantly improve on previous solutions, achieving superior efficiency and accuracy.
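The four segmentation metrics reported above can all be derived from a pixel-level confusion matrix. The sketch below uses their standard definitions; the function name and NumPy implementation are illustrative, not the authors' code:

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Compute pAcc, mAcc, mIoU, and fwIoU from a pixel-level
    confusion matrix (standard definitions)."""
    # conf[i, j] = number of pixels of true class i predicted as class j
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(gt.ravel(), pred.ravel()):
        conf[t, p] += 1

    tp = np.diag(conf).astype(float)
    gt_total = conf.sum(axis=1).astype(float)    # ground-truth pixels per class
    pred_total = conf.sum(axis=0).astype(float)  # predicted pixels per class
    union = gt_total + pred_total - tp

    p_acc = tp.sum() / conf.sum()                # pixel accuracy
    m_acc = (tp / np.maximum(gt_total, 1)).mean()  # mean per-class accuracy
    iou = tp / np.maximum(union, 1)
    m_iou = iou.mean()                           # mean IoU over classes
    fw_iou = (gt_total / conf.sum() * iou).sum() # frequency weighted IoU
    return p_acc, m_acc, m_iou, fw_iou

# Tiny worked example: 4 pixels, 2 classes, one misclassified pixel.
pred = np.array([[0, 1], [1, 1]])
gt   = np.array([[0, 0], [1, 1]])
p_acc, m_acc, m_iou, fw_iou = segmentation_metrics(pred, gt, 2)
# p_acc = 0.75, m_acc = 0.75
```

Note that mAcc and mIoU weight every class equally, so a rare class (such as the small follicular areas in a mostly blank WSI) drags them down, while pAcc and fwIoU are dominated by the abundant background; this is why the WSI-level mIoU above is much lower than its pAcc and fwIoU.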
Funding: Supported by the National Natural Science Foundation of China (Nos. 61876212 and 1733007), Zhejiang Laboratory, China (No. 2019NB0AB02), and the Hubei Province College Students Innovation and Entrepreneurship Training Program, China (No. S202010487058).
Abstract: A panoptic driving perception system is an essential part of autonomous driving. A high-precision, real-time perception system can assist the vehicle in making reasonable decisions while driving. We present a panoptic driving perception network, You Only Look Once for Panoptic (YOLOP), that performs traffic object detection, drivable area segmentation, and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders that handle the specific tasks. Our model performs extremely well on the challenging BDD100K dataset, achieving state-of-the-art results on all three tasks in terms of accuracy and speed. In addition, we verify the effectiveness of our multi-task learning model for joint training via ablation studies. To the best of our knowledge, this is the first work that can process these three visual perception tasks simultaneously in real time on an embedded device (Jetson TX2, 23 FPS) while maintaining excellent accuracy. To facilitate further research, the source code and pre-trained models are released at https://github.com/hustvl/YOLOP.
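The one-encoder, three-decoder layout described above can be sketched structurally as follows. This is a toy stand-in with illustrative class names, not YOLOP's actual architecture or API (the real implementation is in the linked repository); the point it shows is that the backbone features are computed once and shared by all three task heads:

```python
import numpy as np

class SharedEncoder:
    """Stand-in for the backbone: maps an image to a feature map."""
    def __call__(self, image):
        # A real encoder would be a CNN; here we merely downsample 4x.
        return image[::4, ::4, :]

class Decoder:
    """Stand-in for one task head operating on shared features."""
    def __init__(self, task):
        self.task = task
    def __call__(self, features):
        # Each head would emit task-specific outputs (boxes, a
        # drivable-area mask, or a lane mask); we return metadata only.
        return {"task": self.task, "feature_shape": features.shape}

class PanopticPerception:
    """One forward pass feeds all three heads from shared features,
    so the backbone cost is paid once for the three tasks."""
    def __init__(self):
        self.encoder = SharedEncoder()
        self.heads = [Decoder(t) for t in
                      ("detection", "drivable_area", "lane")]
    def __call__(self, image):
        feats = self.encoder(image)              # computed once
        return [head(feats) for head in self.heads]

model = PanopticPerception()
outputs = model(np.zeros((256, 256, 3)))         # three task outputs
```

Sharing the encoder is what makes joint real-time inference on a Jetson TX2 plausible: the per-task cost is reduced to a lightweight decoder instead of a full network per task.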