Journal Articles
2 articles found
1. Rethinking Polyp Segmentation from an Out-of-distribution Perspective
Authors: Ge-Peng Ji, Jing Zhang, Dylan Campbell, Huan Xiong, Nick Barnes. Machine Intelligence Research (EI, CSCD), 2024, Issue 4, pp. 631-639 (9 pages)
Unlike existing fully-supervised approaches, we rethink colorectal polyp segmentation from an out-of-distribution perspective with a simple but effective self-supervised learning approach. We leverage the ability of masked autoencoders (self-supervised vision transformers trained on a reconstruction task) to learn in-distribution representations, here, the distribution of healthy colon images. We then perform out-of-distribution reconstruction and inference, with feature-space standardisation to align the latent distribution of the diverse abnormal samples with the statistics of the healthy samples. We generate per-pixel anomaly scores for each image by calculating the difference between the input and reconstructed images, and use this signal for out-of-distribution (i.e., polyp) segmentation. Experimental results on six benchmarks show that our model has excellent segmentation performance and generalises across datasets. Our code is publicly available at https://github.com/GewelsJI/Polyp-OOD.
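The scoring step described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation (which is in the linked repository): the function names `anomaly_scores`, `segment_polyps`, and `standardise_features`, and the 0.1 threshold, are all illustrative assumptions.

```python
import numpy as np

def standardise_features(feats, healthy_mean, healthy_std, eps=1e-6):
    """Align latent features with the statistics of healthy samples
    (feature-space standardisation, as described in the abstract)."""
    return (feats - healthy_mean) / (healthy_std + eps)

def anomaly_scores(image, reconstruction):
    """Per-pixel anomaly score: absolute difference between the input
    and its reconstruction, averaged over colour channels."""
    diff = np.abs(image.astype(np.float32) - reconstruction.astype(np.float32))
    return diff.mean(axis=-1)

def segment_polyps(image, reconstruction, threshold=0.1):
    """Binary out-of-distribution (polyp) mask: pixels whose
    reconstruction error exceeds an illustrative threshold."""
    return anomaly_scores(image, reconstruction) > threshold
```

The intuition is that an autoencoder trained only on healthy colon images reconstructs healthy tissue well but reconstructs polyps poorly, so large reconstruction error marks out-of-distribution pixels.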
Keywords: polyp segmentation, anomaly segmentation, out-of-distribution segmentation, masked autoencoder, abdomen
2. Editorial for Special Issue on Multi-modal Representation Learning
Authors: Deng-Ping Fan, Nick Barnes, Ming-Ming Cheng, Luc Van Gool. Machine Intelligence Research (EI, CSCD), 2024, Issue 4, pp. 615-616 (2 pages)
The past decade has witnessed the impressive and steady development of single-modal AI technologies in several fields, thanks to the emergence of deep learning. Less studied, however, is multi-modal AI (commonly considered the next generation of AI), which utilizes complementary context concealed in different-modality inputs to improve performance. Humans naturally learn to form a global concept from multiple modalities (i.e., sight, hearing, touch, smell, and taste), even when some are incomplete or missing. Thus, in addition to the two popular modalities (vision and language), other types of data such as depth, infrared information, and events are also important for multi-modal learning in real-world scenes.
Keywords: modal, utilize, incomplete