Journal Articles
2 articles found
1. Natural Image Matting with Attended Global Context
Authors: 张億一, 牛力, Yasushi Makihara, 张健夫, 赵维杰, Yasushi Yagi, 张丽清. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, No. 3, pp. 659-673 (15 pages).
Image matting aims to estimate the opacity of foreground objects in an image. A few deep learning based methods have been proposed for image matting and perform well in capturing spatially close information. However, these methods fail to capture global contextual information, which has been proven essential for improving matting performance. This is because a matting image may be up to several megapixels, which is too large for a learning-based network to capture global contextual information given the limited size of its receptive field. Although uniformly downsampling the matting image can alleviate this problem, it may degrade matting performance. To address this, we introduce natural image matting with an attended global context method, which extracts global contextual information from the whole image and condenses it into a size suitable for a learning-based network. Specifically, we first leverage a deformable sampling layer to obtain condensed foreground and background attended images, respectively. Then, we utilize a contextual attention layer to extract information related to unknown regions from the condensed foreground and background images generated by the deformable sampling layer. In addition, our network predicts a background as well as the alpha matte to obtain a purer foreground, which contributes to better qualitative performance in composition. Comprehensive experiments show that our method achieves competitive performance on both the Composition-1k and alphamatting.com benchmarks, quantitatively and qualitatively.
Keywords: image matting; global context; deformable sampling
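To make the contextual-attention step described in the abstract above concrete, here is a minimal PyTorch sketch (an illustration under stated assumptions, not the authors' implementation): features around the unknown regions act as queries over a condensed context, such as the attended foreground/background images produced by a deformable sampling layer, and the matched context features are gathered back per query pixel. All tensor shapes, names, and the scaled-dot-product formulation are assumptions for illustration.

```python
# Sketch of a contextual-attention step over condensed context features.
# Not the paper's code; shapes and the attention formulation are assumed.
import torch

def contextual_attention(query_feat, context_feat):
    """query_feat:   (B, C, Hq, Wq) features around the unknown regions
       context_feat: (B, C, Hc, Wc) condensed foreground/background features"""
    B, C, Hq, Wq = query_feat.shape
    q = query_feat.flatten(2).transpose(1, 2)           # (B, Hq*Wq, C)
    k = context_feat.flatten(2)                         # (B, C, Hc*Wc)
    attn = torch.softmax(q @ k / C ** 0.5, dim=-1)      # similarity of each query pixel to each context pixel
    v = context_feat.flatten(2).transpose(1, 2)         # (B, Hc*Wc, C)
    out = (attn @ v).transpose(1, 2).reshape(B, C, Hq, Wq)
    return out                                          # context gathered back per query pixel

# Example: 64x64 unknown-region features attend over an 8x8 condensed context.
q = torch.randn(1, 32, 64, 64)
ctx = torch.randn(1, 32, 8, 8)
print(contextual_attention(q, ctx).shape)               # torch.Size([1, 32, 64, 64])
```

The point of the sketch is only that the condensed context can be small (here 8x8) while the query resolution stays high, which is how global information becomes affordable for a learning-based network.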
2. How Convolutional Neural Networks Diagnose Plant Disease (cited by 6)
Authors: Yosuke Toda, Fumio Okura. Plant Phenomics, 2019, No. 1, pp. 223-236 (14 pages).
Deep learning with convolutional neural networks (CNNs) has achieved great success in the classification of various plant diseases. However, a limited number of studies have elucidated the process of inference, leaving it as an untouchable black box. Revealing the features learned by a CNN in an interpretable form not only ensures its reliability but also enables validation of the model's authenticity and of the training dataset by human intervention. In this study, a variety of neuron-wise and layer-wise visualization methods were applied to a CNN trained with a publicly available plant disease image dataset. We showed that neural networks can capture the colors and textures of lesions specific to respective diseases upon diagnosis, which resembles human decision-making. While several visualization methods were used as they are, others had to be optimized to target a specific layer that fully captures the features in order to generate consequential outputs. Moreover, by interpreting the generated attention maps, we identified several layers that were not contributing to inference and removed them from the network, decreasing the number of parameters by 75% without affecting the classification accuracy. The results provide an impetus for CNN black-box users in the field of plant science to better understand the diagnosis process and lead to further efficient use of deep learning for plant disease diagnosis.
Keywords: DIAGNOSIS; removed; INTERPRETING
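As a rough illustration of the layer-wise inspection described above, the following is a minimal PyTorch sketch (an assumption, not the paper's code) that captures intermediate feature maps with forward hooks so per-layer activation statistics or visualizations can be examined, e.g. to spot layers that contribute little and are candidates for removal. The network, class count, and input are placeholders; the study used its own CNN trained on a public plant disease image dataset.

```python
# Sketch of layer-wise feature-map capture with forward hooks.
# Model, class count, and input are placeholders, not the paper's setup.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),                      # placeholder class count
).eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # store the layer's feature maps
    return hook

for idx, layer in enumerate(model):
    if isinstance(layer, nn.Conv2d):
        layer.register_forward_hook(make_hook(f"conv_{idx}"))

x = torch.randn(1, 3, 224, 224)              # placeholder for a leaf image
with torch.no_grad():
    model(x)

for name, fmap in activations.items():
    print(name, tuple(fmap.shape), round(float(fmap.abs().mean()), 4))
```

Layers whose feature maps stay nearly inactive across many inputs would be the kind of non-contributing layers the authors report pruning, although the paper's own selection criterion is based on interpreting the generated attention maps.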