Journal literature: 2 articles found
1. Examining data visualization pitfalls in scientific publications
Authors: Vinh T. Nguyen, Kwanghee Jung, Vibhuti Gupta. Visual Computing for Industry, Biomedicine, and Art (EI), 2021, Issue 1, pp. 268-282 (15 pages).
Data visualization blends art and science to convey stories from data via graphical representations. Considering different problems, applications, requirements, and design goals, it is challenging to combine these two components at their full force. While the art component involves creating visually appealing and easily interpreted graphics for users, the science component requires accurate representations of a large amount of input data. Without the science component, visualization cannot serve its role of creating correct representations of the actual data, leading to incorrect perception, interpretation, and decisions. The situation is even worse when incorrect visual representations are intentionally produced to deceive viewers. To address common pitfalls in graphical representations, this paper focuses on identifying and understanding the root causes of misinformation in graphical representations. We reviewed misleading data visualization examples in scientific publications collected from indexing databases and then projected them onto the fundamental units of visual communication, such as color, shape, size, and spatial orientation. Moreover, a text mining technique was applied to extract practical insights from common visualization pitfalls. Cochran's Q test and McNemar's test were conducted to examine whether there is any difference in the proportions of common errors among color, shape, size, and spatial orientation. The findings showed that the pie chart is the most misused graphical representation and that size is the most critical issue. It was also observed that there were statistically significant differences in the proportions of errors among color, shape, size, and spatial orientation.
Keywords: data visualization; graphical representations; misinformation; visual encodings; association rule mining; word cloud; Cochran's Q test; McNemar's test
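The statistical comparison described in the abstract can be outlined with standard tooling. The following is a minimal sketch, not the authors' code: it assumes a hypothetical, made-up binary error matrix (rows are reviewed figures, columns are the four encoding channels: color, shape, size, spatial orientation) and runs Cochran's Q test across all channels plus one pairwise McNemar test using statsmodels.

import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

# Hypothetical binary matrix: each row is one reviewed figure, each column
# flags whether an error involves that channel (color, shape, size, orientation).
errors = np.array([
    [1, 0, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
])

# Cochran's Q test: do error proportions differ across the four channels?
q = cochrans_q(errors)
print(f"Cochran's Q = {q.statistic:.3f}, p = {q.pvalue:.4f}")

# Pairwise follow-up with McNemar's test, e.g. color vs. size, built from
# the 2x2 table of agreements and disagreements between the two channels.
color, size = errors[:, 0], errors[:, 2]
table = np.array([
    [np.sum((color == 1) & (size == 1)), np.sum((color == 1) & (size == 0))],
    [np.sum((color == 0) & (size == 1)), np.sum((color == 0) & (size == 0))],
])
m = mcnemar(table, exact=True)
print(f"McNemar (color vs. size): p = {m.pvalue:.4f}")

With data of this shape, Cochran's Q answers the omnibus question of whether the channels differ, and McNemar's test localizes which pair of channels differs, mirroring the kind of comparison reported in the abstract.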
2. Exploring the Brain-like Properties of Deep Neural Networks: A Neural Encoding Perspective (cited by: 1)
Authors: Qiongyi Zhou, Changde Du, Huiguang He. Machine Intelligence Research (EI, CSCD), 2022, Issue 5, pp. 439-455 (17 pages).
Nowadays, deep neural networks (DNNs) have been equipped with powerful representation capabilities. Deep convolutional neural networks (CNNs), which draw inspiration from the visual processing mechanism of the primate early visual cortex, have outperformed humans on object categorization and have been found to possess many brain-like properties. Recently, vision transformers (ViTs) have become striking paradigms of DNNs and have achieved remarkable improvements on many vision tasks compared to CNNs. It is natural to ask how brain-like the properties of ViTs are. Beyond the model paradigm, we are also interested in the effects of factors such as model size, multimodality, and temporality on the ability of networks to model the human visual pathway, especially considering that existing research has been limited to CNNs. In this paper, we systematically evaluate the brain-like properties of 30 kinds of computer vision models, ranging from CNNs and ViTs to their hybrids, from the perspective of explaining brain activities of the human visual cortex triggered by dynamic stimuli. Experiments on two neural datasets demonstrate that neither CNNs nor transformers are the optimal model paradigm for modelling the human visual pathway. ViTs reveal hierarchical correspondences to the visual pathway, as CNNs do. Moreover, we find that multi-modal and temporal networks can better explain the neural activities of large parts of the visual cortex, whereas a larger model size is not a sufficient condition for bridging the gap between human vision and artificial networks. Our study sheds light on the design principles for more brain-like networks. The code is available at https://github.com/QYiZhou/LWNeuralEncoding.
Keywords: convolutional neural network (CNN); vision transformer (ViT); multi-modal networks; spatial-temporal networks; visual neural encoding; brain-like neural networks
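For context, the neural-encoding evaluation the abstract refers to typically takes the form of a voxel-wise linear encoding model. The sketch below is illustrative only and is not the authors' pipeline (their code is at https://github.com/QYiZhou/LWNeuralEncoding): it assumes hypothetical arrays of DNN-layer features and fMRI responses, fits a ridge regression mapping features to voxels, and scores predictions on held-out stimuli with Pearson correlation.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 512, 100  # hypothetical sizes

features = rng.normal(size=(n_stimuli, n_features))   # DNN layer activations per stimulus
responses = rng.normal(size=(n_stimuli, n_voxels))    # measured fMRI voxel responses

X_tr, X_te, y_tr, y_te = train_test_split(
    features, responses, test_size=0.2, random_state=0)

# Fit one linear map from features to all voxels (multi-output ridge regression).
encoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = encoder.predict(X_te)

# Encoding score: correlation between predicted and measured responses,
# computed voxel by voxel and then averaged over the region of interest.
scores = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxel-wise correlation: {np.mean(scores):.3f}")

Repeating such a fit per network layer and per brain region yields a layer-to-region correspondence profile of the kind used to compare how well CNNs, ViTs, and their hybrids explain activity along the visual pathway.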