Data visualization blends art and science to convey stories from data via graphical representations. Given the variety of problems, applications, requirements, and design goals, it is challenging to combine these two components at their full force. While the art component involves creating visually appealing and easily interpreted graphics for users, the science component requires accurate representations of large amounts of input data. Without the science component, visualization cannot serve its role of correctly representing the actual data, leading to wrong perceptions, interpretations, and decisions. It is even worse when incorrect visual representations are intentionally produced to deceive viewers. To address common pitfalls in graphical representations, this paper focuses on identifying and understanding the root causes of misinformation in graphical representations. We reviewed misleading data visualization examples in scientific publications collected from indexing databases and then projected them onto the fundamental units of visual communication, such as color, shape, size, and spatial orientation. Moreover, a text mining technique was applied to extract practical insights from common visualization pitfalls. Cochran's Q test and McNemar's test were conducted to examine whether the proportions of common errors differ among color, shape, size, and spatial orientation. The findings showed that the pie chart is the most misused graphical representation and that size is the most critical issue. Statistically significant differences in error proportions were also observed among color, shape, size, and spatial orientation.
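The two tests named above can be sketched in plain Python. The binary error indicators below (one row per reviewed chart, one column per visual channel) are made up for illustration; they are not the paper's data, and the functions are a minimal textbook formulation rather than the authors' code.

```python
def cochrans_q(X):
    """Cochran's Q statistic for k related binary samples.

    X: list of rows (subjects), each a list of k binary outcomes.
    Q is approximately chi-square distributed with k - 1 degrees
    of freedom under the null of equal proportions.
    """
    k = len(X[0])
    col = [sum(row[j] for row in X) for j in range(k)]   # column totals C_j
    row_tot = [sum(row) for row in X]                    # row totals R_i
    N = sum(col)
    num = (k - 1) * (k * sum(c * c for c in col) - N * N)
    den = k * N - sum(r * r for r in row_tot)
    return num / den

def mcnemar_chi2(X, i, j):
    """McNemar chi-square (no continuity correction) for columns i, j.

    Uses only the discordant pairs b (i=1, j=0) and c (i=0, j=1).
    """
    b = sum(1 for row in X if row[i] == 1 and row[j] == 0)
    c = sum(1 for row in X if row[i] == 0 and row[j] == 1)
    return (b - c) ** 2 / (b + c)

# Hypothetical indicators: rows = charts,
# columns = (color, shape, size, orientation).
X = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 0, 1, 0],
]
print(round(cochrans_q(X), 3))          # → 3.353 (overall, 3 df)
print(round(mcnemar_chi2(X, 2, 3), 3))  # → 3.0 (size vs. orientation)
```

In practice the omnibus Cochran's Q test is run first, with pairwise McNemar tests (and a multiple-comparison correction) used as follow-ups, which matches the two-stage analysis described in the abstract.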
Deep neural networks (DNNs) are now equipped with powerful representation capabilities. Deep convolutional neural networks (CNNs), which draw inspiration from the visual processing mechanism of the primate early visual cortex, have outperformed humans on object categorization and have been found to possess many brain-like properties. Recently, vision transformers (ViTs) have become a striking DNN paradigm and have achieved remarkable improvements over CNNs on many vision tasks. It is natural to ask to what extent ViTs are brain-like. Beyond the model paradigm, we are also interested in how factors such as model size, multimodality, and temporality affect a network's ability to model the human visual pathway, especially since existing research has been limited to CNNs. In this paper, we systematically evaluate the brain-like properties of 30 computer vision models, ranging from CNNs and ViTs to their hybrids, by how well they explain activity in the human visual cortex triggered by dynamic stimuli. Experiments on two neural datasets demonstrate that neither the CNN nor the transformer is the optimal model paradigm for modelling the human visual pathway. ViTs reveal hierarchical correspondences to the visual pathway, as CNNs do. Moreover, we find that multi-modal and temporal networks better explain the neural activity of large parts of the visual cortex, whereas a larger model size is not a sufficient condition for bridging the gap between human vision and artificial networks. Our study sheds light on design principles for more brain-like networks. The code is available at https://github.com/QYiZhou/LWNeuralEncoding.
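Evaluations of this kind typically use voxel-wise encoding models: a regularized linear regression maps a network's activations to measured brain responses, and the Pearson correlation between predicted and actual held-out responses scores how well the model explains each voxel. The sketch below, with synthetic data and a closed-form ridge solution, illustrates that general pipeline under stated assumptions; it is not the code from the linked repository.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def encoding_score(X_tr, y_tr, X_te, y_te, lam=1.0):
    """Pearson r between predicted and measured held-out responses."""
    w = ridge_fit(X_tr, y_tr, lam)
    pred = X_te @ w
    return np.corrcoef(pred, y_te)[0, 1]

rng = np.random.default_rng(0)
# Synthetic stand-ins: 200 stimuli, 16 model features, one voxel
# whose response is a noisy linear readout of the features.
X = rng.standard_normal((200, 16))
w_true = rng.standard_normal(16)
y = X @ w_true + 0.1 * rng.standard_normal(200)
r = encoding_score(X[:150], y[:150], X[150:], y[150:])
print(r)  # high correlation, since this voxel really is near-linear
```

Repeating this per voxel and per network layer yields the layer-to-region correspondence maps that the abstract refers to as hierarchical correspondences between models and the visual pathway.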
Funding: supported by the National Natural Science Foundation of China (Nos. 61976209 and 62020106015), the CAS International Collaboration Key Project, China (No. 173211KYSB20190024), and the Strategic Priority Research Program of CAS, China (No. XDB32040000).