Funding: Israel Science Foundation (Grant Nos. 2366/16 and 2472/17).
Abstract: Visualizing high-dimensional data on a 2D canvas is generally challenging, and it becomes significantly more difficult when multiple time-steps must be presented, as visual clutter increases quickly. Perceiving significant temporal evolution is an even greater challenge. In this paper, we present a method for plotting temporal high-dimensional data in a static scatterplot; it uses the established PCA technique to project data from multiple time-steps. The key idea is to extend each individual displacement prior to applying PCA, so as to skew the projection process and obtain a projection plane that balances the directions of temporal change and spatial variance. We present numerous examples and various visual cues that highlight the data trajectories, and demonstrate the effectiveness of the method for visualizing temporal data.
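A minimal sketch of the projection idea described in the abstract, assuming data of shape (T, N, D) and reading "extend each displacement" as scaling every point's offset from its first-time-step position by a factor alpha before fitting PCA; the extension scheme, the alpha parameter, and the choice to project the unextended data are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: scale per-point temporal displacements before PCA so the
# resulting projection plane reflects both spatial variance and temporal change.
import numpy as np
from sklearn.decomposition import PCA

def temporal_pca(X, alpha=2.0, n_components=2):
    """X: array of shape (T, N, D) -- N points in D dimensions over T time-steps.
    alpha: factor by which each point's displacement from its first position is extended.
    Returns the projection of every time-step, shape (T, N, n_components)."""
    T, N, D = X.shape
    base = X[0]                                   # reference positions at the first time-step
    # Extend each point's displacement from its reference position by alpha.
    X_ext = base[None] + alpha * (X - base[None])
    # Fit PCA on the extended point cloud so temporal directions gain weight.
    pca = PCA(n_components=n_components).fit(X_ext.reshape(T * N, D))
    # Project the original (unextended) data onto the skewed projection plane.
    return pca.transform(X.reshape(T * N, D)).reshape(T, N, n_components)

# Usage sketch: 200 points in 10-D drifting over 5 time-steps.
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(5, 200, 10)) * 0.1, axis=0) + rng.normal(size=(1, 200, 10))
Y = temporal_pca(X, alpha=3.0)
print(Y.shape)  # (5, 200, 2)
```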
Funding: Supported in part by the National Natural Science Foundation of China (62161146005, U21B2023), the Shenzhen Science and Technology Program (KQTD20210811090044003, RCJC20200714114435012), and the Israel Science Foundation.
Abstract: This study introduces CLIP-Flow, a novel network for generating images from a given image or text. To effectively utilize the rich semantics contained in both modalities, we designed a semantics-guided methodology for image- and text-to-image synthesis. In particular, we adopted Contrastive Language-Image Pretraining (CLIP) as an encoder to extract semantics and StyleGAN as a decoder to generate images from this information. Moreover, to bridge the embedding space of CLIP and the latent space of StyleGAN, RealNVP is employed and modified with activation normalization and invertible convolution. As images and text in CLIP share the same representation space, text prompts can be fed directly into CLIP-Flow to achieve text-to-image synthesis. We conducted extensive experiments on several datasets to validate the effectiveness of the proposed image-to-image synthesis method. In addition, we tested text-to-image synthesis on the public Multi-Modal CelebA-HQ dataset. The experiments validated that our approach generates high-quality text-matching images and is comparable with state-of-the-art methods, both qualitatively and quantitatively.
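A hedged sketch of the bridging idea mentioned above, assuming 512-D CLIP embeddings and a Glow-style flow step (activation normalization, an invertible 1x1 "convolution" realized as a learned square matrix for vector inputs, and an affine coupling layer); the module names, dimensions, and the stand-in for the CLIP encoder and StyleGAN decoder are placeholders, not the CLIP-Flow implementation.

```python
# Hypothetical sketch: a stack of invertible flow steps that maps a CLIP
# embedding toward a latent that a StyleGAN-like decoder could consume.
import torch
import torch.nn as nn

class FlowStep(nn.Module):
    def __init__(self, dim=512, hidden=1024):
        super().__init__()
        # Activation normalization: learned per-channel scale and bias.
        self.act_scale = nn.Parameter(torch.ones(dim))
        self.act_bias = nn.Parameter(torch.zeros(dim))
        # Invertible 1x1 convolution analogue: a learned (orthogonally initialized) square matrix.
        self.W = nn.Parameter(torch.linalg.qr(torch.randn(dim, dim))[0])
        # Coupling network: predicts scale and shift for half of the channels.
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        # ActNorm
        x = x * self.act_scale + self.act_bias
        # Invertible linear mixing
        x = x @ self.W
        # Affine coupling: transform the second half conditioned on the first half.
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        x2 = x2 * torch.exp(log_s) + t
        return torch.cat([x1, x2], dim=-1)

# Usage sketch: a CLIP image/text embedding (assumed 512-D) is pushed through
# several flow steps to obtain a latent for a StyleGAN-like generator.
flow = nn.Sequential(*[FlowStep(512) for _ in range(4)])
clip_embedding = torch.randn(8, 512)   # placeholder for CLIP encode_image / encode_text output
w_latent = flow(clip_embedding)        # mapped toward the generator's latent space
print(w_latent.shape)                  # torch.Size([8, 512])
```

Because every step is invertible, the same stack can in principle be run in reverse, which is the property that lets a normalizing flow bridge two fixed representation spaces.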