Funding: Supported by the National Natural Science Foundation of China (No. 60673024)
Abstract: Objective evaluations of fused images are important for comparing the performance of different image fusion algorithms. This paper describes a structural similarity metric that requires no reference image for image fusion evaluation. The metric is based on the universal image quality index and considers not only the similarities between the input images and the fused image, but also the similarities among the input images themselves. The evaluation process uses the similarities among the input images to distinguish complementary information from redundant information, and this classification is then used to estimate how much structural similarity is preserved in the fused image. Tests demonstrate that the metric correlates well with subjective evaluations of the fused images.
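The building block of the metric above is the universal image quality index (UIQI) of Wang and Bovik, computed over local windows of two images. The following is a minimal sketch of that index for two equal-length pixel sequences; the weighting scheme that combines per-window scores across the input images and the fused image follows the paper's own classification of complementary vs. redundant regions and is not reproduced here. Population (divide-by-N) statistics are an implementation assumption.

```python
from statistics import mean

def uiqi(x, y):
    """Universal Image Quality Index for two equal-length pixel
    sequences (e.g. corresponding local windows of two images).
    Returns a value in [-1, 1]; 1 means the windows are identical."""
    n = len(x)
    mx, my = mean(x), mean(y)
    # Population variances and covariance (an assumption; some
    # implementations use the sample (N-1) estimators instead).
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Single-expression form: combines loss of correlation,
    # luminance distortion, and contrast distortion.
    return (4 * cxy * mx * my) / ((vx + vy) * (mx * mx + my * my))

# Identical windows attain the maximum score of 1.
print(uiqi([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```

In a fusion metric of this kind, `uiqi(input_i, fused)` is evaluated per window and the per-window scores are weighted by how similar the input images are to each other in that window.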
Funding: Supported by the National Natural Science Foundation of China (Nos. 61271361, 61163019, 61462093 and 61761046), the Research Foundation of Yunnan Province (Nos. 2014FA021 and 2014FB113), and the Digital Media Technology Key Laboratory of Universities in Yunnan Province
Abstract: This paper proposes an embedded-learning convolutional neural network (ELCNN) based on image content for evaluating image aesthetic quality. The approach both mitigates the problem of small-scale data and scores image aesthetic quality. First, AlexNet and VGG_S are compared to determine which is better suited to the aesthetic quality evaluation task. Second, to further boost classification performance, image content is used to train the aesthetic quality classification models; however, this makes the training samples smaller, and a single fine-tuning pass cannot make full use of the small-scale data set. Third, to address this, the network is fine-tuned twice in succession, on the aesthetic quality labels and on the content labels respectively, and the classification probability of the trained CNN models is used to score image aesthetic quality. Experiments on the small-scale Photo Quality data set show that the classification accuracy of this approach is higher than that of existing image aesthetic quality evaluation approaches.
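The "twice fine-tuning" schedule described above can be sketched as a two-stage control flow: adapt an ImageNet-pretrained backbone on the aesthetic labels first, then continue fine-tuning the same weights on the content labels, so the small data set is reused once per label type. The helper names and the dict-based model below are hypothetical stand-ins for an actual deep-learning pipeline, not the paper's implementation.

```python
def finetune(model, dataset, label_key):
    """Placeholder for one fine-tuning pass; here it only records
    which label set the stage trained on."""
    model["stages"].append(label_key)
    return model

def twice_finetune(pretrained, data):
    """Fine-tune the same pretrained backbone twice in succession,
    reusing the small data set with a different label each time."""
    model = dict(pretrained, stages=[])
    # Stage 1: adapt generic ImageNet features with aesthetic labels.
    model = finetune(model, data, "aesthetic_label")
    # Stage 2: continue from stage-1 weights with content labels.
    model = finetune(model, data, "content_label")
    return model

model = twice_finetune({"backbone": "AlexNet"}, data=[])
print(model["stages"])  # ['aesthetic_label', 'content_label']
```

The key design point is that stage 2 starts from the stage-1 weights rather than from the original pretrained weights, which is what lets a single small data set be exploited twice.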
Funding: Supported by the National Natural Science Foundation of China (Project Nos. 61521002 and 61772298)
Abstract: In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. While works enabling such automatic image content generation have classically followed a framework of image retrieval and composition, recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent work on image synthesis from intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. This motivates new perspectives on input representation and interactivity, cross-fertilization between major image generation paradigms, and evaluation and comparison of generation methods.