Funding: Project supported by the National Basic Research Program (973) of China (No. 2002CB312101) and the National Natural Science Foundation of China (Nos. 60403038 and 60373037)
Abstract: We present an approach for generating paintings from photographic images in the style encoded by example paintings, adopting representative brushes extracted from those examples as the painting primitives. Our system first divides the given photograph into several regions, on which it synthesizes a grounding layer from texture patches extracted from the example paintings. It then paints those regions with brushes stochastically chosen from the brush library, applying further perturbations to brush color and shape. Brush direction is determined by a direction field that is either constructed by the user in a convenient interactive manner or synthesized from the examples. Our approach offers flexible and intuitive user control over the painting process and style.
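To make the stroke-based pipeline concrete, the following is a minimal, self-contained Python sketch: strokes are placed stochastically, their colors are perturbed, and their orientation follows a direction field derived from image gradients. It does not use brushes or grounding textures extracted from example paintings, and all function names and parameter values are illustrative assumptions rather than the authors' implementation.

```python
# Simplified stroke-based rendering sketch (illustrative only; not the
# authors' example-based method). Strokes are flat elliptical stamps.
import numpy as np

def gradient_direction_field(gray):
    """Per-pixel stroke angle, perpendicular to the image gradient."""
    gy = np.gradient(gray, axis=0)
    gx = np.gradient(gray, axis=1)
    return np.arctan2(gy, gx) + np.pi / 2.0

def stamp_stroke(canvas, cy, cx, color, angle, length=9, width=3):
    """Stamp one oriented elliptical stroke of flat color onto the canvas."""
    h, w, _ = canvas.shape
    ys, xs = np.mgrid[-length:length + 1, -length:length + 1]
    # Rotate the local offsets into the stroke's own frame.
    u = xs * np.cos(angle) + ys * np.sin(angle)
    v = -xs * np.sin(angle) + ys * np.cos(angle)
    inside = (u / length) ** 2 + (v / width) ** 2 <= 1.0
    for dy, dx in zip(*np.nonzero(inside)):
        y, x = cy + dy - length, cx + dx - length
        if 0 <= y < h and 0 <= x < w:
            canvas[y, x] = color

def paint(photo, rng, n_strokes=4000, color_jitter=0.05):
    """Cover the photo with randomly placed, perturbed, oriented strokes."""
    h, w, _ = photo.shape
    angles = gradient_direction_field(photo.mean(axis=2))
    canvas = np.ones_like(photo)  # blank canvas; the grounding layer is not modeled
    for _ in range(n_strokes):
        y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
        # Color perturbation, analogous to the brush color jitter described above.
        color = np.clip(photo[y, x] + rng.normal(0.0, color_jitter, 3), 0.0, 1.0)
        stamp_stroke(canvas, y, x, color, angles[y, x])
    return canvas

# Toy usage with a random image standing in for a real photograph.
rng = np.random.default_rng(0)
photo = rng.random((128, 128, 3))
painting = paint(photo, rng)
```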
Funding: Partially supported by the National Natural Science Foundation of China (No. 61572250) and the Science and Technology Program of Jiangsu Province (No. BE2017155)
Abstract: Recently, topic models such as Latent Dirichlet Allocation (LDA) have been widely used in large-scale web mining. Many large-scale LDA training systems have been developed; they usually adopt a customized top-to-bottom design with sophisticated synchronization support. We propose an LDA training system named ZenLDA, which follows a generalized design for distributed data-parallel platforms. The novelty of ZenLDA consists of three main aspects: (1) it converts the commonly used serial Collapsed Gibbs Sampling (CGS) inference algorithm into a Monte-Carlo Collapsed Bayesian (MCCB) estimation method, which is embarrassingly parallel; (2) it decomposes the LDA inference formula into parts that can be sampled more efficiently, reducing computational complexity; (3) it proposes a distributed LDA training framework that represents the corpus as a directed graph with the parameters annotated as corresponding vertices, and implements ZenLDA and other well-known inference methods on Spark. Experimental results indicate that MCCB converges with accuracy similar to that of CGS while running much faster. On top of MCCB, the ZenLDA formula decomposition achieved the fastest speed among the well-known inference methods compared. ZenLDA also showed good scalability when handling large-scale topic models on the data-parallel platform. Overall, ZenLDA achieves computing performance comparable to, and even better than, that of state-of-the-art dedicated systems.
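For reference, the following is a minimal, self-contained Python sketch of the serial Collapsed Gibbs Sampling (CGS) baseline that the abstract contrasts with MCCB. It is not ZenLDA's MCCB estimator, formula decomposition, or Spark-based graph framework; the toy corpus, hyperparameters, and function names are illustrative assumptions.

```python
# Minimal serial CGS for LDA (baseline sketch, not ZenLDA itself).
import numpy as np

def collapsed_gibbs_lda(docs, vocab_size, n_topics, alpha=0.1, beta=0.01,
                        n_iters=200, seed=0):
    """docs: list of token-id lists. Returns doc-topic and topic-word counts."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, vocab_size))  # topic-word counts
    nk = np.zeros(n_topics)                 # per-topic token totals
    z = [rng.integers(0, n_topics, size=len(d)) for d in docs]

    # Initialize counts from the random topic assignments.
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the current assignment before resampling.
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Full conditional p(z = k | rest): the inference formula that
                # ZenLDA decomposes into more efficiently sampled parts.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw

# Tiny toy corpus: token ids over a vocabulary of 6 words, 2 topics.
docs = [[0, 1, 2, 0, 1], [3, 4, 5, 3, 4], [0, 2, 4, 5]]
ndk, nkw = collapsed_gibbs_lda(docs, vocab_size=6, n_topics=2)
```

Each token's topic is resampled from its full conditional one token at a time, which is why plain CGS is serial; the MCCB estimator described above reworks this step so that tokens can be sampled in an embarrassingly parallel fashion.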