Funding: Supported by the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (No. 2022R1A2C1091890).
Abstract: We report an experimental and theoretical investigation of tilted spatiotemporal optical vortices with partial temporal coherence. The theoretical study shows that the direction of the instantaneous orbital angular momentum (OAM) of a spatiotemporal optical vortex varies widely about the statistical OAM direction. While decreasing the temporal coherence results in larger variability of the OAM tilt, the average OAM direction remains relatively unchanged.
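This statistical picture can be illustrated with a toy model (a minimal sketch; the Gaussian tilt distribution and the 0.1/coherence spread are assumptions for illustration, not the paper's actual statistics): sample instantaneous OAM tilt angles whose spread grows as the temporal coherence drops, while their mean direction stays fixed.

```python
import numpy as np

def oam_tilt_samples(coherence, n=20000, seed=0):
    """Draw instantaneous OAM tilt angles (radians) about a mean
    direction of 0. The angular spread is modeled, hypothetically,
    as inversely proportional to the degree of temporal coherence."""
    rng = np.random.default_rng(seed)
    sigma = 0.1 / coherence  # assumed spread model, not from the paper
    return rng.normal(0.0, sigma, n)

for mu in (0.9, 0.5, 0.2):  # degree of temporal coherence
    tilts = oam_tilt_samples(mu)
    print(f"coherence={mu:.1f}: mean tilt={tilts.mean():+.4f} rad, "
          f"spread={tilts.std():.3f} rad")
```

Lower coherence widens the spread of instantaneous tilts, but the sample mean stays near the statistical direction, mirroring the qualitative conclusion above.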
Abstract: We measure the electromagnetic degree of temporal coherence and the associated coherence time for quasi-monochromatic unpolarized light beams emitted by an LED, a filtered halogen lamp, and a multimode He–Ne laser. The method is based on observing, at the output of a Michelson interferometer, the visibilities (contrasts) of the intensity and polarization-state modulations expressed in terms of the Stokes parameters. The results are in good agreement with those deduced directly from the source spectra. The measurements are repeated after passing the beams through a linear polarizer so as to elucidate the role of polarization in electromagnetic coherence. While the polarizer varies the equal-time degree of coherence consistently with the theoretical predictions and alters the inner structure of the coherence matrix, the coherence time remains almost unchanged as the light goes from unpolarized to polarized. The results are important for applications involving physical optics and electromagnetic interference.
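The operational idea, reading a coherence time off the decay of fringe visibility with interferometer delay, can be sketched as follows (a minimal sketch: the Gaussian visibility model, the numerical values, and the half-visibility criterion are illustrative assumptions, not the paper's data or definitions):

```python
import numpy as np

def fringe_visibility(tau, tau_c):
    """Model fringe visibility vs. interferometer delay tau for a
    Gaussian source spectrum (illustrative functional form)."""
    return np.exp(-0.5 * (np.pi * tau / tau_c) ** 2)

def coherence_time_from_visibility(delays, vis, level=0.5):
    """Operational coherence time: the first delay at which the
    measured visibility drops to `level` (a simple criterion)."""
    below = np.nonzero(vis <= level)[0]
    return delays[below[0]] if below.size else None

tau_c = 2e-14                          # assumed coherence time, seconds
delays = np.linspace(0, 1e-13, 2001)   # scanned interferometer delays
vis = fringe_visibility(delays, tau_c)
est = coherence_time_from_visibility(delays, vis)
print(f"estimated coherence time ~ {est:.2e} s")
```

In the electromagnetic setting the paper studies, the visibility of each Stokes-parameter modulation would be recorded and combined; this sketch only shows the scalar delay-scan logic.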
Funding: Supported by the General Program of the National Natural Science Foundation of China (Grant No. 61977029).
Abstract: Generating realistic synthetic video from text is a highly challenging task due to the multitude of issues involved, including digit deformation, noise interference between frames, blurred output, and the need for temporal coherence across frames. In this paper, we propose a novel approach for generating coherent videos of moving digits from textual input using a Deep Deconvolutional Generative Adversarial Network (DD-GAN). The DD-GAN comprises a Deep Deconvolutional Neural Network (DDNN) as the Generator (G) and a modified Deep Convolutional Neural Network (DCNN) as the Discriminator (D) to ensure temporal coherence between adjacent frames. The proposed approach involves several steps. First, the input text is fed into a Long Short-Term Memory (LSTM) based text encoder and then smoothed using Conditioning Augmentation (CA) to enhance the effectiveness of the Generator. Next, the DDNN generates video frames from the enhanced text embedding and random noise, while the modified DCNN acts as the Discriminator, effectively distinguishing generated videos from real ones. We evaluate the quality of the generated videos using standard metrics, namely Inception Score (IS), Fréchet Inception Distance (FID), Fréchet Inception Distance for video (FID2vid), and Generative Adversarial Metric (GAM), along with a human study rating realism, coherence, and relevance. Experiments on Single-Digit Bouncing MNIST GIFs (SBMG), Two-Digit Bouncing MNIST GIFs (TBMG), and a custom dataset of essential mathematics videos with related text demonstrate significant improvements in both the metrics and the human study, confirming the effectiveness of DD-GAN. We also take on the challenge of generating preschool math videos from text, handling complex structures, digits, and symbols, with successful results. Overall, the proposed approach shows promising results for generating coherent videos from textual input.
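The Conditioning Augmentation step can be sketched as follows (a minimal NumPy illustration; the dimensions, the random projection matrices, and the function name are hypothetical stand-ins — in the real model these projections are learned jointly with the generator): instead of feeding the text embedding to the generator directly, a smoothed latent is sampled from a Gaussian whose mean and variance are conditioned on the embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, LAT = 256, 128  # hypothetical embedding / latent sizes

# Stand-ins for learned projections (random here, for illustration only).
W_mu, W_logvar = rng.normal(0, 0.01, (2, LAT, EMB))

def conditioning_augmentation(text_emb):
    """Sample a smoothed conditioning latent via the
    reparameterization trick: z = mu(e) + sigma(e) * eps."""
    mu = W_mu @ text_emb
    logvar = W_logvar @ text_emb
    eps = rng.standard_normal(LAT)
    return mu + np.exp(0.5 * logvar) * eps

text_emb = rng.standard_normal(EMB)  # stand-in for an LSTM text encoding
z = conditioning_augmentation(text_emb)
print(z.shape)  # (128,)
```

Sampling rather than copying the embedding gives the generator many slightly different conditioning vectors per caption, which smooths the conditioning manifold.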
Funding: Supported by the National Natural Science Foundation of China (Nos. 61472224 and 61472225), the National High-tech R&D Program of China (No. 2012AA01A306), the Special Fund for Independent Innovation and Transformation of Achievements in Shandong Province (No. 2014zzcx08201), the special funds of the Taishan Scholar Construction Project, and the China Scholarship Council (No. 201406220065).
Abstract: Temporal coherence is one of the central challenges in rendering stylized lines. It is especially difficult for stylized contours of coarse meshes or non-uniformly sampled models, because those contours are polygonal feature edges on the models with no continuous correspondences between frames. We describe a novel and simple technique for constructing a 2D brush path along a 3D contour. We also introduce a 3D parameter propagation and re-parameterization procedure that constructs stroke paths along the 2D brush path, so that feature lines can be drawn coherently in a wide range of styles. Our method runs in real time for coarse or non-uniformly sampled models, making it suitable for interactive applications that need temporal coherence.
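A basic building block for such stroke parameterization — normalized arc length along a 2D polyline brush path — can be sketched as follows (a simplified stand-in; the paper's cross-frame parameter propagation and re-parameterization are more involved than this single-frame computation):

```python
import numpy as np

def arclength_parameterize(points):
    """Normalized cumulative arc length in [0, 1] along a 2D polyline.
    Stroke textures indexed by such a parameter stay attached to the
    path; keeping the parameter consistent across frames is what
    makes the stylization temporally coherent."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])
    return s / s[-1]

t = arclength_parameterize([(0, 0), (3, 0), (3, 4)])
print(t)  # [0. 0.42857143 1.] — segments of length 3 and 4
```

Re-parameterizing means remapping these values so that a stroke's texture coordinates match those of the previous frame rather than restarting at 0 each frame.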
Funding: Supported by the National Natural Science Foundation of China (61571046) and the 2020 Postgraduate Curriculum Construction Project of Beijing Forestry University (HXKC2005).
Abstract: To reduce the flicker artifacts caused by video defogging, a surveillance-video defogging algorithm based on background extraction and consistency constraints is proposed. First, an inter-frame consistency constraint is constructed and applied to background modeling. Second, the extracted background is defogged with an improved static defogging approach. Third, the foreground is extracted using the background model and further defogged under constraints enforcing consistency between the foreground and background. Experimental results show that the algorithm removes fog effectively while preserving temporal coherence.
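The flow can be caricatured with two ingredients — a slowly updated background model and a temporal blend of consecutive defogged frames (a minimal sketch; the update rate, the blending rule, and the function names are assumptions, and the paper's actual consistency constraints are more elaborate):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: the small update rate keeps
    the background stable across frames, acting as a simple
    inter-frame consistency constraint."""
    return (1.0 - alpha) * bg + alpha * frame

def temporally_smooth(prev_out, cur_out, beta=0.3):
    """Blend the previous and current defogged frames to suppress
    frame-to-frame flicker in the output video."""
    return (1.0 - beta) * prev_out + beta * cur_out

# Toy usage: the background converges to the static scene content.
bg = np.zeros((2, 2))
for frame in [np.full((2, 2), 0.8)] * 50:
    bg = update_background(bg, frame)
print(bg[0, 0])  # converging toward 0.8
```

Defogging the stable background once and constraining the per-frame foreground to it is what keeps the fog-free result from flickering.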