Abstract: In this article we consider the asymptotic behavior of the extreme value distribution with extreme value index γ > 0. The rates of uniform convergence for the Fréchet distribution are established under the second-order regular variation condition.
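For context, the standard Fréchet distribution with extreme value index γ > 0 is usually written as follows (a sketch of the common parameterization; the paper's exact normalization may differ):

```latex
% Fréchet distribution function with extreme value index \gamma > 0:
G_\gamma(x) = \exp\!\left(-x^{-1/\gamma}\right), \qquad x > 0 .
```

This is the max-domain-of-attraction limit law in the heavy-tailed case γ > 0, and the second-order regular variation condition controls how fast normalized maxima approach it.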
Abstract: To improve the clustering accuracy and applicability of spectral clustering, a spectral clustering algorithm based on the Fréchet distance (FSC) is proposed. A similarity matrix is constructed via the Fréchet distance, and the reconstructed similarity matrix is then used in spectral clustering. The Fréchet distance measures similarity across the feature dimensions of the data, allowing multiple features of each sample to be analyzed and thereby extending the applicability of the classical spectral clustering algorithm. FSC is suitable not only for low-dimensional data with clear manifold structure but also for high-dimensional or sparse data, such as hyperspectral image data. Experimental results on three classical hyperspectral images show that the FSC algorithm effectively improves the accuracy of hyperspectral image clustering.
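The abstract's core step — building a similarity matrix from Fréchet distances — can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: it uses the discrete Fréchet distance between per-sample feature sequences and a Gaussian kernel (the `sigma` bandwidth is an assumed hyperparameter); the resulting matrix would then be fed to any standard spectral clustering routine.

```python
import numpy as np

def discrete_frechet(p, q):
    """Discrete Fréchet distance between two feature sequences p and q."""
    n, m = len(p), len(q)
    d = np.full((n, m), -1.0)  # memo table of partial couplings

    def c(i, j):
        if d[i, j] >= 0:
            return d[i, j]
        dist = np.linalg.norm(p[i] - q[j])
        if i == 0 and j == 0:
            d[i, j] = dist
        elif i == 0:
            d[i, j] = max(c(0, j - 1), dist)
        elif j == 0:
            d[i, j] = max(c(i - 1, 0), dist)
        else:
            d[i, j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), dist)
        return d[i, j]

    return c(n - 1, m - 1)

def similarity_matrix(samples, sigma=1.0):
    """Gaussian-kernel similarity matrix from pairwise Fréchet distances."""
    n = len(samples)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dij = discrete_frechet(samples[i], samples[j])
            W[i, j] = W[j, i] = np.exp(-dij ** 2 / (2 * sigma ** 2))
    return W
```

The symmetric matrix `W` can be passed directly to a spectral clustering implementation that accepts a precomputed affinity (e.g. scikit-learn's `SpectralClustering(affinity="precomputed")`).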
Funding: Supported by the General Program of the National Natural Science Foundation of China (Grant No. 61977029).
Abstract: Generating realistic synthetic video from text is a highly challenging task due to the many issues involved, including digit deformation, noise interference between frames, blurred output, and the need for temporal coherence across frames. In this paper, we propose a novel approach for generating coherent videos of moving digits from textual input using a Deep Deconvolutional Generative Adversarial Network (DD-GAN). The DD-GAN comprises a Deep Deconvolutional Neural Network (DDNN) as the Generator (G) and a modified Deep Convolutional Neural Network (DCNN) as the Discriminator (D) to ensure temporal coherence between adjacent frames. The proposed approach involves several steps. First, the input text is fed into a Long Short-Term Memory (LSTM) based text encoder and then smoothed using Conditioning Augmentation (CA) to enhance the effectiveness of the Generator. Next, the DDNN generates video frames from the enhanced text embedding and random noise, while the modified DCNN acts as the Discriminator, effectively distinguishing between generated and real videos. The quality of the generated videos is evaluated using standard metrics such as Inception Score (IS), Fréchet Inception Distance (FID), Fréchet Inception Distance for video (FID2vid), and the Generative Adversarial Metric (GAM), along with a human study based on realism, coherence, and relevance. Experiments on Single-Digit Bouncing MNIST GIFs (SBMG), Two-Digit Bouncing MNIST GIFs (TBMG), and a custom dataset of essential mathematics videos with related text demonstrate significant improvements in both the metrics and the human study results, confirming the effectiveness of DD-GAN. This work also took on the challenge of generating preschool math videos from text, handling complex structures, digits, and symbols, with successful results. The proposed approach demonstrates promising results for generating coherent videos from textual input.
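The FID metric used in the evaluation compares Gaussian fits to real and generated feature activations: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). A minimal NumPy sketch of this formula (the feature arrays here stand in for activations that would normally come from a pretrained Inception network):

```python
import numpy as np

def fid(feats_real, feats_gen):
    """Fréchet Inception Distance between two sets of feature vectors.

    feats_*: (n_samples, dim) arrays of activations; in practice these
    come from a pretrained Inception network.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    # Tr((Σr Σg)^{1/2}) equals the sum of square roots of the eigenvalues
    # of Σr Σg, which are real and non-negative for PSD covariance factors;
    # clipping guards against tiny negative values from floating point.
    eigvals = np.linalg.eigvals(cov_r @ cov_g)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_g) - 2.0 * tr_sqrt)
```

Identical feature sets give FID ≈ 0, and the score grows as the means and covariances of the two activation distributions drift apart, which is why lower FID indicates more realistic generations.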