Abstract
Zero-shot image editing methods based on denoising diffusion models have achieved remarkable results, and applying them to 3D scenes enables zero-shot text-driven 3D scene editing. However, the 3D editing results are easily degraded by the diffusion model's lack of 3D consistency and by over-editing, which lead to erroneous edits. To address these problems, a new text-driven 3D editing method was proposed. Starting from the data side, it introduced a key-view-based data iteration scheme and a pixel-level abnormal-data masking module. The key-view data guide the editing of a 3D region to reduce the influence of 3D-inconsistent data, while the data masking module filters out anomalous pixels in the 2D input data. With this method, vivid, photorealistic text-driven 3D scene editing can be achieved. Experiments demonstrate that, compared with several state-of-the-art text-driven 3D scene editing methods, the proposed method greatly reduces erroneous edits in 3D scenes and produces more vivid and realistic 3D editing results. In addition, the editing results generated by this method are more diverse, and the editing is more efficient.
Authors
ZHANG Ji
CUI Wenshuai
ZHANG Ronghua
WANG Wenbin
LI Yaqi
ZHANG Ji; CUI Wenshuai; ZHANG Ronghua; WANG Wenbin; LI Yaqi (Department of Computer, North China Electric Power University, Baoding, Hebei 071003, China; Hebei Key Laboratory of Knowledge Computing for Energy & Power, Baoding, Hebei 071003, China; Engineering Research Center of Intelligent Computing for Complex Energy Systems, Ministry of Education, Baoding, Hebei 071003, China)
Source
Journal of Graphics (《图学学报》)
CSCD
Peking University Core Journal (北大核心)
2024, No. 4, pp. 834-844 (11 pages)
Funding
Supported by the Hebei Province Science and Technology Program (22310302D).