Journal Articles
2 articles found
1. Intelligent Identification and Parameter Extraction of Key Joints in Rock (Mass) Based on Attention R2U-net
Authors: 孙浩 (Sun Hao), 代宗晟 (Dai Zongsheng), 金爱兵 (Jin Aibing), 陈岩 (Chen Yan). Journal of Northeastern University (Natural Science), EI CAS CSCD, PKU Core, 2024, No. 1, pp. 101-110 (10 pages).
To address the intelligent identification and parameter extraction of key joints within the complex joint networks on rock (mass) surfaces, a method is proposed that couples an Attention R2U-net network with a joint geometric feature model. An attention gate is introduced into the R2U-net network to improve it, and the identification results for slope joint images and for crack images of concrete, cracked soil, and common brittle rocks are checked qualitatively and quantitatively for accuracy and generalization ability. Key joints are then identified by coupling the Attention R2U-net network with joint geometric features, the geometric parameters of the original and key joints are extracted, and the differences in their trace length, area, and dip angle are analyzed. The results show that, for rock (mass) joint identification, the Dice similarity coefficient of the proposed algorithm rises from 0.965 with the U-net network to 0.990 and clearly exceeds traditional algorithms, so the proposed algorithm is more reliable and superior for this task; for cracks in concrete, cracked soil, and brittle rocks such as marble, granite, and sandstone, its Dice similarity coefficient stays above 0.953, indicating strong generalization ability. Compared with the original joint network, the dominant trace length of the key-joint network increases markedly from 0.732 m to 1.835 m, the distribution form of the joint dip angles and the dominant dip-angle group remain unchanged, and the proportions of joints within the dominant trace-length and dip-angle groups both increase significantly.
Keywords: rock (mass); key joints; Attention R2U-net network; intelligent identification; parameter extraction
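For orientation, the sketch below illustrates the two ingredients this abstract leans on: an additive attention gate of the kind used in Attention U-Net/R2U-net variants, and the Dice similarity coefficient used to score segmentation quality. This is a generic PyTorch sketch, not the authors' implementation; the `AttentionGate` class name, its channel arguments, and the 0.5 binarization threshold in `dice_coefficient` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Generic additive attention gate (Attention U-Net style sketch).

    g: gating signal from the decoder path; x: skip-connection features from
    the encoder. Both are assumed to share spatial dimensions here; real
    decoders typically resample g first. Channel sizes are illustrative.
    """
    def __init__(self, g_channels, x_channels, inter_channels):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, g, x):
        # Additive attention: alpha in [0, 1] re-weights the skip features
        # so the decoder focuses on joint/crack pixels.
        alpha = self.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * alpha

def dice_coefficient(pred, target, eps=1e-6):
    """Dice similarity coefficient between a binarized prediction and mask."""
    pred = (pred > 0.5).float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```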
2. Image to Image Translation Based on Differential Image Pix2Pix Model
Authors: Xi Zhao, Haizheng Yu, Hong Bian. Computers, Materials & Continua, SCIE EI, 2023, No. 10, pp. 181-198 (18 pages).
In recent years, Pix2Pix, a model within the domain of GANs, has found widespread application in the field of image-to-image translation. However, traditional Pix2Pix models suffer from significant drawbacks in image generation, such as the loss of important information features during the encoding and decoding processes, as well as a lack of constraints during the training process. To address these issues and improve the quality of Pix2Pix-generated images, this paper introduces two key enhancements. Firstly, to reduce information loss during encoding and decoding, we use the U-Net++ network as the generator for the Pix2Pix model, incorporating denser skip connections to minimize information loss. Secondly, to strengthen constraints during image generation, we introduce a specialized discriminator designed to distinguish differential images, further improving the quality of the generated images. We conducted experiments on the facades dataset and the sketch portrait dataset from the Chinese University of Hong Kong to validate the proposed model. The experimental results demonstrate that our improved Pix2Pix model significantly enhances image quality and outperforms other models on the selected metrics. Notably, the Pix2Pix model incorporating the differential image discriminator exhibits the most substantial improvements across all metrics. An analysis of the results reveals that the U-Net++ generator effectively reduces the loss of information features, while the differential image discriminator strengthens the supervision of the generator during training. Together, these enhancements improve the quality of Pix2Pix-generated images.
Keywords: image-to-image translation; generative adversarial networks; U-Net++; differential image; Pix2Pix
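One plausible reading of the "differential image" discriminator described above is sketched here: the discriminator judges pixel-wise difference images rather than the translated images directly, adding an extra constraint on the generator. The `PatchDiscriminator` layout, the absolute difference against the conditioning input, and the usage comments are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Small PatchGAN-style discriminator; layer widths are illustrative."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, diff_image):
        return self.net(diff_image)

def differential_images(condition, fake, real):
    """Pixel-wise difference images w.r.t. the conditioning input.

    One way to read the paper's idea: the difference between the real target
    and the input should look plausible to the discriminator, while the
    difference produced by the generator should be told apart.
    """
    return (fake - condition).abs(), (real - condition).abs()

# Usage sketch (loss terms and labels are assumptions):
# d = PatchDiscriminator()
# fake_diff, real_diff = differential_images(x, generator(x), y)
# loss_d = bce(d(real_diff), ones) + bce(d(fake_diff), zeros)
```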