Journal Articles
3 articles found
1. A Tabletop Nano-CT Image Noise Reduction Network Based on 3-Dimensional Axial Attention Mechanism
Authors: Huijuan Fu, Linlin Zhu, Chunhui Wang, Xiaoqi Xi, Yu Han, Lei Li, Yanmin Sun, Bin Yan. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 1711-1725 (15 pages).
Nano-computed tomography (Nano-CT) is an emerging, high-resolution imaging technique. However, due to its low-light properties, a tabletop Nano-CT system must be scanned under long-exposure conditions, which makes the scanning process time-consuming. For 3D reconstruction data, this paper proposes a lightweight 3D noise reduction method for desktop-level Nano-CT called AAD-ResNet (Axial Attention DeNoise ResNet). The network is framed by the U-net structure. The encoder and decoder incorporate the proposed 3D axial attention mechanism and a residual dense block. Each layer of the residual dense block can directly access the features of the previous layers, which reduces parameter redundancy and improves the efficiency of network training. The 3D axial attention mechanism enhances the correlation between 3D information during training and captures long-distance dependencies. It improves the noise reduction effect and avoids the loss of image structure details. Experimental results show that the network can effectively improve the image quality of a 0.1-s exposure scan to a level close to that of a 3-s exposure, significantly shortening the sample scanning time.
Keywords: deep learning, tabletop Nano-CT, image denoising, 3D axial attention mechanism
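The 3D axial attention described in this abstract attends along one volume axis at a time rather than over all voxels jointly, which keeps each attention matrix small while still propagating information across the whole volume. A minimal NumPy sketch of that idea (single head, identity Q/K/V projections, all function names hypothetical; the paper's actual AAD-ResNet layers are more elaborate):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axis_attention(x, axis):
    # x: (D, H, W, C) volume. Move the target axis into the sequence
    # position and apply single-head self-attention along it only.
    x = np.moveaxis(x, axis, -2)                       # (..., L, C)
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(x.shape[-1])
    return np.moveaxis(softmax(scores) @ x, -2, axis)  # restore layout

def axial_attention_3d(x):
    # Attend sequentially along depth, height, and width, with a
    # residual connection around each pass.
    for axis in (0, 1, 2):
        x = x + axis_attention(x, axis)
    return x

vol = np.random.rand(4, 5, 6, 8)   # toy (D, H, W, C) volume
out = axial_attention_3d(vol)
print(out.shape)                   # (4, 5, 6, 8)
```

Because each pass only forms an L x L score matrix per axis (L = D, H, or W), the cost grows far more slowly with volume size than full voxel-to-voxel attention, which is what makes the mechanism practical for 3D CT data.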
2. Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism
Authors: Jinxian Bai, Yao Fan, Zhiwei Zhao, Lizhi Zheng. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 999-1025 (27 pages).
Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results when dealing with missing images with large holes, leading to distortions in structure and blurring of textures. To address these problems, we combine the advantages of transformers and convolutions to propose an image inpainting method that incorporates edge priors and attention mechanisms. The proposed method aims to improve the results of inpainting large holes in images by enhancing the accuracy of structure restoration and the ability to recover texture details. The method divides the inpainting task into two phases: edge prediction and image inpainting. Specifically, in the edge prediction phase, a transformer architecture is designed to combine axial attention with standard self-attention. This design enhances the extraction of global structural features and location awareness, and it balances the complexity of self-attention operations, resulting in accurate prediction of the edge structure in the defective region. In the image inpainting phase, a multi-scale fusion attention module is introduced. This module makes full use of multi-level distant features and enhances local pixel continuity, thereby significantly improving the quality of image inpainting. To evaluate the performance of our method, comparative experiments were conducted on several datasets, including CelebA, Places2, and Facade. Quantitative experiments show that our method outperforms other mainstream methods. Specifically, it improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) by 1.141-3.234 dB and 0.083-0.235, respectively. Moreover, it reduces Learned Perceptual Image Patch Similarity (LPIPS) and Mean Absolute Error (MAE) by 0.0347-0.1753 and 0.0104-0.0402, respectively. Qualitative experiments reveal that our method excels at reconstructing images with complete structural information and clear texture details. Furthermore, our model exhibits impressive performance in terms of the number of parameters, memory cost, and testing time.
Keywords: image inpainting, transformer, edge prior, axial attention, multi-scale fusion attention
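The appeal of mixing axial attention into the edge-prediction transformer is that it factorizes full 2D self-attention into a row pass plus a column pass, shrinking the attention score matrix from (HW)^2 entries to HW(H+W). A small arithmetic check of that reduction (illustrative feature-map size, not the paper's actual configuration):

```python
# Score-matrix cost for an H x W feature map:
# full self-attention compares every token with every token,
# axial attention only compares within each row and each column.
H, W = 32, 32
full_scores = (H * W) ** 2        # (HW)^2   = 1,048,576 entries
axial_scores = H * W * (H + W)    # HW(H+W)  = 65,536 entries
ratio = full_scores // axial_scores
print(full_scores, axial_scores, ratio)   # 1048576 65536 16
```

At this resolution axial attention already needs 16x fewer score entries, and the gap widens quadratically with resolution, which is why the abstract describes it as balancing the complexity of self-attention operations.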
3. TC-Fuse: A Transformers Fusing CNNs Network for Medical Image Segmentation
Authors: Peng Geng, Ji Lu, Ying Zhang, Simin Ma, Zhanzhong Tang, Jianhua Liu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 11, pp. 2001-2023 (23 pages).
In medical image segmentation tasks, convolutional neural networks (CNNs) have difficulty capturing long-range dependencies, whereas transformers can model long-range dependencies effectively. However, transformers have a flexible structure and seldom assume structural bias in the input data, so it is difficult for transformers to learn positional encoding of medical images when training on fewer images. To solve these problems, a dual-branch structure is proposed. In one branch, a Mix Feed-Forward Network (Mix-FFN) and axial attention are adopted to capture long-range dependencies and keep the translation invariance of the model. Mix-FFN, whose depth-wise convolutions provide position information, is better than ordinary positional encoding. In the other branch, traditional CNNs are used to extract different features from fewer medical images. In addition, the attention fusion module BiFusion is used to effectively integrate information from the CNN branch and the Transformer branch, and the fused features can effectively capture the global and local context at the current spatial resolution. On the public standard datasets Gland Segmentation (GlaS), Colorectal Adenocarcinoma Gland (CRAG), and COVID-19 CT Images Segmentation, the F1-score, Intersection over Union (IoU), and parameter counts of the proposed TC-Fuse are superior to those of Axial Attention U-Net, U-Net, Medical Transformer, and other methods. The F1-score increased by 2.99%, 3.42%, and 3.95%, respectively, compared with Medical Transformer.
Keywords: transformers, convolutional neural networks, fusion, medical image segmentation, axial attention
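The BiFusion module in this abstract merges CNN-branch and transformer-branch features at a shared spatial resolution. A toy NumPy sketch of one plausible gated fusion of two equally shaped feature maps (a channel gate derived from the transformer features and a spatial gate derived from the CNN features; this is an illustrative simplification with hypothetical names, not the paper's exact BiFusion design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(cnn_feat, trans_feat):
    """Fuse two (H, W, C) feature maps of equal shape."""
    # Channel gate: global average pool of transformer features -> (C,)
    ch_gate = sigmoid(trans_feat.mean(axis=(0, 1)))
    # Spatial gate: per-pixel channel mean of CNN features -> (H, W, 1)
    sp_gate = sigmoid(cnn_feat.mean(axis=-1, keepdims=True))
    # Reweight each branch by the other's gate, then sum.
    return cnn_feat * ch_gate + trans_feat * sp_gate

rng = np.random.default_rng(0)
cnn = rng.standard_normal((16, 16, 32))    # local-context branch
trans = rng.standard_normal((16, 16, 32))  # global-context branch
fused = gated_fusion(cnn, trans)
print(fused.shape)   # (16, 16, 32)
```

The point of such cross-gating is that each branch's output is modulated by a summary of the other branch, so the fused map carries both the global context from the transformer and the local detail from the CNN, as the abstract claims.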