Journal Articles
4 articles found
1. Residual Feature Attentional Fusion Network for Lightweight Chest CT Image Super-Resolution (Cited by: 1)
Authors: Kun Yang, Lei Zhao, Xianghui Wang, Mingyang Zhang, Linyan Xue, Shuang Liu, Kun Liu
Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5159-5176 (18 pages)
The diagnosis of COVID-19 requires chest computed tomography (CT). High-resolution CT images can provide more diagnostic information to help doctors better diagnose the disease, so it is of clinical importance to study super-resolution (SR) algorithms applied to CT images to improve their resolution. However, most existing SR algorithms are designed for natural images and are not well suited to medical images; moreover, most of them improve reconstruction quality by increasing network depth, which is not suitable for machines with limited resources. To alleviate these issues, we propose a residual feature attentional fusion network for lightweight chest CT image super-resolution (RFAFN). Specifically, we design a contextual feature extraction block (CFEB) that extracts CT image features more efficiently and accurately than ordinary residual blocks. In addition, we propose a feature-weighted cascading strategy (FWCS) based on attentional feature fusion blocks (AFFB) to exploit the high-frequency detail information extracted by the CFEB as fully as possible by selectively fusing adjacent-level feature information. Finally, we suggest a global hierarchical feature fusion strategy (GHFFS), which utilizes hierarchical features more effectively than dense concatenation by progressively aggregating feature information at various levels. Extensive experiments show that our method outperforms most state-of-the-art (SOTA) methods on the COVID-19 chest CT dataset. In detail, the peak signal-to-noise ratio (PSNR) is 0.11 dB and 0.47 dB higher on CTtest1 and CTtest2 at ×3 SR compared with the second-best method, while the numbers of parameters and multi-adds are reduced by 22K and 0.43G, respectively. Our method can better recover chest CT image quality with fewer computational resources and effectively assist in COVID-19 diagnosis.
Keywords: super-resolution; COVID-19; chest CT; lightweight network; contextual feature extraction; attentional feature fusion
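As an illustration of the attentional feature fusion idea summarized in this abstract, the following is a minimal PyTorch-style sketch that blends two adjacent-level feature maps with learned per-channel weights. The class name, channel sizes, and gating design are assumptions for demonstration only, not the RFAFN implementation.

# Hedged sketch: attention-weighted fusion of two feature maps (illustrative only).
import torch
import torch.nn as nn

class AttentionalFusion(nn.Module):
    """Fuse two adjacent-level feature maps with learned channel weights."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # global context per channel
            nn.Conv2d(channels, channels, 1), # 1x1 conv mixes channel statistics
            nn.Sigmoid(),                     # per-channel fusion weight in [0, 1]
        )

    def forward(self, shallow, deep):
        w = self.gate(shallow + deep)         # weights derived from the combined features
        return w * shallow + (1 - w) * deep   # weighted blend of the two levels

if __name__ == "__main__":
    f1 = torch.randn(1, 32, 64, 64)
    f2 = torch.randn(1, 32, 64, 64)
    print(AttentionalFusion(32)(f1, f2).shape)  # torch.Size([1, 32, 64, 64])

The sigmoid gate lets the network decide, channel by channel, how much of the shallow versus the deep feature to keep, which is the general mechanism behind selectively fusing adjacent-level features.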
2. Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism
Authors: Jinxian Bai, Yao Fan, Zhiwei Zhao, Lizhi Zheng
Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 999-1025 (27 pages)
Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results on images with large missing holes, leading to distorted structures and blurred textures. To address these problems, we combine the advantages of transformers and convolutions to propose an image inpainting method that incorporates edge priors and attention mechanisms. The proposed method aims to improve the inpainting of large holes by enhancing the accuracy of structure restoration and the ability to recover texture details. It divides the inpainting task into two phases: edge prediction and image inpainting. Specifically, in the edge prediction phase, a transformer architecture is designed to combine axial attention with standard self-attention. This design enhances the extraction of global structural features and location awareness while keeping the complexity of the self-attention operations manageable, resulting in accurate prediction of the edge structure in the defective region. In the image inpainting phase, a multi-scale fusion attention module is introduced. This module makes full use of multi-level distant features and enhances local pixel continuity, thereby significantly improving the quality of image inpainting. To evaluate the performance of our method, comparative experiments are conducted on several datasets, including CelebA, Places2, and Facade. Quantitative experiments show that our method outperforms the other mainstream methods: it improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) by 1.141~3.234 dB and 0.083~0.235, respectively, and reduces Learned Perceptual Image Patch Similarity (LPIPS) and Mean Absolute Error (MAE) by 0.0347~0.1753 and 0.0104~0.0402, respectively. Qualitative experiments reveal that our method excels at reconstructing images with complete structural information and clear texture details. Furthermore, our model exhibits impressive performance in terms of the number of parameters, memory cost, and testing time.
Keywords: image inpainting; transformer; edge prior; axial attention; multi-scale fusion attention
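To make the axial-attention component of the edge-prediction phase concrete, here is a hedged PyTorch sketch that applies standard self-attention separately along the height and width axes of a feature map. The embedding size, head count, and tensor layout are illustrative assumptions, not details taken from the paper.

# Hedged sketch: axial attention over rows, then columns (illustrative only).
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, H, W, C)
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)          # attend along each row (width axis)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)  # attend along each column
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 2, 1, 3)

if __name__ == "__main__":
    feat = torch.randn(2, 16, 16, 64)
    print(AxialAttention(64)(feat).shape)      # torch.Size([2, 16, 16, 64])

Attending along one axis at a time reduces each attention operation from H*W tokens to H or W tokens, which is why axial attention is commonly paired with full self-attention to balance cost and global reach.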
3. A life-prediction method for lithium-ion batteries based on a fusion model and an attention mechanism (Cited by: 1)
Authors: WANG Xian-bao, WU Fei-teng, YAO Ming-hai
Optoelectronics Letters (EI), 2020, Issue 6, pp. 410-417 (8 pages)
Current life-prediction models for lithium-ion batteries have several problems, such as the construction of complex feature structures, a high number of feature dimensions, and inaccurate prediction results. To overcome these problems, this paper proposes a deep-learning model combining an autoencoder network and a long short-term memory (LSTM) network. The model first applies the autoencoder to reduce the dimensionality of the high-dimensional features extracted from the battery data set and to fuse complex time-domain features, which overcomes the problems of redundant model information and low computational efficiency. It then uses an LSTM network, which is well suited to time-series data, to address the long-path dependence problem in battery life prediction. Lastly, an attention mechanism gives greater weight to features that have a greater impact on the target value, which enhances the model's ability to learn from long input sequences. To verify the efficacy of the proposed model, this paper uses NASA's lithium-ion battery cycle life data set.
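The pipeline described in this abstract (an autoencoder for feature fusion, an LSTM for temporal modeling, and attention over time steps before a prediction head) can be sketched as below. This is a minimal, hypothetical PyTorch example; the layer sizes, data layout, and single-output head are assumptions rather than details from the paper.

# Hedged sketch: autoencoder + LSTM + attention for battery life prediction (illustrative only).
import torch
import torch.nn as nn

class BatteryLifePredictor(nn.Module):
    def __init__(self, n_features=20, latent=8, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, latent), nn.ReLU())
        self.decoder = nn.Linear(latent, n_features)  # used only for the reconstruction loss
        self.lstm = nn.LSTM(latent, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)       # scores each time step
        self.head = nn.Linear(hidden, 1)       # predicts remaining capacity / life

    def forward(self, x):                      # x: (batch, cycles, n_features)
        z = self.encoder(x)                    # fused low-dimensional features
        recon = self.decoder(z)                # reconstruction for autoencoder training
        h, _ = self.lstm(z)                    # (batch, cycles, hidden)
        w = torch.softmax(self.attn(h), dim=1) # attention weights over cycles
        context = (w * h).sum(dim=1)           # weighted summary of the sequence
        return self.head(context), recon

if __name__ == "__main__":
    cycles = torch.randn(4, 50, 20)            # 4 batteries, 50 cycles, 20 features each
    pred, recon = BatteryLifePredictor()(cycles)
    print(pred.shape, recon.shape)             # torch.Size([4, 1]) torch.Size([4, 50, 20])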
4. Video Enhancement Network Based on CNN and Transformer
Authors: YUAN Lang, HUI Chen, WU Yanfeng, LIAO Ronghua, JIANG Feng, GAO Ying
ZTE Communications, 2024, Issue 4, pp. 78-88 (11 pages)
To enhance video quality after encoding and decoding in video compression, this paper proposes a video quality enhancement framework based on local and non-local priors. Low-level features are first extracted through a single convolution layer and then processed by several conv-tran blocks (CTB) to extract high-level features, which are ultimately transformed into a residual image. The final reconstructed video frame is obtained by element-wise addition of the residual image and the original lossy video frame. Experiments show that the proposed Conv-Tran Network (CTN) model effectively recovers the quality loss caused by Versatile Video Coding (VVC) and further improves VVC's performance.
Keywords: attention fusion mechanism; H.266/VVC; transformer; video coding; video quality enhancement
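A hedged sketch of one possible "conv-tran" style block follows: a convolution captures local detail, a transformer encoder layer models non-local dependencies over the flattened spatial tokens, and the block output is added back to its input as a residual. The layer sizes and exact composition are assumptions; this is not the CTN implementation itself.

# Hedged sketch: convolution + transformer block with a residual connection (illustrative only).
import torch
import torch.nn as nn

class ConvTranBlock(nn.Module):
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)   # local prior
        self.tran = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)      # non-local prior

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.conv(x)
        tokens = local.flatten(2).transpose(1, 2)        # (B, H*W, C) token sequence
        non_local = self.tran(tokens).transpose(1, 2).reshape(b, c, h, w)
        return x + non_local                             # residual connection

if __name__ == "__main__":
    frame_feat = torch.randn(1, 32, 24, 24)
    print(ConvTranBlock()(frame_feat).shape)             # torch.Size([1, 32, 24, 24])

Stacking several such blocks and mapping the final features to a residual image, which is then added to the lossy frame, mirrors the residual-learning setup described in the abstract.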