Journal Articles
9,872 articles found
1. Multi-Focus Image Fusion Based on Wavelet Transformation (Cited by: 4)
Authors: Peng Zhang, Ying-Xun Tang, Yan-Hua Liang, Xu-Bo Liu. Journal of Harbin Institute of Technology (New Series), EI CAS, 2013(2): 124-128 (5 pages).
Abstract: In image fusion, measuring the local character and clarity of an image is called activity measurement. The traditional measurement is decided only by the high-frequency detail coefficients, which makes the energy expression insufficient to reflect local clarity. Therefore, this paper proposes a novel construction method for activity measurement. Firstly, the source images are decomposed by the wavelet transform, and the high- and low-frequency wavelet coefficients are used jointly, with the normalized variance taken as the weight of the high-frequency energy. Secondly, the measurement is calculated from the weighted energy, which can be used to measure the local character. Finally, the fusion coefficients are obtained. Three assessment indicators are provided to illustrate the superiority of the new method. Experimental results show that, compared with traditional methods, the new method reduces blur and improves the indicator values, giving it clear advantages in practical applications.
Keywords: variance measure; image fusion; wavelet transformation; multi-resolution analysis
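A minimal sketch of this variance-weighted activity measure, assuming NumPy, SciPy, and PyWavelets on grayscale inputs; the wavelet, decomposition level, window size, and choose-max rule are illustrative choices, not taken from the paper:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def activity(coeff, win=3):
    """Variance-weighted high-frequency energy of one detail subband."""
    c = coeff.astype(float)
    mean = uniform_filter(c, win)
    var = uniform_filter(c ** 2, win) - mean ** 2   # local variance
    weight = var / (var.max() + 1e-12)              # normalized variance as weight
    return weight * c ** 2                          # weighted energy

def fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # average the approximation band
    for det_a, det_b in zip(ca[1:], cb[1:]):
        bands = []
        for sa, sb in zip(det_a, det_b):
            mask = activity(sa) >= activity(sb)     # higher activity = clearer
            bands.append(np.where(mask, sa, sb))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```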
2. Multi-focus image fusion based on block matching in 3D transform domain (Cited by: 5)
Authors: YANG Dongsheng, HU Shaohai, LIU Shuaiqi, MA Xiaole, SUN Yuchao. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2018(2): 415-428 (14 pages).
Abstract: Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to degradation of the fused image. This paper presents a fusion framework based on block-matching and 3D (BM3D) multi-scale transform. The algorithm first divides the images into blocks and groups these 2D image blocks into 3D arrays by their similarity. It then applies a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to convert the arrays into transform coefficients, and the resulting low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from the fused 3D block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze existing algorithms and the use of different transforms, e.g. the non-subsampled Contourlet transform (NSCT) and the non-subsampled Shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed framework not only improves the subjective visual effect but also achieves better objective evaluation criteria than state-of-the-art methods.
Keywords: image fusion; block matching; 3D transform; block-matching and 3D (BM3D); non-subsampled Shearlet transform (NSST)
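A rough sketch of the block-matching step that builds the 3D groups, assuming NumPy; the block size, search window, and MSE distance are illustrative assumptions, and the multi-scale 3D transform itself is not sketched:

```python
import numpy as np

def group_similar_blocks(img, ref_yx, bs=8, search=16, n_match=8):
    """Collect the n_match blocks most similar to a reference block within a
    local search window, stacking them into a 3D array (the grouping step)."""
    y0, x0 = ref_yx
    ref = img[y0:y0 + bs, x0:x0 + bs].astype(float)
    candidates = []
    for y in range(max(0, y0 - search), min(img.shape[0] - bs, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(img.shape[1] - bs, x0 + search) + 1):
            blk = img[y:y + bs, x:x + bs].astype(float)
            candidates.append((np.mean((blk - ref) ** 2), blk))  # MSE distance
    candidates.sort(key=lambda t: t[0])
    return np.stack([blk for _, blk in candidates[:n_match]])   # shape (n, bs, bs)

# A 2D multi-scale transform applied to each block plus a 1D transform along
# the grouping axis would then produce the 3D-transform coefficients to fuse.
```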
3. Efficient Compressive Multi-Focus Image Fusion
Authors: Chao Yang, Bin Yang. Journal of Computer and Communications, 2014(9): 78-86 (9 pages).
Abstract: Two key points of pixel-level multi-focus image fusion are the clarity measure and the pixel coefficients fusion rule. Along with different improvements on these two points, various fusion schemes have been proposed in the literature. However, traditional clarity measures are not designed for compressive imaging measurements, which are maps of the source scene acquired with a random or nearly random measurement matrix. This paper presents a novel, efficient multi-focus image fusion framework for compressive imaging sensor networks. Here the clarity measure of the raw compressive measurements is obtained not from the random sampling data itself but from selected Hadamard coefficients, which can also be acquired efficiently from a compressive imaging system. The compressive measurements of the different images are then fused by a selection fusion rule. Finally, block-based compressed sensing coupled with iterative projection-based reconstruction is used to recover the fused image. Experimental results on commonly used test data demonstrate the effectiveness of the proposed method.
Keywords: clarity measures; compressive imaging; multi-focus image fusion
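A simplified sketch of a Hadamard-coefficient clarity measure driving block-wise selection, assuming NumPy and SciPy; note it operates on plain pixel blocks rather than true compressive measurements, and the block size is an illustrative choice:

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_clarity(block):
    """Clarity proxy: energy of the non-DC 2D Walsh-Hadamard coefficients.

    `block` is a square patch whose side length is a power of two.
    """
    n = block.shape[0]
    H = hadamard(n) / np.sqrt(n)           # orthonormal Hadamard matrix
    coeff = H @ block.astype(float) @ H.T  # 2D Walsh-Hadamard transform
    coeff[0, 0] = 0.0                      # drop the DC term
    return np.sum(coeff ** 2)

def fuse_blocks(img_a, img_b, bs=8):
    """Block-wise choose-max fusion driven by the Hadamard clarity measure."""
    out = img_a.copy()
    for i in range(0, img_a.shape[0] - bs + 1, bs):
        for j in range(0, img_a.shape[1] - bs + 1, bs):
            pa, pb = img_a[i:i+bs, j:j+bs], img_b[i:i+bs, j:j+bs]
            if hadamard_clarity(pb) > hadamard_clarity(pa):
                out[i:i+bs, j:j+bs] = pb
    return out
```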
4. A New Method of Multi-Focus Image Fusion Using Laplacian Operator and Region Optimization
Authors: Chao Wang, Rui Yuan, Yuqiu Sun, Yuanxiang Jiang, Changsheng Chen, Xiangliang Lin. Journal of Computer and Communications, 2018(5): 106-118 (13 pages).
Abstract: With the continuous advancement of imaging sensors, a host of new issues have emerged. A major problem is how to find focus areas more accurately for multi-focus image fusion. Multi-focus image fusion extracts the focused information from the source images to construct a globally in-focus image that contains more information than any of the source images. In this paper, a novel multi-focus image fusion method based on the Laplacian operator and region optimization is proposed. Evaluating image saliency with the Laplacian operator can easily distinguish focused regions from out-of-focus regions, and the resulting decision map contains less residual information than those of other methods. To obtain a precise decision map, focus-area and edge optimization based on regional connectivity and edge detection are applied. Finally, the original images are fused through the decision map. Experimental results indicate that the proposed algorithm outperforms a series of other algorithms in terms of both subjective and objective evaluations.
Keywords: image fusion; Laplacian operator; multi-focus; region optimization
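A minimal sketch of a Laplacian-energy decision map, assuming NumPy and SciPy; simple morphological opening and closing stands in for the paper's regional-connectivity and edge optimization, and the window size is illustrative:

```python
import numpy as np
from scipy import ndimage

def decision_map(img_a, img_b, win=7):
    """Focus decision map: compare locally aggregated Laplacian energy."""
    sal_a = ndimage.uniform_filter(np.abs(ndimage.laplace(img_a.astype(float))), win)
    sal_b = ndimage.uniform_filter(np.abs(ndimage.laplace(img_b.astype(float))), win)
    dm = sal_a > sal_b
    # Region-optimization stand-in: clean small misclassified components.
    dm = ndimage.binary_opening(dm, iterations=2)
    dm = ndimage.binary_closing(dm, iterations=2)
    return dm

def fuse(img_a, img_b):
    dm = decision_map(img_a, img_b)
    return np.where(dm, img_a, img_b)   # pick the in-focus pixel per location
```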
5. Multi-focus Image Fusion Combined with CNN and Algebraic Multi-grid Method
Authors: Ying Huang, Gaofeng Mao, Min Liu, Yafei Ou. 《国际计算机前沿大会会议论文集》, 2019(2): 127-129 (3 pages).
Abstract: This paper aims to solve the over-segmentation problem generated by the watershed segmentation algorithm and the unstable clarity judgment of small regions in image fusion. A multi-focus image fusion algorithm based on CNN segmentation and the algebraic multi-grid method (CNN-AMG) is proposed. Firstly, the CNN segmentation result is utilized to guide the merging of the regions generated by the watershed segmentation method. Then the clear regions are selected into a temporary fusion image, and the final fusion is performed according to a clarity evaluation index computed with the algebraic multi-grid method (AMG). Experimental results show that the fused image quality obtained by the CNN-AMG algorithm outperforms traditional fusion methods such as the DSIFT, CNN, ASR and GFF fusion methods on several evaluation indexes.
Keywords: image segmentation; image fusion; algebraic multi-grid; clarity evaluation index
6. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua, SCIE EI, 2024(4): 1441-1461 (21 pages).
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and a Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is created to extract significant edge features, and a modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused image via the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The experimental results demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion; Res2Net-Transformer; infrared image; visible image
7. A Novel Multi-Stream Fusion Network for Underwater Image Enhancement
Authors: Guijin Tang, Lian Duan, Haitao Zhao, Feng Liu. China Communications, SCIE CSCD, 2024(2): 166-182 (17 pages).
Abstract: Due to the selective absorption of light and the large amount of suspended matter in sea water, underwater images often suffer from color casts and blurred details, so color correction and detail restoration are necessary. However, existing enhancement algorithms cannot achieve the desired results. To solve these problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from an illumination stream, a color stream and a structure stream by contrast-limited histogram equalization, gamma correction and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused; this enhances the feature representation of underwater images. Meanwhile, a composite loss function with three terms ensures the quality of the enhanced image in terms of color balance, structure preservation and image smoothness, so that the result better matches human visual perception. Finally, the effectiveness of the proposed method is verified by comparison experiments with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results in terms of MSE, PSNR, SSIM, UIQM and UCIQE, and the enhanced images are more similar to their ground-truth images.
Keywords: image enhancement; multi-stream fusion; underwater image
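A minimal sketch of the three preprocessing streams, assuming OpenCV and NumPy; the parameter values (clip limit, tile size, gamma) are illustrative, and the residual fusion network itself is not sketched:

```python
import cv2
import numpy as np

def illumination_stream(bgr):
    """Contrast-limited histogram equalization (CLAHE) on the luminance channel."""
    l, a, b = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB))
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)

def color_stream(bgr, gamma=0.7):
    """Gamma correction via a lookup table."""
    lut = (np.linspace(0, 1, 256) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(bgr, lut)

def structure_stream(bgr):
    """Simple gray-world white balance."""
    b, g, r = cv2.split(bgr.astype(float))
    mean = (b.mean() + g.mean() + r.mean()) / 3.0
    balanced = [np.clip(c * (mean / (c.mean() + 1e-6)), 0, 255) for c in (b, g, r)]
    return cv2.merge([c.astype(np.uint8) for c in balanced])

# The raw image plus these three streams would each feed a residual branch
# whose features are fused downstream.
```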
8. Multimodality Medical Image Fusion Based on Pixel Significance with Edge-Preserving Processing for Clinical Applications
Authors: Bhawna Goyal, Ayush Dogra, Dawa Chyophel Lepcha, Rajesh Singh, Hemant Sharma, Ahmed Alkhayyat, Manob Jyoti Saikia. Computers, Materials & Continua, SCIE EI, 2024(3): 4317-4342 (26 pages).
Abstract: Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithmic complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines pixel significance with edge-preserving processing. First, the method employs a cross-bilateral filter (CBF) that uses one image to determine the kernel and the other for filtering, and vice versa, considering both the geometric closeness and the gray-level similarity of neighboring pixels without smoothing edges. The CBF outputs are then subtracted from the original images to obtain detail images. Edge-preserving processing then combines linear low-pass filtering with a non-linear technique that selects relevant regions in the detail images while maintaining structural properties; these regions are selected using morphologically processed linear filter residuals to identify significant regions with high-amplitude edges and adequate size. The low-pass filtering outputs are fused with the meaningfully restored regions to reconstruct the original shape of the edges. Weights computed from these reconstructed images are then fused with the original input images, and the final result is produced by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms. Experimental results exhibit superior performance compared with competing techniques in both qualitative and quantitative evaluation, while requiring less computational complexity and execution time and improving diagnostic computing accuracy; the low complexity of the fusion algorithm also makes it efficient in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in providing detailed information, edge contours, and overall contrast.
Keywords: image fusion; fractal data analysis; biomedical diseases research; multiresolution analysis; numerical analysis
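A small sketch of the cross-bilateral detail-layer extraction, assuming NumPy plus the opencv-contrib-python package for cv2.ximgproc; the filter parameters are illustrative:

```python
import cv2
import numpy as np

def detail_layers(img_a, img_b, d=9, sigma_color=50.0, sigma_space=7.0):
    """Cross-bilateral filtering: each image guides the other's kernel,
    and the detail layers are the filtering residuals.

    Requires opencv-contrib-python for cv2.ximgproc.jointBilateralFilter.
    """
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    # Filter A with B as the guidance ("joint") image, and vice versa.
    smooth_a = cv2.ximgproc.jointBilateralFilter(b, a, d, sigma_color, sigma_space)
    smooth_b = cv2.ximgproc.jointBilateralFilter(a, b, d, sigma_color, sigma_space)
    return a - smooth_a, b - smooth_b   # CBF residuals = detail images
```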
9. Image Fusion Using Wavelet Transformation and XGBoost Algorithm
Authors: Shahid Naseem, Tariq Mahmood, Amjad Rehman Khan, Umer Farooq, Samra Nawazish, Faten S. Alamri, Tanzila Saba. Computers, Materials & Continua, SCIE EI, 2024(4): 801-817 (17 pages).
Abstract: Digital image processing has recently found numerous uses, and image fusion has become a prominent application in the imaging domain. Image fusion merges two or more initial images of the same item to create one final image that is more informative and helpful than the original inputs, aiming to produce, enhance, and transform significant elements of the source images into combined images for human visual perception. It is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly-line robots, with image quality varying by application. This paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), HSI (hue, intensity, saturation), the wavelet transform, the discrete cosine transform (DCT), the dual-tree complex wavelet transform (CWT), and multiple wavelet transforms. Such methods integrate data from several source images of an identical target, enhancing information very efficiently. More precisely, the depth-of-field constraint in imaging prevents images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed, which can reuse existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach extracts particular characteristics from images to accurately reflect the level of clarity portrayed in the originals. The study also enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision by integrating computational image analysis and feature selection: images are segmented with the K-Means algorithm to identify specific regions of interest, Particle Swarm Optimization (PSO) is used for trait selection, and XGBoost performs the classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% and providing good objective indicators.
Keywords: image fusion; Max-Min average; CWT; XGBoost; DCT; inclusive innovations; spatial and frequency domain
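A sketch of the downstream classification stage (K-Means region segmentation feeding an XGBoost classifier), assuming scikit-learn, xgboost and NumPy; the feature vectors and labels here are random placeholders standing in for a real dataset:

```python
import numpy as np
from sklearn.cluster import KMeans
from xgboost import XGBClassifier

def segment(img, k=3):
    """K-Means segmentation on pixel intensities to expose regions of interest."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(img.reshape(-1, 1))
    return labels.reshape(img.shape)

# Hypothetical stand-in data: per-image feature vectors (e.g. statistics of the
# segmented regions) and binary labels; a real pipeline would derive these
# from the fused, segmented scans.
X = np.random.rand(200, 16)
y = np.random.randint(0, 2, 200)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```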
10. DCFNet: An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences, SCIE EI, 2024(7): 1103-1128 (26 pages).
Abstract: Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has two key limitations. Firstly, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Secondly, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNNs to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features: within the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) aggregates prominent dual-branch feature regions from the spatial perspective. Furthermore, in the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the channel features between the up-sampled features and the features generated by the FCF module to enhance decoding details. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy and exhibits a superior level of competitiveness compared with other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in diagnosing lesion areas in advance.
Keywords: convolutional neural networks; Swin Transformer; dual branch; medical image segmentation; feature cross fusion
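A minimal PyTorch sketch of a squeeze-and-excitation style channel attention block, offered as a stand-in for the paper's CAB, whose exact design the abstract does not specify; the channel count and reduction ratio are illustrative:

```python
import torch
import torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    """Channel attention: global pooling produces per-channel weights
    that rescale the feature map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel weights
        )

    def forward(self, x):
        return x * self.fc(x)

# e.g. ChannelAttentionBlock(64)(torch.randn(1, 64, 32, 32)).shape -> (1, 64, 32, 32)
```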
11. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024(2): 173-200 (28 pages).
Abstract: Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on MobileNetV1 is proposed. The network consists of two main subnetworks. The first uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, resulting in a lightweight and computationally inexpensive network; the depthwise dilated convolution in the DDSC layer also effectively expands the receptive field of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining performance compared with the MobileNetV1 baseline.
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
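A minimal PyTorch sketch of a DDSC-style layer (a dilated depthwise 3x3 convolution followed by a 1x1 pointwise convolution); the dilation rate, normalization, and activation are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DDSC(nn.Module):
    """Depthwise dilated separable convolution: dilation widens the receptive
    field of the depthwise stage at no extra parameter cost."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)   # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# e.g. DDSC(32, 64)(torch.randn(1, 32, 56, 56)).shape -> (1, 64, 56, 56)
```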
12. Multi-focus image fusion with the all convolutional neural network (Cited by: 2)
Authors: 杜超本, 高社生. Optoelectronics Letters, EI, 2018(1): 71-75 (5 pages).
Abstract: A decision map contains complete and clear information about the images to be fused, which is crucial to various image fusion issues, especially multi-focus image fusion. However, obtaining a decision map good enough for satisfactory fusion is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling layers of the CNN are replaced by convolution layers, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN) based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.
Keywords: multi-focus image fusion; all convolutional neural network
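The core substitution, sketched in PyTorch: a learned stride-2 convolution takes the place of max-pooling so that the downsampling itself is trainable; the channel count is an illustrative assumption:

```python
import torch.nn as nn

# Conventional downsampling stage: fixed, parameter-free max-pooling.
pooling_stage = nn.MaxPool2d(kernel_size=2, stride=2)

# "All convolutional" replacement: a strided convolution that halves the
# spatial resolution while learning its own downsampling filter.
all_conv_stage = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)
```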
13. Multi-focus image fusion based on fully convolutional networks
Authors: Rui GUO, Xuan-jing SHEN, Xiao-yu DONG, Xiao-li ZHANG. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2020(7): 1019-1033 (15 pages).
Abstract: We propose a multi-focus image fusion method in which a fully convolutional network for focus detection (FD-FCN) is constructed. To obtain more precise focus detection maps, we add skip layers to the network so that both detailed and abstract visual information are available when FD-FCN generates the maps. A new training dataset for the proposed network is constructed based on the CIFAR-10 dataset. The image fusion algorithm using FD-FCN has three steps: focus maps are obtained using FD-FCN, a decision map is generated by applying a morphological process to the focus maps, and the images are fused using the decision map. We carry out several sets of experiments, and both subjective and objective assessments demonstrate the superiority of the proposed fusion method over state-of-the-art algorithms.
Keywords: multi-focus image fusion; fully convolutional networks; skip layer; performance evaluation
14. Multi-focus image fusion based on fractional-order derivative and intuitionistic fuzzy sets (Cited by: 1)
Authors: Xue-feng ZHANG, Hui YAN, Hao HE. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2020(6): 834-843 (10 pages).
Abstract: Multi-focus image fusion is an increasingly important component of image fusion and plays a key role in imaging. In this paper, we put forward a novel multi-focus image fusion method that employs the fractional-order derivative and intuitionistic fuzzy sets. The original image is decomposed into a base layer and a detail layer. Furthermore, a new fractional-order spatial frequency is built to reflect the clarity of the image; it serves as the rule for fusing the detail layers, while intuitionistic fuzzy sets are introduced to fuse the base layers. Experimental results demonstrate that the proposed fusion method outperforms state-of-the-art methods for multi-focus image fusion.
Keywords: image fusion; fractional-order derivative; intuitionistic fuzzy sets; multi-focus images
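A sketch of a fractional-order spatial frequency built from truncated Grünwald-Letnikov differences, assuming NumPy and SciPy; the order alpha, the truncation length, and this exact formulation are illustrative assumptions, since the paper's precise definition is not given in the abstract:

```python
import numpy as np
from scipy.special import binom

def gl_coeffs(alpha, n=4):
    """Grünwald-Letnikov coefficients (-1)^k * C(alpha, k) for k = 0..n-1."""
    k = np.arange(n)
    return (-1.0) ** k * binom(alpha, k)

def fractional_sf(img, alpha=0.8, n=4):
    """Fractional-order spatial frequency: the first differences in the
    classic row/column frequency are replaced by fractional differences."""
    img = img.astype(float)
    c = gl_coeffs(alpha, n)
    rows = sum(c[k] * np.roll(img, k, axis=1) for k in range(n))  # row direction
    cols = sum(c[k] * np.roll(img, k, axis=0) for k in range(n))  # column direction
    rf = np.sqrt(np.mean(rows[:, n:] ** 2))   # skip the wrap-around border
    cf = np.sqrt(np.mean(cols[n:, :] ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)         # larger value = clearer region
```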
15. Joint Multi-Focus Fusion and Bayer Image Restoration
《信息工程期刊(中英文版)》, 2015(3): 67-72 (6 pages).
Abstract: In this paper, a joint multi-focus image fusion and Bayer pattern image restoration algorithm for the raw images of single-sensor color imaging devices is proposed. Different from traditional fusion schemes, the raw Bayer pattern images are fused before color restoration, so the Bayer image restoration operation is performed only once, making the proposed algorithm more efficient than traditional schemes. In detail, a clarity measurement is defined for raw Bayer pattern images, and the fusion operator works on superpixels, which provide powerful grouping cues for local image features. The raw images are merged with a refined weight map to get the fused Bayer pattern image, which is restored by a demosaicing algorithm to obtain the full-resolution color image. Experimental results demonstrate that the proposed algorithm obtains better fused results, with a more natural appearance and fewer artifacts than traditional algorithms.
Keywords: multi-focus image fusion; Bayer pattern; superpixel; demosaicing
16. Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism
Authors: Jinxian Bai, Yao Fan, Zhiwei Zhao, Lizhi Zheng. Computers, Materials & Continua, SCIE EI, 2024(1): 999-1025 (27 pages).
Abstract: Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results when dealing with missing regions containing large holes, leading to distortions in structure and blurring of texture. To address these problems, we combine the advantages of transformers and convolutions to propose an image inpainting method that incorporates edge priors and attention mechanisms, aiming to improve the inpainting of large holes by enhancing the accuracy of structure restoration and the recovery of texture details. The method divides the inpainting task into two phases: edge prediction and image inpainting. In the edge prediction phase, a transformer architecture combines axial attention with standard self-attention; this design enhances the extraction of global structural features and location awareness while balancing the complexity of self-attention operations, resulting in accurate prediction of the edge structure in the defective region. In the image inpainting phase, a multi-scale fusion attention module makes full use of multi-level distant features and enhances local pixel continuity, thereby significantly improving inpainting quality. To evaluate the performance of our method, comparative experiments are conducted on several datasets, including CelebA, Places2, and Facade. Quantitative experiments show that our method outperforms the other mainstream methods, improving Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) by 1.141~3.234 dB and 0.083~0.235, respectively, and reducing Learned Perceptual Image Patch Similarity (LPIPS) and Mean Absolute Error (MAE) by 0.0347~0.1753 and 0.0104~0.0402, respectively. Qualitative experiments reveal that our method excels at reconstructing images with complete structural information and clear texture details. Furthermore, our model exhibits impressive performance in terms of parameter count, memory cost, and testing time.
Keywords: image inpainting; transformer; edge prior; axial attention; multi-scale fusion attention
17. Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform (Cited by: 1)
Authors: Bhawna Goyal, Ayush Dogra, Rahul Khoond, Dawa Chyophel Lepcha, Vishal Goyal, Steven L. Fernandes. Computers, Materials & Continua, SCIE EI, 2023(7): 311-327 (17 pages).
Abstract: The synthesis of visual information from multiple medical imaging inputs into a single fused image without any loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features, advancing the clinical utility of medical imaging for the analysis and treatment of medical disorders. This study develops a novel approach to fusing multimodal medical images using anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method decomposes the input images into base and detail layers with anisotropic diffusion, coarsely separating the structural and textural information of the inputs. The detail and base layers are combined using a sum-based fusion rule that maximizes the noise-filtering contrast level while effectively preserving most of the structural and textural details. NSCT is used to further decompose these images into their low- and high-frequency coefficients, which are then combined independently using a principal component analysis / Karhunen-Loeve (PCA/KL) based fusion rule, substantiating eigenfeature reinforcement in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients, and an inverse NSCT is finally applied to each coefficient to produce the fusion result. Experiments on a publicly accessible dataset, with comparative studies on three pairs of medical images from different modalities, demonstrate the advantage of the proposed technique. The approach offers better visual and more robust performance with better objective measurements, since it preserves significant salient features precisely without producing abnormal information in either qualitative or quantitative analysis.
Keywords: anisotropic diffusion; biomedical; medical; health; diseases; adversarial attacks; image fusion; research and development; precision
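A compact sketch of the anisotropic-diffusion split into base and detail layers, assuming NumPy and the classic Perona-Malik scheme; the iteration count, conduction parameter kappa, and step size are illustrative:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions, preserves edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences in the four compass directions.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficient: small where gradients are large.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# base = anisotropic_diffusion(img); detail = img - base   (the AD split above)
```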
18. Enhancing the Quality of Low-Light Printed Circuit Board Images through Hue, Saturation, and Value Channel Processing and Improved Multi-Scale Retinex
Authors: Huichao Shang, Penglei Li, Xiangqian Peng. Journal of Computer and Communications, 2024(1): 1-10 (10 pages).
Abstract: To address the deterioration of PCB image quality during quality inspection due to insufficient or uneven lighting, we propose an image enhancement fusion algorithm based on different color spaces. Firstly, an improved MSRCR method is employed for brightness enhancement of the original image. Next, the original image is transformed from the RGB to the HSV color space; the S-channel image is processed with bilateral filtering and contrast stretching, the V-channel image undergoes brightness enhancement using adaptive gamma and CLAHE algorithms, and the result is then transformed back from HSV to RGB. Finally, the images processed by the two algorithms are fused into a new RGB image, and color restoration is performed on the fused result. Comparative experiments with other methods indicate that contrast is optimized, texture features are preserved more abundantly, brightness levels are significantly improved, and color distortion is effectively prevented, thus enhancing the quality of low-light PCB images.
Keywords: low-light PCB images; spatial transformation; image enhancement; image fusion; HSV
19. Fusion of Hash-Based Hard and Soft Biometrics for Enhancing Face Image Database Search and Retrieval
Authors: Ameerah Abdullah Alshahrani, Emad Sami Jaha, Nahed Alowidi. Computers, Materials & Continua, SCIE EI, 2023(12): 3489-3509 (21 pages).
Abstract: The use of digital image search and retrieval has grown substantially in numerous fields over the last decade, owing to continuing advances in image processing and computer vision. In many real-life applications, such as social media, content-based face image retrieval is a well-invested technique for large-scale databases, where reliable retrieval capabilities enabling quick search over vast numbers of images are a significant necessity. Humans widely employ faces for recognizing and identifying people, so face recognition through formal or personal pictures is increasingly used in various real-life applications, such as helping crime investigators retrieve matching images from face databases to identify victims and criminals. However, face image retrieval becomes more challenging in large-scale databases, where traditional vision-based face analysis requires ample additional storage space beyond that occupied by the raw face images to hold the extracted lengthy feature vectors, and takes much longer to process and match thousands of face images. This work contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing (LSH) for facial hard and soft biometrics, as Hard BioHash and Soft BioHash respectively, used as search inputs for retrieving the top-k matching faces. Moreover, we propose a multi-biometric score-level fusion of the hard and soft BioHashes (Hard-Soft BioHash Fusion) for further augmented retrieval. Experimental outcomes on the Labeled Faces in the Wild (LFW) dataset and the related attributes dataset (LFW-attributes) demonstrate that the suggested fusion approach significantly improved retrieval performance compared with using Hard BioHash or Soft BioHash in isolation, providing an accuracy of 87% on 1000 specimens and 77% on 5743 samples. These results remarkably outperform the Hard BioHash method (by 50% on the 1000 samples and 30% on the 5743 samples) and the Soft BioHash method (by 78% on the 1000 samples and 63% on the 5743 samples).
Keywords: face image retrieval; soft biometrics; similar pictures; hashing; database search; large databases; score-level fusion; multimodal fusion
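A minimal sketch of random-projection LSH over hard and soft feature vectors with score-level fusion, assuming NumPy; the feature dimensions, bit lengths, and fusion weight are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_hash(features, planes):
    """Random-projection LSH: one bit per hyperplane sign."""
    return (features @ planes.T > 0).astype(np.uint8)

def hamming_score(h1, h2):
    """Similarity in [0, 1]: fraction of matching bits."""
    return 1.0 - np.mean(h1 != h2)

# Illustrative dimensions: a 512-d face embedding (hard biometric) and a
# 40-d attribute vector (soft biometric).
planes_hard = rng.standard_normal((128, 512))
planes_soft = rng.standard_normal((32, 40))

def fused_score(hard_q, soft_q, hard_g, soft_g, w=0.7):
    """Score-level fusion of hard and soft BioHash similarities."""
    s_hard = hamming_score(lsh_hash(hard_q, planes_hard), lsh_hash(hard_g, planes_hard))
    s_soft = hamming_score(lsh_hash(soft_q, planes_soft), lsh_hash(soft_g, planes_soft))
    return w * s_hard + (1.0 - w) * s_soft   # rank gallery faces by this score
```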
20. Visual Enhancement of Underwater Images Using Transmission Estimation and Multi-Scale Fusion
Authors: R. Vijay Anandh, S. Rukmani Devi. Computer Systems Science & Engineering, SCIE EI, 2023(3): 1897-1910 (14 pages).
Abstract: The demand for the exploration of ocean resources is increasing exponentially, and underwater image data plays a significant role in many research areas. Despite this, the visual quality of underwater images is degraded by two main factors: backscattering and attenuation. Visual enhancement has therefore become an essential process to recover the required data from the images, and many algorithms have been proposed over the past decade for improving image quality. This paper proposes a single-image enhancement technique that requires no external datasets. The degraded images are subjected to two main processes: color correction and image fusion. Initially, the veiling light and transmission light are estimated to find the color required for correction; veiling light refers to unwanted light, whereas transmission light refers to the light required for color correction. These estimates are applied in the scene recovery equation. The color-corrected image is then subjected to a fusion process in which two versions of the image are produced by white balance and contrast enhancement, divided into three weight maps (luminance, saliency, and chromaticity), and fused using the Laplacian pyramid. The results are graphically compared with the input data using RGB histogram plots. Finally, image quality is measured and tabulated using underwater image quality measures.
Keywords: underwater image; backscattering; attenuation; image fusion; veiling light; white balance; Laplacian pyramid
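A sketch of the multi-scale weighted blend at the end of this pipeline, assuming OpenCV and NumPy: each image version is decomposed into a Laplacian pyramid, its weight map into a Gaussian pyramid, and the weighted levels are summed and collapsed; the pyramid depth is an illustrative choice, and the weight maps are assumed to be precomputed 2D arrays:

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    pyr = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1])
           for i in range(levels - 1)]
    pyr.append(g[-1])   # coarsest level keeps the Gaussian residual
    return pyr

def fuse(versions, weights, levels=4):
    """Blend image versions with per-pixel weight maps via Laplacian pyramids."""
    wsum = np.sum(weights, axis=0) + 1e-12
    weights = [w / wsum for w in weights]     # normalize weights per pixel
    fused = None
    for img, w in zip(versions, weights):
        lp = laplacian_pyramid(img, levels)
        gw = gaussian_pyramid(w, levels)
        layer = [l * g[..., None] if l.ndim == 3 else l * g
                 for l, g in zip(lp, gw)]
        fused = layer if fused is None else [f + l for f, l in zip(fused, layer)]
    out = fused[-1]
    for lvl in range(levels - 2, -1, -1):     # collapse the fused pyramid
        out = cv2.pyrUp(out, dstsize=fused[lvl].shape[1::-1]) + fused[lvl]
    return np.clip(out, 0, 255).astype(np.uint8)

# e.g. fused = fuse([white_balanced, contrast_enhanced], [w_map_1, w_map_2])
# where each w_map combines the luminance, saliency and chromaticity maps.
```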