Journal Articles
9,891 articles found
1. A Novel Multi-Stream Fusion Network for Underwater Image Enhancement
Authors: Guijin Tang, Lian Duan, Haitao Zhao, Feng Liu. China Communications, SCIE/CSCD, 2024, No. 2, pp. 166-182 (17 pages).
Due to the selective absorption of light and the existence of a large number of floating media in sea water, underwater images often suffer from color casts and detail blurs. It is therefore necessary to perform color correction and detail restoration. However, the existing enhancement algorithms cannot achieve the desired results. To solve these problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from the illumination stream, color stream and structure stream by contrast-limited histogram equalization, gamma correction and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused to enhance feature representation in underwater images. Meanwhile, a composite loss function with three terms ensures the quality of the enhanced image in terms of color balance, structure preservation and image smoothness, so the enhanced image is more in line with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison experiments with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results in terms of MSE, PSNR, SSIM, UIQM and UCIQE, and the enhanced images are more similar to their ground-truth images.
Keywords: image enhancement; multi-stream fusion; underwater image
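The three preprocessing streams described in the abstract above map onto standard operations. The sketch below is illustrative only (function names are ours, not the paper's), and the paper's contrast-limited histogram equalization is simplified here to plain global equalization:

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each channel so its mean matches the global mean (gray-world assumption)."""
    means = img.reshape(-1, 3).mean(axis=0)      # per-channel means
    gain = means.mean() / (means + 1e-8)         # channel gains
    return np.clip(img * gain, 0.0, 1.0)

def gamma_correct(img, gamma=0.7):
    """Brighten (gamma < 1) or darken (gamma > 1) an image in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def hist_equalize(channel, bins=256):
    """Global histogram equalization of one channel in [0, 1]."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                               # normalize CDF to [0, 1]
    return np.interp(channel.ravel(), edges[:-1], cdf).reshape(channel.shape)
```

Each function maps a float image in [0, 1] back into the same range; the paper feeds these streams, plus the raw image, into residual blocks for fusion.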
2. Multimodality Medical Image Fusion Based on Pixel Significance with Edge-Preserving Processing for Clinical Applications
Authors: Bhawna Goyal, Ayush Dogra, Dawa Chyophel Lepcha, Rajesh Singh, Hemant Sharma, Ahmed Alkhayyat, Manob Jyoti Saikia. Computers, Materials & Continua, SCIE/EI, 2024, No. 3, pp. 4317-4342 (26 pages).
Multimodal medical image fusion has attained immense popularity in recent years as a robust technology for clinical diagnosis. It fuses multiple images into a single image of improved quality, retaining significant information and aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithmic complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that uses one image to determine the kernel and the other for filtering, and vice versa, by considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique enabling the selection of relevant regions in the detail images while maintaining their structural properties. These regions are selected using morphologically processed linear filter residuals to identify significant regions with high-amplitude edges and adequate size. The outputs of the low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weight computations are performed using these reconstructed images, and the weights are then fused with the original input images to produce the final fusion result by estimating the strength of the horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to other competing techniques in both qualitative and quantitative evaluation. The proposed method also offers lower computational complexity and execution time while improving diagnostic computing accuracy; owing to this lower complexity, the efficiency of the fusion method is high in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of detailed information, edge contour, and overall contrast.
Keywords: image fusion; fractal data analysis; biomedical; disease research; multiresolution analysis; numerical analysis
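The cross-bilateral filter the abstract describes (range weights taken from one image, filtering applied to the other) can be sketched as follows. This is a minimal single-channel illustration with our own parameter choices, not the authors' implementation:

```python
import numpy as np

def cross_bilateral_filter(guide, target, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Filter `target` using range weights computed from `guide`.
    Geometric closeness and guide-image gray-level similarity set the weights,
    so edges present in the guide are preserved in the filtered target."""
    h, w = target.shape
    pad_g = np.pad(guide, radius, mode='edge')
    pad_t = np.pad(target, radius, mode='edge')
    out = np.zeros_like(target, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # spatial kernel
    for i in range(h):
        for j in range(w):
            g_win = pad_g[i:i + 2*radius + 1, j:j + 2*radius + 1]
            t_win = pad_t[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng = np.exp(-(g_win - guide[i, j])**2 / (2 * sigma_r**2))  # range kernel from guide
            wgt = spatial * rng
            out[i, j] = (wgt * t_win).sum() / wgt.sum()
    return out
```

The detail image used in the next step of the pipeline is then simply `target - cross_bilateral_filter(guide, target)`.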
3. DCFNet: An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, No. 7, pp. 1103-1128 (26 pages).
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has limitations in two key aspects. Firstly, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Secondly, methods that combine CNN and Transformer often disregard the value of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of Swin Transformer and CNN to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) aggregates the prominent feature regions of the two branches from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the channel features between the up-sampled features and the features generated by the FCF module to enhance decoding detail. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy and a superior level of competitiveness compared to other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial early diagnoses of lesion areas.
Keywords: convolutional neural networks; Swin Transformer; dual branch; medical image segmentation; feature cross-fusion
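The Channel Attention Block mentioned above follows the common squeeze-and-excitation pattern: pool each channel to a scalar, pass through a small bottleneck, and gate the channels with sigmoid weights. A toy NumPy sketch with random stand-in weights (a real CAB learns them, and DCFNet's exact block may differ):

```python
import numpy as np

def channel_attention(feat, reduction=2, rng=None):
    """Toy channel attention on a (C, H, W) feature map: global average pooling,
    a two-layer bottleneck, then sigmoid gates that reweight each channel.
    The bottleneck weights here are random placeholders, not learned values."""
    if rng is None:
        rng = np.random.default_rng(0)
    c = feat.shape[0]
    squeezed = feat.mean(axis=(1, 2))                     # (C,) global average pool
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # bottleneck weights (hypothetical)
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)               # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))          # sigmoid channel gates in (0, 1)
    return feat * gates[:, None, None], gates
```

In the network, such gates let the decoder emphasize informative channels of the up-sampled and FCF features before they are merged.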
4. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, No. 2, pp. 173-200 (28 pages).
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs; however, this stride setting can lose spatial information, particularly affecting the detection and representation of smaller objects or finer details. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion, based on the MobileNetV1 network, is proposed. The network consists of two main subnetworks. The first uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, resulting in a lightweight and computationally inexpensive network; the depthwise dilated convolution in the DDSC layer also effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
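The DDSC layer's two stages, a depthwise dilated convolution followed by a 1x1 pointwise convolution, can be sketched in NumPy. Shapes and names here are illustrative, not the paper's code:

```python
import numpy as np

def depthwise_dilated_conv(x, kernels, dilation=2):
    """Depthwise conv: each channel of x (C, H, W) is convolved with its own
    k x k kernel, sampled at `dilation`-spaced taps to enlarge the receptive field."""
    c, h, w = x.shape
    k = kernels.shape[1]
    pad = dilation * (k // 2)
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for ch in range(c):
        for u in range(k):
            for v in range(k):
                out[ch] += kernels[ch, u, v] * xp[ch, u*dilation:u*dilation + h,
                                                  v*dilation:v*dilation + w]
    return out

def pointwise_conv(x, weights):
    """1x1 convolution mixing channels: weights has shape (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', weights, x)
```

The parameter saving is the usual separable-convolution argument: a k x k depthwise plus pointwise pair needs C*k*k + C_out*C weights, versus C_out*C*k*k for a full convolution, while dilation widens the receptive field at no extra parameter cost.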
5. Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism
Authors: Jinxian Bai, Yao Fan, Zhiwei Zhao, Lizhi Zheng. Computers, Materials & Continua, SCIE/EI, 2024, No. 1, pp. 999-1025 (27 pages).
Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results when dealing with images with large missing holes, leading to structural distortions and blurred textures. To address these problems, we combine the advantages of transformers and convolutions to propose an image inpainting method that incorporates edge priors and attention mechanisms, aiming to improve the accuracy of structure restoration and the recovery of texture details in large holes. The method divides the inpainting task into two phases: edge prediction and image inpainting. In the edge prediction phase, a transformer architecture combines axial attention with standard self-attention. This design enhances the extraction of global structural features and location awareness while keeping the complexity of the self-attention operations in check, resulting in accurate prediction of the edge structure in the defective region. In the image inpainting phase, a multi-scale fusion attention module is introduced. This module makes full use of multi-level distant features and enhances local pixel continuity, thereby significantly improving inpainting quality. To evaluate the performance of our method, comparative experiments are conducted on several datasets, including CelebA, Places2, and Facade. Quantitative experiments show that our method outperforms the other mainstream methods, improving Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) by 1.141~3.234 dB and 0.083~0.235, respectively, and reducing Learned Perceptual Image Patch Similarity (LPIPS) and Mean Absolute Error (MAE) by 0.0347~0.1753 and 0.0104~0.0402, respectively. Qualitative experiments reveal that our method excels at reconstructing images with complete structural information and clear texture details. Furthermore, our model performs well in terms of parameter count, memory cost, and testing time.
Keywords: image inpainting; transformer; edge prior; axial attention; multi-scale fusion attention
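The PSNR and SSIM figures quoted above follow standard definitions; for reference, a minimal sketch (the SSIM here uses a single global window, whereas the reported metric averages local windows):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def ssim_global(x, y, peak=1.0):
    """Single-window (global) SSIM; the standard metric averages local windows."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Both metrics are reference-based, which is why inpainting papers report them against the undamaged ground-truth image.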
6. Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform (Cited by 1)
Authors: Bhawna Goyal, Ayush Dogra, Rahul Khoond, Dawa Chyophel Lepcha, Vishal Goyal, Steven L. Fernandes. Computers, Materials & Continua, SCIE/EI, 2023, No. 7, pp. 311-327 (17 pages).
The synthesis of visual information from multiple medical imaging inputs into a single fused image without loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features, advancing the clinical utility of medical imaging for the analysis and treatment of medical disorders. This study develops a novel approach to fusing multimodal medical images using anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into base and detail layers, coarsely separating two kinds of features: structural and textural information. The detail and base layers are then combined using a sum-based fusion rule that maximizes the noise-filtering contrast level while effectively preserving most of the structural and textural details. NSCT is then used to further decompose these images into their low- and high-frequency coefficients, which are combined independently using a principal component analysis/Karhunen-Loeve (PCA/KL) based fusion rule, substantiating eigenfeature reinforcement in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients. Finally, an inverse NSCT is applied to each coefficient to produce the final fusion result. Experimental results on a publicly accessible dataset, with comparative studies on three pairs of medical images from different modalities and health conditions, demonstrate the advantage of the proposed technique. Our approach offers better visual and robust performance with better objective measurements for research and development, since it excellently preserves significant salient features and precision without producing abnormal information in both qualitative and quantitative analysis.
Keywords: anisotropic diffusion; biomedical; medical; health; diseases; adversarial attacks; image fusion; research and development; precision
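The base/detail decomposition via anisotropic diffusion corresponds to the classic Perona-Malik scheme. A sketch under assumed parameter values (the paper's exact conductance function and step sizes may differ):

```python
import numpy as np

def anisotropic_diffusion(img, iterations=10, kappa=0.1, lam=0.2):
    """Perona-Malik diffusion: smooth flat regions while slowing diffusion
    across strong gradients, yielding an edge-preserving base layer."""
    u = img.astype(np.float64).copy()
    for _ in range(iterations):
        # forward differences toward the four neighbours (replicated border)
        n = np.vstack([u[:1], u[:-1]]) - u
        s = np.vstack([u[1:], u[-1:]]) - u
        e = np.hstack([u[:, 1:], u[:, -1:]]) - u
        w = np.hstack([u[:, :1], u[:, :-1]]) - u
        # Perona-Malik conductance: small where the gradient is large
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(n)*n + g(s)*s + g(e)*e + g(w)*w)
    return u
```

The base layer is the diffusion output and the detail layer is `img - anisotropic_diffusion(img)`, matching the decomposition step described above.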
7. Enhancing the Quality of Low-Light Printed Circuit Board Images through Hue, Saturation, and Value Channel Processing and Improved Multi-Scale Retinex
Authors: Huichao Shang, Penglei Li, Xiangqian Peng. Journal of Computer and Communications, 2024, No. 1, pp. 1-10 (10 pages).
To address deteriorated PCB image quality caused by insufficient or uneven lighting during quality inspection, we propose an image enhancement fusion algorithm based on different color spaces. Firstly, an improved MSRCR method is employed for brightness enhancement of the original image. Next, the original image is transformed from the RGB to the HSV color space; the S-channel image is processed with bilateral filtering and contrast stretching algorithms, and the V-channel image is brightened using adaptive gamma and CLAHE algorithms, after which the processed image is transformed back from HSV to RGB. Finally, the images produced by the two algorithms are fused into a new RGB image, and color restoration is performed on the fused image. Comparative experiments with other methods indicate that image contrast is optimized, texture features are more abundantly preserved, brightness levels are significantly improved, and color distortion is effectively prevented, thus enhancing the quality of low-light PCB images.
Keywords: low-light PCB images; spatial transformation; image enhancement; image fusion; HSV
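Gamma correction applied only to the HSV value channel, as in the V-channel step above, can be done without a full HSV round trip by rescaling RGB with the V gain, since V = max(R, G, B) and hue/saturation depend only on channel ratios. An illustrative sketch (the paper additionally uses CLAHE and an adaptive gamma, omitted here):

```python
import numpy as np

def enhance_value_channel(rgb, gamma=0.6):
    """Brighten only the HSV value channel, leaving hue and saturation
    untouched, by rescaling all three RGB channels with the V gain."""
    v = rgb.max(axis=2)                         # value channel: max of R, G, B
    v_new = np.clip(v, 0.0, 1.0) ** gamma       # gamma < 1 brightens shadows
    gain = np.where(v > 1e-8, v_new / np.maximum(v, 1e-8), 1.0)
    return np.clip(rgb * gain[:, :, None], 0.0, 1.0)
```

Because every channel of a pixel is multiplied by the same gain, channel ratios, and hence hue, are preserved exactly.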
8. Fusion of Hash-Based Hard and Soft Biometrics for Enhancing Face Image Database Search and Retrieval
Authors: Ameerah Abdullah Alshahrani, Emad Sami Jaha, Nahed Alowidi. Computers, Materials & Continua, SCIE/EI, 2023, No. 12, pp. 3489-3509 (21 pages).
The utilization of digital picture search and retrieval has grown substantially in numerous fields for different purposes during the last decade, owing to continuing advances in image processing and computer vision approaches. In multiple real-life applications, for example social media, content-based face picture retrieval is a well-invested technique for large-scale databases, where there is a significant need for reliable retrieval capabilities enabling quick search across a vast number of pictures. Humans widely employ faces for recognizing and identifying people; thus, face recognition through formal or personal pictures is increasingly used in various real-life applications, such as helping crime investigators retrieve matching images from face image databases to identify victims and criminals. However, such face image retrieval becomes more challenging in large-scale databases, where traditional vision-based face analysis requires ample additional storage space beyond that already occupied by the raw face images to store the extracted lengthy feature vectors, and takes much longer to process and match thousands of face images. This work mainly contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing (LSH) for facial hard and soft biometrics, as (Hard BioHash) and (Soft BioHash) respectively, to be used as search input for retrieving the top-k matching faces. Moreover, we propose the multi-biometric score-level fusion of both face hard and soft BioHashes (Hard-Soft BioHash Fusion) for further augmented face image retrieval. Experimental outcomes on the Labeled Faces in the Wild (LFW) dataset and the related attributes dataset (LFW-attributes) demonstrate that the suggested fusion approach (Hard-Soft BioHash Fusion) significantly improved retrieval performance compared to using Hard BioHash or Soft BioHash in isolation, providing an augmented accuracy of 87% when executed on 1000 specimens and 77% on 5743 samples. These results remarkably outperform the Hard BioHash method (50% on the 1000 samples and 30% on the 5743 samples) and the Soft BioHash method (78% on the 1000 samples and 63% on the 5743 samples).
Keywords: face image retrieval; soft biometrics; similar pictures; hashing; database search; large databases; score-level fusion; multimodal fusion
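Hash-based search of the kind described can be illustrated with random-hyperplane LSH over feature vectors. This shows the generic scheme only, not the paper's specific BioHash construction:

```python
import numpy as np

def lsh_hash(features, n_bits=16, seed=0):
    """Random-hyperplane LSH: each bit is the sign of a projection, so nearby
    feature vectors tend to share hash bits (small Hamming distance)."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, features.shape[-1]))
    return (features @ planes.T > 0).astype(np.uint8)

def hamming_topk(query_code, db_codes, k=3):
    """Indices of the k database codes closest to the query in Hamming distance."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists, kind='stable')[:k]
```

Compact binary codes are the point: a 16-bit code replaces a lengthy float feature vector, addressing the storage and matching-time concerns raised in the abstract.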
9. Visual Enhancement of Underwater Images Using Transmission Estimation and Multi-Scale Fusion
Authors: R. Vijay Anandh, S. Rukmani Devi. Computer Systems Science & Engineering, SCIE/EI, 2023, No. 3, pp. 1897-1910 (14 pages).
The demand for the exploration of ocean resources is increasing exponentially, and underwater image data plays a significant role in many research areas. Despite this, the visual quality of underwater images is degraded by two main factors, namely backscattering and attenuation. Visual enhancement has therefore become an essential process for recovering the required data from such images, and many algorithms have been proposed over the past decade for improving image quality. This paper proposes a single-image enhancement technique that requires no external datasets. The degraded images are subjected to two main processes, namely color correction and image fusion. Initially, veiling light and transmission light are estimated to find the color correction required: veiling light refers to unwanted light, whereas transmission light refers to the light required for color correction. These estimates are applied in the scene recovery equation. The image obtained from color correction is then subjected to a fusion process in which two versions of the image are produced via white balance and contrast enhancement techniques. The results are converted into three weight maps, namely luminance, saliency, and chromaticity, and fused using the Laplacian pyramid. The outputs are graphically compared with the input data using RGB histogram plots. Finally, image quality is measured and tabulated using underwater image quality measures.
Keywords: underwater image; backscattering; attenuation; image fusion; veiling light; white balance; Laplacian pyramid
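Multi-scale fusion with per-pixel weight maps and a Laplacian pyramid, as in the final step above, can be sketched as follows. The Gaussian blur is approximated by a box filter and all names are ours:

```python
import numpy as np

def downsample(img):
    """Blur with a 3x3 box filter (cheap stand-in for a Gaussian), then halve."""
    p = np.pad(img, 1, mode='edge')
    blur = sum(p[i:i+img.shape[0], j:j+img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return blur[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbour upsample back to `shape`."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def fuse_laplacian(imgs, weights, levels=2):
    """Fuse images with per-pixel weight maps inside a Laplacian pyramid:
    normalized weights blend each band-pass level and the low-pass residual."""
    w = [wi / (sum(weights) + 1e-8) for wi in weights]     # normalize weight maps
    pyrs = []
    for im in imgs:
        pyr, cur = [], im.astype(np.float64)
        for _ in range(levels):
            small = downsample(cur)
            pyr.append(cur - upsample(small, cur.shape))   # band-pass detail
            cur = small
        pyr.append(cur)                                    # low-pass residual
        pyrs.append(pyr)
    fused = None
    for lvl in range(levels, -1, -1):        # collapse from coarse to fine
        wl = w
        for _ in range(lvl):
            wl = [downsample(wi) for wi in wl]
        band = sum(wi * p[lvl] for wi, p in zip(wl, pyrs))
        fused = band if fused is None else upsample(fused, band.shape) + band
    return fused
```

Blending in the pyramid domain is what avoids the visible seams that naive per-pixel weighted averaging produces at weight-map boundaries.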
10. An Efficient Medical Image Deep Fusion Model Based on Convolutional Neural Networks
Authors: Walid El-Shafai, Noha A. El-Hag, Ahmed Sedik, Ghada Elbanby, Fathi E. Abd El-Samie, Naglaa F. Soliman, Hussah Nasser AlEisa, Mohammed E. Abdel Samea. Computers, Materials & Continua, SCIE/EI, 2023, No. 2, pp. 2905-2925 (21 pages).
Medical image fusion is considered the best method for obtaining a single image with rich details for efficient medical diagnosis and therapy. Deep learning provides high performance for several medical image analysis applications. This paper proposes a deep learning model, based on a Convolutional Neural Network (CNN), for the medical image fusion process. The basic idea of the proposed model is to extract features from both CT and MR images, apply an additional process to the extracted features, and then reconstruct the fused feature map to obtain the resulting fused image. Finally, the quality of the resulting fused image is enhanced by various enhancement techniques such as Histogram Matching (HM), Histogram Equalization (HE), fuzzy techniques, and Contrast-Limited Adaptive Histogram Equalization (CLAHE). The performance of the proposed fusion-based CNN model is measured by various metrics of fusion and enhancement quality. Different realistic datasets of different modalities and diseases are tested and implemented, and real datasets are tested in the simulation analysis.
Keywords: image fusion; CNN; deep learning; feature extraction; evaluation metrics; medical diagnosis
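Of the enhancement steps listed, histogram matching (HM) is the least standard to implement. A minimal gray-level sketch (bin count and mapping details are our choices, not the paper's):

```python
import numpy as np

def histogram_match(source, reference, bins=256):
    """Remap `source` gray levels (in [0, 1)) so its histogram approximates
    `reference`'s, via a monotone mapping through the two CDFs."""
    s_hist, edges = np.histogram(source, bins=bins, range=(0.0, 1.0))
    r_hist, _ = np.histogram(reference, bins=bins, range=(0.0, 1.0))
    s_cdf = np.cumsum(s_hist) / source.size
    r_cdf = np.cumsum(r_hist) / reference.size
    # for each source bin, the first reference bin whose CDF reaches the same mass
    mapping = edges[np.searchsorted(r_cdf, s_cdf, side='left').clip(0, bins - 1)]
    idx = np.clip((source * bins).astype(int), 0, bins - 1)
    return mapping[idx]
```

In the enhancement stage described above, the fused image would play the role of `source` and a well-exposed reference image the role of `reference`.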
11. Combining Entropy Optimization and Sobel Operator for Medical Image Fusion
Authors: Nguyen Tu Trung, Tran Thi Ngan, Tran Manh Tuan, To Huu Nguyen. Computer Systems Science & Engineering, SCIE/EI, 2023, No. 1, pp. 535-544 (10 pages).
Fusing medical images is a topic of interest in medical image processing. It is achieved by fusing information from multimodality images to increase clinical diagnosis accuracy, aiming to improve image quality and preserve specific features. Medical image fusion methods generally use knowledge from many different fields, such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition, to fuse different medical images. There are two main approaches to image fusion: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm for fusing multimodal images based on entropy optimization and the Sobel operator. A wavelet transform is used to split the input images into components over the low- and high-frequency domains, and two fusion rules are then used to obtain the fused images. The first rule, based on the Sobel operator, is used for the high-frequency components. The second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is used for the low-frequency components. The proposed algorithm is applied to images related to central nervous system diseases. The experimental results show that the proposed algorithm outperforms some recent methods in terms of brightness level, contrast, entropy, gradient, visual information fidelity for fusion (VIFF), and Feature Mutual Information (FMI) indices.
Keywords: medical image fusion; wavelet; entropy optimization; PSO; Sobel operator
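The Sobel-based rule for high-frequency components can be read as "keep the coefficient whose neighbourhood has the stronger edge response". An illustrative sketch of that rule (the paper's exact rule and the PSO-driven low-frequency rule are not reproduced here):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operators (edge-strength map)."""
    p = np.pad(img, 1, mode='edge')
    gx = np.zeros_like(img, dtype=np.float64)
    gy = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for u in range(3):
        for v in range(3):
            win = p[u:u+h, v:v+w]
            gx += SOBEL_X[u, v] * win
            gy += SOBEL_Y[u, v] * win
    return np.hypot(gx, gy)

def fuse_highfreq(a, b):
    """High-frequency rule: per pixel, keep the coefficient with the
    stronger Sobel edge response."""
    return np.where(sobel_magnitude(a) >= sobel_magnitude(b), a, b)
```

In the full pipeline this rule would be applied to the high-frequency wavelet subbands of the two source images rather than to the raw images.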
12. RF-Net: Unsupervised Low-Light Image Enhancement Based on Retinex and Exposure Fusion
Authors: Tian Ma, Chenhui Fu, Jiayi Yang, Jiehui Zhang, Chuyang Shang. Computers, Materials & Continua, SCIE/EI, 2023, No. 10, pp. 1103-1122 (20 pages).
Low-light image enhancement methods have limitations in addressing issues such as color distortion, lack of vibrancy, and uneven light distribution, and they often require paired training data. To address these issues, we propose a two-stage unsupervised low-light image enhancement algorithm called the Retinex and Exposure Fusion Network (RFNet), which can overcome the over-enhancement of the high dynamic range and under-enhancement of the low dynamic range seen in existing enhancement algorithms. By training with unpaired low-light and regular-light images, the algorithm can better manage the challenges brought about by complex real-world environments. In the first stage, we design a multi-scale feature extraction module based on Retinex theory, capable of extracting details and structural information at different scales to generate high-quality illumination and reflection images. In the second stage, an exposure image generator is designed through the camera response function to acquire exposure images containing more dark features, and the generated images are fused with the original input images to complete the low-light image enhancement. Experiments show the effectiveness and rationality of each module designed in this paper. The method reconstructs the details of contrast and color distribution, outperforms the current state-of-the-art methods in both qualitative and quantitative metrics, and shows excellent performance in the real world.
Keywords: low-light image enhancement; multi-scale feature extraction module; exposure generator; exposure fusion
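Retinex-style decomposition, the starting point of RFNet's first stage, estimates reflectance as the log ratio of the image to a smoothed illumination estimate. A single-scale sketch with a box-filter surround as a stand-in for the usual Gaussian (RFNet itself learns this decomposition with a network):

```python
import numpy as np

def box_blur(img, radius):
    """Edge-padded box blur used as a cheap surround (illumination) estimate."""
    k = 2 * radius + 1
    p = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    for u in range(k):
        for v in range(k):
            out += p[u:u+img.shape[0], v:v+img.shape[1]]
    return out / (k * k)

def single_scale_retinex(img, radius=4, eps=1e-6):
    """Retinex reflectance estimate: log(image) - log(surround illumination).
    Multi-scale Retinex averages this over several surround radii."""
    return np.log(img + eps) - np.log(box_blur(img, radius) + eps)
```

Scaling the input by a constant leaves the output essentially unchanged, which is the illumination-invariance property Retinex-based enhancement relies on.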
13. Non Sub-Sampled Contourlet with Joint Sparse Representation Based Medical Image Fusion
Authors: Kandasamy Kittusamy, Latha Shanmuga Vadivu Sampath Kumar. Computer Systems Science & Engineering, SCIE/EI, 2023, No. 3, pp. 1989-2005 (17 pages).
Medical image fusion is the synthesizing technology for fusing multi-modal medical information using mathematical procedures to generate a better visual representation of the image content and a high-quality image output. It plays an indispensable role in solving complicated medical predicaments, yet while recent research results show an enhanced affinity for preserving medical image details, color distortion and halo artifacts remain unaddressed. This paper proposes a novel method for fusing Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images using a hybrid model of the Non-Subsampled Contourlet Transform (NSCT) and Joint Sparse Representation (JSR). This model satisfies the need for precise integration of medical images of different modalities, an essential requirement in diagnosing and treating patients. In the proposed model, the medical image is decomposed using NSCT, an efficient shift-invariant decomposition transform, and JSR is applied to extract the common features of the medical images for the fusion process. The performance analysis proves that the proposed image fusion technique is more efficient, provides better results, and achieves a high level of distinctness by integrating the advantages of complementary images. The comparative analysis proves that the proposed technique exhibits better quality than existing medical image fusion practices.
Keywords: medical image fusion; computed tomography; magnetic resonance imaging; non-subsampled contourlet transform (NSCT); joint sparse representation (JSR)
14. Brain Tumor Classification Using Image Fusion and EFPA-SVM Classifier
Authors: P. P. Fathimathul Rajeena, R. Sivakumar. Intelligent Automation & Soft Computing, SCIE, 2023, No. 3, pp. 2837-2855 (19 pages).
An accurate and early diagnosis of brain tumors based on medical imaging modalities is of great interest because brain tumors are a harmful threat to a person's health worldwide. Several medical imaging techniques have been used to analyze brain tumors, including computed tomography (CT) and magnetic resonance imaging (MRI): CT provides information about dense tissues, whereas MRI gives information about soft tissues. However, the fusion of CT and MRI images alone has little effect on enhancing the accuracy of brain tumor diagnosis, so machine learning methods have been adopted in recent years. This paper develops a novel scheme to detect and classify brain tumors based on fused CT and MRI images. The proposed approach starts by preprocessing the images to reduce noise. Fusion rules are then applied to obtain the fused image, and a segmentation algorithm is employed to isolate the tumor region from the background. Finally, a machine learning classifier classifies the brain images into benign and malignant tumors. Statistical measures are computed to evaluate the classification potential of the proposed scheme. Experimental outcomes show that the Enhanced Flower Pollination Algorithm (EFPA) system outperforms the other brain tumor classification methods considered for comparison.
Keywords: brain tumor classification; improved wavelet threshold; integer wavelet transform; medical image fusion
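The "integer wavelet transform" in the keywords is typically realized with lifting so that the transform is exactly invertible on integer pixel values, which matters when the fused result must be reconstructed losslessly. A one-level 1-D Haar lifting sketch (illustrative; the paper's improved thresholded variant is not reproduced):

```python
import numpy as np

def haar_int_forward(x):
    """Integer (lifting) Haar step on an even-length int array:
    approximation = floor average, detail = difference; exactly invertible."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    detail = odd - even
    approx = even + detail // 2          # floor((even + odd) / 2)
    return approx, detail

def haar_int_inverse(approx, detail):
    """Exact inverse of haar_int_forward."""
    even = approx - detail // 2
    odd = even + detail
    out = np.empty(approx.size * 2, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out
```

Because both directions use the same floor division, reconstruction is bit-exact for any integers, including negatives.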
Multimodal Medical Image Fusion Based on Parameter Adaptive PCNN and Latent Low-rank Representation
15
作者 WANG Wenyan ZHOU Xianchun YANG Liangjian 《Instrumentation》 2023年第1期45-58,共14页
Medical image fusion has been developed as an efficient assistive technology in various clinical applications such as medical diagnosis and treatment planning. Aiming at the problem of insufficient protection of image contour and detail information by traditional image fusion methods, a new multimodal medical image fusion method is proposed. This method first uses the non-subsampled shearlet transform to decompose the source image into high- and low-frequency subband coefficients, then uses the latent low-rank representation algorithm to fuse the low-frequency subband coefficients, and applies the improved PAPCNN algorithm to fuse the high-frequency subband coefficients. Finally, the time decay factor αe is configured automatically by an optimization method. The experimental results show that the proposed method solves the problems of difficult parameter setting and insufficient detail protection in traditional PCNN-based fusion, while achieving a marked improvement in visual quality and objective evaluation indicators.
Keywords: image fusion; non-subsampled shearlet transform; parameter-adaptive PCNN; latent low-rank representation
Research on Infrared Image Fusion Technology Based on Road Crack Detection
Authors: Guangjun Li, Lin Nan, Lu Zhang, Manman Feng, Yan Liu, Xu Meng 《Journal of World Architecture》 2023, No. 3, pp. 21-26 (6 pages)
This study proposes a road crack detection method based on infrared image fusion technology. By analyzing the characteristics of road crack images, the method applies several infrared image fusion techniques to different types of images. It not only reduces the professional requirements placed on inspectors but also improves the accuracy of road crack detection, accurately identifying crack location, direction, length, and other characteristic information. Experiments showed that the method performs well and meets the requirements of road crack detection.
Keywords: road crack detection; infrared image fusion technology; detection quality
Characterization of neurovascular compression in facial neuralgia patients by 3D high-resolution MRI and image fusion technique (Cited: 7)
Authors: Jing Chen, Zi-Yi Guo, Guang Yang, Xiong Wang, Qing-Yu Tang, Yue-Qiong Cheng, Yi Guo, Shui-Xi Fu, Cai-Xiang Chen, Xiang-Jun Han 《Asian Pacific Journal of Tropical Medicine》 SCIE CAS, 2012, No. 6, pp. 476-479 (4 pages)
Objective: To describe the anatomical characteristics and patterns of neurovascular compression (NVC) in patients suffering from trigeminal neuralgia (TN) by a 3D high-resolution magnetic resonance imaging (MRI) method and an image fusion technique. Methods: The anatomic structure of the trigeminal nerve, brain stem and blood vessels was observed in 100 consecutive TN patients by 3D high-resolution MRI (3D SPGR, contrast-enhanced T1 3D MP-RAGE and T2/T1 3D FIESTA). The 3D image sources were fused and visualized using 3D DOCTOR software. Results: One or several NVC sites, usually appearing 0-9.8 mm away from the brain stem, were found on the symptomatic side in 93% of the TN cases. The superior cerebellar artery was involved in 76% (71/93) of these cases. Other vessels, including the antero-inferior cerebellar artery, vertebral artery, basilar artery and veins, also contributed to the occurrence of NVC. The NVC sites were located in the proximal segment in 42% (39/93) of these cases and in the distal segment in 45% (42/93). Nerve dislocation or distortion was observed in 32% (30/93). Conclusions: Various 3D high-resolution MRI methods combined with the image fusion technique can provide pathologic anatomic information for the diagnosis and treatment of TN.
Keywords: neurovascular compression; facial neuralgia; magnetic resonance imaging; image fusion
Assessment of radiofrequency ablation margin by MRI-MRI image fusion in hepatocellular carcinoma (Cited: 8)
Authors: Xiao-Li Wang, Kai Li, Zhong-Zhen Su, Ze-Ping Huang, Ping Wang, Rong-Qin Zheng 《World Journal of Gastroenterology》 SCIE CAS, 2015, No. 17, pp. 5345-5351 (7 pages)
AIM: To investigate the feasibility and clinical value of magnetic resonance imaging (MRI)-MRI image fusion in assessing the ablative margin (AM) for hepatocellular carcinoma (HCC). METHODS: A newly developed ultrasound workstation for MRI-MRI image fusion was used to evaluate the AM of 62 tumors in 52 HCC patients after radiofrequency ablation (RFA). The lesions were divided into two groups: group A, in which the tumor was completely ablated and a 5 mm AM was achieved (n = 32); and group B, in which the tumor was completely ablated but a 5 mm AM was not achieved (n = 29). To detect local tumor progression (LTP), all patients were followed every two months by contrast-enhanced ultrasound, contrast-enhanced MRI or computed tomography (CT) in the first year after RFA; thereafter, the follow-up interval was prolonged to every three months. RESULTS: Of the 62 tumors, MRI-MRI image fusion was successful in 61 (98.4%); the remaining case had significant deformation of the liver and massive ascites after RFA. The time required for creating the image fusion and for AM evaluation was 15.5 ± 5.5 min (range: 8-22 min) and 9.6 ± 3.2 min (range: 6-14 min), respectively. The follow-up period ranged from 1 to 23 mo (14.2 ± 5.4 mo). In group A, no LTP was detected in the 32 lesions, whereas in group B, LTP was detected in 4 of 29 tumors, occurring at 2, 7, 9, and 15 mo after RFA. The frequency of LTP in group B (13.8%; 4/29) was significantly higher than that in group A (0/32, P = 0.046). All LTPs occurred in areas where the 5 mm AM was not achieved. CONCLUSION: MRI-MRI image fusion using an ultrasound workstation is feasible and useful for evaluating the AM after RFA for HCC.
Keywords: hepatocellular carcinoma; radiofrequency ablation; image fusion; magnetic resonance imaging; ablative margin
Contourlet transform for image fusion using cycle spinning (Cited: 9)
Authors: Kun Liu, Lei Guo, Jingsong Chen 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD, 2011, No. 2, pp. 353-357 (5 pages)
A new method for image fusion based on the Contourlet transform and cycle spinning is proposed. The Contourlet transform is a flexible multiresolution, local and directional image expansion that also provides a sparse representation for two-dimensional piecewise-smooth signals resembling images. Because the Contourlet transform lacks translation invariance, the conventional Contourlet-based fusion algorithm introduces many artifacts. According to the theory of cycle spinning as applied to image denoising, an invariant transform can efficiently reduce these artifacts through a series of shifted processing passes. The technique of cycle spinning is therefore introduced to develop a translation-invariant Contourlet fusion algorithm. This method can effectively eliminate the Gibbs-like phenomenon, extract the characteristics of the original images, and preserve more important information. Experimental results show the simplicity and effectiveness of the method and its advantages over conventional approaches.
Keywords: image processing; image fusion; Contourlet transform; cycle spinning
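The cycle-spinning idea in this abstract (shift, fuse, un-shift, average) can be illustrated with a toy example. A Contourlet implementation needs a dedicated library, so a one-level Haar wavelet fusion, which is likewise shift-variant, stands in for it here; all function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def dwt_haar(a):
    """One-level 2D Haar analysis (expects even height and width)."""
    a00, a01 = a[0::2, 0::2], a[0::2, 1::2]
    a10, a11 = a[1::2, 0::2], a[1::2, 1::2]
    ll = (a00 + a01 + a10 + a11) / 4  # approximation
    lh = (a00 + a01 - a10 - a11) / 4  # horizontal detail
    hl = (a00 - a01 + a10 - a11) / 4  # vertical detail
    hh = (a00 - a01 - a10 + a11) / 4  # diagonal detail
    return ll, lh, hl, hh

def idwt_haar(ll, lh, hl, hh):
    """Exact inverse of dwt_haar."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_once(x, y):
    """Shift-variant base fusion: average approximations, max-abs select details."""
    cx, cy = dwt_haar(x), dwt_haar(y)
    ll = (cx[0] + cy[0]) / 2
    details = [np.where(np.abs(dx) >= np.abs(dy), dx, dy)
               for dx, dy in zip(cx[1:], cy[1:])]
    return idwt_haar(ll, *details)

def cycle_spin_fuse(x, y, max_shift=4):
    """Cycle spinning: fuse under each circular shift, undo the shift, average."""
    acc = np.zeros_like(x, dtype=float)
    for dy in range(max_shift):
        for dx in range(max_shift):
            f = fuse_once(np.roll(x, (dy, dx), (0, 1)),
                          np.roll(y, (dy, dx), (0, 1)))
            acc += np.roll(f, (-dy, -dx), (0, 1))
    return acc / (max_shift * max_shift)
```

Averaging over shifts is what suppresses the shift-dependent (Gibbs-like) artifacts of the decimated transform; the same wrapper applies unchanged around a Contourlet-based `fuse_once`.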
Multi-source image fusion algorithm based on fast weighted guided filter (Cited: 6)
Authors: WANG Jian, YANG Ke, REN Ping, QIN Chunxia, ZHANG Xiufei 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD, 2019, No. 5, pp. 831-840 (10 pages)
In recent years, guided image fusion algorithms have become increasingly popular; however, current algorithms cannot eliminate halo artifacts. We propose an image fusion algorithm based on a fast weighted guided filter. First, the source images are separated into a series of high- and low-frequency components. Second, three visual features of the source image are extracted to construct a decision-map model. Third, a fast weighted guided filter is proposed to optimize the result of the previous step and to reduce the time complexity by exploiting the correlation among neighboring pixels. Finally, the filtered result is combined with the weight map to realize the fusion. The proposed algorithm is applied to multi-focus, visible-infrared and multi-modal images, and the results show that it effectively suppresses halo artifacts in the merged images with higher efficiency, outperforming traditional methods in both subjective visual quality and objective evaluation.
Keywords: fast guided filter; image fusion; visual feature; decision map
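The paper's weighted, fast variant is not reproduced in this listing; the sketch below implements only the plain guided filter it presumably builds on (the local linear model q = mean(a)·I + mean(b), with box means computed from an integral image). Function names and default parameters are illustrative assumptions.

```python
import numpy as np

def box_mean(a, r):
    """Windowed mean with radius r via an integral image (windows clipped at borders)."""
    h, w = a.shape
    s = np.zeros((h + 1, w + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(a, axis=0), axis=1)
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
    Y0, X0 = np.meshgrid(y0, x0, indexing="ij")
    Y1, X1 = np.meshgrid(y1, x1, indexing="ij")
    win_sum = s[Y1, X1] - s[Y0, X1] - s[Y1, X0] + s[Y0, X0]
    return win_sum / ((Y1 - Y0) * (X1 - X0))

def guided_filter(I, p, r=4, eps=1e-3):
    """Plain guided filter: fit q = a*I + b per window, then average a and b."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)   # edge-aware slope: near 1 at edges, near 0 in flat areas
    b = mean_p - a * mean_I      # offset
    return box_mean(a, r) * I + box_mean(b, r)
```

In a fusion pipeline like the one described in this abstract, such a filter would smooth the per-pixel decision map `p` while respecting the edges of the guidance image `I`, which is exactly where the halo-artifact problem arises.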