Journal Articles
163 articles found
1. CAEFusion: A New Convolutional Autoencoder-Based Infrared and Visible Light Image Fusion Algorithm (Cited by 1)
Authors: Chun-Ming Wu, Mei-Ling Ren, Jin Lei, Zi-Mu Jiang. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2857-2872 (16 pages).
To address the issues of incomplete information, blurred details, loss of details, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. The region attention module is meant to extract the background feature map based on the distinct properties of the background feature map and the detail feature map. A multi-scale convolution attention module is suggested to enhance the communication of feature information. At the same time, the feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. This study uses three available datasets from TNO, FLIR, and NIR to perform thorough quantitative and qualitative trials with five additional algorithms. The methods are assessed on four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were done on the M3FD dataset to further verify the algorithm's performance in comparison with five other algorithms, with accuracy evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5). Comprehensive experimental findings show that CAEFusion performs well on subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
Keywords: image fusion; deep learning; auto-encoder (AE); infrared; visible light
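The four no-reference metrics named in this abstract (EN, SD, SF, AG) have standard textbook definitions. A minimal NumPy sketch (function names are mine, not from the paper):

```python
import numpy as np

def entropy(img):
    # EN: Shannon entropy of the 8-bit gray-level histogram, in bits.
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img):
    # SF: root of summed mean squared row-wise and column-wise differences.
    img = img.astype(np.float64)
    rf = np.mean((img[:, 1:] - img[:, :-1]) ** 2)
    cf = np.mean((img[1:, :] - img[:-1, :]) ** 2)
    return float(np.sqrt(rf + cf))

def average_gradient(img):
    # AG: mean magnitude of the local gradient (forward differences).
    img = img.astype(np.float64)
    dx = img[:-1, 1:] - img[:-1, :-1]
    dy = img[1:, :-1] - img[:-1, :-1]
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))

# SD is simply img.std(); no helper needed.
```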
2. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding (Cited by 1)
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 1441-1461 (21 pages).
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion; Res2Net-Transformer; infrared image; visible image
3. Mangrove monitoring and extraction based on multi-source remote sensing data: a deep learning method based on SAR and optical image fusion
Authors: Yiheng Xie, Xiaoping Rui, Yarong Zou, Heng Tang, Ninglei Ouyang. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2024, No. 9, pp. 110-121 (12 pages).
Mangroves are indispensable to coastlines, maintaining biodiversity, and mitigating climate change. Therefore, improving the accuracy of mangrove information identification is crucial for their ecological protection. Given the limited morphological information of synthetic aperture radar (SAR) images, which are greatly affected by noise, and the susceptibility of optical images to weather and lighting conditions, this paper proposes a pixel-level weighted fusion method for SAR and optical images. Image fusion enhances the target features and makes mangrove monitoring more comprehensive and accurate. To address the high similarity between mangrove forests and other forests, this paper builds on the U-Net convolutional neural network and adds an attention mechanism in the feature extraction stage so that the model pays more attention to the mangrove vegetation areas in the image. To accelerate convergence and normalize the input, a batch normalization (BN) layer and a Dropout layer are added after each convolutional layer. Since mangroves are a minority class in the image, an improved cross-entropy loss function is introduced to improve the model's ability to recognize mangroves. The AttU-Net model for mangrove recognition in high-similarity environments is thus constructed on the fused images. Comparison experiments show that the overall accuracy of the improved U-Net model trained on the fused images is significantly higher. Based on the fused images, the recognition results of the proposed AttU-Net model are compared with its benchmark model, U-Net, and the Dense-Net, Res-Net, and Seg-Net methods. The AttU-Net model captured mangroves' complex structures and textural features in images more effectively. The average OA, F1-score, and Kappa coefficient in the four tested regions were 94.406%, 90.006%, and 84.045%, significantly higher than the other methods. This method can provide technical support for the monitoring and protection of mangrove ecosystems.
Keywords: image fusion; SAR image; optical image; mangrove; deep learning; attention mechanism
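The abstract does not give the exact weighting rule, so the pixel-level weighted fusion step can only be sketched with an assumed fixed global weight `w` over co-registered bands:

```python
import numpy as np

def weighted_fusion(sar, optical, w=0.4):
    # Pixel-level weighted fusion of a co-registered SAR band and an
    # optical band. The paper's actual weighting scheme is not stated in
    # the abstract; a single global SAR weight w is assumed here.
    sar = sar.astype(np.float64)
    optical = optical.astype(np.float64)
    return w * sar + (1.0 - w) * optical
```

In practice the weight would likely vary per pixel (e.g. by local texture or noise level); the fixed scalar is only a placeholder for that rule.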
4. Multimodality Medical Image Fusion Based on Pixel Significance with Edge-Preserving Processing for Clinical Applications
Authors: Bhawna Goyal, Ayush Dogra, Dawa Chyophel Lepcha, Rajesh Singh, Hemant Sharma, Ahmed Alkhayyat, Manob Jyoti Saikia. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4317-4342 (26 pages).
Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals that identify significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weight computations are performed using these reconstructed images, and the weights are then fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance over competing techniques in both qualitative and quantitative evaluation. The proposed method also requires less computational complexity and execution time while improving diagnostic accuracy; owing to this lower complexity, the fusion method is efficient in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of detail information, edge contours, and overall contrast.
Keywords: image fusion; fractal data analysis; biomedical; diseases; research; multiresolution analysis; numerical analysis
5. Image Fusion Using Wavelet Transformation and XGBoost Algorithm
Authors: Shahid Naseem, Tariq Mahmood, Amjad Rehman Khan, Umer Farooq, Samra Nawazish, Faten S. Alamri, Tanzila Saba. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 801-817 (17 pages).
Recently, there have been several uses for digital image processing, and image fusion has become a prominent application in the image processing domain. To create one final image that proves more informative and helpful than the original input images, image fusion merges two or more initial images of the same item. Image fusion aims to produce, enhance, and transform significant elements of the source images into combined images for the sake of human visual perception. Image fusion is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly-line robots, with image quality varying depending on the application. The paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), HIS (hue, intensity, saturation), the wavelet transform, the discrete cosine transform (DCT), the dual-tree complex wavelet transform (CWT), and multiple wavelet transforms. Image fusion methods integrate data from several source images of an identical target, thereby enhancing information in an extremely efficient manner. More precisely, in imaging techniques, the depth-of-field constraint precludes images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed. These wavelet decomposition and recomposition techniques enable the method to make use of existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach first extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images. This study also enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. Images are segmented using the K-Means algorithm; the segmentation aids in identifying specific regions of interest, with Particle Swarm Optimization (PSO) used for trait selection and XGBoost for classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% with good objective indicators.
Keywords: image fusion; max-min average; CWT; XGBoost; DCT; inclusive innovations; spatial and frequency domain
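The multi-focus wavelet decomposition/recomposition idea can be sketched with a single-level 2-D Haar transform; the paper's actual filter choice is not stated, so Haar and the common max-absolute rule on detail sub-bands are assumptions:

```python
import numpy as np

def haar2(img):
    # One-level 2-D Haar decomposition: approximation + 3 detail sub-bands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2.
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse_multifocus(x, y):
    # Average the approximation bands; keep the larger-magnitude detail
    # coefficient per position (favoring the sharper source image).
    X, Y = haar2(x.astype(np.float64)), haar2(y.astype(np.float64))
    ll = (X[0] + Y[0]) / 2.0
    details = [np.where(np.abs(dx) >= np.abs(dy), dx, dy)
               for dx, dy in zip(X[1:], Y[1:])]
    return ihaar2(ll, *details)
```

A production version would use a wavelet library (e.g. PyWavelets) with multiple decomposition levels and an optimized filter bank.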
6. Enhanced Growth Optimizer and Its Application to Multispectral Image Fusion
Authors: Jeng-Shyang Pan, Wenda Li, Shu-Chuan Chu, Xiao Sui, Junzo Watada. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 3033-3062 (30 pages).
The growth optimizer (GO) is an innovative and robust metaheuristic optimization algorithm designed to simulate the learning and reflective processes experienced by individuals as they mature within the social environment. However, the original GO algorithm is constrained by two significant limitations, slow convergence and high memory requirements, which restrict its application to large-scale and complex problems. To address these problems, this paper proposes an enhanced growth optimizer (eGO). In contrast to conventional population-based optimization algorithms, the eGO algorithm utilizes a probabilistic model, designated the virtual population, which can accurately replicate the behavior of actual populations while reducing memory consumption. Furthermore, the paper introduces the Lévy flight mechanism, which enhances the diversity and flexibility of the search process, further improving the algorithm's global search capability and convergence speed. To verify the effectiveness of the eGO algorithm, a series of experiments were conducted on the CEC2014 and CEC2017 test sets. The results demonstrate that the eGO algorithm outperforms the original GO algorithm and other compact algorithms in memory usage and convergence speed, exhibiting powerful optimization capabilities. Finally, the eGO algorithm was applied to image fusion; in a comparative analysis with the existing PSO and GO algorithms and other compact algorithms, eGO demonstrates superior performance.
Keywords: growth optimizer; probabilistic model; Lévy flight; image fusion
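The Lévy flight mechanism is usually implemented with Mantegna's algorithm for generating heavy-tailed step sizes; the paper does not spell out its exact variant, so this is a generic sketch:

```python
import math
import numpy as np

def levy_steps(n, beta=1.5, rng=None):
    # Lévy-stable step sizes via Mantegna's algorithm, as commonly used
    # to inject occasional long jumps into metaheuristic search.
    rng = np.random.default_rng(rng)
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)       # scale of the numerator normal
    u = rng.normal(0.0, sigma, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v) ** (1 / beta)      # heavy-tailed steps
```

A candidate solution would then be perturbed as `x_new = x + step * (x - x_best)` or similar, depending on the optimizer's update rule.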
7. Enhancing the Quality of Low-Light Printed Circuit Board Images through Hue, Saturation, and Value Channel Processing and Improved Multi-Scale Retinex
Authors: Huichao Shang, Penglei Li, Xiangqian Peng. Journal of Computer and Communications, 2024, No. 1, pp. 1-10 (10 pages).
To address the issue of deteriorated PCB image quality in the quality inspection process due to insufficient or uneven lighting, we propose an image enhancement fusion algorithm based on different color spaces. Firstly, an improved MSRCR method is employed for brightness enhancement of the original image. Next, the color space of the original image is transformed from RGB to HSV, after which the S-channel image is processed using bilateral filtering and contrast stretching algorithms, and the V-channel image is subjected to brightness enhancement using adaptive Gamma and CLAHE algorithms. The processed image is then transformed back from HSV to the RGB color space. Finally, the images processed by the two algorithms are fused to create a new RGB image, and color restoration is performed on the fused image. Comparative experiments with other methods indicate that image contrast is optimized, texture features are more abundantly preserved, brightness levels are significantly improved, and color distortion is effectively prevented, thus enhancing the quality of low-lit PCB images.
Keywords: low-lit PCB images; spatial transformation; image enhancement; image fusion; HSV
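The "adaptive Gamma" step on the V channel is not specified in detail; a common adaptation rule chooses the exponent so that the mean brightness maps to mid-gray, sketched here under that assumption (V scaled to [0, 1]):

```python
import numpy as np

def adaptive_gamma(v):
    # Pick gamma so that mean(v) ** gamma == 0.5, i.e. the average
    # brightness of the V channel is pulled toward mid-gray. This rule
    # is an assumption; the paper's exact adaptation is not given.
    m = float(np.clip(v.mean(), 1e-6, 1 - 1e-6))
    gamma = np.log(0.5) / np.log(m)
    return v ** gamma
```

Dark images (mean < 0.5) get gamma < 1 (brightening); bright images get gamma > 1 (darkening).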
8. Multi-Modal Medical Image Fusion Based on Improved Parameter Adaptive PCNN and Latent Low-Rank Representation
Authors: Zirui Tang, Xianchun Zhou. Instrumentation, 2024, No. 2, pp. 53-63 (11 pages).
Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical image fusion solutions to protect image details and significant information, a new multimodal medical image fusion method (NSST-PAPCNN-LatLRR) is proposed in this paper. Firstly, the high- and low-frequency sub-band coefficients are obtained by decomposing the source image using the non-subsampled shearlet transform (NSST). Then, the latent low-rank representation algorithm is used to fuse the low-frequency sub-band coefficients, and an improved PAPCNN algorithm is proposed for the fusion of the high-frequency sub-band coefficients. The improved PAPCNN model sets its parameters automatically, and an optimal configuration method is used for the time decay factor αe. The experimental results show that, in comparison with five mainstream fusion algorithms, the new algorithm significantly improves the visual effect, enhances the ability to characterize important information in images, and further improves the protection of detail information; it achieves at least four first places across six objective indexes.
Keywords: image fusion; improved parameter adaptive PCNN; non-subsampled shearlet transform; latent low-rank representation
9. Hyperspectral Image Super-Resolution Meets Deep Learning: A Survey and Perspective (Cited by 3)
Authors: Xinya Wang, Qian Hu, Yingsong Cheng, Jiayi Ma. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 8, pp. 1668-1691 (24 pages).
Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from a low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, benefiting subsequent applications. The development of deep learning has promoted significant progress in hyperspectral image super-resolution, and the powerful expression capabilities of deep neural networks make the predicted results more reliable. Recently, several of the latest deep learning technologies have driven rapid growth in hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective has been absent. To this end, this survey first introduces the concept of hyperspectral image super-resolution and classifies methods by whether they use auxiliary information. It then reviews learning-based methods in three categories: single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, it summarizes commonly used hyperspectral datasets, and representative methods in the three categories are evaluated qualitatively and quantitatively. Moreover, it briefly introduces several typical applications of hyperspectral image super-resolution, including ground object classification, urban change detection, and ecosystem monitoring. Finally, it provides conclusions and the challenges of existing learning-based methods, looking forward to potential future research directions.
Keywords: deep learning; hyperspectral image; image fusion; image super-resolution; survey
10. Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform (Cited by 1)
Authors: Bhawna Goyal, Ayush Dogra, Rahul Khoond, Dawa Chyophel Lepcha, Vishal Goyal, Steven L. Fernandes. Computers, Materials & Continua (SCIE, EI), 2023, No. 7, pp. 311-327 (17 pages).
The synthesis of visual information from multiple medical imaging inputs into a single fused image without any loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disorders. This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into base and detail layers, coarsely splitting two kinds of features of the input images: structural and textural information. The detail and base layers are further combined using a sum-based fusion rule that maximizes the noise-filtering contrast level by effectively preserving most of the structural and textural details. NSCT is utilized to further decompose these images into their low- and high-frequency coefficients. These coefficients are then combined independently using a principal component analysis/Karhunen-Loeve (PCA/KL) based fusion rule, substantiating eigenfeature reinforcement in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients. Finally, an inverse NSCT is applied to each coefficient to produce the final fusion result. Experimental results demonstrate the advantage of the proposed technique on a publicly accessible dataset, with comparative studies on three pairs of medical images from different modalities and health conditions. The approach offers better visual and robust performance with better objective measurements for research and development, since it excellently preserves significant salient features and precision without producing abnormal information in qualitative and quantitative analysis.
Keywords: anisotropic diffusion; biomedical; medical; health; diseases; adversarial attacks; image fusion; research and development; precision
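The anisotropic diffusion (AD) stage corresponds to classic Perona-Malik diffusion: the diffused result is the base layer and the residual is the detail layer. A minimal sketch (parameter values are illustrative, not the paper's):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    # Perona-Malik diffusion: smooths homogeneous regions while
    # preserving edges. base = result, detail = img - base.
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences toward the four neighbors (replicated border).
        dn = np.vstack([u[:1], u[:-1]]) - u
        ds = np.vstack([u[1:], u[-1:]]) - u
        de = np.hstack([u[:, 1:], u[:, -1:]]) - u
        dw = np.hstack([u[:, :1], u[:, :-1]]) - u
        # Exponential conduction function suppresses diffusion across
        # strong gradients (edges).
        u += lam * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```

The base/detail split used by the paper's pipeline would then be `base = anisotropic_diffusion(img)` and `detail = img - base`.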
11. An Efficient Medical Image Deep Fusion Model Based on Convolutional Neural Networks (Cited by 1)
Authors: Walid El-Shafai, Noha A. El-Hag, Ahmed Sedik, Ghada Elbanby, Fathi E. Abd El-Samie, Naglaa F. Soliman, Hussah Nasser AlEisa, Mohammed E. Abdel Samea. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 2905-2925 (21 pages).
Medical image fusion is considered the best method for obtaining one image with rich details for efficient medical diagnosis and therapy. Deep learning provides high performance for several medical image analysis applications. This paper proposes a deep learning model for the medical image fusion process based on a Convolutional Neural Network (CNN). The basic idea of the proposed model is to extract features from both CT and MR images; an additional process is then executed on the extracted features, after which the fused feature map is reconstructed to obtain the resulting fused image. Finally, the quality of the resulting fused image is enhanced by various enhancement techniques such as Histogram Matching (HM), Histogram Equalization (HE), fuzzy techniques, and Contrast Limited Adaptive Histogram Equalization (CLAHE). The performance of the proposed fusion-based CNN model is measured by various metrics of fusion and enhancement quality. Different realistic datasets of different modalities and diseases are tested and implemented, and real datasets are also tested in the simulation analysis.
Keywords: image fusion; CNN; deep learning; feature extraction; evaluation metrics; medical diagnosis
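Of the enhancement techniques listed, Histogram Equalization (HE) is the simplest to illustrate; a minimal 8-bit grayscale sketch (not the paper's implementation):

```python
import numpy as np

def hist_equalize(img):
    # Global histogram equalization: map each gray level through the
    # normalized cumulative histogram so the output levels spread over
    # the full [0, 255] range.
    img = img.astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]
```

CLAHE follows the same idea but equalizes per tile with a clipped histogram and bilinear blending between tiles.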
12. Multimodal Medical Image Fusion Based on Parameter Adaptive PCNN and Latent Low-rank Representation (Cited by 1)
Authors: WANG Wenyan, ZHOU Xianchun, YANG Liangjian. Instrumentation, 2023, No. 1, pp. 45-58 (14 pages).
Medical image fusion has been developed as an efficient assistive technology in various clinical applications such as medical diagnosis and treatment planning. To address the problem that traditional image fusion methods insufficiently protect image contour and detail information, a new multimodal medical image fusion method is proposed. This method first uses the non-subsampled shearlet transform to decompose the source image into high- and low-frequency sub-band coefficients, then uses the latent low-rank representation algorithm to fuse the low-frequency sub-band coefficients and applies the improved PAPCNN algorithm to fuse the high-frequency sub-band coefficients. Finally, based on the automatic setting of the parameters, an optimization method is configured for the time decay factor αe. The experimental results show that the proposed method solves the problems of difficult parameter setting and insufficient detail protection in traditional PCNN-based fusion, while achieving great improvement in visual quality and objective evaluation indicators.
Keywords: image fusion; non-subsampled shearlet transform; parameter adaptive PCNN; latent low-rank representation
13. Fusing Satellite Images Using ABC Optimizing Algorithm
Authors: Nguyen Hai Minh, Nguyen Tu Trung, Tran Thi Ngan, Tran Manh Tuan. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 9, pp. 3901-3909 (9 pages).
Fusing satellite (remote sensing) images is an interesting topic in satellite image processing. The result image is achieved by fusing information from spectral and panchromatic images for sharpening. In this paper, a new algorithm based on the artificial bee colony (ABC) algorithm with peak signal-to-noise ratio (PSNR) index optimization is proposed for fusing remote sensing images. Firstly, the wavelet transform is used to split the input images into components over the high- and low-frequency domains. Then, two fusion rules are used to obtain the fused images: the high-frequency components are fused using average values, and the low-frequency components are fused using a combining rule with a parameter. The parameter for fusing the low-frequency components is determined by the ABC algorithm through PSNR index optimization. Experimental results on different input images show that the proposed algorithm outperforms some recent methods.
Keywords: remote sensing image; satellite images; image fusion; wavelet; PSNR; optimization; ABC
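The role of the PSNR objective in choosing the low-frequency combination parameter can be shown with an exhaustive scan standing in for the ABC optimizer (function names and the scalar-weight form are my assumptions):

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def best_weight(low_a, low_b, ref, steps=101):
    # Scan w in [0, 1] for low = w*low_a + (1-w)*low_b maximizing PSNR
    # against a reference. ABC would search this space stochastically
    # instead of on a grid.
    ws = np.linspace(0.0, 1.0, steps)
    scores = [psnr(ref, w * low_a + (1 - w) * low_b) for w in ws]
    return float(ws[int(np.argmax(scores))])
```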
14. Combining Entropy Optimization and Sobel Operator for Medical Image Fusion
Authors: Nguyen Tu Trung, Tran Thi Ngan, Tran Manh Tuan, To Huu Nguyen. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 1, pp. 535-544 (10 pages).
Fusing medical images is a topic of interest in medical image processing. It is achieved by fusing information from multimodality images to increase clinical diagnosis accuracy, aiming to improve image quality and preserve specific features. Medical image fusion methods generally draw on many different fields, such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition, to fuse different medical images. There are two main approaches to image fusion: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm to fuse multimodal images based on entropy optimization and the Sobel operator. The wavelet transform is used to split the input images into components over the low- and high-frequency domains. Then, two fusion rules are used to obtain the fused images. The first rule, based on the Sobel operator, is used for the high-frequency components; the second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is used for the low-frequency components. The proposed algorithm is applied to images related to central nervous system diseases. The experimental results show that the proposed algorithm outperforms some recent methods in terms of brightness level, contrast, entropy, gradient, and the visual information fidelity for fusion (VIFF) and Feature Mutual Information (FMI) indices.
Keywords: medical image fusion; wavelet; entropy optimization; PSO; Sobel operator
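The Sobel-based rule for high-frequency components can be sketched as a per-pixel selection by gradient magnitude (a plausible reading of the abstract, not the paper's exact rule):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)

def sobel_magnitude(img):
    # Gradient magnitude via 3x3 Sobel kernels (zero-padded borders);
    # SOBEL_X.T is the vertical kernel.
    img = np.pad(img.astype(np.float64), 1)
    h, w = img.shape[0] - 2, img.shape[1] - 2
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            win = img[i:i + h, j:j + w]
            gx += SOBEL_X[i, j] * win
            gy += SOBEL_X.T[i, j] * win
    return np.hypot(gx, gy)

def sobel_fuse(a, b):
    # Pick, per position, the coefficient from the sub-band whose source
    # has the larger Sobel gradient magnitude (stronger local edge).
    mask = sobel_magnitude(a) >= sobel_magnitude(b)
    return np.where(mask, a, b)
```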
15. Visual Enhancement of Underwater Images Using Transmission Estimation and Multi-Scale Fusion
Authors: R. Vijay Anandh, S. Rukmani Devi. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 3, pp. 1897-1910 (14 pages).
The demand for the exploration of ocean resources is increasing exponentially, and underwater image data plays a significant role in many research areas. Despite this, the visual quality of underwater images is degraded by two main factors: backscattering and attenuation. Therefore, visual enhancement has become an essential process to recover the required data from the images, and many algorithms have been proposed over the past decade for improving image quality. This paper proposes a single-image enhancement technique that requires no external datasets. The degraded images are subjected to two main processes: color correction and image fusion. Initially, veiling light and transmission light are estimated to find the color required for correction; veiling light refers to unwanted light, whereas transmission light refers to the light required for color correction. These estimates are applied in the scene recovery equation. The image obtained from color correction is subjected to a fusion process in which the image is split into two versions processed with white balance and contrast enhancement techniques. The results are converted into three weight maps, namely luminance, saliency, and chromaticity, and fused using the Laplacian pyramid. The results are graphically compared with the input data using RGB histogram plots. Finally, image quality is measured and tabulated using underwater image quality measures.
Keywords: underwater image; backscattering; attenuation; image fusion; veiling light; white balance; Laplacian pyramid
Hyperspectral Image Sharpening Based on Deep Convolutional Neural Network and Spatial-Spectral Spread Transform Models
16
Authors: 陆小辰, 刘晓慧, 杨德政, 赵萍, 阳云龙. Journal of Donghua University (English Edition) (CAS), 2023, Issue 1, pp. 88-95 (8 pages)
In order to improve the spatial resolution of hyperspectral (HS) images and minimize spectral distortion, an HS and multispectral (MS) image fusion approach based on a convolutional neural network (CNN) is proposed. The approach incorporates the linear spectral mixture model and the spatial-spectral spread transform model into the learning phase of the network, aiming to fully exploit the spatial-spectral information of the HS and MS images and improve the spectral fidelity of the fused images. Experiments on two real remote sensing datasets at different resolutions demonstrate that, compared with several state-of-the-art HS and MS image fusion methods, the proposed approach achieves superior spectral fidelity and lower fusion errors.
Keywords: convolutional neural network (CNN), hyperspectral image, image fusion, multispectral image, unmixing method
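As background to the abstract above: the linear spectral mixture model writes each HS pixel as a convex combination of endmember spectra, and a spectral spread (response) matrix relates the many HS bands to the few MS bands. A toy sketch with hypothetical matrices `E` (endmembers), `A` (abundances), and `R` (spectral response), none of which come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
bands, endmembers, pixels = 30, 3, 5

# Linear spectral mixture model: each HS pixel is a non-negative
# combination of endmember spectra whose abundances sum to one.
E = rng.random((bands, endmembers))        # hypothetical endmember signatures
A = rng.random((endmembers, pixels))
A /= A.sum(axis=0, keepdims=True)          # per-pixel abundances sum to one
hs = E @ A                                 # simulated HS pixels, (bands, pixels)

# Spectral spread (response) matrix: here each MS band simply averages
# a contiguous block of ten HS bands.
R = np.zeros((3, bands))
for b in range(3):
    R[b, 10 * b:10 * (b + 1)] = 0.1
ms = R @ hs                                # simulated MS pixels, (3, pixels)
```

Embedding both relations as constraints during training is what lets the network above penalize spectrally implausible fusion outputs.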
Non Sub-Sampled Contourlet with Joint Sparse Representation Based Medical Image Fusion
17
Authors: Kandasamy Kittusamy, Latha Shanmuga Vadivu Sampath Kumar. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 3, pp. 1989-2005 (17 pages)
Medical image fusion synthesizes multi-modal medical information using mathematical procedures to generate output with better visual content and higher image quality. It plays an indispensable role in solving complicated medical problems; however, while recent research has improved the preservation of medical image details, color distortion and halo artifacts remain unaddressed. This paper proposes a novel method for fusing Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images using a hybrid model of the Non-Subsampled Contourlet Transform (NSCT) and Joint Sparse Representation (JSR). The model satisfies the need for precise integration of medical images of different modalities, an essential requirement for diagnosing and treating patients in clinical practice. In the proposed model, the medical image is decomposed using NSCT, an efficient shift-invariant decomposition transform, and JSR is exercised to extract the common features of the medical images for the fusion process. The performance analysis shows that the proposed fusion technique is more efficient, provides better results, and achieves a high level of distinctness by integrating the advantages of complementary images. The comparative analysis proves that the proposed technique exhibits better quality than existing medical image fusion practices.
Keywords: medical image fusion, computed tomography, magnetic resonance imaging, non-subsampled contourlet transform (NSCT), joint sparse representation (JSR)
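The paper's fusion rule operates on JSR coefficients; as a generic stand-in (not the authors' rule), a common transform-domain strategy keeps, at each position, the detail coefficient with the larger magnitude, i.e. the stronger edge or texture response of the two modalities:

```python
import numpy as np

def fuse_coefficients(c_a, c_b):
    # Keep, at each position, the coefficient with the larger absolute
    # value -- the stronger detail response of the two inputs.
    return np.where(np.abs(c_a) >= np.abs(c_b), c_a, c_b)

ct_detail = np.array([[3.0, -0.5], [0.2, -4.0]])    # toy CT detail band
mri_detail = np.array([[-1.0, 2.5], [0.1, 1.0]])    # toy MRI detail band
fused = fuse_coefficients(ct_detail, mri_detail)    # [[3.0, 2.5], [0.2, -4.0]]
```

Applied per NSCT sub-band and followed by the inverse transform, this rule preserves the salient structure of whichever modality is locally stronger.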
Brain Tumor Classification Using Image Fusion and EFPA-SVM Classifier
18
Authors: P. P. Fathimathul Rajeena, R. Sivakumar. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 3, pp. 2837-2855 (19 pages)
An accurate and early diagnosis of brain tumors based on medical imaging modalities is of great interest because brain tumors are a harmful threat to human health worldwide. Several medical imaging techniques have been used to analyze brain tumors, including computed tomography (CT) and magnetic resonance imaging (MRI): CT provides information about dense tissues, whereas MRI gives information about soft tissues. However, the fusion of CT and MRI images alone has little effect on enhancing diagnostic accuracy, so machine learning methods have been adopted for brain tumor diagnosis in recent years. This paper develops a novel scheme to detect and classify brain tumors based on fused CT and MRI images. The proposed approach starts by preprocessing the images to reduce noise. Fusion rules are then applied to obtain the fused image, and a segmentation algorithm isolates the tumor region from the background. Finally, a machine learning classifier labels the brain images as benign or malignant tumors. Statistical measures are computed to evaluate the classification potential of the proposed scheme. Experimental outcomes show that the Enhanced Flower Pollination Algorithm (EFPA) system outperforms the other brain tumor classification methods considered for comparison.
Keywords: brain tumor classification, improved wavelet threshold, integer wavelet transform, medical image fusion
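The noise-reduction preprocessing above rests on an improved wavelet threshold. For orientation only, the classic soft-thresholding rule that such improvements build on is shown below; the paper's modified rule is not reproduced:

```python
import numpy as np

def soft_threshold(coeffs, t):
    # Shrink every wavelet detail coefficient toward zero by t,
    # zeroing those whose magnitude falls below the threshold.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

noisy_detail = np.array([4.0, -0.3, 0.8, -2.5, 0.1])
denoised = soft_threshold(noisy_detail, 1.0)   # [3.0, 0.0, 0.0, -1.5, 0.0]
```

Small coefficients (mostly noise) are suppressed while large, structure-bearing coefficients survive with a uniform shrinkage, which is why thresholding in the wavelet domain denoises without blurring edges as much as spatial smoothing does.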
Application of Dual-Energy X-Ray Image Detection of Dangerous Goods Based on YOLOv7
19
Authors: Baosheng Liu, Fei Wang, Ming Gao, Lei Zhao. Journal of Computer and Communications, 2023, Issue 7, pp. 208-225 (18 pages)
X-ray security equipment is currently one of the most commonly used tools for dangerous goods detection, and as security workloads increase, using object detection technology to assist security personnel has become an inevitable trend. With the development of deep learning, object detection technology has matured, and detection frameworks based on convolutional neural networks have been widely used in industrial, medical, and military fields. To improve the efficiency of security staff and reduce the risk of missed detection of dangerous goods, this paper balances the classes of dangerous goods and expands the dataset collected from X-ray security equipment by inserting dangerous goods into images of empty packages. The high- and low-energy images are combined using a high-low energy feature fusion method. Finally, a dangerous goods detector based on the YOLOv7 model is trained. With these methods, detection accuracy improves by 6% compared with training directly on the original dataset, and the speed reaches 93 FPS, which meets the requirements of an online security system, greatly improves the work efficiency of security personnel, and eliminates the security risks caused by missed detections.
Keywords: X-ray, dangerous goods detection, high and low energy image fusion, accuracy, real-time detection
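The paper fuses the high- and low-energy features inside the network; as a loose pixel-level analogy (an assumption for illustration, not the authors' method), the two energy channels of a dual-energy scan can be blended with a convex weight:

```python
import numpy as np

def fuse_dual_energy(high, low, alpha=0.5):
    # Convex per-pixel blend of the two energy channels; alpha trades
    # high-energy penetration detail against low-energy material contrast.
    return alpha * high.astype(float) + (1.0 - alpha) * low.astype(float)

fused = fuse_dual_energy(np.full((2, 2), 200.0), np.full((2, 2), 100.0), alpha=0.25)
```

Learning the fusion in feature space, as the paper does, lets the weighting vary with content rather than being a single global `alpha`.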
Research on Stitching Algorithm Based on Tree Branch Image
20
Authors: Biao Huang, Shiping Zou. World Journal of Engineering and Technology, 2023, Issue 2, pp. 381-388 (8 pages)
Branch identification is a key technology for the automated pruning of fruit tree branches, and one of its technical bottlenecks lies in the stitching of branch images. To this end, we propose a set of branch image stitching algorithms. The algorithm determines detection feature points based on the grey-scale prime centroid method, performs the geometric transformation of the image using the coordinate transformation matrix H of corresponding image points, and realises feature matching through sample comparison and classification. The experimental results show that the point matching is more accurate and less time-consuming.
Keywords: stitching techniques, image fusion, image recognition, branch images
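The coordinate transformation matrix H mentioned above maps points between the two images in homogeneous coordinates. A minimal sketch with a hypothetical translation-only H (a real stitching H would be estimated from the matched feature points):

```python
import numpy as np

def apply_homography(H, points):
    # Map (x, y) points through a 3x3 matrix H in homogeneous coordinates.
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]   # divide out the scale term

# A hypothetical H: pure translation by (5, -2), used as a sanity check.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
warped = apply_homography(H, np.array([[0.0, 0.0], [10.0, 4.0]]))
```

Once H is known, warping one branch image into the other's coordinate frame reduces stitching to blending the overlapping region.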