To address the issues of incomplete information,blurred details,loss of details,and insufficient contrast in infrared and visible image fusion,an image fusion algorithm based on a convolutional autoencoder is proposed...To address the issues of incomplete information,blurred details,loss of details,and insufficient contrast in infrared and visible image fusion,an image fusion algorithm based on a convolutional autoencoder is proposed.The region attention module is meant to extract the background feature map based on the distinct properties of the background feature map and the detail feature map.A multi-scale convolution attention module is suggested to enhance the communication of feature information.At the same time,the feature transformation module is introduced to learn more robust feature representations,aiming to preserve the integrity of image information.This study uses three available datasets from TNO,FLIR,and NIR to perform thorough quantitative and qualitative trials with five additional algorithms.The methods are assessed based on four indicators:information entropy(EN),standard deviation(SD),spatial frequency(SF),and average gradient(AG).Object detection experiments were done on the M3FD dataset to further verify the algorithm’s performance in comparison with five other algorithms.The algorithm’s accuracy was evaluated using the mean average precision at a threshold of 0.5(mAP@0.5)index.Comprehensive experimental findings show that CAEFusion performs well in subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.展开更多
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase thevisual impression of fused images by improving the quality of infrared and visible light picture fusion. The ne...A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase thevisual impression of fused images by improving the quality of infrared and visible light picture fusion. The networkcomprises an encoder module, fusion layer, decoder module, and edge improvementmodule. The encoder moduleutilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformerto achieve deep-level co-extraction of local and global features from the original picture. An edge enhancementmodule (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy isintroduced to enhance the adaptive representation of information in various regions of the source image, therebyenhancing the contrast of the fused image. The encoder and the EEM module extract features, which are thencombined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test thealgorithmproposed in this paper. The results of the experiments demonstrate that the network effectively preservesbackground and detail information in both infrared and visible images, yielding superior outcomes in subjectiveand objective evaluations.展开更多
The demand for the exploration of ocean resources is increasing exponentially.Underwater image data plays a significant role in many research areas.Despite this,the visual quality of underwater images is degraded beca...The demand for the exploration of ocean resources is increasing exponentially.Underwater image data plays a significant role in many research areas.Despite this,the visual quality of underwater images is degraded because of two main factors namely,backscattering and attenuation.Therefore,visual enhancement has become an essential process to recover the required data from the images.Many algorithms had been proposed in a decade for improving the quality of images.This paper aims to propose a single image enhancement technique without the use of any external datasets.For that,the degraded images are subjected to two main processes namely,color correction and image fusion.Initially,veiling light and transmission light is estimated tofind the color required for correction.Veiling light refers to unwanted light,whereas transmission light refers to the required light for color correction.These estimated outputs are applied in the scene recovery equation.The image obtained from color correction is subjected to a fusion process where the image is categorized into two versions and applied to white balance and contrast enhancement techniques.The resultants are divided into three weight maps namely,luminance,saliency,chromaticity and fused using the Laplacian pyramid.The results obtained are graphically compared with their input data using RGB Histogram plot.Finally,image quality is measured and tabulated using underwater image quality measures.展开更多
Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis.It fuses multiple images into a single image to improve the quality of images by reta...Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis.It fuses multiple images into a single image to improve the quality of images by retaining significant information and aiding diagnostic practitioners in diagnosing and treating many diseases.However,recent image fusion techniques have encountered several challenges,including fusion artifacts,algorithm complexity,and high computing costs.To solve these problems,this study presents a novel medical image fusion strategy by combining the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance.First,the method employs a cross-bilateral filter(CBF)that utilizes one image to determine the kernel and the other for filtering,and vice versa,by considering both geometric closeness and the gray-level similarities of neighboring pixels of the images without smoothing edges.The outputs of CBF are then subtracted from the original images to obtain detailed images.It further proposes to use edge-preserving processing that combines linear lowpass filtering with a non-linear technique that enables the selection of relevant regions in detailed images while maintaining structural properties.These regions are selected using morphologically processed linear filter residuals to identify the significant regions with high-amplitude edges and adequate size.The outputs of low-pass filtering are fused with meaningfully restored regions to reconstruct the original shape of the edges.In addition,weight computations are performed using these reconstructed images,and these weights are then fused with the original input images to produce a final fusion result by estimating the strength of horizontal and vertical details.Numerous standard quality evaluation metrics with complementary properties are used for comparison with existing,well-known algorithms objectively to validate the fusion results.Experimental results from the proposed research article exhibit superior performance compared to other competing techniques in the case of both qualitative and quantitative evaluation.In addition,the proposed method advocates less computational complexity and execution time while improving diagnostic computing accuracy.Nevertheless,due to the lower complexity of the fusion algorithm,the efficiency of fusion methods is high in practical applications.The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of providing detailed information,edge contour,and overall contrast.展开更多
Recently,there have been several uses for digital image processing.Image fusion has become a prominent application in the domain of imaging processing.To create one final image that provesmore informative and helpful ...Recently,there have been several uses for digital image processing.Image fusion has become a prominent application in the domain of imaging processing.To create one final image that provesmore informative and helpful compared to the original input images,image fusion merges two or more initial images of the same item.Image fusion aims to produce,enhance,and transform significant elements of the source images into combined images for the sake of human visual perception.Image fusion is commonly employed for feature extraction in smart robots,clinical imaging,audiovisual camera integration,manufacturing process monitoring,electronic circuit design,advanced device diagnostics,and intelligent assembly line robots,with image quality varying depending on application.The research paper presents various methods for merging images in spatial and frequency domains,including a blend of stable and curvelet transformations,everageMax-Min,weighted principal component analysis(PCA),HIS(Hue,Intensity,Saturation),wavelet transform,discrete cosine transform(DCT),dual-tree Complex Wavelet Transform(CWT),and multiple wavelet transform.Image fusion methods integrate data from several source images of an identical target,thereby enhancing information in an extremely efficient manner.More precisely,in imaging techniques,the depth of field constraint precludes images from focusing on every object,leading to the exclusion of certain characteristics.To tackle thess challanges,a very efficient multi-focus wavelet decomposition and recompositionmethod is proposed.The use of these wavelet decomposition and recomposition techniques enables this method to make use of existing optimized wavelet code and filter choice.The simulated outcomes provide evidence that the suggested approach initially extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images.This study enhances the performance of the eXtreme Gradient Boosting(XGBoost)algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection.The performance of images is improved by segmenting them employing the K-Means algorithm.The segmentation method aids in identifying specific regions of interest,using Particle Swarm Optimization(PCA)for trait selection and XGBoost for data classification.Extensive trials confirm the model’s exceptional visual performance,achieving an accuracy of up to 97.067%and providing good objective indicators.展开更多
To address the issue of deteriorated PCB image quality in the quality inspection process due to insufficient or uneven lighting, we proposed an image enhancement fusion algorithm based on different color spaces. First...To address the issue of deteriorated PCB image quality in the quality inspection process due to insufficient or uneven lighting, we proposed an image enhancement fusion algorithm based on different color spaces. Firstly, an improved MSRCR method was employed for brightness enhancement of the original image. Next, the color space of the original image was transformed from RGB to HSV, followed by processing the S-channel image using bilateral filtering and contrast stretching algorithms. The V-channel image was subjected to brightness enhancement using adaptive Gamma and CLAHE algorithms. Subsequently, the processed image was transformed back to the RGB color space from HSV. Finally, the images processed by the two algorithms were fused to create a new RGB image, and color restoration was performed on the fused image. Comparative experiments with other methods indicated that the contrast of the image was optimized, texture features were more abundantly preserved, brightness levels were significantly improved, and color distortion was prevented effectively, thus enhancing the quality of low-lit PCB images.展开更多
Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical ...Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical image fusion solutions to protect image details and significant information, a new multimodality medical image fusion method(NSST-PAPCNNLatLRR) is proposed in this paper. Firstly, the high and low-frequency sub-band coefficients are obtained by decomposing the source image using NSST. Then, the latent low-rank representation algorithm is used to process the low-frequency sub-band coefficients;An improved PAPCNN algorithm is also proposed for the fusion of high-frequency sub-band coefficients. The improved PAPCNN model was based on the automatic setting of the parameters, and the optimal method was configured for the time decay factor αe. The experimental results show that, in comparison with the five mainstream fusion algorithms, the new algorithm has significantly improved the visual effect over the comparison algorithm,enhanced the ability to characterize important information in images, and further improved the ability to protect the detailed information;the new algorithm has achieved at least four firsts in six objective indexes.展开更多
The diagnostic potential of brain positron emission tomography (PET) imaging is limited by low spatial resolution. For solving this problem we propose a technique for the fusion of PET and MRI images. This fusion is...The diagnostic potential of brain positron emission tomography (PET) imaging is limited by low spatial resolution. For solving this problem we propose a technique for the fusion of PET and MRI images. This fusion is a trade-off between the spectral information extracted from PET images and the spatial information extracted from high spatial resolution MRI. The proposed method can control this trade-off. To achieve this goal, it is necessary to build a multiscale fusion model, based on the retinal cell photoreceptors model. This paper introduces general prospects of this model, and its application in multispectral medical image fusion. Results showed that the proposed method preserves more spectral features with less spatial distortion. Comparing with hue-intensity-saturation (HIS), discrete wavelet transform (DWT), wavelet-based sharpening and wavelet-a trous transform methods, the best spectral and spatial quality is only achieved simultaneously with the proposed feature-based data fusion method. This method does not require resampling images, which is an advantage over the other methods, and can perform in any aspect ratio between the pixels of MRI and PET images.展开更多
Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared(IR)/visible(VS)images.Dissimilarities in various kind of features in these images are vital to preserve i...Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared(IR)/visible(VS)images.Dissimilarities in various kind of features in these images are vital to preserve in the single fused image.Hence,simultaneous preservation of both the aspects at the same time is a challenging task.However,most of the existing methods utilize the manual extraction of features;and manual complicated designing of fusion rules resulted in a blurry artifact in the fused image.Therefore,this study has proposed a hybrid algorithm for the integration of multi-features among two heterogeneous images.Firstly,fuzzification of two IR/VS images has been done by feeding it to the fuzzy sets to remove the uncertainty present in the background and object of interest of the image.Secondly,images have been learned by two parallel branches of the siamese convolutional neural network(CNN)to extract prominent features from the images as well as high-frequency information to produce focus maps containing source image information.Finally,the obtained focused maps which contained the detailed integrated information are directly mapped with the source image via pixelwise strategy to result in fused image.Different parameters have been used to evaluate the performance of the proposed image fusion by achieving 1.008 for mutual information(MI),0.841 for entropy(EG),0.655 for edge information(EI),0.652 for human perception(HP),and 0.980 for image structural similarity(ISS).Experimental results have shown that the proposed technique has attained the best qualitative and quantitative results using 78 publically available images in comparison to the existing discrete cosine transform(DCT),anisotropic diffusion&karhunen-loeve(ADKL),guided filter(GF),random walk(RW),principal component analysis(PCA),and convolutional neural network(CNN)methods.展开更多
A homological multi-information image fusion method was introduced for recognition of the gastric tumor pathological tissue images.The main purpose is that fewer procedures are used to provide more information and the...A homological multi-information image fusion method was introduced for recognition of the gastric tumor pathological tissue images.The main purpose is that fewer procedures are used to provide more information and the result images could be easier to be understood than any other methods.First,multi-scale wavelet transform was used to extract edge feature,and then watershed morphology was used to form multi-threshold grayscale contours.The research laid emphasis upon the homological tissue image fusion based on extended Bayesian algorithm,which fusion result images of linear weighted algorithm was used to compare with the ones of extended Bayesian algorithm.The final fusion images are shown in Fig 5.The final image evaluation was made by information entropy,information correlativity and statistics methods.It is indicated that this method has more advantages for clinical application.展开更多
Assuring medical images protection and robustness is a compulsory necessity nowadays.In this paper,a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Tr...Assuring medical images protection and robustness is a compulsory necessity nowadays.In this paper,a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Transform(DWT)with the energy compaction of the Discrete Wavelet Transform(DCT).The multi-level Encryption-based Hybrid Fusion Technique(EbhFT)aims to achieve great advances in terms of imperceptibility and security of medical images.A DWT disintegrated sub-band of a cover image is reformed simultaneously using the DCT transform.Afterwards,a 64-bit hex key is employed to encrypt the host image as well as participate in the second key creation process to encode the watermark.Lastly,a PN-sequence key is formed along with a supplementary key in the third layer of the EbHFT.Thus,the watermarked image is generated by enclosing both keys into DWT and DCT coefficients.The fusions ability of the proposed EbHFT technique makes the best use of the distinct privileges of using both DWT and DCT methods.In order to validate the proposed technique,a standard dataset of medical images is used.Simulation results show higher performance of the visual quality(i.e.,57.65)for the watermarked forms of all types of medical images.In addition,EbHFT robustness outperforms an existing scheme tested for the same dataset in terms of Normalized Correlation(NC).Finally,extra protection for digital images from against illegal replicating and unapproved tampering using the proposed technique.展开更多
In recent years,many medical image fusion methods had been exploited to derive useful information from multimodality medical image data,but,not an appropriate fusion algorithm for anatomical and functional medical ima...In recent years,many medical image fusion methods had been exploited to derive useful information from multimodality medical image data,but,not an appropriate fusion algorithm for anatomical and functional medical images.In this paper,the traditional method of wavelet fusion is improved and a new fusion algorithm of anatomical and functional medical images,in which high-frequency and low-frequency coefficients are studied respectively.When choosing high-frequency coefficients,the global gradient of each sub-image is calculated to realize adaptive fusion,so that the fused image can reserve the functional information;while choosing the low coefficients is based on the analysis of the neighborbood region energy,so that the fused image can reserve the anatomical image's edge and texture feature.Experimental results and the quality evaluation parameters show that the improved fusion algorithm can enhance the edge and texture feature and retain the function information and anatomical information effectively.展开更多
Medical image fusion is considered the best method for obtaining one image with rich details for efficient medical diagnosis and therapy.Deep learning provides a high performance for several medical image analysis app...Medical image fusion is considered the best method for obtaining one image with rich details for efficient medical diagnosis and therapy.Deep learning provides a high performance for several medical image analysis applications.This paper proposes a deep learning model for the medical image fusion process.This model depends on Convolutional Neural Network(CNN).The basic idea of the proposed model is to extract features from both CT and MR images.Then,an additional process is executed on the extracted features.After that,the fused feature map is reconstructed to obtain the resulting fused image.Finally,the quality of the resulting fused image is enhanced by various enhancement techniques such as Histogram Matching(HM),Histogram Equalization(HE),fuzzy technique,fuzzy type,and Contrast Limited Histogram Equalization(CLAHE).The performance of the proposed fusion-based CNN model is measured by various metrics of the fusion and enhancement quality.Different realistic datasets of different modalities and diseases are tested and implemented.Also,real datasets are tested in the simulation analysis.展开更多
The synthesis of visual information from multiple medical imaging inputs to a single fused image without any loss of detail and distortion is known as multimodal medical image fusion.It improves the quality of biomedi...The synthesis of visual information from multiple medical imaging inputs to a single fused image without any loss of detail and distortion is known as multimodal medical image fusion.It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disor-ders.This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion(AD)and non-subsampled contourlet transform(NSCT).First,the method employs anisotropic diffusion for decomposing input images to their base and detail layers to coarsely split two features of input images such as structural and textural information.The detail and base layers are further combined utilizing a sum-based fusion rule which maximizes noise filtering contrast level by effectively preserving most of the structural and textural details.NSCT is utilized to further decompose these images into their low and high-frequency coefficients.These coefficients are then combined utilizing the principal component analysis/Karhunen-Loeve(PCA/KL)based fusion rule independently by substantiating eigenfeature reinforcement in the fusion results.An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients.Finally,an inverse NSCT is applied to each coef-ficient to produce the final fusion result.Experimental results demonstrate an advantage of the proposed technique using a publicly accessible dataset and conducted comparative studies on three pairs of medical images from different modalities and health.Our approach offers better visual and robust performance with better objective measurements for research development since it excellently preserves significant salient features and precision without producing abnormal information in the case of qualitative and quantitative analysis.展开更多
Several image fusion approaches for CCD/SAR images are studied and the performance evaluation of these fusion approaches is completed in this paper. Firstly, the preprocessing of CCD/SAR images before fusion is fulfil...Several image fusion approaches for CCD/SAR images are studied and the performance evaluation of these fusion approaches is completed in this paper. Firstly, the preprocessing of CCD/SAR images before fusion is fulfilled. Then, the image fusion methods including linear superposition, nonlinear operator method and multiresolution methods, of which the multiresolution methods include Laplacian pyramid, ratio pyramid, contrast pyramid, gradient pyramid, morphological pyramid and discrete wavelet transform, are adopted to fuse two types of images. Lastly, the four performance measures, standard deviation, entropy, cross entropy and spatial frequency, are calculated to compare the fusion results by different fusion approaches in this paper. Experimental results show that contrast pyramid, morphology pyramid and discrete wavelet transformation in multiresolution approaches are more suitable for CCD/SAR image fusion than other ones proposed in this paper and the objective performance evaluation of CCD/SAR image fusion approaches are effective.展开更多
This paper presents a low intricate,profoundly energy effective MRI Images combination intended for remote visual sensor frameworks which leads to improved understanding and implementation of treatment;especially for ...This paper presents a low intricate,profoundly energy effective MRI Images combination intended for remote visual sensor frameworks which leads to improved understanding and implementation of treatment;especially for radiology.This is done by combining the original picture which leads to a significant reduction in the computation time and frequency.The proposed technique conquers the calculation and energy impediment of low power tools and is examined as far as picture quality and energy is concerned.Reenactments are performed utilizing MATLAB 2018a,to quantify the resultant vitality investment funds and the reproduction results show that the proposed calculation is very quick and devours just around 1%of vitality decomposition by the hybrid combination plans.Likewise,the effortlessness of our proposed strategy makes it increasingly suitable for continuous applications.展开更多
A new method based on resolution degradation model is proposed to improve both spatial and spectral quality of the synthetic images. Some ETM+ panchromatic and multispectral images are used to assess the new method. I...A new method based on resolution degradation model is proposed to improve both spatial and spectral quality of the synthetic images. Some ETM+ panchromatic and multispectral images are used to assess the new method. Its spatial and spectral effects are evaluated by qualitative and quantitative measures and the results are compared with those of IHS, PCA, Brovey, OWT(Orthogonal Wavelet Transform) and RWT(Redundant Wavelet Transform). The results show that the new method can keep almost the same spatial resolution as the panchromatic images, and the spectral effect of the new method is as good as those of wavelet-based methods.展开更多
Objective. To compare and match metabolic images of PET with anatomic images of CT and MRI. Methods. The CT or MRI images of the patients were obtained through a photo scanner, and then transferred to the remote works...Objective. To compare and match metabolic images of PET with anatomic images of CT and MRI. Methods. The CT or MRI images of the patients were obtained through a photo scanner, and then transferred to the remote workstation of PET scanner with a floppy disk. A fusion method was developed to match the 2- dimensional CT or MRI slices with the correlative slices of 3- dimensional volume PET images. Results. Twenty- nine metabolically changed foci were accurately localized in 21 epilepsy patients’ MRI images, while MRI alone had only 6 true positive findings. In 53 cancer or suspicious cancer patients, 53 positive lesions detected by PET were compared and matched with the corresponding lesions in CT or MRI images, in which 10 lesions were missed. On the other hand, 23 lesions detected from the patients’ CT or MRI images were negative or with low uptake in the PET images, and they were finally proved as benign. Conclusions. Comparing and matching metabolic images with anatomic images helped obtain a full understanding about the lesion and its peripheral structures. The fusion method was simple, practical and useful for localizing metabolically changed lesions.展开更多
In this paper,we propose a new image fusion algorithm based on two-dimensional Scale-Mixing Complex Wavelet Transform(2D-SMCWT).The fusion of the detail 2D-SMCWT cofficients is performed via a Bayesian Maximum a Poste...In this paper,we propose a new image fusion algorithm based on two-dimensional Scale-Mixing Complex Wavelet Transform(2D-SMCWT).The fusion of the detail 2D-SMCWT cofficients is performed via a Bayesian Maximum a Posteriori(MAP)approach by considering a trivariate statistical model for the local neighboring of 2D-SMCWT coefficients.For the approx imation coefficients,a new fusion rule based on the Principal Component Analysis(PCA)is applied.We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method.The obt ained results prove the superiority of the proposed method over the state of the art fusion methods in terms of visual quality and several commonly used metrics.Robustness of the proposed method is further tested against different types of noise.The plots of fusion met rics establish the accuracy of the proposed fusion method.展开更多
In this paper, an electrical resistance tomography(ERT) imaging method is used as a classifier, and then the Dempster-Shafer's evidence theory with fuzzy clustering is integrated to improve the ERT image quality. ...In this paper, an electrical resistance tomography(ERT) imaging method is used as a classifier, and then the Dempster-Shafer's evidence theory with fuzzy clustering is integrated to improve the ERT image quality. The fuzzy clustering is applied to determining the key mass function, and dealing with the uncertain, incomplete and inconsistent measured imaging data in ERT. The proposed method was applied to images with the same investigated object under eight typical current drive patterns. Experiments were performed on a group of simulations using COMSOL Multiphysics tool and measurements with a piece of porcine lung and a pair of porcine kidneys as test materials. Compared with any single drive pattern, the proposed method can provide images with a spatial resolution of about 10% higher, while the time resolution was almost the same.展开更多
文摘To address the issues of incomplete information,blurred details,loss of details,and insufficient contrast in infrared and visible image fusion,an image fusion algorithm based on a convolutional autoencoder is proposed.The region attention module is meant to extract the background feature map based on the distinct properties of the background feature map and the detail feature map.A multi-scale convolution attention module is suggested to enhance the communication of feature information.At the same time,the feature transformation module is introduced to learn more robust feature representations,aiming to preserve the integrity of image information.This study uses three available datasets from TNO,FLIR,and NIR to perform thorough quantitative and qualitative trials with five additional algorithms.The methods are assessed based on four indicators:information entropy(EN),standard deviation(SD),spatial frequency(SF),and average gradient(AG).Object detection experiments were done on the M3FD dataset to further verify the algorithm’s performance in comparison with five other algorithms.The algorithm’s accuracy was evaluated using the mean average precision at a threshold of 0.5(mAP@0.5)index.Comprehensive experimental findings show that CAEFusion performs well in subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
文摘A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase thevisual impression of fused images by improving the quality of infrared and visible light picture fusion. The networkcomprises an encoder module, fusion layer, decoder module, and edge improvementmodule. The encoder moduleutilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformerto achieve deep-level co-extraction of local and global features from the original picture. An edge enhancementmodule (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy isintroduced to enhance the adaptive representation of information in various regions of the source image, therebyenhancing the contrast of the fused image. The encoder and the EEM module extract features, which are thencombined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test thealgorithmproposed in this paper. The results of the experiments demonstrate that the network effectively preservesbackground and detail information in both infrared and visible images, yielding superior outcomes in subjectiveand objective evaluations.
文摘The demand for the exploration of ocean resources is increasing exponentially.Underwater image data plays a significant role in many research areas.Despite this,the visual quality of underwater images is degraded because of two main factors namely,backscattering and attenuation.Therefore,visual enhancement has become an essential process to recover the required data from the images.Many algorithms had been proposed in a decade for improving the quality of images.This paper aims to propose a single image enhancement technique without the use of any external datasets.For that,the degraded images are subjected to two main processes namely,color correction and image fusion.Initially,veiling light and transmission light is estimated tofind the color required for correction.Veiling light refers to unwanted light,whereas transmission light refers to the required light for color correction.These estimated outputs are applied in the scene recovery equation.The image obtained from color correction is subjected to a fusion process where the image is categorized into two versions and applied to white balance and contrast enhancement techniques.The resultants are divided into three weight maps namely,luminance,saliency,chromaticity and fused using the Laplacian pyramid.The results obtained are graphically compared with their input data using RGB Histogram plot.Finally,image quality is measured and tabulated using underwater image quality measures.
文摘Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis.It fuses multiple images into a single image to improve the quality of images by retaining significant information and aiding diagnostic practitioners in diagnosing and treating many diseases.However,recent image fusion techniques have encountered several challenges,including fusion artifacts,algorithm complexity,and high computing costs.To solve these problems,this study presents a novel medical image fusion strategy by combining the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance.First,the method employs a cross-bilateral filter(CBF)that utilizes one image to determine the kernel and the other for filtering,and vice versa,by considering both geometric closeness and the gray-level similarities of neighboring pixels of the images without smoothing edges.The outputs of CBF are then subtracted from the original images to obtain detailed images.It further proposes to use edge-preserving processing that combines linear lowpass filtering with a non-linear technique that enables the selection of relevant regions in detailed images while maintaining structural properties.These regions are selected using morphologically processed linear filter residuals to identify the significant regions with high-amplitude edges and adequate size.The outputs of low-pass filtering are fused with meaningfully restored regions to reconstruct the original shape of the edges.In addition,weight computations are performed using these reconstructed images,and these weights are then fused with the original input images to produce a final fusion result by estimating the strength of horizontal and vertical details.Numerous standard quality evaluation metrics with complementary properties are used for comparison with existing,well-known algorithms objectively to validate the fusion results.Experimental results from the proposed research article exhibit superior performance compared to other competing techniques in the case of both qualitative and quantitative evaluation.In addition,the proposed method advocates less computational complexity and execution time while improving diagnostic computing accuracy.Nevertheless,due to the lower complexity of the fusion algorithm,the efficiency of fusion methods is high in practical applications.The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of providing detailed information,edge contour,and overall contrast.
基金Princess Nourah bint Abdulrahman University and Researchers Supporting Project Number(PNURSP2024R346)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Recently,there have been several uses for digital image processing.Image fusion has become a prominent application in the domain of imaging processing.To create one final image that provesmore informative and helpful compared to the original input images,image fusion merges two or more initial images of the same item.Image fusion aims to produce,enhance,and transform significant elements of the source images into combined images for the sake of human visual perception.Image fusion is commonly employed for feature extraction in smart robots,clinical imaging,audiovisual camera integration,manufacturing process monitoring,electronic circuit design,advanced device diagnostics,and intelligent assembly line robots,with image quality varying depending on application.The research paper presents various methods for merging images in spatial and frequency domains,including a blend of stable and curvelet transformations,everageMax-Min,weighted principal component analysis(PCA),HIS(Hue,Intensity,Saturation),wavelet transform,discrete cosine transform(DCT),dual-tree Complex Wavelet Transform(CWT),and multiple wavelet transform.Image fusion methods integrate data from several source images of an identical target,thereby enhancing information in an extremely efficient manner.More precisely,in imaging techniques,the depth of field constraint precludes images from focusing on every object,leading to the exclusion of certain characteristics.To tackle thess challanges,a very efficient multi-focus wavelet decomposition and recompositionmethod is proposed.The use of these wavelet decomposition and recomposition techniques enables this method to make use of existing optimized wavelet code and filter choice.The simulated outcomes provide evidence that the suggested approach initially extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images.This study enhances the performance of the eXtreme Gradient Boosting(XGBoost)algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection.The performance of images is improved by segmenting them employing the K-Means algorithm.The segmentation method aids in identifying specific regions of interest,using Particle Swarm Optimization(PCA)for trait selection and XGBoost for data classification.Extensive trials confirm the model’s exceptional visual performance,achieving an accuracy of up to 97.067%and providing good objective indicators.
文摘To address the issue of deteriorated PCB image quality in the quality inspection process due to insufficient or uneven lighting, we proposed an image enhancement fusion algorithm based on different color spaces. Firstly, an improved MSRCR method was employed for brightness enhancement of the original image. Next, the color space of the original image was transformed from RGB to HSV, followed by processing the S-channel image using bilateral filtering and contrast stretching algorithms. The V-channel image was subjected to brightness enhancement using adaptive Gamma and CLAHE algorithms. Subsequently, the processed image was transformed back to the RGB color space from HSV. Finally, the images processed by the two algorithms were fused to create a new RGB image, and color restoration was performed on the fused image. Comparative experiments with other methods indicated that the contrast of the image was optimized, texture features were more abundantly preserved, brightness levels were significantly improved, and color distortion was prevented effectively, thus enhancing the quality of low-lit PCB images.
基金funded by the National Natural Science Foundation of China,grant number 61302188.
文摘Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical image fusion solutions to protect image details and significant information, a new multimodality medical image fusion method(NSST-PAPCNNLatLRR) is proposed in this paper. Firstly, the high and low-frequency sub-band coefficients are obtained by decomposing the source image using NSST. Then, the latent low-rank representation algorithm is used to process the low-frequency sub-band coefficients;An improved PAPCNN algorithm is also proposed for the fusion of high-frequency sub-band coefficients. The improved PAPCNN model was based on the automatic setting of the parameters, and the optimal method was configured for the time decay factor αe. The experimental results show that, in comparison with the five mainstream fusion algorithms, the new algorithm has significantly improved the visual effect over the comparison algorithm,enhanced the ability to characterize important information in images, and further improved the ability to protect the detailed information;the new algorithm has achieved at least four firsts in six objective indexes.
基金Project (No. TMU 85-05-33) supported in part by the Iran Telecommunication Research Center (ITRC)
文摘The diagnostic potential of brain positron emission tomography (PET) imaging is limited by low spatial resolution. For solving this problem we propose a technique for the fusion of PET and MRI images. This fusion is a trade-off between the spectral information extracted from PET images and the spatial information extracted from high spatial resolution MRI. The proposed method can control this trade-off. To achieve this goal, it is necessary to build a multiscale fusion model, based on the retinal cell photoreceptors model. This paper introduces general prospects of this model, and its application in multispectral medical image fusion. Results showed that the proposed method preserves more spectral features with less spatial distortion. Comparing with hue-intensity-saturation (HIS), discrete wavelet transform (DWT), wavelet-based sharpening and wavelet-a trous transform methods, the best spectral and spatial quality is only achieved simultaneously with the proposed feature-based data fusion method. This method does not require resampling images, which is an advantage over the other methods, and can perform in any aspect ratio between the pixels of MRI and PET images.
文摘Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared(IR)/visible(VS)images.Dissimilarities in various kind of features in these images are vital to preserve in the single fused image.Hence,simultaneous preservation of both the aspects at the same time is a challenging task.However,most of the existing methods utilize the manual extraction of features;and manual complicated designing of fusion rules resulted in a blurry artifact in the fused image.Therefore,this study has proposed a hybrid algorithm for the integration of multi-features among two heterogeneous images.Firstly,fuzzification of two IR/VS images has been done by feeding it to the fuzzy sets to remove the uncertainty present in the background and object of interest of the image.Secondly,images have been learned by two parallel branches of the siamese convolutional neural network(CNN)to extract prominent features from the images as well as high-frequency information to produce focus maps containing source image information.Finally,the obtained focused maps which contained the detailed integrated information are directly mapped with the source image via pixelwise strategy to result in fused image.Different parameters have been used to evaluate the performance of the proposed image fusion by achieving 1.008 for mutual information(MI),0.841 for entropy(EG),0.655 for edge information(EI),0.652 for human perception(HP),and 0.980 for image structural similarity(ISS).Experimental results have shown that the proposed technique has attained the best qualitative and quantitative results using 78 publically available images in comparison to the existing discrete cosine transform(DCT),anisotropic diffusion&karhunen-loeve(ADKL),guided filter(GF),random walk(RW),principal component analysis(PCA),and convolutional neural network(CNN)methods.
基金Supported by the National Science Foundation of China(No.30370403 )
文摘A homological multi-information image fusion method was introduced for recognition of the gastric tumor pathological tissue images.The main purpose is that fewer procedures are used to provide more information and the result images could be easier to be understood than any other methods.First,multi-scale wavelet transform was used to extract edge feature,and then watershed morphology was used to form multi-threshold grayscale contours.The research laid emphasis upon the homological tissue image fusion based on extended Bayesian algorithm,which fusion result images of linear weighted algorithm was used to compare with the ones of extended Bayesian algorithm.The final fusion images are shown in Fig 5.The final image evaluation was made by information entropy,information correlativity and statistics methods.It is indicated that this method has more advantages for clinical application.
文摘Assuring medical images protection and robustness is a compulsory necessity nowadays.In this paper,a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Transform(DWT)with the energy compaction of the Discrete Wavelet Transform(DCT).The multi-level Encryption-based Hybrid Fusion Technique(EbhFT)aims to achieve great advances in terms of imperceptibility and security of medical images.A DWT disintegrated sub-band of a cover image is reformed simultaneously using the DCT transform.Afterwards,a 64-bit hex key is employed to encrypt the host image as well as participate in the second key creation process to encode the watermark.Lastly,a PN-sequence key is formed along with a supplementary key in the third layer of the EbHFT.Thus,the watermarked image is generated by enclosing both keys into DWT and DCT coefficients.The fusions ability of the proposed EbHFT technique makes the best use of the distinct privileges of using both DWT and DCT methods.In order to validate the proposed technique,a standard dataset of medical images is used.Simulation results show higher performance of the visual quality(i.e.,57.65)for the watermarked forms of all types of medical images.In addition,EbHFT robustness outperforms an existing scheme tested for the same dataset in terms of Normalized Correlation(NC).Finally,extra protection for digital images from against illegal replicating and unapproved tampering using the proposed technique.
基金The National High Technology Research and Development Program of China(‘863’Program)grant number:2007AA02Z4A9+1 种基金National Natural Science Foundation of Chinagrant number:30671997
文摘In recent years,many medical image fusion methods had been exploited to derive useful information from multimodality medical image data,but,not an appropriate fusion algorithm for anatomical and functional medical images.In this paper,the traditional method of wavelet fusion is improved and a new fusion algorithm of anatomical and functional medical images,in which high-frequency and low-frequency coefficients are studied respectively.When choosing high-frequency coefficients,the global gradient of each sub-image is calculated to realize adaptive fusion,so that the fused image can reserve the functional information;while choosing the low coefficients is based on the analysis of the neighborbood region energy,so that the fused image can reserve the anatomical image's edge and texture feature.Experimental results and the quality evaluation parameters show that the improved fusion algorithm can enhance the edge and texture feature and retain the function information and anatomical information effectively.
文摘Medical image fusion is considered the best method for obtaining one image with rich details for efficient medical diagnosis and therapy.Deep learning provides a high performance for several medical image analysis applications.This paper proposes a deep learning model for the medical image fusion process.This model depends on Convolutional Neural Network(CNN).The basic idea of the proposed model is to extract features from both CT and MR images.Then,an additional process is executed on the extracted features.After that,the fused feature map is reconstructed to obtain the resulting fused image.Finally,the quality of the resulting fused image is enhanced by various enhancement techniques such as Histogram Matching(HM),Histogram Equalization(HE),fuzzy technique,fuzzy type,and Contrast Limited Histogram Equalization(CLAHE).The performance of the proposed fusion-based CNN model is measured by various metrics of the fusion and enhancement quality.Different realistic datasets of different modalities and diseases are tested and implemented.Also,real datasets are tested in the simulation analysis.
文摘The synthesis of visual information from multiple medical imaging inputs to a single fused image without any loss of detail and distortion is known as multimodal medical image fusion.It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disor-ders.This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion(AD)and non-subsampled contourlet transform(NSCT).First,the method employs anisotropic diffusion for decomposing input images to their base and detail layers to coarsely split two features of input images such as structural and textural information.The detail and base layers are further combined utilizing a sum-based fusion rule which maximizes noise filtering contrast level by effectively preserving most of the structural and textural details.NSCT is utilized to further decompose these images into their low and high-frequency coefficients.These coefficients are then combined utilizing the principal component analysis/Karhunen-Loeve(PCA/KL)based fusion rule independently by substantiating eigenfeature reinforcement in the fusion results.An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients.Finally,an inverse NSCT is applied to each coef-ficient to produce the final fusion result.Experimental results demonstrate an advantage of the proposed technique using a publicly accessible dataset and conducted comparative studies on three pairs of medical images from different modalities and health.Our approach offers better visual and robust performance with better objective measurements for research development since it excellently preserves significant salient features and precision without producing abnormal information in the case of qualitative and quantitative analysis.
基金Under the auspices of Astronautical Innovation Fund of China.
文摘Several image fusion approaches for CCD/SAR images are studied and the performance evaluation of these fusion approaches is completed in this paper. Firstly, the preprocessing of CCD/SAR images before fusion is fulfilled. Then, the image fusion methods including linear superposition, nonlinear operator method and multiresolution methods, of which the multiresolution methods include Laplacian pyramid, ratio pyramid, contrast pyramid, gradient pyramid, morphological pyramid and discrete wavelet transform, are adopted to fuse two types of images. Lastly, the four performance measures, standard deviation, entropy, cross entropy and spatial frequency, are calculated to compare the fusion results by different fusion approaches in this paper. Experimental results show that contrast pyramid, morphology pyramid and discrete wavelet transformation in multiresolution approaches are more suitable for CCD/SAR image fusion than other ones proposed in this paper and the objective performance evaluation of CCD/SAR image fusion approaches are effective.
文摘This paper presents a low intricate,profoundly energy effective MRI Images combination intended for remote visual sensor frameworks which leads to improved understanding and implementation of treatment;especially for radiology.This is done by combining the original picture which leads to a significant reduction in the computation time and frequency.The proposed technique conquers the calculation and energy impediment of low power tools and is examined as far as picture quality and energy is concerned.Reenactments are performed utilizing MATLAB 2018a,to quantify the resultant vitality investment funds and the reproduction results show that the proposed calculation is very quick and devours just around 1%of vitality decomposition by the hybrid combination plans.Likewise,the effortlessness of our proposed strategy makes it increasingly suitable for continuous applications.
文摘A new method based on resolution degradation model is proposed to improve both spatial and spectral quality of the synthetic images. Some ETM+ panchromatic and multispectral images are used to assess the new method. Its spatial and spectral effects are evaluated by qualitative and quantitative measures and the results are compared with those of IHS, PCA, Brovey, OWT(Orthogonal Wavelet Transform) and RWT(Redundant Wavelet Transform). The results show that the new method can keep almost the same spatial resolution as the panchromatic images, and the spectral effect of the new method is as good as those of wavelet-based methods.
文摘Objective. To compare and match metabolic images of PET with anatomic images of CT and MRI. Methods. The CT or MRI images of the patients were obtained through a photo scanner, and then transferred to the remote workstation of PET scanner with a floppy disk. A fusion method was developed to match the 2- dimensional CT or MRI slices with the correlative slices of 3- dimensional volume PET images. Results. Twenty- nine metabolically changed foci were accurately localized in 21 epilepsy patients’ MRI images, while MRI alone had only 6 true positive findings. In 53 cancer or suspicious cancer patients, 53 positive lesions detected by PET were compared and matched with the corresponding lesions in CT or MRI images, in which 10 lesions were missed. On the other hand, 23 lesions detected from the patients’ CT or MRI images were negative or with low uptake in the PET images, and they were finally proved as benign. Conclusions. Comparing and matching metabolic images with anatomic images helped obtain a full understanding about the lesion and its peripheral structures. The fusion method was simple, practical and useful for localizing metabolically changed lesions.
文摘In this paper,we propose a new image fusion algorithm based on two-dimensional Scale-Mixing Complex Wavelet Transform(2D-SMCWT).The fusion of the detail 2D-SMCWT cofficients is performed via a Bayesian Maximum a Posteriori(MAP)approach by considering a trivariate statistical model for the local neighboring of 2D-SMCWT coefficients.For the approx imation coefficients,a new fusion rule based on the Principal Component Analysis(PCA)is applied.We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method.The obt ained results prove the superiority of the proposed method over the state of the art fusion methods in terms of visual quality and several commonly used metrics.Robustness of the proposed method is further tested against different types of noise.The plots of fusion met rics establish the accuracy of the proposed fusion method.
基金Supported by National Natural Science Foundation of China(No.61774014 and No.60772080)
文摘In this paper, an electrical resistance tomography(ERT) imaging method is used as a classifier, and then the Dempster-Shafer's evidence theory with fuzzy clustering is integrated to improve the ERT image quality. The fuzzy clustering is applied to determining the key mass function, and dealing with the uncertain, incomplete and inconsistent measured imaging data in ERT. The proposed method was applied to images with the same investigated object under eight typical current drive patterns. Experiments were performed on a group of simulations using COMSOL Multiphysics tool and measurements with a piece of porcine lung and a pair of porcine kidneys as test materials. Compared with any single drive pattern, the proposed method can provide images with a spatial resolution of about 10% higher, while the time resolution was almost the same.