Aim To fuse the fluorescence image and transmission image of a cell into a single image containing more information than either of the individual images. Methods Image fusion technology was applied to biological cell image processing. It could match the images and improve the confidence and spatial resolution of the images. Using two algorithms, a double-threshold algorithm and a denoising algorithm based on wavelet transform, the fluorescence image and transmission image of a cell were merged into a composite image. Results and Conclusion The position of fluorescence and the structure of the cell can be displayed in the composite image. The signal-to-noise ratio of the resultant image is improved to a large extent. The algorithms are not only useful for investigating fluorescence and transmission images, but also suitable for observing two or more fluorescent label probes in a single cell.
Infrared and visible light images can be obtained simultaneously by building a fluorescence imaging system, which includes fluorescence excitation, image acquisition, mechanical, and image transmission and processing sections. The system uses the 2-CCD (charge-coupled device) camera (AD-080CL) from JAI. A fusion algorithm for visible light and near-infrared images was designed for the fluorescence imaging system using a wavelet transform image fusion algorithm. To enhance the fluorescent portion of the fused image, the luminance value of the green component of the color image was adjusted. Using the Microsoft Foundation Classes (MFC) application architecture, the supporting software was built in the VS2010 environment.
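The wavelet-based fusion step can be illustrated with a minimal single-level Haar decomposition. This is a sketch under simplifying assumptions (grayscale float images with even dimensions, an averaging rule for the approximation band and a max-absolute rule for the detail bands), not the system's actual wavelet or camera pipeline; the fused result could then have its green channel scaled to highlight the fluorescent portion, as the abstract describes.

```python
import numpy as np

def haar2d(img):
    # single-level 2D Haar decomposition (image sides must be even)
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # exact inverse of haar2d
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def fuse(vis, nir):
    # average the approximation band, keep the larger-magnitude detail
    # coefficients -- a common (assumed) rule, not the paper's exact one
    v_ll, v_lh, v_hl, v_hh = haar2d(vis)
    n_ll, n_lh, n_hl, n_hh = haar2d(nir)
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    return ihaar2d((v_ll + n_ll) / 2.0,
                   pick(v_lh, n_lh), pick(v_hl, n_hl), pick(v_hh, n_hh))
```

Fusing an image with itself reproduces it exactly, which is a quick sanity check that the transform pair is lossless.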
Objective: The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different modality imaging can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. Data Sources: The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Study Selection: Original articles, clinical practice, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Results: Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate localization, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information on tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Conclusion: Under the precision medicine plan, personalized treatment of tumors is a distinct possibility.
We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.
Objective: The aim of our study was to compare the value of computed tomography (CT) and 99mTc-methylene diphosphonate (MDP) SPECT (single photon emission computed tomography)/CT fusion imaging in determining the extent of mandibular invasion by malignant tumors of the oral cavity. Methods: This study had local ethics committee approval, and all patients gave written informed consent. Fifty-three patients with mandibular invasion by malignant tumors of the oral cavity underwent CT and SPECT/CT. The patients were divided into two groups: group A (invasion-periphery type) and group B (invasion-center type). Two radiologists assessed the CT images and two nuclear medicine physicians separately assessed the SPECT/CT images, in consensus and without knowledge of the results of the other imaging tests. The extent of bone involvement suggested by an imaging modality was compared with pathological findings in the surgical specimen. Results: With pathological findings as the standard of reference, in group A the extent of mandibular invasion measured by SPECT/CT was 1.02 ± 0.20 cm larger than that found on pathological examination, while the extent measured by CT was 1.42 ± 0.35 cm smaller; the differences among the three methods were significant (P < 0.01). In group B the extent of mandibular invasion measured by SPECT/CT was 1.3 ± 0.39 cm larger than that found on pathological examination, while the extent measured by CT was 2.55 ± 1.44 cm smaller; the differences among the three methods were again significant (P < 0.01).
The extent of mandibular invasion shown by SPECT/CT was the extent the surgeon must excise to obtain clear margins. Conclusion: SPECT/CT fusion imaging has significant clinical value in determining the extent of mandibular invasion by malignant tumors of the oral cavity.
To address the issues of incomplete information, blurred details, loss of detail, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder (CAEFusion) is proposed. A region attention module extracts the background feature map based on the distinct properties of the background and detail feature maps. A multi-scale convolution attention module is proposed to enhance the communication of feature information. At the same time, a feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. This study uses three publicly available datasets, TNO, FLIR, and NIR, to perform thorough quantitative and qualitative trials against five other algorithms. The methods are assessed with four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were conducted on the M3FD dataset to further verify the algorithm's performance against the same five algorithms, with accuracy evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5). Comprehensive experimental findings show that CAEFusion performs well in subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
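The four assessment indicators named above have widely used definitions that can be sketched directly in NumPy. The exact normalizations used in the paper are not stated, so the forms below are common textbook versions; SD is simply `np.std`.

```python
import numpy as np

def entropy(img):
    # information entropy (EN) of an 8-bit grayscale image
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    # SF: combined row-wise and column-wise gradient energy
    g = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(g, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(g, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    # AG: mean magnitude of local horizontal/vertical gradients
    g = img.astype(float)
    gx = np.diff(g, axis=1)[:-1, :]
    gy = np.diff(g, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
```

All three metrics vanish on a constant image, which matches the intuition that they reward information content and sharpness.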
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Objective: The aim of the study was to evaluate the clinical value of 99mTc-methylene diphosphonic acid (MDP) SPECT/CT fusion imaging and CT scanning in the diagnosis of mandibular infiltration by gingival carcinoma. Methods: 18 cases of gingival carcinoma were examined for mandibular infiltration by 99mTc-MDP SPECT/CT fusion imaging and CT, and their scanning results were compared with pathology findings. Results: Eleven of 13 cases with well-differentiated squamous cell carcinoma showed positive images; one of the 11 was a false positive according to pathology findings, and 10 cases exhibited infiltrated mandibles. Five cases with moderately or poorly differentiated squamous cell carcinoma showed positive images, and pathology showed that carcinoma cells had infiltrated the cavum ossis of the mandible. Five of the 18 cases were positive on CT. Conclusion: 99mTc-MDP SPECT/CT fusion imaging is a useful method in the diagnosis of mandibular infiltration by gingival carcinoma.
Mangroves are indispensable to coastlines, maintaining biodiversity, and mitigating climate change; improving the accuracy of mangrove information identification is therefore crucial for their ecological protection. Considering the limited morphological information of synthetic aperture radar (SAR) images, which suffer heavy noise interference, and the susceptibility of optical images to weather and lighting conditions, this paper proposes a pixel-level weighted fusion method for SAR and optical images. Image fusion enhanced the target features and made mangrove monitoring more comprehensive and accurate. To address the high similarity between mangrove forests and other forests, this paper builds on the U-Net convolutional neural network and adds an attention mechanism in the feature extraction stage so that the model pays more attention to the mangrove vegetation areas in the image. To accelerate convergence and normalize the input, batch normalization (BN) and Dropout layers are added after each convolutional layer. Since mangroves are a minority class in the image, an improved cross-entropy loss function is introduced to improve the model's ability to recognize mangroves. The AttU-Net model for mangrove recognition in high-similarity environments is thus constructed from the fused images. Comparison experiments show that the overall accuracy of the improved U-Net model trained on the fused images is significantly improved. Based on the fused images, the recognition results of the proposed AttU-Net model are compared with its benchmark model, U-Net, and with the Dense-Net, Res-Net, and Seg-Net methods. The AttU-Net model captured mangroves' complex structures and textural features more effectively. The average OA, F1-score, and Kappa coefficient in the four tested regions were 94.406%, 90.006%, and 84.045%, significantly higher than those of the other methods. This method can provide technical support for the monitoring and protection of mangrove ecosystems.
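A class-weighted cross-entropy of the kind described for the minority mangrove class can be sketched as follows. The paper's actual "improved" loss is not specified, so the per-pixel binary formulation and the `pos_weight` value are assumptions for illustration only.

```python
import numpy as np

def weighted_bce(y_true, p_pred, pos_weight=10.0, eps=1e-7):
    # binary cross-entropy with a larger weight on the rare positive
    # (mangrove) class; y_true is 0/1, p_pred holds probabilities
    p = np.clip(p_pred, eps, 1 - eps)
    loss = -(pos_weight * y_true * np.log(p)
             + (1 - y_true) * np.log(1 - p))
    return loss.mean()
```

With a weight greater than one, under-predicting a mangrove pixel is penalized more heavily than under-predicting the background, which pushes the model toward the minority class.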
Multimodal medical image fusion has attained immense popularity in recent years as a robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique, enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weights are computed from these reconstructed images and fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to other competing techniques in both qualitative and quantitative evaluation. In addition, the proposed method requires less computational complexity and execution time while improving diagnostic accuracy; owing to the lower complexity of the fusion algorithm, its efficiency in practical applications is high. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of providing detailed information, edge contours, and overall contrast.
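The cross-bilateral filtering step described above, where the range (gray-level similarity) weights come from the *other* image, can be sketched in NumPy. The radius and sigma values are illustrative assumptions; the detail layer the method uses is then `target - cross_bilateral(target, other_image)`.

```python
import numpy as np

def cross_bilateral(target, guide, radius=2, sigma_s=2.0, sigma_r=25.0):
    # filters `target` while the range weights are computed from `guide`,
    # so edges present in the guide are preserved in the filtered target
    tf = target.astype(float)
    gf = guide.astype(float)
    h, w = tf.shape
    t = np.pad(tf, radius, mode='reflect')
    g = np.pad(gf, radius, mode='reflect')
    out = np.zeros_like(tf)
    norm = np.zeros_like(tf)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # geometric-closeness weight
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            sg = g[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
            st = t[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
            # gray-level similarity weight taken from the guide image
            wgt = spatial * np.exp(-((sg - gf) ** 2) / (2 * sigma_r ** 2))
            out += wgt * st
            norm += wgt
    return out / norm
```

On a constant target the filter returns the constant unchanged regardless of the guide, since it is a normalized weighted average of the target's own values.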
Recently, there have been many uses for digital image processing, and image fusion has become a prominent application in the imaging-processing domain. To create one final image that is more informative and helpful than the original inputs, image fusion merges two or more initial images of the same object. Image fusion aims to produce, enhance, and transform significant elements of the source images into combined images for human visual perception. Image fusion is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly-line robots, with image quality varying depending on the application. The research paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), HIS (hue, intensity, saturation), wavelet transform, discrete cosine transform (DCT), dual-tree complex wavelet transform (CWT), and multiple wavelet transform. Image fusion methods integrate data from several source images of an identical target, thereby enhancing information in an extremely efficient manner. More precisely, in imaging techniques the depth-of-field constraint prevents images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed. The use of these wavelet decomposition and recomposition techniques enables the method to make use of existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach first extracts particular characteristics from the images in order to accurately reflect the level of clarity portrayed in the originals. This study also enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. Image performance is improved by segmenting the images with the K-Means algorithm; the segmentation aids in identifying specific regions of interest, with Particle Swarm Optimization (PSO) used for feature selection and XGBoost for classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% and providing good objective indicators.
Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical image fusion solutions to protect image details and significant information, a new multimodality medical image fusion method (NSST-PAPCNN-LatLRR) is proposed in this paper. First, the high- and low-frequency sub-band coefficients are obtained by decomposing the source image using NSST. Then, the latent low-rank representation algorithm is used to process the low-frequency sub-band coefficients, and an improved PAPCNN algorithm is proposed for the fusion of the high-frequency sub-band coefficients. The improved PAPCNN model is based on automatic parameter setting, with an optimal configuration method for the time decay factor αe. The experimental results show that, in comparison with five mainstream fusion algorithms, the new algorithm significantly improves the visual effect, enhances the ability to characterize important information in images, and further improves the ability to protect detailed information; it achieves at least four first places across six objective indexes.
The current mainstream fusion approach for infrared polarization images, the Multiscale Geometry Analysis method, focuses on only a certain characteristic of image representation, while the spatial-domain Principal Component Analysis (PCA) method has the shortcoming of losing small targets. This paper therefore presents a new fusion method for infrared polarization images based on the combination of the Nonsubsampled Shearlet Transform (NSST) and an improved PCA. The method makes full use of the effectiveness of NSST in expressing image details and the ability of PCA to highlight the main features of images; combining the two integrates their complementary strengths to fully retain target features and image details. First, the intensity and polarization images are decomposed into low-frequency and high-frequency components in different directions by NSST. Second, the low-frequency components are fused with the improved PCA, while the high-frequency components are fused by a joint decision rule using local energy and local variance. Finally, the fused image is reconstructed with the inverse NSST to obtain the final fused infrared polarization image. The experimental results show that the proposed method outperforms other methods in terms of detail preservation and visual effect.
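The classical PCA fusion rule used for the low-frequency components can be sketched as follows: the fusion weights are taken from the dominant eigenvector of the 2x2 covariance matrix of the two (flattened) components. The paper's "improved" PCA is not detailed, so this shows only the standard baseline.

```python
import numpy as np

def pca_fusion_weights(img1, img2):
    # weights from the dominant eigenvector of the 2x2 covariance
    # matrix of the two flattened low-frequency components
    data = np.stack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(data)                    # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])               # dominant eigenvector
    w = v / v.sum()                       # normalize weights to sum to 1
    return w[0], w[1]

def pca_fuse(img1, img2):
    w1, w2 = pca_fusion_weights(img1, img2)
    return w1 * img1 + w2 * img2
```

The image with higher variance (more information) receives the larger weight: for `img2 = 2*img1 + 1` the covariance matrix is singular with dominant direction (1, 2), giving weights 1/3 and 2/3.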
This study proposes a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration, as well as maintaining the appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary sizes. On the other hand, the multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as presenting optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion compared to the state-of-the-art unified image fusion algorithms and task-specific alternatives. Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
A new method for image fusion based on the Contourlet transform and cycle spinning is proposed. The Contourlet transform is a flexible multiresolution, local, and directional image expansion that also provides a sparse representation for two-dimensional piecewise-smooth signals resembling images. Due to the lack of translation invariance in the Contourlet transform, the conventional image fusion algorithm based on it introduces many artifacts. According to the theory of cycle spinning as applied to image denoising, an invariant transform can efficiently reduce these artifacts through a series of processing steps, so the technique of cycle spinning is introduced to develop a translation-invariant Contourlet fusion algorithm. This method can effectively eliminate the Gibbs-like phenomenon, extract the characteristics of the original images, and preserve more important information. Experimental results show the simplicity and effectiveness of the method and its advantages over conventional approaches.
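The cycle-spinning idea is transform-agnostic: fuse circularly shifted copies of the inputs with any shift-variant fusion routine, unshift the results, and average. Below is a sketch with a generic `fuse_fn` standing in for the Contourlet fusion step (an assumption; the actual method operates on Contourlet coefficients).

```python
import numpy as np

def cycle_spin(img_a, img_b, fuse_fn, max_shift=4):
    # average the fusion result over circular shifts to compensate for
    # the transform's lack of translation invariance
    acc = np.zeros_like(img_a, dtype=float)
    n = 0
    for dy in range(max_shift):
        for dx in range(max_shift):
            sa = np.roll(np.roll(img_a, dy, axis=0), dx, axis=1)
            sb = np.roll(np.roll(img_b, dy, axis=0), dx, axis=1)
            fused = fuse_fn(sa, sb)
            # undo the shift before accumulating
            acc += np.roll(np.roll(fused, -dy, axis=0), -dx, axis=1)
            n += 1
    return acc / n
```

For a shift-equivariant `fuse_fn` (such as plain averaging) cycle spinning changes nothing; the benefit appears only for shift-variant transforms, where the averaging suppresses position-dependent artifacts.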
Image fusion aims to integrate complementary information in source images to synthesize a fused image comprehensively characterizing the imaging scene. However, existing image fusion algorithms are only applicable to strictly aligned source images and cause severe artifacts in the fusion results when the input images have slight shifts or deformations. In addition, the fusion results typically only have good visual effect but neglect the semantic requirements of high-level vision tasks. This study incorporates image registration, image fusion, and the semantic requirements of high-level vision tasks into a single framework and proposes a novel image registration and fusion method, named SuperFusion. Specifically, we design a registration network to estimate bidirectional deformation fields that rectify geometric distortions of the input images under the supervision of both photometric and end-point constraints. The registration and fusion are combined in a symmetric scheme, in which mutual promotion is achieved by optimizing the naive fusion loss and is further enhanced by the mono-modal consistency constraint on the symmetric fusion outputs. In addition, the image fusion network is equipped with a global spatial attention mechanism to achieve adaptive feature integration. Moreover, a semantic constraint based on a pre-trained segmentation model and the Lovasz-Softmax loss is deployed to guide the fusion network to focus on the semantic requirements of high-level vision tasks.
Extensive experiments on image registration, image fusion, and semantic segmentation tasks demonstrate the superiority of our SuperFusion compared to state-of-the-art alternatives. The source code and pre-trained model are publicly available at https://github.com/Linfeng-Tang/SuperFusion.
In the process of in situ leaching of uranium, the microstructure controls and influences the flow distribution, percolation characteristics, and reaction mechanism of lixivium in the pores of reservoir rocks, and it directly affects the leaching of useful components. In this study, the pore throats, pore size distribution, and mineral composition of low-permeability uranium-bearing sandstone were quantitatively analyzed by high-pressure mercury injection, nuclear magnetic resonance, X-ray diffraction, and wavelength-dispersive X-ray fluorescence. The distribution characteristics of pores and minerals in the samples were qualitatively analyzed using energy-dispersive scanning electron microscopy and multi-resolution CT images. Image registration with the landmarks algorithm provided by FEI Avizo was used to accurately match the CT images of different resolutions. A multi-scale, multi-mineral digital core model of the low-permeability uranium-bearing sandstone was reconstructed through pore segmentation and mineral segmentation of the fused core scanning images. The results show that the pore structure of low-permeability uranium-bearing sandstone is complex, with multi-scale and multi-crossing characteristics. The intergranular pores determine the main seepage channels in the pore space, and the secondary pores have poor connectivity with other pores. Pyrite and coffinite are isolated from the connected pores and surrounded by a large number of clay minerals and ankerite cements, which increases the difficulty of uranium leaching. Clays and a large amount of ankerite cement fill the primary and secondary pores and pore throats of the low-permeability uranium-bearing sandstone, which significantly reduces the movable-fluid porosity and results in low overall permeability of the cores. The multi-scale, multi-mineral digital core proposed in this study provides a basis for characterizing the macroscopic and microscopic pore-throat structures and mineral distributions of low-permeability uranium-bearing sandstone and leads to a better understanding of its seepage characteristics.
Breast cancer is the most frequently detected tumor and could eventually result in a significant increase in female mortality globally. According to clinical statistics, one woman in eight is under threat of breast cancer. Lifestyle and inheritance patterns may be behind its spread among women, but preventive measures such as tests and periodic clinical checks can substantially mitigate its risk and improve survival chances. Early diagnosis and initial-stage treatment can help increase the survival rate, and for that purpose pathologists can draw support from nondestructive and efficient computer-aided diagnosis (CAD) systems. This study explores a breast cancer CAD method relying on multimodal medical imaging and decision-based fusion. In multimodal medical imaging fusion, a deep learning approach is applied, obtaining 97.5% accuracy with a 2.5% miss rate for breast cancer prediction. A deep extreme learning machine technique applied to feature-based data provided 97.41% accuracy. Finally, decision-based fusion applied to both breast cancer prediction models to diagnose its stages resulted in an overall accuracy of 97.97%. The proposed system model provides more accurate results compared with other state-of-the-art approaches, rapidly diagnosing breast cancer to decrease its mortality rate.
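Decision-level fusion of the two predictors can be sketched as a weighted combination of their output probabilities. The paper does not specify its fusion rule, so the weighted-average-and-threshold form, the weight `w`, and the threshold below are assumptions for illustration.

```python
import numpy as np

def decision_fuse(p_imaging, p_features, w=0.5, threshold=0.5):
    # decision-level fusion: combine the two models' predicted
    # probabilities and threshold the weighted average
    p = w * np.asarray(p_imaging, dtype=float) \
        + (1 - w) * np.asarray(p_features, dtype=float)
    labels = (p >= threshold).astype(int)
    return labels, p
```

A usage example: if the imaging model outputs probability 0.9 and the feature-based model 0.7 for the same case, the fused probability is 0.8 and the case is labeled positive.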
Objective: We studied the application of CT image fusion in the evaluation of radiation treatment planning for non-small cell lung cancer (NSCLC). Methods: Eleven patients with NSCLC, treated with three-dimensional conformal radiation therapy, were studied. Each patient underwent two sequential planning CT scans, one at pre-treatment and one at mid-treatment for field-reduction planning. Three treatment plans were established for each patient: plan A was based on the pre-treatment planning CT scans for the first course of treatment, plan B on the mid-treatment planning CT scans for the second course, and plan F on the fused images for the whole treatment. The irradiation doses received by organs at risk over the whole treatment with plans A and B were estimated by adding the parameters of the two plans, assuming that the parameters involve either different tissues (i.e., V20 = AV20 + BV20) or the same tissues within an organ (i.e., Dmax = ADmax + BDmax). The assessment parameters for plan F were calculated from the DVH of the whole treatment, and the two sets of assessment results were compared. Results: There were marked differences between the assessment results derived from simply adding the parameters of plans A and B and those derived from plan F. Conclusion: When a treatment plan is altered during the course of radiation treatment, the image fusion technique should be used in establishing the new plan. Estimating the assessment parameters for the whole treatment by simply adding those of plans A and B is inaccurate.
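Why the simple plus of per-plan parameters is inaccurate can be seen with a toy dose example: threshold metrics such as V20 are not additive across courses. The per-voxel doses below are hypothetical, purely for illustration.

```python
import numpy as np

def v20(dose_gy):
    # V20: fraction of voxels receiving at least 20 Gy
    return float(np.mean(dose_gy >= 20.0))

# hypothetical per-voxel doses for the two sequential courses
course_a = np.array([12.0, 15.0, 25.0, 8.0])
course_b = np.array([10.0, 8.0, 5.0, 3.0])

naive = v20(course_a) + v20(course_b)      # "AV20 + BV20"-style plus
true_total = v20(course_a + course_b)      # V20 of the summed dose

# voxels 0 and 1 exceed 20 Gy only in total, never in a single course,
# so the naive sum misses them
```

Here the naive sum gives 0.25 while the per-voxel summed dose gives 0.75, which is the kind of discrepancy the fused-image plan F exposes.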
Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to image degradation. This paper presents a fusion framework based on block-matching and 3D (BM3D) multi-scale transform. The algorithm first divides the image into blocks and groups these 2D image blocks into 3D arrays by their similarity. It then applies a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to convert the arrays into transform coefficients; the resulting low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from a series of fused 3D image-block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze several existing algorithms and the use of different transforms, e.g., the non-subsampled Contourlet transform (NSCT) and the non-subsampled Shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also achieves better objective evaluation criteria than state-of-the-art methods.
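The block-matching and grouping step can be sketched briefly. The following is a minimal illustration (not the full BM3D pipeline) that gathers 2D blocks similar to a reference block into one group by sum of squared differences; the block size and distance threshold are arbitrary choices:

```python
def blocks(img, k):
    """Slide a k*k window over a 2D image (list of lists) and return
    (row, col, flattened block) tuples."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - k + 1):
        for c in range(w - k + 1):
            blk = [img[r + i][c + j] for i in range(k) for j in range(k)]
            out.append((r, c, blk))
    return out

def group_similar(img, ref_pos, k=2, max_dist=8.0):
    """Collect the positions of blocks whose SSD to the reference block
    is below max_dist, mimicking the block-matching step that builds
    the 3D array."""
    all_blocks = blocks(img, k)
    ref = next(b for (r, c, b) in all_blocks if (r, c) == ref_pos)
    group = []
    for r, c, b in all_blocks:
        ssd = sum((x - y) ** 2 for x, y in zip(b, ref))
        if ssd <= max_dist:
            group.append((r, c))
    return group
```

A real implementation would stack each group into a 3D array, apply the 2D multi-scale plus 1D transform, fuse the coefficients, and aggregate overlapping blocks back into the image.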
Abstract: Aim: To fuse the fluorescence image and the transmission image of a cell into a single image containing more information than either individual image. Methods: Image fusion technology was applied to biological cell image processing. It can match the images and improve their confidence and spatial resolution. Using two algorithms, a double-threshold algorithm and a wavelet-transform-based denoising algorithm, the fluorescence image and transmission image of a cell were merged into a composite image. Results and Conclusion: Both the position of the fluorescence and the structure of the cell can be displayed in the composite image. The signal-to-noise ratio of the resultant image is improved to a large extent. The algorithms are useful not only for investigating fluorescence and transmission images but also for observing two or more fluorescent label probes in a single cell.
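A wavelet-based merge of two registered images can be sketched with a one-level 2D Haar transform: the approximation bands are averaged while the detail coefficient with the larger magnitude is kept. This is a common generic rule, standing in for the paper's double-threshold rule, which is not specified above:

```python
def haar2(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) for an
    image (list of lists) with even dimensions."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]; LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]; HH = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            a, b = img[2*i][2*j], img[2*i][2*j+1]
            c, d = img[2*i+1][2*j], img[2*i+1][2*j+1]
            LL[i][j] = (a + b + c + d) / 4; LH[i][j] = (a - b + c - d) / 4
            HL[i][j] = (a + b - c - d) / 4; HH[i][j] = (a - b - c + d) / 4
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    h, w = len(LL), len(LL[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for i in range(h):
        for j in range(w):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2*i][2*j]     = ll + lh + hl + hh
            img[2*i][2*j+1]   = ll - lh + hl - hh
            img[2*i+1][2*j]   = ll + lh - hl - hh
            img[2*i+1][2*j+1] = ll - lh - hl + hh
    return img

def fuse(img_a, img_b):
    """Average the approximation band, keep the larger-magnitude detail
    coefficient, then invert the transform."""
    A, B = haar2(img_a), haar2(img_b)
    fused = []
    for k in range(4):
        rule = (lambda x, y: (x + y) / 2) if k == 0 else \
               (lambda x, y: x if abs(x) >= abs(y) else y)
        fused.append([[rule(x, y) for x, y in zip(ra, rb)]
                      for ra, rb in zip(A[k], B[k])])
    return ihaar2(*fused)
```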
Funding: National Natural Science Foundation of China (No. 61171177); National Major Scientific Equipment Development Project of China (No. 2013YQ240803); Natural Science Foundation for Young Scientists of Shanxi Province (No. 2012021011-1); Scientific and Technological Project of Shanxi Province (No. 20140321010-02).
Abstract: Infrared and visible-light images can be obtained simultaneously by building a fluorescence imaging system comprising fluorescence excitation, image acquisition, mechanical, and image transmission and processing sections. The system uses the two-CCD (charge-coupled device) camera AD-080CL from JAI. A fusion algorithm for the visible-light and near-infrared images was designed for the fluorescence imaging system using a wavelet-transform image fusion algorithm. To enhance the fluorescent regions of the fused image, the luminance of the green component of the color image was adjusted. The supporting software was built in the VS2010 environment using the Microsoft Foundation Classes (MFC) application architecture.
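Boosting the green component so the fluorescent regions stand out can be as simple as a per-pixel gain with clipping; the gain value below is illustrative, not the one used by the system:

```python
def enhance_green(rgb_pixels, gain=1.5):
    """Scale the green component of each (R, G, B) pixel to emphasize
    fluorescent regions, clipping to the 8-bit range."""
    return [(r, min(255, round(g * gain)), b) for (r, g, b) in rgb_pixels]
```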
Abstract: Objective: The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information from different imaging modalities can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduce several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. Data Sources: The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Study Selection: Original articles, clinical practice, reviews, and other relevant literature published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Results: Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, including accurate localization, qualitative diagnosis, tumor staging, treatment-plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems can provide more imaging information about tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Conclusion: Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.
Abstract: Objective: The aim of our study was to compare the value of computed tomography (CT) and 99mTc-methylene diphosphonate (MDP) SPECT (single-photon emission computed tomography)/CT fusion imaging in determining the extent of mandibular invasion by malignant tumors of the oral cavity. Methods: This study had local ethics committee approval, and all patients gave written informed consent. Fifty-three patients with mandibular invasion by malignant tumors of the oral cavity underwent CT and SPECT/CT. The patients were divided into two groups: group A (invasion-periphery type) and group B (invasion-center type). Two radiologists assessed the CT images and two nuclear medicine physicians separately assessed the SPECT/CT images, in consensus and without knowledge of the results of the other imaging tests. The extent of bone involvement suggested by an imaging modality was compared with the pathological findings in the surgical specimen. Results: With pathological findings as the standard of reference: in group A, the extent of mandibular invasion measured by SPECT/CT was 1.02 ± 0.20 cm larger than that determined by pathology, while the extent measured by CT was 1.42 ± 0.35 cm smaller; the differences among the three methods were significant (P < 0.01). In group B, the extent measured by SPECT/CT was 1.3 ± 0.39 cm larger than that determined by pathology, and the extent measured by CT was 2.55 ± 1.44 cm smaller; the differences among the three methods were significant (P < 0.01). The extent of mandibular invasion shown by SPECT/CT was the extent that the surgeon had to excise to obtain clear margins. Conclusion: SPECT/CT fusion imaging has significant clinical value in determining the extent of mandibular invasion by malignant tumors of the oral cavity.
Abstract: To address the issues of incomplete information, blurred details, loss of details, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. The region attention module extracts the background feature map based on the distinct properties of the background and detail feature maps. A multi-scale convolutional attention module is suggested to enhance the communication of feature information. At the same time, a feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. This study uses three available datasets, from TNO, FLIR, and NIR, to perform thorough quantitative and qualitative trials against five additional algorithms. The methods are assessed on four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were done on the M3FD dataset to further verify the algorithm's performance in comparison with five other algorithms, with accuracy evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5). Comprehensive experimental findings show that the proposed CAEFusion algorithm performs well on subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
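The "modal maximum difference" strategy is described only at a high level above. One plausible reading, sketched here purely as an assumption (the paper's exact formulation may differ), selects per pixel the modality whose value deviates most from its own modality mean:

```python
def modal_max_diff_fuse(a, b):
    """Pick, per pixel, the source whose value deviates most from its
    own modality mean -- an illustrative reading of a 'modal maximum
    difference' rule, not the paper's definition."""
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    mu_a = sum(flat_a) / len(flat_a)
    mu_b = sum(flat_b) / len(flat_b)
    return [[x if abs(x - mu_a) >= abs(y - mu_b) else y
             for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```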
Abstract: Objective: The aim of the study was to evaluate the clinical value of ^99mTc-methylene diphosphonic acid (MDP) SPECT/CT fusion imaging and CT scanning in the diagnosis of mandibular infiltration by gingival carcinoma. Methods: Eighteen cases of gingival carcinoma were examined for mandibular infiltration by ^99mTc-MDP SPECT/CT fusion imaging and CT, and the scanning results were compared with pathology findings. Results: Eleven of 13 cases with well-differentiated squamous cell carcinoma showed positive images; one of the 11 was a false positive by pathology findings, and 10 cases exhibited infiltrated mandibles. Five cases with moderately and poorly differentiated squamous cell carcinoma showed positive images; pathology showed that carcinoma cells had infiltrated the cavum ossis of the mandible. Five of the 18 cases were positive on CT. Conclusion: ^99mTc-MDP SPECT/CT fusion imaging is a useful method for diagnosing mandibular infiltration by gingival carcinoma.
Funding: The Key R&D Project of Hainan Province under contract No. ZDYF2023SHFZ097; the National Natural Science Foundation of China under contract No. 42376180.
Abstract: Mangroves are indispensable to coastlines, maintaining biodiversity, and mitigating climate change. Therefore, improving the accuracy of mangrove information identification is crucial for their ecological protection. Given the limited morphological information in synthetic aperture radar (SAR) images, which suffer strong noise interference, and the susceptibility of optical images to weather and lighting conditions, this paper proposes a pixel-level weighted fusion method for SAR and optical images. Image fusion enhances the target features and makes mangrove monitoring more comprehensive and accurate. To address the high similarity between mangrove forests and other forests, this paper builds on the U-Net convolutional neural network and adds an attention mechanism in the feature extraction stage so that the model pays more attention to the mangrove vegetation areas in the image. To accelerate convergence and normalize the input, batch normalization (BN) and Dropout layers are added after each convolutional layer. Since mangroves are a minority class in the image, an improved cross-entropy loss function is introduced to improve the model's ability to recognize mangroves. The AttU-Net model for mangrove recognition in high-similarity environments is thus constructed from the fused images. Comparison experiments show that the overall accuracy of the improved U-Net model trained on the fused images is significantly higher in the predicted regions. Based on the fused images, the recognition results of the proposed AttU-Net model are compared with its benchmark model, U-Net, and with the Dense-Net, Res-Net, and Seg-Net methods. The AttU-Net model captured the complex structures and textural features of mangroves in images more effectively. The average OA, F1-score, and Kappa coefficient in the four tested regions were 94.406%, 90.006%, and 84.045%, significantly higher than the other methods. This method can provide technical support for the monitoring and protection of mangrove ecosystems.
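The improved cross-entropy loss is not specified in detail above. A common way to bias a binary cross-entropy toward a minority class is a class weight on the positive term; the weight value here is an assumed illustrative choice:

```python
import math

def weighted_cross_entropy(y_true, y_prob, pos_weight=5.0, eps=1e-12):
    """Binary cross-entropy with a higher weight on the minority
    (mangrove) class -- in the spirit of, not identical to, the paper's
    improved loss. pos_weight=5.0 is illustrative."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)      # clamp to avoid log(0)
        w = pos_weight if t == 1 else 1.0    # up-weight positives only
        total += -w * (t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```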
Abstract: Multimodal medical image fusion has attained immense popularity in recent years due to its robust utility in clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, considering both the geometric closeness and the gray-level similarity of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique, enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear-filter residuals to identify significant regions with high-amplitude edges and adequate size. The outputs of the low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weights are computed from these reconstructed images and fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality-evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared with competing techniques in both qualitative and quantitative evaluation. The proposed method also has lower computational complexity and execution time while improving diagnostic accuracy; owing to the low complexity of the fusion algorithm, its efficiency in practical applications is high. The results reveal that the proposed method exceeds the latest state-of-the-art methods in providing detailed information, edge contours, and overall contrast.
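The cross-bilateral filtering step can be sketched directly from its definition: spatial weights from pixel distance, range weights from gray-level similarity in the *other* (guide) image. This is a plain-Python illustration with arbitrary sigmas, far from an optimized implementation:

```python
import math

def cross_bilateral(guide, target, radius=1, sigma_s=1.0, sigma_r=10.0):
    """Filter `target` with weights computed from `guide`: spatial
    closeness plus gray-level similarity in the guide image, so the
    guide's edges steer the smoothing of the target."""
    h, w = len(guide), len(guide[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    y, x = i + di, j + dj
                    if 0 <= y < h and 0 <= x < w:
                        ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        wr = math.exp(-(guide[y][x] - guide[i][j]) ** 2
                                      / (2 * sigma_r ** 2))
                        num += ws * wr * target[y][x]
                        den += ws * wr
            out[i][j] = num / den
    return out
```

The detail image described above is then `target` minus this filtered output, pixel by pixel.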
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R346), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Recently, there have been many uses for digital image processing, and image fusion has become a prominent application in the imaging-processing domain. To create one final image that is more informative and helpful than the original inputs, image fusion merges two or more initial images of the same object. Image fusion aims to produce, enhance, and transform significant elements of the source images into combined images for human visual perception. It is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly-line robots, with image quality varying depending on the application. This paper surveys methods for merging images in the spatial and frequency domains, including combinations of stationary and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), HIS (Hue, Intensity, Saturation), the wavelet transform, the discrete cosine transform (DCT), the dual-tree complex wavelet transform (CWT), and multiple wavelet transforms. Image fusion methods integrate data from several source images of an identical target, thereby enhancing information very efficiently. More precisely, the depth-of-field constraint in imaging precludes an image from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed. Using wavelet decomposition and recomposition enables this method to reuse existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach first extracts particular characteristics from images to accurately reflect the level of clarity portrayed in the original images. This study also enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. Image performance is improved by segmenting the images with the K-Means algorithm; the segmentation step aids in identifying specific regions of interest, with Particle Swarm Optimization (PSO) used for trait selection and XGBoost for data classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% and good objective indicators.
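A minimal multi-focus selection rule, standing in for the wavelet-based clarity measure described above, keeps at each pixel the source with the higher local variance, since defocused regions have lower local variance:

```python
def local_variance(img, i, j, radius=1):
    """Variance of the gray values in a small window around (i, j)."""
    vals = [img[y][x]
            for y in range(max(0, i - radius), min(len(img), i + radius + 1))
            for x in range(max(0, j - radius), min(len(img[0]), j + radius + 1))]
    mu = sum(vals) / len(vals)
    return sum((v - mu) ** 2 for v in vals) / len(vals)

def multifocus_fuse(a, b):
    """Per pixel, keep the source with the higher local variance --
    a simple clarity measure for multi-focus fusion."""
    return [[a[i][j] if local_variance(a, i, j) >= local_variance(b, i, j)
             else b[i][j]
             for j in range(len(a[0]))] for i in range(len(a))]
```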
Funding: National Natural Science Foundation of China, grant number 61302188.
Abstract: Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical image fusion solutions to protect image details and significant information, a new multimodality medical image fusion method (NSST-PAPCNN-LatLRR) is proposed in this paper. First, the high- and low-frequency sub-band coefficients are obtained by decomposing the source image using NSST. Then, the latent low-rank representation algorithm is used to process the low-frequency sub-band coefficients, and an improved PAPCNN algorithm is proposed for the fusion of the high-frequency sub-band coefficients. The improved PAPCNN model is based on automatic parameter setting, with an optimal configuration method for the time decay factor αe. The experimental results show that, in comparison with five mainstream fusion algorithms, the new algorithm significantly improves the visual effect, enhances the ability to characterize important information in images, and further improves the protection of detailed information; it ranks first in at least four of six objective indexes.
Funding: Open Fund Project of the Key Laboratory of Instrumentation Science & Dynamic Measurement (No. 2DSYSJ2015005); Specialized Research Fund for the Doctoral Program of Ministry of Education Colleges (No. 20121420110004).
Abstract: The current mainstream fusion methods for infrared polarization images, Multiscale Geometry Analysis methods, focus on only one characteristic of the image representation, while the spatial-domain fusion method Principal Component Analysis (PCA) has the shortcoming of losing small targets. This paper presents a new fusion method for infrared polarization images based on the combination of the Nonsubsampled Shearlet Transform (NSST) and improved PCA. The method makes full use of NSST's effectiveness at expressing image details and PCA's ability to highlight the main features of images; combining the two integrates their complementary strengths to fully retain target features and image details. Firstly, the intensity and polarization images are decomposed into low-frequency components and high-frequency components in different directions by NSST. Secondly, the low-frequency components are fused with improved PCA, while the high-frequency components are fused by a joint decision rule using local energy and local variance. Finally, the fused image is reconstructed with the inverse NSST to obtain the final fused infrared polarization image. The experimental results show that the proposed method outperforms other methods in terms of detail preservation and visual effect.
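Plain PCA fusion derives its weights from the principal eigenvector of the 2x2 covariance matrix of the two bands; a basic (non-improved) version of that computation, under the usual convention of normalizing the eigenvector by its component sum, can be sketched as:

```python
def pca_weights(a_flat, b_flat):
    """Fusion weights for two flattened low-frequency bands from the
    principal eigenvector of their 2x2 covariance matrix (plain PCA
    fusion; the paper's improved variant is not reproduced here)."""
    n = len(a_flat)
    ma, mb = sum(a_flat) / n, sum(b_flat) / n
    caa = sum((x - ma) ** 2 for x in a_flat) / n
    cbb = sum((y - mb) ** 2 for y in b_flat) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(a_flat, b_flat)) / n
    # Largest eigenvalue of [[caa, cab], [cab, cbb]]
    lam = 0.5 * (caa + cbb + ((caa - cbb) ** 2 + 4 * cab * cab) ** 0.5)
    # Corresponding eigenvector; fall back to an axis in the diagonal case
    v1, v2 = (cab, lam - caa) if cab != 0 else \
             ((1.0, 0.0) if caa >= cbb else (0.0, 1.0))
    s = v1 + v2
    return v1 / s, v2 / s
```

The fused low-frequency band is then `w1 * A + w2 * B` pixel-wise with the returned weights.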
Funding: This work was supported by the National Natural Science Foundation of China (62075169, 62003247, 62061160370) and the Key Research and Development Program of Hubei Province (2020BAB113).
Abstract: This study proposes a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long-range dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration while maintaining appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary sizes. On the other hand, the multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as presenting optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital-photography image fusion demonstrate the superiority of our SwinFusion compared to state-of-the-art unified image fusion algorithms and task-specific alternatives. Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
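Intensity and texture loss terms of this kind are commonly defined against the element-wise maximum of the sources; the sketch below shows those two terms in that common form (the SSIM term is omitted, and SwinFusion's exact definitions may differ):

```python
def intensity_loss(fused, a, b):
    """Mean L1 distance between the fused image and the element-wise
    max of the two sources."""
    total, n = 0.0, 0
    for rf, ra, rb in zip(fused, a, b):
        for f, x, y in zip(rf, ra, rb):
            total += abs(f - max(x, y))
            n += 1
    return total / n

def texture_loss(fused, a, b):
    """Mean L1 distance between horizontal gradients of the fused
    image and the max-magnitude gradients of the sources."""
    total, n = 0.0, 0
    for rf, ra, rb in zip(fused, a, b):
        for j in range(len(rf) - 1):
            gf = rf[j + 1] - rf[j]
            ga = ra[j + 1] - ra[j]
            gb = rb[j + 1] - rb[j]
            g = ga if abs(ga) >= abs(gb) else gb
            total += abs(gf - g)
            n += 1
    return total / n
```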
基金supported by the National Natural Science Foundation of China (60802084)
Abstract: A new method for image fusion based on the Contourlet transform and cycle spinning is proposed. The Contourlet transform is a flexible multiresolution, local, and directional image expansion that also provides a sparse representation for two-dimensional piecewise-smooth signals resembling images. Due to the lack of translation invariance in the Contourlet transform, the conventional image fusion algorithm based on it introduces many artifacts. Following the theory of cycle spinning as applied to image denoising, an invariant transform can reduce these artifacts efficiently, so the technique of cycle spinning is introduced to develop a translation-invariant Contourlet fusion algorithm. This method can effectively eliminate the Gibbs-like phenomenon, extract the characteristics of the original images, and preserve more important information. Experimental results show the simplicity and effectiveness of the method and its advantages over conventional approaches.
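Cycle spinning itself is transform-agnostic: shift the input, run the shift-variant transform-and-process step, undo the shift, and average over all shifts. A 1D toy with a pairwise Haar threshold stage illustrates the mechanics; the Contourlet transform is replaced by this toy stage purely for brevity:

```python
def denoise_hard(signal, thresh):
    """Toy shift-variant stage: pairwise Haar on adjacent samples with
    hard thresholding of the detail coefficient."""
    out = list(signal)
    for i in range(0, len(out) - 1, 2):
        avg = (out[i] + out[i + 1]) / 2
        det = (out[i] - out[i + 1]) / 2
        if abs(det) < thresh:
            det = 0.0
        out[i], out[i + 1] = avg + det, avg - det
    return out

def cycle_spin(signal, thresh, shifts=None):
    """Average the results of shift -> process -> unshift over several
    circular shifts, suppressing shift-variance artifacts."""
    n = len(signal)
    shifts = range(n) if shifts is None else shifts
    acc = [0.0] * n
    count = 0
    for s in shifts:
        shifted = signal[s:] + signal[:s]            # circular left shift
        proc = denoise_hard(shifted, thresh)
        unshifted = proc[-s:] + proc[:-s] if s else proc  # undo the shift
        acc = [a + v for a, v in zip(acc, unshifted)]
        count += 1
    return [a / count for a in acc]
```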
基金supported by the National Natural Science Foundation of China(62276192,62075169,62061160370)the Key Research and Development Program of Hubei Province(2020BAB113)。
Abstract: Image fusion aims to integrate complementary information in source images to synthesize a fused image comprehensively characterizing the imaging scene. However, existing image fusion algorithms are only applicable to strictly aligned source images and cause severe artifacts in the fusion results when the input images have slight shifts or deformations. In addition, the fusion results typically only have good visual effect but neglect the semantic requirements of high-level vision tasks. This study incorporates image registration, image fusion, and the semantic requirements of high-level vision tasks into a single framework and proposes a novel image registration and fusion method, named SuperFusion. Specifically, we design a registration network to estimate bidirectional deformation fields to rectify geometric distortions of the input images under the supervision of both photometric and end-point constraints. The registration and fusion are combined in a symmetric scheme, in which mutual promotion is achieved by optimizing the naive fusion loss and is further enhanced by the mono-modal consistency constraint on the symmetric fusion outputs. In addition, the image fusion network is equipped with a global spatial attention mechanism to achieve adaptive feature integration. Moreover, a semantic constraint based on a pre-trained segmentation model and the Lovasz-Softmax loss is deployed to guide the fusion network to focus more on the semantic requirements of high-level vision tasks. Extensive experiments on image registration, image fusion, and semantic segmentation tasks demonstrate the superiority of our SuperFusion compared to state-of-the-art alternatives. The source code and pre-trained model are publicly available at https://github.com/Linfeng-Tang/SuperFusion.
Funding: This work was supported by the National Natural Science Foundation of China (No. 11775107) and the Key Projects of the Education Department of Hunan Province of China (No. 16A184).
Abstract: In the process of in situ leaching of uranium, the microstructure controls and influences the flow distribution, percolation characteristics, and reaction mechanism of lixivium in the pores of reservoir rocks, and it directly affects the leaching of useful components. In this study, the pore throats, pore-size distribution, and mineral composition of low-permeability uranium-bearing sandstone were quantitatively analyzed by high-pressure mercury injection, nuclear magnetic resonance, X-ray diffraction, and wavelength-dispersive X-ray fluorescence. The distribution characteristics of pores and minerals in the samples were qualitatively analyzed using energy-dispersive scanning electron microscopy and multi-resolution CT images. Image registration with the landmarks algorithm provided by FEI Avizo was used to accurately match the CT images of different resolutions. A multi-scale, multi-mineral digital core model of low-permeability uranium-bearing sandstone was reconstructed through pore segmentation and mineral segmentation of the fused core scanning images. The results show that the pore structure of low-permeability uranium-bearing sandstone is complex, with multi-scale and multi-crossing characteristics. The intergranular pores determine the main seepage channel in the pore space, and the secondary pores have poor connectivity with other pores. Pyrite and coffinite are isolated from the connected pores and surrounded by a large amount of clay minerals and ankerite cement, which increases the difficulty of uranium leaching. Clays and abundant ankerite cement fill the primary and secondary pores and pore throats, which significantly reduces the movable-fluid porosity and results in low overall core permeability. The multi-scale, multi-mineral digital core proposed in this study provides a basis for characterizing the macroscopic and microscopic pore-throat structures and mineral distributions of low-permeability uranium-bearing sandstone and helps to better understand its seepage characteristics.
基金supported by the KIAS(Research No.CG076601)in part by Sejong University Faculty Research Fund.
Abstract: Breast cancer is the most frequently detected tumor and could eventually result in a significant increase in female mortality globally. According to clinical statistics, one woman out of eight is under the threat of breast cancer. Lifestyle and inheritance patterns may be reasons behind its spread among women. However, some preventive measures, such as tests and periodic clinical checks, can mitigate its risk, thereby improving survival chances substantially. Early diagnosis and initial-stage treatment can help increase the survival rate. For that purpose, pathologists can gather support from nondestructive and efficient computer-aided diagnosis (CAD) systems. This study explores a breast cancer CAD method relying on multimodal medical imaging and decision-based fusion. In multimodal medical imaging fusion, a deep learning approach is applied, obtaining 97.5% accuracy with a 2.5% miss rate for breast cancer prediction. A deep extreme learning machine technique applied to feature-based data provided 97.41% accuracy. Finally, decision-based fusion applied to both breast cancer prediction models to diagnose its stages resulted in an overall accuracy of 97.97%. The proposed system model provides more accurate results than other state-of-the-art approaches, rapidly diagnosing breast cancer to decrease its mortality rate.
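Decision-based fusion of two predictors can be as simple as combining their output probabilities. The equal weighting and 0.5 cutoff below are assumptions for illustration, not the paper's actual rule:

```python
def decision_fuse(p_image_model, p_feature_model, w=0.5, cutoff=0.5):
    """Combine per-sample probabilities from the imaging-based and
    feature-based predictors by a weighted average, then threshold.
    w=0.5 (equal trust in both models) is an illustrative assumption."""
    fused = [w * p1 + (1 - w) * p2
             for p1, p2 in zip(p_image_model, p_feature_model)]
    return [1 if p >= cutoff else 0 for p in fused]
```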
Funding: Supported by a grant from the Key Program of the Science and Technology Foundation of Hubei Province (No. 2007A301B33).
Abstract: Objective: We studied the application of CT image fusion in the evaluation of radiation treatment planning for non-small cell lung cancer (NSCLC). Methods: Eleven patients with NSCLC, treated with three-dimensional conformal radiation therapy, were studied. Each patient underwent two sequential planning CT scans: one at pre-treatment and one at mid-treatment for field-reduction planning. Three treatment plans were established for each patient: plan A was based on the pre-treatment planning CT scans for the first course of treatment, plan B on the mid-treatment planning CT scans for the second course of treatment, and plan F on the fused images for the whole treatment. The irradiation doses received by organs at risk over the whole treatment under plans A and B were estimated by summing the parameters of the two plans, assuming that the parameters involved either different tissues (i.e., V20 = AV20 + BV20) or the same tissues within an organ (i.e., Dmax = ADmax + BDmax). The assessment parameters of plan F were calculated from the DVH of the whole treatment. The two sets of assessment results were then compared. Results: There were marked differences between the assessment results derived by summing the parameters of plans A and B and those derived from plan F. Conclusion: When a treatment plan is altered during the course of radiation treatment, the image fusion technique should be used in establishing the new plan. Estimating the assessment parameters for the whole treatment by simply summing those of plans A and B is inaccurate.
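A small numeric sketch (hypothetical voxel dose grids) shows why simple summation of per-plan parameters is inaccurate: when the hot spots of plans A and B fall on different voxels, ADmax + BDmax overestimates the true maximum of the accumulated dose, which the fused-image plan computes voxel-wise.

```python
import numpy as np

# Hypothetical voxel dose grids (Gy) for the two sequential plans.
# The hot spots of the two plans fall on different voxels.
dose_A = np.array([[40.0, 10.0],
                   [ 5.0, 20.0]])
dose_B = np.array([[ 5.0, 35.0],
                   [10.0, 15.0]])

# Simple summation of per-plan maxima (parameters of plans A and B):
dmax_summed = dose_A.max() + dose_B.max()   # 40 + 35 = 75 Gy

# Voxel-wise accumulation, as on the fused-image plan F:
dmax_true = (dose_A + dose_B).max()         # max over [45, 45, 15, 35] = 45 Gy

print(dmax_summed, dmax_true)  # 75.0 45.0 -- summation overestimates
```

The same argument applies to V20 when the anatomy shifts between scans: the voxels above 20 Gy in the two plans need not be the same tissue, so AV20 + BV20 is not the V20 of the accumulated dose.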
Abstract: High-resolution image fusion is a significant focus in the field of image processing. A new image fusion model is presented based on the characteristic level of empirical mode decomposition (EMD). The intensity-hue-saturation (IHS) transform of the multi-spectral image first gives the intensity image. Thereafter, a 2D EMD, built as a row-column extension of the 1D EMD model, is used to decompose detailed-scale and coarse-scale images from the high-resolution band image and the intensity image. Finally, a fused intensity image is obtained by reconstruction from the high frequencies of the high-resolution image and the low frequencies of the intensity image, and the IHS inverse transform yields the fused image. After presenting the EMD principle, a multi-scale decomposition and reconstruction algorithm for 2D EMD is defined and a fusion scheme based on EMD is proposed. The panchromatic band and multi-spectral bands 3, 2, and 1 of QuickBird are used to assess the quality of the fusion algorithm. After selecting the appropriate intrinsic mode functions (IMFs) for the merger on the basis of EMD analysis of specific row (column) pixel gray-value series, the fusion scheme gives a fused image, which is compared with commonly used fusion algorithms (wavelet, IHS, Brovey). The objectives of image fusion include enhancing the visibility of the image and improving the spatial resolution and spectral information of the original images. To assess the quality of an image after fusion, information entropy and standard deviation are applied to assess the spatial details of the fused images, and the correlation coefficient, bias index, and warping degree measure the distortion between the original image and the fused image in terms of spectral information. In the fusion experiments, the proposed EMD-based algorithm obtains better results.
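The IHS-based injection scheme can be sketched as follows. For illustration only, the EMD low/high decomposition is replaced by a simple box-filter split and the intensity is taken as the RGB mean; the actual method uses 2D EMD and the full IHS forward/inverse transforms.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude k x k box filter (edge-padded), standing in for the low-pass stage."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def ihs_like_fusion(rgb, pan):
    """Inject panchromatic detail into the intensity channel.

    rgb: (H, W, 3) multi-spectral image; pan: (H, W) high-resolution band.
    Intensity here is the RGB mean; the EMD decomposition of the paper is
    replaced by a box-filter low/high split purely for illustration.
    """
    I = rgb.mean(axis=2)
    I_fused = box_blur(I) + (pan - box_blur(pan))   # low(I) + high(pan)
    ratio = I_fused / np.maximum(I, 1e-6)
    return rgb * ratio[..., None]                   # rescale each band

# Toy 4x4 example with random data standing in for QuickBird bands.
rng = np.random.default_rng(0)
rgb = rng.uniform(0.2, 0.8, size=(4, 4, 3))
pan = rng.uniform(0.0, 1.0, size=(4, 4))
fused = ihs_like_fusion(rgb, pan)
print(fused.shape)  # (4, 4, 3)
```

Rescaling each band by the ratio of fused to original intensity preserves the hue, which is the point of working in an IHS-style representation rather than fusing the RGB bands independently.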
Funding: Supported by the National Natural Science Foundation of China (61572063, 61401308), the Fundamental Research Funds for the Central Universities (2016YJS039), the Natural Science Foundation of Hebei Province (F2016201142, F2016201187), the Natural Social Science Foundation of Hebei Province (HB15TQ015), the Science Research Project of Hebei Province (QN2016085, ZC2016040), and the Natural Science Foundation of Hebei University (2014-303).
Abstract: Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to image degradation. This paper presents a fusion framework based on block-matching and 3D (BM3D) multi-scale transforms. The algorithm first divides the image into different blocks and groups these 2D image blocks into 3D arrays by their similarity. It then uses a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to convert the arrays into transform coefficients, and the obtained low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from a series of fused 3D image block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze some existing algorithms and the use of different transforms, e.g., the non-subsampled contourlet transform (NSCT) and the non-subsampled shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also achieves better objective evaluation criteria than state-of-the-art methods.
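The different fusion rules for low- and high-frequency coefficients commonly amount to averaging the approximation coefficients and keeping the detail coefficient with the larger magnitude (the stronger edge response). A sketch on toy coefficient arrays follows; the paper's exact rules may differ.

```python
import numpy as np

def fuse_coefficients(low_a, low_b, high_a, high_b):
    """Typical pixel-level fusion rules on transform coefficients.

    Low-frequency (approximation) coefficients are averaged; for the
    high-frequency (detail) coefficients, the one with the larger
    absolute value is kept at each position.
    """
    low_fused = 0.5 * (low_a + low_b)
    high_fused = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low_fused, high_fused

# Toy coefficient arrays from two source images.
low_a = np.array([[4.0, 2.0], [0.0, 6.0]])
low_b = np.array([[2.0, 2.0], [4.0, 2.0]])
high_a = np.array([[ 3.0, -1.0], [0.5, -4.0]])
high_b = np.array([[-2.0,  2.0], [1.0,  3.0]])

low_f, high_f = fuse_coefficients(low_a, low_b, high_a, high_b)
print(low_f)   # [[3. 2.] [2. 4.]]
print(high_f)  # [[ 3.  2.] [ 1. -4.]]
```

In the BM3D-style framework these rules would be applied per 3D block group after the 2D multi-scale and 1D transforms, before the inverse transform and aggregation.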