Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image.
Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
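As a rough illustration of why depthwise separable layers shrink the model, the parameter counts of a standard convolution and a depthwise separable convolution can be compared directly. This is only a sketch with hypothetical layer sizes, not the paper's exact DDSC layer; note that the dilation used in DDSC enlarges the receptive field without changing the kernel's parameter count.

```python
# Parameter counts for a standard vs. a depthwise separable k x k convolution.
# Hypothetical channel sizes; the DDSC layer in the paper also applies dilation,
# which widens the field of view but adds no parameters.

def conv_params(c_in, c_out, k):
    """Standard convolution: one k x k kernel per (input, output) channel pair."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one kernel per input channel) + 1x1 pointwise conv."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 128, 256, 3
standard = conv_params(c_in, c_out, k)                   # 294912
separable = depthwise_separable_params(c_in, c_out, k)   # 33920
print(standard, separable, 1 - separable / standard)     # ~88.5% fewer parameters
```

The ratio shows the order of savings the abstract reports at the whole-network level.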
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods have yet to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics; it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. DL models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
Because of cloudy and rainy weather in south China, optical remote sensing images often cannot be obtained easily. Using the regional trial results in Baoying, Jiangsu province, this paper explored the fusion model and effect of ENVISAT/SAR and HJ-1A satellite multispectral remote sensing images. Based on the ARSIS strategy, using the wavelet transform and the Interaction between the Band Structure Model (IBSM), the research performed wavelet decomposition of the ENVISAT satellite SAR and HJ-1A satellite CCD images, reconstructed the low- and high-frequency coefficients, and obtained the fusion images through the inverse wavelet transform. Because the low- and high-frequency images have different characteristics in different areas, different adaptive fusion rules were adopted to enhance the integration process, with comparisons against the PCA transform, IHS transform, and other traditional methods by subjective and corresponding quantitative evaluation. Furthermore, the research extracted the bands and NDVI values around the fusion with GPS samples, then analyzed and explained the fusion effect. The results showed that the spectral distortion of the wavelet fusion, IHS transform, and PCA transform images was 0.1016, 0.3261, and 1.2772, respectively, and the entropy was 14.7015, 11.8993, and 13.2293, respectively; the wavelet fusion achieved the lowest distortion and the highest entropy. The wavelet method maintained good spectral capability and visual effects while improving the spatial resolution, and its information interpretation effect was much better than that of the other two methods.
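The entropy figures quoted above are typically the Shannon entropy of the fused image's gray-level histogram. A minimal sketch of that metric (the paper's exact definition and bit depth may differ):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of an 8-bit image's gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

flat = np.zeros((8, 8), dtype=np.uint8)   # a single gray level carries no information
print(image_entropy(flat))                # 0.0
```

Higher entropy after fusion indicates a richer gray-level distribution, which is why it is used alongside spectral distortion in the comparison.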
In image fusion, measuring the local character and clarity is called activity measurement. The traditional measurement is decided only by the high-frequency detail coefficients, which makes the energy expression insufficient to reflect the local clarity. Therefore, in this paper, a novel construction method for activity measurement is proposed. Firstly, it applies wavelet decomposition to the fusion source images and then utilizes the high- and low-frequency wavelet coefficients synthetically, taking the normalized variance as the weight of the high-frequency energy. Secondly, it calculates the measurement from the weighted energy, which can be used to measure the local character. Finally, the fusion coefficients are obtained. To illustrate the superiority of the new method, three kinds of assessment indicators are provided. The experimental results show that, compared with the traditional methods, the new method reduces blurring and improves the indicator values, giving it clear advantages in practical applications.
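The idea of combining both bands can be sketched as follows: split a block into low- and high-frequency parts, then form a weighted energy in which the block's normalized variance scales the high-frequency term. This is a loose illustration with a hand-rolled one-level Haar-style split and a hypothetical variance normalization, not the paper's exact construction.

```python
import numpy as np

def haar_split(img):
    """One-level Haar-style split into a low-pass band and pooled high-pass detail."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    low = (a + b + c + d) / 4.0
    detail = (np.abs(a - b) + np.abs(a - c) + np.abs(a - d)) / 3.0
    return low, detail

def activity_measure(img):
    """Weighted energy: the normalized variance scales the high-frequency
    energy, and the low-frequency energy is added so the measure still
    responds in smooth regions (normalization here is hypothetical)."""
    low, detail = haar_split(img.astype(float))
    var = img.var()
    w = var / (var + 1.0)              # squashed into [0, 1)
    return w * (detail ** 2).mean() + (low ** 2).mean()

smooth = np.full((8, 8), 100.0)
textured = np.indices((8, 8)).sum(axis=0) * 30.0
print(activity_measure(textured) > activity_measure(smooth))  # True
```

A purely high-frequency measure would score the smooth block exactly zero; the low-frequency term keeps the energy expression informative there, which is the abstract's point.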
To overcome the shortcomings of 1D and 2D Otsu thresholding techniques, the 3D Otsu method has been developed. Among all Otsu methods, the 3D Otsu technique provides the best threshold values for multi-level thresholding processes. In this paper, to improve the quality of segmented images, a simple and effective multilevel thresholding method is introduced. The proposed approach focuses on preserving edge detail by computing the 3D Otsu thresholds along with a fusion step. The advantages of the presented scheme include higher-quality outcomes, better preservation of tiny details and boundaries, and reduced execution time as threshold levels rise. The fusion approach depends upon the differences between pixel intensity values within a small local space of an image; it aims to improve localized information after the thresholding process. Fusing images based on local contrast can improve image segmentation performance by minimizing the loss of local contrast, loss of details, and gray-level distributions. Results show that the proposed method yields more promising segmentation results than the conventional 1D Otsu, 2D Otsu, and 3D Otsu methods, as evident from the objective and subjective evaluations.
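For reference, the 1D Otsu baseline that the 2D and 3D variants extend simply picks the gray level maximizing the between-class variance of the histogram. A compact sketch (single threshold; the multilevel and 3D versions generalize this search):

```python
import numpy as np

def otsu_1d(img):
    """Classic 1D Otsu: exhaustive search for the gray level that
    maximizes the between-class variance of the two classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Bimodal toy image: two gray levels at 10 and 200.
img = np.array([10] * 50 + [200] * 50, dtype=np.uint8).reshape(10, 10)
print(otsu_1d(img))  # a threshold strictly between the two modes
```

The 2D and 3D variants add neighborhood-mean and median coordinates to the histogram to make the search noise-robust, at higher cost.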
The high-frequency components in the traditional multi-scale transform method are approximately sparse and can represent different details. In the low-frequency component, however, very few coefficients lie around zero, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, so directly fusing the low-frequency component is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detailed information at multiple scales and from diverse directions. The combination of the two methods is conducive to acquiring more characteristics and more accurate fusion results. For the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and then different fusion rules are applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detailed information. Experimental results show that the proposed algorithm can obtain more detailed information and clearer infrared target fusion results than the traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and the time consumption is significantly reduced.
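The classical white top-hat (image minus its morphological opening) illustrates the kind of bright-feature extraction the abstract relies on; the paper uses "a new type" of top-hat transform, so this sketch shows only the standard operator:

```python
import numpy as np
from scipy import ndimage

# A bright feature narrower than the structuring element survives the
# white top-hat, while the flat background is removed entirely.
img = np.zeros((15, 15))
img[7, 7] = 5.0                          # bright spot (e.g., an infrared target)
tophat = ndimage.white_tophat(img, size=5)
print(tophat.max(), tophat[0, 0])        # 5.0 0.0
```

On a low-frequency band, this isolates salient bright structures (infrared targets) from the slowly varying background, which can then be fused with their own rule as the abstract describes.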
A new method based on a resolution degradation model is proposed to improve both the spatial and spectral quality of the synthetic images. Some ETM+ panchromatic and multispectral images are used to assess the new method. Its spatial and spectral effects are evaluated by qualitative and quantitative measures, and the results are compared with those of IHS, PCA, Brovey, OWT (Orthogonal Wavelet Transform), and RWT (Redundant Wavelet Transform). The results show that the new method can keep almost the same spatial resolution as the panchromatic images, and its spectral effect is as good as those of the wavelet-based methods.
Image fusion has developed into an important area of research. In remote sensing, the use of the same image sensor in different working modes, or of different image sensors, can provide reinforcing or complementary information. Therefore, it is highly valuable to fuse outputs from multiple sensors (or the same sensor in different working modes) to improve the overall quality of the remote sensing images, which is very useful for human visual perception and image processing tasks. Accordingly, in this paper, we first provide a comprehensive survey of the state of the art of multi-sensor image fusion methods in terms of three aspects: pixel-level fusion, feature-level fusion, and decision-level fusion. An overview of existing fusion strategies is then introduced, after which the existing fusion quality measures are summarized. Finally, this review analyzes the development trends in fusion algorithms that may attract researchers to further explore research in this field.
Geological data are constructed in vector format in a geographical information system (GIS), while other data such as remote sensing images, geographical data, and geochemical data are saved in raster format. This paper programmatically converts the vector data into 8-bit images according to the importance of each layer to mineralization, so that the geological meaning can be conveyed through the raster images. The paper also fuses geographical and geochemical data with the programmed strata data. The result shows that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.
Considering that no single full-reference image quality assessment (IQA) method can give the best performance in all situations, several multi-method fusion metrics have been proposed. Machine learning techniques are often involved in such multi-method fusion metrics so that their output is more consistent with human visual perception. On the other hand, the robustness and generalization ability of these multi-method fusion metrics are questionable because of the scarcity of images with mean opinion scores. To comprehensively validate whether the generalization ability of such multi-method fusion IQA metrics is satisfactory, we construct a new image database containing up to 60 reference images. The newly built image database is then used to test the generalization ability of different multi-method fusion IQA metrics. A cross-database validation experiment indicates that on our new image database, the performance of all the multi-method fusion IQA metrics has no statistically significant difference from that of some single-method IQA metrics such as FSIM and MAD. In the end, a thorough analysis is given to explain why the performance of the multi-method fusion IQA framework drops significantly in cross-database validation.
We present a novel sea-ice classification framework based on locality preserving fusion of multi-source image information. The locality preservation is two-fold, i.e., local characterization in both the spatial and feature domains. We commence by simultaneously learning a projection matrix, which preserves spatial localities, and a similarity matrix, which encodes feature similarities. We map the pixels of the multi-source images by the projection matrix to a set of fusion vectors that preserve the spatial localities of the image. On the other hand, by applying Laplacian eigen-decomposition to the similarity matrix, we obtain another set of fusion vectors that preserve the local feature similarities. We concatenate the fusion vectors for both spatial and feature locality preservation and obtain the fusion image. Finally, we classify the fusion image pixels by a novel sliding ensemble strategy, which enhances locality preservation in classification. Our locality preserving fusion framework is effective in classifying multi-source sea-ice images (e.g., multi-spectral and synthetic aperture radar (SAR) images) because it not only comprehensively captures the spatial neighbouring relationships but also intrinsically characterizes the feature associations between different types of sea ice. Experimental evaluations validate the effectiveness of our framework.
Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In order to retain useful information and obtain more reliable results, a novel medical image fusion algorithm based on pulse coupled neural networks (PCNN) and multi-feature fuzzy clustering is proposed, which makes use of multiple image features and combines the advantages of PCNN based on local entropy and the variance of local entropy. The experimental results indicate that, compared with other fusion methods, the proposed method better preserves image details, is more robust, and significantly improves the visual effect of the image, with less information distortion.
With the continuous advancement of imaging sensors, a host of new issues have emerged. A major problem is how to find focus areas more accurately for multi-focus image fusion. Multi-focus image fusion extracts the focused information from the source images to construct a globally in-focus image that includes more information than any single source image. In this paper, a novel multi-focus image fusion method based on the Laplacian operator and region optimization is proposed. The evaluation of image saliency based on the Laplacian operator can easily distinguish the focused region from the out-of-focus region, and the resulting decision map contains less residual information than those of other methods. To obtain a precise decision map, focus-area and edge optimization based on regional connectivity and edge detection are applied. Finally, the original images are fused through the decision map. Experimental results indicate that the proposed algorithm outperforms a series of other algorithms in terms of both subjective and objective evaluations.
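The Laplacian-based saliency step rests on a simple fact: defocused regions respond weakly to the Laplacian, focused ones strongly. A minimal sketch of that focus measure, with a toy stand-in for a blurred region (the paper's region and edge optimization stages are not shown):

```python
import numpy as np

# 3x3 discrete Laplacian kernel.
LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def laplacian_energy(img):
    """Per-pixel squared response to the Laplacian (valid region only)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):                 # unrolled 3x3 correlation
        for j in range(3):
            out += LAP[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out ** 2

rng = np.random.default_rng(0)
sharp = rng.random((16, 16))                  # textured, in-focus region
blurred = np.full((16, 16), sharp.mean())     # stand-in for a defocused region
decision = laplacian_energy(sharp).sum() > laplacian_energy(blurred).sum()
print(decision)  # True: the focused region wins the decision map
```

Comparing these energies per region yields the raw decision map, which the paper then cleans up via regional connectivity and edge detection.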
Two key points of pixel-level multi-focus image fusion are the clarity measure and the pixel coefficients fusion rule. Along with different improvements on these two points, various fusion schemes have been proposed in the literature. However, the traditional clarity measures are not designed for compressive imaging measurements, which are maps of the source scene taken with a random or nearly random measurement matrix. This paper presents a novel, efficient multi-focus image fusion framework for compressive imaging sensor networks. Here the clarity measure of the raw compressive measurements is obtained not from the random sampling data itself but from selected Hadamard coefficients, which can also be acquired efficiently from a compressive imaging system. Then, the compressive measurements of the different images are fused by a selection fusion rule. Finally, block-based CS coupled with iterative projection-based reconstruction is used to recover the fused image. Experimental results on commonly used test data demonstrate the effectiveness of the proposed method.
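The Hadamard-coefficient clarity idea can be sketched on an image block directly: transform the block with a Hadamard matrix and use the energy of the non-DC coefficients as a focus proxy. This is an illustration of the principle only; the paper selects particular coefficients from the compressive measurement path rather than transforming full blocks, and the normalization here is arbitrary.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_clarity(block):
    """Clarity proxy: energy of the non-DC Hadamard coefficients of a block."""
    n = block.shape[0]                 # block side must be a power of two
    H = hadamard(n)
    coeffs = H @ block @ H.T / n       # separable 2D Hadamard transform
    coeffs[0, 0] = 0.0                 # drop the DC (mean) term
    return (coeffs ** 2).sum()

flat = np.full((8, 8), 3.0)            # defocus-like: no structure
edge = np.zeros((8, 8)); edge[:, 4:] = 6.0   # focus-like: a sharp edge
print(hadamard_clarity(edge) > hadamard_clarity(flat))  # True
```

Because Hadamard entries are ±1, this measure needs only additions, which is why it suits resource-limited compressive sensor nodes.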
To address the difficulty that complex algorithms have in achieving real-time processing of multiband image fusion with large amounts of data, a real-time image fusion system based on an FPGA and multiple DSPs is designed. Five-band image acquisition, image registration, image fusion, and display output can all be performed within the system, which uses the FPGA as the main processor and three DSPs as algorithm processors. The design makes full use of the flexible, high-speed characteristics of the FPGA, and an image fusion algorithm based on the multi-wavelet transform is optimized and applied to the system. The final experimental results show that the system can process five-band images with a resolution of 1392 × 1040 at a frame rate of 15 Hz, completing the processing within 41 ms.
A construction method for a two-channel non-separable wavelet filter bank whose dilation matrix is [1, 1; 1, -1], and its application to the fusion of multi-spectral images, are presented. Many 4×4 filter banks are designed, and a multi-spectral image fusion algorithm based on this kind of wavelet is proposed. Using this filter bank, multi-resolution wavelet decomposition of the intensity of the multi-spectral image and of the panchromatic image is performed, and the two low-frequency components of the intensity and the panchromatic image are merged using a tradeoff parameter. The experimental results show that this method preserves spectral quality and high spatial resolution information well; its performance in both respects is better than that of the fusion method based on DWFT and IHS. When the parameter t is close to 1, the fused image obtains rich spectral information from the original MS image. The amount of computation is reduced to only half that of the fusion method based on a four-channel wavelet transform.
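The role of the tradeoff parameter t in the low-frequency merge can be sketched as a convex combination of the two low-frequency bands; this is a plausible reading of the abstract (the paper's exact merge rule may weight differently), with hypothetical band values.

```python
import numpy as np

def merge_low_frequency(ms_low, pan_low, t):
    """Convex combination of the two low-frequency bands; t near 1 keeps
    more of the multi-spectral (spectral) information, t near 0 more of
    the panchromatic (spatial) information. A sketch of the rule only."""
    return t * ms_low + (1.0 - t) * pan_low

ms = np.full((4, 4), 0.2)     # hypothetical MS intensity low-frequency band
pan = np.full((4, 4), 0.8)    # hypothetical panchromatic low-frequency band
fused = merge_low_frequency(ms, pan, t=0.9)
print(float(fused[0, 0]))     # ~0.26, dominated by the MS band
```

This makes concrete why t close to 1 yields rich spectral information: the MS low-frequency band then dominates the fused baseband.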
Hyperspectral remote sensing image (HSI) fusion with multispectral remote sensing images (MSI) improves data resolution. However, current fusion algorithms focus on local information and overlook long-range dependencies, and network parameter tuning prioritizes global optimization, neglecting spatial and spectral constraints and limiting spatial and spectral reconstruction capabilities. This study introduces SwinGAN, a fusion network combining Swin Transformer, CNN, and GAN architectures. SwinGAN's generator employs a detail injection framework to separately extract HSI and MSI features, fusing them to generate spatial residuals. These residuals are injected into the upsampled HSI to produce the final image, while a pure CNN architecture acts as the discriminator, enhancing the fusion quality. Additionally, we introduce a new adaptive loss function that improves image fusion accuracy. The loss function uses L1 loss as the content loss, and spatial and spectral gradient loss functions are introduced to improve the spatial representation and spectral fidelity of the fused images. Our experimental results on several datasets demonstrate that SwinGAN outperforms current popular algorithms in both spatial and spectral reconstruction capabilities. The ablation experiments also demonstrate the rationality of the various components of the proposed loss function.
To eliminate unnecessary background information, such as soft tissues in original CT images, and the adverse impact of the similarity of adjacent spines on lumbar image segmentation and surgical path planning, a two-stage approach for localising lumbar segments is proposed. First, based on multi-scale feature fusion technology, a non-linear regression method is used to achieve accurate localisation of the overall spatial region of the lumbar spine, effectively eliminating useless background information such as soft tissues. In the second stage, precise positioning of each segment within the lumbar spine region is achieved directly, again based on the non-linear regression method, effectively eliminating the interference caused by the adjacent spines. The 3D Intersection over Union (3D_IOU) is used as the main evaluation indicator for positioning accuracy. On an open dataset, 3D_IOU values of 0.8339 ± 0.0990 and 0.8559 ± 0.0332 are achieved in the first and second stages, respectively. In addition, the average time required by the proposed method in the two stages is 0.3274 s and 0.2105 s, respectively. The proposed method therefore performs very well in terms of both precision and speed and can effectively improve the accuracy of lumbar image segmentation and the effectiveness of surgical path planning.
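For axis-aligned bounding boxes, the 3D_IOU metric used above is the intersection volume over the union volume. A minimal sketch (the paper's boxes and conventions may differ):

```python
import numpy as np

def iou_3d(box_a, box_b):
    """3D IoU of two axis-aligned boxes given as (x1, y1, z1, x2, y2, z2)."""
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    lo = np.maximum(a[:3], b[:3])                 # intersection lower corner
    hi = np.minimum(a[3:], b[3:])                 # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0.0, None))  # 0 if boxes are disjoint
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return float(inter / (vol_a + vol_b - inter))

# Two 2x2x2 boxes overlapping in half of each along x: IoU = 4 / 12 = 1/3.
print(iou_3d((0, 0, 0, 2, 2, 2), (1, 0, 0, 3, 2, 2)))
```

Values such as 0.8559 thus mean the predicted segment box and the ground-truth box share about 86% of their combined volume.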
Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies need to register the images before fusion because two cameras are used, and the performance of registration technology still needs improvement. Hence, a novel integrated multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam splitter prism, the coaxial light incident from the same lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the process of signal acquisition and fusion. A simulation experiment covering the entire process of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
A novel multi-focus polychromatic image fusion algorithm based on filtering in the frequency domain using the fast Fourier transform (FFT) and synthesis in the space domain (FFDSSD) is presented in this paper. First, the original multi-focus images are transformed into their frequency data by the FFT for easy and accurate clarity determination. Then a Gaussian low-pass filter is used to filter out the high-frequency information corresponding to the image saliencies. After an inverse FFT, the filtered images are obtained. The deviation between the filtered images and the original ones, representing the clarity of the image, is used to select the pixels from the multi-focus images to reconstruct a completely focused image. These operations in the space domain preserve the original information as much as possible and are relatively insensitive to misregistration compared with transform-domain methods. Polychromatic noise is carefully considered and successfully avoided while the information in the different chromatic channels is preserved. A natural, nice-looking fused microscopic image for human visual evaluation is obtained in a dedicated experiment. The experimental results indicate that the proposed algorithm performs well in terms of both objective quality metrics and runtime efficiency.
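The filter-then-deviate pipeline above can be sketched in a few lines: low-pass the image via an FFT-domain Gaussian mask, then score clarity as the energy removed by the filter. A single-channel sketch with hypothetical parameters; the paper applies this per chromatic channel and then selects pixels between the source images.

```python
import numpy as np

def lowpass_gaussian_fft(img, sigma):
    """Suppress high frequencies with a centered Gaussian mask in the FFT domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def clarity(img, sigma=2.0):
    """Deviation between the image and its low-pass version: the
    high-frequency energy the filter removed, used as a focus proxy."""
    return float(((img - lowpass_gaussian_fft(img, sigma)) ** 2).sum())

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))                # textured, in-focus patch
smooth = np.full((32, 32), sharp.mean())    # defocus-like flat patch
print(clarity(sharp) > clarity(smooth))     # True
```

Per-pixel (or per-block) comparison of this deviation across the multi-focus sources drives the pixel selection that reconstructs the all-in-focus image.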
Funding: Ministry of Education, Youth and Sports of the Czech Republic (Grant Nos. SP2023/039, SP2023/042); the European Union under the REFRESH project (Grant No. CZ.10.03.01/00/22_003/0000048).
Funding: Supported by the National Natural Science Foundation of China (41171336) and the Project of the Jiangsu Province Agricultural Science and Technology Innovation Fund (CX12-3054).
Abstract: Because of cloudy and rainy weather in south China, optical remote sensing images often cannot be obtained easily. Based on regional trial results in Baoying, Jiangsu province, this paper explores the fusion model and effect of ENVISAT/SAR and HJ-1A satellite multispectral remote sensing images. Following the ARSIS strategy, and using the wavelet transform together with the Interaction between the Band Structure Model (IBSM), the ENVISAT SAR and HJ-1A CCD images were wavelet-decomposed, their low- and high-frequency coefficients were reconstructed, and the fusion images were obtained through the inverse wavelet transform. Since the low- and high-frequency images have different characteristics in different areas, different fusion rules that enhance the self-adaptivity of the integration process were adopted, and the results were compared with the PCA transform, the IHS transform, and other traditional methods by subjective and corresponding quantitative evaluation. Furthermore, the bands and NDVI values around the fusion area were extracted with GPS samples to analyse and explain the fusion effect. The results showed that the spectral distortion of the wavelet-fused, IHS-transformed, and PCA-transformed images was 0.1016, 0.3261, and 1.2772, respectively, and the entropy was 14.7015, 11.8993, and 13.2293, respectively, with the wavelet fusion scoring highest. The wavelet method maintained good spectral fidelity and visual effects while improving the spatial resolution, and its information interpretation effect was much better than that of the other two methods.
Funding: Sponsored by the National Natural Science Foundation of China (Grant Nos. 61275010 and 61201237) and the Fundamental Research Funds for the Central Universities (Grant Nos. HEUCFZ1129 and HEUCF120805).
Abstract: In image fusion, measuring the local character and clarity of an image is called activity measurement. Traditionally, this measurement is determined only by the high-frequency detail coefficients, which makes the energy expression insufficient to reflect local clarity. Therefore, a novel construction method for the activity measurement is proposed in this paper. Firstly, the source images are decomposed by the wavelet transform, and the high- and low-frequency wavelet coefficients are then utilized jointly, with the normalized variance taken as the weight of the high-frequency energy. Secondly, the measurement is calculated from this weighted energy, which can be used to measure the local character. Finally, the fusion coefficients are obtained. To illustrate the superiority of the new method, three assessment indicators are provided. The experimental results show that, compared with the traditional methods, the new method weakens blurring and improves the indicator values. Therefore, it has considerable advantages for practical application.
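The weighting idea above can be sketched as follows: the energy of a window of high-frequency coefficients is scaled by the normalized variance of the corresponding low-frequency window, so flat regions score lower than genuinely detailed ones. The window contents and the exact normalization are illustrative assumptions, not the paper's precise scheme:

```python
# Hedged sketch of a variance-weighted activity measure.
def normalized_variance(window):
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return var / (mean * mean + 1e-12)  # normalise by the squared mean

def activity_measure(high_coeffs, low_coeffs):
    # weighted energy: the normalized variance of the low-frequency
    # window scales the energy of the high-frequency detail coefficients
    weight = normalized_variance(low_coeffs)
    energy = sum(c * c for c in high_coeffs)
    return weight * energy
```

A flat low-frequency window (zero variance) yields zero activity regardless of the detail energy, which is how the measure avoids over-trusting noise in smooth regions.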
Abstract: To overcome the shortcomings of the 1D and 2D Otsu thresholding techniques, the 3D Otsu method has been developed. Among all Otsu methods, the 3D Otsu technique provides the best threshold values for multi-level thresholding processes. In this paper, a simple and effective multilevel thresholding method is introduced to improve the quality of segmented images. The proposed approach focuses on preserving edge detail by combining the 3D Otsu method with a fusion scheme. The advantages of the presented scheme include higher-quality outcomes, better preservation of tiny details and boundaries, and reduced execution time as the number of threshold levels rises. The fusion approach depends on the differences between pixel intensity values within a small local region of an image; it aims to improve localized information after the thresholding process. Fusing images based on local contrast can improve segmentation performance by minimizing the loss of local contrast, the loss of details, and changes to the gray-level distribution. Results show that the proposed method yields more promising segmentation results than the conventional 1D Otsu, 2D Otsu, and 3D Otsu methods, as evident from the objective and subjective evaluations.
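For reference, the classical 1D Otsu method that the 2D and 3D variants extend chooses the threshold maximising the between-class variance of the gray-level histogram. A self-contained sketch (single threshold only, not the paper's multilevel 3D scheme):

```python
# Classical 1D Otsu threshold selection over a gray-level histogram.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity sum so far
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0                 # background mean
        mu1 = (sum_all - sum0) / w1     # foreground mean
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

if __name__ == "__main__":
    # bimodal toy data: dark cluster near 10, bright cluster near 200
    img = [10, 12, 11, 9, 200, 205, 198, 202]
    print(otsu_threshold(img))
```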
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61402368), the Aerospace Support Fund, China (Grant No. 2017-HT-XGD), and the Aerospace Science and Technology Innovation Foundation, China (Grant No. 2017 ZD 53047).
Abstract: The high-frequency components in the traditional multi-scale transform method are approximately sparse and can represent different detail information. In the low-frequency component, however, very few coefficients lie near zero, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, so directly fusing it is not conducive to a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detail information at multiple scales and from diverse directions. Combining the two methods is conducive to acquiring more characteristics and more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and different fusion rules are then applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detailed information. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared-target fusion results than the traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and its time consumption is significantly reduced.
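The textbook operator the paper's modified transform builds on is the white top-hat: the image minus its morphological opening, which isolates bright features narrower than the structuring element. A 1-D sketch with a flat length-3 element (the paper works on 2-D low-frequency components; this is only the baseline operator):

```python
# Grayscale white top-hat on a 1-D signal: signal minus its opening.
def erode(sig, size=3):
    r = size // 2
    n = len(sig)
    return [min(sig[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def dilate(sig, size=3):
    r = size // 2
    n = len(sig)
    return [max(sig[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def white_top_hat(sig, size=3):
    opened = dilate(erode(sig, size), size)  # opening = erosion then dilation
    return [s - o for s, o in zip(sig, opened)]

if __name__ == "__main__":
    # a narrow bright peak on a flat background survives the top-hat,
    # while broad flat structure is removed
    print(white_top_hat([0, 0, 0, 5, 0, 0, 0]))
```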
Abstract: A new method based on a resolution degradation model is proposed to improve both the spatial and spectral quality of synthetic images. ETM+ panchromatic and multispectral images are used to assess the new method. Its spatial and spectral effects are evaluated by qualitative and quantitative measures, and the results are compared with those of IHS, PCA, Brovey, OWT (Orthogonal Wavelet Transform), and RWT (Redundant Wavelet Transform). The results show that the new method keeps almost the same spatial resolution as the panchromatic images, and its spectral effect is as good as those of the wavelet-based methods.
Abstract: Image fusion has developed into an important area of research. In remote sensing, using the same image sensor in different working modes, or different image sensors, can provide reinforcing or complementary information. Therefore, it is highly valuable to fuse the outputs of multiple sensors (or of the same sensor in different working modes) to improve the overall quality of the remote images, which is very useful for human visual perception and image processing tasks. Accordingly, this paper first provides a comprehensive survey of the state of the art of multi-sensor image fusion methods in terms of three aspects: pixel-level fusion, feature-level fusion, and decision-level fusion. An overview of existing fusion strategies is then introduced, after which the existing fusion quality measures are summarized. Finally, the review analyzes development trends in fusion algorithms that may attract researchers to further explore this field.
Abstract: Geological data are stored in vector format in geographical information systems (GIS), while other data such as remote sensing images, geographical data, and geochemical data are saved in raster format. This paper converts the vector data into 8-bit images programmatically, each weighted according to its importance to mineralization, so that the geological meaning can be conveyed through the raster images. The paper also fuses geographical and geochemical data with the programmed strata data. The results show that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.
Funding: Supported by the Fundamental Research Funds for the Central Universities (No. 2018CUCTJ081).
Abstract: Considering that no single full-reference image quality assessment (IQA) method gives the best performance in all situations, several multi-method fusion metrics have been proposed. Machine learning techniques are often involved in such multi-method fusion metrics so that their output is more consistent with human visual perception. On the other hand, the robustness and generalization ability of these metrics are questionable because of the scarcity of images with mean opinion scores. To comprehensively validate whether the generalization ability of such multi-method fusion IQA metrics is satisfactory, we construct a new image database containing up to 60 reference images. The newly built database is then used to test the generalization ability of different multi-method fusion IQA metrics. A cross-database validation experiment indicates that, on our new database, the performance of all the multi-method fusion IQA metrics is not statistically significantly different from that of some single-method IQA metrics such as FSIM and MAD. Finally, a thorough analysis is given of why the performance of the multi-method fusion IQA framework drops significantly in cross-database validation.
Funding: The National Natural Science Foundation of China under contract No. 61671481; the Qingdao Applied Fundamental Research Program under contract No. 16-5-1-11-jch; the Fundamental Research Funds for the Central Universities under contract No. 18CX05014A.
Abstract: We present a novel sea-ice classification framework based on locality-preserving fusion of multi-source image information. The locality preservation is two-fold, i.e., local characterization in both the spatial and feature domains. We commence by simultaneously learning a projection matrix, which preserves spatial localities, and a similarity matrix, which encodes feature similarities. We map the pixels of the multi-source images by the projection matrix to a set of fusion vectors that preserve the spatial localities of the image. On the other hand, by applying the Laplacian eigen-decomposition to the similarity matrix, we obtain another set of fusion vectors that preserve the local feature similarities. We concatenate the fusion vectors for both spatial and feature locality preservation and obtain the fusion image. Finally, we classify the fusion image pixels by a novel sliding ensemble strategy, which enhances locality preservation in classification. Our locality-preserving fusion framework is effective in classifying multi-source sea-ice images (e.g., multi-spectral and synthetic aperture radar (SAR) images) because it not only comprehensively captures the spatial neighboring relationships but also intrinsically characterizes the feature associations between different types of sea ice. Experimental evaluations validate the effectiveness of our framework.
Abstract: Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. To retain useful information and obtain more reliable results, a novel medical image fusion algorithm based on pulse-coupled neural networks (PCNN) and multi-feature fuzzy clustering is proposed, which makes use of multiple image features and combines the advantages of PCNN driven by the local entropy and by the variance of the local entropy. Experimental results indicate that the proposed method preserves image details better, is more robust, and improves the visual effect of the fused image significantly compared with the other fusion methods, with less information distortion.
Abstract: With the continuous advancement of imaging sensors, a host of new issues have emerged. A major problem is how to find focus areas more accurately for multi-focus image fusion. Multi-focus image fusion extracts the focused information from the source images to construct a globally in-focus image that includes more information than any of the source images. In this paper, a novel multi-focus image fusion method based on the Laplacian operator and region optimization is proposed. Evaluating image saliency with the Laplacian operator makes it easy to distinguish in-focus from out-of-focus regions, and the decision map obtained by Laplacian processing contains less residual information than those of other methods. To obtain a precise decision map, focus-area and edge optimization based on regional connectivity and edge detection are applied. Finally, the original images are fused according to the decision map. Experimental results indicate that the proposed algorithm outperforms a series of other algorithms in terms of both subjective and objective evaluations.
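The core idea of the Laplacian-based decision map can be sketched as follows: the discrete Laplacian responds strongly where an image is sharp, so for each pixel we keep the source whose absolute Laplacian response is larger. Border handling and the region-optimisation step described above are omitted (this is a sketch, not the paper's full method):

```python
# Laplacian focus measure and a per-pixel decision map for two sources.
def laplacian(img, y, x):
    # 4-neighbour discrete Laplacian at an interior pixel
    return (img[y - 1][x] + img[y + 1][x] + img[y][x - 1] + img[y][x + 1]
            - 4 * img[y][x])

def decision_map(img_a, img_b):
    h, w = len(img_a), len(img_a[0])
    # 1 -> take the pixel from img_a, 0 -> from img_b (borders default to 0)
    dm = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if abs(laplacian(img_a, y, x)) >= abs(laplacian(img_b, y, x)):
                dm[y][x] = 1
    return dm
```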
Abstract: The two key points of pixel-level multi-focus image fusion are the clarity measure and the pixel-coefficient fusion rule. Along with different improvements on these two points, various fusion schemes have been proposed in the literature. However, the traditional clarity measures are not designed for compressive imaging measurements, which are maps of the source scene obtained with a random or near-random measurement matrix. This paper presents a novel, efficient multi-focus image fusion framework for a compressive imaging sensor network. Here, the clarity measure of the raw compressive measurements is obtained not from the random sampling data itself but from selected Hadamard coefficients, which can also be acquired efficiently from a compressive imaging system. Then, the compressive measurements of the different images are fused by a selection fusion rule. Finally, block-based CS coupled with iterative projection-based reconstruction is used to recover the fused image. Experimental results on commonly used test data demonstrate the effectiveness of the proposed method.
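The Hadamard coefficients mentioned above can be computed in O(n log n) with the fast Walsh–Hadamard transform. A minimal sketch of the transform itself (the paper's coefficient selection and clarity measure are not shown):

```python
# In-place fast Walsh-Hadamard transform; the input length must be a
# power of two. Applying it twice returns n times the original vector.
def fwht(vec):
    a = list(vec)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly step
        h *= 2
    return a
```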
Abstract: Because complex algorithms make it difficult to achieve real-time processing of multi-band image fusion with large amounts of data, a real-time image fusion system based on an FPGA and multiple DSPs is designed. Five-band image acquisition, image registration, image fusion, and display output are all performed within the system, which uses the FPGA as the main processor and three DSPs as algorithm processors. The system makes full use of the flexible, high-speed characteristics of the FPGA, and an image fusion algorithm based on the multi-wavelet transform is optimized and applied to it. The final experimental results show that the system can process the five-band images at a frame rate of 15 Hz, with a resolution of 1392 × 1040, completing the processing within 41 ms.
Funding: The National Natural Science Foundation of China (Grant No. 10477007), the Natural Science Foundation of Hubei Province (Grant No. 2006ABA015), and the Key Project of the Hubei Provincial Department of Education (Grant No. D200510004).
Abstract: A construction method for a two-channel non-separable wavelet filter bank whose dilation matrix is [1, 1; 1, -1], and its application to the fusion of multi-spectral images, are presented. Many 4×4 filter banks are designed, and a multi-spectral image fusion algorithm based on this kind of wavelet is proposed. Using this filter bank, multi-resolution wavelet decomposition of the intensity of the multi-spectral image and of the panchromatic image is performed, and the two low-frequency components of the intensity and the panchromatic image are merged using a tradeoff parameter. The experimental results show that this method preserves spectral quality and high-spatial-resolution information well, performing better in both respects than the fusion method based on DWFT and IHS. When the parameter t is close to 1, the fused image obtains rich spectral information from the original MS image. The amount of computation is reduced to only half that of the fusion method based on a four-channel wavelet transform.
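The tradeoff-parameter merge of the two low-frequency bands can be written as L = t·L_ms + (1−t)·L_pan. The sketch below uses a one-level 1-D Haar decomposition as a simple stand-in for the paper's 2-D non-separable filter bank (an assumption for illustration, not the actual filters):

```python
# One-level 1-D Haar decomposition and the tradeoff merge of two
# low-frequency bands: L = t * L_ms + (1 - t) * L_pan.
def haar_decompose(sig):
    low = [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig), 2)]
    high = [(sig[i] - sig[i + 1]) / 2 for i in range(0, len(sig), 2)]
    return low, high

def merge_low(low_ms, low_pan, t):
    # t close to 1 keeps more spectral (MS) information,
    # t close to 0 keeps more spatial (PAN) information
    return [t * m + (1 - t) * p for m, p in zip(low_ms, low_pan)]
```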
Funding: Supported by the National Key Research and Development Program of China (No. 2020YFA0714103).
Abstract: Fusing hyperspectral remote sensing images (HSI) with multispectral remote sensing images (MSI) improves data resolution. However, current fusion algorithms focus on local information and overlook long-range dependencies, and network parameter tuning prioritizes global optimization while neglecting spatial and spectral constraints, limiting spatial and spectral reconstruction capabilities. This study introduces SwinGAN, a fusion network combining Swin Transformer, CNN, and GAN architectures. SwinGAN's generator employs a detail-injection framework to separately extract HSI and MSI features and fuses them to generate spatial residuals. These residuals are injected into the upsampled HSI to produce the final image, while a pure CNN architecture acts as the discriminator, enhancing the fusion quality. Additionally, we introduce a new adaptive loss function that improves image fusion accuracy: L1 loss serves as the content loss, and spatial and spectral gradient loss functions are introduced to improve the spatial representation and spectral fidelity of the fused images. Our experimental results on several datasets demonstrate that SwinGAN outperforms current popular algorithms in both spatial and spectral reconstruction capabilities. The ablation experiments also demonstrate the rationality of the various components of the proposed loss function.
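A simplified, framework-free sketch of the loss structure described above: an L1 content term plus a spatial-gradient term that penalises differences between adjacent-pixel gradients. The weight and the omission of the spectral term are illustrative assumptions, not SwinGAN's exact formulation:

```python
# L1 content loss plus a crude 1-D spatial-gradient loss.
def l1_loss(pred, target):
    n = len(pred)
    return sum(abs(p - t) for p, t in zip(pred, target)) / n

def gradient_loss(pred, target):
    # compare adjacent-pixel differences (a simple spatial gradient)
    gp = [pred[i + 1] - pred[i] for i in range(len(pred) - 1)]
    gt = [target[i + 1] - target[i] for i in range(len(target) - 1)]
    return l1_loss(gp, gt)

def fusion_loss(pred, target, w_grad=0.5):
    # total loss = content term + weighted gradient term
    return l1_loss(pred, target) + w_grad * gradient_loss(pred, target)
```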
Funding: Original Innovation Joint Fund (L202010) and the National Key Research and Development Program of China (Grant/Award Number: 2018YFB1307604).
Abstract: To eliminate unnecessary background information, such as soft tissue in original CT images, and the adverse impact of the similarity of adjacent spines on lumbar image segmentation and surgical path planning, a two-stage approach for localising lumbar segments is proposed. First, based on multi-scale feature fusion, a non-linear regression method is used to accurately localise the overall spatial region of the lumbar spine, effectively eliminating useless background information such as soft tissue. In the second stage, precise positioning of each segment within the lumbar spine region is achieved directly, again based on non-linear regression, thus effectively eliminating the interference caused by adjacent spines. The 3D Intersection over Union (3D_IOU) is used as the main indicator of positioning accuracy. On an open dataset, 3D_IOU values of 0.8339±0.0990 and 0.8559±0.0332 are achieved in the first and second stages, respectively, and the average times required by the proposed method in the two stages are 0.3274 s and 0.2105 s, respectively. The proposed method therefore performs very well in terms of both precision and speed and can effectively improve the accuracy of lumbar image segmentation and the quality of surgical path planning.
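The 3D_IOU metric used above reduces, for axis-aligned boxes, to the intersection volume over the union volume. A minimal sketch with boxes given as (x1, y1, z1, x2, y2, z2) corner pairs (the box representation is an assumption for illustration):

```python
# 3D Intersection over Union for two axis-aligned boxes, each given
# as (x1, y1, z1, x2, y2, z2) with min corner before max corner.
def iou_3d(a, b):
    dx = max(0.0, min(a[3], b[3]) - max(a[0], b[0]))
    dy = max(0.0, min(a[4], b[4]) - max(a[1], b[1]))
    dz = max(0.0, min(a[5], b[5]) - max(a[2], b[2]))
    inter = dx * dy * dz
    vol_a = (a[3] - a[0]) * (a[4] - a[1]) * (a[5] - a[2])
    vol_b = (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    union = vol_a + vol_b - inter
    return inter / union if union > 0 else 0.0
```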
Funding: This study is supported by the National Natural Science Foundation of China (Grant No. 51274150) and the Shanxi Province Natural Science Foundation of China (Grant No. 201601 D011059).
Abstract: Infrared and visible light image fusion has been a hot spot in multi-sensor fusion research in recent years. Because they use two cameras, existing infrared and visible light fusion technologies require image registration before fusion, yet the practical effectiveness of registration techniques still needs improvement. Hence, a novel integrated multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitter prism, the coaxial light entering a single lens is projected onto an infrared charge-coupled device (CCD) and a visible-light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied together with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and quality evaluation indices are adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Abstract: A novel multi-focus polychromatic image fusion algorithm based on filtering in the frequency domain using the fast Fourier transform (FFT) and synthesis in the space domain (FFDSSD) is presented in this paper. First, the original multi-focus images are transformed into frequency data by the FFT for easy and accurate clarity determination. A Gaussian low-pass filter is then used to remove the high-frequency information corresponding to the image saliencies, and the filtered images are obtained by an inverse FFT. The deviation between the filtered images and the original ones, which represents the clarity of the image, is used to select the pixels from the multi-focus images to reconstruct a completely focused image. These space-domain operations preserve the original information as much as possible and are relatively insensitive to misregistration compared with transform-domain methods. Polychromatic noise is carefully considered and successfully avoided, while the information in the different chromatic channels is preserved. A natural, nice-looking fused microscopic image for human visual evaluation is obtained in a dedicated experiment. The experimental results indicate that the proposed algorithm performs well in objective quality metrics and runtime efficiency.