Mangroves are indispensable to coastlines, maintaining biodiversity and mitigating climate change. Therefore, improving the accuracy of mangrove information identification is crucial for their ecological protection. Aiming at the limited morphological information of synthetic aperture radar (SAR) images, which are strongly affected by noise, and the susceptibility of optical images to weather and lighting conditions, this paper proposes a pixel-level weighted fusion method for SAR and optical images. Image fusion enhances the target features and makes mangrove monitoring more comprehensive and accurate. To address the high similarity between mangrove forests and other forests, this paper builds on the U-Net convolutional neural network and adds an attention mechanism in the feature extraction stage so that the model pays more attention to mangrove vegetation areas in the image. To accelerate convergence and normalize the input, a batch normalization (BN) layer and a Dropout layer are added after each convolutional layer. Since mangroves are a minority class in the image, an improved cross-entropy loss function is introduced to improve the model's ability to recognize mangroves. The AttU-Net model for mangrove recognition in high-similarity environments is thus constructed on the fused images. Comparison experiments show that the overall accuracy of the improved U-Net model trained on the fused images is significantly improved. Based on the fused images, the recognition results of the proposed AttU-Net model are compared with its benchmark model, U-Net, and with the Dense-Net, Res-Net, and Seg-Net methods. The AttU-Net model captured the complex structures and textural features of mangroves in images more effectively. The average OA, F1-score, and Kappa coefficient in the four tested regions were 94.406%, 90.006%, and 84.045%, respectively, significantly higher than the other methods. This method can provide technical support for the monitoring and protection of mangrove ecosystems.
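The abstract does not specify the fusion weights or the exact form of the improved loss; the sketch below illustrates the two named ingredients, pixel-level weighted fusion and a class-weighted cross-entropy, with assumed weight values:

```python
import numpy as np

def weighted_fusion(sar, opt, w_sar=0.4, w_opt=0.6):
    """Pixel-level weighted fusion of co-registered SAR and optical bands.
    The weights here are assumed values, not the paper's."""
    return w_sar * sar.astype(np.float32) + w_opt * opt.astype(np.float32)

def class_weighted_ce(probs, labels, w_pos=4.0, w_neg=1.0, eps=1e-7):
    """Binary cross-entropy that up-weights the minority (mangrove) class,
    one common form of an 'improved' CE loss; the paper's exact variant
    may differ."""
    probs = np.clip(probs, eps, 1.0 - eps)
    loss = -(w_pos * labels * np.log(probs)
             + w_neg * (1 - labels) * np.log(1 - probs))
    return loss.mean()
```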
We present a novel sea-ice classification framework based on locality preserving fusion of multi-source image information. The locality preserving fusion is two-fold, i.e., local characterization in both the spatial and feature domains. We commence by simultaneously learning a projection matrix, which preserves spatial localities, and a similarity matrix, which encodes feature similarities. We map the pixels of the multi-source images by the projection matrix to a set of fusion vectors that preserve the spatial localities of the image. On the other hand, by applying Laplacian eigen-decomposition to the similarity matrix, we obtain another set of fusion vectors that preserve the local feature similarities. We concatenate the fusion vectors for both spatial and feature locality preservation to obtain the fusion image. Finally, we classify the fusion image pixels by a novel sliding ensemble strategy, which enhances locality preservation in classification. Our locality preserving fusion framework is effective in classifying multi-source sea-ice images (e.g., multi-spectral and synthetic aperture radar (SAR) images) because it not only comprehensively captures the spatial neighboring relationships but also intrinsically characterizes the feature associations between different types of sea ice. Experimental evaluations validate the effectiveness of our framework.
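The jointly learned similarity matrix is the paper's contribution; as a rough illustration of the feature-locality half of the pipeline, the sketch below builds a Gaussian similarity matrix and takes Laplacian eigenvectors as fusion vectors (sigma and k are assumed parameters):

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_fusion_vectors(features, k=3, sigma=1.0):
    """Fusion vectors from the Laplacian eigen-decomposition of a Gaussian
    similarity matrix, in the spirit of the abstract; the paper learns the
    similarity matrix jointly, which this sketch does not.
    features: (n_pixels, n_channels) stacked multi-source pixel values."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))   # similarity matrix
    L = np.diag(W.sum(1)) - W              # graph Laplacian
    vals, vecs = eigh(L)                   # eigenvalues in ascending order
    return vecs[:, 1:k + 1]                # skip the trivial first eigenvector
```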
When employing penetration ammunition to strike multi-story buildings, detection methods using acceleration sensors suffer from signal aliasing, while magnetic detection methods are susceptible to interference from ferromagnetic materials, posing challenges in accurately determining the number of layers. To address this issue, this research proposes a layer counting method for penetration fuzes that incorporates multi-source information fusion, utilizing both a temporal convolutional network (TCN) and a long short-term memory (LSTM) recurrent network. By leveraging the strengths of these two network structures, the method extracts temporal and high-dimensional features from the multi-source physical field during the penetration process, establishing a relationship between the multi-source physical field and the distance between the fuze and the target plate. A simulation model is developed to simulate the overload and magnetic field of a projectile penetrating multiple layers of target plates, capturing the multi-source physical field signals and their patterns during the penetration process. The analysis reveals that the proposed multi-source fusion layer counting method reduces errors by 60% and 50% compared to single overload layer counting and single magnetic anomaly signal layer counting, respectively. The model's predictive performance is evaluated under various operating conditions, including different ratios of noise added at random sample positions, penetration speeds, and spacings between target plates. The maximum errors in fuze penetration time predicted by the three modes are 0.08 ms, 0.12 ms, and 0.16 ms, respectively, confirming the robustness of the proposed model. Moreover, the model's predictions indicate that the fitting degree for large interlayer spacings is superior to that for small interlayer spacings due to the influence of stress waves.
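The abstract does not publish the architecture details; a minimal sketch of how a TCN front end can feed an LSTM to regress the fuze-to-target distance, with illustrative layer sizes that are assumptions, not the paper's:

```python
import torch
import torch.nn as nn

class TCNLSTMCounter(nn.Module):
    """Sketch of a TCN+LSTM regressor mapping multi-source physical-field
    signals (e.g., overload and magnetic channels) to fuze-to-target
    distance. All sizes are illustrative assumptions."""
    def __init__(self, in_ch=2, tcn_ch=32, hidden=64):
        super().__init__()
        self.tcn = nn.Sequential(  # dilated 1-D convolutions, TCN-style
            nn.Conv1d(in_ch, tcn_ch, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(tcn_ch, tcn_ch, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.lstm = nn.LSTM(tcn_ch, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.tcn(x).transpose(1, 2)        # (batch, time, tcn_ch)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])           # distance estimate
```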
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods still fall short despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics, and it requires precise, careful, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. DL models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
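A sketch of the feature-level fusion described above, concatenating VGG16 and ResNet50 features before a softmax head (the torchvision weights and head size are assumptions; the convolutional deep belief network branch is omitted):

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    """Sketch of deep-feature fusion: concatenate VGG16 and ResNet50
    features and classify with a linear head (softmax applied via the
    cross-entropy loss). Head size and ImageNet weights are assumptions."""
    def __init__(self, n_classes=4):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1")
        res = models.resnet50(weights="IMAGENET1K_V1")
        self.vgg_feat = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))
        self.res_feat = nn.Sequential(*list(res.children())[:-1])
        self.head = nn.Linear(512 + 2048, n_classes)  # fused feature vector

    def forward(self, x):
        f1 = self.vgg_feat(x).flatten(1)   # (batch, 512)
        f2 = self.res_feat(x).flatten(1)   # (batch, 2048)
        return self.head(torch.cat([f1, f2], dim=1))
```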
In recent years, guided image fusion algorithms have become increasingly popular. However, current algorithms cannot eliminate halo artifacts. We propose an image fusion algorithm based on a fast weighted guided filter. First, the source images are separated into a series of high- and low-frequency components. Second, three visual features of the source image are extracted to construct a decision graph model. Third, a fast weighted guided filter is introduced to optimize the result obtained in the previous step and to reduce the time complexity by considering the correlation among neighboring pixels. Finally, the image obtained in the previous step is combined with the weight map to realize the image fusion. The proposed algorithm is applied to multi-focus, visible-infrared, and multi-modal images, and the final results show that the algorithm effectively eliminates the halo artifacts of the merged images with higher efficiency, and outperforms traditional methods in both subjective visual quality and objective evaluation.
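The fast weighted guided filter is the paper's contribution; for reference, a minimal plain guided filter, whose local linear model a = cov(I,p)/(var(I)+eps), b = mean(p) - a*mean(I) the weighted variant builds on (edge-aware weights and subsampling are omitted here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Plain guided filter. I: guidance image, p: input image, both 2-D
    float arrays in [0, 1]; r: box-filter radius. The paper's fast
    weighted variant adds per-pixel weights, which this sketch omits."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)       # local linear coefficient
    b = mp - a * mI
    return mean(a) * I + mean(b)     # filtered output q
```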
Remote sensing image fusion is an effective way to use the large volume of data from multi-source images. This paper introduces a new method of remote sensing image fusion based on the support vector machine (SVM), using high spatial resolution SPIN-2 data and multi-spectral SPOT-4 remote sensing data. First, the new method is established by building a model of remote sensing image fusion based on SVM. Then, image classification fusion is tested using the SPIN-2 and SPOT-4 data. Finally, the fusion result is evaluated in two ways. (1) In the subjective assessment, the spatial resolution of the fused image is improved compared to SPOT-4, and the texture of the fused image is clearly distinctive. (2) In the quantitative analysis, the classification fusion performs better. As a whole, the results show that the accuracy of image fusion based on SVM is high and that the SVM algorithm can be recommended for application in remote sensing image fusion processes.
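A minimal sketch of the classification-fusion idea, stacking co-registered panchromatic and multi-spectral bands into per-pixel feature vectors for an SVM (the band counts, kernel, and stand-in arrays are assumptions, not the SPIN-2/SPOT-4 data):

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in arrays for co-registered imagery; real inputs would be the
# SPIN-2 panchromatic band and the SPOT-4 multi-spectral bands.
pan = np.random.rand(100, 100)            # high-resolution band
ms = np.random.rand(100, 100, 4)          # multi-spectral bands
X = np.dstack([pan, ms]).reshape(-1, 5)   # per-pixel fused feature vector
y = np.random.randint(0, 3, X.shape[0])   # stand-in training labels

clf = SVC(kernel="rbf").fit(X[:2000], y[:2000])   # train on a subset
labels = clf.predict(X).reshape(100, 100)         # classified fusion map
```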
Study of the evaluation system for multi-source image fusion is an important and necessary part of image fusion. Qualitative and quantitative evaluation indexes were studied. A series of new concepts, such as the independent single evaluation index, the union single evaluation index, and the synthetic evaluation index, were proposed. Based on these concepts, a synthetic evaluation system for digital image fusion was formed. Experiments with the wavelet fusion method, applied to fuse multi-spectral and panchromatic remote sensing images, IR and visible images, CT and MRI images, and multi-focus images, show that the system is an objective, uniform, and effective quantitative method for image fusion evaluation.
This paper aims at providing multi-source remote sensing images registered in geometric space for image fusion. Focusing on the characteristics and differences of multi-source remote sensing images, a feature-based registration algorithm is implemented. The key technologies include an image scale-space for implementing multi-scale properties, Harris corner detection for keypoint extraction, and the partial intensity invariant feature descriptor (PIIFD) for keypoint description. Eventually, a multi-scale Harris-PIIFD image registration framework is proposed. Experimental results on fifteen sets of representative real data show that the algorithm has excellent, stable performance in multi-source remote sensing image registration and can achieve accurate spatial alignment, demonstrating strong practical value and a degree of generalization ability.
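A sketch of the detection stage only, Harris corners over a Gaussian scale-space (the PIIFD description step is omitted; the scale set and threshold are assumed values):

```python
import cv2
import numpy as np

def multiscale_harris(gray, scales=(1.0, 2.0, 4.0), thresh=0.01):
    """Harris keypoints across a Gaussian scale-space, the detection stage
    of a Harris-PIIFD pipeline. gray: 2-D single-channel image."""
    pts = []
    for s in scales:
        blur = cv2.GaussianBlur(gray, (0, 0), s)        # scale-space level
        resp = cv2.cornerHarris(np.float32(blur), 2, 3, 0.04)
        ys, xs = np.where(resp > thresh * resp.max())   # strong responses
        pts += [(x, y, s) for x, y in zip(xs, ys)]
    return pts
```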
The automatic registration of multi-source remote sensing images (RSI) is currently a research hotspot in remote sensing image preprocessing. A dedicated automatic image registration module named Image Autosync has been embedded in ERDAS IMAGINE software version 9.0 and above. The registration accuracy of the module is verified for remote sensing images obtained from different platforms or with different spatial resolutions. Four registration experiments are discussed in this article to analyze the accuracy differences based on remote sensing data with different spatial resolutions. The factors inducing the differences in registration accuracy are also analyzed.
Geological data are constructed in vector format in a geographical information system (GIS), while other data such as remote sensing images, geographical data, and geochemical data are saved in raster format. This paper converts the vector data into 8-bit images by programming, according to the importance of each layer to mineralization. The geological meaning can thus be conveyed through the raster images. The paper also fuses geographical and geochemical data with the programmed strata data. The result shows that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.
Image fusion based on sparse representation (SR) has become the primary research direction of transform-domain methods. However, SR-based image fusion algorithms suffer from high computational complexity and neglect the local features of an image, resulting in limited detail retention and high sensitivity to registration misalignment. To overcome these shortcomings and the noise introduced during the fusion process, this paper proposes a new signal decomposition model, namely a multi-source image fusion algorithm based on gradient-regularized convolutional sparse representation (CSR). The main innovation of this work is using a sparse optimization function to perform a two-scale decomposition of the source images into high-frequency and low-frequency components. The sparse coefficients are obtained by the gradient-regularized CSR model, and the optimal high-frequency component of the fused image is obtained by taking the maximum sparse coefficients. The best low-frequency component is obtained by a fusion strategy using the extreme or average value. The final fused image is obtained by adding the two optimal components. Experimental results demonstrate that this method greatly improves the ability to preserve image details and reduces sensitivity to image registration.
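As a simplified illustration of the two-scale max/average fusion rules, the sketch below substitutes a plain box-filter decomposition for the gradient-regularized CSR coefficients, which the paper actually uses:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_fuse(a, b, size=31):
    """Simplified two-scale fusion of two source images: low-frequency
    base layers are averaged, high-frequency detail layers are fused by
    absolute-maximum selection. The detail coefficients here come from a
    box filter, not the paper's gradient-regularized CSR model."""
    base_a, base_b = uniform_filter(a, size), uniform_filter(b, size)
    det_a, det_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)                                # average rule
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # max rule
    return base + detail
```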
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused image using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The experimental results demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
To address the issues of incomplete information, blurred details, loss of detail, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. The region attention module extracts the background feature map based on the distinct properties of the background and detail feature maps. A multi-scale convolutional attention module is suggested to enhance the communication of feature information. At the same time, a feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. This study uses three available datasets, from TNO, FLIR, and NIR, to perform thorough quantitative and qualitative trials against five other algorithms. The methods are assessed with four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were conducted on the M3FD dataset to further verify the algorithm's performance in comparison with five other algorithms, with accuracy evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5). Comprehensive experimental findings show that CAEFusion performs well on subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
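The four indicators have standard no-reference formulations, which the sketch below assumes the paper follows:

```python
import numpy as np

def fusion_metrics(img):
    """Information entropy (EN), standard deviation (SD), spatial
    frequency (SF), and average gradient (AG) in their standard forms.
    img: 2-D uint8 array (the fused image)."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    p = hist[hist > 0]
    en = -(p * np.log2(p)).sum()                        # information entropy
    sd = img.std()                                      # standard deviation
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))  # column frequency
    sf = np.sqrt(rf ** 2 + cf ** 2)                     # spatial frequency
    gx = f[:-1, 1:] - f[:-1, :-1]
    gy = f[1:, :-1] - f[:-1, :-1]
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))      # average gradient
    return en, sd, sf, ag
```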
AIM: To evaluate the functional differences between the two liver lobes in non-cirrhotic patients by using computed tomography/99mTc-galactosyl human serum albumin (CT/99mTc-GSA) single-photon emission computed tomography (SPECT) fusion images. METHODS: Between December 2008 and March 2012, 264 non-cirrhotic patients underwent preoperative liver function assessment using CT/99mTc-GSA SPECT fusion images. Of these, 30 patients, in whom the influence of a tumor on the liver parenchyma was estimated to be negligible, were selected. Specifically, the selected patients were required to meet either of the following criteria: (1) the presence of an extrahepatic tumor; or (2) the presence of a single small intrahepatic tumor. These 30 patients were retrospectively analyzed to calculate the percentage volume (%Volume) and the percentage function (%Function) of each lobe. The ratio between the %Function and %Volume (function-to-volume ratio) of each lobe was also calculated, and the ratios were compared between the two lobes. Furthermore, the correlations between the function-to-volume ratio and each of two liver parameters [lobe volume and the diameter ratio of the left portal vein to the right portal vein (LPV-to-RPV diameter ratio)] were investigated. RESULTS: The median values of %Volume and %Function were 62.6% and 67.1% in the right lobe, with %Function being significantly higher than %Volume (P < 0.01). The median values of %Volume and %Function were 31.0% and 28.7% in the left lobe, with %Function being significantly lower than %Volume (P < 0.01). The function-to-volume ratios of the right lobe (1.04-1.14) were significantly higher than those of the left lobe (0.74-0.99) (P < 0.01). The function-to-volume ratio showed no significant correlation with lobe volume in either lobe. In contrast, the function-to-volume ratio showed significant correlations with the LPV-to-RPV diameter ratio in both lobes (right lobe: negative correlation, rs = -0.37, P = 0.048; left lobe: positive correlation, rs = 0.71, P < 0.001). The function-to-volume ratio in the left lobe tended to be higher, and that in the right lobe lower, as the LPV-to-RPV diameter ratio increased. CONCLUSION: CT/99mTc-GSA SPECT fusion images demonstrated that the function of the left lobe was significantly decreased compared with that of the right lobe in non-cirrhotic livers.
A novel feature fusion method is proposed for the edge detection of color images. In addition to the typical features used in edge detection, the color contrast similarity and the orientation consistency are also selected as features. The four features are combined together as a parameter to detect the edges of color images. Experimental results show that the method can inhibit noisy edges and facilitate the detection of weak edges. It performs better than conventional methods in noisy environments.
Due to the selective absorption of light and the existence of a large number of floating media in sea water, underwater images often suffer from color casts and detail blur, making color correction and detail restoration necessary. However, existing enhancement algorithms cannot achieve the desired results. To solve these problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information for the illumination stream, color stream, and structure stream by contrast-limited histogram equalization, gamma correction, and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused. This enhances feature representation in underwater images. Meanwhile, a composite loss function with three terms is used to ensure the quality of the enhanced image in terms of color balance, structure preservation, and image smoothness, so that the enhanced image is more in line with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison experiments with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results in terms of MSE, PSNR, SSIM, UIQM, and UCIQE, and the enhanced images are more similar to their ground-truth images.
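A sketch of the three preprocessing streams, assuming common parameter choices (CLAHE clip limit, gamma value, gray-world white balance); the paper's exact settings are not given in the abstract:

```python
import cv2
import numpy as np

def preprocess_streams(bgr):
    """Illumination stream via CLAHE, color stream via gamma correction,
    structure stream via gray-world white balance. Parameter values are
    assumptions. bgr: uint8 color image."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])          # equalize luminance only
    illum = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    gamma = 0.7
    lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    color = cv2.LUT(bgr, lut)                       # gamma-corrected stream

    means = bgr.reshape(-1, 3).mean(0)
    gains = means.mean() / (means + 1e-6)           # gray-world channel gains
    struct = np.clip(bgr * gains, 0, 255).astype(np.uint8)
    return illum, color, struct
```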
The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle, profoundly impeding their effective utilization across various domains. Dehazing methodologies have emerged as pivotal components of image preprocessing, improving the quality of remote sensing imagery; this enhancement makes remote sensing data more usable and improves the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze within remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed. The approach is an end-to-end convolutional neural network distinguished by its use of multi-scale dense feature fusion clusters and gated skip connections. The essence of the methodology lies in local feature fusion within dense residual clusters, enabling the extraction of pertinent features from both preceding and current local data, depending on contextual demands. The gated structures facilitate the propagation of these features to the decoder, resulting in superior haze removal. Extensive experiments substantiate the efficacy of URA-Net, demonstrating superior performance compared to existing methods on established datasets for remote sensing image defogging. On the RICE-1 dataset, URA-Net achieves a peak signal-to-noise ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, the Unsupervised Single Image Dehazing (USID) by 8.0 dB, and the Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Notably, on the SateHaze1k dataset, URA-Net attains the best overall performance, yielding defogged images of consistent visual quality. This work contributes to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
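For reference, the PSNR metric behind the quoted dB comparisons:

```python
import numpy as np

def psnr(clean, dehazed, peak=255.0):
    """Peak signal-to-noise ratio: PSNR = 10 * log10(peak^2 / MSE),
    computed between a ground-truth image and a dehazed result."""
    mse = np.mean((clean.astype(np.float64)
                   - dehazed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```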
Data fusion can effectively process multi-sensor information to obtain more accurate and reliable results than a single sensor. Water quality data in the environment come from different sensors, thus the data must be fused. In our research, a self-adaptive weighted data fusion method is used to integrate the pH value, temperature, dissolved oxygen, and NH3 concentration data of the water quality environment. Based on the fusion, the Grubbs method is used to detect abnormal data so as to provide data support for estimation, prediction, and early warning of water quality.
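A sketch of the two named components, variance-based self-adaptive weighting and the Grubbs outlier test; both follow the textbook formulations, which are assumed to match the paper's:

```python
import numpy as np
from scipy import stats

def grubbs_filter(x, alpha=0.05):
    """Iteratively remove the most extreme value while Grubbs' statistic
    exceeds the two-sided critical value."""
    x = np.asarray(x, dtype=float)
    while x.size > 2:
        n = x.size
        g = np.abs(x - x.mean()).max() / x.std(ddof=1)
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
        if g <= g_crit:
            break
        x = np.delete(x, np.abs(x - x.mean()).argmax())
    return x

def self_adaptive_fuse(readings):
    """Self-adaptive weighted fusion across sensors measuring the same
    quantity: weights inversely proportional to each sensor's sample
    variance. readings: (n_sensors, n_samples) array."""
    var = readings.var(axis=1, ddof=1)
    w = (1.0 / var) / (1.0 / var).sum()      # variance-optimal weights
    return (w * readings.mean(axis=1)).sum()  # fused estimate
```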
Facing the very high resolution (VHR) image classification problem, a feature extraction and fusion framework is presented for VHR panchromatic and multispectral image classification based on deep learning techniques. The proposed approach combines spectral and spatial information by fusing features extracted from panchromatic (PAN) and multispectral (MS) images using a sparse autoencoder and its deep version. There are three steps in the proposed method: the first is to extract spatial information from the PAN image, and the second is to describe spectral information from the MS image. Finally, in the third step, the features obtained from the PAN and MS images are concatenated directly as a simple fusion feature. The classification is performed using the support vector machine (SVM), and the experiments are carried out on two datasets with very high spatial resolution: MS and PAN images from the WorldView-2 satellite. The results indicate that the classifier provides an efficient solution and demonstrate that fusing the features extracted by deep learning techniques from PAN and MS images performs better than using these techniques separately. In addition, this framework shows that deep learning models can effectively extract and fuse spatial and spectral information, and have great potential to achieve higher accuracy for the classification of multispectral and panchromatic images.
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk grade can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient vanishing and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve the classification performance. On the brain glioma risk grading dataset, the results of the ablation and comparison experiments show that the proposed HMAC-Net has the best performance in both the qualitative analysis of heat maps and the quantitative analysis of evaluation indicators. On a skin cancer classification dataset, the generalization experiment results show that the proposed HMAC-Net has a good generalization effect.