Journal Articles
10,415 articles found
1. Mangrove monitoring and extraction based on multi-source remote sensing data: a deep learning method based on SAR and optical image fusion
Authors: Yiheng Xie, Xiaoping Rui, Yarong Zou, Heng Tang, Ninglei Ouyang. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2024, No. 9, pp. 110-121 (12 pages)
Abstract: Mangroves are indispensable to coastlines, maintaining biodiversity, and mitigating climate change. Therefore, improving the accuracy of mangrove information identification is crucial for their ecological protection. Aiming at the limited morphological information of synthetic aperture radar (SAR) images, which are heavily affected by noise, and the susceptibility of optical images to weather and lighting conditions, this paper proposes a pixel-level weighted fusion method for SAR and optical images. Image fusion enhances the target features and makes mangrove monitoring more comprehensive and accurate. To address the high similarity between mangrove forests and other forests, the model builds on the U-Net convolutional neural network, with an attention mechanism added in the feature extraction stage so that the model attends more to mangrove vegetation areas in the image. To accelerate convergence and normalize the input, a batch normalization (BN) layer and a Dropout layer are added after each convolutional layer. Since mangroves are a minority class in the image, an improved cross-entropy loss function is introduced to improve the model's ability to recognize them. The AttU-Net model for mangrove recognition in high-similarity environments is thus constructed from the fused images. In comparison experiments, the overall accuracy of the improved U-Net model trained on the fused images is significantly improved. Based on the fused images, the recognition results of the proposed AttU-Net model are compared with its benchmark model, U-Net, and with Dense-Net, Res-Net, and Seg-Net. The AttU-Net model captures mangroves' complex structures and textural features in images more effectively. The average OA, F1-score, and Kappa coefficient in the four tested regions were 94.406%, 90.006%, and 84.045%, significantly higher than the other methods. This method can provide technical support for the monitoring and protection of mangrove ecosystems.
Keywords: image fusion; SAR image; optical image; mangrove; deep learning; attention mechanism
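The abstract above mentions an improved cross-entropy loss for the minority mangrove class without giving its exact form. As a sketch of the general idea only, here is a minimal class-weighted cross-entropy in NumPy; the weighting scheme and values are illustrative assumptions, not the paper's loss:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Class-weighted cross-entropy for imbalanced segmentation.

    probs: (N, C) predicted class probabilities per pixel
    labels: (N,) integer class labels
    class_weights: (C,) larger weight -> minority class counts more
    """
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]          # prob of true class
    losses = -class_weights[labels] * np.log(picked + eps)  # weight per pixel
    return losses.mean()

# Toy example: class 1 ("mangrove") is rare, so it gets a larger weight.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
labels = np.array([0, 1, 0])
uniform = weighted_cross_entropy(probs, labels, np.array([1.0, 1.0]))
weighted = weighted_cross_entropy(probs, labels, np.array([1.0, 5.0]))
```

Up-weighting the rare class raises the penalty for misclassifying it, which is the usual motivation for modifying cross-entropy under class imbalance.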
2. Locality preserving fusion of multi-source images for sea-ice classification (Cited: 1)
Authors: Zhiqiang Yu, Tingwei Wang, Xi Zhang, Jie Zhang, Peng Ren. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2019, No. 7, pp. 129-136 (8 pages)
Abstract: We present a novel sea-ice classification framework based on locality preserving fusion of multi-source image information. The locality preservation is two-fold: local characterization in both the spatial and feature domains. We commence by simultaneously learning a projection matrix, which preserves spatial localities, and a similarity matrix, which encodes feature similarities. We map the pixels of the multi-source images through the projection matrix to a set of fusion vectors that preserve the spatial localities of the image. On the other hand, by applying Laplacian eigen-decomposition to the similarity matrix, we obtain another set of fusion vectors that preserve the local feature similarities. We concatenate the fusion vectors for both spatial and feature locality preservation to obtain the fusion image. Finally, we classify the fusion image pixels by a novel sliding ensemble strategy, which enhances locality preservation in classification. Our locality preserving fusion framework is effective in classifying multi-source sea-ice images (e.g., multi-spectral and synthetic aperture radar (SAR) images) because it not only comprehensively captures the spatial neighboring relationships but also intrinsically characterizes the feature associations between different types of sea ice. Experimental evaluations validate the effectiveness of our framework.
Keywords: sea-ice classification; multi-source image fusion; ensemble classification
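The feature-domain half of the fusion above applies Laplacian eigen-decomposition to a similarity matrix. A minimal NumPy sketch of that step; the Gaussian similarity and the unnormalized Laplacian are common choices assumed here, not taken from the paper:

```python
import numpy as np

def laplacian_embedding(features, sigma=1.0, dim=2):
    """Feature-locality fusion vectors via Laplacian eigen-decomposition.

    features: (n, d) per-pixel feature vectors from the multi-source images
    Returns an (n, dim) embedding that preserves feature similarities.
    """
    # Gaussian similarity matrix between pixel features
    sq = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    D = np.diag(W.sum(axis=1))
    L = D - W                           # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    # Skip the trivial constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:dim + 1]

# Two tight feature clusters: the embedding should separate them.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
emb = laplacian_embedding(feats, sigma=1.0, dim=1)
```

Pixels with similar features land near each other in the embedding, which is exactly the "feature locality preservation" the abstract describes.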
3. A multi-source information fusion layer counting method for penetration fuze based on TCN-LSTM
Authors: Yili Wang, Changsheng Li, Xiaofeng Wang. Defence Technology (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 463-474 (12 pages)
Abstract: When employing penetration ammunition to strike multi-story buildings, detection methods using acceleration sensors suffer from signal aliasing, while magnetic detection methods are susceptible to interference from ferromagnetic materials, posing challenges in accurately determining the number of layers. To address this issue, this research proposes a layer counting method for penetration fuzes that incorporates multi-source information fusion, utilizing both a temporal convolutional network (TCN) and a long short-term memory (LSTM) recurrent network. By leveraging the strengths of these two network structures, the method extracts temporal and high-dimensional features from the multi-source physical field during the penetration process, establishing a relationship between the multi-source physical field and the distance between the fuze and the target plate. A simulation model is developed to simulate the overload and magnetic field of a projectile penetrating multiple layers of target plates, capturing the multi-source physical field signals and their patterns during the penetration process. The analysis reveals that the proposed multi-source fusion layer counting method reduces errors by 60% and 50% compared with layer counting from the overload signal alone and from the magnetic anomaly signal alone, respectively. The model's predictive performance is evaluated under various operating conditions, including different ratios of added noise at random sample positions, penetration speeds, and spacings between target plates. The maximum errors in fuze penetration time predicted by the three modes are 0.08 ms, 0.12 ms, and 0.16 ms, respectively, confirming the robustness of the proposed model. Moreover, the model's predictions indicate that the fitting degree for large interlayer spacings is superior to that for small interlayer spacings due to the influence of stress waves.
Keywords: penetration fuze; temporal convolutional network (TCN); long short-term memory (LSTM); layer counting; multi-source fusion
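The TCN half of the TCN-LSTM model is built from dilated causal convolutions. The following NumPy sketch shows only that core operation, not the authors' network; the kernel and dilation values are illustrative:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """1-D causal dilated convolution, the core operation of a TCN.

    x: (T,) input signal (e.g. a fused overload/magnetic feature)
    w: (k,) kernel; output at t only sees x[t], x[t-d], x[t-2d], ...
    """
    T, k = len(x), len(w)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])   # left-pad: no future leakage
    y = np.zeros(T)
    for t in range(T):
        taps = xp[t + pad - dilation * np.arange(k)]  # current and past taps
        y[t] = taps @ w
    return y

x = np.arange(6, dtype=float)                       # 0, 1, 2, 3, 4, 5
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
```

With kernel [1, 1] and dilation 2, each output is x[t] + x[t-2] (zeros before the start), so the output never depends on future samples; stacking such layers with growing dilation gives a TCN its long receptive field.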
4. A deep learning fusion model for accurate classification of brain tumours in Magnetic Resonance images
Authors: Nechirvan Asaad Zebari, Chira Nadheef Mohammed, Dilovan Asaad Zebari, Mazin Abed Mohammed, Diyar Qader Zeebaree, Haydar Abdulameer Marhoon, Karrar Hameed Abdulkareem, Seifedine Kadry, Wattana Viriyasitavat, Jan Nedoma, Radek Martinek. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 4, pp. 790-804 (15 pages)
Abstract: Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods have yet to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics, and it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Because DL models require large amounts of training data to achieve good results, the researchers used data augmentation to increase the dataset size. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from the MRI images. Softmax was used as the classifier, and the training set was supplemented with synthetically created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined to generate a fusion model, which significantly increased classification accuracy. A publicly available dataset was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, which the proposed model outperformed significantly.
Keywords: brain tumour; deep learning; feature fusion model; MRI images; multi-classification
5. Multi-source image fusion algorithm based on fast weighted guided filter (Cited: 6)
Authors: WANG Jian, YANG Ke, REN Ping, QIN Chunxia, ZHANG Xiufei. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2019, No. 5, pp. 831-840 (10 pages)
Abstract: In recent years, guided image fusion algorithms have become increasingly popular; however, current algorithms cannot eliminate halo artifacts. We propose an image fusion algorithm based on a fast weighted guided filter. First, the source images are separated into a series of high- and low-frequency components. Second, three visual features of the source image are extracted to construct a decision graph model. Third, a fast weighted guided filter is proposed to optimize the result of the previous step and to reduce the time complexity by considering the correlation among neighboring pixels. Finally, the image obtained in the previous step is combined with the weight map to realize the fusion. The proposed algorithm is applied to multi-focus, visible-infrared, and multi-modal images, and the results show that it effectively suppresses halo artifacts in the merged images with higher efficiency, outperforming traditional methods in both subjective visual quality and objective evaluation.
Keywords: fast guided filter; image fusion; visual feature; decision map
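The fast weighted guided filter above extends the standard guided filter. Here is a sketch of the plain (unweighted) guided filter it builds on, using a cumulative-sum box filter for speed; the radius and regularization values are illustrative, and the weighting scheme of the paper is not reproduced:

```python
import numpy as np

def box(img, r):
    """Mean filter with window radius r via cumulative sums (O(1) per pixel)."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))                  # zero row/col for window sums
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of p guided by I (local linear model q = a*I + b)."""
    mI, mp = box(I, r), box(p, r)
    cov = box(I * p, r) - mI * mp
    var = box(I * I, r) - mI * mI
    a = cov / (var + eps)                # local linear coefficient
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

I = np.tile(np.linspace(0, 1, 8), (8, 1))   # smooth guidance ramp
q = guided_filter(I, I + 0.01, r=2)         # p is linear in I, so q should track p
```

Because the filter fits a local linear model of the output in the guidance image, edges present in the guide survive the smoothing, which is why guided filtering is popular for refining fusion decision maps.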
6. Multi-source remote sensing image fusion based on support vector machine (Cited: 3)
Authors: ZHAO Shu-he, FENG Xue-zhi. Chinese Geographical Science (SCIE, CSCD), 2002, No. 3, pp. 244-248 (5 pages)
Abstract: Remote sensing image fusion is an effective way to use the large volume of data from multi-source images. This paper introduces a new method of remote sensing image fusion based on the support vector machine (SVM), using high-spatial-resolution SPIN-2 data and multi-spectral SPOT-4 remote sensing data. First, the method is established by building a model of remote sensing image fusion based on SVM. Then, classification fusion is tested using the SPIN-2 and SPOT-4 data. Finally, the fusion result is evaluated in two ways: (1) by subjective assessment, the spatial resolution of the fused image is improved compared to SPOT-4, and the texture of the fused image is clearly distinctive; (2) by quantitative analysis, the classification fusion performs better. Overall, the results show that the accuracy of image fusion based on SVM is high, and the SVM algorithm can be recommended for application in remote sensing image fusion processes.
Keywords: image fusion; SVM; multi-spectral image; panchromatic image
7. Synthetically Evaluation System for Multi-source Image Fusion and Experimental Analysis (Cited: 2)
Authors: Xiao Gang, Jing Zhongliang, Wu Jianmin, Liu Congyi. Journal of Shanghai Jiaotong University (Science) (EI), 2006, No. 3, pp. 263-270 (8 pages)
Abstract: The study of evaluation systems for multi-source image fusion is an important and necessary part of image fusion research. Qualitative and quantitative evaluation indexes were studied, and a series of new concepts was proposed, including the independent single evaluation index, the union single evaluation index, and the synthetic evaluation index. Based on these concepts, a synthetic evaluation system for digital image fusion was formed. Experiments with the wavelet fusion method, applied to fuse multi-spectral and panchromatic remote sensing images, IR and visible images, CT and MRI images, and multi-focus images, show that the system is an objective, uniform, and effective quantitative method for image fusion evaluation.
Keywords: image fusion; independent single evaluation; union single evaluation; synthetic evaluation; evaluation system
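Two widely used single evaluation indexes of this kind are image entropy and the mutual information between a source image and the fused image. A NumPy sketch under standard definitions; the paper's own index system is not reproduced here, and the bin count is an assumption:

```python
import numpy as np

def entropy(img, bins=8):
    """Information entropy of an image: a standard single fusion-quality index."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(a, b, bins=8):
    """MI between a source and the fused image: how much source info is retained."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=((0, 1), (0, 1)))
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(0)
src = rng.random((32, 32))
mi_same = mutual_information(src, src.copy())        # perfect retention
mi_noise = mutual_information(src, rng.random((32, 32)))  # unrelated image
```

A fused image that preserves a source well yields high MI with it; combining several such single indexes is the kind of synthesis the evaluation system above formalizes.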
8. Multi-Scale PIIFD for Registration of Multi-Source Remote Sensing Images (Cited: 1)
Authors: Chenzhong Gao, Wei Li. Journal of Beijing Institute of Technology (EI, CAS), 2021, No. 2, pp. 113-124 (12 pages)
Abstract: This paper aims at providing multi-source remote sensing images registered in geometric space for image fusion. Focusing on the characteristics and differences of multi-source remote sensing images, a feature-based registration algorithm is implemented. The key technologies include image scale space for implementing multi-scale properties, Harris corner detection for keypoint extraction, and the partial intensity invariant feature descriptor (PIIFD) for keypoint description. From these, a multi-scale Harris-PIIFD image registration framework is proposed. Experimental results on fifteen sets of representative real data show that the algorithm has excellent, stable performance in multi-source remote sensing image registration and can achieve accurate spatial alignment, giving it strong practical application value and a degree of generalization ability.
Keywords: image registration; multi-source remote sensing; scale space; Harris corner; partial intensity invariant feature descriptor (PIIFD)
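Harris corner detection, one of the key technologies listed above, can be sketched in a few lines of NumPy. A 3x3 mean stands in for the usual Gaussian smoothing, and `k` is the standard empirical constant; neither choice is taken from the paper:

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response map (used for keypoint extraction before PIIFD)."""
    Iy, Ix = np.gradient(img)            # central-difference image gradients

    def smooth(a):                       # 3x3 mean as a stand-in for Gaussian
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    # Smoothed structure-tensor entries
    Sxx, Syy, Sxy = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2          # high at corners, negative along edges

# A white square on black background: the four corners should score highest.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)
```

The response is large only where the structure tensor has two large eigenvalues, i.e. where the gradient varies in both directions, which is what makes corners repeatable keypoints across sources.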
9. Accuracy Analysis on the Automatic Registration of Multi-Source Remote Sensing Images Based on the Software of ERDAS Imagine (Cited: 1)
Authors: Debao Yuan, Ximin Cui, Yahui Qiu, Xueyun Gu, Li Zhang. Advances in Remote Sensing, 2013, No. 2, pp. 140-148 (9 pages)
Abstract: The automatic registration of multi-source remote sensing images (RSI) is currently a research hotspot in remote sensing image preprocessing. A dedicated automatic image registration module named Image AutoSync has been embedded in ERDAS IMAGINE version 9.0 and above. The registration accuracy of the module was verified for remote sensing images obtained from different platforms or with different spatial resolutions. Four registration experiments are discussed in this article to analyze the accuracy differences for remote sensing data of different spatial resolutions, and the factors inducing these differences in registration accuracy are also analyzed.
Keywords: multi-source remote sensing images; automatic registration; Image AutoSync; registration accuracy
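Registration accuracy in such experiments is conventionally reported as the root-mean-square error over check points. A minimal sketch; the point coordinates below are made up for illustration:

```python
import numpy as np

def registration_rmse(ref_pts, warped_pts):
    """Root-mean-square error over check points, the usual accuracy measure
    reported by registration tools."""
    d2 = ((ref_pts - warped_pts) ** 2).sum(axis=1)   # squared point distances
    return float(np.sqrt(d2.mean()))

ref = np.array([[10.0, 10.0], [50.0, 20.0], [30.0, 60.0]])
warped = ref + np.array([[0.5, 0.0], [0.0, 0.5], [0.3, 0.4]])
rmse = registration_rmse(ref, warped)   # each point is off by 0.5 pixels
```

An RMSE well below the pixel size of the coarser image is the usual target when comparing registrations across resolutions.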
10. Image Processing on Geological Data in Vector Format and Multi-Source Spatial Data Fusion
Authors: Liu Xing, Hu Guangdao, Qiu Yubao (Faculty of Earth Resources, China University of Geosciences, Wuhan 430074). Journal of China University of Geosciences (SCIE, CSCD), 2003, No. 3, pp. 278-282 (5 pages)
Abstract: Geological data are constructed in vector format in geographical information systems (GIS), while other data, such as remote sensing images, geographical data, and geochemical data, are stored in raster format. This paper converts the vector data into 8-bit images, each coded according to its importance to mineralization, so that the geological meaning can be conveyed through the raster images. The paper also fuses geographical and geochemical data with the coded strata data. The results show that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.
Keywords: geological data; GIS-based vector data conversion; image processing; multi-source data fusion
11. A multi-source image fusion algorithm based on gradient regularized convolution sparse representation
Authors: WANG Jian, QIN Chunxia, ZHANG Xiufei, YANG Ke, REN Ping. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2020, No. 3, pp. 447-459 (13 pages)
Abstract: Image fusion based on sparse representation (SR) has become the primary research direction among transform-domain methods. However, SR-based image fusion algorithms have high computational complexity and neglect the local features of an image, resulting in limited detail retention and high sensitivity to registration misalignment. To overcome these shortcomings, as well as the noise present during the fusion process, this paper proposes a new signal decomposition model: a multi-source image fusion algorithm based on gradient regularized convolutional sparse representation (CSR). The main innovation of this work is using a sparse optimization function to perform a two-scale decomposition of the source image into high-frequency and low-frequency components. The sparse coefficients are obtained by the gradient regularized CSR model, and taking the maximum sparse coefficient yields the optimal high-frequency component of the fused image. The best low-frequency component is obtained using an extreme-value or averaging fusion strategy. The final fused image is obtained by adding the two optimal components. Experimental results demonstrate that this method greatly improves detail preservation and reduces sensitivity to image registration.
Keywords: gradient regularization; convolutional sparse representation (CSR); image fusion
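The algorithm above starts from a two-scale split into low- and high-frequency components. The paper obtains the split by sparse optimization; the sketch below substitutes a simple mean-filter base layer, and shows the max-absolute rule for details and averaging for the base, as stand-ins for the paper's fusion strategies:

```python
import numpy as np

def two_scale_fuse(a, b, r=2):
    """Two-scale fusion sketch: mean-filter base layers (low frequency),
    detail layers (high frequency) fused by the max-absolute rule,
    base layers fused by averaging."""
    def base(img):
        p = np.pad(img, r, mode='edge')
        k = 2 * r + 1
        return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(k) for j in range(k)) / (k * k)

    base_a, base_b = base(a), base(b)
    det_a, det_b = a - base_a, b - base_b                     # high frequency
    det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # max-abs rule
    return (base_a + base_b) / 2 + det

a = np.zeros((8, 8)); a[4, 4] = 1.0       # source A carries a bright detail
b = np.zeros((8, 8))                      # source B is flat
f = two_scale_fuse(a, b)
```

The max-absolute rule keeps whichever source has the stronger local detail, so the bright pixel from source A survives in the fused result.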
12. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding (Cited: 1)
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 1441-1461 (21 pages)
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and a Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features, and a modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the proposed algorithm. The experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion; Res2Net-Transformer; infrared image; visible image
13. CAEFusion: A New Convolutional Autoencoder-Based Infrared and Visible Light Image Fusion Algorithm (Cited: 1)
Authors: Chun-Ming Wu, Mei-Ling Ren, Jin Lei, Zi-Mu Jiang. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2857-2872 (16 pages)
Abstract: To address the issues of incomplete information, blurred details, loss of detail, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. A region attention module extracts the background feature map based on the distinct properties of the background and detail feature maps. A multi-scale convolution attention module is suggested to enhance the communication of feature information, and a feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. The study uses three available datasets (TNO, FLIR, and NIR) for thorough quantitative and qualitative trials against five other algorithms. The methods are assessed with four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were conducted on the M3FD dataset to further verify the algorithm's performance against five other algorithms, with accuracy evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5). Comprehensive experimental findings show that CAEFusion performs well on subjective visual and objective evaluation criteria and has promising potential for downstream object detection tasks.
Keywords: image fusion; deep learning; auto-encoder (AE); infrared; visible light
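The four assessment indicators named above (EN, SD, SF, AG) have standard definitions that are easy to sketch in NumPy; the bin count and test images below are illustrative assumptions:

```python
import numpy as np

def en(img, bins=8):
    """Information entropy (EN) of the intensity histogram."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = h / h.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def sd(img):
    """Standard deviation (SD): global contrast."""
    return float(img.std())

def sf(img):
    """Spatial frequency (SF): row/column difference energy."""
    rf = np.diff(img, axis=1)
    cf = np.diff(img, axis=0)
    return float(np.sqrt((rf ** 2).mean() + (cf ** 2).mean()))

def ag(img):
    """Average gradient (AG): mean local gradient magnitude."""
    gy, gx = np.gradient(img)
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2).mean())

flat = np.full((16, 16), 0.5)                      # no detail at all
textured = np.indices((16, 16)).sum(0) % 2 / 1.0   # checkerboard detail
```

All four indicators are "higher is better" proxies for how much information and detail the fused image carries, which is why a flat image scores zero on each.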
14. CT/^(99m)Tc-GSA SPECT fusion images demonstrate functional differences between the liver lobes (Cited: 4)
Authors: Tatsuaki Sumiyoshi, Yasuo Shima, Ryoutarou Tokorodani, Takehiro Okabayashi, Akihito Kozuki, Yasuhiro Hata, Yoshihiro Noda, Yoriko Murata, Toshio Nakamura, Kiminori Uka. World Journal of Gastroenterology (SCIE, CAS), 2013, No. 21, pp. 3217-3225 (9 pages)
Abstract: AIM: To evaluate the functional differences between the two liver lobes in non-cirrhotic patients by using computed tomography/^(99m)Tc-galactosyl human serum albumin (CT/^(99m)Tc-GSA) single-photon emission computed tomography (SPECT) fusion images. METHODS: Between December 2008 and March 2012, 264 non-cirrhotic patients underwent preoperative liver function assessment using CT/^(99m)Tc-GSA SPECT fusion images. Of these, 30 patients in whom the influence of a tumor on the liver parenchyma was estimated to be negligible were selected. Specifically, the selected patients were required to meet either of the following criteria: (1) the presence of an extrahepatic tumor; or (2) the presence of a single small intrahepatic tumor. These 30 patients were retrospectively analyzed to calculate the percentage volume (%Volume) and percentage function (%Function) of each lobe. The ratio between the %Function and %Volume of each lobe (function-to-volume ratio) was also calculated and compared between the two lobes. Furthermore, the correlations between the function-to-volume ratio and two liver parameters [lobe volume and the diameter ratio of the left portal vein to the right portal vein (LPV-to-RPV diameter ratio)] were investigated. RESULTS: The median values of %Volume and %Function were 62.6% and 67.1% in the right lobe, with %Function significantly higher than %Volume (P < 0.01). The median values of %Volume and %Function were 31.0% and 28.7% in the left lobe, with %Function significantly lower than %Volume (P < 0.01). The function-to-volume ratios of the right lobe (1.04-1.14) were significantly higher than those of the left lobe (0.74-0.99) (P < 0.01). The function-to-volume ratio showed no significant correlation with lobe volume in either lobe. In contrast, it showed significant correlations with the LPV-to-RPV diameter ratio in both lobes (right lobe: negative correlation, rs = -0.37, P = 0.048; left lobe: positive correlation, rs = 0.71, P < 0.001). The function-to-volume ratio in the left lobe tended to be higher, and that in the right lobe lower, as the LPV-to-RPV diameter ratio increased. CONCLUSION: CT/^(99m)Tc-GSA SPECT fusion images demonstrated that the function of the left lobe was significantly decreased compared with that of the right lobe in non-cirrhotic livers.
Keywords: computed tomography; ^(99m)Tc neogalactoalbumin; single-photon emission computed tomography; fusion image; liver; portal system
15. Feature fusion method for edge detection of color images (Cited: 4)
Authors: Ma Yu, Gu Xiaodong, Wang Yuanyuan. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2009, No. 2, pp. 394-399 (6 pages)
Abstract: A novel feature fusion method is proposed for the edge detection of color images. In addition to the typical features used in edge detection, color contrast similarity and orientation consistency are also selected as features. The four features are combined as a parameter to detect the edges of color images. Experimental results show that the method can suppress noisy edges and facilitate the detection of weak edges, giving it better performance than conventional methods in noisy environments.
Keywords: color image processing; edge detection; feature extraction; feature fusion
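Gradient magnitude is one of the typical edge features such methods combine. Here is a sketch of a Sobel-based feature map plus a simple normalized weighted fusion of feature maps; the weights and the normalization are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude, one typical feature used in edge detection."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    p = np.pad(img, 1, mode='edge')
    H, W = img.shape
    win = lambda k: sum(k[i, j] * p[i:i + H, j:j + W]
                        for i in range(3) for j in range(3))
    gx, gy = win(kx), win(kx.T)          # horizontal and vertical Sobel
    return np.sqrt(gx ** 2 + gy ** 2)

def fuse_features(feats, weights):
    """Combine several normalized edge-feature maps into one decision map."""
    total = sum(w * f / (f.max() + 1e-12) for f, w in zip(feats, weights))
    return total / sum(weights)

img = np.zeros((10, 10)); img[:, 5:] = 1.0     # vertical step edge
mag = sobel_magnitude(img)
edge_map = fuse_features([mag, mag], [0.5, 0.5])
```

Normalizing each feature before weighting keeps any single feature from dominating the combined edge decision, which is the point of fusing complementary cues such as contrast similarity and orientation consistency.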
16. A Novel Multi-Stream Fusion Network for Underwater Image Enhancement
Authors: Guijin Tang, Lian Duan, Haitao Zhao, Feng Liu. China Communications (SCIE, CSCD), 2024, No. 2, pp. 166-182 (17 pages)
Abstract: Due to the selective absorption of light and the large amount of floating media in sea water, underwater images often suffer from color casts and detail blur, making color correction and detail restoration necessary. However, existing enhancement algorithms cannot achieve the desired results. To solve these problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information for the illumination stream, color stream, and structure stream using contrast-limited histogram equalization, gamma correction, and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused, enhancing the feature representation of underwater images. Meanwhile, a composite loss function with three terms ensures the quality of the enhanced image in terms of color balance, structure preservation, and image smoothness, so that the enhanced image better matches human visual perception. Finally, the effectiveness of the proposed method is verified by comparison with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results in terms of MSE, PSNR, SSIM, UIQM, and UCIQE, and the enhanced images are more similar to their ground-truth images.
Keywords: image enhancement; multi-stream fusion; underwater image
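Two of the three preprocessing streams named above, gamma correction and white balance, can be sketched directly in NumPy. Gray-world white balance and the gamma value are common-choice assumptions, not necessarily the paper's exact variants, and the contrast-limited histogram equalization stream is omitted:

```python
import numpy as np

def gamma_correct(img, gamma=0.7):
    """Color-stream preprocessing: gamma < 1 brightens dark underwater pixels."""
    return np.clip(img, 0.0, 1.0) ** gamma

def gray_world_white_balance(img):
    """Cast removal: scale each channel so its mean matches the global mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / (means + 1e-12)
    return np.clip(img * gain, 0.0, 1.0)

rng = np.random.default_rng(1)
raw = rng.random((8, 8, 3)) * np.array([0.3, 0.8, 1.0])  # bluish underwater cast
balanced = gray_world_white_balance(raw)                 # channel means equalized
bright = gamma_correct(raw)                              # overall brighter
```

Each stream exposes a different kind of "potential information" (illumination, color, structure) that the network's residual blocks then turn into features before fusion.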
Advancements in Remote Sensing Image Dehazing: Introducing URA-Net with Multi-Scale Dense Feature Fusion Clusters and Gated Jump Connection
17
作者 Hongchi Liu Xing Deng Haijian Shao 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024年第9期2397-2424,共28页
The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle,profoundly impeding their effective utilization across various domains.Dehazing methodologies have emerged as pivot... The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle,profoundly impeding their effective utilization across various domains.Dehazing methodologies have emerged as pivotal components of image preprocessing,fostering an improvement in the quality of remote sensing imagery.This enhancement renders remote sensing data more indispensable,thereby enhancing the accuracy of target iden-tification.Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze within remotely sensed images.In response to this challenge,a novel UNet Residual Attention Network(URA-Net)is proposed.This paradigmatic approach materializes as an end-to-end convolutional neural network distinguished by its utilization of multi-scale dense feature fusion clusters and gated jump connections.The essence of our methodology lies in local feature fusion within dense residual clusters,enabling the extraction of pertinent features from both preceding and current local data,depending on contextual demands.The intelligently orchestrated gated structures facilitate the propagation of these features to the decoder,resulting in superior outcomes in haze removal.Empirical validation through a plethora of experiments substantiates the efficacy of URA-Net,demonstrating its superior performance compared to existing methods when applied to established datasets for remote sensing image defogging.On the RICE-1 dataset,URA-Net achieves a Peak Signal-to-Noise Ratio(PSNR)of 29.07 dB,surpassing the Dark Channel Prior(DCP)by 11.17 dB,the All-in-One Network for Dehazing(AOD)by 7.82 dB,the Optimal Transmission Map and Adaptive Atmospheric Light For Dehazing(OTM-AAL)by 5.37 dB,the 
Unsupervised Single Image Dehazing(USID)by 8.0 dB,and the Superpixel-based Remote Sensing Image Dehazing(SRD)by 8.5 dB.Particularly noteworthy,on the SateHaze1k dataset,URA-Net attains preeminence in overall performance,yielding defogged images characterized by consistent visual quality.This underscores the contribution of the research to the advancement of remote sensing technology,providing a robust and efficient solution for alleviating the adverse effects of haze on image quality. 展开更多
Keywords: remote sensing image; image dehazing; deep learning; feature fusion
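The gated skip connections described above can be illustrated with a small sketch. This is not the authors' implementation; it is a minimal NumPy illustration of the general idea, with the gate parameters (`w_skip`, `w_up`, `b`) chosen arbitrarily: a sigmoid gate decides, per pixel, how much of the encoder (skip) feature versus the decoder (upsampled) feature is passed on.

```python
import numpy as np

def gated_skip(skip, up, w_skip=1.0, w_up=-1.0, b=0.0):
    """Blend an encoder skip feature with a decoder feature via a per-pixel gate.

    A sigmoid gate g in (0, 1) is computed from both inputs, and the output is
    the elementwise convex combination g * skip + (1 - g) * up.
    """
    g = 1.0 / (1.0 + np.exp(-(w_skip * skip + w_up * up + b)))  # per-pixel gate
    return g * skip + (1.0 - g) * up

# Toy 4x4 feature maps standing in for one channel of encoder/decoder output
skip = np.linspace(0.0, 1.0, 16).reshape(4, 4)
up = np.full((4, 4), 0.5)
fused = gated_skip(skip, up)
```

Because the gate lies strictly in (0, 1), every fused pixel falls between the two input values, so the connection can only mix the branches, never amplify one. In the paper this gate would be learned (e.g., by a convolution) rather than fixed.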
Research on Data Fusion of Adaptive Weighted Multi-Source Sensor (Cited by: 3)
18
Authors: Donghui Li, Cong Shen, Xiaopeng Dai, Xinghui Zhu, Jian Luo, Xueting Li, Haiwen Chen, Zhiyao Liang 《Computers, Materials & Continua》 SCIE EI, 2019, Issue 9, pp. 1217-1231 (15 pages)
Data fusion can effectively process multi-sensor information to obtain more accurate and reliable results than a single sensor. The water-quality data in the environment come from different sensors, so the data must be fused. In our research, a self-adaptive weighted data fusion method is used to integrate the pH value, temperature, dissolved oxygen, and NH3 concentration data of the water-quality environment. Based on the fusion, the Grubbs method is used to detect abnormal data so as to provide data support for estimation, prediction, and early warning of water quality.
Keywords: adaptive weighting; multi-source sensor; data fusion; loss-of-data processing; Grubbs elimination
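The abstract's two steps, inverse-variance adaptive weighting and Grubbs outlier screening, can be sketched as follows. This is the generic textbook formulation, not the paper's code; the sensor values below are made up for illustration.

```python
import numpy as np

def adaptive_weights(readings):
    """Weight each sensor inversely to its sample variance, normalized to sum to 1."""
    inv_var = np.array([1.0 / np.var(r, ddof=1) for r in readings])
    return inv_var / inv_var.sum()

def fuse(readings):
    """Fuse per-sensor means using the adaptive weights."""
    w = adaptive_weights(readings)
    means = np.array([np.mean(r) for r in readings])
    return float(w @ means)

def grubbs_statistic(x):
    """G = max |x_i - mean| / s; compare against a tabulated critical value."""
    x = np.asarray(x, dtype=float)
    return float(np.abs(x - x.mean()).max() / x.std(ddof=1))

# Two hypothetical pH sensors: the first is less noisy and should dominate
sensor_a = np.array([7.00, 7.05, 6.95, 7.00])
sensor_b = np.array([6.50, 7.50, 7.00, 8.00])
w = adaptive_weights([sensor_a, sensor_b])
fused_ph = fuse([sensor_a, sensor_b])
```

A reading set whose Grubbs statistic exceeds the critical value for the given sample size and significance level (about 1.715 for n = 5 at α = 0.05) would be flagged as abnormal and excluded before fusion.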
D-SS Frame: deep spectral-spatial feature extraction and fusion for classification of panchromatic and multispectral images (Cited by: 2)
19
Authors: Teffahi Hanane, Yao Hongxun 《High Technology Letters》 EI CAS, 2018, Issue 4, pp. 378-386 (9 pages)
Facing the very high-resolution (VHR) image classification problem, a feature extraction and fusion framework is presented for VHR panchromatic and multispectral image classification based on deep learning techniques. The proposed approach combines spectral and spatial information based on the fusion of features extracted from panchromatic (PAN) and multispectral (MS) images using a sparse autoencoder and its deep version. There are three steps in the proposed method: the first is to extract spatial information from the PAN image, and the second is to describe spectral information from the MS image. Finally, in the third step, the features obtained from the PAN and MS images are concatenated directly as a simple fusion feature. Classification is performed using a support vector machine (SVM), and experiments carried out on two very high spatial resolution datasets (MS and PAN images from the WorldView-2 satellite) indicate that the classifier provides an efficient solution and demonstrate that fusing the features extracted by deep learning techniques from PAN and MS images performs better than using these techniques separately. In addition, this framework shows that deep learning models can effectively extract and fuse spatial and spectral information, and have great potential to achieve higher accuracy for classification of multispectral and panchromatic images.
Keywords: image classification; feature extraction (FE); feature fusion; sparse autoencoder; stacked sparse autoencoder; support vector machine (SVM); multispectral (MS) image; panchromatic (PAN) image
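The third step of the pipeline, concatenating PAN-derived spatial features with MS-derived spectral features into one fusion vector, is simple enough to sketch. The encoder here is a toy single-layer sigmoid encoder with random weights standing in for the paper's trained sparse autoencoders; the dimensions (a 3×3 PAN patch, 4 MS bands, 6 hidden units per encoder) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W, b):
    """Sigmoid hidden activation of a (toy, untrained) autoencoder encoder."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

# Stand-in encoder weights: 9-pixel PAN patch -> 6 spatial features,
# 4-band MS pixel -> 6 spectral features
W_pan, b_pan = rng.normal(size=(9, 6)), np.zeros(6)
W_ms, b_ms = rng.normal(size=(4, 6)), np.zeros(6)

def fuse_features(pan_patch, ms_pixel):
    """Concatenate spatial and spectral features into one vector for the SVM."""
    f_spatial = encode(pan_patch.ravel(), W_pan, b_pan)
    f_spectral = encode(ms_pixel, W_ms, b_ms)
    return np.concatenate([f_spatial, f_spectral])

fused = fuse_features(np.full((3, 3), 0.5), np.array([0.2, 0.4, 0.6, 0.8]))
```

In the full pipeline, one such fused vector per pixel would then be fed to the SVM classifier.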
Research on Multi-Scale Feature Fusion Network Algorithm Based on Brain Tumor Medical Image Classification
20
Authors: Yuting Zhou, Xuemei Yang, Junping Yin, Shiqi Liu 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 6, pp. 5313-5333 (21 pages)
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient disappearance and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve classification performance. On the brain glioma risk grading dataset, ablation and comparison experiments show that the proposed HMAC-Net performs best in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators. On a skin cancer classification dataset, generalization experiments show that HMAC-Net generalizes well.
Keywords: medical image classification; feature fusion; Transformer
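The channel fusion block's core move, pooling each channel of the concatenated global/local features to a scalar and reweighting channels through a sigmoid gate, can be sketched in squeeze-and-excitation style. This is a generic illustration under assumed shapes, not HMAC-Net's actual block (which additionally uses a convolutional attention mechanism and a residual inverse multi-layer perceptron).

```python
import numpy as np

def channel_attention_fuse(global_feat, local_feat):
    """Concatenate two (C, H, W) feature stacks and reweight each channel.

    Each channel is squeezed to its spatial mean, passed through a sigmoid to
    produce a gate in (0, 1), and the whole channel is scaled by its gate.
    """
    f = np.concatenate([global_feat, local_feat], axis=0)  # (C1 + C2, H, W)
    squeeze = f.mean(axis=(1, 2))                          # one scalar per channel
    gate = 1.0 / (1.0 + np.exp(-squeeze))                  # sigmoid channel weights
    return f * gate[:, None, None]

g = np.ones((2, 4, 4))          # toy global features, 2 channels
l = np.full((3, 4, 4), 0.25)    # toy local features, 3 channels
out = channel_attention_fuse(g, l)
```

Channels with a larger pooled response receive a gate closer to 1 and pass through almost unchanged, while weak channels are attenuated; in the paper, the squeeze would feed a small learned network rather than the raw sigmoid used here.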