Journal Articles
10,535 articles found
Research and Realization of Medical Image Fusion Based on Three-Dimensional Reconstruction (Citations: 5)
1
Authors: TAO Ling, QIAN Zhi-yu, CHEN Chun-xiao. Chinese Journal of Biomedical Engineering (English Edition), 2007, No. 3, pp. 117-122 (6 pages)
A new medical image fusion technique based on three-dimensional reconstruction is presented. After reconstruction, the three-dimensional volume data are normalized by the same three-dimensional coordinate conversion and intercepted by setting up a cutting plane that includes the anatomical structure; as a result, two images in full spatial and geometric registration are obtained, and these images are finally fused. Compared with the traditional two-dimensional fusion technique, the three-dimensional fusion technique not only resolves the different problems that exist in the two kinds of images, but also avoids the registration error that arises when the two kinds of images have different scan and imaging parameters. The research shows that this fusion technique is more accurate and free of registration error, so it is better suited to fusing arbitrary medical images from different equipment.
Keywords: medical image, volume data, three-dimensional reconstruction, image cutting, image fusion
Training image analysis for three-dimensional reconstruction of porous media
2
Authors: 滕奇志, 杨丹, 徐智, 李征骥, 何小海. Journal of Southeast University (English Edition) (EI, CAS), 2012, No. 4, pp. 415-421 (7 pages)
In order to obtain a better sandstone three-dimensional (3D) reconstruction result which is more similar to the original sample, an algorithm based on the stationarity of a two-dimensional (2D) training image is proposed. Second-order statistics based on texture features are analyzed to evaluate the scale stationarity of the training image. The multiple-point statistics of the training image are applied to obtain the multiple-point statistics stationarity estimation by the multi-point density function. The results show that the reconstructed 3D structures are closer to reality when the training image has better scale stationarity and multiple-point statistics stationarity, as indicated by local percolation probability and two-point probability. Moreover, training images with higher multiple-point statistics stationarity and lower scale stationarity are likely to give results closer to the real 3D structure, and vice versa. Thus, stationarity analysis of the training image has far-reaching significance in choosing a better 2D thin-section image for the 3D reconstruction of porous media. In particular, high-order statistics perform better than low-order statistics.
Keywords: three-dimensional reconstruction, training image, stationarity, porous media, multiple-point statistics
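As an editorial illustration of one of the stationarity indicators mentioned in this abstract, the two-point probability of a segmented (pore/grain) thin-section image can be estimated along one direction as follows; this NumPy sketch is not the authors' code, and the lag range is an assumed value.

import numpy as np

def two_point_probability(img, max_lag=32):
    # S2(r): probability that two pixels separated by lag r along x are both pore
    phase = (img > 0).astype(float)           # binary pore/grain indicator
    s2 = [float(np.mean(phase * phase))]      # r = 0
    for r in range(1, max_lag):
        s2.append(float(np.mean(phase[:, :-r] * phase[:, r:])))
    return np.array(s2)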
Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding (Citations: 1)
3
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 1441-1461 (21 pages)
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion, Res2Net-Transformer, infrared image, visible image
CAEFusion: A New Convolutional Autoencoder-Based Infrared and Visible Light Image Fusion Algorithm (Citations: 1)
4
Authors: Chun-Ming Wu, Mei-Ling Ren, Jin Lei, Zi-Mu Jiang. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2857-2872 (16 pages)
To address the issues of incomplete information, blurred details, loss of details, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. The region attention module is meant to extract the background feature map based on the distinct properties of the background feature map and the detail feature map. A multi-scale convolution attention module is suggested to enhance the communication of feature information. At the same time, a feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. This study uses three available datasets from TNO, FLIR, and NIR to perform thorough quantitative and qualitative trials against five additional algorithms. The methods are assessed based on four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were done on the M3FD dataset to further verify the algorithm's performance in comparison with five other algorithms, with accuracy evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5). Comprehensive experimental findings show that CAEFusion performs well on subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
Keywords: image fusion, deep learning, auto-encoder (AE), infrared, visible light
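The four indicators named in this abstract (EN, SD, SF, AG) have standard definitions; a minimal NumPy sketch of how they are commonly computed for an 8-bit grayscale fused image is given below. It is an editorial illustration, not the paper's evaluation code.

import numpy as np

def fusion_metrics(img):
    # EN, SD, SF and AG for an 8-bit grayscale fused image
    img = img.astype(np.float64)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    en = -np.sum(p[p > 0] * np.log2(p[p > 0]))            # information entropy
    sd = img.std()                                         # standard deviation
    rf = np.diff(img, axis=1)                              # row differences
    cf = np.diff(img, axis=0)                              # column differences
    sf = np.sqrt(np.mean(rf ** 2) + np.mean(cf ** 2))      # spatial frequency
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))         # average gradient
    return en, sd, sf, ag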
Mangrove monitoring and extraction based on multi-source remote sensing data:a deep learning method based on SAR and optical image fusion
5
Authors: Yiheng Xie, Xiaoping Rui, Yarong Zou, Heng Tang, Ninglei Ouyang. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2024, No. 9, pp. 110-121 (12 pages)
Mangroves are indispensable to coastlines, maintaining biodiversity, and mitigating climate change. Therefore, improving the accuracy of mangrove information identification is crucial for their ecological protection. Aiming at the limited morphological information of synthetic aperture radar (SAR) images, which are strongly affected by noise, and the susceptibility of optical images to weather and lighting conditions, this paper proposes a pixel-level weighted fusion method for SAR and optical images. Image fusion enhances the target features and makes mangrove monitoring more comprehensive and accurate. To address the problem of high similarity between mangrove forests and other forests, this paper builds on the U-Net convolutional neural network and adds an attention mechanism in the feature extraction stage so that the model pays more attention to the mangrove vegetation area in the image. In order to accelerate convergence and normalize the input, a batch normalization (BN) layer and a Dropout layer are added after each convolutional layer. Since mangroves are a minority class in the image, an improved cross-entropy loss function is introduced to improve the model's ability to recognize mangroves. The AttU-Net model for mangrove recognition in high-similarity environments is thus constructed on the basis of the fused images. Comparison experiments show that the overall accuracy of the improved U-Net model trained on the fused images is significantly higher. Based on the fused images, the recognition results of the proposed AttU-Net model are compared with its benchmark model, U-Net, and with the Dense-Net, Res-Net, and Seg-Net methods. The AttU-Net model captured mangroves' complex structures and textural features in images more effectively. The average OA, F1-score, and Kappa coefficient in the four tested regions were 94.406%, 90.006%, and 84.045%, significantly higher than the other methods. This method can provide technical support for the monitoring and protection of mangrove ecosystems.
Keywords: image fusion, SAR image, optical image, mangrove, deep learning, attention mechanism
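The pixel-level weighted fusion step described in this abstract can be sketched in a few lines; the 0.4/0.6 weighting below is an assumed illustrative value (the paper derives its own weights), and both inputs are assumed to be co-registered single bands.

import numpy as np

def weighted_fusion(sar, optical, w_sar=0.4):
    # normalise each co-registered band to [0, 1], then blend pixel by pixel
    sar = (sar - sar.min()) / (np.ptp(sar) + 1e-12)
    opt = (optical - optical.min()) / (np.ptp(optical) + 1e-12)
    return w_sar * sar + (1.0 - w_sar) * opt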
A Novel Multi-Stream Fusion Network for Underwater Image Enhancement
6
Authors: Guijin Tang, Lian Duan, Haitao Zhao, Feng Liu. China Communications (SCIE, CSCD), 2024, No. 2, pp. 166-182 (17 pages)
Due to the selective absorption of light and the existence of a large number of floating media in sea water, underwater images often suffer from color casts and detail blurs. It is therefore necessary to perform color correction and detail restoration. However, the existing enhancement algorithms cannot achieve the desired results. In order to solve the above problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from the illumination stream, color stream and structure stream by histogram equalization with contrast limitation, gamma correction and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused; this enhances feature representation in underwater images. In the meantime, a composite loss function including three terms is used to ensure the quality of the enhanced image from the three aspects of color balance, structure preservation and image smoothness. Therefore, the enhanced image is more in line with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison experiments with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results in terms of MSE, PSNR, SSIM, UIQM and UCIQE, and the enhanced images are more similar to their ground-truth images.
Keywords: image enhancement, multi-stream fusion, underwater image
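The three auxiliary streams mentioned in this abstract (illumination, color and structure) correspond to standard preprocessing operations; a hedged OpenCV/NumPy sketch is shown below. The clip limit, gamma value and grey-world white balance are illustrative assumptions, not the authors' settings.

import cv2
import numpy as np

def preprocess_streams(bgr):
    # illumination stream: contrast-limited histogram equalization on the L channel
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    illum = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    # color stream: gamma correction via a lookup table
    gamma = 0.7
    lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], dtype=np.uint8)
    color = cv2.LUT(bgr, lut)
    # structure stream: simple grey-world white balance
    means = bgr.reshape(-1, 3).mean(axis=0)
    structure = np.clip(bgr * (means.mean() / means), 0, 255).astype(np.uint8)
    return illum, color, structure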
Advancements in Remote Sensing Image Dehazing: Introducing URA-Net with Multi-Scale Dense Feature Fusion Clusters and Gated Jump Connection
7
Authors: Hongchi Liu, Xing Deng, Haijian Shao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 9, pp. 2397-2424 (28 pages)
The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle, profoundly impeding their effective utilization across various domains. Dehazing methodologies have emerged as pivotal components of image preprocessing, fostering an improvement in the quality of remote sensing imagery. This enhancement renders remote sensing data more indispensable, thereby enhancing the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze within remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed. This approach materializes as an end-to-end convolutional neural network distinguished by its utilization of multi-scale dense feature fusion clusters and gated jump connections. The essence of the methodology lies in local feature fusion within dense residual clusters, enabling the extraction of pertinent features from both preceding and current local data, depending on contextual demands. The intelligently orchestrated gated structures facilitate the propagation of these features to the decoder, resulting in superior outcomes in haze removal. Empirical validation through extensive experiments substantiates the efficacy of URA-Net, demonstrating its superior performance compared to existing methods when applied to established datasets for remote sensing image defogging. On the RICE-1 dataset, URA-Net achieves a Peak Signal-to-Noise Ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, the Unsupervised Single Image Dehazing (USID) by 8.0 dB, and the Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Particularly noteworthy, on the SateHaze1k dataset, URA-Net attains preeminence in overall performance, yielding defogged images characterized by consistent visual quality. This underscores the contribution of the research to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
Keywords: remote sensing image, image dehazing, deep learning, feature fusion
Development of a toroidal soft x-ray imaging system and application for investigating three-dimensional plasma on J-TEXT
8
Authors: 赵传旭, 李建超, 张晓卿, 王能超, 丁永华, 杨州军, 江中和, 严伟, 李杨波, 毛飞越, 任正康, the J-TEXT Team. Plasma Science and Technology (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 94-99 (6 pages)
A toroidal soft x-ray imaging (T-SXRI) system has been developed to investigate three-dimensional (3D) plasma physics on J-TEXT. This T-SXRI system consists of three sets of SXR arrays. Two sets are newly developed and located on the vacuum chamber wall at toroidal positions φ of 126.4° and 272.6°, respectively, while one set was established previously at φ = 65.5°. Each set of SXR arrays consists of three arrays viewing the plasma poloidally, and hence can be used separately to obtain SXR images via the tomographic method. The sawtooth precursor oscillations are measured by T-SXRI, and the corresponding images of perturbative SXR signals are successfully reconstructed at these three toroidal positions, hence providing measurement of the 3D structure of precursor oscillations. The observed 3D structure is consistent with the helical structure of the m/n = 1/1 mode. The experimental observation confirms that the T-SXRI system is able to observe 3D structures in the J-TEXT plasma.
Keywords: SXR imaging, J-TEXT tokamak, three-dimensional measurement, MHD
Research on Multi-Scale Feature Fusion Network Algorithm Based on Brain Tumor Medical Image Classification
9
Authors: Yuting Zhou, Xuemei Yang, Junping Yin, Shiqi Liu. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 5313-5333 (21 pages)
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient disappearance and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve the classification performance. On the brain glioma risk grading dataset, the results of the ablation and comparison experiments show that the proposed HMAC-Net has the best performance in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators. On a skin cancer classification dataset, the generalization experiment results show that the proposed HMAC-Net generalizes well.
Keywords: medical image classification, feature fusion, Transformer
Multimodality Medical Image Fusion Based on Pixel Significance with Edge-Preserving Processing for Clinical Applications
10
Authors: Bhawna Goyal, Ayush Dogra, Dawa Chyophel Lepcha, Rajesh Singh, Hemant Sharma, Ahmed Alkhayyat, Manob Jyoti Saikia. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4317-4342 (26 pages)
Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, by considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique that enables the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify the significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weight computations are performed using these reconstructed images, and the weights are then fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared with other competing techniques in both qualitative and quantitative evaluation. The proposed method also offers lower computational complexity and execution time while improving diagnostic computing accuracy; owing to the low complexity of the fusion algorithm, the method is efficient in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of providing detailed information, edge contour, and overall contrast.
Keywords: image fusion, fractal data analysis, biomedical diseases research, multiresolution analysis, numerical analysis
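The cross-bilateral filtering step described in this abstract (each image smoothed with a kernel steered by the other, then subtracted to yield a detail layer) can be approximated with the joint bilateral filter from opencv-contrib; the sketch below uses assumed filter parameters and is not the authors' implementation.

import cv2

def cbf_details(img_a, img_b, d=9, sigma_color=25, sigma_space=9):
    # smooth each grayscale image using the *other* image to steer the kernel
    smooth_a = cv2.ximgproc.jointBilateralFilter(img_b, img_a, d, sigma_color, sigma_space)
    smooth_b = cv2.ximgproc.jointBilateralFilter(img_a, img_b, d, sigma_color, sigma_space)
    # the detail layers are the residuals between the originals and the smoothed images
    return cv2.subtract(img_a, smooth_a), cv2.subtract(img_b, smooth_b)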
PSMFNet:Lightweight Partial Separation and Multiscale Fusion Network for Image Super-Resolution
11
Authors: Shuai Cao, Jianan Liang, Yongjun Cao, Jinglun Huang, Zhishu Yang. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 1491-1509 (19 pages)
The employment of deep convolutional neural networks has recently contributed to significant progress in single image super-resolution (SISR) research. However, the high computational demands of most SR techniques hinder their applicability to edge devices, despite their satisfactory reconstruction performance. These methods commonly use standard convolutions, which increase the convolutional operation cost of the model. In this paper, a lightweight Partial Separation and Multiscale Fusion Network (PSMFNet) is proposed to alleviate this problem. Specifically, this paper introduces partial convolution (PConv), which reduces the redundant convolution operations throughout the model by separating some of the features of an image while retaining the features useful for image reconstruction. Additionally, existing methods have not fully utilized the rich feature information, leading to information loss and a reduced ability to learn feature representations. Inspired by self-attention, this paper develops a multiscale feature fusion block (MFFB), which can better utilize the non-local features of an image. The MFFB can learn long-range dependencies along the spatial dimension and extract features along the channel dimension, thereby obtaining more comprehensive and richer feature information. As the role of the MFFB is to capture rich global features, this paper further introduces an efficient inverted residual block (EIRB) to supplement the local feature extraction ability of PSMFNet. A comprehensive analysis of the experimental results shows that PSMFNet maintains better performance with fewer parameters than state-of-the-art models.
Keywords: deep learning, single image super-resolution, lightweight network, multiscale fusion
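A partial convolution (PConv) in the sense used in this abstract applies a convolution to only a fraction of the channels and passes the rest through untouched; a minimal PyTorch sketch follows, with the 1/4 channel ratio as an assumed default rather than the paper's setting.

import torch
import torch.nn as nn

class PartialConv(nn.Module):
    # convolve only the first (channels // ratio) channels, pass the rest through
    def __init__(self, channels, ratio=4):
        super().__init__()
        self.conv_ch = channels // ratio
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, 3, padding=1, bias=False)

    def forward(self, x):
        x1, x2 = torch.split(x, [self.conv_ch, x.size(1) - self.conv_ch], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

x = torch.randn(1, 64, 32, 32)
print(PartialConv(64)(x).shape)  # torch.Size([1, 64, 32, 32])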
A deep learning fusion model for accurate classification of brain tumours in Magnetic Resonance images
12
Authors: Nechirvan Asaad Zebari, Chira Nadheef Mohammed, Dilovan Asaad Zebari, Mazin Abed Mohammed, Diyar Qader Zeebaree, Haydar Abdulameer Marhoon, Karrar Hameed Abdulkareem, Seifedine Kadry, Wattana Viriyasitavat, Jan Nedoma, Radek Martinek. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 4, pp. 790-804 (15 pages)
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods still fall short despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics; MRI is a vital component of medical diagnosis and requires precise, careful, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training the models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of the two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
Keywords: brain tumour, deep learning, feature fusion model, MRI images, multi-classification
Image Fusion Using Wavelet Transformation and XGboost Algorithm
13
Authors: Shahid Naseem, Tariq Mahmood, Amjad Rehman Khan, Umer Farooq, Samra Nawazish, Faten S. Alamri, Tanzila Saba. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 801-817 (17 pages)
Recently, there have been several uses for digital image processing, and image fusion has become a prominent application in the domain of image processing. To create one final image that proves more informative and helpful than the original input images, image fusion merges two or more initial images of the same item. Image fusion aims to produce, enhance, and transform significant elements of the source images into combined images for the sake of human visual perception. Image fusion is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly line robots, with image quality varying depending on the application. This paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), HIS (Hue, Intensity, Saturation), wavelet transform, discrete cosine transform (DCT), dual-tree Complex Wavelet Transform (CWT), and multiple wavelet transform. Image fusion methods integrate data from several source images of an identical target, thereby enhancing information in an extremely efficient manner. More precisely, in imaging techniques, the depth-of-field constraint precludes images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed. The use of these wavelet decomposition and recomposition techniques enables this method to make use of existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images. This study enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. The images are first segmented with the K-Means algorithm; the segmentation aids in identifying specific regions of interest, with Particle Swarm Optimization (PSO) used for trait selection and XGBoost for data classification. Extensive trials confirm the model's strong visual performance, achieving an accuracy of up to 97.067% and providing good objective indicators.
Keywords: image fusion, max-min average, CWT, XGBoost, DCT, inclusive innovations, spatial and frequency domain
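The multi-focus wavelet decomposition/recomposition idea can be sketched with PyWavelets: average the approximation coefficients and keep the larger-magnitude detail coefficients before reconstructing. The wavelet name and level below are assumptions; the paper's full pipeline (including the XGBoost classification stage) is more involved.

import numpy as np
import pywt

def dwt_max_fusion(img_a, img_b, wavelet="db2", level=2):
    # decompose both images, average approximations, keep max-abs details, reconstruct
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                       # approximation: average
    for da, db in zip(ca[1:], cb[1:]):                    # details: max-abs rule
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)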
Enhanced Growth Optimizer and Its Application to Multispectral Image Fusion
14
Authors: Jeng-Shyang Pan, Wenda Li, Shu-Chuan Chu, Xiao Sui, Junzo Watada. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 3033-3062 (30 pages)
The growth optimizer (GO) is an innovative and robust metaheuristic optimization algorithm designed to simulate the learning and reflective processes experienced by individuals as they mature within the social environment. However, the original GO algorithm is constrained by two significant limitations: slow convergence and high memory requirements. This restricts its application to large-scale and complex problems. To address these problems, this paper proposes an enhanced growth optimizer (eGO). In contrast to conventional population-based optimization algorithms, the eGO algorithm utilizes a probabilistic model, designated the virtual population, which is capable of accurately replicating the behavior of actual populations while simultaneously reducing memory consumption. Furthermore, this paper introduces the Lévy flight mechanism, which enhances the diversity and flexibility of the search process, thus further improving the algorithm's global search capability and convergence speed. To verify the effectiveness of the eGO algorithm, a series of experiments were conducted using the CEC2014 and CEC2017 test sets. The results demonstrate that the eGO algorithm outperforms the original GO algorithm and other compact algorithms regarding memory usage and convergence speed, thus exhibiting powerful optimization capabilities. Finally, the eGO algorithm was applied to image fusion. Through a comparative analysis with the existing PSO and GO algorithms and other compact algorithms, the eGO algorithm demonstrates superior performance in image fusion.
Keywords: growth optimizer, probabilistic model, Lévy flight, image fusion
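The Lévy flight mechanism referred to in this abstract is usually implemented with Mantegna's algorithm; a short NumPy sketch is given below, with beta = 1.5 as a commonly assumed exponent rather than the paper's setting.

import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    # Mantegna's algorithm: step = u / |v|^(1/beta), u ~ N(0, sigma_u), v ~ N(0, 1)
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)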
DCFNet:An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
15
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 1103-1128 (26 pages)
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has limitations in two key aspects. Firstly, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Secondly, the value of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded in methods that combine CNN and Transformer. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of Swin Transformer and CNN to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, the Channel-wise Cross-fusion Transformer (CCT) serves to aggregate multi-scale features, and the Feature Fusion Module (FFM) is employed to effectively aggregate dual-branch prominent feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the significance of the channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet exhibits enhanced segmentation accuracy. Compared with other state-of-the-art (SOTA) methods, our segmentation framework is highly competitive. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial diagnoses of lesion areas in advance.
Keywords: convolutional neural networks, Swin Transformer, dual branch, medical image segmentation, feature cross fusion
A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
16
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, No. 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network; the dilated depthwise convolution in the DDSC layer effectively expands the receptive field of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet, image classification, lightweight convolutional neural network, depthwise dilated separable convolution, hierarchical multi-scale feature fusion
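The depthwise dilated separable convolution (DDSC) described in this abstract is a dilated depthwise convolution followed by a 1x1 pointwise convolution; a minimal PyTorch sketch of such a layer is shown below. Channel counts and the dilation rate are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class DDSConv(nn.Module):
    # dilated depthwise 3x3 convolution followed by a 1x1 pointwise convolution
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
print(DDSConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])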
Multi-Modal Medical Image Fusion Based on Improved Parameter Adaptive PCNN and Latent Low-Rank Representation
17
Authors: Zirui Tang, Xianchun Zhou. Instrumentation, 2024, No. 2, pp. 53-63 (11 pages)
Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical image fusion solutions to protect image details and significant information, a new multimodality medical image fusion method (NSST-PAPCNN-LatLRR) is proposed in this paper. Firstly, the high- and low-frequency sub-band coefficients are obtained by decomposing the source image using NSST. Then, the latent low-rank representation algorithm is used to process the low-frequency sub-band coefficients; an improved PAPCNN algorithm is also proposed for the fusion of the high-frequency sub-band coefficients. The improved PAPCNN model is based on the automatic setting of the parameters, and the optimal method is configured for the time decay factor αe. The experimental results show that, in comparison with five mainstream fusion algorithms, the new algorithm significantly improves the visual effect over the comparison algorithms, enhances the ability to characterize important information in images, and further improves the ability to protect detailed information; the new algorithm achieves at least four first places across six objective indexes.
Keywords: image fusion, improved parameter adaptive PCNN, non-subsampled shearlet transform, latent low-rank representation
Three-dimensional positions of scattering centers reconstruction from multiple SAR images based on radargrammetry (Citations: 3)
18
Authors: 钟金荣, 文贡坚, 回丙伟, 李德仁. Journal of Central South University (SCIE, EI, CAS, CSCD), 2015, No. 5, pp. 1776-1789 (14 pages)
A method and procedure are presented to reconstruct the three-dimensional (3D) positions of scattering centers from multiple synthetic aperture radar (SAR) images. Firstly, two-dimensional (2D) attributed scattering centers of targets are extracted from 2D SAR images. Secondly, a similarity measure is developed based on the 2D attributed scattering centers' location and type and on the radargrammetry principle between multiple SAR images. Using this similarity, 2D scattering centers are associated to obtain candidate 3D scattering centers. Thirdly, these candidate scattering centers are clustered in 3D space to reconstruct the final 3D positions. Compared with existing methods, the proposed method is capable of describing distributed scattering centers, reduces false and missing 3D scattering centers, and places fewer restrictions on modeling data. Finally, experimental results demonstrate the effectiveness of the proposed method.
Keywords: multiple synthetic aperture radar (SAR) images, three-dimensional scattering center, position reconstruction, radargrammetry
Three-Dimensional Model Reconstruction of Nonwovens from Multi-Focus Images (Citations: 2)
19
Authors: DONG Gaige, WANG Rongwu, LI Chengzu, YOU Xiangyin. Journal of Donghua University (English Edition) (CAS), 2022, No. 3, pp. 185-192 (8 pages)
The three-dimensional (3D) model is of great significance for analyzing the performance of nonwovens. However, existing modelling methods could not reconstruct the 3D structure of nonwovens at low cost. A new method based on deep learning was proposed to reconstruct 3D models of nonwovens from multi-focus images. A convolutional neural network was trained to extract clear fibers from sequence images. Image processing algorithms were used to obtain the radius, the central axis, and the depth information of fibers from the extraction results. Based on this information, 3D models were built in 3D space. Furthermore, self-developed algorithms optimized the central axis and depth of the fibers, making the fibers more realistic and continuous. The method can reconstruct 3D models of nonwovens conveniently and at low cost.
Keywords: three-dimensional (3D) model reconstruction, deep learning, microscopy, nonwoven, image processing
An image encryption scheme based on three-dimensional Brownian motion and chaotic system (Citations: 6)
20
Authors: Xiu-Li Chai, Zhi-Hua Gan, Ke Yuan, Yang Lu, Yi-Ran Chen. Chinese Physics B (SCIE, EI, CAS, CSCD), 2017, No. 2, pp. 99-113 (15 pages)
At present, many chaos-based image encryption algorithms have proved to be unsafe, and few encryption schemes permute the plain images as three-dimensional (3D) bit matrices, so bits cannot move to any position and their movement range is limited. Motivated by this, we present a novel image encryption algorithm based on 3D Brownian motion and chaotic systems, adopting the confusion-diffusion architecture. Firstly, the plain image is converted into a 3D bit matrix and split into sub-blocks. Secondly, block confusion based on 3D Brownian motion (BCB3DBM) is proposed to permute the positions of the bits within the sub-blocks, with the direction of particle movement generated by a logistic-tent system (LTS). Furthermore, block confusion based on position sequence group (BCBPSG) is introduced: a four-order memristive chaotic system is utilized to produce random chaotic sequences, the sequences are sorted, a position sequence group is chosen based on the plain image, and the sub-blocks are then confused. The proposed confusion strategy can change the positions of the bits and modify their weights, effectively improving the statistical performance of the algorithm. Finally, a pixel-level confusion is employed to enhance the encryption effect. The initial values and parameters of the chaotic systems are produced by the SHA-256 hash of the plain image. Simulation results and security analyses illustrate that our algorithm has excellent encryption performance in terms of security and speed.
Keywords: image encryption, logistic-tent system (LTS), memristive chaotic system, three-dimensional (3D) Brownian motion
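Two ingredients of the scheme, deriving a chaotic initial value from the SHA-256 hash of the plain image and iterating the logistic-tent system (LTS), can be sketched as follows. The LTS form and the key-derivation details below are common variants assumed for illustration; the paper's exact construction may differ.

import hashlib
import numpy as np

def lts_sequence(image_bytes, length, r=3.99):
    # derive x0 in (0, 1) from the SHA-256 digest of the plain image
    digest = hashlib.sha256(image_bytes).digest()
    x = (int.from_bytes(digest[:8], "big") % (10 ** 15)) / 10 ** 15
    seq = np.empty(length)
    for i in range(length):
        # one common logistic-tent system form, applied piecewise around x = 0.5
        if x < 0.5:
            x = (r * x * (1 - x) + (4 - r) * x / 2) % 1.0
        else:
            x = (r * x * (1 - x) + (4 - r) * (1 - x) / 2) % 1.0
        seq[i] = x
    return seq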