Journal Articles
15,050 articles found
1. Pyramid Separable Channel Attention Network for Single Image Super-Resolution
Authors: Congcong Ma, Jiaqi Mi, Wanlin Gao, Sha Tao. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4687-4701 (15 pages)
Single Image Super-Resolution (SISR) technology aims to reconstruct a clear, high-resolution image with more information from an input low-resolution image that is blurry and contains less information. This technology has significant research value and is widely used in fields such as medical imaging, satellite image processing, and security surveillance. Despite significant progress in existing research, challenges remain in reconstructing clear and complex texture details, with issues such as edge blurring and artifacts still present. The visual perception effect still needs further enhancement. Therefore, this study proposes a Pyramid Separable Channel Attention Network (PSCAN) for the SISR task. This method designs a convolutional backbone network composed of Pyramid Separable Channel Attention blocks to effectively extract and fuse multi-scale features. This expands the model's receptive field, reduces resolution loss, and enhances the model's ability to reconstruct texture details. Additionally, an innovative artifact loss function is designed to better distinguish between artifacts and real edge details, reducing artifacts in the reconstructed images. We conducted comprehensive ablation and comparative experiments on the Arabidopsis root image dataset and several public datasets. The experimental results show that the proposed PSCAN method achieves the best-known performance in both subjective visual effects and objective evaluation metrics, with improvements of 0.84 in Peak Signal-to-Noise Ratio (PSNR) and 0.017 in Structural Similarity Index (SSIM). This demonstrates that the method can effectively preserve high-frequency texture details, reduce artifacts, and achieve good generalization performance.
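The abstract describes the Pyramid Separable Channel Attention block only at a high level. As a rough illustration of how a multi-scale separable block with channel attention of this kind is commonly assembled, the following is a minimal PyTorch-style sketch; the class name, branch kernel sizes, and reduction ratio are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a pyramid separable channel-attention block (not the PSCAN code).
import torch
import torch.nn as nn

class PyramidSeparableChannelAttention(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7), reduction: int = 16):
        super().__init__()
        # Pyramid of depthwise (separable) convolutions with growing receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        ])
        # Pointwise convolution fuses the multi-scale branches back to `channels`.
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, kernel_size=1)
        # Squeeze-and-excitation style channel attention.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi_scale)
        return x + fused * self.attention(fused)  # residual connection keeps low-level detail
```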
Keywords: deep learning; single image super-resolution; artifacts; texture details
2. Multi-prior physics-enhanced neural network enables pixel super-resolution and twin-image-free phase retrieval from single-shot hologram
Authors: Xuan Tian, Runze Li, Tong Peng, Yuge Xue, Junwei Min, Xing Li, Chen Bai, Baoli Yao. Opto-Electronic Advances (SCIE, EI, CAS, CSCD), 2024, No. 9, pp. 22-38 (17 pages)
Digital in-line holographic microscopy (DIHM) is a widely used interference technique for real-time reconstruction of living cells' morphological information with a large space-bandwidth product and a compact setup. However, the need for a larger detector pixel size to improve imaging photosensitivity, field-of-view, and signal-to-noise ratio often leads to the loss of sub-pixel information and limited pixel resolution. Additionally, the twin-image appearing in the reconstruction severely degrades the quality of the reconstructed image. The deep learning (DL) approach has emerged as a powerful tool for phase retrieval in DIHM, effectively addressing these challenges. However, most DL-based strategies are data-driven or end-to-end network approaches, suffering from excessive data dependency and limited generalization ability. Herein, a novel multi-prior physics-enhanced neural network with pixel super-resolution (MPPN-PSR) for phase retrieval in DIHM is proposed. It encapsulates the physical model prior, sparsity prior and deep image prior in an untrained deep neural network. The effectiveness and feasibility of MPPN-PSR are demonstrated by comparing it with other traditional and learning-based phase retrieval methods. With the capabilities of pixel super-resolution, twin-image elimination and high throughput jointly from a single-shot intensity measurement, the proposed DIHM approach is expected to be widely adopted in biomedical workflows and industrial measurement.
Keywords: optical microscopy; quantitative phase imaging; digital holographic microscopy; deep learning; super-resolution
3. Shearlet Transform Residual Learning Approach for Single-Image Super-Resolution
Authors: Israa Ismail, Ghada Eltaweel, Mohamed Meselhy Eltoukhy. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 3193-3209 (17 pages)
Super-resolution techniques are employed to enhance image resolution by reconstructing high-resolution images from one or more low-resolution inputs. Super-resolution is of paramount importance in the context of remote sensing, satellite, aerial, security and surveillance imaging. Super-resolution remote sensing imagery is essential for surveillance and security purposes, enabling authorities to monitor remote or sensitive areas with greater clarity. This study introduces a single-image super-resolution approach for remote sensing images, utilizing deep shearlet residual learning in the shearlet transform domain and incorporating the Enhanced Deep Super-Resolution network (EDSR). Unlike conventional approaches that estimate residuals between high- and low-resolution images, the proposed approach calculates the shearlet coefficients for the desired high-resolution image from the provided low-resolution image instead of estimating a residual image between the high- and low-resolution images. The shearlet transform is chosen for its excellent sparse approximation capabilities. Initially, remote sensing images are transformed into the shearlet domain, which divides the input image into low and high frequencies. The shearlet coefficients are fed into the EDSR network. The high-resolution image is subsequently reconstructed using the inverse shearlet transform. The incorporation of the EDSR network enhances training stability, leading to improved generated images. The experimental results from the Deep Shearlet Residual Learning approach demonstrate its superior performance in remote sensing image recovery, effectively restoring both global topology and local edge detail information, thereby enhancing image quality. Compared to other networks, our proposed approach outperforms the state-of-the-art in terms of image quality, achieving an average peak signal-to-noise ratio of 35 and a structural similarity index measure of approximately 0.9.
Keywords: super-resolution; shearlet transform; shearlet coefficients; enhanced deep super-resolution network
4. RGB-guided hyperspectral image super-resolution with deep progressive learning
Authors: Tao Zhang, Ying Fu, Liwei Huang, Siyuan Li, Shaodi You, Chenggang Yan. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 3, pp. 679-694 (16 pages)
Due to hardware limitations, existing hyperspectral (HS) cameras often suffer from low spatial/temporal resolution. Recently, it has become prevalent to super-resolve a low-resolution (LR) HS image into a high-resolution (HR) HS image with the guidance of an HR RGB (or multispectral) image. Previous approaches for this guided super-resolution task often model the intrinsic characteristics of the desired HR HS image using hand-crafted priors. Recently, researchers have paid more attention to deep learning methods with direct supervised or unsupervised learning, which exploit the deep prior only from the training dataset or the testing data. In this article, an efficient convolutional neural network-based method is presented to progressively super-resolve an HS image with RGB image guidance. Specifically, a progressive HS image super-resolution network is proposed, which progressively super-resolves the LR HS image with pixel-shuffled HR RGB image guidance. Then, the super-resolution network is progressively trained with supervised pre-training and unsupervised adaptation, where supervised pre-training learns the general prior on training data and unsupervised adaptation generalises the general prior to a specific prior for variant testing scenes. The proposed method can effectively exploit priors from the training dataset and the testing HS and RGB images with a spectral-spatial constraint. It has good generalisation capability, especially for blind HS image super-resolution. Comprehensive experimental results show that the proposed deep progressive learning method outperforms the existing state-of-the-art methods for HS image super-resolution in both non-blind and blind cases.
Keywords: computer vision; deep neural networks; image processing; image resolution; unsupervised learning
5. IRMIRS: Inception-ResNet-Based Network for MRI Image Super-Resolution (Cited: 1)
Authors: Wazir Muhammad, Zuhaibuddin Bhutto, Salman Masroor, Murtaza Hussain Shaikh, Jalal Shah, Ayaz Hussain. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 8, pp. 1121-1142 (22 pages)
Medical image super-resolution is a fundamental challenge due to absorption and scattering in tissues. These challenges are increasing the interest in the quality of medical images. Recent research has proven that the rapid progress in convolutional neural networks (CNNs) has achieved superior performance in the area of medical image super-resolution. However, traditional CNN approaches use interpolation techniques as a preprocessing stage to enlarge low-resolution magnetic resonance (MR) images, which adds extra noise to the models and increases memory consumption. Furthermore, conventional deep CNN approaches connect layers in series to create a deeper model, where the later layers cannot receive complete information and work as dead layers. In this paper, we propose an Inception-ResNet-based Network for MRI Image Super-Resolution known as IRMRIS. In our proposed approach, bicubic interpolation is replaced with a deconvolution layer to learn the upsampling filters. Furthermore, a residual skip connection with the Inception block is used to reconstruct a high-resolution output image from a low-quality input image. Quantitative and qualitative evaluations of the proposed method are supported through extensive experiments, showing that it reconstructs sharper and cleaner texture details as compared to the state-of-the-art methods.
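Replacing bicubic pre-interpolation with a learned deconvolution (transposed convolution) layer is a standard design choice; the following is a minimal PyTorch-style sketch of such a learned ×2 upsampling layer, where the channel counts and layer sizes are illustrative assumptions rather than the paper's configuration.

```python
# Illustrative learned upsampling: a transposed convolution replaces bicubic interpolation.
import torch
import torch.nn as nn

# kernel_size=4, stride=2, padding=1 exactly doubles the spatial resolution.
upsample = nn.ConvTranspose2d(in_channels=64, out_channels=64,
                              kernel_size=4, stride=2, padding=1)

low_res_features = torch.randn(1, 64, 32, 32)   # e.g., features from a low-resolution MR slice
high_res_features = upsample(low_res_features)  # -> (1, 64, 64, 64); the upsampling filters are learned, not fixed
```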
Keywords: super-resolution; magnetic resonance imaging; ResNet block; inception block; convolutional neural network; deconvolution layer
6. Residual Feature Attentional Fusion Network for Lightweight Chest CT Image Super-Resolution (Cited: 1)
Authors: Kun Yang, Lei Zhao, Xianghui Wang, Mingyang Zhang, Linyan Xue, Shuang Liu, Kun Liu. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5159-5176 (18 pages)
The diagnosis of COVID-19 requires chest computed tomography (CT). High-resolution CT images can provide more diagnostic information to help doctors better diagnose the disease, so it is of clinical importance to study super-resolution (SR) algorithms applied to CT images to improve their resolution. However, most existing SR algorithms are developed for natural images, which are not suitable for medical images; and most of these algorithms improve the reconstruction quality by increasing the network depth, which is not suitable for machines with limited resources. To alleviate these issues, we propose a residual feature attentional fusion network for lightweight chest CT image super-resolution (RFAFN). Specifically, we design a contextual feature extraction block (CFEB) that can extract CT image features more efficiently and accurately than ordinary residual blocks. In addition, we propose a feature-weighted cascading strategy (FWCS) based on attentional feature fusion blocks (AFFB) to utilize the high-frequency detail information extracted by the CFEB as much as possible by selectively fusing adjacent-level feature information. Finally, we suggest a global hierarchical feature fusion strategy (GHFFS), which can utilize the hierarchical features more effectively than dense concatenation by progressively aggregating the feature information at various levels. Numerous experiments show that our method performs better than most state-of-the-art (SOTA) methods on the COVID-19 chest CT dataset. In detail, the peak signal-to-noise ratio (PSNR) is 0.11 dB and 0.47 dB higher on CTtest1 and CTtest2 at ×3 SR compared to the second-best method, while the number of parameters and multi-adds are reduced by 22K and 0.43G, respectively. Our method can better recover chest CT image quality with fewer computational resources and effectively assist in COVID-19 diagnosis.
Keywords: super-resolution; COVID-19; chest CT; lightweight network; contextual feature extraction; attentional feature fusion
7. Image super-resolution via dynamic network (Cited: 1)
Authors: Chunwei Tian, Xuanyu Zhang, Qi Zhang, Mingming Yang, Zhaojie Ju. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 4, pp. 837-849 (13 pages)
Convolutional neural networks depend on deep network architectures to extract accurate information for image super-resolution. However, the information obtained by these convolutional neural networks cannot completely express predicted high-quality images for complex scenes. A dynamic network for image super-resolution (DSRNet) is presented, which contains a residual enhancement block, a wide enhancement block, a feature refinement block and a construction block. The residual enhancement block is composed of a residual enhanced architecture to facilitate hierarchical features for image super-resolution. To enhance the robustness of the obtained super-resolution model for complex scenes, the wide enhancement block uses a dynamic architecture to learn more robust information and enhance the applicability of the obtained super-resolution model for varying scenes. To prevent interference between components in the wide enhancement block, the refinement block utilises a stacked architecture to accurately learn the obtained features. Also, a residual learning operation is embedded in the refinement block to prevent the long-term dependency problem. Finally, the construction block is responsible for reconstructing high-quality images. The designed heterogeneous architecture can not only facilitate richer structural information, but is also lightweight, which is suitable for mobile digital devices. Experimental results show that our method is more competitive in terms of performance, recovery time of image super-resolution and complexity. The code of DSRNet can be obtained at https://github.com/hellloxiaotian/DSRNet.
Keywords: CNN; dynamic network; image super-resolution; lightweight network
8. A generalized deep neural network approach for improving resolution of fluorescence microscopy images
Authors: Zichen Jin, Qing He, Yang Liu, Kaige Wang. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2024, No. 6, pp. 53-65 (13 pages)
Deep learning is capable of greatly promoting the progress of super-resolution imaging technology in terms of imaging and reconstruction speed, imaging resolution, and imaging flux. This paper proposes a deep neural network based on a generative adversarial network (GAN). The generator employs a U-Net-based network, which integrates DenseNet for the downsampling component. The proposed method has excellent properties: for example, the network model is trained with several different datasets of biological structures; the trained model can improve the imaging resolution of different microscopy imaging modalities, such as confocal imaging and wide-field imaging; and the model demonstrates a generalized ability to improve the resolution of different biological structures even outside the datasets. In addition, experimental results showed that the method improved the resolution of caveolin-coated pits (CCPs) structures from 264 nm to 138 nm, a 1.91-fold increase, and nearly doubled the resolution of DNA molecules imaged while being transported through microfluidic channels.
Keywords: deep learning; super-resolution imaging; generalized model framework; generative adversarial networks; image reconstruction
9. Design of a novel hybrid quantum deep neural network in INEQR images classification
Authors: 王爽, 王柯涵, 程涛, 赵润盛, 马鸿洋, 郭帅. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 6, pp. 230-238 (9 pages)
We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby proposing a novel hybrid quantum deep neural network (HQDNN) for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation (INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The output results of the parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary and three-class classification experiments on the MNIST (Modified National Institute of Standards and Technology) data set. In the first binary classification, the accuracy for 0 and 4 exceeds 98%. We then compare the three-class classification performance with other algorithms; the results on two datasets show that the classification accuracy is higher than that of the quantum deep neural network and a general quantum convolutional neural network.
Keywords: quantum computing; image classification; quantum–classical hybrid neural network; quantum image representation; interpolation
10. CMMCAN: Lightweight Feature Extraction and Matching Network for Endoscopic Images Based on Adaptive Attention
Authors: Nannan Chong, Fan Yang. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2761-2783 (23 pages)
In minimally invasive surgery, endoscopes or laparoscopes equipped with miniature cameras and tools are used to enter the human body for therapeutic purposes through small incisions or natural cavities. However, in clinical operating environments, endoscopic images often suffer from challenges such as low texture, uneven illumination, and non-rigid structures, which affect feature observation and extraction. This can severely impact surgical navigation or clinical diagnosis due to missing feature points in endoscopic images, leading to treatment and postoperative recovery issues for patients. To address these challenges, this paper introduces, for the first time, a Cross-Channel Multi-Modal Adaptive Spatial Feature Fusion (ASFF) module based on the lightweight architecture of EfficientViT. Additionally, a novel lightweight feature extraction and matching network based on an attention mechanism is proposed. This network dynamically adjusts attention weights for cross-modal information from grayscale images and optical flow images through a dual-branch Siamese network. It extracts static and dynamic information features ranging from low-level to high-level, and from local to global, ensuring robust feature extraction across different widths, noise levels, and blur scenarios. Global and local matching are performed through a multi-level cascaded attention mechanism, with cross-channel attention introduced to simultaneously extract low-level and high-level features. Extensive ablation experiments and comparative studies are conducted on the HyperKvasir, EAD, M2caiSeg, CVC-ClinicDB, and UCL synthetic datasets. Experimental results demonstrate that the proposed network improves upon the baseline EfficientViT-B3 model by 75.4% in accuracy (Acc), while also enhancing runtime performance and storage efficiency. When compared with the complex DenseDescriptor feature extraction network, the difference in Acc is less than 7.22%, and the IoU calculation results on specific datasets outperform complex dense models. Furthermore, this method increases the F1 score by 33.2% and accelerates runtime by 70.2%. It is noteworthy that the speed of CMMCAN surpasses that of the compared lightweight models, with feature extraction and matching performance comparable to existing complex models but with faster speed and higher cost-effectiveness.
Keywords: feature extraction and matching; lightweight network; medical images; endoscopy; attention
11. Coexistence behavior of asymmetric attractors in hyperbolic-type memristive Hopfield neural network and its application in image encryption
Authors: 李晓霞, 何倩倩, 余天意, 才壮, 徐桂芝. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 302-315 (14 pages)
The neuron model has been widely employed in neuromorphic computing systems and chaotic circuits. This study aims to develop a novel circuit simulation of a three-neuron Hopfield neural network (HNN) with coupled hyperbolic memristors through the modification of a single coupling connection weight. The bistable mode of the hyperbolic memristive HNN (mHNN), characterized by the coexistence of asymmetric chaos and periodic attractors, is effectively demonstrated through the utilization of conventional nonlinear analysis techniques. These techniques include bifurcation diagrams, two-parameter maximum Lyapunov exponent plots, local attractor basins, and phase trajectory diagrams. Moreover, an encryption technique for color images is devised by leveraging the mHNN model and asymmetric structural attractors. This method demonstrates significant benefits in correlation, information entropy, and resistance to differential attacks, providing strong evidence for its effectiveness in encryption. Additionally, an improved modular circuit design method is employed to create the analog equivalent circuit of the memristive HNN. The correctness of the circuit design is confirmed through Multisim simulations, which align with numerical simulations conducted in Matlab.
Keywords: hyperbolic-type memristor; Hopfield neural network (HNN); asymmetric attractors; image encryption
12. Multi-Label Image Classification Based on Object Detection and Dynamic Graph Convolutional Networks
Authors: Xiaoyu Liu, Yong Hu. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4413-4432 (20 pages)
Multi-label image classification is recognized as an important task within the field of computer vision, a discipline that has experienced a significant escalation in research endeavors in recent years. The widespread adoption of convolutional neural networks (CNNs) has catalyzed the remarkable success of architectures such as ResNet-101 within the domain of image classification. However, in multi-label image classification tasks, it is crucial to consider the correlation between labels. In order to improve the accuracy and performance of multi-label classification and fully combine visual and semantic features, many existing studies use graph convolutional networks (GCN) for modeling. Object detection and multi-label image classification exhibit a degree of conceptual overlap; however, the integration of these two tasks within a unified framework has been relatively underexplored in the existing literature. In this paper, we propose the Object-GCN framework, a model combining the object detection network YOLOv5 and a graph convolutional network, and we carry out a thorough experimental analysis using a range of well-established public datasets. The designed framework Object-GCN achieves significantly better performance than existing studies on the public datasets COCO2014, VOC2007, and VOC2012. The final results achieved are 86.9%, 96.7%, and 96.3% mean Average Precision (mAP) across the three datasets.
Keywords: deep learning; multi-label image recognition; object detection; graph convolution networks
13. Detection of Oscillations in Process Control Loops From Visual Image Space Using Deep Convolutional Networks
Authors: Tao Wang, Qiming Chen, Xun Lang, Lei Xie, Peng Li, Hongye Su. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 4, pp. 982-995 (14 pages)
Oscillation detection has been a hot research topic in industries due to the high incidence of oscillation loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation. However, manual visual inspection is labor-intensive and prone to missed detection. Convolutional neural networks (CNNs), inspired by animal visual systems, have been raised with powerful feature extraction capabilities. In this work, an exploration of the typical CNN models for visual oscillation detection is performed. Specifically, we tested MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well-suited for oscillation detection. The feasibility and validity of this framework are verified utilizing extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean-nonstationarity. In addition, this framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
Keywords: convolutional neural networks (CNNs); deep learning; image processing; oscillation detection; process industries
14. DCFNet: An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 1103-1128 (26 pages)
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, there are some limitations in the current integration of CNN and Transformer technology in two key aspects. Firstly, most methods either overlook or fail to fully incorporate the complementary nature between local and global features. Secondly, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded in methods that combine CNN and Transformer. To address this issue, we present a groundbreaking dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of Swin Transformer and CNN to generate complementary global and local features. We then designed the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, the Channel-wise Cross-fusion Transformer (CCT) serves the purpose of aggregating multi-scale features, and the Feature Fusion Module (FFM) is employed to effectively aggregate dual-branch prominent feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) aims to emphasize the significance of the channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet exhibits enhanced accuracy in segmentation performance. Compared to other state-of-the-art (SOTA) methods, our segmentation framework exhibits a superior level of competitiveness. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial diagnoses of lesion areas in advance.
Keywords: convolutional neural networks; Swin Transformer; dual branch; medical image segmentation; feature cross fusion
15. Underwater Image Enhancement Based on Multi-scale Adversarial Network
Authors: ZENG Jun-yang, SI Zhan-jun. 印刷与数字媒体技术研究 (CAS; Peking University Core Journal), 2024, No. 5, pp. 70-77 (8 pages)
In this study, an underwater image enhancement method based on a multi-scale adversarial network was proposed to solve the problem of detail blur and color distortion in underwater images. Firstly, the local features of each layer were enhanced into global features by the proposed residual dense block, which ensured that the generated images retain more details. Secondly, a multi-scale structure was adopted to extract multi-scale semantic features of the original images. Finally, the features obtained from the dual channels were fused by an adaptive fusion module to further optimize the features. The discriminant network adopted the structure of the Markov discriminator. In addition, by constructing mean square error, structural similarity, and perceived color loss functions, the generated image is made consistent with the reference image in structure, color, and content. The experimental results showed that the proposed algorithm produced a good deblurring effect on underwater images and effectively alleviated the problem of underwater image color bias. In both subjective and objective evaluation indexes, the experimental results of the proposed algorithm are better than those of the comparison algorithms.
Keywords: underwater image enhancement; generative adversarial network; multi-scale feature extraction; residual dense block
16. A Spectral Convolutional Neural Network Model Based on Adaptive Fick's Law for Hyperspectral Image Classification
Authors: Tsu-Yang Wu, Haonan Li, Saru Kumari, Chien-Ming Chen. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 19-46 (28 pages)
Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) constitutes a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve its performance. Gaussian mutation helps the algorithm avoid falling into local optimal solutions and improves its search ability. The probability update strategy helps to improve the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters in the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model using the Indian Pines dataset and the Pavia University dataset. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on Indian Pines and Pavia University. Among them, the Accuracy of the AFLA-SCNN model on Indian Pines reached 99.875%, and the Accuracy on Pavia University reached 98.022%. In conclusion, our proposed AFLA-SCNN model is deemed to significantly enhance the precision of hyperspectral image classification.
Keywords: adaptive Fick's law algorithm; spectral convolutional neural network; metaheuristic algorithm; intelligent optimization algorithm; hyperspectral image classification
17. Spatial and Contextual Path Network for Image Inpainting
Authors: Dengyong Zhang, Yuting Zhao, Feng Li, Arun Kumar Sangaiah. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 115-133 (19 pages)
Image inpainting uses information from the known regions of an image to repair missing or damaged regions. Image feature extraction is the core of image restoration. Obtaining sufficient spatial information and a large receptive field is very important for realizing high-precision image inpainting. However, in the process of feature extraction, it is difficult to meet the two requirements of obtaining sufficient spatial information and a large receptive field at the same time. In order to obtain more spatial information and a larger receptive field simultaneously, we put forward an image restoration network based on a spatial path and a context path. For the spatial path, we stack three convolution layers to produce a feature map at 1/8 of the original size, which retains rich spatial details. For the context path, we use a global average pooling layer, where the receptive field is the maximum of the backbone network, and the pooling module can provide global context information for this maximum receptive field. In order to better integrate the features extracted from the spatial and context paths, we study a fusion module for the two paths. The feature fusion module first takes the outputs of the spatial and context paths, then balances the scales of the features through normalization, and finally pools and concatenates the features into a feature vector from which a weight vector is computed. In order to extract context information from image features, we add an attention refinement module to the context path. The attention modules weight the features along the channel dimension and the spatial dimension, respectively, in order to obtain more effective information. Experiments show that our method outperforms existing techniques both qualitatively and quantitatively, and that extending our design to other inpainting networks achieves consistent performance improvements.
Keywords: image inpainting; attention; deep learning; convolutional network
18. Fetal MRI Artifacts: Semi-Supervised Generative Adversarial Neural Network for Motion Artifacts Reducing in Fetal Magnetic Resonance Images
Authors: Ítalo Messias Félix Santos, Gilson Antonio Giraldi, Heron Werner Junior, Bruno Richard Schulze. Journal of Computer and Communications, 2024, No. 6, pp. 210-225 (16 pages)
This study addresses challenges in fetal magnetic resonance imaging (MRI) related to motion artifacts, maternal respiration, and hardware limitations. To enhance MRI quality, we employ deep learning techniques, specifically utilizing Cycle GAN. Synthetic pairs of images, simulating artifacts in fetal MRI, are generated to train the model. Our primary contribution is the use of Cycle GAN for fetal MRI restoration, augmented by artificially corrupted data. We compare three approaches (supervised Cycle GAN, Pix2Pix, and Mobile Unet) for artifact removal. Experimental results demonstrate that the proposed supervised Cycle GAN effectively removes artifacts while preserving image details, as validated through the Structural Similarity Index Measure (SSIM) and normalized Mean Absolute Error (MAE). The method proves comparable to alternatives but avoids the generation of spurious regions, which is crucial for medical accuracy.
Keywords: fetal MRI; artifacts removal; deep learning; image processing; generative adversarial networks
19. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, No. 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, depthwise dilated convolution in DDSC layer effectively expands the field of view of filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses parallel multi-resolution branches architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
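A depthwise dilated separable convolution of the kind named here is, in general terms, a dilated depthwise convolution followed by a pointwise (1×1) convolution. The sketch below is a hedged PyTorch-style illustration of that layer pattern; the channel counts, dilation rate, and normalization/activation choices are assumptions for illustration, not the paper's settings.

```python
# Illustrative depthwise dilated separable convolution (DDSC-style) layer.
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, dilation: int = 2):
        super().__init__()
        # Depthwise 3x3 convolution with dilation enlarges the receptive field
        # without extra parameters; groups=in_channels makes it depthwise.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=in_channels, bias=False)
        # Pointwise 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: spatial size is preserved because the padding equals the dilation rate.
features = DepthwiseDilatedSeparableConv(32, 64)(torch.randn(1, 32, 56, 56))  # -> (1, 64, 56, 56)
```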
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
20. Hyperspectral Image Super-Resolution Meets Deep Learning: A Survey and Perspective (Cited: 3)
Authors: Xinya Wang, Qian Hu, Yingsong Cheng, Jiayi Ma. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 8, pp. 1668-1691 (24 pages)
Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from an input low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, which is beneficial for subsequent applications. The development of deep learning has promoted significant progress in hyperspectral image super-resolution, and the powerful expression capabilities of deep neural networks make the predicted results more reliable. Recently, several of the latest deep learning technologies have driven rapid growth in hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective is absent. To this end, in this survey, we first introduce the concept of hyperspectral image super-resolution and classify the methods according to whether auxiliary information is used. Then, we review the learning-based methods in three categories, including single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, we summarize the commonly used hyperspectral datasets, and evaluations of some representative methods in the three categories are performed qualitatively and quantitatively. Moreover, we briefly introduce several typical applications of hyperspectral image super-resolution, including ground object classification, urban change detection, and ecosystem monitoring. Finally, we provide the conclusion and challenges of existing learning-based methods, looking forward to potential future research directions.
Keywords: deep learning; hyperspectral image; image fusion; image super-resolution; survey