Journal Articles
12,956 articles found
1. Improving the Transmission Security of Vein Images Using a Bezier Curve and Long Short-Term Memory
Authors: Ahmed H. Alhadethi, Ikram Smaoui, Ahmed Fakhfakh, Saad M. Darwish. Computers, Materials & Continua, SCIE/EI, 2024, Issue 6, pp. 4825-4844 (20 pages).
The act of transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images from counterfeiting and modification is a critical domain that can still be further improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a highly effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. This paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented. For applications involving the transmission and sharing of images across a network, the suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained an accuracy of 98% under Gaussian noise and a rotation accuracy surpassing 99%.
Keywords: image transmission; image compression; text hiding; Bezier curve; Histogram of Oriented Gradients (HOG); LSTM; image enhancement; Gaussian noise; rotation
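As a rough illustration of how multi-scale HOG descriptors can be arranged into a sequence and summarized by an LSTM into a compact, key-dependent signature, here is a minimal sketch; the layer sizes, the seed-based key mixing, and all function names are placeholder assumptions, not the authors' implementation:

```python
import numpy as np
import torch
from skimage.feature import hog
from skimage.transform import resize

def lstm_signature(image: np.ndarray, key: int, hidden: int = 64) -> np.ndarray:
    """Sketch: multi-scale HOG features of a grayscale image -> LSTM -> signature vector."""
    torch.manual_seed(key)                      # key-dependent parameters (toy stand-in)
    lstm = torch.nn.LSTM(input_size=36, hidden_size=hidden, batch_first=True)
    feats = []
    for scale in (1.0, 0.5, 0.25):              # multi-scale descriptors
        img = resize(image, (int(image.shape[0] * scale), int(image.shape[1] * scale)))
        f = hog(img, orientations=9, pixels_per_cell=(16, 16),
                cells_per_block=(2, 2), feature_vector=True)
        feats.append(f[: len(f) // 36 * 36].reshape(-1, 36))   # 36 values per block
    seq = torch.tensor(np.vstack(feats), dtype=torch.float32).unsqueeze(0)
    _, (h, _) = lstm(seq)                       # final hidden state as the signature
    return h.squeeze().detach().numpy()

# Example: signature = lstm_signature(vein_image_gray, key=1234)
```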
2. Unified deep learning model for predicting fundus fluorescein angiography image from fundus structure image
Authors: Yiwei Chen, Yi He, Hong Ye, Lina Xing, Xin Zhang, Guohua Shi. Journal of Innovative Optical Health Sciences, SCIE/EI/CSCD, 2024, Issue 3, pp. 105-113 (9 pages).
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish all three prediction tasks with a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
Keywords: fundus fluorescein angiography image; fundus structure image; image translation; unified deep learning model; generative adversarial networks
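The evaluation metrics named in the abstract (PSNR, SSIM, MSE) can be computed with off-the-shelf routines; a minimal sketch, assuming the predicted and ground-truth FFA images are available as arrays of the same shape (variable names are placeholders):

```python
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             mean_squared_error)

def evaluate_prediction(pred: np.ndarray, target: np.ndarray) -> dict:
    """Compare a predicted FFA image against its ground truth."""
    return {
        "psnr": peak_signal_noise_ratio(target, pred, data_range=255),
        "ssim": structural_similarity(target, pred, data_range=255),
        "mse":  mean_squared_error(target, pred),
    }

# Example with dummy 8-bit grayscale images:
target = np.random.randint(0, 256, (256, 256)).astype(np.uint8)
pred = np.clip(target.astype(int) + np.random.randint(-5, 6, target.shape), 0, 255).astype(np.uint8)
print(evaluate_prediction(pred, target))
```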
3. A Systematic Literature Review of Machine Learning and Deep Learning Approaches for Spectral Image Classification in Agricultural Applications Using Aerial Photography
Authors: Usman Khan, Muhammad Khalid Khan, Muhammad Ayub Latif, Muhammad Naveed, Muhammad Mansoor Alam, Salman A. Khan, Mazliham Mohd Su’ud. Computers, Materials & Continua, SCIE/EI, 2024, Issue 3, pp. 2967-3000 (34 pages).
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, for example through aerial photography with Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications to classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Keywords: machine learning; deep learning; unmanned aerial vehicles; multi-spectral images; image recognition; object detection; hyperspectral images; aerial photography
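Since the review highlights support vector machines as one of the most widely used classifiers for hyperspectral data, a minimal pixel-wise SVM classification sketch is shown below; the cube shape, band count, and label array are illustrative assumptions, not data from any of the surveyed studies:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy hyperspectral cube: height x width x spectral bands, with per-pixel crop labels.
cube = np.random.rand(64, 64, 120)           # placeholder data
labels = np.random.randint(0, 4, (64, 64))   # placeholder labels (4 crop classes)

X = cube.reshape(-1, cube.shape[-1])         # one spectrum per pixel
y = labels.ravel()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("pixel-wise accuracy:", clf.score(X_test, y_test))
```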
4. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua, SCIE/EI, 2024, Issue 4, pp. 1441-1461 (21 pages).
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion; Res2Net-Transformer; infrared image; visible image
5. Enhancing the Quality of Low-Light Printed Circuit Board Images through Hue, Saturation, and Value Channel Processing and Improved Multi-Scale Retinex
Authors: Huichao Shang, Penglei Li, Xiangqian Peng. Journal of Computer and Communications, 2024, Issue 1, pp. 1-10 (10 pages).
To address the issue of deteriorated PCB image quality in the quality inspection process due to insufficient or uneven lighting, we proposed an image enhancement fusion algorithm based on different color spaces. Firstly, an improved MSRCR method was employed for brightness enhancement of the original image. Next, the color space of the original image was transformed from RGB to HSV, followed by processing the S-channel image using bilateral filtering and contrast stretching algorithms. The V-channel image was subjected to brightness enhancement using adaptive Gamma and CLAHE algorithms. Subsequently, the processed image was transformed back from HSV to the RGB color space. Finally, the images processed by the two algorithms were fused to create a new RGB image, and color restoration was performed on the fused image. Comparative experiments with other methods indicated that the contrast of the image was optimized, texture features were preserved more abundantly, brightness levels were significantly improved, and color distortion was prevented effectively, thus enhancing the quality of low-lit PCB images.
Keywords: low-lit PCB images; spatial transformation; image enhancement; image fusion; HSV
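A minimal OpenCV sketch of the HSV branch described above (bilateral filtering and contrast stretching on S, gamma brightening and CLAHE on V); the filter sizes, clip limit, and gamma value are illustrative assumptions rather than the parameters used in the paper:

```python
import cv2
import numpy as np

def enhance_hsv_branch(bgr: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Process S and V channels of a low-light PCB image, then return to BGR."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # S channel: bilateral filtering followed by contrast stretching
    s = cv2.bilateralFilter(s, d=9, sigmaColor=75, sigmaSpace=75)
    s = cv2.normalize(s, None, 0, 255, cv2.NORM_MINMAX)

    # V channel: gamma brightening followed by CLAHE
    lut = np.array([255 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
    v = cv2.LUT(v, lut)
    v = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(v)

    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)

# Example: enhanced = enhance_hsv_branch(cv2.imread("pcb.png"))
```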
6. Background removal from global auroral images: Data-driven dayglow modeling [Cited by 1]
Authors: A. Ohma, M. Madelaire, K. M. Laundal, J. P. Reistad, S. M. Hatch, S. Gasparini, S. J. Walker. Earth and Planetary Physics, EI/CSCD, 2024, Issue 1, pp. 247-257 (11 pages).
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can also contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
Keywords: aurora; dayglow modeling; global auroral images; far ultraviolet images; dayglow removal
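The core idea of a linear inverse problem with iterative outlier deselection can be sketched in a few lines; here the design matrix G stands in for the B-spline/spherical-harmonic basis, and the threshold rule is an illustrative assumption rather than the weighting actually used by the authors:

```python
import numpy as np

def robust_background_fit(G: np.ndarray, d: np.ndarray, n_iter: int = 5, k: float = 2.0):
    """Least-squares fit of a background model G @ m ~ d, iteratively
    deselecting bright outliers (e.g. auroral emissions) above the model."""
    keep = np.ones(len(d), dtype=bool)
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        m, *_ = np.linalg.lstsq(G[keep], d[keep], rcond=None)
        resid = d - G @ m
        sigma = np.std(resid[keep])
        keep = resid < k * sigma          # drop pixels far brighter than the background
    return m, keep

# Toy example: smooth background plus a few bright "auroral" pixels.
x = np.linspace(0, 1, 200)
G = np.vstack([np.ones_like(x), x, x**2]).T     # stand-in for a spline/harmonic basis
d = 1.0 + 0.5 * x + 0.05 * np.random.randn(200)
d[50:60] += 3.0                                  # bright outliers
m, keep = robust_background_fit(G, d)
print("model coefficients:", m, "pixels kept:", keep.sum())
```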
7. Using restored two-dimensional X-ray images to reconstruct the three-dimensional magnetopause [Cited by 1]
Authors: RongCong Wang, JiaQi Wang, DaLin Li, TianRan Sun, XiaoDong Peng, YiHong Guo. Earth and Planetary Physics, EI/CSCD, 2024, Issue 1, pp. 133-154 (22 pages).
Astronomical imaging technologies are basic tools for the exploration of the universe, providing basic data for research in astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath by using soft X-ray imaging. However, the observed 2-D images are affected by many noise factors that destroy the contained information, which is not conducive to the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. The analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models. This makes it difficult to establish the mapping relationship between SXI-simulated observation images and target images using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structure information on the magnetosphere. The idea is to train a patch estimator by selecting noise-clean patch pairs with the same distribution through the Classification-Expectation Maximization algorithm to achieve restoration of the SXI-simulated observation image, whose mapping relationship with the target image is established by the patch estimator. The Classification-Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator. Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task, according to the peak signal-to-noise ratio and structural similarity. The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach to magnetospheric reconstruction, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
Keywords: Solar wind Magnetosphere Ionosphere Link Explorer (SMILE); soft X-ray imager; magnetopause; image restoration
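To illustrate the idea of grouping patches into clusters with a shared distribution before training per-cluster estimators, here is a minimal sketch using a Gaussian mixture in place of the paper's Classification-Expectation Maximization step; the patch size, cluster count, linear estimator, and all names are assumptions:

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import Ridge

def train_patch_estimators(noisy: np.ndarray, clean: np.ndarray,
                           patch: int = 8, n_clusters: int = 4):
    """Cluster noisy patches, then fit one linear noisy->clean estimator per cluster."""
    Xn = extract_patches_2d(noisy, (patch, patch)).reshape(-1, patch * patch)
    Xc = extract_patches_2d(clean, (patch, patch)).reshape(-1, patch * patch)
    gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(Xn)
    labels = gmm.predict(Xn)
    estimators = {k: Ridge(alpha=1.0).fit(Xn[labels == k], Xc[labels == k])
                  for k in range(n_clusters)}
    return gmm, estimators

# Toy usage with a synthetic clean/noisy pair:
clean = np.random.rand(64, 64)
noisy = clean + 0.1 * np.random.randn(64, 64)
gmm, estimators = train_patch_estimators(noisy, clean)
```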
8. Design of a novel hybrid quantum deep neural network in INEQR images classification
Authors: 王爽, 王柯涵, 程涛, 赵润盛, 马鸿洋, 郭帅. Chinese Physics B, SCIE/EI/CAS/CSCD, 2024, Issue 6, pp. 230-238 (9 pages).
We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby proposing a novel hybrid quantum deep neural network (HQDNN) for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation (INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The output results of the parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary classification and three-class classification experiments on the MNIST (Modified National Institute of Standards and Technology) data set. In the binary classification, the accuracy for 0 and 4 exceeds 98%. We then compare the performance of the three-class classification with other algorithms; the results on two datasets show that the classification accuracy is higher than that of the quantum deep neural network and the general quantum convolutional neural network.
Keywords: quantum computing; image classification; quantum–classical hybrid neural network; quantum image representation; interpolation
9. Integer multiple quantum image scaling based on NEQR and bicubic interpolation
Authors: 蔡硕, 周日贵, 罗佳, 陈思哲. Chinese Physics B, SCIE/EI/CAS/CSCD, 2024, Issue 4, pp. 259-273 (15 pages).
As a branch of quantum image processing, quantum image scaling has been widely studied. However, most of the existing quantum image scaling algorithms are based on nearest-neighbor interpolation and bilinear interpolation; the quantum version of bicubic interpolation has not yet been studied. In this work, we present the first quantum image scaling scheme for bicubic interpolation based on the novel enhanced quantum representation (NEQR). Our scheme can realize synchronous enlargement and reduction of an image of size 2^n × 2^n by an integral multiple. Firstly, the image is represented by NEQR and the original image coordinates are obtained through multiple CNOT modules. Then, 16 neighborhood pixels are obtained by quantum operation circuits, and the corresponding weights of these pixels are calculated by quantum arithmetic modules. Finally, a quantum matrix operation, instead of a classical convolution operation, is used to realize the sum of convolution of these pixels. Through simulation experiments and complexity analysis, we demonstrate that our scheme achieves exponential speedup over the classical bicubic interpolation algorithm and has a better effect than the quantum version of bilinear interpolation.
Keywords: quantum image processing; image scaling; bicubic interpolation; quantum circuit
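For reference, the weights that bicubic interpolation assigns to the 16 neighborhood pixels can be computed classically with the standard cubic convolution kernel; a minimal sketch of that classical counterpart (with a = -0.5, a common but assumed choice) is shown below. It illustrates the arithmetic only, not the quantum modules themselves:

```python
import numpy as np

def cubic_kernel(t: np.ndarray, a: float = -0.5) -> np.ndarray:
    """Standard cubic convolution kernel used by bicubic interpolation."""
    t = np.abs(t)
    return np.where(t <= 1, (a + 2) * t**3 - (a + 3) * t**2 + 1,
           np.where(t < 2, a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a, 0.0))

def bicubic_weights(fx: float, fy: float) -> np.ndarray:
    """4x4 weight matrix for the 16 neighborhood pixels around fractional offset (fx, fy)."""
    offsets = np.array([-1.0, 0.0, 1.0, 2.0])
    wx = cubic_kernel(offsets - fx)
    wy = cubic_kernel(offsets - fy)
    return np.outer(wy, wx)                 # weights sum to ~1

# Example: interpolate one output pixel from a 4x4 patch of the source image.
patch = np.arange(16, dtype=float).reshape(4, 4)
value = np.sum(bicubic_weights(0.25, 0.75) * patch)
print(value)
```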
10. A Review on the Recent Trends of Image Steganography for VANET Applications
Authors: Arshiya S. Ansari. Computers, Materials & Continua, SCIE/EI, 2024, Issue 3, pp. 2865-2892 (28 pages).
Image steganography is a technique for concealing confidential information within an image without dramatically changing its outward appearance. Vehicular ad hoc networks (VANETs), which enable vehicles to communicate with one another and with roadside infrastructure to enhance safety and traffic flow, provide a range of value-added services and are an essential component of modern smart transportation systems. Steganography in VANETs has been suggested by many authors for secure, reliable message transfer between hops or terminals and for protection against attacks on privacy. This paper aims to determine whether steganography can improve data security and secrecy in VANET applications and to analyze effective steganography techniques for embedding data into images while minimizing visual quality loss. According to simulations in the literature and real-world studies, image steganography has proved to be an effective method for secure communication in VANETs, even under difficult network conditions. In this research, we also explore a variety of steganography approaches for vehicular ad hoc network transportation systems, including vector embedding, statistics, spatial domain (SD), transform domain (TD), distortion, masking, and filtering. This study may help researchers improve the ability of vehicle networks to communicate securely and open the door to innovative steganography methods.
Keywords: steganography; image steganography; image steganography techniques; information exchange; data embedding and extracting; vehicular ad hoc network (VANET); transportation system
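As a concrete example of the spatial-domain family mentioned above, the classic least-significant-bit (LSB) scheme embeds message bits directly into pixel values; a minimal sketch follows (the fixed message length passed to the extractor is an assumption of this illustration, not a scheme from the review):

```python
import numpy as np

def lsb_embed(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide message bits in the least significant bit of each pixel (grayscale uint8)."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()
    if len(bits) > flat.size:
        raise ValueError("message too long for cover image")
    stego = flat.copy()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden data from the stego image."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = lsb_embed(cover, b"V2V alert")
print(lsb_extract(stego, 9))          # b'V2V alert'
```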
11. Advancements in Remote Sensing Image Dehazing: Introducing URA-Net with Multi-Scale Dense Feature Fusion Clusters and Gated Jump Connection
Authors: Hongchi Liu, Xing Deng, Haijian Shao. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, Issue 9, pp. 2397-2424 (28 pages).
The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle, profoundly impeding their effective utilization across various domains. Dehazing methodologies have emerged as pivotal components of image preprocessing, fostering an improvement in the quality of remote sensing imagery. This enhancement renders remote sensing data all the more indispensable, thereby enhancing the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze within remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed. This approach materializes as an end-to-end convolutional neural network distinguished by its utilization of multi-scale dense feature fusion clusters and gated jump connections. The essence of our methodology lies in local feature fusion within dense residual clusters, enabling the extraction of pertinent features from both preceding and current local data, depending on contextual demands. The intelligently orchestrated gated structures facilitate the propagation of these features to the decoder, resulting in superior outcomes in haze removal. Empirical validation through extensive experiments substantiates the efficacy of URA-Net, demonstrating its superior performance compared to existing methods when applied to established datasets for remote sensing image defogging. On the RICE-1 dataset, URA-Net achieves a Peak Signal-to-Noise Ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, the Unsupervised Single Image Dehazing (USID) by 8.0 dB, and the Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Particularly noteworthy, on the SateHaze1k dataset, URA-Net attains preeminence in overall performance, yielding defogged images characterized by consistent visual quality. This underscores the contribution of the research to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
Keywords: remote sensing image; image dehazing; deep learning; feature fusion
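One of the baselines cited above, the Dark Channel Prior, can be sketched classically in a few lines; the patch size and the top-0.1% rule for estimating atmospheric light are conventional choices assumed here, and this is the baseline rather than URA-Net itself:

```python
import cv2
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel minimum over color channels and a local patch (the dark channel)."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmospheric_light(img: np.ndarray, dark: np.ndarray) -> np.ndarray:
    """Average the brightest 0.1% of pixels selected by the dark channel."""
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

# Example on a float image in [0, 1]:
hazy = np.random.rand(128, 128, 3).astype(np.float32)
A = estimate_atmospheric_light(hazy, dark_channel(hazy))
print("estimated atmospheric light:", A)
```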
12. Fuzzy Difference Equations in Diagnoses of Glaucoma from Retinal Images Using Deep Learning
Authors: D. Dorathy Prema Kavitha, L. Francis Raj, Sandeep Kautish, Abdulaziz S. Almazyad, Karam M. Sallam, Ali Wagdy Mohamed. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, Issue 4, pp. 801-816 (16 pages).
The intuitive fuzzy set has found important applications in decision-making and machine learning. To enrich and utilize the intuitive fuzzy set, this study designed and developed a deep neural network-based glaucoma eye detection method using fuzzy difference equations in the domain where the retinal images converge. Retinal image detections are categorized as normal eye recognition, suspected glaucomatous eye recognition, and glaucomatous eye recognition. Fuzzy degrees associated with weighted values are calculated to determine the level of concentration between the fuzzy partition and the retinal images. The proposed model was used to diagnose glaucoma from retinal images and involved utilizing a Convolutional Neural Network (CNN) and deep learning to identify the fuzzy weighted regularization between images. This methodology was used to clarify the input images and make them adequate for the process of glaucoma detection. The objective of this study was to propose a novel approach to the early diagnosis of glaucoma using a Fuzzy Expert System (FES) and fuzzy difference equations (FDE). The intensities of the different regions in the images and their respective peak levels were determined. Once the peak regions were identified, the recurrence relationships among those peaks were measured. Image partitioning was performed due to varying degrees of similar and dissimilar concentrations in the image. Similar and dissimilar concentration levels and spatial frequency generated a threshold image from the combined fuzzy matrix and FDE. This distinguished between normal and abnormal eye conditions, thus detecting patients with glaucomatous eyes.
Keywords: Convolutional Neural Network (CNN); glaucomatous eyes; fuzzy difference equation; intuitive fuzzy sets; image segmentation; retinal images
13. Color Image Compression and Encryption Algorithm Based on 2D Compressed Sensing and Hyperchaotic System
Authors: Zhiqing Dong, Zhao Zhang, Hongyan Zhou, Xuebo Chen. Computers, Materials & Continua, SCIE/EI, 2024, Issue 2, pp. 1977-1993 (17 pages).
With the advent of the information security era, it is necessary to guarantee the privacy, accuracy, and dependable transfer of pictures. This study presents a new approach to the encryption and compression of color images, predicated on 2D compressed sensing (CS) and a hyperchaotic system. First, an optimized Arnold scrambling algorithm is applied to the initial color images to ensure strong security. Then, the processed images are concurrently encrypted and compressed using 2D CS; in this step, chaotic sequences replace traditional random measurement matrices to increase the system's security. Third, the processed images are re-encrypted using a combination of permutation and diffusion algorithms. In addition, the 2D projected gradient with an embedding decryption (2DPG-ED) algorithm is used to reconstruct the images. Compared with the traditional reconstruction algorithm, the 2DPG-ED algorithm can improve security and reduce computational complexity. Furthermore, it has better robustness. The experimental outcome and the performance analysis indicate that this algorithm can withstand malicious attacks and prove that the method is effective.
Keywords: image encryption; image compression; hyperchaotic system; compressed sensing
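A minimal sketch of the classic Arnold (cat map) scrambling step named above, applied to a square N x N image; the iteration count acts as part of the key in this illustration, while the paper's optimized variant and the chaotic measurement matrices are not reproduced here:

```python
import numpy as np

def arnold_scramble(img: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Permute pixels of a square image with the Arnold cat map (x, y) -> (x+y, x+2y) mod N."""
    n = img.shape[0]
    assert img.shape[1] == n, "Arnold scrambling needs a square image"
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def arnold_unscramble(img: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Invert the map with (x, y) -> (2x-y, -x+y) mod N, applied the same number of times."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        out = out[(2 * x - y) % n, (-x + y) % n]
    return out

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # toy color image
assert np.array_equal(arnold_unscramble(arnold_scramble(img)), img)
```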
14. Image enhancement with intensity transformation on embedding space
Authors: Hanul Kim, Yeji Jeon, Yeong Jun Koh. CAAI Transactions on Intelligence Technology, SCIE/EI, 2024, Issue 1, pp. 101-115 (15 pages).
In recent times, an image enhancement approach that learns the global transformation function using deep neural networks has gained attention. However, many existing methods based on this approach have a limitation: their transformation functions are too simple to imitate the complex colour transformations between low-quality images and manually retouched high-quality images. In order to address this limitation, a simple yet effective approach for image enhancement is proposed. The proposed algorithm is designed around a channel-wise intensity transformation. However, this transformation is applied to the learnt embedding space instead of specific colour spaces, and the enhanced features are then returned to colours. To this end, the authors define the continuous intensity transformation (CIT) to describe the mapping between input and output intensities on the embedding space. Then, an enhancement network is developed, which produces multi-scale feature maps from input images, derives the set of transformation functions, and performs the CIT to obtain enhanced images. Extensive experiments on the MIT-Adobe 5K dataset demonstrate that the authors' approach improves the performance of conventional intensity transforms on colour space metrics. Specifically, the authors achieved a 3.8% improvement in peak signal-to-noise ratio, a 1.8% improvement in structural similarity index measure, and a 27.5% improvement in learned perceptual image patch similarity. The authors' algorithm also outperforms state-of-the-art alternatives on three image enhancement datasets: MIT-Adobe 5K, Low-Light, and Google HDR+.
Keywords: computer vision; deep learning; image enhancement; image processing
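The notion of a channel-wise intensity transformation can be illustrated with a simple piecewise-linear curve applied independently to each channel; here it is applied to the colour channels of an 8-bit image for clarity, whereas the paper applies the learnt transformation to embedding-space channels, and the control points below are arbitrary assumptions:

```python
import numpy as np

def channelwise_intensity_transform(img: np.ndarray, curves: np.ndarray) -> np.ndarray:
    """Apply a monotone piecewise-linear intensity curve to each channel.

    img    : H x W x C uint8 image
    curves : C x K array of output values at K evenly spaced input control points
    """
    h, w, c = img.shape
    k = curves.shape[1]
    xp = np.linspace(0, 255, k)                      # input control points
    out = np.empty_like(img, dtype=np.float32)
    for ch in range(c):                              # one curve per channel
        out[..., ch] = np.interp(img[..., ch].astype(np.float32), xp, curves[ch])
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: brighten shadows in all three channels with the same assumed curve.
curve = np.array([0, 80, 150, 210, 255], dtype=np.float32)
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
enhanced = channelwise_intensity_transform(img, np.stack([curve] * 3))
```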
15. A Degradation Type Adaptive and Deep CNN-Based Image Classification Model for Degraded Images
Authors: Huanhua Liu, Wei Wang, Hanyu Liu, Shuheng Yi, Yonghao Yu, Xunwen Yao. Computer Modeling in Engineering & Sciences, SCIE/EI, 2024, Issue 1, pp. 459-472 (14 pages).
Deep Convolutional Neural Networks (CNNs) have achieved high accuracy in image classification tasks; however, most existing models are trained on high-quality images that are not subject to image degradation. In practice, images are often affected by various types of degradation, which can significantly impact the performance of CNNs. In this work, we investigate the influence of image degradation on three typical image classification CNNs and propose a Degradation Type Adaptive Image Classification Model (DTA-ICM) to improve the existing CNNs' classification accuracy on degraded images. The proposed DTA-ICM comprises two key components: a Degradation Type Predictor (DTP) and a set of Degradation Type Specified Image Classifiers (DTS-IC), each trained on existing CNNs for a specified type of degradation. The DTP predicts the degradation type of a test image, and the corresponding DTS-IC is then selected to classify the image. We evaluate the performance of both the proposed DTP and the DTA-ICM on the Caltech 101 database. The experimental results demonstrate that the proposed DTP achieves an average accuracy of 99.70%. Moreover, the proposed DTA-ICM, based on AlexNet, VGG19, and ResNet152, exhibits an average accuracy improvement of 20.63%, 18.22%, and 12.9%, respectively, compared with the original CNNs in classifying degraded images. This suggests that the proposed DTA-ICM can effectively improve the classification performance of existing CNNs on degraded images, which has important practical implications.
Keywords: image recognition; image degradation; machine learning; deep convolutional neural network
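The two-stage dispatch described above (predict the degradation type, then route the image to the classifier specialized for that type) can be wired up in a few lines; the class name, degradation labels, and single-image forward pass below are placeholders for illustration, not components released with the paper:

```python
import torch
import torch.nn as nn

DEGRADATION_TYPES = ["blur", "noise", "jpeg", "clean"]   # assumed label set

class DegradationTypeAdaptiveClassifier(nn.Module):
    """Sketch: a degradation-type predictor routes each image to a specialized classifier."""
    def __init__(self, type_predictor: nn.Module, specialists: dict):
        super().__init__()
        self.type_predictor = type_predictor
        self.specialists = nn.ModuleDict(specialists)    # one classifier per degradation type

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Stage 1: predict the degradation type of the test image (batch of one).
        type_idx = self.type_predictor(image).argmax(dim=1).item()
        deg_type = DEGRADATION_TYPES[type_idx]
        # Stage 2: classify with the specialist trained for that degradation type.
        return self.specialists[deg_type](image)

# Usage idea: specialists could be AlexNet/VGG19/ResNet152 instances fine-tuned
# separately on each degradation type, e.g. {"blur": blur_cnn, "noise": noise_cnn, ...}.
```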
16. Image Splicing Forgery Detection Using Feature-Based of Sonine Functions and Deep Features
Authors: Ala’a R. Al-Shamasneh, Rabha W. Ibrahim. Computers, Materials & Continua, SCIE/EI, 2024, Issue 1, pp. 795-810 (16 pages).
The growing prevalence of fake images on the Internet and social media makes image integrity verification a crucial research topic. One of the most popular methods for manipulating digital images is image splicing, which involves copying a specific area from one image and pasting it into another. Attempts have been made to mitigate the effects of image splicing, which continues to be a significant research challenge. This study proposes a new splicing detection model combining Sonine functions-derived convex-based features and deep features. The proposed method consists of two stages. The first stage entails feature extraction, followed by classification using a support vector machine (SVM) to differentiate authentic and spliced images. The proposed Sonine functions-based feature extraction model reveals the spliced texture details by extracting clues about the probability of image pixels. The proposed model achieved an accuracy of 98.93% when tested with the CASIA V2.0 dataset (Chinese Academy of Sciences, Institute of Automation), a publicly available dataset for forgery classification. The experimental results show that, for image splicing forgery detection, the proposed Sonine functions-derived convex-based features and deep features outperform state-of-the-art techniques in terms of accuracy, precision, and recall. Overall, the obtained detection accuracy attests to the benefit of using the Sonine functions alongside deep feature representations. The study is limited in finding the regions or locations where image tampering has taken place. Future research will need to look into advanced image analysis techniques that can offer a higher degree of accuracy in identifying and localizing tampered regions.
Keywords: image forgery; image splicing; deep learning; Sonine functions
17. A Novel Multi-Stream Fusion Network for Underwater Image Enhancement
Authors: Guijin Tang, Lian Duan, Haitao Zhao, Feng Liu. China Communications, SCIE/CSCD, 2024, Issue 2, pp. 166-182 (17 pages).
Due to the selective absorption of light and the existence of a large number of floating media in sea water, underwater images often suffer from color casts and detail blur. It is therefore necessary to perform color correction and detail restoration. However, existing enhancement algorithms cannot achieve the desired results. In order to solve the above problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from the illumination stream, color stream, and structure stream by contrast-limited histogram equalization, gamma correction, and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused. This enhances feature representation in underwater images. In the meantime, a composite loss function with three terms is used to ensure the quality of the enhanced image in terms of color balance, structure preservation, and image smoothness, so that the enhanced image is more in line with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison experiments with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results in terms of MSE, PSNR, SSIM, UIQM, and UCIQE, and the enhanced images are more similar to their ground-truth images.
Keywords: image enhancement; multi-stream fusion; underwater image
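The three preprocessing streams named above (contrast-limited histogram equalization, gamma correction, and white balance) can be produced with standard routines; a minimal OpenCV/NumPy sketch follows, with a gray-world white balance and a fixed gamma used as illustrative assumptions:

```python
import cv2
import numpy as np

def illumination_stream(bgr: np.ndarray) -> np.ndarray:
    """CLAHE on the luminance channel (contrast-limited histogram equalization)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)

def color_stream(bgr: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """Gamma correction to lift dark, color-cast regions."""
    lut = np.array([255 * (i / 255.0) ** gamma for i in range(256)], dtype=np.uint8)
    return cv2.LUT(bgr, lut)

def structure_stream(bgr: np.ndarray) -> np.ndarray:
    """Gray-world white balance: scale each channel toward the global mean."""
    means = bgr.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + 1e-6)
    return np.clip(bgr * gains, 0, 255).astype(np.uint8)

# The network would consume these alongside the raw stream, e.g.:
# streams = [raw, illumination_stream(raw), color_stream(raw), structure_stream(raw)]
```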
18. Transformer-Based Cloud Detection Method for High-Resolution Remote Sensing Imagery
Authors: Haotang Tan, Song Sun, Tian Cheng, Xiyuan Shu. Computers, Materials & Continua, SCIE/EI, 2024, Issue 7, pp. 661-678 (18 pages).
Cloud detection from satellite and drone imagery is crucial for applications such as weather forecasting and environmental monitoring. Addressing the limitations of conventional convolutional neural networks, we propose an innovative transformer-based method. This method leverages transformers, which are adept at processing data sequences, to enhance cloud detection accuracy. Additionally, we introduce a Cyclic Refinement Architecture that improves the resolution and quality of feature extraction, thereby aiding in the retention of critical details often lost during cloud detection. Our extensive experimental validation shows that our approach significantly outperforms established models, excelling in high-resolution feature extraction and precise cloud segmentation. By integrating Pyramid Vision Transformers (PVT) with this architecture, our method advances high-resolution feature delineation and segmentation accuracy. Ultimately, our research offers a novel perspective for surmounting traditional challenges in cloud detection and contributes to the advancement of precise and dependable image analysis across various domains.
Keywords: cloud; transformer; image segmentation; remotely sensed imagery; pyramid vision transformer
19. A Modified CycleGAN for Multi-Organ Ultrasound Image Enhancement via Unpaired Pre-Training
Authors: Haonan Han, Bingyu Yang, Weihang Zhang, Dongwei Li, Huiqi Li. Journal of Beijing Institute of Technology, EI/CAS, 2024, Issue 3, pp. 194-203 (10 pages).
Handheld ultrasound devices are known for their portability and affordability, making them widely utilized in underdeveloped areas and community healthcare for rapid diagnosis and early screening. However, the image quality of handheld ultrasound devices is not always satisfactory due to the limited equipment size, which hinders accurate diagnosis by doctors. At the same time, paired ultrasound images are difficult to obtain in the clinic because the imaging process is complicated. Therefore, we propose a modified cycle generative adversarial network (cycleGAN) for ultrasound image enhancement across multiple organs via unpaired pre-training. We introduce an ultrasound image pre-training method that does not require paired images, alleviating the requirement for large-scale paired datasets. We also propose an enhanced block with different structures in the pre-training and fine-tuning phases, which helps achieve the goals of the different training phases. To improve the robustness of the model, we add Gaussian noise to the training images as data augmentation. Our approach is effective in obtaining the best quantitative evaluation results, using a small number of parameters and lower training costs, to improve the image quality of handheld ultrasound devices.
Keywords: ultrasound image enhancement; handheld devices; unpaired images; pre-train and fine-tune; cycleGAN
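The Gaussian-noise augmentation mentioned above is straightforward to add to a training pipeline; a minimal PyTorch sketch, where the noise standard deviation is an assumed hyperparameter rather than the value used in the paper:

```python
import torch

class AddGaussianNoise:
    """Data augmentation: add zero-mean Gaussian noise to a tensor image in [0, 1]."""
    def __init__(self, std: float = 0.05):
        self.std = std

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        return (img + torch.randn_like(img) * self.std).clamp(0.0, 1.0)

# Example: could be placed in a torchvision transform pipeline after ToTensor().
augment = AddGaussianNoise(std=0.05)
noisy = augment(torch.rand(1, 256, 256))     # toy single-channel ultrasound frame
```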
20. Multi-scale cross-domain alignment for person image generation
Authors: Liyuan Ma, Tingwei Gao, Haibin Shen, Kejie Huang. CAAI Transactions on Intelligence Technology, SCIE/EI, 2024, Issue 2, pp. 374-387 (14 pages).
Person image generation aims to generate images that maintain the original human appearance in different target poses. Recent works have revealed that the critical element in achieving this task is the alignment of the appearance domain and the pose domain. Previous alignment methods, such as appearance flow warping, correspondence learning, and cross attention, often encounter challenges when it comes to producing fine texture details. These approaches suffer from limitations in accurately estimating appearance flows due to the lack of a global receptive field. Alternatively, they can only perform cross-domain alignment on high-level feature maps with small spatial dimensions, since the computational complexity increases quadratically with larger feature sizes. In this article, the significance of multi-scale alignment, in both low-level and high-level domains, for ensuring reliable cross-domain alignment of appearance and pose is demonstrated. To this end, a novel and effective method named Multi-scale Cross-domain Alignment (MCA) is proposed. Firstly, MCA adopts a global context aggregation transformer to model multi-scale interaction between pose and appearance inputs, employing pair-wise window-based cross attention. Furthermore, leveraging the integrated global source information for each target position, MCA applies a flexible flow prediction head and point correlation to effectively conduct warping and fusing for final transformed person image generation. Our proposed MCA achieves superior performance on two popular datasets compared with other methods, which verifies the effectiveness of our approach.
Keywords: artificial intelligence; image processing; image reconstruction
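A minimal sketch of pair-wise window-based cross attention between pose and appearance feature maps; the window size, the scaling, and the absence of learned projections and relative position bias are simplifying assumptions, so this illustrates the mechanism rather than the MCA module itself:

```python
import torch

def window_cross_attention(pose_feat: torch.Tensor, app_feat: torch.Tensor, win: int = 8) -> torch.Tensor:
    """Pose tokens attend to appearance tokens inside matching non-overlapping windows."""
    B, C, H, W = pose_feat.shape
    nH, nW = H // win, W // win

    def to_windows(x):
        x = x.reshape(B, C, nH, win, nW, win)
        return x.permute(0, 2, 4, 3, 5, 1).reshape(-1, win * win, C)

    q = to_windows(pose_feat)                        # queries from the pose domain
    kv = to_windows(app_feat)                        # keys/values from the appearance domain
    attn = torch.softmax(q @ kv.transpose(1, 2) / C ** 0.5, dim=-1)
    out = attn @ kv                                  # appearance gathered per pose token
    out = out.reshape(B, nH, nW, win, win, C).permute(0, 5, 1, 3, 2, 4)
    return out.reshape(B, C, H, W)

# Toy check at one scale of the feature pyramid:
aligned = window_cross_attention(torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64))
print(aligned.shape)    # torch.Size([2, 32, 64, 64])
```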