Journal Articles
10,345 articles found
Unified deep learning model for predicting fundus fluorescein angiography image from fundus structure image
1
Authors: Yiwei Chen, Yi He, Hong Ye, Lina Xing, Xin Zhang, Guohua Shi. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2024, Issue 3, pp. 105-113 (9 pages).
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish three prediction tasks using a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
Keywords: fundus fluorescein angiography image; fundus structure image; image translation; unified deep learning model; generative adversarial networks
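The validation metrics cited in the abstract above (peak signal-to-noise ratio, structural similarity index, and mean squared error) can be reproduced with standard tooling. A minimal sketch, assuming scikit-image and 8-bit grayscale frames; it is illustrative and not the authors' code:

    import numpy as np
    from skimage.metrics import (mean_squared_error,
                                 peak_signal_noise_ratio,
                                 structural_similarity)

    def evaluate_ffa_prediction(pred, target):
        """Compare a predicted FFA frame against its ground truth.
        pred, target: uint8 arrays of identical shape (H, W).
        Returns (mse, psnr, ssim), the three metrics named in the abstract."""
        pred = pred.astype(np.float64)
        target = target.astype(np.float64)
        mse = mean_squared_error(target, pred)
        psnr = peak_signal_noise_ratio(target, pred, data_range=255)
        ssim = structural_similarity(target, pred, data_range=255)
        return mse, psnr, ssim

    if __name__ == "__main__":
        # toy usage: random data standing in for real FFA frames
        rng = np.random.default_rng(0)
        gt = rng.integers(0, 256, (256, 256), dtype=np.uint8)
        noisy = np.clip(gt + rng.normal(0, 5, gt.shape), 0, 255).astype(np.uint8)
        print(evaluate_ffa_prediction(noisy, gt))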
A Systematic Literature Review of Machine Learning and Deep Learning Approaches for Spectral Image Classification in Agricultural Applications Using Aerial Photography
2
Authors: Usman Khan, Muhammad Khalid Khan, Muhammad Ayub Latif, Muhammad Naveed, Muhammad Mansoor Alam, Salman A. Khan, Mazliham Mohd Su’ud. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 2967-3000 (34 pages).
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Keywords: machine learning; deep learning; unmanned aerial vehicles; multi-spectral images; image recognition; object detection; hyperspectral images; aerial photography
Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding
3
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1441-1461 (21 pages).
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion; Res2Net-Transformer; infrared image; visible image
Enhancing the Quality of Low-Light Printed Circuit Board Images through Hue, Saturation, and Value Channel Processing and Improved Multi-Scale Retinex
4
Authors: Huichao Shang, Penglei Li, Xiangqian Peng. Journal of Computer and Communications, 2024, Issue 1, pp. 1-10 (10 pages).
To address the issue of deteriorated PCB image quality in the quality inspection process due to insufficient or uneven lighting, we proposed an image enhancement fusion algorithm based on different color spaces. Firstly, an improved MSRCR method was employed for brightness enhancement of the original image. Next, the color space of the original image was transformed from RGB to HSV, followed by processing the S-channel image using bilateral filtering and contrast stretching algorithms. The V-channel image was subjected to brightness enhancement using adaptive Gamma and CLAHE algorithms. Subsequently, the processed image was transformed back to the RGB color space from HSV. Finally, the images processed by the two algorithms were fused to create a new RGB image, and color restoration was performed on the fused image. Comparative experiments with other methods indicated that the contrast of the image was optimized, texture features were more abundantly preserved, brightness levels were significantly improved, and color distortion was prevented effectively, thus enhancing the quality of low-lit PCB images.
Keywords: low-lit PCB images; spatial transformation; image enhancement; image fusion; HSV
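The HSV branch of the pipeline described above (bilateral filtering and contrast stretching on S, adaptive gamma and CLAHE on V) maps directly onto common OpenCV primitives. A rough sketch assuming OpenCV and 8-bit BGR input; the filter parameters and the adaptive-gamma heuristic are illustrative assumptions, not the authors' settings:

    import cv2
    import numpy as np

    def enhance_hsv_branch(bgr, gamma=None):
        """HSV branch of the low-light PCB pipeline: bilateral filtering and
        contrast stretching on S, adaptive gamma plus CLAHE on V."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)

        # S channel: edge-preserving smoothing, then full-range contrast stretch
        s = cv2.bilateralFilter(s, d=9, sigmaColor=75, sigmaSpace=75)
        s = cv2.normalize(s, None, 0, 255, cv2.NORM_MINMAX)

        # V channel: adaptive gamma estimated from mean brightness (heuristic), then CLAHE
        if gamma is None:
            mean_v = np.clip(v.mean(), 1, 254)
            gamma = np.log(0.5) / np.log(mean_v / 255.0)  # >1 darkens, <1 brightens
        lut = np.clip(((np.arange(256) / 255.0) ** gamma) * 255.0, 0, 255).astype(np.uint8)
        v = cv2.LUT(v, lut)
        v = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(v)

        return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)

In the full algorithm this result would then be fused with the MSRCR-enhanced image and color-restored, as the abstract describes.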
Background removal from global auroral images: Data-driven dayglow modeling (cited by 1)
5
Authors: A. Ohma, M. Madelaire, K. M. Laundal, J. P. Reistad, S. M. Hatch, S. Gasparini, S. J. Walker. Earth and Planetary Physics (EI, CSCD), 2024, Issue 1, pp. 247-257 (11 pages).
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model by using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
Keywords: aurora; dayglow modeling; global auroral images; far ultraviolet images; dayglow removal
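The iterative outlier deselection described above (fit a smooth background, flag bright residuals such as auroral emission, refit) can be written generically. A schematic NumPy sketch in which a generic design matrix G stands in for the B-spline and spherical-harmonic basis; the threshold and iteration count are assumptions:

    import numpy as np

    def fit_background(G, d, n_iter=10, kappa=2.0):
        """Iteratively fit a smooth background model d ~ G @ m and deselect
        positive outliers (e.g. auroral emission) between iterations.
        G: (N, M) design matrix (assumed B-spline x spherical-harmonic basis)
        d: (N,) observed intensities
        Returns the model coefficients and the final inlier mask."""
        mask = np.ones(len(d), dtype=bool)
        m = np.zeros(G.shape[1])
        for _ in range(n_iter):
            m, *_ = np.linalg.lstsq(G[mask], d[mask], rcond=None)
            resid = d - G @ m
            sigma = resid[mask].std()
            # keep points that are not bright outliers above the modeled background
            new_mask = resid < kappa * sigma
            if np.array_equal(new_mask, mask):
                break
            mask = new_mask
        return m, mask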
Using restored two-dimensional X-ray images to reconstruct the three-dimensional magnetopause (cited by 1)
6
Authors: RongCong Wang, JiaQi Wang, DaLin Li, TianRan Sun, XiaoDong Peng, YiHong Guo. Earth and Planetary Physics (EI, CSCD), 2024, Issue 1, pp. 133-154 (22 pages).
Astronomical imaging technologies are basic tools for the exploration of the universe, providing basic data for the research of astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath by using soft X-ray imaging. However, the observed 2-D images are affected by many noise factors, destroying the contained information, which is not conducive to the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. The analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models. This makes it difficult to establish the mapping relationship between SXI-simulated observation images and target images by using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structure information on the magnetosphere. The idea is to train a patch estimator by selecting noise–clean patch pairs with the same distribution through the Classification–Expectation Maximization algorithm to achieve the restoration estimation of the SXI-simulated observation image, whose mapping relationship with the target image is established by the patch estimator. The Classification–Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator. Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task, according to the peak signal-to-noise ratio and structural similarity. The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach toward magnetospheric reconstruction techniques, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
Keywords: Solar wind Magnetosphere Ionosphere Link Explorer (SMILE); soft X-ray imager; magnetopause; image restoration
Integer multiple quantum image scaling based on NEQR and bicubic interpolation
7
Authors: 蔡硕, 周日贵, 罗佳, 陈思哲. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 259-273 (15 pages).
As a branch of quantum image processing, quantum image scaling has been widely studied. However, most of the existing quantum image scaling algorithms are based on nearest-neighbor interpolation and bilinear interpolation; the quantum version of bicubic interpolation has not yet been studied. In this work, we present the first quantum image scaling scheme for bicubic interpolation based on the novel enhanced quantum representation (NEQR). Our scheme can realize synchronous enlargement and reduction of an image of size 2^(n) × 2^(n) by an integral multiple. Firstly, the image is represented by NEQR and the original image coordinates are obtained through multiple CNOT modules. Then, 16 neighborhood pixels are obtained by quantum operation circuits, and the corresponding weights of these pixels are calculated by quantum arithmetic modules. Finally, a quantum matrix operation, instead of a classical convolution operation, is used to realize the sum of convolution of these pixels. Through simulation experiments and complexity analysis, we demonstrate that our scheme achieves exponential speedup over the classical bicubic interpolation algorithm, and has a better effect than the quantum version of bilinear interpolation.
Keywords: quantum image processing; image scaling; bicubic interpolation; quantum circuit
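The 16-neighborhood weighting that the quantum circuits above compute follows classical bicubic (cubic convolution) interpolation. A classical reference sketch, assuming the Keys kernel with a = -0.5 and clamped borders, which may differ from the paper's exact weight definition:

    import numpy as np

    def cubic_kernel(t, a=-0.5):
        """Keys cubic convolution kernel; a = -0.5 is the common bicubic choice."""
        t = abs(t)
        if t <= 1:
            return (a + 2) * t**3 - (a + 3) * t**2 + 1
        if t < 2:
            return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
        return 0.0

    def bicubic_sample(img, y, x):
        """Interpolate img (2-D float array) at fractional coordinates (y, x)
        from its 4 x 4 = 16 neighboring pixels, as in classical bicubic scaling."""
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        value = 0.0
        for j in range(-1, 3):
            for i in range(-1, 3):
                yy = np.clip(y0 + j, 0, img.shape[0] - 1)  # clamp at borders
                xx = np.clip(x0 + i, 0, img.shape[1] - 1)
                w = cubic_kernel(y - (y0 + j)) * cubic_kernel(x - (x0 + i))
                value += w * img[yy, xx]
        return value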
A Review on the Recent Trends of Image Steganography for VANET Applications
8
Author: Arshiya S. Ansari. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 2865-2892 (28 pages).
Image steganography is a technique of concealing confidential information within an image without dramatically changing its outside look. Vehicular ad hoc networks (VANETs), which enable vehicles to communicate with one another and with roadside infrastructure to enhance safety and traffic flow, provide a range of value-added services and are an essential component of modern smart transportation systems. Steganography in VANETs has been suggested by many authors for secure, reliable message transfer from one terminal/hop to another, and also to secure it from attack for privacy protection. This paper aims to determine whether steganography can be used to improve data security and secrecy in VANET applications, and to analyze effective steganography techniques for incorporating data into images while minimizing visual quality loss. According to simulations in the literature and real-world studies, image steganography proved to be an effective method for secure communication on VANETs, even in difficult network conditions. In this research, we also explore a variety of steganography approaches for vehicular ad-hoc network transportation systems, such as vector embedding, statistics, spatial domain (SD), transform domain (TD), distortion, masking, and filtering. This study may help researchers improve vehicle networks' ability to communicate securely and pave the way for innovative steganography methods.
Keywords: steganography; image steganography; image steganography techniques; information exchange; data embedding and extracting; vehicular ad hoc network (VANET); transportation system
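Of the spatial-domain techniques surveyed above, least-significant-bit (LSB) embedding is the simplest to illustrate. A minimal NumPy sketch, not tied to any particular VANET scheme in the review; the fixed-length extraction interface is an assumption:

    import numpy as np

    def lsb_embed(cover, message: bytes):
        """Hide message bytes in the least significant bits of a uint8 image."""
        bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
        flat = cover.flatten()  # flatten() copies, so the cover is left untouched
        if len(bits) > flat.size:
            raise ValueError("message too long for this cover image")
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
        return flat.reshape(cover.shape)

    def lsb_extract(stego, n_bytes: int) -> bytes:
        """Recover n_bytes previously embedded with lsb_embed."""
        bits = stego.flatten()[:n_bytes * 8] & 1
        return np.packbits(bits).tobytes()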
Fuzzy Difference Equations in Diagnoses of Glaucoma from Retinal Images Using Deep Learning
9
Authors: D. Dorathy Prema Kavitha, L. Francis Raj, Sandeep Kautish, Abdulaziz S. Almazyad, Karam M. Sallam, Ali Wagdy Mohamed. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 801-816 (16 pages).
The intuitive fuzzy set has found important application in decision-making and machine learning. To enrich and utilize the intuitive fuzzy set, this study designed and developed a deep neural network-based glaucoma eye detection using fuzzy difference equations in the domain where the retinal images converge. Retinal image detections are categorized as normal eye recognition, suspected glaucomatous eye recognition, and glaucomatous eye recognition. Fuzzy degrees associated with weighted values are calculated to determine the level of concentration between the fuzzy partition and the retinal images. The proposed model was used to diagnose glaucoma using retinal images and involved utilizing the Convolutional Neural Network (CNN) and deep learning to identify the fuzzy weighted regularization between images. This methodology was used to clarify the input images and make them adequate for the process of glaucoma detection. The objective of this study was to propose a novel approach to the early diagnosis of glaucoma using the Fuzzy Expert System (FES) and Fuzzy differential equation (FDE). The intensities of the different regions in the images and their respective peak levels were determined. Once the peak regions were identified, the recurrence relationships among those peaks were then measured. Image partitioning was done due to varying degrees of similar and dissimilar concentrations in the image. Similar and dissimilar concentration levels and spatial frequency generated a threshold image from the combined fuzzy matrix and FDE. This distinguished between a normal and abnormal eye condition, thus detecting patients with glaucomatous eyes.
Keywords: Convolutional Neural Network (CNN); glaucomatous eyes; fuzzy difference equation; intuitive fuzzy sets; image segmentation; retinal images
Color Image Compression and Encryption Algorithm Based on 2D Compressed Sensing and Hyperchaotic System
10
Authors: Zhiqing Dong, Zhao Zhang, Hongyan Zhou, Xuebo Chen. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1977-1993 (17 pages).
With the advent of the information security era, it is necessary to guarantee the privacy, accuracy, and dependable transfer of pictures. This study presents a new approach to the encryption and compression of color images. It is predicated on 2D compressed sensing (CS) and the hyperchaotic system. First, an optimized Arnold scrambling algorithm is applied to the initial color images to ensure strong security. Then, the processed images are concurrently encrypted and compressed using 2D CS. Among them, chaotic sequences replace traditional random measurement matrices to increase the system's security. Third, the processed images are re-encrypted using a combination of permutation and diffusion algorithms. In addition, the 2D projected gradient with an embedding decryption (2DPG-ED) algorithm is used to reconstruct images. Compared with the traditional reconstruction algorithm, the 2DPG-ED algorithm can improve security and reduce computational complexity. Furthermore, it has better robustness. The experimental outcome and the performance analysis indicate that this algorithm can withstand malicious attacks and prove the method is effective.
Keywords: image encryption; image compression; hyperchaotic system; compressed sensing
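The Arnold scrambling stage mentioned above can be illustrated with the standard (unoptimized) cat map on a square channel. A NumPy sketch; the iteration count is arbitrary and the authors' optimized variant is not reproduced here:

    import numpy as np

    def arnold_scramble(channel, iterations=5):
        """Standard Arnold cat map on an N x N image channel:
        (x, y) -> (x + y, x + 2y) mod N, applied `iterations` times."""
        n = channel.shape[0]
        assert channel.shape[0] == channel.shape[1], "Arnold map needs a square image"
        out = channel.copy()
        for _ in range(iterations):
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            out = out[(x + y) % n, (x + 2 * y) % n]
        return out

    def arnold_unscramble(channel, iterations=5):
        """Invert the map using the inverse matrix (2, -1; -1, 1) mod N."""
        n = channel.shape[0]
        out = channel.copy()
        for _ in range(iterations):
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            out = out[(2 * x - y) % n, (-x + y) % n]
        return out

Applying arnold_unscramble with the same iteration count recovers the original channel exactly, which is why the map is popular as a reversible pre-encryption permutation.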
Image enhancement with intensity transformation on embedding space
11
Authors: Hanul Kim, Yeji Jeon, Yeong Jun Koh. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 1, pp. 101-115 (15 pages).
In recent times, an image enhancement approach, which learns the global transformation function using deep neural networks, has gained attention. However, many existing methods based on this approach have a limitation: their transformation functions are too simple to imitate complex colour transformations between low-quality images and manually retouched high-quality images. In order to address this limitation, a simple yet effective approach for image enhancement is proposed. The proposed algorithm based on the channel-wise intensity transformation is designed. However, this transformation is applied to the learnt embedding space instead of specific colour spaces and then returns enhanced features to colours. To this end, the authors define the continuous intensity transformation (CIT) to describe the mapping between input and output intensities on the embedding space. Then, the enhancement network is developed, which produces multi-scale feature maps from input images, derives the set of transformation functions, and performs the CIT to obtain enhanced images. Extensive experiments on the MIT-Adobe 5K dataset demonstrate that the authors' approach improves the performance of conventional intensity transforms on colour space metrics. Specifically, the authors achieved a 3.8% improvement in peak signal-to-noise ratio, a 1.8% improvement in structural similarity index measure, and a 27.5% improvement in learned perceptual image patch similarity. Also, the authors' algorithm outperforms state-of-the-art alternatives on three image enhancement datasets: MIT-Adobe 5K, Low-Light, and Google HDR+.
Keywords: computer vision; deep learning; image enhancement; image processing
A Degradation Type Adaptive and Deep CNN-Based Image Classification Model for Degraded Images
12
Authors: Huanhua Liu, Wei Wang, Hanyu Liu, Shuheng Yi, Yonghao Yu, Xunwen Yao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 1, pp. 459-472 (14 pages).
Deep Convolutional Neural Networks (CNNs) have achieved high accuracy in image classification tasks; however, most existing models are trained on high-quality images that are not subject to image degradation. In practice, images are often affected by various types of degradation, which can significantly impact the performance of CNNs. In this work, we investigate the influence of image degradation on three typical image classification CNNs and propose a Degradation Type Adaptive Image Classification Model (DTA-ICM) to improve the existing CNNs' classification accuracy on degraded images. The proposed DTA-ICM comprises two key components: a Degradation Type Predictor (DTP) and a Degradation Type Specified Image Classifier (DTS-IC) set, which is trained on existing CNNs for specified types of degradation. The DTP predicts the degradation type of a test image, and the corresponding DTS-IC is then selected to classify the image. We evaluate the performance of both the proposed DTP and the DTA-ICM on the Caltech 101 database. The experimental results demonstrate that the proposed DTP achieves an average accuracy of 99.70%. Moreover, the proposed DTA-ICM, based on AlexNet, VGG19, and ResNet152, exhibits an average accuracy improvement of 20.63%, 18.22%, and 12.9%, respectively, compared with the original CNNs in classifying degraded images. It suggests that the proposed DTA-ICM can effectively improve the classification performance of existing CNNs on degraded images, which has important practical implications.
Keywords: image recognition; image degradation; machine learning; deep convolutional neural network
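At inference time, the two-stage design described above reduces to routing each test image through the degradation-type predictor and then to the matching type-specific classifier. A framework-agnostic sketch; the model objects and the degradation-type labels are hypothetical placeholders, not the paper's implementation:

    from typing import Callable, Dict

    import numpy as np

    # Placeholders: in the paper these would be trained CNNs
    # (AlexNet/VGG19/ResNet152 fine-tuned per degradation type).
    DegradationPredictor = Callable[[np.ndarray], str]
    ImageClassifier = Callable[[np.ndarray], int]

    def classify_degraded_image(image: np.ndarray,
                                predict_degradation: DegradationPredictor,
                                classifiers: Dict[str, ImageClassifier]) -> int:
        """DTA-ICM-style inference: predict the degradation type of the test
        image, then route it to the classifier trained for that type."""
        degradation_type = predict_degradation(image)   # e.g. "blur", "noise", "jpeg" (illustrative labels)
        # fall back to a classifier for undegraded images; the key name is illustrative
        classifier = classifiers.get(degradation_type, classifiers["clean"])
        return classifier(image)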
Image Splicing Forgery Detection Using Feature-Based of Sonine Functions and Deep Features
13
Authors: Ala’a R. Al-Shamasneh, Rabha W. Ibrahim. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 795-810 (16 pages).
The growing prevalence of fake images on the Internet and social media makes image integrity verification a crucial research topic. One of the most popular methods for manipulating digital images is image splicing, which involves copying a specific area from one image and pasting it into another. Attempts were made to mitigate the effects of image splicing, which continues to be a significant research challenge. This study proposes a new splicing detection model, combining Sonine functions-derived convex-based features and deep features. Two stages make up the proposed method. The first step entails feature extraction, then classification using the “support vector machine” (SVM) to differentiate authentic and spliced images. The proposed Sonine functions-based feature extraction model reveals the spliced texture details by extracting some clues about the probability of image pixels. The proposed model achieved an accuracy of 98.93% when tested with the CASIA V2.0 dataset “Chinese Academy of Sciences, Institute of Automation”, which is a publicly available dataset for forgery classification. The experimental results show that, for image splicing forgery detection, the proposed Sonine functions-derived convex-based features and deep features outperform state-of-the-art techniques in terms of accuracy, precision, and recall. Overall, the obtained detection accuracy attests to the benefit of using the Sonine functions alongside deep feature representations. Finding the regions or locations where image tampering has taken place is limited by the study. Future research will need to look into advanced image analysis techniques that can offer a higher degree of accuracy in identifying and localizing tampering regions.
Keywords: image forgery; image splicing; deep learning; Sonine functions
A Novel Multi-Stream Fusion Network for Underwater Image Enhancement
14
Authors: Guijin Tang, Lian Duan, Haitao Zhao, Feng Liu. China Communications (SCIE, CSCD), 2024, Issue 2, pp. 166-182 (17 pages).
Due to the selective absorption of light and the existence of a large number of floating media in sea water, underwater images often suffer from color casts and detail blurs. It is therefore necessary to perform color correction and detail restoration. However, the existing enhancement algorithms cannot achieve the desired results. In order to solve the above problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from the illumination stream, color stream and structure stream by histogram equalization with contrast limitation, gamma correction and white balance, respectively. Next, these three streams and the original raw stream are sent to the residual blocks to extract the features. The features will be subsequently fused. It can enhance feature representation in underwater images. In the meantime, a composite loss function including three terms is used to ensure the quality of the enhanced image from the three aspects of color balance, structure preservation and image smoothness. Therefore, the enhanced image is more in line with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison experiments with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results over them in terms of MSE, PSNR, SSIM, UIQM and UCIQE, and the enhanced images are more similar to their ground truth images.
Keywords: image enhancement; multi-stream fusion; underwater image
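The preprocessing that produces the three auxiliary streams above (contrast-limited histogram equalization for illumination, gamma correction for color, white balance for structure) can be sketched with OpenCV. The specific operators (LAB-space CLAHE, gray-world white balance) and parameter values are assumptions, not the authors' exact preprocessing:

    import cv2
    import numpy as np

    def build_input_streams(bgr, gamma=0.7):
        """Build the four network inputs: the raw image plus illumination
        (CLAHE), color (gamma) and structure (white balance) streams."""
        # illumination stream: contrast-limited histogram equalization on luminance
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
        illum = cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)

        # color stream: gamma correction via a lookup table
        lut = np.clip(((np.arange(256) / 255.0) ** gamma) * 255.0, 0, 255).astype(np.uint8)
        color = cv2.LUT(bgr, lut)

        # structure stream: gray-world white balance
        img = bgr.astype(np.float64)
        gains = img.mean() / (img.reshape(-1, 3).mean(axis=0) + 1e-6)
        white = np.clip(img * gains, 0, 255).astype(np.uint8)

        return bgr, illum, color, white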
Multi-scale cross-domain alignment for person image generation
15
Authors: Liyuan Ma, Tingwei Gao, Haibin Shen, Kejie Huang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 2, pp. 374-387 (14 pages).
Person image generation aims to generate images that maintain the original human appearance in different target poses. Recent works have revealed that the critical element in achieving this task is the alignment of the appearance domain and the pose domain. Previous alignment methods, such as appearance flow warping, correspondence learning and cross attention, often encounter challenges when it comes to producing fine texture details. These approaches suffer from limitations in accurately estimating appearance flows due to the lack of a global receptive field. Alternatively, they can only perform cross-domain alignment on high-level feature maps with small spatial dimensions, since the computational complexity increases quadratically with larger feature sizes. In this article, the significance of multi-scale alignment, in both low-level and high-level domains, for ensuring reliable cross-domain alignment of appearance and pose is demonstrated. To this end, a novel and effective method, named Multi-scale Cross-domain Alignment (MCA), is proposed. Firstly, MCA adopts a global context aggregation transformer to model multi-scale interaction between pose and appearance inputs, which employs pair-wise window-based cross attention. Furthermore, leveraging the integrated global source information for each target position, MCA applies a flexible flow prediction head and point correlation to effectively conduct warping and fusing for final transformed person image generation. Our proposed MCA achieves superior performance over other methods on two popular datasets, which verifies the effectiveness of our approach.
Keywords: artificial intelligence; image processing; image reconstruction
Automated Algorithms for Detecting and Classifying X-Ray Images of Spine Fractures
16
Author: Fayez Alfayez. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1539-1560 (22 pages).
This paper emphasizes a faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray pictures. The study focuses on efficiency by utilizing many methods that include picture segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: using feature reduction software and leveraging the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for picture enhancement, including the Wiener and Kalman filters, and they look into two background correction techniques. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured spine detection algorithms using transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program code has been built to improve the processing speed for picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the texture features' discrete cosine transform (DCT) yielded an exceptional classification rate, and the process of extracting features from the transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
Keywords: feature reduction; image classification; X-ray images
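The transform-domain texture features mentioned above can be illustrated with a block-wise 2-D DCT. A minimal sketch assuming SciPy; the block size, coefficient selection, and raster ordering are illustrative choices rather than the paper's exact feature definition:

    import numpy as np
    from scipy.fft import dctn

    def block_dct_features(image, block=8, keep=10):
        """Extract low-frequency 2-D DCT coefficients from non-overlapping blocks
        of a grayscale image as a simple transform-domain texture descriptor."""
        h, w = image.shape
        feats = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                coeffs = dctn(image[y:y + block, x:x + block].astype(np.float64),
                              norm="ortho")
                # a zig-zag scan would be more faithful; a raster crop keeps the sketch short
                feats.append(coeffs.flatten()[:keep])
        return np.concatenate(feats)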
SMILE soft X-ray Imager flight model CCD370 pre-flight device characterisation (cited by 1)
17
Authors: S. Parsons, D. J. Hall, O. Hetherington, T. W. Buggey, T. Arnold, M. W. J. Hubbard, A. Holland. Earth and Planetary Physics (EI, CSCD), 2024, Issue 1, pp. 25-38 (14 pages).
Throughout the SMILE mission the satellite will be bombarded by radiation which gradually damages the focal plane devices and degrades their performance. In order to understand the changes of the CCD370s within the soft X-ray Imager, an initial characterisation of the devices has been carried out to give a baseline performance level. Three CCDs have been characterised: the two flight devices and the flight spare. This has been carried out at the Open University in a bespoke cleanroom measurement facility. The results show that there is a cluster of bright pixels in the flight spare which increases in size with temperature. However, at the nominal operating temperature (-120℃) it is within the procurement specifications. Overall, the devices meet the specifications when operating at -120℃ in 6 × 6 binned frame transfer science mode. The serial charge transfer inefficiency degrades with temperature in full frame mode. However, any charge losses are recovered when binning/frame transfer is implemented.
Keywords: CCD; soft X-ray imager; characterisation; SMILE
Simulation of the SMILE Soft X-ray Imager response to a southward interplanetary magnetic field turning (cited by 1)
18
Authors: Andrey Samsonov, Graziella Branduardi-Raymont, Steven Sembay, Andrew Read, David Sibeck, Lutz Rastaetter. Earth and Planetary Physics (EI, CSCD), 2024, Issue 1, pp. 39-46 (8 pages).
The Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) Soft X-ray Imager (SXI) will shine a spotlight on magnetopause dynamics during magnetic reconnection. We simulate an event with a southward interplanetary magnetic field turning and produce SXI count maps with a 5-minute integration time. By making assumptions about the magnetopause shape, we find the magnetopause standoff distance from the count maps and compare it with the one obtained directly from the magnetohydrodynamic (MHD) simulation. The root mean square deviations between the reconstructed and MHD standoff distances do not exceed 0.2 RE (Earth radius) and the maximal difference equals 0.24 RE during the 25-minute interval around the southward turning.
Keywords: magnetopause; magnetic reconnection; solar wind charge exchange; southward interplanetary magnetic field; numerical modeling; Solar wind Magnetosphere Ionosphere Link Explorer (SMILE); Soft X-ray Imager
Scale-space effect and scale hybridization in image intelligent recognition of geological discontinuities on rock slopes
19
Authors: Mingyang Wang, Enzhi Wang, Xiaoli Liu, Congcong Wang. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, Issue 4, pp. 1315-1336 (22 pages).
Geological discontinuity (GD) plays a pivotal role in determining the catastrophic mechanical failure of jointed rock masses. Accurate and efficient acquisition of GD networks is essential for characterizing and understanding the progressive damage mechanisms of slopes based on monitoring image data. Inspired by recent advances in computer vision, deep learning (DL) models have been widely utilized for image-based fracture identification. The multi-scale characteristics, image resolution and annotation quality of images will cause a scale-space effect (SSE) that makes features indistinguishable from noise, directly affecting the accuracy. However, this effect has not received adequate attention. Herein, we try to address this gap by collecting slope images at various proportional scales and constructing multi-scale datasets using image processing techniques. Next, we quantify the intensity of feature signals using metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Combining these metrics with the scale-space theory, we investigate the influence of the SSE on the differentiation of multi-scale features and the accuracy of recognition. It is found that augmenting the image's detail capacity does not always yield benefits for vision-based recognition models. In light of these observations, we propose a scale hybridization approach based on the diffusion mechanism of scale-space representation. The results show that scale hybridization strengthens the tolerance of multi-scale feature recognition under complex environmental noise interference and significantly enhances the recognition accuracy of GD. It also facilitates the objective understanding, description and analysis of the rock behavior and stability of slopes from the perspective of image data.
Keywords: image processing; geological discontinuities; deep learning; multi-scale; scale-space theory; scale hybridization
A Galaxy Image Augmentation Method Based on Few-shot Learning and Generative Adversarial Networks
20
Authors: Yiqi Yao, Jinqu Zhang, Ping Du, Shuyu Dong. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, Issue 3, pp. 180-193 (14 pages).
Galaxy morphology classifications based on machine learning are a typical technique to handle enormous amounts of astronomical observation data, but the key challenge is how to provide enough training data for the machine learning models. Therefore this article proposes an image data augmentation method that combines few-shot learning and generative adversarial networks. The Galaxy10 DECaLs data set is selected for the experiments, with consistency, variance, and augmentation effects being evaluated. Three popular networks, including AlexNet, VGG, and ResNet, are used as examples to study the effectiveness of different augmentation methods on galaxy morphology classifications. Experiment results show that the proposed method can generate galaxy images and can be used for expanding the classification model's training set. According to comparative studies, the best enhancement effect on model performance is obtained by generating a data set that is 0.5–1 time larger than the original data set. Meanwhile, different augmentation strategies have considerably varied effects on different types of galaxies. FSL-GAN achieved the best classification performance on the ResNet network for In-between Round Smooth Galaxies and Unbarred Loose Spiral Galaxies, with F1 Scores of 89.54% and 63.18%, respectively. Experimental comparison reveals that various data augmentation techniques have varied effects on different categories of galaxy morphology and machine learning models. Finally, the best augmentation strategies for each galaxy category are suggested.
Keywords: techniques: image processing; galaxies: structure; galaxies: general