Journal Articles
320,999 articles found
1. On Women's Images In Vanity Fair
Author: 舒静. 《海外英语》, 2014, Issue 12X, pp. 178-179 (2 pages)
William Makepeace Thackeray portrayed two female images in his representative work Vanity Fair: Amelia Sedley and Becky Sharp. By analyzing the minor status and role of women in patriarchal society, the feminism represented by Becky, who actively tried to change her life, and the mild, kind-hearted traditional female image represented by Amelia, the paper reveals the social status and role of women in patriarchal society.
Keywords: FEMALE IMAGE, PATRIARCHY SOCIETY, VALUE
2. Background removal from global auroral images: Data-driven dayglow modeling (Cited: 1)
Authors: A. Ohma, M. Madelaire, K. M. Laundal, J. P. Reistad, S. M. Hatch, S. Gasparini, S. J. Walker. 《Earth and Planetary Physics》 (EI, CSCD), 2024, Issue 1, pp. 247-257 (11 pages)
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model by using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
Keywords: AURORA, dayglow modeling, global auroral images, far ultraviolet images, dayglow removal
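As an illustration of the iterative outlier deselection this abstract describes, here is a minimal 1-D sketch: a smooth background is fit by linear least squares and samples lying far above it (e.g., auroral emissions) are repeatedly dropped. The polynomial basis, threshold, and variable names are assumptions for illustration; the actual model in the paper uses B-splines and spherical harmonics over many images.

```python
# Hypothetical 1-D illustration of background fitting with iterative outlier deselection.
import numpy as np

def fit_background(x, y, degree=5, n_iter=10, k_sigma=2.0):
    """Fit a smooth background and iteratively deselect bright outliers (e.g., aurora)."""
    A = np.vander(x, degree + 1)              # simple polynomial basis as a stand-in
    keep = np.ones_like(y, dtype=bool)
    coeffs = None
    for _ in range(n_iter):
        coeffs, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        resid = y - A @ coeffs
        sigma = resid[keep].std()
        new_keep = resid < k_sigma * sigma     # drop samples far above the background
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return A @ coeffs, keep

# Example: smooth dayglow-like background plus a localized bright "auroral" bump.
x = np.linspace(0.0, 1.0, 200)
background = 100.0 + 30.0 * np.sin(2.0 * np.pi * x)
signal = background + 80.0 * np.exp(-((x - 0.6) / 0.03) ** 2) + np.random.normal(0, 2, x.size)
model, inliers = fit_background(x, signal)
```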
3. Using restored two-dimensional X-ray images to reconstruct the three-dimensional magnetopause (Cited: 1)
Authors: RongCong Wang, JiaQi Wang, DaLin Li, TianRan Sun, XiaoDong Peng, YiHong Guo. 《Earth and Planetary Physics》 (EI, CSCD), 2024, Issue 1, pp. 133-154 (22 pages)
Astronomical imaging technologies are basic tools for the exploration of the universe, providing basic data for research in astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath by using soft X-ray imaging. However, the observed 2-D images are affected by many noise factors, destroying the contained information, which is not conducive to the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. The analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models. This makes it difficult to establish the mapping relationship between SXI-simulated observation images and target images by using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structure information on the magnetosphere. The idea is to train a patch estimator by selecting noise–clean patch pairs with the same distribution through the Classification–Expectation Maximization algorithm to achieve the restoration estimation of the SXI-simulated observation image, whose mapping relationship with the target image is established by the patch estimator. The Classification–Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator. Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task, according to the peak signal-to-noise ratio and structural similarity. The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach toward magnetospheric reconstruction techniques, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
Keywords: Solar wind Magnetosphere Ionosphere Link Explorer (SMILE), soft X-ray imager, MAGNETOPAUSE, image restoration
4. Automated Algorithms for Detecting and Classifying X-Ray Images of Spine Fractures
Author: Fayez Alfayez. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 4, pp. 1539-1560 (22 pages)
This paper emphasizes a faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray pictures. The study focuses on efficiency by utilizing many methods that include picture segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: using feature reduction software and leveraging the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for picture enhancement, including the Wiener and Kalman filters, and they look into two background correction techniques. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured spine detection algorithms using transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program code has been built to improve the processing speed for picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and the process of extracting features from the transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
Keywords: Feature reduction, image classification, X-ray images
5. A Degradation Type Adaptive and Deep CNN-Based Image Classification Model for Degraded Images
Authors: Huanhua Liu, Wei Wang, Hanyu Liu, Shuheng Yi, Yonghao Yu, Xunwen Yao. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 1, pp. 459-472 (14 pages)
Deep Convolutional Neural Networks (CNNs) have achieved high accuracy in image classification tasks; however, most existing models are trained on high-quality images that are not subject to image degradation. In practice, images are often affected by various types of degradation which can significantly impact the performance of CNNs. In this work, we investigate the influence of image degradation on three typical image classification CNNs and propose a Degradation Type Adaptive Image Classification Model (DTA-ICM) to improve the existing CNNs' classification accuracy on degraded images. The proposed DTA-ICM comprises two key components: a Degradation Type Predictor (DTP) and a Degradation Type Specified Image Classifier (DTS-IC) set, which is trained on existing CNNs for specified types of degradation. The DTP predicts the degradation type of a test image, and the corresponding DTS-IC is then selected to classify the image. We evaluate the performance of both the proposed DTP and the DTA-ICM on the Caltech 101 database. The experimental results demonstrate that the proposed DTP achieves an average accuracy of 99.70%. Moreover, the proposed DTA-ICM, based on AlexNet, VGG19, and ResNet152, exhibits an average accuracy improvement of 20.63%, 18.22%, and 12.9%, respectively, compared with the original CNNs in classifying degraded images. This suggests that the proposed DTA-ICM can effectively improve the classification performance of existing CNNs on degraded images, which has important practical implications.
Keywords: Image recognition, image degradation, machine learning, deep convolutional neural network
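A minimal sketch of the two-stage routing the abstract describes: a Degradation Type Predictor (DTP) assigns each test image a degradation type, and the matching Degradation Type Specified Image Classifier (DTS-IC) is then used. Module interfaces and names here are assumed placeholders, not the authors' implementation.

```python
# Hypothetical routing wrapper: DTP picks the degradation type, DTS-IC classifies.
import torch
import torch.nn as nn

class DTAICM(nn.Module):
    def __init__(self, dtp: nn.Module, dts_classifiers: nn.ModuleList):
        super().__init__()
        self.dtp = dtp                          # predicts the degradation type of an image
        self.dts_classifiers = dts_classifiers  # one classifier per degradation type

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        deg_type = self.dtp(x).argmax(dim=1)    # predicted degradation type per image
        outputs = []
        for i in range(x.size(0)):
            clf = self.dts_classifiers[int(deg_type[i])]  # route to the matching classifier
            outputs.append(clf(x[i:i + 1]))
        return torch.cat(outputs, dim=0)        # class logits for each input image
```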
6. Fuzzy Difference Equations in Diagnoses of Glaucoma from Retinal Images Using Deep Learning
Authors: D. Dorathy Prema Kavitha, L. Francis Raj, Sandeep Kautish, Abdulaziz S. Almazyad, Karam M. Sallam, Ali Wagdy Mohamed. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 4, pp. 801-816 (16 pages)
The intuitive fuzzy set has found important application in decision-making and machine learning. To enrich and utilize the intuitive fuzzy set, this study designed and developed a deep neural network-based glaucoma eye detection using fuzzy difference equations in the domain where the retinal images converge. Retinal image detections are categorized as normal eye recognition, suspected glaucomatous eye recognition, and glaucomatous eye recognition. Fuzzy degrees associated with weighted values are calculated to determine the level of concentration between the fuzzy partition and the retinal images. The proposed model was used to diagnose glaucoma using retinal images and involved utilizing a Convolutional Neural Network (CNN) and deep learning to identify the fuzzy weighted regularization between images. This methodology was used to clarify the input images and make them adequate for the process of glaucoma detection. The objective of this study was to propose a novel approach to the early diagnosis of glaucoma using the Fuzzy Expert System (FES) and Fuzzy Difference Equation (FDE). The intensities of the different regions in the images and their respective peak levels were determined. Once the peak regions were identified, the recurrence relationships among those peaks were then measured. Image partitioning was done due to varying degrees of similar and dissimilar concentrations in the image. Similar and dissimilar concentration levels and spatial frequency generated a threshold image from the combined fuzzy matrix and FDE. This distinguished between a normal and abnormal eye condition, thus detecting patients with glaucomatous eyes.
Keywords: Convolutional Neural Network (CNN), glaucomatous eyes, fuzzy difference equation, intuitive fuzzy sets, image segmentation, retinal images
7. Restoration of the JPEG Maximum Lossy Compressed Face Images with Hourglass Block-GAN
Authors: Jongwook Si, Sungyoung Kim. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 3, pp. 2893-2908 (16 pages)
In the context of high compression rates applied to Joint Photographic Experts Group (JPEG) images through lossy compression techniques, image-blocking artifacts may manifest. This necessitates the restoration of the image to its original quality. The challenge lies in regenerating significantly compressed images into a state in which these become identifiable. Therefore, this study focuses on the restoration of JPEG images subjected to substantial degradation caused by maximum lossy compression using Generative Adversarial Networks (GAN). The generator in this network is based on the U-Net architecture. It features a new hourglass structure that preserves the characteristics of the deep layers. In addition, the network incorporates two loss functions to generate natural and high-quality images: Low Frequency (LF) loss and High Frequency (HF) loss. HF loss uses a pretrained VGG-16 network and is configured using a specific layer that best represents features. This can enhance the performance in the high-frequency region. In contrast, LF loss is used to handle the low-frequency region. The two loss functions facilitate the generation of images by the generator, which can mislead the discriminator while accurately generating high- and low-frequency regions. Consequently, by removing the blocking effects from maximum lossy compressed images, images in which identities could be recognized are generated. This study represents a significant improvement over previous research in terms of the image resolution performance.
Keywords: JPEG, lossy compression, RESTORATION, image generation, GAN
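A hedged PyTorch sketch of the two loss terms described above: an HF loss computed on features from a pretrained VGG-16 layer and an LF loss computed on low-pass-filtered images. The chosen layer, blur operator, and weights are assumptions rather than the paper's exact configuration.

```python
# Illustrative HF/LF loss terms for a restoration GAN generator (assumed settings).
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

vgg_features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()  # an intermediate conv block
for p in vgg_features.parameters():
    p.requires_grad = False

def hf_loss(fake, real):
    # High-frequency / perceptual term: compare VGG-16 feature maps.
    return F.l1_loss(vgg_features(fake), vgg_features(real))

def lf_loss(fake, real, kernel_size=9):
    # Low-frequency term: compare blurred (low-pass) versions of the images.
    blur = lambda t: F.avg_pool2d(t, kernel_size, stride=1, padding=kernel_size // 2)
    return F.l1_loss(blur(fake), blur(real))

def generator_loss(fake, real, adv_term, w_hf=1.0, w_lf=1.0, w_adv=1e-3):
    # adv_term is the usual adversarial loss from the discriminator (not shown here).
    return w_hf * hf_loss(fake, real) + w_lf * lf_loss(fake, real) + w_adv * adv_term
```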
8. MIDNet: Deblurring Network for Material Microstructure Images
Authors: Jiaxiang Wang, Zhengyi Li, Peng Shi, Hongying Yu, Dongbai Sun. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 4, pp. 1187-1204 (18 pages)
Scanning electron microscopy (SEM) is a crucial tool in the field of materials science, providing valuable insights into the microstructural characteristics of materials. Unfortunately, SEM images often suffer from blurriness caused by improper hardware calibration or imaging automation errors, which presents challenges in analyzing and interpreting material characteristics. Consequently, rectifying the blurring of these images assumes paramount significance to enable subsequent analysis. To address this issue, we introduce a Material Images Deblurring Network (MIDNet) built upon the foundation of the Nonlinear Activation Free Network (NAFNet). MIDNet is meticulously tailored to address the blurring in images capturing the microstructure of materials. The key contributions include enhancing the NAFNet architecture for better feature extraction and representation, integrating a novel soft attention mechanism to uncover important correlations between encoder and decoder, and introducing new multi-loss functions to improve training effectiveness and overall model performance. We conduct a comprehensive set of experiments utilizing the material blurry dataset and compare them to several state-of-the-art deblurring methods. The experimental results demonstrate the applicability and effectiveness of MIDNet in the domain of deblurring material microstructure images, with a PSNR (Peak Signal-to-Noise Ratio) reaching 35.26 dB and an SSIM (Structural Similarity) of 0.946. Our dataset is available at: https://github.com/woshigui/MIDNet.
Keywords: Image deblurring, material microstructure, attention mechanism, deep learning
9. Double quantum images encryption scheme based on chaotic system
Authors: 蒋社想, 李杨, 石锦, 张茹. 《Chinese Physics B》 (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 305-320 (16 pages)
This paper explores a double quantum images representation (DNEQR) model that allows for simultaneous storage of two digital images in a quantum superposition state. Additionally, a new type of two-dimensional hyperchaotic system based on sine and logistic maps is investigated, offering a wider parameter space and better chaotic behavior compared to the sine and logistic maps. Based on the DNEQR model and the hyperchaotic system, a double quantum images encryption algorithm is proposed. Firstly, two classical plaintext images are transformed into quantum states using the DNEQR model. Then, the proposed hyperchaotic system is employed to iteratively generate pseudo-random sequences. These chaotic sequences are utilized to perform pixel value and position operations on the quantum image, resulting in changes to both pixel values and positions. Finally, the ciphertext image can be obtained by qubit-level diffusion using two XOR operations between the position-permutated image and the pseudo-random sequences. The corresponding quantum circuits are also given. Experimental results demonstrate that the proposed scheme ensures the security of the images during transmission, improves the encryption efficiency, and enhances anti-interference and anti-attack capabilities.
Keywords: double quantum images, encryption, chaotic system, pixel scrambling, XOR operation
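A toy, purely classical sketch of the encryption flow described above: a 2-D sine–logistic style map generates pseudo-random sequences, pixel positions are scrambled, and values are diffused with XOR. The map equations, seeds, and the quantum (DNEQR, qubit-level) parts are not reproduced; this is an assumed simplification.

```python
# Assumed classical illustration: chaotic keystream -> position scrambling -> XOR diffusion.
import numpy as np

def sine_logistic_sequence(n, x0=0.3, y0=0.7, a=0.9, b=3.99):
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x = np.abs(np.sin(np.pi * a * y * (1.0 - x)))   # assumed coupling of sine and logistic terms
        y = np.abs(b * x * y * (1.0 - y)) % 1.0
        xs[i], ys[i] = x, y
    return xs, ys

def encrypt(img: np.ndarray) -> np.ndarray:
    """img: uint8 grayscale image array."""
    n = img.size
    xs, ys = sine_logistic_sequence(n)
    perm = np.argsort(xs)                                # position scrambling order
    keystream = np.floor(ys * 256).astype(np.uint8)      # value-diffusion keystream
    scrambled = img.flatten()[perm]
    cipher = scrambled ^ keystream                       # XOR diffusion
    return cipher.reshape(img.shape)
```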
10. Artificial Intelligence and Computer Vision during Surgery: Discussing Laparoscopic Images with ChatGPT4—Preliminary Results
Authors: Savvas Hirides, Petros Hirides, Kouloufakou Kalliopi, Constantinos Hirides. 《Surgical Science》, 2024, Issue 3, pp. 169-181 (13 pages)
Introduction: Ultrafast recent developments in artificial intelligence (AI) have multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel, commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. In order to score the reliability of the responses received from the image-recognition bot, two corresponding scales were developed, ranging from 0 to 5. The set of images was divided into two groups, unlabeled (Group A) and labeled (Group B), and further according to the type of surgical procedure or image resolution. Results: The AI was able to correctly recognize the context of surgical-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled it scored 2.905/5 (58.1%). Phases of the procedure were commented on in detail after all successful interpretations. With rates of 4 - 5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation in question. Conclusion: Interaction between surgeon and chatbot appears to be an interesting front end for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts with commercially available software. Further development of medically oriented AI software and clinical-world awareness are expected to bring fruitful information on the topic in the years to come.
Keywords: Artificial Intelligence, SURGERY, Image Recognition, Autonomous Surgery
11. Enhancing Dense Small Object Detection in UAV Images Based on Hybrid Transformer
Authors: Changfeng Feng, Chunping Wang, Dongdong Zhang, Renke Kou, Qiang Fu. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 3, pp. 3993-4013 (21 pages)
Transformer-based models have facilitated significant advances in object detection. However, their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle (UAV) imagery. Addressing these limitations, we propose a hybrid transformer-based detector, H-DETR, and enhance it for dense small objects, leading to an accurate and efficient model. Firstly, we introduce a hybrid transformer encoder, which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently. Furthermore, we propose two novel strategies to enhance detection performance without incurring additional inference computation. Query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression. Adversarial denoising learning is a novel enhancement method inspired by adversarial learning, which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise. Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach, achieving a significant improvement in accuracy with a reduction in computational complexity. Our method achieves 31.9% and 21.1% AP on the VisDrone and UAVDT datasets, respectively, and has a faster inference speed, making it a competitive model in UAV image object detection.
Keywords: UAV images, TRANSFORMER, dense small object detection
12. Deep learning-based inpainting of saturation artifacts in optical coherence tomography images
Authors: Muyun Hu, Zhuoqun Yuan, Di Yang, Jingzhu Zhao, Yanmei Liang. 《Journal of Innovative Optical Health Sciences》 (SCIE, EI, CSCD), 2024, Issue 3, pp. 1-10 (10 pages)
Limited by the dynamic range of the detector, saturation artifacts usually occur in optical coherence tomography (OCT) imaging of highly scattering media. The available methods have difficulty removing saturation artifacts and restoring texture completely in OCT images. We propose a deep learning-based inpainting method for saturation artifacts in this paper. The generation mechanism of saturation artifacts was analyzed, and experimental and simulated datasets were built based on the mechanism. Enhanced super-resolution generative adversarial networks were trained on the clear–saturated phantom image pairs. The well-reconstructed results for experimental zebrafish and thyroid OCT images proved the method's feasibility, strong generalization, and robustness.
Keywords: Optical coherence tomography, saturation artifacts, deep learning, image inpainting
13. An Implementation of Multiscale Line Detection and Mathematical Morphology for Efficient and Precise Blood Vessel Segmentation in Fundus Images
Authors: Syed Ayaz Ali Shah, Aamir Shahzad, Musaed Alhussein, Chuan Meng Goh, Khursheed Aurangzeb, Tong Boon Tang, Muhammad Awais. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 5, pp. 2565-2583 (19 pages)
Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance. With the surge of now available big data, the speed of the algorithm becomes increasingly important, carrying almost equivalent weight to its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structure Analysis of Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits performance competitive with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing loops with vectorization. Thus the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
Keywords: Line detector, vessel detection, LOCALIZATION, mathematical morphology, image processing
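For illustration, a hedged OpenCV sketch of a basic multiscale line detector: the inverted green channel is filtered with oriented line kernels of several lengths, and the maximum of the line response minus the local mean is kept. Kernel construction, scales, and angles are assumptions; the paper combines such responses with mathematical morphology.

```python
# Assumed multiscale oriented line-detector response for vessel enhancement.
import cv2
import numpy as np

def line_kernel(length: int, angle_deg: float) -> np.ndarray:
    k = np.zeros((length, length), np.float32)
    k[length // 2, :] = 1.0                                   # horizontal line template
    M = cv2.getRotationMatrix2D(((length - 1) / 2.0, (length - 1) / 2.0), angle_deg, 1.0)
    k = cv2.warpAffine(k, M, (length, length))                # rotate to the desired orientation
    return k / max(k.sum(), 1e-6)

def multiscale_line_response(green: np.ndarray, lengths=(7, 11, 15), n_angles=12) -> np.ndarray:
    """green: uint8 green channel of a fundus image; returns a float response map."""
    img = cv2.bitwise_not(green).astype(np.float32)           # vessels appear bright after inversion
    response = np.zeros_like(img)
    for L in lengths:
        local_mean = cv2.blur(img, (L, L))
        for a in np.linspace(0, 180, n_angles, endpoint=False):
            filtered = cv2.filter2D(img, -1, line_kernel(L, a)) - local_mean
            response = np.maximum(response, filtered)         # keep the strongest oriented response
    return response
```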
14. Nonlinear Registration of Brain Magnetic Resonance Images with Cross Constraints of Intensity and Structure
Authors: Han Zhou, Hongtao Xu, Xinyue Chang, Wei Zhang, Heng Dong. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 5, pp. 2295-2313 (19 pages)
Many deep learning-based registration methods rely on a single-stream encoder-decoder network for computing deformation fields between 3D volumes. However, these methods often lack constraint information and overlook semantic consistency, limiting their performance. To address these issues, we present a novel approach for medical image registration called the Dual-VoxelMorph, featuring a dual-channel cross-constraint network. This innovative network utilizes both intensity and segmentation images, which share identical semantic information and feature representations. Two encoder-decoder structures calculate deformation fields for intensity and segmentation images, as generated by the dual-channel cross-constraint network. This design facilitates bidirectional communication between grayscale and segmentation information, enabling the model to better learn the corresponding grayscale and segmentation details of the same anatomical structures. To ensure semantic and directional consistency, we introduce constraints and apply the cosine similarity function to enhance semantic consistency. Evaluation on four public datasets demonstrates superior performance compared to the baseline method, achieving Dice scores of 79.9%, 64.5%, 69.9%, and 63.5% for OASIS-1, OASIS-3, LPBA40, and ADNI, respectively.
Keywords: Medical image registration, cross constraint, semantic consistency, directional consistency, DUAL-CHANNEL
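A minimal sketch of a cosine-similarity consistency term of the kind described above, encouraging the intensity and segmentation branches to produce directionally consistent feature maps or deformation fields. Tensor shapes and the exact quantities compared are assumptions.

```python
# Illustrative cosine-similarity consistency loss between two branches (assumed shapes).
import torch
import torch.nn.functional as F

def cosine_consistency_loss(feat_intensity: torch.Tensor, feat_segmentation: torch.Tensor) -> torch.Tensor:
    """Both tensors: (N, C, D, H, W); loss is 1 minus the mean channel-wise cosine similarity."""
    cos = F.cosine_similarity(feat_intensity, feat_segmentation, dim=1)
    return (1.0 - cos).mean()
```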
15. Using Cross Entropy as a Performance Metric for Quantifying Uncertainty in DNN Image Classifiers: An Application to Classification of Lung Cancer on CT Images
Authors: Eri Matsuyama, Masayuki Nishiki, Noriyuki Takahashi, Haruyuki Watanabe. 《Journal of Biomedical Science and Engineering》, 2024, Issue 1, pp. 1-12 (12 pages)
Cross entropy is a measure in machine learning and deep learning that assesses the difference between predicted and actual probability distributions. In this study, we propose cross entropy as a performance evaluation metric for image classifier models and apply it to the CT image classification of lung cancer. A convolutional neural network is employed as the deep neural network (DNN) image classifier, with the residual network (ResNet) 50 chosen as the DNN architecture. The image data used comprise a lung CT image set. Two classification models are built from datasets with varying amounts of data, and lung cancer is categorized into four classes using 10-fold cross-validation. Furthermore, we employ t-distributed stochastic neighbor embedding to visually explain the data distribution after classification. Experimental results demonstrate that cross entropy is a highly useful metric for evaluating the reliability of image classifier models. It is noted that, for a more comprehensive evaluation of model performance, combining it with other evaluation metrics is considered essential.
Keywords: Cross Entropy, Performance Metrics, DNN Image Classifiers, Lung Cancer Prediction, Uncertainty
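A minimal sketch of cross entropy used as a post-hoc evaluation metric as proposed above: average the negative log-probability that the classifier assigns to the true class. Variable names and the epsilon clamp are illustrative assumptions.

```python
# Average cross entropy over a test set, given softmax outputs and true labels.
import numpy as np

def mean_cross_entropy(probs: np.ndarray, labels: np.ndarray, eps: float = 1e-12) -> float:
    """probs: (N, C) softmax outputs; labels: (N,) integer class indices."""
    p_true = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return float(-np.mean(np.log(p_true)))

# A confident, correct model yields low cross entropy; an uncertain or wrong one yields high values.
probs = np.array([[0.90, 0.05, 0.03, 0.02],
                  [0.40, 0.30, 0.20, 0.10]])
labels = np.array([0, 2])
print(mean_cross_entropy(probs, labels))
```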
16. Enhancing the Quality of Low-Light Printed Circuit Board Images through Hue, Saturation, and Value Channel Processing and Improved Multi-Scale Retinex
Authors: Huichao Shang, Penglei Li, Xiangqian Peng. 《Journal of Computer and Communications》, 2024, Issue 1, pp. 1-10 (10 pages)
To address the issue of deteriorated PCB image quality in the quality inspection process due to insufficient or uneven lighting, we proposed an image enhancement fusion algorithm based on different color spaces. Firstly, an improved MSRCR method was employed for brightness enhancement of the original image. Next, the color space of the original image was transformed from RGB to HSV, followed by processing the S-channel image using bilateral filtering and contrast stretching algorithms. The V-channel image was subjected to brightness enhancement using adaptive Gamma and CLAHE algorithms. Subsequently, the processed image was transformed back from HSV to the RGB color space. Finally, the images processed by the two algorithms were fused to create a new RGB image, and color restoration was performed on the fused image. Comparative experiments with other methods indicated that the contrast of the image was optimized, texture features were more abundantly preserved, brightness levels were significantly improved, and color distortion was prevented effectively, thus enhancing the quality of low-lit PCB images.
Keywords: Low-Lit PCB images, Spatial Transformation, Image Enhancement, Image Fusion, HSV
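A hedged OpenCV sketch of the HSV branch described above: bilateral filtering and contrast stretching on the S channel, adaptive gamma plus CLAHE on the V channel, then conversion back to an RGB-space image. The MSRCR branch, the final fusion, and color restoration are omitted, and all parameters are assumptions.

```python
# Illustrative HSV-channel processing for a low-light PCB image (assumed parameters).
import cv2
import numpy as np

def enhance_low_light_pcb_hsv(bgr: np.ndarray) -> np.ndarray:
    """bgr: uint8 color image as loaded by cv2.imread."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # S channel: denoise with bilateral filtering, then stretch contrast to the full range.
    s = cv2.bilateralFilter(s, d=9, sigmaColor=75, sigmaSpace=75)
    s = cv2.normalize(s, None, 0, 255, cv2.NORM_MINMAX)

    # V channel: simple adaptive gamma (brightens dark images), then CLAHE.
    mean_v = np.clip(v.mean() / 255.0, 1e-3, 0.999)
    gamma = np.log(0.5) / np.log(mean_v)
    v = np.uint8(255.0 * (v / 255.0) ** gamma)
    v = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(v)

    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)
```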
17. Using MsfNet to Predict the ISUP Grade of Renal Clear Cell Carcinoma in Digital Pathology Images
Authors: Kun Yang, Shilong Chang, Yucheng Wang, Minghui Wang, Jiahui Yang, Shuang Liu, Kun Liu, Linyan Xue. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 1, pp. 393-410 (18 pages)
Clear cell renal cell carcinoma (ccRCC) represents the most frequent form of renal cell carcinoma (RCC), and accurate International Society of Urological Pathology (ISUP) grading is crucial for prognosis and treatment selection. This study presents a new deep network called Multi-scale Fusion Network (MsfNet), which aims to enhance automatic ISUP grading of ccRCC with digital histopathology images. The MsfNet overcomes the limitations of traditional ResNet50 by multi-scale information fusion and dynamic allocation of channel quantity. The model was trained and tested using 90 Hematoxylin and Eosin (H&E) stained whole slide images (WSIs), which were all cropped into 320×320-pixel patches at 40× magnification. MsfNet achieved a micro-averaged area under the curve (AUC) of 0.9807 and a macro-averaged AUC of 0.9778 on the test dataset. Gradient-weighted Class Activation Mapping (Grad-CAM) visually demonstrated MsfNet's ability to distinguish and highlight abnormal areas more effectively than ResNet50. The t-Distributed Stochastic Neighbor Embedding (t-SNE) plot indicates our model can efficiently extract critical features from images, reducing the impact of noise and redundant information. The results suggest that MsfNet offers accurate ISUP grading of ccRCC in digital images, emphasizing the potential of AI-assisted histopathological systems in clinical practice.
Keywords: Renal cell carcinoma, computer-aided diagnosis, pathology image, deep learning, machine learning
18. U-Net Inspired Deep Neural Network-Based Smoke Plume Detection in Satellite Images
Authors: Ananthakrishnan Balasundaram, Ayesha Shaik, Japmann Kaur Banga, Aman Kumar Singh. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 4, pp. 779-799 (21 pages)
Industrial activities, through the human-induced release of Green House Gas (GHG) emissions, have been identified as the primary cause of global warming. Accurate and quantitative monitoring of these emissions is essential for a comprehensive understanding of their impact on the Earth's climate and for effectively enforcing emission regulations at a large scale. This work examines the feasibility of detecting and quantifying industrial smoke plumes using freely accessible geo-satellite imagery. Existing systems have many lagging factors, such as limitations in accuracy, robustness, and efficiency, and these factors hinder their effectiveness in supporting a timely response to industrial fires. In this work, grayscale images are utilized instead of traditional color images for smoke plume detection. The dataset was trained through a ResNet-50 model for classification and a U-Net model for segmentation. The dataset consists of images gathered by the European Space Agency's Sentinel-2 satellite constellation from a selection of industrial sites. The acquired images predominantly capture scenes of industrial locations, some of which exhibit active smoke plume emissions. The performance of the above-mentioned techniques and models is represented by their accuracy and IOU (Intersection-over-Union) metric. The models are first trained on the basic RGB images, where classification using the ResNet-50 model results in an accuracy of 94.4%, and segmentation using the U-Net model yields an IOU metric of 0.5 and an accuracy of 94%, which leads to the detection of the exact patches where the smoke plume has occurred. This work then trained the classification model on grayscale images, achieving an improved accuracy of 96.4%.
Keywords: Smoke plume, ResNet-50, U-Net, geo satellite images, early warning, global monitoring
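A simple sketch of the IOU (Intersection-over-Union) metric quoted above for scoring the U-Net segmentation masks; binary smoke-plume masks are assumed.

```python
# Intersection-over-Union for a predicted vs. ground-truth binary mask.
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return float((intersection + eps) / (union + eps))
```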
19. Mapping soil organic matter in cultivated land based on multi-year composite images on monthly time scales
Authors: Jie Song, Dongsheng Yu, Siwei Wang, Yanhe Zhao, Xin Wang, Lixia Ma, Jiangang Li. 《Journal of Integrative Agriculture》 (SCIE, CAS, CSCD), 2024, Issue 4, pp. 1393-1408 (16 pages)
Rapid and accurate acquisition of soil organic matter (SOM) information in cultivated land is important for sustainable agricultural development and carbon balance management. This study proposed a novel approach to predict SOM with high accuracy using multi-year synthetic remote sensing variables on a monthly scale. We obtained 12 monthly synthetic Sentinel-2 images covering the study area from 2016 to 2021 through the Google Earth Engine (GEE) platform, and reflectance bands and vegetation indices were extracted from these composite images. Then the random forest (RF), support vector machine (SVM) and gradient boosting regression tree (GBRT) models were tested to investigate the difference in SOM prediction accuracy under different combinations of monthly synthetic variables. Results showed that, firstly, all monthly synthetic spectral bands of Sentinel-2 showed a significant correlation with SOM (P<0.05) for the months of January, March, April, October, and November. Secondly, in terms of single-month composite variables, the prediction accuracy was relatively poor, with the highest R² value of 0.36 being observed in January. When monthly synthetic environmental variables were grouped in accordance with the four quarters of the year, the first quarter and the fourth quarter showed good performance, and any combination of three quarters was similar in estimation accuracy. The overall best performance was observed when all monthly synthetic variables were incorporated into the models. Thirdly, among the three models compared, the RF model was consistently more accurate than the SVM and GBRT models, achieving an R² value of 0.56. Except for band 12 in December, the importance of the remaining bands did not exhibit significant differences. This research offers a new attempt to map SOM with high accuracy and fine spatial resolution based on monthly synthetic Sentinel-2 images.
Keywords: soil organic matter, Sentinel-2, monthly synthetic images, machine learning model, spatial prediction
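A hedged scikit-learn sketch of the model comparison described above, using a random forest regressor on monthly composite variables to predict SOM and reporting R². The arrays here are random placeholders standing in for the Sentinel-2 features and measured SOM values.

```python
# Illustrative SOM regression with a random forest on monthly composite features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# X: (n_samples, n_features) monthly synthetic bands and vegetation indices; y: measured SOM.
X = np.random.rand(500, 12 * 14)   # placeholder: 12 months x 14 variables per sample
y = np.random.rand(500) * 40.0     # placeholder SOM values (e.g., g/kg)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("R2:", r2_score(y_test, rf.predict(X_test)))
print("Top 5 features by importance:", np.argsort(rf.feature_importances_)[::-1][:5])
```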
20. Weakly Supervised Network with Scribble-Supervised and Edge-Mask for Road Extraction from High-Resolution Remote Sensing Images
Authors: Supeng Yu, Fen Huang, Chengcheng Fan. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 4, pp. 549-562 (14 pages)
Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research endeavors utilize weakly supervised methods. These approaches aim to reduce the expenses associated with annotation by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble-supervised and edge-mask (WSSE-net). This network is a three-branch network architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the historical flaw of pseudo-labels that are not updated with network training, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution demonstrates efficient operation by simultaneously considering both edge-mask aid and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
Keywords: Semantic segmentation, road extraction, weakly supervised learning, scribble supervision, remote sensing image
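A toy sketch of the dynamic pseudo-label update described above: the current network prediction is blended, mixup-style, with the previous pseudo-label so that pseudo-labels keep evolving during training. The blending weight and per-pixel probability format are assumptions.

```python
# Illustrative mixup-style pseudo-label update for weakly supervised road extraction.
import numpy as np

def update_pseudo_label(prev_pseudo: np.ndarray, current_pred: np.ndarray, lam: float = 0.7) -> np.ndarray:
    """prev_pseudo, current_pred: per-pixel road probabilities in [0, 1] of the same shape."""
    return lam * current_pred + (1.0 - lam) * prev_pseudo

# Example: the pseudo-label drifts toward the newer prediction while retaining history.
prev = np.zeros((4, 4))
pred = np.full((4, 4), 0.9)
print(update_pseudo_label(prev, pred))
```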