Journal Articles
15,518 articles found
1. Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module
Authors: HU Zhentao (胡振涛), HU Chonghao, +1 more author, YANG Haoran, SHUAI Weiwei. High Technology Letters, EI, CAS, 2024, Issue 1, pp. 23-30 (8 pages)
Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, advanced existing approaches employ a multi-generator mechanism to model the different domain mappings, which results in inefficient neural network training and mode collapse, limiting the diversity of the generated images. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, a domain code is first introduced to explicitly control the different generation tasks. Second, the framework incorporates the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. Qualitative and quantitative experiments on several unpaired benchmark image translation datasets demonstrate the benefits of the proposed method over existing techniques. Overall, the experimental results show that the proposed method is versatile and scalable.
Keywords: multi-modal image translation, generative adversarial network (GAN), squeeze-and-excitation (SE) mechanism, feature attention (FA) module
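The squeeze-and-excitation mechanism named in this entry amounts to channel reweighting: global average pooling produces per-channel statistics, two small fully connected layers with a sigmoid produce per-channel gates, and the feature map is rescaled by those gates. A minimal NumPy sketch (the reduction ratio and random weights are illustrative assumptions, not this paper's configuration):

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Apply an SE-style channel gate to a feature map x of shape (C, H, W)."""
    z = x.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)              # excitation FC1 + ReLU -> (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # excitation FC2 + sigmoid gates -> (C,)
    return x * s[:, None, None]              # rescale each channel by its gate

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                      # r is an assumed reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1  # illustrative, untrained weights
w2 = rng.standard_normal((C, C // r)) * 0.1
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because each gate lies in (0, 1), the block can only attenuate channels, never amplify them, which is what lets the network emphasize informative channels relative to the rest.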
2. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, Issue 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can cause a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on MobileNetV1 is proposed. The network consists of two main subnetworks. The first uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, yielding a lightweight and computationally inexpensive network; the depthwise dilated convolution in the DDSC layer also effectively expands the receptive field of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that processes the input feature map through parallel multi-resolution branches to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet, image classification, lightweight convolutional neural network, depthwise dilated separable convolution, hierarchical multi-scale feature fusion
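The parameter savings of depthwise separable convolution described in this entry are easy to verify arithmetically: a standard k×k convolution needs k·k·C_in·C_out weights, while the depthwise-plus-pointwise factorization needs only k·k·C_in + C_in·C_out. A quick sketch (the layer sizes are illustrative, not this paper's architecture; bias terms are ignored):

```python
def conv_params(k, c_in, c_out):
    # standard k x k convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # one k x k filter per input channel, then a 1 x 1 pointwise convolution
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 256)                 # 294912 weights
dws = depthwise_separable_params(3, 128, 256)  # 1152 + 32768 = 33920 weights
print(std, dws, round(dws / std, 4))           # 294912 33920 0.115
```

Dilation changes the spacing of the k×k taps, enlarging the receptive field without adding any parameters, which is why the DDSC layer gains context at no extra cost.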
3. Fine-Grained Features for Image Captioning
Authors: Mengyue Shao, Jie Feng, +2 more authors, Jie Wu, Haixiang Zhang, Yayu Zheng. Computers, Materials & Continua, SCIE, EI, 2023, Issue 6, pp. 4697-4712 (16 pages)
Image captioning involves two major modalities (image and sentence) and converts a given image into language that adheres to its visual semantics. Almost all methods first extract image features to reduce the difficulty of visual semantic embedding and then use a caption model to generate fluent sentences. Convolutional neural networks (CNNs) are often used to extract image features in image captioning, and the use of object detection networks to extract region features has achieved great success. However, the region features retrieved this way are object-level and, because of the detection model's limitations, do not capture fine-grained details. We offer an approach that addresses this issue by fusing fine-grained features with region features to generate more accurate captions. First, we extract fine-grained features using a panoptic segmentation algorithm. Second, we propose two fusion methods and compare their results; an X-Linear Attention Network (X-LAN) serves as the foundation for both. According to experimental findings on the COCO dataset, the two-branch fusion approach is superior. Notably, on the COCO Karpathy test split, CIDEr is increased to 134.3% compared to the baseline, highlighting the effectiveness and viability of our method.
Keywords: image captioning, region features, fine-grained features, fusion
4. Learning Noise-Assisted Robust Image Features for Fine-Grained Image Retrieval
Authors: Vidit Kumar, Hemant Petwal, +1 more author, Ajay Krishan Gairola, Pareshwar Prasad Barmola. Computer Systems Science & Engineering, SCIE, EI, 2023, Issue 9, pp. 2711-2724 (14 pages)
Fine-grained image search is one of the most challenging tasks in computer vision; it aims to retrieve images similar at the fine-grained level to a given query image. The key objective is to learn discriminative fine-grained features by training deep models so that similar images are clustered and dissimilar images are separated in a low-dimensional embedding space. Previous works primarily focused on defining local structure loss functions, such as triplet loss and pairwise loss. However, training with these approaches takes a long time and yields poor accuracy, and the representations learned through them tend to tighten up in the embedding space and lose generalizability to unseen classes. This paper proposes a noise-assisted representation learning method for fine-grained image retrieval to mitigate these issues. In the proposed work, class manifold learning is performed in which positive pairs are created by a noise insertion operation instead of tightening class clusters, and other instances within the same cluster are treated as negatives. A loss function is then defined to penalize cases where the distance between instances of the same class becomes too small relative to the noise pair of that class in the embedding space. The proposed approach is validated on the CARS-196 and CUB-200 datasets and achieves better retrieval results (85.38% recall@1 for CARS-196 and 70.13% recall@1 for CUB-200) than other existing methods.
Keywords: convolutional network, zero-shot learning, fine-grained image retrieval, image representation, image retrieval, intra-class diversity, feature learning
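The noise-insertion idea in this entry can be sketched as follows: each embedding is paired with a noise-perturbed copy of itself as the positive, and the loss penalizes same-class neighbors that come closer to the anchor than that noise pair. This is a loose illustration of the mechanism, not the paper's exact loss; the hinge form and the noise scale are assumptions.

```python
import numpy as np

def noise_assisted_loss(embeddings, labels, noise_scale=0.1, rng=None):
    """Penalize same-class pairs that sit closer than each anchor's noise pair."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(embeddings.shape) * noise_scale
    positives = embeddings + noise                           # noise-inserted positives
    d_pos = np.linalg.norm(embeddings - positives, axis=1)   # anchor-to-noise distances
    loss, n = 0.0, len(embeddings)
    for i in range(n):
        for j in range(n):
            if i != j and labels[i] == labels[j]:
                d_ij = np.linalg.norm(embeddings[i] - embeddings[j])
                # hinge: only penalize classmates closer than the noise pair
                loss += max(0.0, d_pos[i] - d_ij)
    return loss / n

emb = np.random.default_rng(1).standard_normal((6, 8))
labels = np.array([0, 0, 0, 1, 1, 1])
print(noise_assisted_loss(emb, labels))
```

The design intent matches the abstract's claim: rather than collapsing each class to a point, the noise pair sets a floor on how tight the class cluster is allowed to become, preserving intra-class diversity.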
5. An Intelligent Sensor Data Preprocessing Method for OCT Fundus Image Watermarking Using an RCNN
Authors: Jialun Lin, Qiong Chen. Computer Modeling in Engineering & Sciences, SCIE, EI, 2024, Issue 2, pp. 1549-1561 (13 pages)
Watermarks can provide reliable and secure copyright protection for optical coherence tomography (OCT) fundus images, and effective image segmentation helps promote OCT image watermarking. However, OCT images contain a large amount of low-quality data, which seriously degrades the performance of segmentation methods. This paper therefore proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network (RCNN). First, a rough-set-based feature discretization module is designed to preprocess the input data. Second, a dual attention mechanism over feature channels and spatial regions is added to the CNN so that the model can adaptively select important information for fusion. Finally, a refinement module that enhances the extraction of multi-scale information is added to improve edge accuracy in segmentation. RCNN is compared with CE-Net and MultiResUNet on 83 gold-standard 3D retinal OCT data samples. The average Dice similarity coefficient (DSC) obtained by RCNN is 6% higher than that of CE-Net, and the average 95th-percentile Hausdorff distance (95HD) and average symmetric surface distance (ASD) obtained by RCNN are 32.4% and 33.3% lower, respectively, than those of MultiResUNet. We also evaluate the effect of feature discretization, analyze the initial learning rate of RCNN, and conduct ablation experiments with four different models. The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images, providing strong support for its application in medical image watermarking.
Keywords: watermarks, image segmentation, rough convolutional neural network, attention mechanism, feature discretization
6. DCFNet: An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
Authors: Chengzhang Zhu, Renmao Zhang, +5 more authors, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences, SCIE, EI, 2024, Issue 7, pp. 1103-1128 (26 pages)
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress; however, the current integration of CNN and Transformer technology has two key limitations. First, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Second, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet) that efficiently combines the power of the Swin Transformer and a CNN to generate complementary global and local features. We then design a Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. Within the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) aggregates the prominent feature regions of the two branches from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the channel-wise significance between the up-sampled features and the features generated by the FCF module to enhance decoding detail. Experimental results demonstrate that DCFNet achieves superior segmentation accuracy and a higher level of competitiveness compared with other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial early diagnoses of lesion areas.
Keywords: convolutional neural networks, Swin Transformer, dual branch, medical image segmentation, feature cross fusion
7. DGConv: A Novel Convolutional Neural Network Approach for Weld Seam Depth Image Detection
Authors: Pengchao Li, Fang Xu, +3 more authors, Jintao Wang, Haibing Guo, Mingmin Liu, Zhenjun Du. Computers, Materials & Continua, SCIE, EI, 2024, Issue 2, pp. 1755-1771 (17 pages)
We propose a novel image segmentation algorithm to tackle the challenge of limited recognition and segmentation performance when identifying weld seam images during robotic intelligent operations. First, to enhance the capability of deep neural networks to extract geometric attributes from depth images, we developed a novel deep geometric convolution operator (DGConv), which is used to construct a deep local geometric feature extraction module that more comprehensively explores the intrinsic geometric information in depth images. Second, we integrate the proposed deep geometric feature module with a Fully Convolutional Network (FCN8) to establish a high-performance deep neural network algorithm tailored for depth image segmentation, and we enhance the FCN8 detection head by separating the segmentation and classification processes, which significantly boosts the network's overall detection capability. Third, to comprehensively assess the proposed algorithm and its applicability in real-world industrial settings, we curated a line-scan image dataset featuring weld seams, named the Standardized Linear Depth Profile (SLDP) dataset, collected from actual industrial sites where autonomous robots operate. Experiments on the SLDP dataset achieve an average accuracy of 92.7%, a remarkable improvement over the prior method on the same dataset. Moreover, we have successfully deployed the proposed algorithm in real industrial environments, fulfilling the prerequisites of unmanned robot operations.
Keywords: weld image detection, deep learning, semantic segmentation, depth map, geometric feature extraction
8. Facial Image-Based Autism Detection: A Comparative Study of Deep Neural Network Classifiers
Authors: Tayyaba Farhat, Sheeraz Akram, +3 more authors, Hatoon S. AlSagri, Zulfiqar Ali, Awais Ahmad, Arfan Jaffar. Computers, Materials & Continua, SCIE, EI, 2024, Issue 1, pp. 105-126 (22 pages)
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources such as Pakistan. This study conducts an extensive comparative analysis of machine learning classifiers for ASD detection from facial images to identify an accurate and cost-effective solution tailored to the local context. The research experiments with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the Orange machine learning tool is employed to evaluate classifier performance, and its automated image processing capabilities are utilized. The findings establish VGG16 as the most effective classifier under a 5-fold cross-validation approach. Specifically, VGG16 with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a validation accuracy of 99% and a testing accuracy of 87%, with an F1 score of 88%, precision of 85%, and recall of 90% on test images. To validate its practical applicability, the study further tests the VGG16 model on a dataset sourced from autism centers in Pakistan, obtaining an accuracy of 85%, which reaffirms the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance and emphasizes the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
Keywords: autism, Autism Spectrum Disorder (ASD), disease segmentation, feature optimization, deep learning models, facial image classification
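The 5-fold cross-validation protocol used in this entry is generic: the data are split into five disjoint folds, each fold serves once as the validation set, and the scores are averaged. A minimal sketch with a placeholder scorer (`evaluate` here is a toy majority-label stand-in, not the paper's VGG16 pipeline):

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle indices and split them into k nearly equal disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, evaluate, k=5):
    """Average the score of `evaluate` over k train/validation splits."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(evaluate(X, y, train_idx, val_idx))
    return float(np.mean(scores))

def evaluate(X, y, train_idx, val_idx):
    # toy scorer: accuracy of always predicting the training-set majority label
    majority = np.bincount(y[train_idx]).argmax()
    return float(np.mean(y[val_idx] == majority))

X = np.zeros((100, 3))                  # placeholder features
y = np.array([0] * 70 + [1] * 30)       # 70/30 label split
print(cross_validate(X, y, evaluate))   # close to 0.7 for this toy data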
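(placeholder removed)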
9. Study on Image Recognition Algorithm for Residual Snow and Ice on Photovoltaic Modules
Authors: Yongcan Zhu, Jiawen Wang, +3 more authors, Ye Zhang, Long Zhao, Botao Jiang, Xinbo Huang. Energy Engineering, EI, 2024, Issue 4, pp. 895-911 (17 pages)
The accumulation of snow and ice on photovoltaic (PV) modules can have a detrimental impact on power generation, reducing efficiency for prolonged periods. It is therefore important to develop an intelligent system capable of accurately assessing the extent of snow and ice coverage on PV modules. To address this issue, the article proposes an ice and snow recognition algorithm that segments the ice and snow areas within the collected images. The algorithm also analyzes the morphological characteristics of ice and snow coverage on PV modules, establishing a residual ice and snow recognition process that refines identification using both a circumscribed-ellipse method and a pixel statistical method. The effectiveness of the proposed algorithm is validated through extensive testing on images of isolated and continuous snow areas, and the results demonstrate its accuracy and reliability in identifying and quantifying residual snow and ice on PV modules. This capability enables predictions of power generation efficiency and facilitates efficient PV maintenance during winter conditions characterized by snow and ice; by proactively managing snow and ice coverage, PV power plants can optimize energy production and minimize downtime, ensuring a sustainable and reliable renewable energy supply.
Keywords: photovoltaic (PV) module, residual snow and ice, snow detection, feature extraction, image processing
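The pixel statistical step described in this entry reduces to counting mask pixels: once the snow region has been segmented, the coverage ratio is the fraction of module pixels classified as snow. A toy sketch with a fixed brightness threshold (the threshold value and the synthetic image are illustrative assumptions, not the paper's segmentation):

```python
import numpy as np

def snow_coverage_ratio(gray_image, threshold=200):
    """Fraction of pixels at least as bright as `threshold`, taken as snow/ice."""
    snow_mask = gray_image >= threshold
    return float(snow_mask.mean())

# synthetic 100x100 "module": top 30 rows bright (snow), the rest dark
img = np.full((100, 100), 60, dtype=np.uint8)
img[:30, :] = 230
print(snow_coverage_ratio(img))  # 0.3
```

In practice the mask would come from the paper's segmentation output rather than a global threshold, but the coverage statistic itself is computed the same way.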
10. Image Retrieval with Text Manipulation by Local Feature Modification (Cited by 1)
Authors: ZHA Jianhong (查剑宏), YAN Cairong (燕彩蓉), +1 more author, ZHANG Yanting (张艳婷), WANG Jun (王俊). Journal of Donghua University (English Edition), CAS, 2023, Issue 4, pp. 404-409 (6 pages)
The demand for image retrieval with text manipulation exists in many fields, such as e-commerce and Internet search. Most researchers use deep metric learning methods to calculate the similarity between the query and candidate images by fusing the global feature of the query image with the text feature. However, the text usually corresponds to a local feature of the query image rather than the global feature. In this paper we therefore propose a framework for image retrieval with text manipulation by local feature modification (LFM-IR), which can focus on the related image regions and attributes and perform the modification. A spatial attention module and a channel attention module are designed to realize the semantic mapping between image and text. We achieve excellent performance on three benchmark datasets, namely Color-Shape-Size (CSS), Massachusetts Institute of Technology (MIT) States, and Fashion200K (+8.3%, +0.7%, and +4.6% in R@1).
Keywords: image retrieval, text manipulation, attention, local feature modification
11. CFM-UNet: A Joint CNN and Transformer Network via Cross Feature Modulation for Remote Sensing Images Segmentation (Cited by 1)
Authors: Min WANG, Peidong WANG. Journal of Geodesy and Geoinformation Science, CSCD, 2023, Issue 4, pp. 40-47 (8 pages)
Semantic segmentation methods based on CNNs have made great progress, but some shortcomings remain in their application to remote sensing image segmentation; in particular, the small receptive field cannot effectively capture global context. To solve this problem, this paper proposes a hybrid model based on ResNet50 and the Swin Transformer to directly capture long-range dependencies, fusing features through a Cross Feature Modulation Module (CFMM). Experimental results on two publicly available datasets, Vaihingen and Potsdam, reach mIoU of 70.27% and 76.63%, respectively; thus, CFM-UNet maintains high segmentation performance compared with other competitive networks.
Keywords: remote sensing images, semantic segmentation, Swin Transformer, feature modulation module
12. Transformation of MRI Images to Three-Level Color Spaces for Brain Tumor Classification Using Deep-Net
Author: Fadl Dahan. Intelligent Automation & Soft Computing, 2024, Issue 2, pp. 381-395 (15 pages)
In the domain of medical imaging, the accurate detection and classification of brain tumors is very important. This study introduces an advanced method for identifying camouflaged brain tumors within images. Our proposed model consists of three steps: feature extraction, feature fusion, and classification. The core of the model is a feature extraction framework that combines color-transformed images with deep learning techniques using the ResNet50 convolutional neural network (CNN) architecture. The focus is on extracting robust features from MRI images, particularly emphasizing weighted-average features extracted from the first convolutional layer, which are known for their discriminative power. To enhance model robustness, we introduce a novel feature fusion technique based on the Marine Predator Algorithm (MPA), which is inspired by the hunting behavior of marine predators and has shown promise in optimizing complex problems. By combining color transformations, deep learning, and feature fusion via MPA, the proposed methodology accurately classifies and detects brain tumors in camouflage images, achieving an accuracy of 98.72% on a more complex dataset and surpassing existing state-of-the-art methods. The importance of this research lies in its potential to advance medical image analysis, particularly brain tumor diagnosis, where early diagnosis and accurate classification are critical for improved patient outcomes.
Keywords: camouflaged brain tumor, image classification, weighted convolutional features, CNN, ResNet50
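The weighted-average feature idea in this entry can be illustrated simply: given the stack of feature maps produced by a first convolutional layer, a weighted average across the channel axis collapses them into one map. A NumPy sketch (the uniform weights are placeholders, not the paper's learned weighting):

```python
import numpy as np

def weighted_average_features(feature_maps, weights):
    """Collapse (C, H, W) feature maps into one (H, W) map by channel-weighted average."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize so the weights sum to 1
    return np.tensordot(w, feature_maps, axes=(0, 0))

rng = np.random.default_rng(0)
fmaps = rng.standard_normal((64, 56, 56))        # e.g. 64 first-layer activation maps
w = np.ones(64)                                  # placeholder: uniform weighting
fused = weighted_average_features(fmaps, w)
print(fused.shape)  # (56, 56)
```

With uniform weights this reduces to the plain channel mean; a non-uniform weighting would let more discriminative maps dominate the fused feature.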
13. A Visual Indoor Localization Method Based on Efficient Image Retrieval
Authors: Mengyan Lyu, Xinxin Guo, +1 more author, Kunpeng Zhang, Liye Zhang. Journal of Computer and Communications, 2024, Issue 2, pp. 47-66 (20 pages)
Indoor visual localization, which uses camera imagery to compute the user's pose, is a core component of Augmented Reality (AR) and Simultaneous Localization and Mapping (SLAM). Existing indoor localization technologies generally rely on scene-specific 3D representations or are trained on specific datasets, making it challenging to balance accuracy and cost when applied to new scenes. To address this issue, this paper proposes a universal indoor visual localization method based on efficient image retrieval. First, a Multi-Layer Perceptron (MLP) aggregates features from intermediate layers of a convolutional neural network to obtain a global representation of the image, ensuring accurate and rapid retrieval of reference images. Next, a new mechanism using Random Sample Consensus (RANSAC) is designed to resolve the relative pose ambiguity caused by essential matrix decomposition in the five-point method. Finally, the absolute pose of the query image is computed, achieving indoor user pose estimation. The proposed method is simple, flexible, and generalizes well across scenes. Experimental results show positioning errors of 0.09 m and 2.14° on the 7Scenes dataset and 0.15 m and 6.37° on the 12Scenes dataset, convincingly illustrating the method's strong performance.
Keywords: visual indoor positioning, feature point matching, image retrieval, position calculation, five-point method
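The RANSAC mechanism referenced in this entry follows a generic hypothesize-and-verify loop: sample a minimal point set, fit a candidate model, count inliers, and keep the best candidate. The sketch below applies the loop to 2-D line fitting purely to illustrate the consensus logic; the paper's actual model is the five-point essential matrix, which is not reproduced here.

```python
import numpy as np

def ransac_line(points, iters=100, tol=1e-3, seed=0):
    """Fit a line to 2-D points by RANSAC; return ((point, normal), inlier_count)."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)  # minimal sample
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm < 1e-12:
            continue                                  # degenerate sample, skip
        n = np.array([-d[1], d[0]]) / norm            # unit normal of candidate line
        dist = np.abs((points - p) @ n)               # point-to-line distances
        inliers = int(np.sum(dist < tol))             # size of the consensus set
        if inliers > best_inliers:
            best_model, best_inliers = (p, n), inliers
    return best_model, best_inliers

rng = np.random.default_rng(1)
xs = np.linspace(0, 1, 50)
line_pts = np.stack([xs, 2 * xs + 1], axis=1)         # 50 exact inliers on y = 2x + 1
outliers = rng.uniform(-5, 5, size=(10, 2))           # 10 gross outliers
model, inliers = ransac_line(np.vstack([line_pts, outliers]))
print(inliers)
```

The same loop structure carries over to relative pose: swap the two-point line fit for a five-point essential-matrix solver and the point-to-line distance for an epipolar residual.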
14. Research on Detection Technology of Micro-Components on Circuit Board Based on Digital Image Processing
Authors: Aibin Tang, Yi Liu, +1 more author, Chunyin Liu, Libin Yang. Journal of Electronic Research and Application, 2024, Issue 3, pp. 230-233 (4 pages)
To address the stability of circuit board images during acquisition, this paper achieves accurate registration between the image to be registered and a standard image based on the SIFT feature operator and the RANSAC algorithm. A device detection model and dataset are then established based on Faster R-CNN. Finally, the number of training iterations is continuously optimized, and when the Faster R-CNN loss function converges, the device identification result is obtained.
Keywords: tiny device recognition, image registration, SIFT feature operator, RANSAC algorithm, Faster R-CNN
15. Image Feature Extraction and Matching of Augmented Solar Images in Space Weather
Authors: WANG Rui, BAO Lili, CAI Yanxia. Chinese Journal of Space Science (空间科学学报), CAS, CSCD, Peking University Core, 2023, Issue 5, pp. 840-852 (13 pages)
Augmented solar images were used to study the adaptability of four representative image feature extraction and matching algorithms in the space weather domain: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the binary robust invariant scalable keypoints (BRISK) algorithm, and the oriented FAST and rotated BRIEF (ORB) algorithm. The performance of these algorithms was estimated in terms of matching accuracy, feature point richness, and running time. The experiments showed that no algorithm achieved high accuracy while keeping a low running time, so none is suitable on its own for feature extraction and matching of augmented solar images. To solve this problem, an improved method was proposed that uses two-frame matching to combine the accuracy advantage of SIFT with the speed advantage of ORB. The improved method and the four representative algorithms were then applied to augmented solar images. The application experiments proved that the improved method achieves a recognition rate similar to SIFT, significantly higher than the other algorithms, while its running time is similar to ORB, significantly lower than the other algorithms.
Keywords: augmented reality, augmented image, image feature point extraction and matching, space weather, solar image
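Descriptor matching of the kind compared in this entry is typically done with a nearest-neighbor search plus Lowe's ratio test: a match is kept only when the best distance is clearly smaller than the second best. A NumPy sketch over toy descriptors (the 0.75 ratio is the conventional value, an assumption rather than this paper's setting):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match rows of desc_a to desc_b, keeping matches that pass Lowe's ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        j1, j2 = np.argsort(dists)[:2]               # best and second-best neighbor
        if dists[j1] < ratio * dists[j2]:            # ambiguous matches are dropped
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.standard_normal((20, 32))
desc_a = desc_b[:5] + rng.standard_normal((5, 32)) * 0.01  # near-copies of 5 rows
print(ratio_test_matches(desc_a, desc_b))  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

The same test applies whether the descriptors are 128-dimensional SIFT vectors or binary ORB strings (with Hamming distance swapped in for the Euclidean norm).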
16. Correlation of image textures of a polarization feature parameter and the microstructures of liver fibrosis tissues
Authors: Yue Yao, Jiachen Wan, +3 more authors, Fengdi Zhang, Yang Dong, Lihong Chen, Hui Ma. Journal of Innovative Optical Health Sciences, SCIE, EI, CSCD, 2023, Issue 5, pp. 59-68 (10 pages)
Mueller matrix imaging is an emerging tool for the quantitative characterization of pathological microstructures and is especially sensitive to fibrous structures. Liver fibrosis is characteristic of many types of chronic liver disease, and its clinical diagnosis requires time-consuming multiple staining processes that specifically target fibrous structures; the staining proficiency of technicians and the subjective visual assessment of pathologists can introduce inconsistency into clinical diagnosis. Mueller matrix imaging can reduce these staining processes and provide quantitative diagnostic indicators for characterizing liver fibrosis tissues. In this study, a fiber-sensitive polarization feature parameter (PFP) was derived through forward sequential feature selection (SFS) and linear discriminant analysis (LDA) to target the identification of fibrous structures. Then, the Pearson correlation coefficients and statistical t-tests between the fiber-sensitive PFP image textures and the liver fibrosis tissues were calculated. The results show that the gray-level run-length matrix (GLRLM)-based run entropy, which measures the heterogeneity of the PFP image, was the texture most correlated with the changes of liver fibrosis tissues across four stages, with a Pearson correlation of 0.6919. The results also indicate that a Pearson correlation as high as 0.9996 was achieved through linear regression predictions over combinations of the PFP image textures. This study demonstrates the potential of deriving a fiber-sensitive PFP to reduce multiple staining processes and to provide texture-based quantitative diagnostic indicators for the staging of liver fibrosis.
Keywords: polarization feature parameter, polarization image textures, liver fibrosis
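The GLRLM run entropy used in this entry can be computed from a run-length distribution: count maximal runs of equal gray levels along a direction, normalize the counts into probabilities, and take the Shannon entropy. A minimal sketch over horizontal runs (a simplified illustration; real GLRLM implementations quantize gray levels and aggregate several directions):

```python
import numpy as np

def run_entropy(image):
    """Shannon entropy of the horizontal run-length distribution of a 2-D array."""
    counts = {}
    for row in image:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                counts[(run_val, run_len)] = counts.get((run_val, run_len), 0) + 1
                run_val, run_len = v, 1
        counts[(run_val, run_len)] = counts.get((run_val, run_len), 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()                                 # run-type probabilities
    return float(-(p * np.log2(p)).sum())        # Shannon entropy in bits

uniform = np.zeros((4, 4), dtype=int)            # one run type only -> entropy 0
noisy = np.array([[0, 1, 0, 1],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 0, 1, 0]])                 # many short runs -> higher entropy
print(run_entropy(uniform), run_entropy(noisy))
```

This matches the abstract's use of run entropy as a heterogeneity measure: homogeneous regions yield a few long runs and low entropy, while fragmented fibrous textures yield many run types and high entropy.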
17. Clinical and multimodal imaging features of acute macular neuroretinopathy lesions following recent SARS-CoV-2 infection
Authors: Yang-Chen Liu, Bin Wu, +1 more author, Yan Wang, Song Chen. International Journal of Ophthalmology (English edition), SCIE, CAS, 2023, Issue 5, pp. 755-761 (7 pages)
AIM: To describe the clinical characteristics and multimodal imaging features of eyes with acute macular neuroretinopathy (AMN) lesions following severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection. METHODS: Retrospective case series. From December 18, 2022 to February 14, 2023, previously healthy patients within one week of SARS-CoV-2 infection who were examined at Tianjin Eye Hospital and confirmed to have AMN were included. In total, 5 males and 9 females [mean age: 29.93±10.32 (16-49) years] presented with reduced vision, with or without blurred vision. All patients underwent best corrected visual acuity (BCVA), intraocular pressure, slit lamp microscopy, and indirect fundoscopy examinations. Multimodal imaging was performed simultaneously: fundus photography (45° or 200° field of view) in 7 cases (14 eyes); near-infrared (NIR) fundus photography in 9 cases (18 eyes); optical coherence tomography (OCT) in 5 cases (10 eyes); optical coherence tomography angiography (OCTA) in 9 cases (18 eyes); and fundus fluorescence angiography (FFA) in 3 cases (6 eyes). Visual field testing was performed in 1 case (2 eyes). RESULTS: Multimodal imaging findings from 14 patients with AMN were reviewed. All eyes demonstrated hyperreflective lesions of varying extent at the level of the inner nuclear layer and/or outer plexiform layer on OCT or OCTA. Fundus photography (45° or 200° field of view) showed irregular hypo-reflective lesions around the fovea in 7 cases (14 eyes). OCTA demonstrated reduced vascular density in the superficial retinal capillary plexus (SCP), deep capillary plexus (DCP), and choriocapillaris (CC) in 9 cases (18 eyes). Among the follow-up cases (2 cases), vascular density increased in 1 case with improved BCVA; in the other case, vascular density decreased in one eye and was essentially unchanged in the other. En face images of ellipsoid zone and interdigitation zone injury showed a low wedge-shaped reflection contour, and NIR images mainly showed absence of the outer retinal interdigitation zone in AMN. No abnormal fluorescence was observed on FFA. A corresponding partial visual field defect was visualized via perimetry in one case. CONCLUSION: The morbidity of AMN after SARS-CoV-2 infection is increased. Ophthalmologists should be aware of this possible, albeit rare, complication and focus on its multimodal imaging features; OCT, OCTA, and NIR fundus imaging prove to be valuable tools for detecting AMN in patients with SARS-CoV-2.
Keywords: SARS-CoV-2 infection; optical coherence tomography; acute macular neuroretinopathy; multimodal imaging features
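The vascular density that OCTA reports for the SCP, DCP, and CC slabs is, in essence, the fraction of perfused pixels in a binarized en-face angiogram. The sketch below illustrates that calculation only; it is not the Tianjin Eye Hospital pipeline, and the fixed threshold is an assumption (commercial devices use proprietary binarization).

```python
def vessel_density(enface, threshold=0.35):
    """Fraction of pixels classified as perfused in an en-face OCTA slab.

    enface    -- 2-D list of decorrelation values scaled to [0, 1]
    threshold -- binarization cutoff (illustrative value, not from the paper)
    """
    perfused = sum(1 for row in enface for v in row if v >= threshold)
    total = sum(len(row) for row in enface)
    return perfused / total

# Toy 4x4 slab: top half "perfused", bottom half signal-poor.
slab = [
    [0.8, 0.7, 0.9, 0.6],
    [0.5, 0.6, 0.4, 0.7],
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.1, 0.3, 0.1],
]
print(vessel_density(slab))  # 0.5
```

Comparing this fraction between visits is how a follow-up "increase in vascular density" like the one reported above would be quantified.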
Adaptive Window Based 3-D Feature Selection for Multispectral Image Classification Using Firefly Algorithm
18
Authors: M. Rajakani, R. J. Kavitha, A. Ramachandran. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 1, pp. 265-280 (16 pages)
Feature extraction is the most critical step in the classification of multispectral images. Classification accuracy is mainly influenced by the feature sets selected to classify the image. In the past, handcrafted feature sets were used, which are not adaptive across different image domains. To overcome this, an evolutionary learning method is developed to automatically learn the spatial-spectral features for classification. A modified Firefly Algorithm (FA), which achieves maximum classification accuracy with a reduced feature set size, is proposed for feature selection. For extracting the most efficient features from the dataset, a 3-D discrete wavelet transform is used to decompose the multispectral image in all three dimensions. For selecting spatial and spectral features, three window-based approaches are studied, namely overlapping window (OW-3DFS), non-overlapping window (NW-3DFS), and adaptive window cube (AW-3DFS), alongside a pixel-based technique. A fivefold Multiclass Support Vector Machine (MSVM) is used for classification. Experiments conducted on the Madurai LISS IV multispectral image show that the adaptive window approach increases classification accuracy.
Keywords: multispectral image; modified firefly algorithm; 3-D feature extraction; feature selection; multiclass support vector machine; classification
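Wrapper-style feature selection of the kind described above can be sketched as a binary firefly search: each firefly encodes a feature subset, brighter (fitter) fireflies attract dimmer ones, and attraction decays with Hamming distance. Everything below is a generic illustration, not the authors' modified FA: the fitness is a synthetic stand-in for MSVM accuracy (it rewards three hypothetical "informative" features and penalizes subset size), and all constants are assumed.

```python
import math
import random

random.seed(7)

N_FEATURES = 10
INFORMATIVE = {1, 4, 8}  # toy ground truth standing in for classifier accuracy

def fitness(mask):
    """Reward selected informative features, penalize subset size."""
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.05 * sum(mask)

def hamming(a, b):
    """Number of differing bits between two subset masks."""
    return sum(x != y for x, y in zip(a, b))

def binary_firefly(n_fireflies=12, n_iter=60, gamma=0.15, flip=0.25):
    swarm = [[random.randint(0, 1) for _ in range(N_FEATURES)]
             for _ in range(n_fireflies)]
    for _ in range(n_iter):
        scores = [fitness(f) for f in swarm]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if scores[j] > scores[i]:  # firefly i moves toward brighter j
                    beta = math.exp(-gamma * hamming(swarm[i], swarm[j]))
                    for k in range(N_FEATURES):
                        if swarm[i][k] != swarm[j][k] and random.random() < beta:
                            swarm[i][k] = swarm[j][k]  # copy attracting bit
                        elif random.random() < flip * (1 - beta):
                            swarm[i][k] ^= 1           # random exploration
            scores[i] = fitness(swarm[i])
    best = max(swarm, key=fitness)
    return [k for k, bit in enumerate(best) if bit]

print(binary_firefly())
```

In the paper's setting, `fitness` would instead run the fivefold MSVM on the candidate 3-D wavelet feature subset and return its cross-validated accuracy minus a size penalty.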
An Effective Machine-Learning Based Feature Extraction/Recognition Model for Fetal Heart Defect Detection from 2D Ultrasonic Imageries
19
Authors: Bingzheng Wu, Peizhong Liu, Huiling Wu, Shunlan Liu, Shaozheng He, Guorong Lv. Computer Modeling in Engineering & Sciences, SCIE EI, 2023, Issue 2, pp. 1069-1089 (21 pages)
Congenital heart defects, accounting for about 30% of congenital defects, are the most common type. Data show that congenital heart defects have seriously affected the birth rate of healthy newborns. In fetal and neonatal cardiology, medical imaging technology (2D ultrasound, MRI) has proved helpful in detecting congenital defects of the fetal heart and assists sonographers in prenatal diagnosis. Recognizing the 2D fetal heart ultrasonic standard plane (FHUSP) manually is a highly complex task. Compared with manual identification, automatic identification through artificial intelligence can save a lot of time, ensure diagnostic efficiency, and improve diagnostic accuracy. In this study, a feature extraction method based on texture features (Local Binary Pattern, LBP, and Histogram of Oriented Gradients, HOG) combined with a Bag of Words (BOW) model is carried out, followed by feature fusion. Finally, a Support Vector Machine (SVM) is adopted to realize automatic recognition and classification of FHUSP. The data include 788 standard plane images and 448 normal and abnormal plane images. Compared with several other methods and single-feature models, the classification accuracy of our model is clearly improved, with the highest accuracy reaching 87.35%. We also verify the performance of the model on normal and abnormal planes, where the average accuracy in classifying abnormal versus normal planes is 84.92%. The experimental results show that this method can effectively classify and predict different FHUSP and can provide assistance for sonographers in diagnosing fetal congenital heart disease.
Keywords: congenital heart defect; fetal heart ultrasonic standard plane; image recognition and classification; machine learning; bag of words model; feature fusion
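The first texture descriptor named above, LBP, labels each pixel by thresholding its 8 neighbors against the center value and reading the resulting bits as a byte; a histogram of these codes over the image is the texture feature fed to the BOW/SVM stages. A minimal pure-Python sketch of the basic 3x3 LBP follows (illustrative only; the paper's exact LBP variant, HOG settings, and BOW vocabulary size are not specified in the abstract):

```python
def lbp_code(img, y, x):
    """Basic 8-neighbor LBP code for pixel (y, x), bits ordered clockwise
    from the top-left neighbor; a neighbor >= center sets its bit."""
    c = img[y][x]
    neighbors = [img[y-1][x-1], img[y-1][x], img[y-1][x+1],
                 img[y][x+1],   img[y+1][x+1], img[y+1][x],
                 img[y+1][x-1], img[y][x-1]]
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= c)

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

patch = [
    [10, 20, 30],
    [40, 25, 50],
    [60, 70, 80],
]
print(lbp_code(patch, 1, 1))  # 252
```

In a full pipeline, such histograms (and HOG descriptors) computed over image patches would be quantized against a learned visual vocabulary to form the BOW representation.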
Identification of serous ovarian tumors based on polarization imaging and correlation analysis with clinicopathological features
20
Authors: Yulu Huang, Anli Hou, Jing Wang, Yue Yao, Wenbin Miao, Xuewu Tian, Jiawen Yu, Cheng Li, Hui Ma, Yujuan Fan. Journal of Innovative Optical Health Sciences, SCIE EI CSCD, 2023, Issue 5, pp. 33-46 (14 pages)
Ovarian cancer is one of the most aggressive and heterogeneous female tumors in the world, and serous ovarian cancer (SOC) is of particular concern as the leading cause of ovarian cancer death. Due to its clinical and biological complexities, ovarian cancer is still considered one of the most difficult tumors to diagnose and manage. In this study, three datasets were assembled, including 30 cases of serous cystadenoma (SCA), 30 cases of serous borderline tumor (SBT), and 45 cases of serous adenocarcinoma (SAC). Mueller matrix microscopy is used to obtain the polarimetry basis parameters (PBPs) of each case, combined with a machine learning (ML) model to derive the polarimetry feature parameters (PFPs) for distinguishing serous ovarian tumors (SOTs). The correlation between the mean values of PBPs and the clinicopathological features of serous ovarian cancer was analyzed. The accuracies of PFPs obtained from the three types of SOT for identifying dichotomous groups (SCA versus SAC, SCA versus SBT, and SBT versus SAC) were 0.91, 0.92, and 0.8, respectively. The accuracy of the PFP for identifying the triadic group (SCA versus SBT versus SAC) was 0.75. Correlation analysis between PBPs and the clinicopathological features of SOC was performed. There were correlations between some PBPs (δ, β, q_L, E_2, rqcross, P_2, P_3, P_4, and P_5) and clinicopathological features, including the International Federation of Gynecology and Obstetrics (FIGO) stage, pathological grading, preoperative ascites, malignant ascites, and peritoneal implantation. The research showed that PFPs extracted from polarization images have potential applications in quantitatively differentiating SOTs. The polarimetry basis parameters related to the clinicopathological features of SOC can be used as prognostic factors.
Keywords: serous ovarian tumor (SOT); polarimetry basis parameter (PBP); polarimetry feature parameter (PFP); polarization imaging; machine learning (ML)
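Polarimetry basis parameters derived from a Mueller matrix include, among others, the diattenuation and polarizance of the Lu-Chipman polar decomposition, which come directly from the first row and first column of the 4x4 matrix. A minimal sketch of those two standard quantities follows (illustrative only; the paper's full PBP set, including δ, β, and q_L, requires the complete decomposition, which is not reproduced here):

```python
import math

def diattenuation(M):
    """D = sqrt(M01^2 + M02^2 + M03^2) / M00 (first row of the Mueller matrix)."""
    return math.sqrt(M[0][1]**2 + M[0][2]**2 + M[0][3]**2) / M[0][0]

def polarizance(M):
    """P = sqrt(M10^2 + M20^2 + M30^2) / M00 (first column of the Mueller matrix)."""
    return math.sqrt(M[1][0]**2 + M[2][0]**2 + M[3][0]**2) / M[0][0]

# Identity matrix (non-polarizing sample): both parameters vanish.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Ideal horizontal linear polarizer: complete diattenuation and polarizance.
polarizer = [[0.5, 0.5, 0, 0], [0.5, 0.5, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]

print(diattenuation(identity), polarizance(identity))    # 0.0 0.0
print(diattenuation(polarizer), polarizance(polarizer))  # 1.0 1.0
```

Per-pixel parameters of this kind, averaged over a stained section, are the sort of scalar features an ML model can then combine into the discriminative PFPs reported above.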