Journal Articles
16,264 articles found
1. Congruent Feature Selection Method to Improve the Efficacy of Machine Learning-Based Classification in Medical Image Processing
Authors: Mohd Anjum, Naoufel Kraiem, Hong Min, Ashit Kumar Dutta, Yousef Ibrahim Daradkeh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 357-384 (28 pages)
Machine learning (ML) is increasingly applied for medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, eye, etc., to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and lead to an increase in computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features. This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions. The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve the feature selection congruency. Therefore, the more congruent pixels are sorted in the descending order of the selection, which identifies better regions than the distribution. Now, the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of the textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
Keywords: Computer vision, feature selection, machine learning, region detection, texture analysis, image classification, medical images
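The abstract describes ranking candidate features by correlation and keeping the most congruent ones in descending order. As a rough, generic illustration of correlation-driven feature ranking (not the paper's congruent-selection procedure), the sketch below scores feature columns by their absolute Pearson correlation with a diagnostic label and keeps the top k; the feature matrix, labels, and k are placeholder assumptions.

```python
import numpy as np

def select_by_correlation(features: np.ndarray, labels: np.ndarray, k: int = 10):
    """Rank columns of `features` by |Pearson correlation| with `labels`
    and return the indices of the k strongest ones (descending order)."""
    f = features - features.mean(axis=0)          # center the feature columns
    y = labels - labels.mean()                    # center the labels
    num = f.T @ y                                 # covariance numerator per feature
    den = np.linalg.norm(f, axis=0) * np.linalg.norm(y) + 1e-12
    corr = num / den                              # Pearson correlation per feature
    order = np.argsort(-np.abs(corr))             # descending absolute correlation
    return order[:k], corr[order[:k]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32))                       # 200 samples, 32 candidate features
    y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(float)      # label driven by two of them
    idx, scores = select_by_correlation(X, y, k=5)
    print(idx, np.round(scores, 3))
```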
2. Retrospective analysis of pathological types and imaging features in pancreatic cancer: A comprehensive study
Authors: Yang-Gang Luo, Mei Wu, Hong-Guang Chen. World Journal of Gastrointestinal Oncology (SCIE), 2025, Issue 1, pp. 121-129 (9 pages)
BACKGROUND: Pancreatic cancer remains one of the most lethal malignancies worldwide, with a poor prognosis often attributed to late diagnosis. Understanding the correlation between pathological type and imaging features is crucial for early detection and appropriate treatment planning.
AIM: To retrospectively analyze the relationship between different pathological types of pancreatic cancer and their corresponding imaging features.
METHODS: We retrospectively analyzed the data of 500 patients diagnosed with pancreatic cancer between January 2010 and December 2020 at our institution. Pathological types were determined by histopathological examination of the surgical specimens or biopsy samples. The imaging features were assessed using computed tomography, magnetic resonance imaging, and endoscopic ultrasound. Statistical analyses were performed to identify significant associations between pathological types and specific imaging characteristics.
RESULTS: There were 320 (64%) cases of pancreatic ductal adenocarcinoma, 75 (15%) of intraductal papillary mucinous neoplasms, 50 (10%) of neuroendocrine tumors, and 55 (11%) of other rare types. Distinct imaging features were identified in each pathological type. Pancreatic ductal adenocarcinoma typically presents as a hypodense mass with poorly defined borders on computed tomography, whereas intraductal papillary mucinous neoplasms present as characteristic cystic lesions with mural nodules. Neuroendocrine tumors often appear as hypervascular lesions in contrast-enhanced imaging. Statistical analysis revealed significant correlations between specific imaging features and pathological types (P<0.001).
CONCLUSION: This study demonstrated a strong association between the pathological types of pancreatic cancer and imaging features. These findings can enhance the accuracy of noninvasive diagnosis and guide personalized treatment approaches.
Keywords: Pancreatic cancer, pathological types, imaging features, retrospective analysis, diagnostic accuracy
3. Advancements in Remote Sensing Image Dehazing: Introducing URA-Net with Multi-Scale Dense Feature Fusion Clusters and Gated Jump Connection
Authors: Hongchi Liu, Xing Deng, Haijian Shao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 9, pp. 2397-2424 (28 pages)
The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle, profoundly impeding their effective utilization across various domains. Dehazing methodologies have emerged as pivotal components of image preprocessing, fostering an improvement in the quality of remote sensing imagery. This enhancement renders remote sensing data more indispensable, thereby enhancing the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze within remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed. This paradigmatic approach materializes as an end-to-end convolutional neural network distinguished by its utilization of multi-scale dense feature fusion clusters and gated jump connections. The essence of our methodology lies in local feature fusion within dense residual clusters, enabling the extraction of pertinent features from both preceding and current local data, depending on contextual demands. The intelligently orchestrated gated structures facilitate the propagation of these features to the decoder, resulting in superior outcomes in haze removal. Empirical validation through a plethora of experiments substantiates the efficacy of URA-Net, demonstrating its superior performance compared to existing methods when applied to established datasets for remote sensing image defogging. On the RICE-1 dataset, URA-Net achieves a Peak Signal-to-Noise Ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light For Dehazing (OTM-AAL) by 5.37 dB, the Unsupervised Single Image Dehazing (USID) by 8.0 dB, and the Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Particularly noteworthy, on the SateHaze1k dataset, URA-Net attains preeminence in overall performance, yielding defogged images characterized by consistent visual quality. This underscores the contribution of the research to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
Keywords: Remote sensing image, image dehazing, deep learning, feature fusion
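The dehazing gains above are quoted as PSNR margins in dB. For reference, PSNR between a restored image and its haze-free ground truth follows the standard definition sketched below; this is generic evaluation code, not part of URA-Net.

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```

By this definition, the quoted 11.17 dB margin over DCP implies a DCP result of about 17.90 dB on RICE-1.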
4. A Concise and Varied Visual Features-Based Image Captioning Model with Visual Selection
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Naveed Ahmed, Mohammed Ali Alshara. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 2873-2894 (22 pages)
Image captioning has gained increasing attention in recent years. Visual characteristics found in input images play a crucial role in generating high-quality captions. Prior studies have used visual attention mechanisms to dynamically focus on localized regions of the input image, improving the effectiveness of identifying relevant image regions at each step of caption generation. However, providing image captioning models with the capability of selecting the most relevant visual features from the input image and attending to them can significantly improve the utilization of these features. Consequently, this leads to enhanced captioning network performance. In light of this, we present an image captioning framework that efficiently exploits the extracted representations of the image. Our framework comprises three key components: the Visual Feature Detector module (VFD), the Visual Feature Visual Attention module (VFVA), and the language model. The VFD module is responsible for detecting a subset of the most pertinent features from the local visual features, creating an updated visual features matrix. Subsequently, the VFVA directs its attention to the visual features matrix generated by the VFD, resulting in an updated context vector employed by the language model to generate an informative description. Integrating the VFD and VFVA modules introduces an additional layer of processing for the visual features, thereby contributing to enhancing the image captioning model's performance. Using the MS-COCO dataset, our experiments show that the proposed framework competes well with state-of-the-art methods, effectively leveraging visual representations to improve performance. The implementation code can be found here: https://github.com/althobhani/VFDICM (accessed on 30 July 2024).
Keywords: Visual attention, image captioning, visual feature detector, visual feature visual attention
5. Triple-path feature transform network for ring-array photoacoustic tomography image reconstruction
Authors: Lingyu Ma, Zezheng Qin, Yiming Ma, Mingjian Sun. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2024, Issue 3, pp. 23-40 (18 pages)
Photoacoustic imaging (PAI) is a noninvasive emerging imaging method based on the photoacoustic effect, which provides necessary assistance for medical diagnosis. It has the characteristics of large imaging depth and high contrast. However, limited by equipment cost and reconstruction time requirements, existing PAI systems equipped with annular array transducers find it difficult to balance image quality and imaging speed. In this paper, a triple-path feature transform network (TFT-Net) for ring-array photoacoustic tomography is proposed to enhance the imaging quality from limited-view and sparse measurement data. Specifically, the network combines the raw photoacoustic pressure signals and conventional linear reconstruction images as input data, and takes the photoacoustic physical model as prior information to guide the reconstruction process. In addition, to enhance the ability of extracting signal features, the residual block and squeeze-and-excitation block are introduced into the TFT-Net. For further efficient reconstruction, the final output of photoacoustic signals uses a 'filter-then-upsample' operation with a pixel-shuffle multiplexer and a max-out module. Experiment results on simulated and in-vivo data demonstrate that the constructed TFT-Net can restore the target boundary clearly, reduce background noise, and realize fast and high-quality photoacoustic image reconstruction of limited view with sparse sampling.
Keywords: Deep learning, feature transformation, image reconstruction, limited-view measurement, photoacoustic tomography
6. Research on Multi-Scale Feature Fusion Network Algorithm Based on Brain Tumor Medical Image Classification
Authors: Yuting Zhou, Xuemei Yang, Junping Yin, Shiqi Liu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 5313-5333 (21 pages)
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global features and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient disappearance and network degradation and to improve feature representation capability. The double-branch iterative multi-scale classification block is used to improve the classification performance. On the brain glioma risk grading dataset, the results of the ablation experiment and comparison experiment show that the proposed HMAC-Net has the best performance in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators. On the dataset of skin cancer classification, the generalization experiment results show that the proposed HMAC-Net has a good generalization effect.
Keywords: Medical image classification, feature fusion, Transformer
7. CMMCAN: Lightweight Feature Extraction and Matching Network for Endoscopic Images Based on Adaptive Attention
Authors: Nannan Chong, Fan Yang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2761-2783 (23 pages)
In minimally invasive surgery, endoscopes or laparoscopes equipped with miniature cameras and tools are used to enter the human body for therapeutic purposes through small incisions or natural cavities. However, in clinical operating environments, endoscopic images often suffer from challenges such as low texture, uneven illumination, and non-rigid structures, which affect feature observation and extraction. This can severely impact surgical navigation or clinical diagnosis due to missing feature points in endoscopic images, leading to treatment and postoperative recovery issues for patients. To address these challenges, this paper introduces, for the first time, a Cross-Channel Multi-Modal Adaptive Spatial Feature Fusion (ASFF) module based on the lightweight architecture of EfficientViT. Additionally, a novel lightweight feature extraction and matching network based on an attention mechanism is proposed. This network dynamically adjusts attention weights for cross-modal information from grayscale images and optical flow images through a dual-branch Siamese network. It extracts static and dynamic information features ranging from low-level to high-level, and from local to global, ensuring robust feature extraction across different widths, noise levels, and blur scenarios. Global and local matching are performed through a multi-level cascaded attention mechanism, with cross-channel attention introduced to simultaneously extract low-level and high-level features. Extensive ablation experiments and comparative studies are conducted on the HyperKvasir, EAD, M2caiSeg, CVC-ClinicDB, and UCL synthetic datasets. Experimental results demonstrate that the proposed network improves upon the baseline EfficientViT-B3 model by 75.4% in accuracy (Acc), while also enhancing runtime performance and storage efficiency. When compared with the complex DenseDescriptor feature extraction network, the difference in Acc is less than 7.22%, and IoU calculation results on specific datasets outperform complex dense models. Furthermore, this method increases the F1 score by 33.2% and accelerates runtime by 70.2%. It is noteworthy that the speed of CMMCAN surpasses that of comparative lightweight models, with feature extraction and matching performance comparable to existing complex models but with faster speed and higher cost-effectiveness.
Keywords: Feature extraction and matching, lightweight network, medical images, endoscopic, attention
8. Brain Tumor Retrieval in MRI Images with Integration of Optimal Features
Authors: N V Shamna, B Aziz Musthafa. Journal of Harbin Institute of Technology (New Series) (CAS), 2024, Issue 6, pp. 71-83 (13 pages)
This paper presents an approach to improve medical image retrieval, particularly for brain tumors, by addressing the gap between low-level visual and high-level perceived contents in MRI, X-ray, and CT scans. Traditional methods based on color, shape, or texture are less effective. The proposed solution uses machine learning to handle high-dimensional image features, reducing computational complexity and mitigating issues caused by artifacts or noise. It employs a genetic algorithm for feature reduction and a hybrid residual UNet (HResUNet) model for Region-of-Interest (ROI) segmentation and classification, with enhanced image preprocessing. The study examines various loss functions, finding that a hybrid loss function yields superior results, and the GA-HResUNet model outperforms the HResUNet. Comparative analysis with state-of-the-art models shows a 4% improvement in retrieval accuracy.
Keywords: Medical images, brain MRI, machine learning, feature extraction and reduction, content-based image retrieval (CBIR)
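The genetic-algorithm feature-reduction step mentioned above can be illustrated with a small binary GA that evolves feature-subset masks and scores them with a simple proxy classifier. The population size, mutation rate, and k-NN fitness below are illustrative assumptions, not the GA-HResUNet configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def ga_feature_selection(X, y, pop=20, gens=15, p_mut=0.05, seed=0):
    """Evolve binary feature masks; fitness = cross-validated accuracy on the subset."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    masks = rng.random((pop, n)) < 0.5               # random initial population of masks

    def fitness(mask):
        if not mask.any():
            return 0.0
        clf = KNeighborsClassifier(n_neighbors=3)    # cheap stand-in scorer
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(-scores)[: pop // 2]]   # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut           # bit-flip mutation
            children.append(child)
        masks = np.vstack([parents, children])
    best = masks[np.argmax([fitness(m) for m in masks])]
    return np.flatnonzero(best)                      # indices of selected features
```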
9. Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module
Authors: HU Zhentao, HU Chonghao, YANG Haoran, SHUAI Weiwei. High Technology Letters (EI, CAS), 2024, Issue 1, pp. 23-30 (8 pages)
Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, a multi-generator mechanism is employed among the advanced approaches available to model different domain mappings, which results in inefficient training of neural networks and pattern collapse, leading to inefficient generation of image diversity. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, firstly, the domain code is introduced in this paper to explicitly control the different generation tasks. Secondly, this paper brings in the squeeze-and-excitation (SE) mechanism and feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. This paper performs qualitative and quantitative experiments on multiple non-paired benchmark image translation datasets while demonstrating the benefits of the proposed method over existing technologies. Overall, experimental results have shown that the proposed method is versatile and scalable.
Keywords: Multi-modal image translation, generative adversarial network (GAN), squeeze-and-excitation (SE) mechanism, feature attention (FA) module
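The squeeze-and-excitation (SE) mechanism brought in above is a standard channel-attention block. A minimal PyTorch sketch of a generic SE block follows; the reduction ratio and its placement inside the generator are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-pool each channel, then rescale the channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # excitation: reweight the channels

# Example: gate a 64-channel feature map from a generator branch.
feat = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```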
10. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, Issue 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet, image classification, lightweight convolutional neural network, depthwise dilated separable convolution, hierarchical multi-scale feature fusion
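The DDSC layer described above pairs a dilated depthwise convolution with a 1x1 pointwise convolution. The PyTorch sketch below illustrates such a layer; the kernel size, dilation rate, and channel counts are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Depthwise 3x3 convolution with dilation, followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size=3, padding=dilation, dilation=dilation,
            groups=in_ch, bias=False)                 # one filter per input channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A dilation of 2 enlarges the receptive field without adding parameters.
x = torch.randn(1, 32, 56, 56)
print(DepthwiseDilatedSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```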
11. MOVING OBJECT TRACKING IN DYNAMIC IMAGE SEQUENCE BASED ON ESTIMATION OF MOTION VECTORS OF FEATURE POINTS (Cited by 2)
Authors: LI Ning, ZHOU Jianjiang, ZHANG Xingxing. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2009, Issue 4, pp. 295-300 (6 pages)
An improved estimation of motion vectors of feature points is proposed for tracking moving objects in a dynamic image sequence. Feature points are first extracted by the improved minimum intensity change (MIC) algorithm. The matching points of these feature points are then determined by adaptive rood pattern searching. Based on the random sample consensus (RANSAC) method, the background motion is finally compensated using the parameters of an affine transform. With reasonable morphological filtering, the moving objects are completely extracted from the background, and then tracked accurately. Experimental results show that the improved method is successful in motion background compensation and offers great promise in tracking moving objects in dynamic image sequences.
Keywords: Motion compensation, motion estimation, feature extraction, moving object tracking, dynamic image sequence
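The pipeline summarized above (feature matching, a RANSAC-fitted affine model of the background motion, compensation, then morphological filtering) can be approximated with standard OpenCV calls. The sketch below is a generic reimplementation of that idea: Shi-Tomasi corners and pyramidal Lucas-Kanade tracking stand in for the MIC detector and adaptive rood pattern search, and the thresholds and kernel sizes are assumptions.

```python
import cv2
import numpy as np

def moving_object_mask(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Compensate background motion between two frames and return a foreground mask."""
    # Feature points in the previous frame and their matches in the current frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400, qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    # RANSAC affine fit: dominant (background) motion, robust to points on moving objects.
    affine, _ = cv2.estimateAffine2D(good_prev, good_curr,
                                     method=cv2.RANSAC, ransacReprojThreshold=3.0)
    # Warp the previous frame onto the current one to cancel the background motion.
    h, w = curr_gray.shape
    compensated = cv2.warpAffine(prev_gray, affine, (w, h))
    diff = cv2.absdiff(curr_gray, compensated)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Morphological opening/closing removes small residues and fills object holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```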
12. EFFECTIVE FEATURE ANALYSIS FOR COLOR IMAGE SEGMENTATION (Cited by 2)
Authors: LI Ning, MAO Sixin, LI Youfu. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2001, Issue 2, pp. 206-212 (7 pages)
An approach for color image segmentation is proposed based on the contributions of color features to segmentation rather than the choice of a particular color space. The determination of effective color features depends on the analysis of various color features from each tested color image via the designed feature encoding. It differs from previous methods in that a self-organized feature map (SOFM) is used for constructing the feature encoding, so that the encoding can self-organize the effective features for different color images. Fuzzy clustering is applied for the final segmentation when the well-suited color features and the initial parameters are available. The proposed method has been applied to segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. The study shows that the feature encoding approach offers great promise in automating and optimizing the segmentation of color images.
Keywords: Image segmentation, color image, neural networks, fuzzy clustering, feature encoding
13. Electronic image stabilization system based on global feature tracking (Cited by 7)
Authors: Zhu Juanjuan, Guo Baolong. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2008, Issue 2, pp. 228-233 (6 pages)
A new robust electronic image stabilization system is presented, which involves feature-point-tracking-based global motion estimation and Kalman-filtering-based motion compensation. First, the global motion is estimated from the local motions of selected feature points. Considering local moving objects and inevitable mismatches, a matching validation based on the stable relative distance between the point sets is proposed, thus maintaining high accuracy and robustness. Next, the global motion parameters are accumulated for correction by Kalman filtering. The experimental results illustrate that the proposed system is effective in stabilizing translational, rotational, and zooming jitter and is robust to local motions.
Keywords: Electronic image stabilization, global motion estimation, feature tracking, Kalman filter
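After the frame-to-frame global motion parameters are accumulated, stabilization systems of this kind typically smooth the accumulated trajectory and apply only the residual jitter as correction. The sketch below shows that smoothing step with a hand-rolled scalar constant-position Kalman filter; the noise settings are assumptions, and this is only a toy illustration rather than the system described above.

```python
import numpy as np

def kalman_smooth(trajectory, process_var=1e-3, measurement_var=1e-1):
    """Smooth an accumulated motion trajectory (e.g. cumulative dx per frame)
    with a scalar constant-position Kalman filter."""
    x, p = 0.0, 1.0                      # state estimate and its variance
    smoothed = []
    for z in trajectory:
        p = p + process_var              # predict: state unchanged, variance grows
        k = p / (p + measurement_var)    # Kalman gain
        x = x + k * (z - x)              # update with the measured trajectory value
        p = (1.0 - k) * p
        smoothed.append(x)
    return np.asarray(smoothed)

# Jitter correction: shift each frame by (smoothed - accumulated) motion.
raw = np.cumsum(np.random.default_rng(1).normal(0.0, 2.0, size=100))  # shaky trajectory
correction = kalman_smooth(raw) - raw
```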
14. Feature fusion method for edge detection of color images (Cited by 4)
Authors: Ma Yu, Gu Xiaodong, Wang Yuanyuan. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2009, Issue 2, pp. 394-399 (6 pages)
A novel feature fusion method is proposed for the edge detection of color images. In addition to the typical features used in edge detection, the color contrast similarity and the orientation consistency are also selected as features. The four features are combined together as a parameter to detect the edges of color images. Experimental results show that the method can inhibit noisy edges and facilitate the detection of weak edges. It has better performance than conventional methods in noisy environments.
Keywords: Color image processing, edge detection, feature extraction, feature fusion
15. Image block feature vectors based on a singular-value information metric and color-texture description (Cited by 4)
Authors: WANG Shuozhong, LU Xing, SU Shengjun, ZHANG Xinpeng. Journal of Shanghai University (English Edition) (CAS), 2007, Issue 3, pp. 205-209 (5 pages)
In this work, image feature vectors are formed for blocks containing sufficient information, which are selected using a singular-value criterion. When the ratio between the first two SVs is below a given threshold, the block is considered informative. A total of 12 features, including statistics of brightness, color components, and texture measures, are used to form intermediate vectors. Principal component analysis is then performed to reduce the dimension to 6 to give the final feature vectors. The relevance of the constructed feature vectors is demonstrated by experiments in which k-means clustering is used to group the vectors and hence the blocks. Blocks falling into the same group show similar visual appearances.
Keywords: Image feature, color, texture, content-based image retrieval (CBIR), image hashing
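The block-selection rule stated above (keep a block when the ratio of its first two singular values falls below a threshold) can be written compactly with NumPy. The block size and threshold below are assumptions; the 12-dimensional statistics and the PCA step are only indicated in a comment.

```python
import numpy as np

def informative_blocks(gray: np.ndarray, block: int = 32, ratio_thresh: float = 20.0):
    """Return top-left corners of blocks whose first-to-second singular-value
    ratio is below `ratio_thresh`, i.e. blocks with enough structure."""
    corners = []
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = gray[y:y + block, x:x + block].astype(float)
            s = np.linalg.svd(tile, compute_uv=False)   # singular values, descending
            if s[1] > 1e-9 and s[0] / s[1] < ratio_thresh:
                corners.append((y, x))
    return corners

# For the retained blocks one would then compute the 12 brightness/color/texture
# statistics, reduce them to 6 dimensions with PCA, and group them with k-means.
```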
16. Role of the texture features of images in the diagnosis of solitary pulmonary nodules in different sizes (Cited by 4)
Authors: Qian Zhao, Chang-Zheng Shi, Liang-Ping Luo. Chinese Journal of Cancer Research (SCIE, CAS, CSCD), 2014, Issue 4, pp. 451-458 (8 pages)
Objective: To explore the role of the texture features of images in the diagnosis of solitary pulmonary nodules (SPNs) of different sizes.
Materials and methods: A total of 379 patients with pathologically confirmed SPNs were enrolled in this study. They were divided into three groups based on SPN size: ≤10, 11-20, and >20 mm. Their texture features were segmented and extracted. The differences in the image features between benign and malignant SPNs were compared. The SPNs in these three groups were determined and analyzed with the texture features of images.
Results: These 379 SPNs were successfully segmented using the 2D Otsu threshold method and the self-adaptive threshold segmentation method. The texture features of these SPNs were obtained using the grey level co-occurrence matrix (GLCM) method. Of these 379 patients, 120 had benign SPNs and 259 had malignant SPNs. The entropy, contrast, energy, homogeneity, and correlation were 3.5597±0.6470, 0.5384±0.2561, 0.1921±0.1256, 0.8281±0.0604, and 0.8748±0.0740 in the benign SPNs and 3.8007±0.6235, 0.6088±0.2961, 0.1673±0.1070, 0.7980±0.0555, and 0.8550±0.0869 in the malignant SPNs (all P<0.05). The sensitivity, specificity, and accuracy of the texture features of images were 83.3%, 90.0%, and 86.8%, respectively, for SPNs sized <10 mm; 86.6%, 88.2%, and 87.1%, respectively, for SPNs sized 11-20 mm; and 94.7%, 91.8%, and 93.9%, respectively, for SPNs sized >20 mm.
Conclusions: The entropy and contrast of malignant pulmonary nodules have been demonstrated to be higher than those of benign pulmonary nodules, while the energy, homogeneity, and correlation of malignant pulmonary nodules are lower than those of benign pulmonary nodules. The texture features of images can reflect the tissue features and have high sensitivity, specificity, and accuracy in differentiating SPNs. The sensitivity and accuracy increase for larger SPNs.
Keywords: Solitary pulmonary nodules (SPNs), differentiation, textures, image features
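The five GLCM statistics compared in the study (entropy, contrast, energy, homogeneity, correlation) can be computed with scikit-image as sketched below. The distances, angles, and grey-level quantization are assumptions for illustration, not the study's measurement protocol.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch_uint8: np.ndarray) -> dict:
    """Entropy, contrast, energy, homogeneity and correlation of an 8-bit patch."""
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Entropy is not provided by graycoprops, so compute it per (distance, angle) slice.
    entropies = []
    for i in range(glcm.shape[2]):
        for j in range(glcm.shape[3]):
            p = glcm[:, :, i, j]
            p = p[p > 0]
            entropies.append(-np.sum(p * np.log2(p)))
    feats = {"entropy": float(np.mean(entropies))}
    for prop in ("contrast", "energy", "homogeneity", "correlation"):
        feats[prop] = float(graycoprops(glcm, prop).mean())
    return feats

# Example on a random 8-bit patch standing in for a segmented nodule ROI.
patch = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(glcm_features(patch))
```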
17. The Detecting System of Image Forgeries with Noise Features and EXIF Information (Cited by 4)
Authors: SUN Xiaoting, LI Yezhou, NIU Shaozhang, HUANG Yanli. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2015, Issue 5, pp. 1164-1176 (13 pages)
Recently, digital image blind forensics technology has received increasing attention in the academic community. This paper aims at developing a new identification approach based on the statistical noise and exchangeable image file format (EXIF) information of an image for image authentication. In particular, the authors can identify whether the current image has been modified or not by utilizing the relevance between noise and EXIF parameters and comparing the real values with the estimated values of the EXIF parameters. Experimental results validate the proposed method. That is, the detecting system can identify doctored images effectively.
Keywords: Blind forensics, doctored image, EXIF parameters, noise features
18. Image Classification Based on the Fusion of Complementary Features (Cited by 3)
Authors: Huilin Gao, Wenjie Chen. Journal of Beijing Institute of Technology (EI, CAS), 2017, Issue 2, pp. 197-205 (9 pages)
Image classification based on bag-of-words (BOW) has broad application prospects in the pattern recognition field, but shortcomings such as reliance on a single feature and low classification accuracy are apparent. To deal with this problem, this paper proposes to combine two ingredients: (i) three features with functions of mutual complementation are adopted to describe the images, including the pyramid histogram of words (PHOW), pyramid histogram of color (PHOC), and pyramid histogram of orientated gradients (PHOG); (ii) an adaptive feature-weight adjusted image categorization algorithm based on the SVM and the decision-level fusion of multiple features is employed. Experiments are carried out on the Caltech101 database, which confirms the validity of the proposed approach. The experimental results show that the classification accuracy rate of the proposed method is 7%-14% higher than that of the traditional BOW methods. With full utilization of global, local, and spatial information, the algorithm is much more complete and flexible in describing the feature information of the image through multi-feature fusion and the pyramid structure composed by image spatial multi-resolution decomposition. Significant improvements to the classification accuracy are achieved as a result.
Keywords: Image classification, complementary features, bag-of-words (BOW), feature fusion
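Decision-level fusion of several per-feature classifiers, as described in ingredient (ii), can be sketched with scikit-learn: one SVM is trained per feature representation and their class scores are combined with weights before the final argmax. The random stand-ins for PHOW/PHOC/PHOG and the fixed weights below are placeholders for the paper's descriptors and adaptively adjusted weights.

```python
import numpy as np
from sklearn.svm import SVC

def fused_predict(feature_sets_train, y_train, feature_sets_test, weights):
    """Train one SVM per feature representation and fuse their class scores."""
    fused = None
    for X_tr, X_te, w in zip(feature_sets_train, feature_sets_test, weights):
        clf = SVC(kernel="linear", probability=True).fit(X_tr, y_train)
        scores = clf.predict_proba(X_te)          # per-class probabilities
        fused = w * scores if fused is None else fused + w * scores
    return np.argmax(fused, axis=1)               # decision-level fusion

# Example with random stand-ins for PHOW, PHOC and PHOG descriptors.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, 60)
train = [rng.normal(size=(60, d)) for d in (300, 120, 168)]
test = [rng.normal(size=(10, d)) for d in (300, 120, 168)]
print(fused_predict(train, y, test, weights=[0.5, 0.25, 0.25]))
```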
19. A flower image retrieval method based on ROI feature (Cited by 6)
Authors: HONG Anxiang, CHEN Gang, LI Junli, CHI Zheru, ZHANG Dan. Journal of Zhejiang University Science (CSCD), 2004, Issue 7, pp. 764-772 (9 pages)
Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of a flower, and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
Keywords: Flower image retrieval, knowledge-driven segmentation, flower image characterization, Region-of-Interest (ROI), color features, shape features
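The Centroid-Contour Distance (CCD) descriptor used above is the sequence of distances from the region centroid to points along the flower contour. A minimal OpenCV/NumPy sketch follows; the binary flower mask is assumed to come from the color-clustering segmentation, and the number of resampled points is an assumption.

```python
import cv2
import numpy as np

def ccd_signature(mask: np.ndarray, n_samples: int = 128) -> np.ndarray:
    """Centroid-Contour Distance signature of the largest region in a binary mask,
    resampled to a fixed length and normalized for scale invariance."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    m = cv2.moments(mask.astype(np.uint8), binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]          # region centroid
    dists = np.hypot(contour[:, 0] - cx, contour[:, 1] - cy)   # centroid-to-contour distances
    # Resample to a fixed number of points and normalize by the maximum distance.
    idx = np.linspace(0, len(dists) - 1, n_samples).astype(int)
    sig = dists[idx]
    return sig / (sig.max() + 1e-12)
```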
20. The motion analysis of fire video images based on moment features and flicker frequency (Cited by 9)
Authors: LI Jin, FONG N.K., CHOW W.K., WONG L.T., LU Puyi, XU Dian-guo. Journal of Marine Science and Application, 2004, Issue 1, pp. 81-86 (6 pages)
In this paper, motion analysis methods based on moment features and flicker-frequency features are proposed for early fire flames captured by an ordinary CCD video camera. To further describe the changes of the flame and the disturbances of non-flame phenomena, the average changing pixel number of the first-order moments of consecutive flames is also defined in the moment analysis. The first-order moments of all kinds of flames used in our experiments flicker irregularly, and their average changing pixel numbers are greater than those of fire-like disturbances. The flicker frequency of the flame is extracted and calculated in the spatial domain, and is therefore computationally simple and fast. The method of extracting the flicker frequency from video images is not affected by the category of the combustion material or the distance. In the experiments, two kinds of flames were adopted, i.e., fixed flames and moving flames. Many comparison and disturbance experiments were carried out and verified that the methods can be used as criteria for early fire detection.
Keywords: Fire video images, moment features, flicker frequency
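The flicker-frequency cue discussed above can be estimated from the per-frame size of the candidate flame region: its dominant temporal frequency is read off an FFT of that time series. The sketch below assumes a binary flame mask per frame and a known frame rate; both are placeholders rather than details from the paper.

```python
import numpy as np

def flicker_frequency(masks: list, fps: float = 25.0) -> float:
    """Dominant temporal frequency (Hz) of the candidate flame region's area."""
    area = np.array([m.sum() for m in masks], dtype=float)   # region size per frame
    area -= area.mean()                                       # drop the DC component
    spectrum = np.abs(np.fft.rfft(area))
    freqs = np.fft.rfftfreq(len(area), d=1.0 / fps)
    return float(freqs[np.argmax(spectrum[1:]) + 1])          # skip the zero-frequency bin
```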