Journal Articles
6 articles found
1. Automatic Image Annotation Using Adaptive Convolutional Deep Learning Model
Authors: R. Jayaraj, S. Lokesh. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 4, pp. 481-497 (17 pages)
Every day, websites and personal archives create more and more photos, and the size of these archives is immense. The ease of use of these huge digital image collections contributes to their popularity. However, not all of these collections provide relevant indexing information, which makes it difficult to find the data a user is interested in. Therefore, in order to determine the significance of the data, it is important to identify the contents in an informative manner. Image annotation is one of the most challenging problems in multimedia research and computer vision. Hence, in this paper, an Adaptive Convolutional Deep Learning Model (ACDLM) is developed for automatic image annotation. Initially, the databases are collected from open-source systems and consist of labelled images (for the training phase) and unlabelled images {Corel 5K, MSRC v2}. The images are then passed through pre-processing steps such as colour space quantization and texture colour class mapping. The pre-processed images are sent to a segmentation stage for efficient labelling using J-image segmentation (JSEG). The final step is automatic annotation using ACDLM, which combines a Convolutional Neural Network (CNN) with the Honey Badger Algorithm (HBA). Based on the proposed classifier, the unlabelled images are labelled. The proposed methodology is implemented in MATLAB and its performance is evaluated with metrics such as accuracy, precision, recall and F1-measure.
Keywords: deep learning model; J-image segmentation; honey badger algorithm; convolutional neural network; image annotation
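Below is a minimal, self-contained Python/NumPy sketch of the kind of Honey Badger Algorithm-style search the ACDLM pipeline uses to tune its classifier: a population of candidate hyperparameter vectors is nudged toward the best-scoring one with a decaying density factor. The fitness function is a hypothetical stand-in for the CNN's validation accuracy, and the bounds, constants and update rule are a loose simplification rather than the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Hypothetical stand-in for the CNN's validation accuracy as a function of
    # two normalized hyperparameters (e.g. learning rate, dropout rate).
    return -np.sum((x - np.array([0.3, 0.7])) ** 2)

dim, pop, iters = 2, 12, 60
beta, C = 6.0, 2.0                                  # ability / density constants (illustrative values)
lo, hi = np.zeros(dim), np.ones(dim)

X = rng.uniform(lo, hi, size=(pop, dim))            # candidate hyperparameter vectors
fit = np.array([fitness(x) for x in X])
best = X[np.argmax(fit)].copy()
best_fit = fit.max()

for t in range(iters):
    alpha = C * np.exp(-t / iters)                  # decreasing "density factor"
    for i in range(pop):
        r = rng.random(dim)
        F = 1.0 if rng.random() < 0.5 else -1.0     # random search direction
        if rng.random() < 0.5:
            # "digging"-style move: exploit the neighbourhood of the best solution
            cand = best + F * beta * alpha * r * (best - X[i])
        else:
            # "honey"-style move: a shorter random step around the best solution
            cand = best + F * alpha * r
        cand = np.clip(cand, lo, hi)
        f_cand = fitness(cand)
        if f_cand > fit[i]:                         # greedy replacement of the candidate
            X[i], fit[i] = cand, f_cand
            if f_cand > best_fit:
                best, best_fit = cand.copy(), f_cand

print("best hyperparameters:", best, "score:", best_fit)
```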
2. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation (cited by 1)
Author: Tian Dongping. High Technology Letters (EI, CAS), 2017, Issue 4, pp. 367-374 (8 pages)
In recent years, the multimedia annotation problem has attracted significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective search environment that lets users query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labelled images in large quantities while unlabelled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Further, image features with different magnitudes lead to different annotation performance; to this end, a Gaussian normalization method is used to normalize the features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model significantly improves on traditional PLSA for the task of automatic image annotation.
Keywords: automatic image annotation; semi-supervised learning; probabilistic latent semantic analysis (PLSA); transductive support vector machine (TSVM); image segmentation; image retrieval
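A toy sketch of the asymmetric PLSA EM updates mentioned in the abstract, run on a synthetic image-by-word count matrix. The counts, topic number and ranking step are placeholders; the TSVM, normalized-cuts segmentation and Gaussian normalization stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.integers(0, 5, size=(8, 12)).astype(float)     # toy image-by-word counts n[d, w]
D, W, Z = 8, 12, 3                                      # images, annotation words, latent topics
eps = 1e-12

p_z_d = rng.dirichlet(np.ones(Z), size=D)               # P(z | d)
p_w_z = rng.dirichlet(np.ones(W), size=Z)               # P(w | z)

for _ in range(100):
    # E-step: responsibilities R[d, w, z] proportional to P(z|d) * P(w|z)
    R = p_z_d[:, None, :] * p_w_z.T[None, :, :]
    R /= R.sum(axis=2, keepdims=True) + eps
    # M-step: re-estimate P(w|z) and P(z|d) from expected counts
    nw = (n[:, :, None] * R).sum(axis=0).T               # shape (Z, W)
    p_w_z = nw / (nw.sum(axis=1, keepdims=True) + eps)
    nz = (n[:, :, None] * R).sum(axis=1)                 # shape (D, Z)
    p_z_d = nz / (nz.sum(axis=1, keepdims=True) + eps)

# Candidate annotations for image 0: words ranked by P(w|d) = sum_z P(w|z) P(z|d)
scores = p_z_d[0] @ p_w_z
print(np.argsort(scores)[::-1][:5])
```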
3. Robust Deep Transfer Learning Based Object Detection and Tracking Approach
Authors: C. Narmadha, T. Kavitha, R. Poonguzhali, V. Hamsadhwani, Ranjan Walia, Monia, B. Jegajothi. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 3, pp. 3613-3626 (14 pages)
At present, object detection and tracking have gained importance among researchers and business users, and deep learning (DL) approaches are widely used for object tracking because they improve both the performance and the speed of the tracking process. This paper presents a novel robust DL based object detection and tracking algorithm using Automated Image Annotation with a ResNet based Faster regional convolutional neural network (R-CNN), named the AIA-FRCNN model. The AIA-FRCNN method performs image annotation using a Discriminative Correlation Filter (DCF) with Channel and Spatial Reliability tracker (CSR), called the DCF-CSRT model. The AIA-FRCNN model uses Faster R-CNN as the object detector and tracker, which involves a region proposal network (RPN) and Fast R-CNN. The RPN is a fully convolutional network that concurrently predicts the bounding boxes and objectness scores of different objects; it is trained to generate high-quality region proposals, which are then used by Fast R-CNN for detection. In addition, a Residual Network (ResNet-101) is used as the shared convolutional neural network (CNN) for generating feature maps. The performance of the ResNet-101 model is further improved by the Adam optimizer, which tunes the hyperparameters, namely learning rate, batch size, momentum and weight decay. Finally, a softmax layer is applied to classify the images. The AIA-FRCNN method has been assessed on a benchmark dataset with a detailed comparative analysis of the results; the experiments indicate the superior characteristics of the AIA-FRCNN model under diverse aspects.
Keywords: object detection; tracking; deep learning; deep transfer learning; image annotation
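A hedged sketch of the detection half of such a pipeline using torchvision's off-the-shelf Faster R-CNN. Note that torchvision ships a ResNet-50 FPN backbone rather than the ResNet-101 used in the paper, and the DCF-CSRT annotation and Adam-based fine-tuning are not shown, so this approximates the idea rather than reproducing the authors' AIA-FRCNN model.

```python
# Requires torch and torchvision (>= 0.13 for the `weights` argument).
import torch
import torchvision

# Pre-trained Faster R-CNN (ResNet-50 FPN backbone; the paper uses ResNet-101).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)            # stand-in for one video frame (C, H, W) in [0, 1]
with torch.no_grad():
    pred = model([frame])[0]               # dict with 'boxes', 'labels', 'scores'

keep = pred["scores"] > 0.5                # simple confidence threshold
for box, label, score in zip(pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]):
    print(label.item(), round(score.item(), 3), box.tolist())
```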
4. Deep Learning Enabled Object Detection and Tracking Model for Big Data Environment
Authors: K. Vijaya Kumar, E. Laxmi Lydia, Ashit Kumar Dutta, Velmurugan Subbiah Parvathy, Gobi Ramasamy, Irina V. Pustokhina, Denis A. Pustokhin. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 2541-2554 (14 pages)
Big data has recently become inevitable due to the massive increase in data generated by real-time applications. Object detection and tracking applications have become popular among research communities and are useful in different domains such as vehicle navigation, augmented reality and surveillance. This paper introduces an effective deep learning based object tracker using Automated Image Annotation with an Inception v2 based Faster RCNN (AIA-IFRCNN) model in a big data environment. The AIA-IFRCNN model annotates the images with a Discriminative Correlation Filter (DCF) with Channel and Spatial Reliability tracker (CSR), named the DCF-CSRT model. The AIA-IFRCNN technique employs Faster RCNN for object detection and tracking, which comprises a region proposal network (RPN) and Fast R-CNN. In addition, the Inception v2 model is applied as a shared convolutional neural network (CNN) to generate the feature map. Lastly, a softmax layer performs the classification task. The effectiveness of the AIA-IFRCNN method is evaluated on a benchmark dataset and the results are assessed under diverse aspects, with a maximum detection accuracy of 97.77%.
Keywords: object detection; tracking; convolutional neural network; Inception v2; image annotation
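Complementing the detection sketch above, here is a hedged sketch of the DCF-CSRT style annotation step: OpenCV's CSRT tracker (a DCF tracker with channel and spatial reliability, available in opencv-contrib-python) propagates a manually drawn box through a video so later frames can be labelled automatically. The video path and initial box are placeholders, and the Inception v2 based detector is not shown.

```python
import cv2

cap = cv2.VideoCapture("input_video.mp4")        # placeholder path to a video to annotate
ok, frame = cap.read()
assert ok, "could not read the first frame"

init_box = (50, 60, 120, 160)                    # (x, y, w, h) drawn manually on frame 0
tracker = cv2.TrackerCSRT_create()               # DCF tracker with channel/spatial reliability
tracker.init(frame, init_box)

annotations = [init_box]                         # one box (or None) per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, box = tracker.update(frame)              # propagate the box to this frame
    annotations.append(tuple(int(v) for v in box) if ok else None)

cap.release()
print(f"annotated {sum(b is not None for b in annotations)} frames")
```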
5. Detection of Angioectasias and Haemorrhages Incorporated into a Multi-Class Classification Tool for the GI Tract Anomalies by Using Binary CNNs
Authors: Christos Barbagiannis, Alexios Polydorou, Michail Zervakis, Andreas Polydorou, Eleftheria Sergaki. Journal of Biomedical Science and Engineering, 2021, Issue 12, pp. 402-414 (13 pages)
The proposed deep learning algorithm will be integrated as a binary classifier under the umbrella of a multi-class classification tool to facilitate the automated detection of non-healthy deformities, anatomical landmarks, pathological findings, other anomalies and normal cases, by examining medical endoscopic images of the GI tract. Each binary classifier is trained to detect one specific non-healthy condition. The algorithm analyzed in the present work expands the detection ability of this tool by classifying GI tract image snapshots into two classes, depicting haemorrhage and non-haemorrhage states. The proposed algorithm is the result of a collaboration between interdisciplinary specialists in AI and data analysis, computer vision, and gastroenterologists of four university gastroenterology departments of Greek medical schools. The data used are 195 videos (177 from non-healthy cases and 18 from healthy cases) captured with the PillCam® Medronics device, originating from 195 patients, all diagnosed with different forms of angioectasia, haemorrhages and other diseases at different sites of the gastrointestinal (GI) tract, mainly including cases that are difficult to diagnose. Our AI algorithm is based on a convolutional neural network (CNN) trained on images annotated at image level, using a semantic tag indicating whether the image contains angioectasia and haemorrhage traces or not. At least 22 CNN architectures were created and evaluated, some of them pre-trained by applying transfer learning on ImageNet data. All the CNN variations were trained on a dataset with 50% prevalence and evaluated on unseen data. On test data, the best results were obtained from our CNN architectures that do not use a transfer learning backbone. Across a balanced dataset of non-healthy and healthy images from 39 videos from different patients, the best model identified the correct diagnosis with 90% sensitivity, 92% specificity, 91.8% precision, 8% FPR and 10% FNR. In addition, we compared the performance of our best CNN algorithm with an algorithm with the same goal based on HSV colorimetric lesion features extracted from pixel-level annotations, both algorithms trained and tested on the same data. The CNN trained on image-level annotated images is 9% less sensitive and achieves 2.6% less precision, 1.2% less FPR and 7% less FNR than the one based on HSV filters extracted from pixel-level annotated training data.
Keywords: Capsule Endoscopy (CE); Small Bowel Bleeding (SBB); angioectasia; haemorrhage; Gastrointestinal (GI); Small Bowel Capsule Endoscopy (SBCE); Convolutional Neural Network (CNN); Computer Aided Diagnosis (CAD); image level annotation; pixel level annotation; binary classification
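A minimal PyTorch sketch of one binary haemorrhage / non-haemorrhage classifier of the kind the paper evaluates. The layer sizes, input resolution and toy batch are illustrative placeholders; none of the 22 evaluated architectures, the training regime or the capsule-endoscopy data are reproduced here.

```python
import torch
import torch.nn as nn

class BinaryLesionCNN(nn.Module):
    """Tiny CNN emitting a single logit: haemorrhage (1) vs. non-haemorrhage (0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = BinaryLesionCNN()
frames = torch.rand(4, 3, 224, 224)                    # toy batch standing in for endoscopy frames
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])    # image-level semantic tags
loss = nn.BCEWithLogitsLoss()(model(frames), labels)   # binary cross-entropy on the logits
loss.backward()
print(float(loss))
```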
6. A Semantic Ontology Structure-based Approach for Retrieving Similar Medical Images
Author: Yiwen Wang. Chinese Journal of Biomedical Engineering (English Edition) (CAS), 2020, Issue 4, pp. 11-19 (9 pages)
Radiology doctors perform text-based image retrieval when they want to retrieve medical images. However, the accuracy and efficiency of such retrieval cannot keep up with the requirements. An innovative algorithm is proposed to retrieve similar medical images. First, we extract the professional terms from the ontology structure and use them to annotate the CT images. Second, the semantic similarity matrix of ontology terms is calculated according to the structure of the ontology. Lastly, the corresponding semantic distance is calculated from the annotation vectors, which contain the different annotations. We ran the algorithm on 120 real liver CT images (divided into six categories) from a top-tier (Grade 3A) hospital. Results show that the retrieval index "Precision" is 80.81%, and the classification index AUC (Area Under Curve) of the ROC (Receiver Operating Characteristic) curve is 0.945.
Keywords: annotated images; semantic similarity matrix of ontology terms; ranking method of medical image similarity
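A toy sketch of the retrieval idea: each CT image is represented by a binary vector over ontology terms, a term-term semantic similarity matrix (derived in the paper from the ontology structure) softens exact keyword matching, and images are ranked by the resulting score. The matrix values, image identifiers and annotation vectors below are invented for illustration.

```python
import numpy as np

# Toy term-term semantic similarity matrix over four ontology terms:
# 1.0 on the diagonal, smaller values for semantically related terms.
S = np.array([
    [1.0, 0.6, 0.3, 0.1],
    [0.6, 1.0, 0.4, 0.1],
    [0.3, 0.4, 1.0, 0.2],
    [0.1, 0.1, 0.2, 1.0],
])

def semantic_score(q, v):
    """Soft match between two binary annotation vectors via the term similarity matrix."""
    return (q @ S @ v) / (np.sqrt(q @ S @ q) * np.sqrt(v @ S @ v))

query = np.array([1, 0, 1, 0], dtype=float)         # annotated terms of the query image
database = {                                         # hypothetical annotated CT images
    "ct_001": np.array([1, 1, 0, 0], dtype=float),
    "ct_002": np.array([0, 0, 1, 1], dtype=float),
    "ct_003": np.array([1, 0, 1, 0], dtype=float),
}

ranked = sorted(database, key=lambda k: semantic_score(query, database[k]), reverse=True)
print(ranked)                                        # most semantically similar image first
```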