Journal Articles
4 articles found
1. A flower image retrieval method based on ROI feature (Cited: 6)
Authors: 洪安祥, 陈刚, 李均利, 池哲儒, 张亶. Journal of Zhejiang University Science (CSCD), 2004, Issue 7, pp. 764-772 (9 pages).
Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower, and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of the flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
Keywords: flower image retrieval; knowledge-driven segmentation; flower image characterization; region-of-interest (ROI); color features; shape features
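The Centroid-Contour Distance descriptor mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name, the fixed-length resampling, and the max-normalization are assumptions made for the sketch.

```python
import numpy as np

def centroid_contour_distance(contour, n_bins=36):
    """Centroid-Contour Distance (CCD): distances from the region
    centroid to points along the contour, resampled to a fixed length
    so descriptors of different contours are comparable."""
    contour = np.asarray(contour, dtype=float)  # shape (N, 2)
    centroid = contour.mean(axis=0)
    d = np.linalg.norm(contour - centroid, axis=1)
    # Resample to n_bins evenly spaced contour samples
    idx = np.linspace(0, len(d) - 1, n_bins).astype(int)
    ccd = d[idx]
    # Normalize by the maximum distance for scale invariance
    return ccd / (ccd.max() + 1e-12)

# Toy example: eight points on a square contour
square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
desc = centroid_contour_distance(square, n_bins=8)
```

Matching two flowers then reduces to comparing their CCD vectors (e.g. by L2 distance), optionally after circularly shifting one vector to handle rotation of the starting point.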
2. Autonomous Parking-Lots Detection with Multi-Sensor Data Fusion Using Machine Deep Learning Techniques (Cited: 1)
Authors: Kashif Iqbal, Sagheer Abbas, Muhammad Adnan Khan, Atifa Ather, Muhammad Saleem Khan, Areej Fatima, Gulzar Ahmad. Computers, Materials & Continua (SCIE, EI), 2021, Issue 2, pp. 1595-1612 (18 pages).
The rapid development and progress of deep machine-learning techniques have become a key factor in solving the future challenges of humanity. Vision-based target detection and object classification have been improved by the development of deep learning algorithms. Data fusion is a prerequisite data-preprocessing task in autonomous driving, combining multi-sensor inputs to provide precise, well-engineered, and complete detection of objects, scenes, or events. The target of the current study is to develop an in-vehicle information system that prevents, or at least mitigates, traffic issues related to parking detection and traffic congestion. In this study we address these problems by (1) extracting regions of interest from the images, (2) detecting vehicles via instance segmentation, and (3) building a deep learning model based on key features obtained from input parking images. We build a deep machine-learning model that collects real video feeds from vision sensors and predicts free parking spaces. Image augmentation was performed using edge detection, cropping, rotation, thresholding, resizing, and color augmentation to predict bounding-box regions. A deep convolutional neural network model, F-MTCNN, is proposed that is capable of compiling, training, validating, and testing on parking video frames captured by camera. The proposed model was evaluated on the publicly available PK-Lot parking dataset, and the optimized model achieved a higher accuracy (97.6%) than previously reported methodologies. Moreover, this article presents mathematical and simulation results using state-of-the-art deep learning technologies for smart parking space detection. The results are verified using the Python, TensorFlow, and OpenCV frameworks.
Keywords: smart parking-lot detection; deep convolutional neural network; data augmentation; region-of-interest; object detection
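The augmentation steps listed in the abstract (cropping, rotation, thresholding) can be illustrated with a toy NumPy pipeline. This is a sketch only: the paper's pipeline uses OpenCV on real video frames, and the function name and parameter values here are invented for the example.

```python
import numpy as np

def augment_frame(img, crop=4, k_rot=1, thresh=128):
    """Toy augmentation pipeline for a grayscale frame: center-crop,
    rotate in 90-degree steps, and binarize -- simple stand-ins for
    the cropping, rotation, and thresholding operations applied to
    parking-lot frames before training."""
    h, w = img.shape
    cropped = img[crop:h - crop, crop:w - crop]      # center crop
    rotated = np.rot90(cropped, k=k_rot)             # coarse rotation
    binary = (rotated >= thresh).astype(np.uint8)    # threshold mask
    return binary

# Synthetic 64x64 "frame" with a gradient of intensities
frame = (np.arange(64 * 64) % 256).astype(np.uint8).reshape(64, 64)
out = augment_frame(frame)
```

In a real pipeline each training image would be expanded into several such variants, which helps the network generalize across camera angles and lighting.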
3. S4Net: Single stage salient-instance segmentation (Cited: 2)
Authors: Ruochen Fan, Ming-Ming Cheng, Qibin Hou, Tai-Jiang Mu, Jingdong Wang, Shi-Min Hu. Computational Visual Media (CSCD), 2020, Issue 2, pp. 191-204 (14 pages).
In this paper, we consider salient instance segmentation. As well as producing bounding boxes, our network also outputs high-quality instance-level segments as initial selections to indicate the regions of interest. Taking into account the category-independent property of each target, we design a single stage salient instance segmentation framework with a novel segmentation branch. Our new branch regards not only the local context inside each detection window but also the surrounding context, enabling us to distinguish instances in the same scope even with partial occlusion. Our network is end-to-end trainable and is fast (running at 40 fps for images with resolution 320 × 320). We evaluate our approach on a publicly available benchmark and show that it outperforms alternative solutions. We also provide a thorough analysis of our design choices to help readers better understand the function of each part of our network. Source code can be found at https://github.com/RuochenFan/S4Net.
Keywords: salient-instance segmentation; salient object detection; single stage; region-of-interest masking
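The idea of a segmentation branch that sees both a detection window and its surrounding context can be sketched as an expanded RoI crop. This is only an illustration of the concept: S4Net implements this as a differentiable RoI layer inside the network, and the function name and 50% expansion factor below are assumptions.

```python
import numpy as np

def roi_with_context(feat, box, expand=0.5):
    """Crop a feature map to a detection window enlarged by `expand`
    (here 50% of the window size, split across both sides), so a
    downstream segmentation branch sees both the window interior and
    its surrounding context."""
    h, w = feat.shape[:2]
    x0, y0, x1, y1 = box
    dx = (x1 - x0) * expand / 2
    dy = (y1 - y0) * expand / 2
    x0e = max(0, int(round(x0 - dx)))
    y0e = max(0, int(round(y0 - dy)))
    x1e = min(w, int(round(x1 + dx)))
    y1e = min(h, int(round(y1 + dy)))
    return feat[y0e:y1e, x0e:x1e]

feat = np.zeros((32, 32))                      # dummy feature map
crop = roi_with_context(feat, box=(8, 8, 16, 16))
```

Including pixels just outside the box is what lets the branch separate an instance from a touching or partially occluding neighbor, which a crop of the box alone cannot do.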
4. Saliency-Based Fidelity Adaptation Preprocessing for Video Coding
Authors: 卢少平, 张松海. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2011, Issue 1, pp. 195-202 (8 pages).
In this paper, we present a video coding scheme which applies the technique of visual saliency computation to adjust image fidelity before compression. To extract visually salient features, we construct a spatio-temporal saliency map by analyzing the video using a combined bottom-up and top-down visual saliency model. We then use an extended bilateral filter, in which the local intensity and spatial scales are adjusted according to visual saliency, to adaptively alter the image fidelity. Our implementation is based on the H.264 video encoder JM12.0. Besides evaluating our scheme with the H.264 reference software, we also compare it to a more traditional foreground-background segmentation-based method and a foveation-based approach which employs Gaussian blurring. Our results show that the proposed algorithm can improve the compression ratio significantly while effectively preserving perceptual visual quality.
Keywords: visual saliency; bilateral filter; fidelity adjustment; region-of-interest; encoder
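A bilateral filter whose scales shrink in salient regions and grow elsewhere, as described above, can be sketched directly in NumPy. This is a slow didactic version under assumed parameter ranges; the function name, sigma ranges, and linear interpolation scheme are illustrative, not the paper's.

```python
import numpy as np

def saliency_bilateral(img, saliency, radius=2,
                       sigma_s=(1.0, 3.0), sigma_r=(10.0, 40.0)):
    """Saliency-adaptive bilateral filter (toy version): where the
    saliency map is high the spatial/range sigmas stay small (detail
    preserved); where it is low they grow, smoothing away detail the
    encoder would otherwise spend bits on."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            s = saliency[i, j]  # in [0, 1]; 1 = most salient
            ss = sigma_s[1] + s * (sigma_s[0] - sigma_s[1])
            sr = sigma_r[1] + s * (sigma_r[0] - sigma_r[1])
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            wgt = np.exp(-(xs**2 + ys**2) / (2 * ss**2)
                         - (patch - img[i, j])**2 / (2 * sr**2))
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

img = np.tile(np.linspace(0, 255, 8), (8, 1))  # horizontal gradient
sal = np.ones((8, 8))                          # fully salient frame
res = saliency_bilateral(img, sal)
```

Because the smoothed non-salient regions contain less high-frequency detail, a downstream H.264 encoder spends fewer bits there, which is the source of the compression gain reported in the paper.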