Journal Articles
10 articles found
1. Extraction of Geometric Features of Wear Particles in Color Ferrograph Images Based on RGB Color Space (Cited: 1)
Authors: CHEN Gui-ming, WANG Han-gong, ZHANG Bao-jun, PAN Wei (Second Artillery Engineering Institute, Xi'an 710025, P.R. China) 《International Journal of Plant Engineering and Management》 2003, Issue 3, pp. 171-178 (8 pages)
This paper analyzes the potential color formats of ferrograph images and presents algorithms for converting those formats to the RGB (Red, Green, Blue) color space. Through statistical analysis of wear particles' geometric features in color ferrograph images in the RGB color space, we identify the differences in wear particles' geometric features between the RGB color space and grayscale space, and calculate their respective distributions.
Keywords: ferrograph image, RGB image, gray image, geometric feature
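By way of illustration only (this is not the paper's statistical method), the sketch below thresholds each RGB channel and the grayscale version of a ferrograph image with OpenCV and reports simple geometric features, area and perimeter, for the detected particle contours; the file name and threshold value are hypothetical.

```python
import cv2

def particle_features(channel, thresh=120):
    """Threshold one channel and return area/perimeter for each particle contour."""
    _, mask = cv2.threshold(channel, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [{"area": cv2.contourArea(c), "perimeter": cv2.arcLength(c, True)}
            for c in contours]

img = cv2.imread("ferrograph.png")              # hypothetical input (OpenCV loads BGR)
b, g, r = cv2.split(img)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Compare how many particles are found per color channel versus grayscale.
for name, chan in [("R", r), ("G", g), ("B", b), ("gray", gray)]:
    feats = particle_features(chan)
    print(name, "particles found:", len(feats))
```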
2. Tree species classification using deep learning and RGB optical images obtained by an unmanned aerial vehicle (Cited: 7)
Authors: Chen Zhang, Kai Xia, Hailin Feng, Yinhui Yang, Xiaochen Du 《Journal of Forestry Research》 SCIE CAS CSCD 2021, Issue 5, pp. 1879-1888 (10 pages)
The diversity of tree species and the complexity of land use in cities create challenging issues for tree species classification. The combination of deep learning methods and RGB optical images obtained by unmanned aerial vehicles (UAVs) provides a new research direction for urban tree species classification. We proposed an RGB optical image dataset with 10 urban tree species, termed TCC10, which is a benchmark for tree canopy classification (TCC). The TCC10 dataset contains two types of data: tree canopy images with simple backgrounds and those with complex backgrounds. The objective was to examine the possibility of using deep learning methods (AlexNet, VGG-16, and ResNet-50) for individual tree species classification. The results of convolutional neural networks (CNNs) were compared with those of K-nearest neighbor (KNN) and a BP neural network. Our results demonstrated: (1) ResNet-50 achieved an overall accuracy (OA) of 92.6% and a kappa coefficient of 0.91 for tree species classification on TCC10 and outperformed AlexNet and VGG-16; (2) the classification accuracy of KNN and the BP neural network was less than 70%, while the accuracy of the CNNs was considerably higher; (3) the classification accuracy for tree canopy images with complex backgrounds was lower than that for images with simple backgrounds. For the deciduous tree species in TCC10, the classification accuracy of ResNet-50 was higher in summer than in autumn. Therefore, deep learning is effective for urban tree species classification using RGB optical images.
Keywords: Urban forest, Unmanned aerial vehicle (UAV), Convolutional neural network, Tree species classification, RGB optical images
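As a hedged sketch of the transfer-learning setup such a comparison typically uses (not the paper's exact training code), the snippet below fine-tunes an ImageNet-pretrained ResNet-50 for the 10 TCC10 classes with PyTorch; the dataset directory, batch size, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, datasets, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("TCC10/train", transform=tfm)  # hypothetical folder layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)   # 10 tree species in TCC10

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                    # one pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```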
3. Development of image-based wheat spike counter through a Faster R-CNN algorithm and application for genetic studies (Cited: 7)
Authors: Lei Li, Muhammad Adeel Hassan, Shurong Yang, Furong Jing, Mengjiao Yang, Awais Rasheed, Jiankang Wang, Xianchun Xia, Zhonghu He, Yonggui Xiao 《The Crop Journal》 SCIE CSCD 2022, Issue 5, pp. 1303-1311 (9 pages)
Spike number (SN) per unit area is one of the major determinants of grain yield in wheat. Development of high-throughput techniques to count SN from large populations enables rapid and cost-effective selection and facilitates genetic studies. In the present study, we used a deep-learning algorithm, Faster Region-based Convolutional Neural Networks (Faster R-CNN), on Red-Green-Blue (RGB) images to explore the possibility of image-based detection of SN and its application to identify the loci underlying SN. A doubled haploid population of 101 lines derived from the Yangmai 16/Zhongmai 895 cross was grown at two sites for SN phenotyping and genotyped using the high-density wheat 660K SNP array. Analysis of manual spike number (MSN) in the field, image-based spike number (ISN), and verification of spike number (VSN) by Faster R-CNN revealed significant variation (P < 0.001) among genotypes, with high heritability ranging from 0.71 to 0.96. The coefficient of determination (R²) between ISN and VSN was 0.83, higher than that between ISN and MSN (R² = 0.51) and between VSN and MSN (R² = 0.50). Results showed that VSN data can effectively predict wheat spikes with an average accuracy of 86.7% when validated using MSN data. Three QTL, QSnyz.caas-4DS, QSnyz.caas-7DS, and QSnyz.caas-7DL, were identified based on MSN, ISN, and VSN data, while QSnyz.caas-7DS was detected in all three data sets. These results indicate that using the Faster R-CNN model for image-based identification of SN per unit area is a precise and rapid phenotyping method, which can be used for genetic studies of SN in wheat.
Keywords: Deep learning, High-throughput phenotyping, QTL mapping, RGB imaging
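A minimal sketch of how an image-based spike count (ISN) could be obtained from a detector of this kind; the COCO-pretrained weights, image file name, and 0.5 score threshold are placeholders standing in for a Faster R-CNN fine-tuned on wheat spikes.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # would be a spike-trained model in practice
model.eval()

img = convert_image_dtype(read_image("plot_rgb.jpg"), torch.float)  # hypothetical field RGB image
with torch.no_grad():
    pred = model([img])[0]                           # dict with "boxes", "labels", "scores"

spike_count = int((pred["scores"] > 0.5).sum())      # image-based spike number (ISN)
print("predicted spikes:", spike_count)
```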
4. An integrated rice panicle phenotyping method based on X-ray and RGB scanning and deep learning (Cited: 2)
Authors: Lejun Yu, Jiawei Shi, Chenglong Huang, Lingfeng Duan, Di Wu, Debao Fu, Changyin Wu, Lizhong Xiong, Wanneng Yang, Qian Liu 《The Crop Journal》 SCIE CSCD 2021, Issue 1, pp. 42-56 (15 pages)
Rice panicle phenotyping is required in rice breeding for high yield and grain quality. To fully evaluate spikelet and kernel traits without threshing and hulling, we developed an integrated rice panicle phenotyping system based on X-ray and RGB scanning, together with a corresponding image analysis pipeline. We compared five methods of counting spikelets and found that Faster R-CNN achieved high accuracy (R² of 0.99) and speed. Faster R-CNN was also applied to indica and japonica classification and achieved 91% accuracy. The proposed integrated panicle phenotyping method offers benefits for rice functional genetics and breeding.
Keywords: Rice (O. sativa), Panicle traits, RGB imaging, X-ray scanning, Faster R-CNN
5. Encryption and Decryption of Color Images through Random Disruption of Rows and Columns
Authors: ZENG Jianhua, ZHAN Yanlin, YANG Jianrong 《Journal of Donghua University (English Edition)》 EI CAS 2020, Issue 3, pp. 245-255 (11 pages)
In modern society, information is becoming increasingly interconnected through networks, and the rapid development of information technology has led people to pay more attention to the encryption and protection of information. Image encryption is a key technology for ensuring the security of images. We extracted single-channel RGB component images from a color image using MATLAB programs, and encrypted and decrypted the color images by randomly disrupting the rows, columns, and regions of the image. Histograms and visual inspection of the encrypted images show that the information of the original image cannot easily be recovered from the encrypted image. The results show that this color-image encryption algorithm is effective and fast, and therefore has practical value.
Keywords: color image, encryption, decryption, single channel RGB component image, disrupting
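A hedged NumPy sketch (the paper reports a MATLAB implementation) of the row-and-column disruption idea: encrypt by permuting rows and columns with a keyed random permutation, and decrypt by undoing the permutations. The seed and image are placeholders, and the paper's additional region disruption is omitted.

```python
import numpy as np

def encrypt(img, seed):
    """Shuffle rows then columns of an H x W x 3 image; return cipher image and key."""
    rng = np.random.default_rng(seed)
    row_key = rng.permutation(img.shape[0])
    col_key = rng.permutation(img.shape[1])
    cipher = img[row_key][:, col_key]
    return cipher, (row_key, col_key)

def decrypt(cipher, key):
    row_key, col_key = key
    plain = np.empty_like(cipher)
    plain[row_key] = cipher                  # undo the row shuffle
    recovered = np.empty_like(plain)
    recovered[:, col_key] = plain            # undo the column shuffle
    return recovered

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in RGB image
cipher, key = encrypt(img, seed=42)
assert np.array_equal(decrypt(cipher, key), img)                # lossless recovery with the key
```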
6. Convolutional Neural Network-based Leakage Detection of Crude Oil Transmission Pipes (Cited: 2)
Authors: Anqi LI, Dongxu YE, Clarence W. DE SILVA, Max Q.-H. MENG 《Instrumentation》 2019, Issue 4, pp. 85-94 (10 pages)
Due to rapid development in the petroleum industry, leakage detection of crude oil transmission pipes has become an increasingly crucial issue. At present, oil plants in China and abroad mostly rely on manual inspection, which is both inefficient and labor-intensive. This paper proposes a novel convolutional neural network (CNN) architecture for automatic leakage level assessment of crude oil transmission pipes. An experimental setup is developed in which a visible camera and a thermal imaging camera collect image data for analyzing various leakage conditions. Specifically, images are collected from pipes with no leaking and with different leaking states. Apart from images of existing pipelines, images are collected from the experimental setup with different types of joints to simulate real-world leakage conditions. The main contributions of this paper are developing a convolutional neural network to classify the information in red-green-blue (RGB) and thermal images, developing the experimental setup, conducting leakage experiments, and analyzing the data using the developed approach. By combining the two types of images, the proposed method achieves higher classification accuracy than methods that use RGB images or thermal images alone. In particular, compared with the method that uses thermal images only, the accuracy increases from about 91% to over 96%.
Keywords: Pipeline leakage, Convolutional neural network, RGB images, Thermal images, Data fusion
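The sketch below illustrates, under assumed layer sizes and an assumed four leakage levels (neither is from the paper), one generic way to fuse the two modalities: a small convolutional stream per input whose pooled features are concatenated before a classifier.

```python
import torch
import torch.nn as nn

def stream(in_channels):
    # One small convolutional branch that reduces an image to a 32-dim feature vector.
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

class FusionNet(nn.Module):
    def __init__(self, num_classes=4):          # assumed number of leakage levels
        super().__init__()
        self.rgb_stream = stream(3)              # RGB image branch
        self.thermal_stream = stream(1)          # single-channel thermal branch
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, rgb, thermal):
        fused = torch.cat([self.rgb_stream(rgb), self.thermal_stream(thermal)], dim=1)
        return self.classifier(fused)

net = FusionNet()
logits = net(torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128))
print(logits.shape)  # (2, 4): one score per leakage level
```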
7. Retinex based low-light image enhancement using guided filtering and variational framework (Cited: 5)
Authors: 张诗, 唐贵进, 刘小花, 罗苏淮, 王大东 《Optoelectronics Letters》 EI 2018, Issue 2, pp. 156-160 (5 pages)
A new image enhancement algorithm based on Retinex theory is proposed to solve the problem of poor visual quality of images captured in low-light conditions. First, an image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, the illumination is estimated separately by guided filtering and by a variational framework on the V channel, and the two estimates are combined into a new illumination according to average gradient. The new reflectance is calculated from the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast limited adaptive histogram equalization (CLAHE). Finally, the new image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method has better subjective and objective quality than existing methods.
Keywords: RGB, CLAHE, Retinex based low-light image enhancement using guided filtering and variational framework, HSV
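A simplified sketch of this processing chain on the V channel; a Gaussian blur stands in for the guided-filter/variational illumination estimates and a fixed gamma stands in for the illumination adjustment, so it approximates the idea rather than reproducing the paper's algorithm. The file names are hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("low_light.jpg")                        # hypothetical input (BGR)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

v_f = v.astype(np.float32) + 1.0                         # avoid divide-by-zero
illumination = cv2.GaussianBlur(v_f, (0, 0), sigmaX=15)  # crude smooth illumination estimate
reflectance = v_f / illumination                         # Retinex assumption: V = L * R

illum_boosted = 255.0 * (illumination / 255.0) ** 0.6    # brighten the illumination (gamma < 1)
v_new = np.clip(illum_boosted * reflectance, 0, 255).astype(np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
v_new = clahe.apply(v_new)                               # contrast-limited equalization of new V

enhanced = cv2.cvtColor(cv2.merge([h, s, v_new]), cv2.COLOR_HSV2BGR)
cv2.imwrite("enhanced.jpg", enhanced)
```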
8. Exploring 2D projection and 3D spatial information for aircraft 6D pose
Authors: Daoyong FU, Songchen HAN, BinBin LIANG, Xinyang YUAN, Wei LI 《Chinese Journal of Aeronautics》 SCIE EI CAS CSCD 2023, Issue 8, pp. 258-268 (11 pages)
6D pose estimation from a single RGB image is important for the safe take-off and landing of aircraft. Because of the large scene and large depth, existing pose estimation methods perform unsatisfactorily in terms of accuracy. To achieve precise 6D pose estimation of aircraft, an end-to-end method using an RGB image is proposed. In the proposed method, the 2D and 3D information of pre-designed keypoints on the aircraft is used as intermediate supervision, and the 6D pose of the aircraft is derived from this intermediate information. Specifically, an off-the-shelf object detector is used to detect the Region of Interest (RoI) of the aircraft to eliminate background distractions. The 2D projections and 3D spatial positions of the pre-designed keypoints are predicted by a keypoint coordinate estimator (KpNet). The proposed method is trained in an end-to-end fashion. In addition, to address the lack of related datasets, this paper builds an Aircraft 6D Pose dataset for training and testing, which captures the take-off and landing process of three types of aircraft from 11 views. Compared with the latest Wide-Depth-Range method on this dataset, the proposed method improves the average 3D distance of model points metric (ADD) and the 5° and 5 m metric by 86.8% and 30.1%, respectively. Furthermore, the proposed method runs in 9.30 ms, 61.0% faster than YOLO6D at 23.86 ms.
Keywords: 2D and 3D information, 6D pose regression, aircraft 6D pose estimation, End-to-end network, RGB image
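For illustration, one classical way to turn 2D-3D keypoint correspondences like these into a 6D pose is a PnP solver; the keypoint coordinates and camera intrinsics below are made-up placeholders, and the paper's own end-to-end regression may differ from this closing step.

```python
import cv2
import numpy as np

object_points = np.array([                 # assumed 3D keypoints on the aircraft model (metres)
    [0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 2.0],
    [2.5, 1.5, 0.0], [2.5, 0.0, 1.0],
], dtype=np.float64)
image_points = np.array([                  # assumed predicted 2D projections (pixels)
    [320.0, 240.0], [420.0, 238.0], [318.0, 180.0], [322.0, 200.0],
    [370.0, 210.0], [371.0, 222.0],
], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],         # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix + translation vector = 6D pose
print(ok, "\n", R, "\n", tvec.ravel())
```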
9. VNLSTM-PoseNet: A novel deep ConvNet for real-time 6-DOF camera relocalization in urban streets (Cited: 2)
Authors: Ming Li, Jiangying Qin, Deren Li, Ruizhi Chen, Xuan Liao, Bingxuan Guo 《Geo-Spatial Information Science》 SCIE EI CSCD 2021, Issue 3, pp. 422-437 (16 pages)
Image-based relocalization in outdoor environments has attracted renewed interest because it is an important problem with many applications. PoseNet was the first to use a Convolutional Neural Network (CNN) to realize real-time camera pose estimation from a single image. To address the precision and robustness problems of PoseNet and its improved variants in complex environments, this paper proposes and implements a new visual relocalization method based on deep convolutional neural networks (VNLSTM-PoseNet). First, the method directly resizes the input image without cropping to increase the receptive field of the training images. Then, the images and the corresponding pose labels are fed into the improved Long Short-Term Memory based (LSTM-based) PoseNet network for training, and the network is optimized by the Nadam optimizer. Finally, the trained network is used for image localization to obtain the camera pose. Experimental results on outdoor public datasets show that VNLSTM-PoseNet leads to drastic improvements in relocalization performance compared to existing state-of-the-art CNN-based methods.
Keywords: Camera relocalization, pose regression, deep convnet, RGB image, camera pose
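The following is a loose, hedged sketch of the LSTM-based pose-regression idea, not the paper's architecture (backbone choice, feature handling, and dimensions are assumptions): a CNN feature map is read as a sequence by an LSTM, and two heads regress the 3-D position and the orientation quaternion, optimized with PyTorch's NAdam.

```python
import torch
import torch.nn as nn
from torchvision import models

class LSTMPoseNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = models.resnet34(weights=None)                    # assumed backbone
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # keep the conv feature map
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.fc_xyz = nn.Linear(hidden, 3)                           # camera position
        self.fc_quat = nn.Linear(hidden, 4)                          # camera orientation (quaternion)

    def forward(self, img):
        fmap = self.cnn(img)                                 # (B, 512, H', W')
        seq = fmap.flatten(2).permute(0, 2, 1)               # spatial positions as an LSTM sequence
        out, _ = self.lstm(seq)
        out = out[:, -1]                                     # last hidden state
        return self.fc_xyz(out), self.fc_quat(out)

net = LSTMPoseNet()
optimizer = torch.optim.NAdam(net.parameters(), lr=1e-4)     # Nadam optimizer, as in the abstract
xyz, quat = net(torch.randn(2, 3, 256, 256))
print(xyz.shape, quat.shape)                                 # torch.Size([2, 3]) torch.Size([2, 4])
```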
10. 3D Vehicle Detection Based on LiDAR and Camera Fusion (Cited: 2)
Authors: Yingfeng Cai, Tiantian Zhang, Hai Wang, Yicheng Li, Qingchao Liu, Xiaobo Chen 《Automotive Innovation》 EI CSCD 2019, Issue 4, pp. 276-283 (8 pages)
Deep learning for object detection has become increasingly popular and is widely adopted in many fields. This paper focuses on LiDAR and camera sensor fusion for vehicle detection with high detection accuracy. The proposed network architecture takes full advantage of the deep information in both the LiDAR point cloud and the RGB image for object detection. First, the LiDAR point cloud and RGB image are fed into the system. Then a high-resolution feature map is used to generate reliable 3D object proposals from both the LiDAR point cloud and the RGB image. Finally, 3D box regression is performed to predict the extent and orientation of vehicles in 3D space. Experiments on the challenging KITTI benchmark show that the proposed approach obtains good detection results, with a detection time of about 0.12 s per frame. This approach could establish a basis for further research in autonomous vehicles.
Keywords: Vehicle detection, LiDAR point cloud, RGB image, fusion