Abstract: This paper analyzes the potential color formats of ferrograph images and presents algorithms for converting these formats to the RGB (Red, Green, Blue) color space. Through statistical analysis of wear particles' geometric features in color ferrograph images in the RGB color space, we identify the differences in the wear particles' geometric features between the RGB color space and the gray-scale space, and calculate their respective distributions.
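The abstract does not state which source formats are handled, so as an illustration only, the sketch below converts BT.601 YCbCr pixels to RGB, the kind of format-to-RGB conversion the paper describes; the function name and test values are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): convert BT.601 YCbCr to RGB.
import numpy as np

def ycbcr_to_rgb(ycbcr: np.ndarray) -> np.ndarray:
    """ycbcr: (..., 3) array, Y in [16, 235], Cb/Cr in [16, 240] (BT.601 'studio' range)."""
    y = 1.164 * (ycbcr[..., 0] - 16.0)
    cb = ycbcr[..., 1] - 128.0
    cr = ycbcr[..., 2] - 128.0
    r = y + 1.596 * cr
    g = y - 0.392 * cb - 0.813 * cr
    b = y + 2.017 * cb
    rgb = np.stack([r, g, b], axis=-1)
    return np.clip(np.rint(rgb), 0, 255).astype(np.uint8)

print(ycbcr_to_rgb(np.array([[235.0, 128.0, 128.0]])))  # white -> [[255 255 255]]
```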
Funding: supported by the Joint Fund of the Natural Science Foundation of Zhejiang-Qingshanhu Science and Technology City (Grant No. LQY18C160002), the National Natural Science Foundation of China (Grant No. U1809208), the Zhejiang Science and Technology Key R&D Program Funded Project (Grant No. 2018C02013), and the Natural Science Foundation of Zhejiang Province (Grant No. LQ20F020005).
Abstract: The diversity of tree species and the complexity of land use in cities create challenging issues for tree species classification. The combination of deep learning methods and RGB optical images obtained by unmanned aerial vehicles (UAVs) provides a new research direction for urban tree species classification. We propose an RGB optical image dataset with 10 urban tree species, termed TCC10, as a benchmark for tree canopy classification (TCC). The TCC10 dataset contains two types of data: tree canopy images with simple backgrounds and those with complex backgrounds. The objective was to examine the possibility of using deep learning methods (AlexNet, VGG-16, and ResNet-50) for individual tree species classification. The results of convolutional neural networks (CNNs) were compared with those of K-nearest neighbor (KNN) and a BP neural network. Our results demonstrated: (1) ResNet-50 achieved an overall accuracy (OA) of 92.6% and a kappa coefficient of 0.91 for tree species classification on TCC10 and outperformed AlexNet and VGG-16. (2) The classification accuracy of KNN and the BP neural network was less than 70%, while the accuracy of the CNNs was markedly higher. (3) The classification accuracy for tree canopy images with complex backgrounds was lower than that for images with simple backgrounds. For the deciduous tree species in TCC10, the classification accuracy of ResNet-50 was higher in summer than in autumn. Therefore, deep learning is effective for urban tree species classification using RGB optical images.
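As a rough illustration of the CNN approach described above, the sketch below fine-tunes a torchvision ResNet-50 (pretrained on ImageNet) for a 10-class tree canopy task; the `tcc10/train` folder layout, batch size, and learning rate are assumptions, not details from the paper.

```python
# Hypothetical sketch: fine-tune ResNet-50 for 10 urban tree species.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Basic preprocessing for RGB canopy crops (ImageNet normalization constants).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# "tcc10/train" is a placeholder path; the real TCC10 layout is not specified here.
train_set = datasets.ImageFolder("tcc10/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 tree species

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```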
Funding: funded by the National Natural Science Foundation of China (31671691, 3171101265, and 31961143007), the National Key Research and Development Program of China (2016YFD0101804), and the Fundamental Research Funds for the Institute Planning in Chinese Academy of Agricultural Sciences (S2018QY02).
Abstract: Spike number (SN) per unit area is one of the major determinants of grain yield in wheat. Development of high-throughput techniques to count SN from large populations enables rapid and cost-effective selection and facilitates genetic studies. In the present study, we used a deep-learning algorithm, Faster Region-based Convolutional Neural Networks (Faster R-CNN), on Red-Green-Blue (RGB) images to explore the possibility of image-based detection of SN and its application to identifying the loci underlying SN. A doubled haploid population of 101 lines derived from the Yangmai 16/Zhongmai 895 cross was grown at two sites for SN phenotyping and genotyped using the high-density wheat 660K SNP array. Analysis of manual spike number (MSN) in the field, image-based spike number (ISN), and verification of spike number (VSN) by Faster R-CNN revealed significant variation (P < 0.001) among genotypes, with high heritability ranging from 0.71 to 0.96. The coefficient of determination (R²) between ISN and VSN was 0.83, higher than that between ISN and MSN (R² = 0.51) and between VSN and MSN (R² = 0.50). The results showed that VSN data can effectively predict wheat spikes with an average accuracy of 86.7% when validated against MSN data. Three QTL, Qsnyz.caas-4DS, Qsnyz.caas-7DS, and QSnyz.caas-7DL, were identified based on the MSN, ISN, and VSN data, and QSnyz.caas-7DS was detected in all three data sets. These results indicate that using a Faster R-CNN model for image-based identification of SN per unit area is a precise and rapid phenotyping method that can be used for genetic studies of SN in wheat.
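To make the image-based counting step concrete, here is a minimal sketch of obtaining an image-based spike number (ISN) from a Faster R-CNN detector in torchvision, assuming a model already fine-tuned on spike annotations; the checkpoint name, image path, and 0.5 score threshold are placeholders.

```python
# Hypothetical sketch: count wheat spikes in an RGB plot image with Faster R-CNN.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Two classes: background + spike; the checkpoint is a placeholder.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.load_state_dict(torch.load("spike_frcnn.pth"))
model.eval()

img = to_tensor(Image.open("plot_0001.jpg").convert("RGB"))
with torch.no_grad():
    pred = model([img])[0]

# Image-based spike number (ISN): detections above a confidence threshold.
isn = int((pred["scores"] > 0.5).sum())
print("spikes detected:", isn)
```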
Funding: supported by the National Key Research and Development Program of China (2016YFD0100101-18), the National Natural Science Foundation of China (31770397, 31701317), and the Fundamental Research Funds for the Central Universities (2662017PY058).
Abstract: Rice panicle phenotyping is required in rice breeding for high yield and grain quality. To fully evaluate spikelet and kernel traits without threshing and hulling, we developed an integrated rice panicle phenotyping system based on X-ray and RGB scanning, together with a corresponding image analysis pipeline. We compared five methods of counting spikelets and found that Faster R-CNN achieved high accuracy (R² of 0.99) and speed. Faster R-CNN was also applied to indica and japonica classification and achieved 91% accuracy. The proposed integrated panicle phenotyping method offers benefits for rice functional genetics and breeding.
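The R² values above express agreement between image-based and manual counts; a minimal sketch of that calculation, using the standard residual-based definition of the coefficient of determination and illustrative count data, is shown below.

```python
# Hypothetical sketch: agreement between detector counts and manual counts,
# reported as the coefficient of determination R^2; the count arrays are illustrative.
import numpy as np

manual = np.array([152, 138, 171, 160, 145], dtype=float)      # hand counts
predicted = np.array([149, 141, 168, 158, 150], dtype=float)   # detector counts

ss_res = np.sum((manual - predicted) ** 2)          # residual sum of squares
ss_tot = np.sum((manual - manual.mean()) ** 2)      # total sum of squares
r2 = 1.0 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```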
Funding: National Natural Science Foundation of China (No. 11865013), Horizontal Project of Shangrao Normal University, China (No. K8000219T), Industrial Science and Technology Project in Shangrao of Jiangxi Province, China (No. 17A005), and Doctoral Scientific Research Foundation of Shangrao Normal University, China (No. 6000108).
Abstract: In modern society, information is becoming increasingly interconnected through networks, and the rapid development of information technology has led people to pay more attention to the encryption and protection of information. Image encryption is a key technology for ensuring image security. We extracted the single-channel RGB component images from a color image using MATLAB programs, then encrypted and decrypted the color images by randomly scrambling the rows, columns, and regions of the image. Histogram analysis combined with visual inspection of the encrypted images shows that the information of the original image cannot easily be recovered from the encrypted image. The results show that the color-image encryption algorithm we used is effective and fast, and thus has practical value.
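The paper implements the scrambling in MATLAB; the sketch below is a NumPy analogue of the row/column permutation idea, with the seed standing in for the key and region scrambling omitted, so it illustrates the principle rather than reproducing the paper's program.

```python
# Hypothetical NumPy sketch of row/column permutation encryption for an RGB image:
# a seeded PRNG generates the row and column shuffles, and the same seed inverts them.
import numpy as np

def permute_encrypt(img: np.ndarray, seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)
    h, w, _ = img.shape
    rows, cols = rng.permutation(h), rng.permutation(w)
    return img[rows][:, cols]

def permute_decrypt(enc: np.ndarray, seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)
    h, w, _ = enc.shape
    rows, cols = rng.permutation(h), rng.permutation(w)
    dec = np.empty_like(enc)
    dec[rows[:, None], cols[None, :]] = enc       # invert the permutation
    return dec

# Round trip on a random stand-in "image".
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
assert np.array_equal(permute_decrypt(permute_encrypt(img, 42), 42), img)
```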
Abstract: Due to the rapid development of the petroleum industry, leakage detection for crude oil transmission pipes has become an increasingly crucial issue. At present, oil plants in China and abroad mostly rely on manual inspection, which is not only inefficient but also labor-intensive. The present paper proposes a novel convolutional neural network (CNN) architecture for automatic leakage-level assessment of crude oil transmission pipes. An experimental setup is developed in which a visible camera and a thermal imaging camera collect image data for analyzing various leakage conditions. Specifically, images are collected from various pipes with no leaking and with different leaking states. Apart from images of existing pipelines, images are collected from the experimental setup with different types of joints to simulate real-world leakage conditions. The main contributions of the present paper are: developing a convolutional neural network to classify the information in red-green-blue (RGB) and thermal images, developing the experimental setup, conducting leakage experiments, and analyzing the data using the developed approach. By combining the two types of images, the proposed method achieves a higher classification accuracy than methods that use RGB images or thermal images alone. In particular, compared with the method that uses thermal images only, the accuracy increases from about 91% to over 96%.
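The architecture details are not given in the abstract, so the following is only a schematic two-branch CNN that concatenates features from an RGB image and a single-channel thermal image before classification; layer sizes and the four leakage classes are illustrative assumptions.

```python
# Hypothetical sketch of a two-branch CNN fusing RGB and thermal inputs for
# leakage-level classification; not the architecture from the paper.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
        self.rgb_branch = branch(3)       # visible-camera image
        self.thermal_branch = branch(1)   # single-channel thermal image
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, rgb, thermal):
        f = torch.cat([self.rgb_branch(rgb).flatten(1),
                       self.thermal_branch(thermal).flatten(1)], dim=1)
        return self.classifier(f)

logits = FusionNet()(torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```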
Funding: supported by the China Scholarship Council, the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX17_0776), and the Natural Science Foundation of NUPT (No. NY214039).
Abstract: A new image enhancement algorithm based on Retinex theory is proposed to address the poor visual quality of images captured in low-light conditions. First, an image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, the illumination is estimated on the V channel by guided filtering and by a variational framework, and the two estimates are combined into a new illumination according to average gradient. The new reflectance is calculated from the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast-limited adaptive histogram equalization (CLAHE). Finally, the new image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method achieves better subjective and objective quality than existing methods.
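A simplified OpenCV sketch of this pipeline is given below; for brevity a Gaussian blur stands in for the guided-filter/variational illumination estimation, and the gamma adjustment of the illumination is an added illustrative step, not part of the paper.

```python
# Simplified sketch: HSV conversion, smoothed V channel as illumination estimate,
# reflectance by division (V = L * R), CLAHE on the recombined V, and back to RGB.
import cv2
import numpy as np

img = cv2.imread("low_light.jpg")                     # BGR, placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

v_f = v.astype(np.float32) + 1e-3
illumination = cv2.GaussianBlur(v_f, (31, 31), 0)     # coarse illumination estimate
reflectance = v_f / illumination                       # Retinex reflectance

# Illustrative gamma brightening of the illumination (not from the paper).
illumination_adj = np.power(illumination / 255.0, 0.6) * 255.0
v_new = np.clip(illumination_adj * reflectance, 0, 255).astype(np.uint8)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
v_new = clahe.apply(v_new)

enhanced = cv2.cvtColor(cv2.merge([h, s, v_new]), cv2.COLOR_HSV2BGR)
cv2.imwrite("enhanced.jpg", enhanced)
```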
Funding: co-supported by the Key Research and Development Plan Project of Sichuan Province, China (No. 2022YFG0153).
Abstract: 6D pose estimation from a single RGB image is important for the safe take-off and landing of aircraft. Because of the large scene and large depth involved, existing pose estimation methods achieve unsatisfactory accuracy. To achieve precise 6D pose estimation of the aircraft, an end-to-end method using an RGB image is proposed. In the proposed method, the 2D and 3D information of the aircraft keypoints is used as intermediate supervision, and the 6D pose of the aircraft is recovered from this intermediate information. Specifically, an off-the-shelf object detector is utilized to detect the Region of Interest (RoI) of the aircraft to eliminate background distractions. The 2D projections and 3D spatial information of the pre-designed aircraft keypoints are predicted by the keypoint coordinate estimator (KpNet). The proposed method is trained in an end-to-end fashion. In addition, to address the lack of related datasets, this paper builds the Aircraft 6D Pose dataset for training and testing, which captures the take-off and landing process of three types of aircraft from 11 views. Compared with the latest Wide-Depth-Range method on this dataset, the proposed method improves the average 3D distance of model points metric (ADD) and the 5° and 5 m metric by 86.8% and 30.1%, respectively. Furthermore, the proposed method runs in 9.30 ms, 61.0% faster than YOLO6D at 23.86 ms.
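For reference, the ADD metric mentioned above averages the distances between model points transformed by the ground-truth pose and by the estimated pose; a minimal sketch with placeholder poses and points follows.

```python
# Hypothetical sketch of the ADD metric (average 3D distance of model points).
import numpy as np

def add_metric(points, R_gt, t_gt, R_pred, t_pred):
    """points: (N, 3) model points; R: (3, 3) rotation; t: (3,) translation."""
    gt = points @ R_gt.T + t_gt
    pred = points @ R_pred.T + t_pred
    return np.linalg.norm(gt - pred, axis=1).mean()

pts = np.random.rand(1000, 3)          # stand-in model points
R = np.eye(3)
print(add_metric(pts, R, np.zeros(3), R, np.array([0.05, 0.0, 0.0])))  # 0.05
```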
Funding: This work is supported by the National Key R&D Program of China [grant number 2018YFB0505400], the National Natural Science Foundation of China (NSFC) [grant number 41901407], the LIESMARS Special Research Funding [grant number 2021], and the College Students' Innovative Entrepreneurial Training Plan Program [grant number S2020634016].
Abstract: Image-based relocalization has attracted renewed interest in outdoor environments because it is an important problem with many applications. PoseNet was the first to introduce a convolutional neural network (CNN) for real-time camera pose estimation from a single image. To address the precision and robustness limitations of PoseNet and its improved variants in complex environments, this paper proposes and implements a new visual relocalization method based on deep convolutional neural networks (VNLSTM-PoseNet). First, the method directly resizes the input image without cropping to increase the receptive field of the training image. Then, the image and the corresponding pose labels are fed into an improved Long Short-Term Memory based (LSTM-based) PoseNet network for training, and the network is optimized with the Nadam optimizer. Finally, the trained network is used for image localization to obtain the camera pose. Experimental results on outdoor public datasets show that VNLSTM-PoseNet leads to substantial improvements in relocalization performance compared with existing state-of-the-art CNN-based methods.
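A schematic PyTorch sketch of an LSTM-based pose regression network trained with the NAdam optimizer is shown below; the ResNet-18 backbone, feature sizes, and output parameterization (translation plus quaternion) are assumptions for illustration, not the exact VNLSTM-PoseNet architecture.

```python
# Hypothetical sketch: CNN features -> spatial sequence -> LSTM -> pose regression,
# trained with NAdam; layer choices are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

class LSTMPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, H', W')
        self.lstm = nn.LSTM(input_size=512, hidden_size=256, batch_first=True)
        self.fc_xyz = nn.Linear(256, 3)   # translation
        self.fc_q = nn.Linear(256, 4)     # rotation quaternion

    def forward(self, x):
        f = self.features(x)                 # (B, 512, H', W')
        seq = f.flatten(2).permute(0, 2, 1)  # (B, H'*W', 512) spatial sequence
        _, (h, _) = self.lstm(seq)           # final hidden state
        h = h[-1]
        return self.fc_xyz(h), self.fc_q(h)

model = LSTMPoseNet()
optimizer = torch.optim.NAdam(model.parameters(), lr=1e-4)
xyz, q = model(torch.randn(2, 3, 224, 224))
print(xyz.shape, q.shape)   # torch.Size([2, 3]) torch.Size([2, 4])
```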
Funding: This work was supported by the National Key Research and Development Program of China (2017YFB0102603, 2018YFB0105003), the National Natural Science Foundation of China (51875255, 61601203, 61773184, U1564201, U1664258, U1764257, U1762264), the Natural Science Foundation of Jiangsu Province (BK20180100), the Six Talent Peaks Project of Jiangsu Province (2018-TD-GDZB-022), the Key Project for the Development of Strategic Emerging Industries of Jiangsu Province (2016-1094), and the Key Research and Development Program of Zhenjiang City (GY2017006).
Abstract: Deep learning for object detection has become increasingly popular and is widely adopted in many fields. This paper focuses on LiDAR and camera sensor fusion for vehicle detection with very high detection accuracy. The proposed network architecture takes full advantage of the deep information of both the LiDAR point cloud and the RGB image for object detection. First, the LiDAR point cloud and RGB image are fed into the system. Then a high-resolution feature map is used to generate reliable 3D object proposals from both the LiDAR point cloud and the RGB image. Finally, 3D box regression is performed to predict the extent and orientation of vehicles in 3D space. Experiments on the challenging KITTI benchmark show that the proposed approach obtains strong detection results, with a detection time of about 0.12 s per frame. This approach could establish a basis for further research in autonomous vehicles.
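One common ingredient of such LiDAR-camera fusion is projecting LiDAR points into the RGB image so point-cloud and image features can be associated; the sketch below does this with KITTI-style calibration matrices, where the matrix values are placeholders rather than real calibration data.

```python
# Hypothetical sketch: project LiDAR points into the camera image with
# KITTI-style calibration (Tr_velo_to_cam, R0_rect, P2); values are placeholders.
import numpy as np

def project_lidar_to_image(points, Tr_velo_to_cam, R0_rect, P2):
    """points: (N, 3) LiDAR xyz -> (M, 2) pixel coordinates (points behind camera dropped)."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])   # homogeneous LiDAR points
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)                   # rectified camera frame
    cam = np.vstack([cam, np.ones((1, cam.shape[1]))])
    img = P2 @ cam                                               # (3, N) projective coords
    front = img[2] > 0                                           # keep points in front of camera
    uv = (img[:2, front] / img[2, front]).T
    return uv

# Placeholder calibration: identity-like transforms and a simple pinhole projection.
Tr = np.hstack([np.eye(3), np.zeros((3, 1))])
R0 = np.eye(3)
P2 = np.array([[700.0, 0, 600, 0], [0, 700.0, 180, 0], [0, 0, 1, 0]])
print(project_lidar_to_image(np.array([[10.0, 1.0, 0.5]]), Tr, R0, P2))
```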