Funding: Supported by the Fundamental Research Funds for Central Universities of the Civil Aviation University of China (No. 3122021088).
Abstract: The airport apron scene contains rich contextual information about spatial position relationships. Traditional object detectors consider only visual appearance and ignore this contextual information. In addition, the detection accuracy for some categories in the apron dataset is low. Therefore, an improved object detection method using spatial-aware features in apron scenes, called SA-FRCNN, is presented. The method uses graph convolutional networks to capture the relative spatial relationships between objects in the apron scene, incorporating this spatial context into feature learning. Moreover, an attention mechanism is introduced into the feature extraction process to focus on spatial positions and key features, and a distance-IoU loss is used to achieve more accurate regression. Experimental results show that the mean average precision of apron object detection based on SA-FRCNN reaches 95.75%, and the detection of some hard-to-detect categories is significantly improved. The proposed method effectively improves detection accuracy on the apron dataset and holds a clear advantage over other methods.
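For illustration, the distance-IoU (DIoU) loss mentioned above adds to the standard IoU term a penalty on the normalized distance between the predicted and ground-truth box centers. Below is a minimal PyTorch sketch; the (x1, y1, x2, y2) box encoding and the mean reduction are assumptions for illustration, not details taken from the paper.

```python
# Minimal DIoU loss sketch (assumed box format: (x1, y1, x2, y2) per row).
import torch

def diou_loss(pred, target, eps=1e-7):
    """DIoU loss: 1 - IoU + (center distance)^2 / (enclosing diagonal)^2."""
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Squared distance between box centers
    cx_p = (pred[:, 0] + pred[:, 2]) / 2
    cy_p = (pred[:, 1] + pred[:, 3]) / 2
    cx_t = (target[:, 0] + target[:, 2]) / 2
    cy_t = (target[:, 1] + target[:, 3]) / 2
    center_dist = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2

    # Squared diagonal of the smallest box enclosing both boxes
    ex1 = torch.min(pred[:, 0], target[:, 0])
    ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2])
    ey2 = torch.max(pred[:, 3], target[:, 3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps

    return (1 - iou + center_dist / diag).mean()
```

Normalizing the center distance by the enclosing-box diagonal keeps the penalty scale-invariant, which is what makes DIoU converge faster than plain IoU loss when boxes do not overlap.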
Funding: National Natural Science Foundation Grant No. 60072029.
Abstract: A new real-time algorithm is proposed in this paper for detecting moving objects in color image sequences taken from stationary cameras. The algorithm combines a temporal difference with an adaptive background subtraction, where the combination itself is novel. When changes occur, the background is automatically adapted to suit the new conditions. For the background, a new model is proposed in which each frame is decomposed into regions; the model is based not only on single pixels but also on the characteristics of a region. The hybrid representation includes a model for single-pixel information and a model for the pixel's neighboring-area information. This new background model both improves segmentation accuracy, because spatial information is taken into account, and significantly speeds up processing, because only a portion of the neighboring pixels needs to be selected for modeling. The algorithm was successfully used in a video surveillance system, and experimental results show it obtains a clearer foreground than the single-frame difference or background subtraction method alone.
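As a rough illustration of the combination described above, here is a simplified pixel-wise sketch in Python with OpenCV and NumPy. The learning rate alpha, both thresholds, and the OR-fusion of the two masks are illustrative assumptions; the paper's region-based hybrid background model is not reproduced here.

```python
# Sketch: temporal difference combined with an adaptive (running-average)
# background model. Parameters are illustrative, not from the paper.
import cv2
import numpy as np

def detect_moving(frames, alpha=0.02, diff_thresh=25, bg_thresh=30):
    """Yield a binary foreground mask for each frame after the first."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY).astype(np.float32)
    background = prev.copy()
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        # Temporal difference: change between consecutive frames
        temporal = np.abs(gray - prev) > diff_thresh
        # Background subtraction against the adaptive model
        subtraction = np.abs(gray - background) > bg_thresh
        mask = (temporal | subtraction).astype(np.uint8) * 255
        # Adapt the background only where no motion was detected,
        # so moving objects are not absorbed into the model
        background = np.where(mask == 0,
                              (1 - alpha) * background + alpha * gray,
                              background)
        prev = gray
        yield mask
```

Updating the background only in non-foreground pixels is one common way to let the model track gradual illumination changes without "learning" the moving objects themselves.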
Funding: Supported by the National Natural Science Foundation of China (No. 60634030 and No. 60372085).
Abstract: This paper presents a video context enhancement method for night surveillance. The basic idea is to extract and fuse the meaningful information of video sequences captured from a fixed camera under different illuminations. A unique characteristic of the algorithm is that it separates the image context into two classes and estimates them in different ways. One class contains the basic surrounding-scene information and the scene model, which are obtained via background modeling and object tracking on the daytime video sequence. The other class is extracted from the nighttime video and includes frequently moving regions, high-illumination regions, and high-gradient regions. The scene model and a pixel-wise difference method are used to segment these three regions. A shift-invariant discrete-wavelet-based image fusion technique is used to integrate all of this context information into the final result. Experimental results demonstrate that the proposed approach provides many more details and much more meaningful information for nighttime video.
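The shift-invariant fusion step can be sketched with the stationary wavelet transform (SWT) from PyWavelets, which avoids the shift sensitivity of the decimated DWT. The 'db2' wavelet, the two decomposition levels, and the average/max-absolute fusion rules below are assumptions for illustration, not the paper's exact rules.

```python
# Sketch: shift-invariant wavelet fusion of two registered grayscale images
# (e.g., a daytime scene model and a nighttime frame) via the SWT.
import numpy as np
import pywt

def fuse_swt(day_img, night_img, wavelet="db2", level=2):
    """Fuse two equal-size grayscale images; each dimension must be
    divisible by 2**level for the SWT."""
    coeffs_a = pywt.swt2(day_img.astype(np.float32), wavelet, level=level)
    coeffs_b = pywt.swt2(night_img.astype(np.float32), wavelet, level=level)
    fused = []
    for (a_lo, (a_h, a_v, a_d)), (b_lo, (b_h, b_v, b_d)) in zip(coeffs_a,
                                                                coeffs_b):
        # Average the approximation bands, keep the stronger detail
        # coefficient from either source at each position
        approx = (a_lo + b_lo) / 2.0
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                        for x, y in ((a_h, b_h), (a_v, b_v), (a_d, b_d)))
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)
```

Because the SWT is undecimated, small misalignments between the day and night inputs do not produce the ringing artifacts that a decimated transform would, which is presumably why a shift-invariant transform was chosen for this task.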