Video salient object detection (VSOD) aims at locating the most attractive objects in a video by exploring spatial and temporal features. VSOD is a challenging task in computer vision, as it involves processing complex spatial data that is also influenced by temporal dynamics. Despite the progress made by existing VSOD models, they still struggle in scenes with great background diversity within and between frames. They also suffer from accumulated noise and high time consumption when extracting temporal features over a long duration. We propose a multi-stream temporal enhanced network (MSTENet) to address these problems. It investigates saliency cue collaboration in the spatial domain with a multi-stream structure to deal with the background diversity challenge. A straightforward yet efficient approach to temporal feature extraction is developed to avoid accumulated noise and reduce time consumption. MSTENet is distinguished from other VSOD methods by its use of both foreground and background supervision, which facilitates the extraction of collaborative saliency cues. Another notable difference is the integration of spatial and temporal features: the temporal module is embedded in the multi-stream structure, enabling comprehensive spatial-temporal interactions within an end-to-end framework. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on five benchmark datasets while maintaining a real-time speed of 27 fps (Titan XP). Our code and models are available at https://github.com/RuJiaLe/MSTENet.
What causes object detection in video to be less accurate than in still images? Some video frames are degraded in appearance by fast movement, out-of-focus camera shots, and changes in posture. These issues have made video object detection (VID) a growing area of research in recent years. Video object detection can be used for various healthcare applications, such as detecting and tracking tumors in medical imaging, monitoring the movement of patients in hospitals and long-term care facilities, and analyzing videos of surgeries to improve technique and training. Additionally, it can be used in telemedicine to help diagnose and monitor patients remotely. Existing VID techniques are based on recurrent neural networks or optical flow for feature aggregation to produce reliable features for detection. Some of those methods aggregate features at the full-sequence level or from nearby frames. To create feature maps, existing VID techniques frequently use convolutional neural networks (CNNs) as the backbone network. Vision Transformers, on the other hand, have outperformed CNNs in various vision tasks, including object detection in still images and image classification. In this research, we propose to use Swin-Transformer, a state-of-the-art Vision Transformer, as an alternative to CNN-based backbone networks for object detection in videos. The proposed architecture enhances the accuracy of existing VID methods. The ImageNet VID and EPIC KITCHENS datasets are used to evaluate the suggested methodology. We demonstrate that our proposed method is efficient, achieving 84.3% mean average precision (mAP) on ImageNet VID while using less memory than other leading VID techniques. The source code is available at https://github.com/amaharek/SwinVid.
Object detection plays a vital role in video surveillance systems. To enhance security, surveillance cameras are now installed in public areas such as traffic signals, roadways, retail malls, train stations, and banks. However, monitoring the video continually at a quick pace is a challenging job, so security cameras are of little use without human monitoring. The primary difficulty in video surveillance is identifying abnormalities such as thefts, accidents, crimes, or other unlawful actions, and such anomalous actions occur at a much lower rate than usual occurrences. To detect an object in a video, we first analyze the images pixel by pixel. In digital image processing, segmentation is the process of partitioning an image into its individual parts at the pixel level. The performance of segmentation is affected by irregular and/or low illumination, and these factors strongly affect real-time object detection in a video surveillance system. In this paper, a modified ResNet model (M-ResNet) is proposed to enhance images affected by insufficient light. Experimental results comparing existing methods with the modified ResNet architecture show a considerable improvement in detecting objects in the video stream. The proposed model yields better results in metrics such as precision, recall, and pixel accuracy, and achieves a reasonable improvement in object detection.
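The abstract does not detail M-ResNet's enhancement stage, but the effect of correcting insufficiently lit frames before detection can be illustrated with a standard technique. The sketch below uses plain gamma correction in NumPy; it is a generic stand-in, not the paper's method, and the function name and gamma value are illustrative assumptions.

```python
import numpy as np

def gamma_correct(image, gamma=0.5):
    """Brighten a low-light frame with gamma correction (gamma < 1 brightens).

    `image` is a uint8 array in [0, 255]; the mapping is applied per pixel.
    """
    norm = image.astype(np.float64) / 255.0
    return np.clip((norm ** gamma) * 255.0, 0, 255).astype(np.uint8)

# A synthetic under-exposed frame: all intensities in the dark range [0, 63].
dark = np.arange(64, dtype=np.uint8).reshape(8, 8)
bright = gamma_correct(dark, gamma=0.5)
print(bright.mean() > dark.mean())  # True: brightening raises the mean intensity
```

Gamma correction is only one possible pre-processing step; a learned enhancement model such as the paper's would replace this fixed mapping.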
Currently, worldwide industries and communities are concerned with building, expanding, and exploring the assets and resources found in the oceans and seas. More precisely, for stock assessment, archaeology, and surveillance, several cameras are installed undersea to collect videos. However, these large videos require a lot of time and memory to process in order to extract relevant information, so an accurate and efficient automated system is greatly needed to replace manual video assessment. From this perspective, we present a complete framework for video summarization and object detection in underwater videos. We employ a perceived motion energy (PME) method to first extract keyframes, followed by an object detection model, namely YOLOv3, to perform object detection in underwater videos. The issues of blurriness and low contrast in underwater images are also taken into account by applying an image enhancement method. The suggested framework has been evaluated on the publicly available Brackish dataset. The proposed framework shows good performance and can thus assist marine researchers and scientists working in underwater archaeology, stock assessment, and surveillance.
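The perceived-motion-energy idea of picking keyframes where motion peaks can be sketched as follows. This is a simplified, frame-difference-only approximation (the actual PME method also analyzes motion vectors); the T×H×W grayscale array layout and the function names are assumptions.

```python
import numpy as np

def motion_energy(frames):
    """Per-frame motion energy: mean absolute difference to the previous frame."""
    frames = frames.astype(np.float64)
    diffs = np.abs(frames[1:] - frames[:-1])
    energy = diffs.mean(axis=(1, 2))
    return np.concatenate([[0.0], energy])  # the first frame has no predecessor

def select_keyframes(frames, k=2):
    """Pick the k frames with the highest motion energy."""
    e = motion_energy(frames)
    return sorted(np.argsort(e)[-k:].tolist())

# Synthetic sequence: static frames, with an abrupt change at frames 3 and 4.
video = np.zeros((6, 16, 16))
video[3] = 100.0
video[4] = 0.0
print(select_keyframes(video, k=2))  # [3, 4] — the frames around the change
```

In the full pipeline, the selected keyframes would then be passed to the detector instead of every frame, which is what makes the summarization step save time.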
In the environment of smart examination rooms, it is important to quickly and accurately detect abnormal behavior (human standing) for the construction of a smart campus. Based on deep learning, we propose an intelligent standing human detection (ISHD) method based on an improved single shot multibox detector to detect the target of standing human posture in the scene frame of exam room video surveillance at a specific examination stage. ISHD combines the MobileNet network in a single shot multibox detector network, improves the posture feature extractor of a standing person, merges prior knowledge, and introduces transfer learning in the training strategy, which greatly reduces the computation amount, improves the detection accuracy, and reduces the training difficulty. The experiment proves that the proposed model has a better detection ability for small and medium-sized standing human body postures in video test scenes on the EMV-2 dataset.
Moving object detection is one of the challenging problems in video monitoring systems, especially when the illumination changes and shadows exist. A method for real-time moving object detection is described. A new background model is proposed to handle the illumination variation problem. With optical flow technology and background subtraction, a moving object is extracted quickly and accurately. An effective shadow elimination algorithm based on color features is used to refine the moving objects. Experimental results demonstrate that the proposed method can update the background exactly and quickly along with the variation of illumination, and that shadows can be eliminated effectively. The proposed algorithm runs in real time, laying the foundation for further object recognition and understanding in video monitoring systems.
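A minimal NumPy sketch of two of the pipeline's grayscale building blocks, running-average background maintenance and subtraction with a simple ratio-based shadow test, is shown below. The paper's model additionally uses optical flow and color features, which this sketch omits; all thresholds are illustrative assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update, tracking gradual illumination change."""
    return (1 - alpha) * bg + alpha * frame

def detect_foreground(bg, frame, diff_thresh=30, shadow_lo=0.5, shadow_hi=0.9):
    """Foreground mask via background subtraction with a simple shadow test.

    Pixels darker than the background but within [shadow_lo, shadow_hi] of its
    intensity are treated as shadow and removed from the foreground mask.
    """
    diff = np.abs(frame - bg)
    fg = diff > diff_thresh
    ratio = frame / np.maximum(bg, 1e-6)
    shadow = (ratio >= shadow_lo) & (ratio <= shadow_hi)
    return fg & ~shadow

bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[2:4, 2:4] = 200.0   # a bright moving object
frame[5:7, 5:7] = 60.0    # a shadow: 60/100 = 0.6 of the background intensity
mask = detect_foreground(bg, frame)
print(mask[2, 2], mask[5, 5])  # True False — object kept, shadow removed
```

The ratio test works because a cast shadow darkens a surface roughly proportionally, while a true foreground object changes its appearance arbitrarily.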
In video surveillance, there are many interference factors in moving object tracking, such as target changes, complex scenes, and target deformation. To resolve this issue, based on a comparative analysis of several common moving object detection methods, a moving object detection and recognition algorithm combining frame difference with background subtraction is presented in this paper. In the algorithm, we first calculate the average gray value of continuous multi-frame images in the dynamic sequence and obtain the background image as the statistical average of the continuous image sequence; that is, N consecutive frames are summed and averaged. In this way, the weight of object information increases while the static background is suppressed. The resulting motion-detection image contains both the target contour and additional target information from the background image, thereby separating the moving target from the image. Simulation results show the effectiveness of the proposed algorithm.
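The averaging-plus-differencing scheme described above can be sketched directly: estimate the background as the mean of N consecutive frames, then combine frame difference (a contour cue) with background subtraction (a region cue). The function names and the threshold below are illustrative assumptions.

```python
import numpy as np

def background_from_average(frames):
    """Estimate the background as the mean of N consecutive frames."""
    return frames.astype(np.float64).mean(axis=0)

def moving_object_mask(frames, thresh=25):
    """Combine frame difference with background subtraction.

    A pixel is flagged if it differs from the previous frame (contour cue)
    or from the averaged background (region cue).
    """
    frames = frames.astype(np.float64)
    bg = background_from_average(frames)
    frame_diff = np.abs(frames[-1] - frames[-2]) > thresh
    bg_diff = np.abs(frames[-1] - bg) > thresh
    return frame_diff | bg_diff  # the union keeps both contour and region cues

# Five static frames, then an object appears in the last frame.
video = np.zeros((5, 8, 8))
video[-1, 3:5, 3:5] = 255.0
mask = moving_object_mask(video)
print(mask.sum())  # 4 — only the 2x2 object region is flagged
```

Averaging over N frames suppresses the moving object's contribution to the background (its pixels are diluted by a factor of N), which is why the background difference recovers the object's interior and not just its contour.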
An approach to the detection of moving objects in video sequences, with application to video surveillance, is presented. The algorithm combines two kinds of change points, which are detected from a region-based frame difference and adjusted background subtraction. An adaptive threshold technique is employed to automatically choose the threshold value used to segment the moving objects from the still background. Experimental results show that the algorithm is effective and efficient in practical situations. Furthermore, the algorithm is robust to changes in lighting conditions and can be applied in video surveillance systems.
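The abstract does not specify which adaptive threshold technique is used; Otsu's method is a common choice for automatically thresholding a difference image and is sketched here as a plausible stand-in, on synthetic bimodal data.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Adaptive threshold choice by Otsu's method: maximize between-class variance."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * np.arange(bins))     # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Bimodal difference image: mostly near-zero background, some large changes.
rng = np.random.default_rng(1)
diff = np.concatenate([rng.normal(10, 3, 900), rng.normal(200, 10, 100)])
diff = np.clip(diff, 0, 255)
t = otsu_threshold(diff)
print(t)  # a value in the empty gap between the two modes
```

Because the threshold is recomputed per difference image, it adapts to lighting changes that shift the whole intensity distribution, which matches the robustness claim in the abstract.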
The video surveillance system is one of the most important technologies in the homeland security field. It is used as a security system because of its ability to track and detect particular persons. To overcome the limitations of conventional video surveillance systems based on human perception, we introduce a novel cognitive video surveillance system (CVS) based on mobile agents. CVS offers important capabilities such as suspect-object detection and smart camera cooperation for people tracking. According to many studies, an agent-based approach is appropriate for distributed systems, since mobile agents can transfer copies of themselves to other servers in the system.
The devastating effects of wildland fire are an unsolved problem, resulting in human losses and the destruction of natural and economic resources. The convolutional neural network (CNN) is known to perform very well in object classification, with the ability to perform feature extraction and classification within the same architecture. In this paper, we propose a CNN for identifying fire in videos: a deep-domain-based method for video fire detection that extracts a powerful feature representation of fire. Tested on real video sequences, the proposed approach achieves better classification performance than relevant conventional video-based fire detection methods, indicating that using a CNN to detect fire in videos is efficient. To balance efficiency and accuracy, the model is fine-tuned considering the nature of the target problem and the fire data. Experimental results on benchmark fire datasets reveal the effectiveness of the proposed framework and validate its suitability for fire detection in closed-circuit television surveillance systems compared with state-of-the-art methods.
Recently, a new research trend in the video salient object detection (VSOD) research community has focused on enhancing detection results via model self-fine-tuning using sparsely mined high-quality keyframes from the given sequence. Although such a learning scheme is generally effective, it has a critical limitation: a model learned on sparse frames only possesses weak generalization ability. This situation worsens on "long" videos, since they tend to have intensive scene variations. Moreover, in such videos, keyframe information from a longer time span is less relevant to the preceding frames, which can cause learning conflicts and deteriorate model performance. Thus, the learning scheme is usually incapable of handling complex pattern modeling. To solve this problem, we propose a divide-and-conquer framework, which converts a complex problem domain into multiple simple ones. First, we devise a novel background consistency analysis (BCA) that effectively divides the mined frames into disjoint groups. Then, for each group, we assign an individual deep model to capture its key attribute during the fine-tuning phase. During the testing phase, we design a model-matching strategy that dynamically selects the best-matched model from the fine-tuned ones to handle a given testing frame. Comprehensive experiments show that our method can adapt to severe background appearance variation coupled with object movement and obtains robust saliency detection compared with the previous scheme and state-of-the-art methods.
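The paper's BCA is part of a learned framework; as a rough illustration of the dividing idea only, the sketch below greedily groups frames whose gray-level histograms stay consistent and starts a new group when the background statistics change. The histogram distance, threshold, and bin count are illustrative assumptions, not the paper's method.

```python
import numpy as np

def gray_histogram(frame, bins=16):
    """Normalized gray-level histogram as a cheap background descriptor."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def group_frames(frames, sim_thresh=0.5):
    """Greedy grouping of frames with consistent background statistics.

    A frame joins the current group while its histogram stays close (L1
    distance) to the group's first frame; otherwise a new group starts.
    """
    groups, current = [], [0]
    ref = gray_histogram(frames[0])
    for i in range(1, len(frames)):
        h = gray_histogram(frames[i])
        if np.abs(h - ref).sum() < sim_thresh:
            current.append(i)
        else:
            groups.append(current)
            current, ref = [i], h
    groups.append(current)
    return groups

# Two visually distinct "scenes": dark frames followed by bright frames.
video = np.concatenate([np.full((3, 8, 8), 20.0), np.full((3, 8, 8), 230.0)])
print(group_frames(video))  # [[0, 1, 2], [3, 4, 5]]
```

Each resulting group would then get its own fine-tuned model, which is the divide-and-conquer step the abstract describes.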
Video understanding and content boundary detection are vital stages in video recommendation. However, previous content boundary detection methods require collecting information including location, cast, action, and audio, and if any of these elements is missing, the results may be adversely affected. To address this issue and effectively detect transitions in video content, we introduce a video classification and boundary detection method named JudPriNet. The focus of this paper is on objects in videos along with their labels, enabling automatic scene detection in video clips and establishing semantic connections among local objects in the images. As a significant contribution, JudPriNet presents a framework that maps labels to a "Continuous Bag of Visual Words" model to cluster labels and generate new standardized labels as video-type tags, facilitating automatic classification of video clips. Furthermore, JudPriNet employs a Monte Carlo sampling method to classify video clips, treating the features of video clips as elements within the framework. The proposed method seamlessly integrates video and textual components without compromising training and inference speed. Through experimentation, we demonstrate that JudPriNet, with its semantic connections, effectively classifies videos alongside textual content. Our results indicate that, compared with several other detection approaches, JudPriNet excels at high-level content detection without disrupting the integrity of the video content, outperforming existing methods.
Collaborative robotics is a high-interest research topic in academia and industry. It has been progressively utilized in numerous applications, particularly in intelligent surveillance systems. It allows the deployment of smart cameras or optical sensors with computer vision techniques, which may serve in several object detection and tracking tasks. These tasks are considered challenging, high-level perceptual problems, frequently dominated by relative information about the environment, where concerns such as occlusion, illumination, background, object deformation, and object class variations are commonplace. To show the importance of top-view surveillance, a collaborative robotics framework is presented that can assist in the detection and tracking of multiple objects in top-view surveillance. The framework consists of a smart robotic camera embedded with a visual processing unit. Existing pre-trained deep learning models, namely SSD and YOLO, have been adopted for object detection and localization. The detection models are further combined with different tracking algorithms, including GOTURN, MEDIANFLOW, TLD, KCF, MIL, and BOOSTING. These algorithms, along with the detection models, help to track and predict the trajectories of detected objects. Since pre-trained models are employed, generalization performance is also investigated by testing the models on various sequences of the top-view dataset. The detection models achieved a maximum true detection rate of 90% to 93% with a maximum 0.6% false detection rate. The tracking results of the different algorithms are nearly identical, with tracking accuracy ranging from 90% to 94%. Furthermore, a discussion of the output results is carried out along with future guidelines.
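The listed trackers (GOTURN, KCF, MIL, etc.) ship with OpenCV and are not reimplemented here. What a detection-plus-tracking pipeline of this kind always needs, though, is a way to associate new detections with existing tracks; a minimal greedy IoU matcher is sketched below with hypothetical box data, as one common way to glue a detector to per-object trackers.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def associate(tracks, detections, iou_thresh=0.3):
    """Greedy matching of existing tracks to new detections by highest IoU."""
    matches, unmatched = {}, set(range(len(detections)))
    for t_id, t_box in tracks.items():
        best, best_iou = None, iou_thresh
        for d in unmatched:
            score = iou(t_box, detections[d])
            if score > best_iou:
                best, best_iou = d, score
        if best is not None:
            matches[t_id] = best
            unmatched.discard(best)
    return matches, unmatched

# Two tracks and two detections shifted by a few pixels between frames.
tracks = {0: (10, 10, 50, 50), 1: (100, 100, 140, 140)}
detections = [(102, 98, 142, 138), (12, 11, 52, 51)]
matches, new = associate(tracks, detections)
print(matches)  # {0: 1, 1: 0} — each track follows its slightly shifted box
```

Detections left in `new` would start fresh tracks; tracks with no match would fall back to their tracker's own prediction for that frame.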
This paper proposes a mobile video surveillance system consisting of intelligent video analysis and mobile communication networking. This multilevel distillation approach helps mobile users monitor tremendous amounts of surveillance video on demand through video streaming over mobile communication networks. The intelligent video analysis includes moving object detection/tracking and key frame selection, which enable browsing of useful video clips. The communication networking services, comprising video transcoding, multimedia messaging, and mobile video streaming, transmit surveillance information to mobile appliances. Moving object detection is achieved by background subtraction and particle filter tracking. Key frame selection, which aims to deliver an alarm to a mobile client via the multimedia messaging service accompanied by an extracted clear frame, is achieved by devising a weighted importance criterion that considers object clarity and face appearance. In addition, a spatial-domain cascaded transcoder is developed to convert the filtered image sequence of detected objects into the mobile video streaming format. Experimental results show that the system can successfully detect all moving object events in a complex surveillance scene, choose very appropriate key frames for users, and transcode the images with a high peak signal-to-noise ratio (PSNR).
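The weighted importance criterion combines object clarity with face appearance. A sketch of the clarity half, using Laplacian variance as a standard sharpness measure, is given below; `face_score` stands in for an external face detector's output, and the function names and weights are illustrative assumptions rather than the paper's exact criterion.

```python
import numpy as np

def laplacian_variance(frame):
    """Clarity score: variance of a 4-neighbor Laplacian response.

    Sharp frames have strong edges, hence a high-variance Laplacian.
    """
    f = frame.astype(np.float64)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return lap.var()

def importance(frame, face_score=0.0, w_clarity=1.0, w_face=100.0):
    """Weighted importance combining object clarity and a face-presence score."""
    return w_clarity * laplacian_variance(frame) + w_face * face_score

sharp = np.zeros((16, 16)); sharp[:, 8:] = 255.0          # a hard edge
blurred = np.linspace(0, 255, 16)[None, :].repeat(16, 0)  # a smooth ramp
print(importance(sharp) > importance(blurred))  # True: the sharp frame wins
```

The frame maximizing this score within an event window would be the one attached to the multimedia message, matching the "extracted clear frame" goal in the abstract.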
The region completeness of object detection is crucial to video surveillance tasks such as pedestrian and vehicle identification. However, many conventional object detection approaches cannot guarantee object region completeness, because detection can be influenced by illumination variations and cluttered backgrounds. To overcome this problem, we propose the iterative superpixels grouping (ISPG) method to extract precise object boundaries and generate object regions with high completeness after detection. First, by extending the superpixel segmentation method, the proposed ISPG method alleviates the inaccurate segmentation problem and guarantees region completeness on the object regions. Second, a multi-resolution superpixel-based region completeness enhancement method is proposed to extract object regions with high precision and completeness. Simulation results show that the proposed method outperforms conventional object detection methods in terms of object completeness evaluation.
The past two decades witnessed broad growth in web technology and online gaming. Relaxing broadband constraints is viewed as one of the most significant factors that prompted new gaming technology. The immense use of web applications and games has also prompted growth in handheld devices and has moved the gaming experience from user devices to online cloud servers. As internet capabilities improve, new ways of gaming are being used to enhance the gaming experience. In cloud-based video gaming, game engines are hosted in cloud gaming data centers, and compressed gaming scenes are rendered to players over the internet with updated controls. In such systems, the tasks of transferring games and compressing video impose huge computational complexity on cloud servers. The basic problems in cloud gaming are, in particular, high encoding time, latency, and low frame rates, which require a new methodology for a better solution. To improve the bandwidth issue in cloud games, the compression of video sequences requires an alternative mechanism that improves gaming adaptation without input delay. In this paper, the proposed methodology performs automatic detection and removal of unnecessary scenes and bit rate reduction using an adaptive algorithm for object detection in a game scene. Simulations show that, without much impact on players' quality of experience, the selective object encoding and object adaptation techniques decrease network latency and reduce the game streaming bit rate at a remarkable scale on different games. The proposed algorithm was evaluated on three video game scenes; for the first scene, it achieved a 14.6% decrease in encoding time and a 45.6% decrease in bit rate.
Generating ground truth data for developing object detection algorithms of intelligent surveillance systems is a considerably important yet time-consuming task; therefore, a user-friendly tool to annotate videos efficiently and accurately is required. In this paper, the development of a semi-automatic video annotation tool is described. For efficiency, the developed tool can automatically generate initial annotation data for input videos using automatic object detection modules, which are developed independently and registered in the tool. To guarantee the accuracy of the ground truth data, the system also has several user-friendly functions to help users check and edit the initial annotation data generated by the automatic object detection modules. According to the experimental results, the developed annotation tool is considerably beneficial for reducing annotation time; compared to manual annotation schemes, using the tool reduced annotation time by up to 2.3 times.
Funding (MSTENet paper): Natural Science Foundation of China (NSFC), Grant No. 62203192.
Funding (underwater video summarization paper): National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1G1A1099559).
Funding: Supported by the Natural Science Foundation of China (62102147), the Natural Science Foundation of Hunan Province (2022JJ30424, 2022JJ50253, and 2022JJ30275), the Scientific Research Project of Hunan Provincial Department of Education (21B0616 and 21B0738), the Hunan University of Arts and Sciences Ph.D. Start-Up Project (BSQD02 and 20BSQD13), and the Construct Program of the Applied Characteristic Discipline in Hunan University of Science and Engineering.
Abstract: In the environment of smart examination rooms, it is important to quickly and accurately detect abnormal behavior (a standing person) for the construction of a smart campus. Based on deep learning, we propose an intelligent standing human detection (ISHD) method based on an improved single shot multibox detector to detect the target of standing human posture in the scene frames of exam room video surveillance at a specific examination stage. ISHD combines the MobileNet network in a single shot multibox detector network, improves the posture feature extractor for a standing person, merges prior knowledge, and introduces transfer learning in the training strategy, which greatly reduces the computation amount, improves the detection accuracy, and reduces the training difficulty. Experiments prove that the proposed model has better detection ability for small and medium-sized standing human postures in video test scenes on the EMV-2 dataset.
Funding: This project was supported by the foundation of the Visual and Auditory Information Processing Laboratory of Beijing University of China (0306) and the National Science Foundation of China (60374031).
Abstract: Moving object detection is one of the challenging problems in video monitoring systems, especially when the illumination changes and shadows exist. A method for real-time moving object detection is described. A new background model is proposed to handle the illumination variation problem. With optical flow technology and background subtraction, a moving object is extracted quickly and accurately. An effective shadow elimination algorithm based on color features is used to refine the moving objects. Experimental results demonstrate that the proposed method can update the background exactly and quickly along with the variation of illumination, and that shadows can be eliminated effectively. The proposed algorithm runs in real time, which lays the foundation for further object recognition and understanding in video monitoring systems.
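The color-feature shadow test described in the abstract above can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the brightness-ratio bounds (alpha, beta) and the chromaticity tolerance are illustrative assumptions, based on the common observation that a shadow darkens a background pixel without changing its normalized color.

```python
# Illustrative shadow test (assumed parameters, not the paper's method):
# a foreground pixel is reclassified as shadow when it is darker than the
# background pixel but keeps roughly the same chromaticity (color ratios).

def is_shadow(pixel, bg_pixel, alpha=0.4, beta=0.95, tol=0.05):
    """True if an RGB pixel looks like a shadow cast on the background pixel."""
    p_sum = sum(pixel) or 1
    b_sum = sum(bg_pixel) or 1
    # Brightness ratio: a shadow attenuates intensity but does not add light.
    ratio = p_sum / b_sum
    if not (alpha <= ratio <= beta):
        return False
    # Chromaticity (normalized color) should be nearly unchanged.
    return all(abs(p / p_sum - b / b_sum) < tol
               for p, b in zip(pixel, bg_pixel))

bg = (120, 120, 120)
print(is_shadow((60, 60, 60), bg))   # darker, same hue -> likely shadow
print(is_shadow((30, 90, 60), bg))   # color shift -> likely a real object
```

A detected foreground mask would then be refined by dropping pixels for which this test returns True, keeping only genuine object pixels.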
Abstract: In video surveillance, there are many interference factors in moving object tracking, such as target changes, complex scenes, and target deformation. To resolve this issue, based on a comparative analysis of several common moving object detection methods, a moving object detection and recognition algorithm combining frame difference with background subtraction is presented in this paper. In the algorithm, we first calculate the average of the gray values over continuous multi-frame images in the dynamic sequence: the background image is obtained as the statistical average of the continuous image sequence, that is, N consecutive frames are summed and averaged. In this way, the weight of object information is increased while the static background is suppressed. The resulting motion detection image contains both the target contour and additional target information beyond the contour from the background image, thereby separating the moving target from the image. Simulation results show the effectiveness of the proposed algorithm.
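The averaged-background step described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: frames are plain lists of lists of gray values, and the fixed difference threshold is an assumption for demonstration.

```python
# Sketch of the averaged-background idea: the background is estimated as
# the per-pixel mean of N consecutive grayscale frames, and a moving-object
# mask is obtained by thresholding the absolute difference between the
# current frame and that background.

def estimate_background(frames):
    """Per-pixel mean over a sequence of equally sized grayscale frames."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

def moving_object_mask(frame, background, threshold=30):
    """1 where the frame deviates from the background, else 0."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Static 2x2 scene across four frames, then an object appears at (0, 1).
frames = [[[10, 10], [10, 10]] for _ in range(4)]
current = [[10, 200], [10, 10]]
bg = estimate_background(frames)
print(moving_object_mask(current, bg))  # -> [[0, 1], [0, 0]]
```

Because the static background contributes the same value to every frame, averaging suppresses noise in the background estimate while any transient object leaves a strong residual in the difference image.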
Abstract: An approach to the detection of moving objects in video sequences, with application to video surveillance, is presented. The algorithm combines two kinds of change points, detected from the region-based frame difference and an adjusted background subtraction. An adaptive threshold technique is employed to automatically choose the threshold value used to segment moving objects from the still background. Experimental results show that the algorithm is effective and efficient in practical situations. Furthermore, the algorithm is robust to the effects of changing lighting conditions and can be applied in video surveillance systems.
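The abstract above does not specify which adaptive threshold technique is used, so the following is only one plausible sketch of the idea: deriving the segmentation threshold from the statistics of the difference values (mean plus a multiple of the standard deviation) instead of fixing it by hand. The factor k is an assumption for illustration.

```python
# Hypothetical adaptive-threshold sketch (the paper's exact technique is
# not specified): threshold = mean + k * std of the frame-difference values.
import statistics

def adaptive_threshold(diff_values, k=2.0):
    """Derive a threshold from the difference-image statistics."""
    mu = statistics.mean(diff_values)
    sigma = statistics.pstdev(diff_values)
    return mu + k * sigma

def segment(diff_values, k=2.0):
    """Binary mask: 1 where a difference value exceeds the adaptive threshold."""
    t = adaptive_threshold(diff_values, k)
    return [1 if v > t else 0 for v in diff_values]

# Mostly small background noise plus one strong change.
diffs = [2, 3, 1, 2, 2, 3, 1, 2, 90, 2]
print(segment(diffs))  # only the strong change exceeds the threshold
```

The advantage over a fixed threshold is that the cutoff tracks the noise level of each frame pair, which is what makes the method robust to gradual lighting changes.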
Abstract: The video surveillance system is one of the most important issues in the homeland security field. It is used as a security system because of its ability to track and detect a particular person. To overcome the limitations of the conventional video surveillance system, which is based on human perception, we introduce a novel cognitive video surveillance system (CVS) based on mobile agents. CVS offers important capabilities such as suspect object detection and smart camera cooperation for people tracking. According to many studies, an agent-based approach is appropriate for distributed systems, since mobile agents can transfer copies of themselves to other servers in the system.
Funding: National Natural Science Foundation of China (No. 61573095) and the Natural Science Foundation of Shanghai, China (No. 6ZR1446700).
Abstract: The devastating effects of wildland fire are an unsolved problem, resulting in human losses and the destruction of natural and economic resources. The convolutional neural network (CNN) is shown to perform very well in the area of object classification, as this network can perform feature extraction and classification within the same architecture. In this paper, we propose a CNN for identifying fire in videos. A deep-domain-based method for video fire detection is proposed to extract a powerful feature representation of fire. Testing on real video sequences, the proposed approach achieves better classification performance than some relevant conventional video-based fire detection methods, indicating that using a CNN to detect fire in videos is efficient. To balance efficiency and accuracy, the model is fine-tuned considering the nature of the target problem and the fire data. Experimental results on benchmark fire datasets reveal the effectiveness of the proposed framework and validate its suitability for fire detection in closed-circuit television surveillance systems compared to state-of-the-art methods.
Funding: Supported in part by the CAMS Innovation Fund for Medical Sciences, China (No. 2019-I2M5-016), the National Natural Science Foundation of China (No. 62172246), the Youth Innovation and Technology Support Plan of Colleges and Universities in Shandong Province, China (No. 2021KJ062), and the National Science Foundation of the USA (Nos. IIS-1715985 and IIS-1812606).
Abstract: Recently, a new research trend in the video salient object detection (VSOD) research community has focused on enhancing detection results via model self-fine-tuning using sparsely mined high-quality keyframes from the given sequence. Although such a learning scheme is generally effective, it has a critical limitation: a model learned on sparse frames only possesses weak generalization ability. This situation can become worse on “long” videos, since they tend to have intensive scene variations. Moreover, in such videos, keyframe information from a longer time span is less relevant to the preceding frames, which can also cause learning conflicts and deteriorate model performance. Thus, the learning scheme is usually incapable of handling complex pattern modeling. To solve this problem, we propose a divide-and-conquer framework, which converts a complex problem domain into multiple simple ones. First, we devise a novel background consistency analysis (BCA), which effectively divides the mined frames into disjoint groups. Then, for each group, we assign an individual deep model to capture its key attribute during the fine-tuning phase. During the testing phase, we design a model-matching strategy that dynamically selects the best-matched model from the fine-tuned ones to handle a given testing frame. Comprehensive experiments show that our method can adapt to severe background appearance variation coupled with object movement and obtain robust saliency detection compared with the previous scheme and state-of-the-art methods.
Abstract: Video understanding and content boundary detection are vital stages in video recommendation. However, previous content boundary detection methods require collecting information including location, cast, action, and audio, and if any of these elements is missing, the results may be adversely affected. To address this issue and effectively detect transitions in video content, in this paper we introduce a video classification and boundary detection method named JudPriNet. The focus of this paper is on objects in videos along with their labels, enabling automatic scene detection in video clips and establishing semantic connections among local objects in the images. As a significant contribution, JudPriNet presents a framework that maps labels to a “Continuous Bag of Visual Words Model” to cluster labels and generate new standardized labels as video-type tags. This facilitates automatic classification of video clips. Furthermore, JudPriNet employs a Monte Carlo sampling method to classify video clips, treating the features of the video clips as elements within the framework. The proposed method seamlessly integrates video and textual components without compromising training and inference speed. Through experimentation, we have demonstrated that JudPriNet, with its semantic connections, can effectively classify videos alongside textual content. Our results indicate that, compared with several other detection approaches, JudPriNet excels at high-level content detection without disrupting the integrity of the video content, outperforming existing methods.
Funding: Supported by the Framework of International Cooperation Program managed by the National Research Foundation of Korea (2019K1A3A1A8011295711).
Abstract: Collaborative robotics is one of the high-interest research topics in academia and industry. It has been progressively utilized in numerous applications, particularly in intelligent surveillance systems. It allows the deployment of smart cameras or optical sensors with computer vision techniques, which may serve in several object detection and tracking tasks. These tasks are considered challenging, high-level perceptual problems, frequently dominated by relative information about the environment, where concerns such as occlusion, illumination, background, object deformation, and object class variations are commonplace. To show the importance of top-view surveillance, a collaborative robotics framework is presented that can assist in the detection and tracking of multiple objects in top-view surveillance. The framework consists of a smart robotic camera embedded with a visual processing unit. The existing pre-trained deep learning models SSD and YOLO have been adopted for object detection and localization. The detection models are further combined with different tracking algorithms, including GOTURN, MEDIANFLOW, TLD, KCF, MIL, and BOOSTING. These algorithms, along with the detection models, help to track and predict the trajectories of detected objects. Since pre-trained models are employed, generalization performance is also investigated by testing the models on various sequences of a top-view dataset. The detection models achieved a maximum true detection rate of 90% to 93% with a maximum 0.6% false detection rate. The tracking results of the different algorithms are nearly identical, with tracking accuracy ranging from 90% to 94%. Furthermore, a discussion of the output results along with future guidelines is provided.
Abstract: This paper proposes a mobile video surveillance system consisting of intelligent video analysis and mobile communication networking. This multilevel distillation approach helps mobile users monitor a tremendous amount of surveillance video on demand through video streaming over mobile communication networks. The intelligent video analysis includes moving object detection/tracking and key frame selection, which enables browsing of useful video clips. The communication networking services, comprising video transcoding, multimedia messaging, and mobile video streaming, deliver surveillance information to mobile appliances. Moving object detection is achieved by background subtraction and particle filter tracking. Key frame selection, which aims to deliver an alarm to a mobile client via the multimedia messaging service accompanied by an extracted clear frame, is achieved by devising a weighted importance criterion considering object clarity and face appearance. In addition, a spatial-domain cascaded transcoder is developed to convert the filtered image sequence of detected objects into the mobile video streaming format. Experimental results show that the system can successfully detect all events of moving objects in a complex surveillance scene, choose highly appropriate key frames for users, and transcode the images with a high peak signal-to-noise ratio (PSNR).
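The transcoder quality in the abstract above is reported as PSNR; for reference, the standard PSNR computation for 8-bit imagery (MAX = 255) looks like this. Images are flat lists of pixel values here for simplicity.

```python
# Standard peak signal-to-noise ratio (PSNR) between two equal-length
# images, in dB: PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for
# 8-bit pixels. Higher is better; identical images give infinity.
import math

def psnr(original, reconstructed, max_value=255.0):
    """PSNR in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2
              for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

# A one-level error in one of four pixels (MSE = 0.25) gives ~54.15 dB.
print(round(psnr([52, 55, 61, 59], [52, 55, 60, 59]), 2))
```

Transcoders are typically judged by the PSNR between the source frame and the decoded output of the compressed stream, averaged over the sequence.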
Funding: Supported in part by the “MOST” under Grant No. 103-2221-E-216-012.
Abstract: The region completeness of object detection is crucial to video surveillance, for tasks such as pedestrian and vehicle identification. However, many conventional object detection approaches cannot guarantee object region completeness because object detection can be influenced by illumination variations and cluttered backgrounds. To overcome this problem, we propose the iterative superpixels grouping (ISPG) method to extract the precise object boundary and generate an object region with high completeness after object detection. First, by extending a superpixel segmentation method, the proposed ISPG method alleviates the inaccurate segmentation problem and guarantees region completeness for the object regions. Second, a multi-resolution superpixel-based region completeness enhancement method is proposed to extract the object region with high precision and completeness. Simulation results show that the proposed method outperforms conventional object detection methods in terms of object completeness evaluation.
Abstract: The past two decades witnessed a broad increase in web technology and online gaming. Enhancing broadband connections is viewed as one of the most significant variables that prompted new gaming technology. The immense utilization of web applications and games additionally prompted growth in handheld devices and moved the limited gaming experience from user devices to online cloud servers. As internet capabilities are enhanced, new ways of gaming are being used to improve the gaming experience. In cloud-based video gaming, game engines are hosted in cloud gaming data centers, and compressed gaming scenes are rendered to players over the internet with updated controls. In such systems, the tasks of game transfer and video compression impose huge computational complexity on cloud servers. The basic problems in cloud gaming, in particular high encoding time, latency, and low frame rates, require a new methodology for a better solution. To improve the bandwidth issue in cloud games, the compression of video sequences requires an alternative mechanism that improves gaming adaptation without input delay. In this paper, the proposed improved methodology is used for automatic unnecessary scene detection, scene removal, and bit rate reduction using an adaptive algorithm for object detection in a game scene. Simulations showed that, without much impact on the players' quality of experience, the selective object encoding method and object adaptation technique mitigate the network latency issue and reduce the game streaming bitrate at a remarkable scale on different games. The proposed algorithm was evaluated on three video game scenes, achieving a 14.6% decrease in encoding time and a 45.6% decrease in bit rate for the first video game scene.
Abstract: Generating ground truth data for developing object detection algorithms for intelligent surveillance systems is a considerably important yet time-consuming task; therefore, a user-friendly tool to annotate videos efficiently and accurately is required. In this paper, the development of a semi-automatic video annotation tool is described. For efficiency, the developed tool can automatically generate initial annotation data for the input videos using automatic object detection modules, which are developed independently and registered in the tool. To guarantee the accuracy of the ground truth data, the system also has several user-friendly functions to help users check and edit the initial annotation data generated by the automatic object detection modules. According to the experimental results, employing the developed annotation tool is considerably beneficial for reducing annotation time; compared to manual annotation schemes, using the tool reduced annotation time by up to a factor of 2.3.