Video salient object detection (VSOD) aims at locating the most attractive objects in a video by exploring spatial and temporal features. VSOD is a challenging task in computer vision, as it involves processing complex spatial data that is also influenced by temporal dynamics. Despite the progress made by existing VSOD models, they still struggle in scenes with great background diversity within and between frames, and they suffer from accumulated noise and high time consumption when extracting temporal features over long durations. We propose a multi-stream temporal enhanced network (MSTENet) to address these problems. It investigates the collaboration of saliency cues in the spatial domain with a multi-stream structure to deal with the background-diversity challenge, and it develops a straightforward yet efficient approach to temporal feature extraction that avoids accumulated noise and reduces time consumption. MSTENet is distinguished from other VSOD methods by its use of both foreground supervision and background supervision, which facilitates the extraction of collaborative saliency cues, and by its integration of the temporal module into the multi-stream structure, which enables comprehensive spatial-temporal interaction within an end-to-end framework. Extensive experimental results demonstrate that the proposed method achieves state-of-the-art performance on five benchmark datasets while maintaining a real-time speed of 27 fps (Titan XP). Our code and models are available at https://github.com/RuJiaLe/MSTENet. This work was funded by the Natural Science Foundation of China (NSFC) under Grant No. 62203192.
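To make the dual-supervision idea concrete, here is a minimal sketch, not the authors' implementation: it assumes hypothetical stream outputs fg_logits and bg_logits and supervises the background stream with the inverted ground-truth mask, so the two streams learn complementary saliency cues.

```python
import torch
import torch.nn.functional as F

def dual_supervision_loss(fg_logits, bg_logits, gt_mask):
    """Hypothetical joint foreground/background supervision.

    fg_logits, bg_logits: (B, 1, H, W) raw outputs of two streams.
    gt_mask: (B, 1, H, W) float binary saliency ground truth in {0, 1}.
    """
    fg_loss = F.binary_cross_entropy_with_logits(fg_logits, gt_mask)
    # The background stream is trained against the inverted mask.
    bg_loss = F.binary_cross_entropy_with_logits(bg_logits, 1.0 - gt_mask)
    return fg_loss + bg_loss
```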
What causes object detection in video to be less accurate than in still images? Some video frames are degraded in appearance by fast movement, out-of-focus camera shots, and changes in posture. These factors have made video object detection (VID) a growing area of research in recent years. Video object detection can be used in various healthcare applications, such as detecting and tracking tumors in medical imaging, monitoring the movement of patients in hospitals and long-term care facilities, and analyzing videos of surgeries to improve technique and training. It can also be used in telemedicine to help diagnose and monitor patients remotely. Existing VID techniques rely on recurrent neural networks or optical flow for feature aggregation to produce reliable features for detection; some aggregate features at the full-sequence level, others from nearby frames. To create feature maps, existing VID techniques frequently use convolutional neural networks (CNNs) as the backbone network. Vision Transformers, however, have outperformed CNNs in various vision tasks, including object detection in still images and image classification. In this research, we propose to use the Swin Transformer, a state-of-the-art Vision Transformer, as an alternative to CNN-based backbone networks for object detection in videos. The proposed architecture enhances the accuracy of existing VID methods. The ImageNet VID and EPIC KITCHENS datasets are used to evaluate the suggested methodology. We demonstrate that our proposed method is efficient, achieving 84.3% mean average precision (mAP) on ImageNet VID while using less memory than other leading VID techniques. The source code is available at https://github.com/amaharek/SwinVid.
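As an illustration of the backbone swap, the following hedged sketch pulls a pretrained Swin-T from torchvision and exposes its stages as a feature extractor for a detection head; the integration details are illustrative, not the authors' code.

```python
import torch
import torchvision

# Reuse torchvision's Swin-T as a feature extractor in place of a CNN
# backbone. The stages below emit channels-last feature maps.
swin = torchvision.models.swin_t(weights="IMAGENET1K_V1")
backbone = swin.features  # the stacked Swin stages, without the classifier

frames = torch.randn(2, 3, 224, 224)           # a pair of video frames
feats = backbone(frames)                       # (B, H/32, W/32, C), channels-last
feats = feats.permute(0, 3, 1, 2).contiguous() # (B, C, H/32, W/32) for a detection head
print(feats.shape)                             # torch.Size([2, 768, 7, 7])
```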
In order to obtain the initial video objects from video sequences, an improved initial video object extraction algorithm based on motion connectivity is proposed. Moving objects in video sequences are highly connected and structured, which makes motion connectivity a strong cue for segmentation. Accordingly, after sharp-noise elimination, the cumulated difference image, which exhibits the coherent motion of the moving object, is adaptively thresholded. The maximal connected region is then labeled, post-processed, and output as the final segmentation mask; the initial video object is thus effectively extracted. Comparative experimental results show that the proposed algorithm extracts the initial video object automatically, promptly, and properly, achieving satisfactory subjective and objective performance. This work was supported by the National Natural Science Foundation of China (No. 60672094).
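The described pipeline maps directly onto standard image operations. Below is a hedged NumPy/OpenCV sketch with illustrative parameter choices: Otsu thresholding and a median filter stand in for the paper's exact adaptive threshold and noise-elimination steps.

```python
import cv2
import numpy as np

def initial_object_mask(frames, blur_ksize=5):
    """Sketch: denoise, accumulate frame differences, adaptively threshold,
    and keep the largest connected region as the segmentation mask."""
    gray = [cv2.medianBlur(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), blur_ksize)
            for f in frames]                            # sharp-noise elimination
    acc = np.zeros_like(gray[0], dtype=np.float32)
    for a, b in zip(gray[:-1], gray[1:]):               # cumulated difference image
        acc += cv2.absdiff(a, b).astype(np.float32)
    acc = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(acc, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if n < 2:
        return np.zeros_like(binary)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip background label 0
    mask = (labels == largest).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # post-processing
```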
Recently, video object segmentation has received great attention in the computer vision community. Most existing methods rely heavily on pixel-wise human annotations, which are expensive and time-consuming to obtain. To tackle this problem, we make an early attempt to achieve video object segmentation with scribble-level supervision, which can spare large amounts of the human labor otherwise spent collecting manual annotations. However, conventional network architectures and learning objectives do not work well under this scenario, because the supervision information is highly sparse and incomplete. To address this issue, this paper introduces two novel elements for learning the video object segmentation model. The first is a scribble attention module, which captures more accurate context information and learns an effective attention map to enhance the contrast between foreground and background. The second is a scribble-supervised loss, which can optimize the unlabeled pixels and dynamically correct inaccurately segmented areas during training. To evaluate the proposed method, we run experiments on two video object segmentation benchmarks, YouTube-VOS (YouTube video object segmentation) and DAVIS-2017 (densely annotated video segmentation). We first generate scribble annotations from the original per-pixel annotations; we then train our model and compare its test performance with baseline models and other existing works. Extensive experiments demonstrate that the proposed method works effectively and approaches the performance of methods requiring dense per-pixel annotations. This work was supported in part by the National Key R&D Program of China (2017YFB0502904) and the National Science Foundation of China (61876140).
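The core of a scribble-supervised loss is cross-entropy restricted to the sparse labeled pixels. The sketch below shows that restriction only; the paper's full loss additionally corrects unlabeled regions during training.

```python
import torch
import torch.nn.functional as F

def scribble_partial_ce(logits, scribble, ignore_index=255):
    """Cross-entropy evaluated only on scribbled pixels.

    logits:   (B, C, H, W) segmentation scores.
    scribble: (B, H, W) integer labels, with `ignore_index` on every
              unmarked pixel so it contributes nothing to the loss.
    """
    return F.cross_entropy(logits, scribble, ignore_index=ignore_index)
```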
While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology for objectively evaluating video segmentation algorithms against ground truth, based on computing the deviation of segmentation results from a reference segmentation. Four different metrics, based respectively on pixel classification, edges, relative foreground area, and relative position, are combined to address spatial accuracy. Temporal coherency is evaluated from the difference in spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking among different segmentation algorithms and to optimally set the parameters of a given algorithm.
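As a hedged sketch of the evaluation scheme, the snippet below implements the pixel-classification component of spatial accuracy and derives temporal coherency from its frame-to-frame fluctuation; the edge, area, and position metrics follow the same pattern.

```python
import numpy as np

def spatial_accuracy(pred, gt):
    """Pixel-classification agreement between one frame's mask and the
    reference segmentation (one of the four spatial metrics described)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return float((pred == gt).mean())

def temporal_coherency(preds, gts):
    """Penalize fluctuations of spatial accuracy between successive frames,
    as the described methodology does; lower is more coherent."""
    acc = np.array([spatial_accuracy(p, g) for p, g in zip(preds, gts)])
    return float(np.abs(np.diff(acc)).mean()) if len(acc) > 1 else 0.0
```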
With the development of the modern information society, more and more multimedia information is available, so multimedia processing has become an important task in the relevant areas of science. Among multimedia types, visual information is especially attractive due to its direct, vivid character, but at the same time the huge amount of video data poses many challenges for video storage, processing, and transmission.
MPEG-4 is a basic tool for the interactivity and manipulation of video sequences. Video object segmentation is a key issue in defining the content of any video sequence, and it is often divided into two steps: initial object segmentation and object tracking. In this paper, an initial object segmentation method using color information is proposed for video object plane (VOP) generation. Based on 3-by-3 linear templates, a cellular neural network (CNN) is used to implement the object segmentation. Experimental results are presented to verify the efficiency and robustness of this approach.
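For readers unfamiliar with cellular neural networks, the following sketch shows one Euler step of the classic Chua-Yang cell dynamics driven by 3-by-3 feedback and control templates; the templates shown are generic illustrations, not the ones used in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, bias, dt=0.1):
    """One Euler step of Chua-Yang cellular-network dynamics.

    x: cell-state array, u: input image, both 2-D arrays of the same shape.
    A: 3x3 feedback template, B: 3x3 control template, bias: scalar.
    """
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # piecewise-linear output
    dx = -x + convolve2d(y, A, mode="same") \
            + convolve2d(u, B, mode="same") + bias
    return x + dt * dx

# Illustrative templates: identity feedback and a Laplacian-like control
# template, the kind typically used for contour-oriented processing.
A = np.zeros((3, 3)); A[1, 1] = 1.0
B = -np.ones((3, 3)); B[1, 1] = 8.0
```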
Segmentation of semantic video object planes (VOPs) from a video sequence is key to the MPEG-4 standard with its content-based video coding. In this paper, an approach for the automatic segmentation of VOPs based on spatio-temporal information (SBSTI) is proposed. The presented results demonstrate the good performance of the algorithm.
Video object extraction is a key technology in content-based video coding. A novel video object extraction algorithm based on two-dimensional (2-D) mesh-based motion analysis is proposed in this paper. First, a 2-D mesh fitting the original frame image is obtained via a feature detection algorithm. Then, higher-order-statistics motion analysis is applied to the 2-D mesh representation to obtain an initial motion detection mask. After post-processing, the final segmentation mask is quickly obtained, and the video object is thus effectively extracted. Experimental results show that the proposed algorithm combines the merits of mesh-based and pixel-based segmentation algorithms, thereby achieving satisfactory subjective and objective performance while dramatically increasing segmentation speed. This work was supported by the National Natural Science Foundation of China (No. 60672094).
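The higher-order-statistics test can be illustrated per pixel: for zero-mean Gaussian sensor noise, the fourth moment of inter-frame differences equals 3*sigma^4, so substantially larger values indicate motion. The sketch below applies this test densely, whereas the paper computes it on mesh-node neighborhoods.

```python
import numpy as np

def hos_motion_mask(frames, thresh=3.0):
    """Fourth-order-moment change detection over a window of frames.

    frames: list of grayscale images of equal shape. The threshold and
    noise estimate are illustrative choices, not the paper's exact values.
    """
    stack = np.stack(frames).astype(np.float64)          # (T, H, W)
    diffs = np.diff(stack, axis=0)                       # inter-frame differences
    m4 = ((diffs - diffs.mean(axis=0)) ** 4).mean(axis=0)  # 4th-order moment
    sigma2 = np.median(diffs ** 2)                       # rough noise power
    # For zero-mean Gaussian noise, E[d^4] = 3 * sigma^4; flag deviations.
    return m4 > thresh * 3.0 * sigma2 ** 2
```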
Recently, a new research trend in the video salient object detection (VSOD) community has focused on enhancing detection results via model self-fine-tuning using sparsely mined high-quality keyframes from the given sequence. Although such a learning scheme is generally effective, it has a critical limitation: a model learned on sparse frames possesses only weak generalization ability. The situation can become worse on long videos, which tend to have intensive scene variations. Moreover, in such videos, keyframe information from a longer time span is less relevant to the present, which can cause learning conflicts and deteriorate model performance. The learning scheme is therefore usually incapable of handling complex pattern modeling. To solve this problem, we propose a divide-and-conquer framework that converts a complex problem domain into multiple simple ones. First, we devise a novel background consistency analysis (BCA), which effectively divides the mined frames into disjoint groups. Then, for each group, we assign an individual deep model to capture its key attribute during the fine-tuning phase. During the testing phase, we design a model-matching strategy that dynamically selects the best-matched model from the fine-tuned ones to handle a given testing frame. Comprehensive experiments show that our method can adapt to severe background appearance variation coupled with object movement and obtains robust saliency detection compared with the previous scheme and state-of-the-art methods. This work was supported in part by the CAMS Innovation Fund for Medical Sciences, China (No. 2019-I2M5-016), the National Natural Science Foundation of China (No. 62172246), the Youth Innovation and Technology Support Plan of Colleges and Universities in Shandong Province, China (No. 2021KJ062), and the National Science Foundation of the USA (Nos. IIS-1715985 and IIS-1812606).
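A hedged stand-in for the background consistency analysis is shown below: frames are greedily grouped by color-histogram similarity to the first frame of the current group, with an illustrative threshold in place of the paper's learned criterion.

```python
import cv2
import numpy as np

def group_by_background(frames, sim_thresh=0.8):
    """Greedily split a frame list into disjoint groups whenever the colour
    histogram drifts too far from the group's first frame."""
    def hist(f):
        h = cv2.calcHist([f], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        return cv2.normalize(h, h).flatten()

    groups, ref = [[0]], hist(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = hist(f)
        if cv2.compareHist(ref, h, cv2.HISTCMP_CORREL) >= sim_thresh:
            groups[-1].append(i)
        else:                        # background changed: open a new group
            groups.append([i])
            ref = h
    return groups
```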
Current mainstream unsupervised video object segmentation (UVOS) approaches typically incorporate optical flow as motion information to locate the primary objects in coherent video frames. However, they fuse appearance and motion information without evaluating the quality of the optical flow: when poor-quality optical flow interacts with the appearance information, it introduces significant noise and degrades overall performance. To alleviate this issue, we first employ a quality evaluation module (QEM) to assess the optical flow. We then select high-quality optical flow as the motion cue to fuse with the appearance information, which prevents poor-quality optical flow from diverting the network's attention. Moreover, we design an appearance-guided fusion module (AGFM) to better integrate appearance and motion information. Extensive experiments on several widely used datasets, including DAVIS-16, FBMS-59, and YouTube-Objects, demonstrate that the proposed method outperforms existing methods. This work was supported by the National Natural Science Foundation of China (No. 61872189).
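One common proxy for optical-flow quality is forward-backward consistency. The sketch below uses it as a hypothetical stand-in for the learned QEM and gates the motion features by the resulting score; it is illustrative, not the paper's module.

```python
import torch
import torch.nn.functional as F

def flow_quality(fwd, bwd):
    """Forward-backward consistency as a flow-quality proxy.

    fwd, bwd: (B, 2, H, W) flows between a frame pair. A small warp
    residual suggests reliable flow; the score lies in (0, 1].
    """
    _, _, h, w = fwd.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(fwd)           # (2, H, W)
    tgt = grid + fwd                                              # landing positions
    norm = torch.stack((2 * tgt[:, 0] / (w - 1) - 1,
                        2 * tgt[:, 1] / (h - 1) - 1), dim=-1)     # (B, H, W, 2)
    bwd_at_tgt = F.grid_sample(bwd, norm, align_corners=True)
    residual = (fwd + bwd_at_tgt).norm(dim=1).mean(dim=(1, 2))    # (B,)
    return torch.exp(-residual)

def gated_fusion(appearance, motion, quality):
    """Fuse motion features only in proportion to their estimated quality."""
    return appearance + quality.view(-1, 1, 1, 1) * motion
```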
Previous video object segmentation approaches mainly focus on simplex solutions linking appearance and motion, limiting effective feature collaboration between these two cues. In this work, we study a novel and efficient full-duplex strategy network (FSNet) to address this issue, considering a better mutual-restraint scheme between motion and appearance that allows cross-modal features to be exploited in both the fusion and the decoding stages. Specifically, we introduce a relational cross-attention module (RCAM) to achieve bidirectional message propagation across embedding sub-spaces. To improve the model's robustness and update inconsistent features in the spatiotemporal embeddings, we adopt a bidirectional purification module after the RCAM. Extensive experiments on five popular benchmarks show that FSNet is robust to various challenging scenarios (e.g., motion blur and occlusion) and compares well to leading methods for both video object segmentation and video salient object detection. The project is publicly available at https://github.com/GewelsJI/FSNet. This work was supported by the National Natural Science Foundation of China (62176169, 61703077, and 62102207).
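In the spirit of bidirectional message propagation, the following sketch wires two standard multi-head attention layers so that appearance tokens query motion tokens and vice versa; it is a simplified stand-in for the RCAM, not the authors' module.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Appearance attends to motion and motion attends to appearance,
    with residual updates, so messages flow in both directions."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.a2m = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.m2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, app, mot):               # (B, N, dim) token sequences
        app_out, _ = self.m2a(app, mot, mot)   # appearance queries motion
        mot_out, _ = self.a2m(mot, app, app)   # motion queries appearance
        return app + app_out, mot + mot_out

x_app = torch.randn(2, 196, 256)   # flattened appearance feature tokens
x_mot = torch.randn(2, 196, 256)   # flattened motion feature tokens
app, mot = BidirectionalCrossAttention(256)(x_app, x_mot)
```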
This paper presents an object-based fast motion estimation (ME) algorithm for object-based texture coding in MPEG-4 (Moving Picture Experts Group 4) that takes full advantage of the shape information of the video object. Compared with the full search (FS) algorithm, the proposed algorithm significantly speeds up the ME process; it is also faster than the new three-step search (NTSS), four-step search (4SS), diamond search (DS), and block-based gradient descent search (BBGDS) algorithms at similar motion compensation (MC) errors. The proposed algorithm can be combined with other fast ME algorithms to make the ME process faster still. This work was supported by the National High Technology Research and Development Program of China (863 Program) (No. 2003AA103810).
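The main source of the speed-up, skipping blocks that lie entirely outside the object's shape mask, can be sketched as follows. A plain full search is used inside the window for clarity, whereas the paper pairs the idea with faster search patterns.

```python
import numpy as np

def object_based_full_search(cur, ref, mask, block=16, rng=7):
    """Block ME that ignores transparent blocks per the shape mask.

    cur, ref: grayscale current/reference frames; mask: object shape mask.
    Returns per-block motion vectors (dy, dx) for object blocks only.
    """
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            if not mask[by:by + block, bx:bx + block].any():
                continue                   # transparent block: no ME needed
            cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-rng, rng + 1):
                for dx in range(-rng, rng + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    sad = np.abs(cur_blk -
                                 ref[y:y + block, x:x + block]).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```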
While quality assessment is essential for testing, optimizing, benchmarking, monitoring, and inspecting related systems and services, it also plays an essential role in the design of virtually all visual signal processing and communication algorithms, as well as various related decision-making processes. In this paper, we first provide an overview of recently derived quality assessment approaches for traditional visual signals (i.e., 2D images and videos), highlighting new trends such as machine learning approaches. At the same time, with the ongoing development of devices and multimedia services, newly emerged visual signals (e.g., mobile and 3D videos) are becoming more and more popular. This work therefore focuses on recent progress in quality metrics for these newly emerged forms of visual signals, which include scalable and mobile videos, high dynamic range (HDR) images, image segmentation results, 3D images and videos, and retargeted images. This work was partially supported by the Research Grants Council of the Hong Kong SAR, China (Project CUHK 415712) and the Ministry of Education Academic Research Fund (AcRF) Tier 2, Singapore (Grant No. T208B1218).
In the context of object-oriented video coding, the encoding of segmentation maps defined by contour networks is particularly critical. In this paper, we present a lossy contour network encoding algorithm in which rate-distortion contour encoding based on the maximum operator and prediction error for the current frame based on a quadratic motion model are combined into an optimal polygon contour network compression scheme. The bit rate for the contour network can be further reduced by about 20% compared with the optimal polygonal boundary encoding scheme using the maximum operator in the rate-distortion sense. This work was supported by the National Natural Science Foundation of China (No. 69572023) and a Key Project from the Shanghai Education Commission.
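Lossy polygon coding under a maximum-distortion criterion can be illustrated with the Douglas-Peucker approximation in OpenCV, which keeps the fewest vertices whose deviation from the contour never exceeds a bound. This is a simpler stand-in for the paper's rate-distortion optimal scheme, and the motion-compensated prediction step is omitted.

```python
import cv2
import numpy as np

# Build a toy segmentation mask and extract its contour.
mask = np.zeros((128, 128), np.uint8)
cv2.circle(mask, (64, 64), 40, 255, -1)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Approximate the contour by a polygon whose maximum deviation from the
# original boundary stays within dmax pixels (the "maximum operator" sense).
dmax = 2.0
polygon = cv2.approxPolyDP(contours[0], epsilon=dmax, closed=True)
print(len(contours[0]), "contour points ->", len(polygon), "polygon vertices")
```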
We present a lightweight and efficient semi-supervised video object segmentation network based on the space-time memory framework. To some extent, our method addresses the two difficulties of traditional video object segmentation: the per-frame computation time is too long, and segmentation of the current frame should use more information from past frames. The algorithm uses a global context (GC) module to achieve high-performance, real-time segmentation: the GC module effectively integrates multi-frame image information without increased memory and can process each frame in real time. Moreover, since the prediction mask of the previous frame is helpful for segmenting the current frame, we feed it into a spatial constraint module (SCM), which constrains the areas of segments in the current frame. The SCM effectively alleviates the mismatching of similar targets while consuming few additional resources. We also add a refinement module to the decoder to improve boundary segmentation. Our model achieves state-of-the-art results on various datasets, scoring 80.1% on YouTube-VOS 2018 and a J&F score of 78.0% on DAVIS 2017, while taking 0.05 s per frame on the DAVIS 2016 validation set. This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61802197, 62072449, and 61632003), the Science and Technology Development Fund, Macao SAR (Grant Nos. 0018/2019/AKP and SKL-IOTSC(UM)-2021-2023), the Guangdong Science and Technology Department (Grant No. 2020B1515130001), and the University of Macao (Grant Nos. MYRG2020-00253-FST and MYRG2022-00059-FST).
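The heart of a space-time memory read is key-value attention between query-frame and memory-frame embeddings. The sketch below is a minimal, generic version of that read step, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def memory_read(q_key, q_val, m_key, m_val):
    """Every query-frame location attends over all memory locations from
    past frames; the retrieved value is concatenated with the query value.

    q_key: (B, Ck, H*W)    m_key: (B, Ck, T*H*W)
    q_val: (B, Cv, H*W)    m_val: (B, Cv, T*H*W)
    """
    affinity = torch.bmm(q_key.transpose(1, 2), m_key)    # (B, H*W, T*H*W)
    affinity = F.softmax(affinity / q_key.shape[1] ** 0.5, dim=-1)
    read = torch.bmm(m_val, affinity.transpose(1, 2))     # (B, Cv, H*W)
    return torch.cat([read, q_val], dim=1)                # (B, 2*Cv, H*W)
```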
Efficient, interactive foreground/background segmentation in video is of great practical importance in video editing. This paper proposes an interactive and unsupervised video object segmentation algorithm named E-GrabCut, concentrating on achieving both the segmentation quality and the time efficiency highly demanded in the field. The proposed algorithm has three features. First, we develop a powerful, non-iterative version of the per-frame optimization process. Second, more user interaction in the first frame is used to improve the Gaussian mixture model (GMM). Third, a robust algorithm for segmenting the following frames is developed by reusing the previous GMM. Extensive experiments demonstrate that our method outperforms state-of-the-art video segmentation algorithms in the combination of time efficiency and segmentation quality.
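GMM reuse across frames can be sketched with off-the-shelf mixtures: fit foreground and background color models once from the first-frame interaction, then classify pixels of the following frames by likelihood comparison. E-GrabCut refines such likelihoods with a graph cut, which is omitted here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_fg_bg_gmms(pixels_fg, pixels_bg, k=5):
    """Fit per-region colour models once from user strokes (Nx3 RGB)."""
    fg = GaussianMixture(n_components=k).fit(pixels_fg)
    bg = GaussianMixture(n_components=k).fit(pixels_bg)
    return fg, bg

def frame_likelihood_mask(frame, fg, bg):
    """Classify each pixel of a following frame by comparing foreground
    and background log-likelihood under the reused GMMs."""
    flat = frame.reshape(-1, 3).astype(np.float64)
    return (fg.score_samples(flat) >
            bg.score_samples(flat)).reshape(frame.shape[:2])
```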
Technology for automatically assessing video quality plays a significant role in video processing. Because of the complexity of video media, assessing video quality with only one factor has great limitations. We propose a new method using artificial random neural networks (RNNs) with motion evaluation as an estimate of perceived visual distortion. The results are obtained through a nonlinear fitting procedure and correlate well with human perception. Compared with other methods, the proposed method produces more adaptable and accurate predictions.
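Nonlinear fitting of objective scores to subjective ratings is conventionally done with a logistic function. The sketch below fits a four-parameter logistic with SciPy; the numbers are made-up illustrative data, not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, a, b, c, d):
    """4-parameter logistic commonly used to map objective quality scores
    to subjective ratings before computing correlation."""
    return a + (b - a) / (1.0 + np.exp(-(x - c) / d))

# Illustrative data: network outputs vs. mean opinion scores.
obj = np.array([0.2, 0.35, 0.5, 0.62, 0.71, 0.83, 0.9])
mos = np.array([1.4, 2.0, 2.7, 3.2, 3.6, 4.2, 4.6])
params, _ = curve_fit(logistic, obj, mos, p0=[1.0, 5.0, 0.5, 0.1], maxfev=10000)
pred = logistic(obj, *params)
corr = np.corrcoef(pred, mos)[0, 1]   # Pearson correlation after fitting
```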
Object detection is one of the hottest research directions in computer vision; it has already made impressive progress in academia and has many valuable applications in industry. However, mainstream detection methods still have two shortcomings: (1) even a model well trained on large amounts of data generally cannot be used across different kinds of scenes; (2) once a model is deployed, it cannot autonomously evolve along with the accumulated unlabeled scene data. To address these problems, and inspired by visual knowledge theory, we propose a novel scene-adaptive evolution algorithm for unsupervised video object detection that decreases the impact of scene changes through the concept of object groups. We first extract a large number of object proposals from unlabeled data with a pre-trained detection model. Second, we build a visual knowledge dictionary of object concepts by clustering the proposals, in which each cluster center represents an object prototype. Third, we examine the relations between different clusters and the object information of different groups, and propose a graph-based group-information propagation strategy to determine the category of an object concept, which effectively distinguishes positive from negative proposals. With these pseudo-labels, we can easily fine-tune the pre-trained model. The effectiveness of the proposed method is verified in different experiments, and significant improvements are achieved. This work was supported by the National Key R&D Program of China (No. 2020AAA010400X) and the Hikvision Open Fund, China.
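The first two stages, extracting proposals and clustering them into a prototype dictionary, can be sketched with k-means; the graph-based propagation over object groups is omitted, and all names here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_concept_dictionary(proposal_feats, n_concepts=64):
    """Cluster proposal embeddings so that each cluster centre acts as an
    object prototype in the visual knowledge dictionary."""
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(proposal_feats)
    return km.cluster_centers_, km.labels_   # prototypes and concept ids

# Usage: features of object proposals extracted by a pre-trained detector
# (random placeholders here, standing in for real detector embeddings).
feats = np.random.randn(5000, 256).astype(np.float32)
prototypes, concept_ids = build_concept_dictionary(feats)
```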