Journal Articles
13 articles found
1. Multi-Stream Temporally Enhanced Network for Video Salient Object Detection
Authors: Dan Xu, Jiale Ru, Jinlong Shi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 85-104.
Video salient object detection (VSOD) aims at locating the most attractive objects in a video by exploring spatial and temporal features. VSOD poses a challenging task in computer vision, as it involves processing complex spatial data that is also influenced by temporal dynamics. Despite the progress made by existing VSOD models, they still struggle in scenes with great background diversity within and between frames. Additionally, they encounter difficulties related to accumulated noise and high time consumption when extracting temporal features over long durations. We propose a multi-stream temporally enhanced network (MSTENet) to address these problems. It investigates saliency-cue collaboration in the spatial domain with a multi-stream structure to deal with the background-diversity challenge. A straightforward yet efficient approach to temporal feature extraction is developed to avoid accumulative noise and reduce time consumption. MSTENet is distinguished from other VSOD methods by its incorporation of both foreground supervision and background supervision, facilitating enhanced extraction of collaborative saliency cues. Another notable difference is the integration of spatial and temporal features: the temporal module is embedded in the multi-stream structure, enabling comprehensive spatial-temporal interactions within an end-to-end framework. Extensive experimental results demonstrate that the proposed method achieves state-of-the-art performance on five benchmark datasets while maintaining a real-time speed of 27 fps (Titan XP). Our code and models are available at https://github.com/RuJiaLe/MSTENet.
Keywords: video salient object detection; deep learning; temporally enhanced; foreground-background collaboration
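The dual foreground/background supervision is the most concrete detail in this abstract. As a rough illustration (not the authors' architecture), the PyTorch sketch below pairs a foreground head with a background head on a shared feature and supervises the second with the complement of the saliency mask; module names and channel sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualSupervisionHeads(nn.Module):
    """Two prediction streams over a shared spatial feature: one supervised by
    the foreground mask, the other by its complement (the background)."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.fg_head = nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)
        self.bg_head = nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)

    def forward(self, feat):                      # feat: (B, in_ch, H, W)
        return self.fg_head(feat), self.bg_head(feat)

def fg_bg_loss(fg_logit, bg_logit, gt):
    """gt: float binary saliency mask in [0, 1]; the background target is 1 - gt."""
    return (F.binary_cross_entropy_with_logits(fg_logit, gt)
            + F.binary_cross_entropy_with_logits(bg_logit, 1.0 - gt))
```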
2. Local saliency consistency-based label inference for weakly supervised salient object detection using scribble annotations
Authors: Shuo Zhao, Peng Cui, Jing Shen, Haibo Liu. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 1, pp. 239-249.
Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors, because a scribble annotation can only provide very limited foreground/background information. An intuitive idea, therefore, is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first performed on both the colours and the coordinates of the original annotations; the same labels are then assigned to points whose colours are similar to a colour cluster centre and which lie near a coordinate cluster centre. Finally, the same annotations are further propagated to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method significantly improves performance and achieves state-of-the-art results.
Keywords: label inference; salient object detection; weak supervision
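The clustering-and-propagation step described above can be sketched directly. The NumPy/scikit-learn snippet below is a hedged reading of it: the scribble encoding (1/0/-1), the feature scaling, and the distance thresholds are this sketch's own choices, and the paper's kernel-neighbourhood step is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def infer_labels(image, scribble, n_clusters=8, tau_color=15.0, tau_pos=20.0):
    """image: (H, W, 3) array; scribble: (H, W) int map with 1 = foreground
    scribble, 0 = background scribble, -1 = unlabelled. Returns an expanded
    label map following the local-consistency assumption."""
    img = image.astype(np.float64)
    out = scribble.copy()
    ys, xs = np.nonzero(scribble >= 0)            # annotated pixels only
    labels = scribble[ys, xs]
    for lab in (0, 1):                            # cluster fg and bg scribbles separately
        sel = labels == lab
        if sel.sum() < n_clusters:
            continue
        feats = np.hstack([img[ys[sel], xs[sel]],                 # colour
                           np.stack([ys[sel], xs[sel]], axis=1)]) # position
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats)
        uy, ux = np.nonzero(out < 0)              # still-unlabelled pixels
        for c in km.cluster_centers_:             # columns: r, g, b, y, x
            d_col = np.linalg.norm(img[uy, ux] - c[:3], axis=1)
            d_pos = np.hypot(uy - c[3], ux - c[4])
            hit = (d_col < tau_color) & (d_pos < tau_pos)
            out[uy[hit], ux[hit]] = lab           # propagate the cluster's label
    return out
```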
3. Salient Object Detection Based on a Novel Combination Framework Using the Perceptual Matching and Subjective-Objective Mapping Technologies
Authors: Jian Han, Jialu Li, Meng Liu, Zhe Ren, Zhimin Cao, Xingbin Liu. Journal of Beijing Institute of Technology (EI, CAS), 2023, Issue 1, pp. 95-106.
Characterizing the integrity and fine detail of non-connected regions and contours is a major challenge for existing salient object detection. The key question is how to make full use of the subjective and objective structural information obtained at different steps. Therefore, by simulating the human visual mechanism, this paper proposes a novel multi-decoder matching correction network and a subjective structural loss. Specifically, the loss pays different attention to the foreground, boundary, and background of the ground-truth map in a top-down structure, and the perceived saliency is mapped to the corresponding objective structure of the prediction map, which is extracted in a bottom-up manner. Multi-level salient features can thus be effectively detected with the loss as a constraint. Then, through the mapping of an improved binary cross-entropy loss, the differences between salient regions and objects are checked so that error-prone regions receive attention, achieving strong error sensitivity. Finally, by tracking the identifying feature horizontally and vertically, the subjective-objective interaction is maximized. Extensive experiments on five benchmark datasets demonstrate that, compared with 12 state-of-the-art methods, the algorithm achieves higher recall and precision, lower error, strong robustness and generalization ability, and predicts complete, refined saliency maps.
Keywords: salient object detection; subjective-objective mapping; perceptional separation and matching; error sensitivity; non-connected region detection
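A loss that "pays different attention to the foreground, boundary, and background of the ground-truth map" can be approximated by a region-weighted BCE. The PyTorch sketch below is one such approximation, not the paper's exact subjective structural loss; the weight values and the pooling-based dilation/erosion used to extract a boundary band are assumptions.

```python
import torch
import torch.nn.functional as F

def region_weighted_bce(pred_logit, gt, w_fg=2.0, w_bd=4.0, w_bg=1.0, k=5):
    """pred_logit, gt: (B, 1, H, W); gt is a float binary mask. The boundary
    band is dilation minus erosion, approximated here with max pooling."""
    dil = F.max_pool2d(gt, k, stride=1, padding=k // 2)       # dilation
    ero = -F.max_pool2d(-gt, k, stride=1, padding=k // 2)     # erosion
    boundary = dil - ero                                      # thin contour band
    w = torch.full_like(gt, w_bg)                             # background weight
    w = torch.where(gt > 0.5, torch.full_like(w, w_fg), w)    # foreground weight
    w = torch.where(boundary > 0.5, torch.full_like(w, w_bd), w)  # boundary weight
    return F.binary_cross_entropy_with_logits(pred_logit, gt, weight=w)
```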
4. A Novel Divide and Conquer Solution for Long-term Video Salient Object Detection
Authors: Yun-Xiao Li, Cheng-Li-Zhao Chen, Shuai Li, Ai-Min Hao, Hong Qin. Machine Intelligence Research (EI, CSCD), 2024, Issue 4, pp. 684-703.
Recently, a new research trend in the video salient object detection (VSOD) community has focused on enhancing detection results via model self-fine-tuning using sparsely mined high-quality keyframes from the given sequence. Although such a learning scheme is generally effective, it has a critical limitation: a model learned on sparse frames possesses only weak generalization ability. This situation can become worse on long videos, since they tend to have intensive scene variations. Moreover, in such videos, keyframe information from a longer time span is less relevant to previous frames, which can cause learning conflicts and deteriorate model performance. The learning scheme is thus usually incapable of handling complex pattern modeling. To solve this problem, we propose a divide-and-conquer framework that converts a complex problem domain into multiple simple ones. First, we devise a novel background consistency analysis (BCA) that effectively divides the mined frames into disjoint groups. Then, for each group, we assign an individual deep model to capture its key attribute during the fine-tuning phase. During the testing phase, we design a model-matching strategy that dynamically selects the best-matched model from the fine-tuned ones to handle a given testing frame. Comprehensive experiments show that our method can adapt to severe background appearance variation coupled with object movement and obtains robust saliency detection compared with the previous scheme and state-of-the-art methods.
Keywords: video salient object detection; background consistency analysis; weakly supervised learning; long-term information; background shift
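The background consistency analysis (BCA) above groups mined keyframes so that each group sees a roughly constant background. A minimal stand-in is sketched below, using coarse colour-histogram intersection as the consistency measure; this greedy grouping and its threshold are assumptions of the sketch, not the paper's BCA.

```python
import numpy as np

def frame_histogram(frame, bins=16):
    """frame: (H, W, 3) uint8. A coarse joint colour histogram as a cheap
    background descriptor."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins,) * 3,
                             range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def group_keyframes(frames, thresh=0.5):
    """Greedy grouping: a frame joins the first group whose reference histogram
    it matches (histogram intersection); otherwise it starts a new group."""
    groups, refs = [], []
    for i, frame in enumerate(frames):
        h = frame_histogram(frame)
        for group, ref in zip(groups, refs):
            if np.minimum(h, ref).sum() > thresh:   # histogram intersection
                group.append(i)
                break
        else:
            groups.append([i])
            refs.append(h)
    return groups
```

Each resulting index group would then get its own fine-tuned model, with the same descriptor reused at test time to pick the best-matched model for a new frame.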
5. Saliency Rank: Two-stage manifold ranking for salient object detection (Cited by 5)
Authors: Wei Qi, Ming-Ming Cheng, Ali Borji, Huchuan Lu, Lian-Fa Bai. Computational Visual Media, 2015, Issue 4, pp. 309-320.
Salient object detection remains one of the most important and active research topics in computer vision, with wide-ranging applications to object recognition, scene understanding, image retrieval, context-aware image editing, image compression, etc. Most existing methods directly determine salient objects by exploring various salient-object features. Here, we propose a novel graph-based ranking method to detect and segment the most salient object in a scene according to its relationship to image border (background) regions, i.e., the background feature. Firstly, we use regions/superpixels as graph nodes, which are fully connected to enable both long-range and short-range relations to be modeled. The relationship of each region to the image border (background) is evaluated in two stages: (i) ranking with hard background queries, and (ii) ranking with soft foreground queries. We experimentally show how this two-stage ranking-based salient object detection method is complementary to traditional methods, and that integrated results outperform both. Our method exploits the intrinsic image structure to achieve high-quality salient object determination using a quadratic optimization framework with a closed-form solution that can be easily computed. Extensive evaluation and comparison on three challenging saliency datasets demonstrate that our method consistently outperforms 10 state-of-the-art models by a large margin.
Keywords: salient object detection; manifold ranking; visual attention; saliency
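The quadratic framework mentioned above has the standard manifold-ranking closed form f* = (D - alpha*W)^(-1) y, where W is the superpixel affinity matrix, D its degree matrix, and y an indicator vector marking the query nodes. A tiny NumPy sketch of one ranking pass follows, with the two-stage query choice summarized in comments; the value of alpha and the thresholding are assumptions.

```python
import numpy as np

def manifold_rank(W, query_idx, alpha=0.99):
    """One ranking pass: f* = (D - alpha * W)^(-1) y on a superpixel graph.
    W: (n, n) symmetric non-negative affinity matrix; query_idx: seed nodes."""
    D = np.diag(W.sum(axis=1))                 # degree matrix
    y = np.zeros(W.shape[0])
    y[query_idx] = 1.0                         # queries get unit relevance
    return np.linalg.solve(D - alpha * W, y)   # closed-form ranking scores

# Stage 1: treat each image border in turn as hard background queries, rank,
# complement, and combine the four maps.  Stage 2: threshold the stage-1 map
# to obtain soft foreground queries and rank once more for the final saliency.
```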
6. Light field salient object detection: A review and benchmark (Cited by 2)
Authors: Keren Fu, Yao Jiang, Ge-Peng Ji, Tao Zhou, Qijun Zhao, Deng-Ping Fan. Computational Visual Media (SCIE, EI, CSCD), 2022, Issue 4, pp. 509-534.
Salient object detection (SOD) is a long-standing research topic in computer vision that has attracted increasing interest over the past decade. Since light fields record comprehensive information about natural scenes that benefits SOD in a number of ways, using light field inputs to improve saliency detection over conventional RGB inputs is an emerging trend. This paper provides the first comprehensive review and benchmark for light field SOD, which has long been lacking in the saliency community. Firstly, we introduce light fields, including theory and data forms, and then review existing studies on light field SOD, covering ten traditional models, seven deep learning-based models, one comparative study, and one brief review. Existing datasets for light field SOD are also summarized. Secondly, we benchmark nine representative light field SOD models together with several cutting-edge RGB-D SOD models on four widely used light field datasets, providing insightful discussion and analysis, including a comparison between light field SOD and RGB-D SOD models. Because current datasets are inconsistent, we further generate complete data, supplementing focal stacks, depth maps, and multi-view images to make the datasets consistent and uniform; our supplemental data make a universal benchmark possible. Lastly, light field SOD is a specialised problem: because of its diverse data representations and high dependency on acquisition hardware, it differs greatly from other saliency detection tasks. We provide nine observations on challenges and future directions, and outline several open issues. All materials, including models, datasets, benchmarking results, and supplemented light field datasets, are publicly available at https://github.com/kerenfu/LFSOD-Survey.
Keywords: light field; salient object detection (SOD); deep learning; benchmarking
7. A Multiscale Superpixel-Level Salient Object Detection Model Using Local-Global Contrast Cue
Authors: 穆楠, 徐新, 王英林, 张晓龙. Journal of Shanghai Jiaotong University (Science) (EI), 2017, Issue 1, pp. 121-128.
The goal of salient object detection is to estimate the regions that are most likely to attract human visual attention. As an important image preprocessing procedure for reducing computational complexity, salient object detection is still a challenging problem in computer vision. In this paper, we propose a salient object detection model that integrates local and global superpixel contrast at multiple scales. Three features are computed to estimate the saliency of each superpixel, and two optimization measures are utilized to refine the resulting saliency map. Extensive experiments against state-of-the-art saliency models on four public datasets demonstrate the effectiveness of the proposed model.
Keywords: salient object detection; superpixel; multiple scales; local contrast; global contrast
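As a compact illustration of superpixel-level global contrast (one of several cues such a model combines; the multiscale integration, the two refinement measures, and the local-contrast term are omitted here), a scikit-image sketch under assumed parameter values:

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def contrast_saliency(image, n_segments=200, sigma_pos=0.25):
    """Global contrast of each SLIC superpixel: Lab colour distance to all
    other superpixels, down-weighted by spatial distance (positions in [0, 1])."""
    lab = rgb2lab(image)
    seg = slic(image, n_segments=n_segments, start_label=0)
    n = seg.max() + 1
    H, W = seg.shape
    yy, xx = np.mgrid[0:H, 0:W]
    mean_col = np.array([lab[seg == i].mean(axis=0) for i in range(n)])
    pos = np.array([[yy[seg == i].mean() / H, xx[seg == i].mean() / W]
                    for i in range(n)])
    d_col = np.linalg.norm(mean_col[:, None] - mean_col[None], axis=2)
    d_pos = np.linalg.norm(pos[:, None] - pos[None], axis=2)
    sal = (d_col * np.exp(-d_pos**2 / (2 * sigma_pos**2))).sum(axis=1)
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal[seg]          # map per-superpixel saliency back to pixels
```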
8. WGI-Net: A weighted group integration network for RGB-D salient object detection
Authors: Yanliang Ge, Cong Zhang, Kang Wang, Ziqi Liu, Hongbo Bi. Computational Visual Media (EI, CSCD), 2021, Issue 1, pp. 115-125.
Salient object detection is used as a preprocessing step in many computer vision tasks (such as salient object segmentation, video salient object detection, etc.). When performing salient object detection, depth information can provide clues to the location of target objects, so effective fusion of RGB and depth feature information is important. In this paper, we propose a new feature-aggregation approach, weighted group integration (WGI), to effectively integrate RGB and depth feature information. We use a dual-branch structure to slice the input RGB image and depth map separately and then merge the results by concatenation. As grouped features may lose global information about the target object, we also draw on the idea of residual learning, taking the features captured by the original fusion method as supplementary information to ensure both the accuracy and completeness of the fused information. Experiments on five datasets show that our model performs better than typical existing approaches on four evaluation metrics.
Keywords: weighted group; depth information; RGB-D information; salient object detection; deep learning
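A hedged sketch of the weighted-group idea in PyTorch: channel-wise groups from the RGB and depth features are weighted, interleaved, and merged, with a plain-concatenation fusion added back as the residual path. The group count, weighting scheme, and layer shapes are this sketch's assumptions, not the paper's exact WGI module.

```python
import torch
import torch.nn as nn

class WeightedGroupFusion(nn.Module):
    """Slice RGB and depth features into channel groups, interleave the groups
    with learnable weights, and add a plain-concatenation fusion as a residual.
    Requires ch to be divisible by groups."""
    def __init__(self, ch=64, groups=4):
        super().__init__()
        self.groups = groups
        self.group_w = nn.Parameter(torch.ones(2 * groups))   # one weight per group
        self.merge = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, rgb, dep):                  # both (B, ch, H, W)
        r = rgb.chunk(self.groups, dim=1)
        d = dep.chunk(self.groups, dim=1)
        w = torch.sigmoid(self.group_w)
        inter = [t for i in range(self.groups)    # interleave weighted groups
                 for t in (w[2 * i] * r[i], w[2 * i + 1] * d[i])]
        grouped = self.merge(torch.cat(inter, dim=1))
        plain = self.merge(torch.cat([rgb, dep], dim=1))   # residual path
        return grouped + plain
```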
9. SAM Era: Can It Segment Any Industrial Surface Defects?
Authors: Kechen Song, Wenqi Cui, Han Yu, Xingjie Li, Yunhui Yan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3953-3969.
The Segment Anything Model (SAM) is a cutting-edge model that has shown impressive performance in general object segmentation. The birth of Segment Anything is a groundbreaking step towards creating a universal intelligent model. Owing to its superior performance in general object segmentation, it quickly gained attention and interest. This makes SAM particularly attractive for industrial surface defect segmentation, especially in complex industrial scenes with limited training data. However, its segmentation ability in specific industrial scenes remains unknown. Therefore, in this work, we select three representative and complex industrial surface defect detection scenarios, namely strip steel surface defects, tile surface defects, and rail surface defects, to evaluate the segmentation performance of SAM. Our results show that although SAM has great potential in general object segmentation, it cannot achieve satisfactory performance in complex industrial scenes. Our test results are available at: https://github.com/VDT-2048/SAM-IS.
Keywords: Segment Anything; SAM; surface defect detection; salient object detection
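Reproducing this kind of evaluation is straightforward with the official segment-anything package: run the automatic mask generator on a defect image and inspect the candidate masks. The checkpoint path and image file below are placeholders, and this is a generic usage sketch rather than the paper's test protocol.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Weights come from the official SAM release; the local path is a placeholder.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an HxWx3 uint8 RGB array.
image = cv2.cvtColor(cv2.imread("defect_sample.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)   # list of dicts: 'segmentation', 'area', 'bbox', ...
masks.sort(key=lambda m: m["area"], reverse=True)
print(f"{len(masks)} candidate masks; largest covers {masks[0]['area']} px")
```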
10. Specificity-preserving RGB-D saliency detection (Cited by 1)
Authors: Tao Zhou, Deng-Ping Fan, Geng Chen, Yi Zhou, Huazhu Fu. Computational Visual Media (SCIE, EI, CSCD), 2023, Issue 2, pp. 297-317.
Salient object detection (SOD) in RGB and depth images has attracted increasing research interest. Existing RGB-D SOD models usually adopt fusion strategies to learn a shared representation from the RGB and depth modalities, while few methods explicitly consider how to preserve modality-specific characteristics. In this study, we propose a novel framework, the specificity-preserving network (SPNet), which improves SOD performance by exploring both shared information and modality-specific properties. Specifically, we use two modality-specific networks and a shared learning network to generate individual and shared saliency prediction maps. To effectively fuse cross-modal features in the shared learning network, we propose a cross-enhanced integration module (CIM) and propagate the fused feature to the next layer to integrate cross-level information. Moreover, to capture rich complementary multi-modal information and boost SOD performance, we use a multi-modal feature aggregation (MFA) module to integrate the modality-specific features from each individual decoder into the shared decoder. By using skip connections between encoder and decoder layers, hierarchical features can be fully combined. Extensive experiments demonstrate that our SPNet outperforms cutting-edge approaches on six popular RGB-D SOD benchmarks and three camouflaged object detection benchmarks. The project is publicly available at https://github.com/taozh2017/SPNet.
Keywords: salient object detection (SOD); RGB-D; cross-enhanced integration module (CIM); multi-modal feature aggregation (MFA)
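A hedged reading of cross-enhanced integration, sketched as mutual gating between the two modalities before a shared fusion; the paper's actual CIM is defined differently in detail, and all names and shapes here are illustrative.

```python
import torch
import torch.nn as nn

class CrossEnhancedIntegration(nn.Module):
    """Each modality is enhanced by a gate computed from the other modality,
    then the two enhanced streams are fused for the shared decoder."""
    def __init__(self, ch=64):
        super().__init__()
        self.gate_from_rgb = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())
        self.gate_from_dep = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, rgb, dep):                      # both (B, ch, H, W)
        rgb_e = rgb + rgb * self.gate_from_dep(dep)   # depth gates the RGB stream
        dep_e = dep + dep * self.gate_from_rgb(rgb)   # RGB gates the depth stream
        return self.fuse(torch.cat([rgb_e, dep_e], dim=1))
```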
11. Full-duplex strategy for video object segmentation (Cited by 1)
Authors: Ge-Peng Ji, Deng-Ping Fan, Keren Fu, Zhe Wu, Jianbing Shen, Ling Shao. Computational Visual Media (SCIE, EI, CSCD), 2023, Issue 1, pp. 155-175.
Previous video object segmentation approaches mainly focus on simplex solutions linking appearance and motion, limiting effective feature collaboration between these two cues. In this work, we study a novel and efficient full-duplex strategy network (FSNet) to address this issue, considering a better mutual-restraint scheme between motion and appearance that allows cross-modal features to be exploited in the fusion and decoding stages. Specifically, we introduce a relational cross-attention module (RCAM) to achieve bidirectional message propagation across embedding sub-spaces. To improve the model's robustness and update inconsistent features from the spatiotemporal embeddings, we adopt a bidirectional purification module after the RCAM. Extensive experiments on five popular benchmarks show that our FSNet is robust in various challenging scenarios (e.g., motion blur and occlusion) and compares well to leading methods for both video object segmentation and video salient object detection. The project is publicly available at https://github.com/GewelsJI/FSNet.
Keywords: video object segmentation (VOS); video salient object detection (V-SOD); visual attention
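Bidirectional message propagation between appearance and motion embeddings can be sketched with two standard multi-head cross-attention passes, one in each direction. This is a generic stand-in for illustration, not the authors' RCAM; the dimensions and residual wiring are assumptions.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Bidirectional message passing between appearance and motion token
    sequences, built from two standard multi-head cross-attention passes."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.mot_to_app = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.app_to_mot = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, app, mot):                       # both (B, N, dim)
        app_msg, _ = self.mot_to_app(app, mot, mot)    # appearance attends to motion
        mot_msg, _ = self.app_to_mot(mot, app, app)    # motion attends to appearance
        return app + app_msg, mot + mot_msg            # residual updates keep both streams
```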
12. S4Net: Single stage salient-instance segmentation (Cited by 2)
Authors: Ruochen Fan, Ming-Ming Cheng, Qibin Hou, Tai-Jiang Mu, Jingdong Wang, Shi-Min Hu. Computational Visual Media (CSCD), 2020, Issue 2, pp. 191-204.
In this paper, we consider salient instance segmentation. As well as producing bounding boxes, our network also outputs high-quality instance-level segments as initial selections to indicate the regions of interest. Taking into account the category-independent property of each target, we design a single-stage salient instance segmentation framework with a novel segmentation branch. Our new branch regards not only the local context inside each detection window but also its surrounding context, enabling us to distinguish instances in the same scope even under partial occlusion. Our network is end-to-end trainable and fast (running at 40 fps for images with resolution 320 × 320). We evaluate our approach on a publicly available benchmark and show that it outperforms alternative solutions. We also provide a thorough analysis of our design choices to help readers better understand the function of each part of our network. Source code can be found at https://github.com/RuochenFan/S4Net.
Keywords: salient-instance segmentation; salient object detection; single stage; region-of-interest masking
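The branch's use of both the inside of a detection window and its surroundings suggests a masking scheme like the sketch below, which appends a ternary inside/surround/outside map as an extra feature channel. This is a guess at the mechanism for illustration, not S4Net's actual region-of-interest masking layer; the expansion factor and encoding values are assumptions.

```python
import torch

def roi_masking(feat, box, expand=2.0):
    """Mark inside vs. surrounding context for one detection window.
    feat: (C, H, W) feature map; box: (x0, y0, x1, y1) in feature coordinates."""
    _, H, W = feat.shape
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    hw, hh = (x1 - x0) * expand / 2, (y1 - y0) * expand / 2
    ex0, ey0 = max(int(cx - hw), 0), max(int(cy - hh), 0)
    ex1, ey1 = min(int(cx + hw), W), min(int(cy + hh), H)
    mask = torch.zeros(1, H, W)
    mask[:, ey0:ey1, ex0:ex1] = -1.0                   # surrounding context band
    mask[:, int(y0):int(y1), int(x0):int(x1)] = 1.0    # inside the window
    return torch.cat([feat, mask], dim=0)              # mask as an extra channel
```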
13. CamDiff: Camouflage Image Augmentation via Diffusion
Authors: Xue-Jing Luo, Shuo Wang, Zongwei Wu, Christos Sakaridis, Yun Cheng, Deng-Ping Fan, Luc Van Gool. CAAI Artificial Intelligence Research, 2023, Issue 1, pp. 55-64.
The burgeoning field of camouflaged object detection (COD) seeks to identify objects that blend into their surroundings. Despite the impressive performance of recent learning-based models, their robustness is limited: existing methods may misclassify salient objects as camouflaged ones, despite these contradictory characteristics. This limitation may stem from the lack of multi-pattern training images, leading to reduced robustness against salient objects. To overcome this scarcity, we introduce CamDiff, a novel approach inspired by AI-generated content (AIGC). Specifically, we leverage a latent diffusion model to synthesize salient objects in camouflaged scenes, while using the zero-shot image classification ability of the Contrastive Language-Image Pre-training (CLIP) model to prevent synthesis failures and ensure that the synthesized objects align with the input prompt. Consequently, the synthesized image retains its original camouflage label while incorporating salient objects, yielding camouflaged scenes with richer characteristics. User studies show that the salient objects in our synthesized scenes attract more of the user's attention; such samples thus pose a greater challenge to existing COD models. CamDiff enables flexible editing and efficient large-scale dataset generation at low cost. It significantly enhances the training and testing phases of COD baselines, granting them robustness across diverse domains. Our newly generated datasets and source code are available at https://github.com/drlxj/CamDiff.
Keywords: AI-generated content; diffusion model; camouflaged object detection; salient object detection
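The synthesize-then-verify loop described above maps naturally onto off-the-shelf components: a latent-diffusion inpainting pipeline to paste a salient object into a camouflaged scene, then CLIP zero-shot classification to accept or reject the result. The sketch below uses Hugging Face diffusers and transformers; the model IDs, file names, and acceptance threshold are assumptions of this sketch, not the paper's configuration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline
from transformers import CLIPModel, CLIPProcessor

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder inputs: a camouflaged scene and a mask marking the paste region.
scene = Image.open("camouflage_scene.png").convert("RGB").resize((512, 512))
mask = Image.open("paste_region_mask.png").convert("L").resize((512, 512))
prompt = "a photo of a dog"

# Synthesize a salient object inside the masked region of the camouflaged scene.
out = pipe(prompt=prompt, image=scene, mask_image=mask).images[0]

# Zero-shot check that the synthesized image actually contains the prompted class.
inputs = proc(text=[prompt, "background"], images=out, return_tensors="pt", padding=True)
probs = clip(**inputs).logits_per_image.softmax(dim=-1)
accepted = probs[0, 0].item() > 0.5     # threshold is an assumption
print("accepted:", accepted)
```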