Journal Articles
8 articles found
1. Fast speedometer identification in dynamic scene based on phase correlation [cited by 1]
Authors: 王昱棠, 付梦印, 杨毅. Journal of Beijing Institute of Technology, EI CAS, 2012, Issue 3, pp. 394-399 (6 pages).
Speedometer identification has been researched for many years. Common approaches are usually based on image subtraction, which cannot adapt to image offsets caused by camera vibration. To meet the speed, robustness, and accuracy requirements of this task in dynamic scenes, a fast speedometer identification algorithm is proposed; it uses a phase correlation method based on whole-template translation of a region to estimate the offset between images. To reduce unnecessary computation and the false detection rate, an improved linear Hough transform with two optimization strategies is presented for pointer line detection. Experiments on a VC++ 6.0 platform with the OpenCV library show that the algorithm is both fast and precise.
Keywords: speedometer; dynamic scene; image sequence; phase correlation; improved linear Hough transform
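The abstract names two building blocks, phase-correlation offset estimation and Hough-based pointer detection, but gives no code; the authors' VC++ 6.0/OpenCV implementation is not available here. The Python/OpenCV sketch below only illustrates those two steps. The function names, thresholds, and grayscale inputs are assumptions rather than the paper's settings, and the "improved" optimization strategies are not reproduced.

```python
import cv2
import numpy as np

def estimate_offset(prev_gray, curr_gray):
    """Estimate the translational offset between two frames via phase correlation."""
    # phaseCorrelate expects single-channel float32/float64 images
    (dx, dy), response = cv2.phaseCorrelate(
        prev_gray.astype(np.float32), curr_gray.astype(np.float32))
    return dx, dy

def detect_pointer_line(dial_gray):
    """Detect the dominant line segment (the pointer) in the dial region."""
    edges = cv2.Canny(dial_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return None
    # Keep the longest segment as the pointer candidate
    return max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
```

The estimated (dx, dy) shift can be used to re-align the dial region between frames before running the line detector, which is the role phase correlation plays in the described method.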
2. Introduction to Visual Surveillance of Dynamic Scenes
Author: Steve Maybank. Acta Automatica Sinica (自动化学报), EI CSCD, PKU Core, 2003, Issue 3, pp. 319-320 (2 pages).
Keywords: visual surveillance; dynamic scenes
3. Progressive edge-sensing dynamic scene deblurring
Authors: Tianlin Zhang, Jinjiang Li, Hui Fan. Computational Visual Media, SCIE EI CSCD, 2022, Issue 3, pp. 495-508 (14 pages).
Deblurring images of dynamic scenes is a challenging task because blurring arises from a combination of many factors. In recent years, the use of multi-scale pyramid methods to recover high-resolution sharp images has been extensively studied. We address the lack of detail recovery in cascade structures with a network that progressively integrates data streams. Our new multi-scale structure and edge-feature perception design handles changes in blur at different spatial scales and increases the network's sensitivity to blurred edges. The coarse-to-fine architecture restores the image structure, first performing global adjustments and then local refinement. In this way, not only is global correlation considered, but residual information is also used to significantly improve image restoration and enhance texture details. Experimental results show quantitative and qualitative improvements over existing methods.
Keywords: image deblurring; dynamic scenes; multiscale; edge features
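The abstract describes a coarse-to-fine multi-scale network but contains no code. The sketch below only illustrates the generic coarse-to-fine control flow it describes, with a hypothetical `restore(level, init)` callable standing in for the learned deblurring step; the scale count and the use of OpenCV pyramids are assumptions.

```python
import cv2

def coarse_to_fine_deblur(blurred, restore, num_scales=3):
    """Restore the coarsest scale first, then upsample the estimate and
    refine it at each finer scale. `restore` is a placeholder for the
    trained deblurring network."""
    # Build an image pyramid: finest first, coarsest last
    pyramid = [blurred]
    for _ in range(num_scales - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    estimate = None
    for level in reversed(pyramid):               # coarse -> fine
        if estimate is None:
            init = level                          # start from the blurry input
        else:
            init = cv2.resize(estimate, (level.shape[1], level.shape[0]))
        estimate = restore(level, init)           # refine at this scale
    return estimate
```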
4. A temporal-spatial background modeling of dynamic scenes
Authors: Jiuyue HAO, Chao LI, Zhang XIONG, Ejaz HUSSAIN. Frontiers of Materials Science, SCIE CSCD, 2011, Issue 3, pp. 290-299 (10 pages).
Moving object detection in dynamic scenes is a basic task in a surveillance system for sensor data collection. In this paper, we present a powerful background subtraction algorithm called the Gaussian-kernel density estimator (G-KDE) that improves accuracy and reduces the computational load. The main innovation is that we divide background changes into continuous and stable changes, to handle dynamic scenes and moving objects that first merge into the background, and model the background separately with both a KDE model and Gaussian models. To obtain a temporal-spatial background model, sample selection at the update stage is based on the concept of a region average. In the detection stage, neighborhood information content (NIC) is used to suppress false detections caused by small, un-modeled movements in the scene. Experimental results on three separate sequences indicate that the method is well suited for precise detection of moving objects in complex scenes and can be efficiently used in various detection systems.
Keywords: temporal-spatial background model; Gaussian-kernel density estimator (G-KDE); dynamic scenes; neighborhood information content (NIC); moving object detection
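As a rough illustration of the kernel-density idea behind G-KDE, the NumPy sketch below evaluates a per-pixel Gaussian-kernel density over a stack of recent background samples and flags low-density pixels as foreground. The bandwidth, threshold, and grayscale input are assumptions; the paper's continuous/stable change split, region-average sample update, and NIC post-processing are not reproduced.

```python
import numpy as np

def kde_foreground_mask(frame, samples, bandwidth=15.0, threshold=1e-3):
    """Per-pixel Gaussian-kernel density estimate of the background.
    frame:   H x W grayscale image (float)
    samples: N x H x W stack of recent background frames
    Returns a boolean mask that is True where density is low (foreground)."""
    diff = frame[None, :, :] - samples                    # N x H x W differences
    kernel = np.exp(-0.5 * (diff / bandwidth) ** 2)       # Gaussian kernel per sample
    density = kernel.mean(axis=0) / (np.sqrt(2 * np.pi) * bandwidth)
    return density < threshold
```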
5. HDR-Net-Fusion: Real-time 3D dynamic scene reconstruction with a hierarchical deep reinforcement network [cited by 1]
Authors: Hao-Xuan Song, Jiahui Huang, Yan-Pei Cao, Tai-Jiang Mu. Computational Visual Media, EI CSCD, 2021, Issue 4, pp. 419-435 (17 pages).
Reconstructing dynamic scenes with commodity depth cameras has many applications in computer graphics, computer vision, and robotics. However, due to noise and erroneous observations from capture devices and the inherently ill-posed nature of non-rigid registration with insufficient information, traditional approaches often produce low-quality geometry with holes, bumps, and misalignments. We propose a novel 3D dynamic reconstruction system, named HDR-Net-Fusion, which learns to simultaneously reconstruct and refine the geometry on the fly with a sparse embedded deformation graph of surfels, using a hierarchical deep reinforcement (HDR) network. The latter comprises two parts: a global HDR-Net which rapidly detects local regions with large geometric errors, and a local HDR-Net serving as a local patch refinement operator to promptly complete and enhance such regions. Training the global HDR-Net is formulated as a novel reinforcement learning problem to implicitly learn the region selection strategy with the goal of improving overall reconstruction quality. The applicability and efficiency of our approach are demonstrated on a large-scale dynamic reconstruction dataset. Our method reconstructs geometry with higher quality than traditional methods.
Keywords: dynamic 3D scene reconstruction; deep reinforcement learning; point cloud completion; deep neural networks
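The hierarchical networks themselves are not given in the abstract. The sketch below is only a crude, non-learned stand-in for the region-selection step: it scores voxel regions of a reconstructed point cloud by nearest-neighbour distance to the reference observations and returns the worst ones, which a refinement operator would then process. The voxel size, `top_k`, and the use of SciPy's k-d tree are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def select_high_error_regions(recon_pts, ref_pts, voxel=0.05, top_k=8):
    """Score voxel regions of the reconstruction (N x 3 points) by mean
    nearest-neighbour distance to the reference points and return the
    voxel keys of the top_k regions with the largest error."""
    err, _ = cKDTree(ref_pts).query(recon_pts)        # per-point geometric error
    keys = np.floor(recon_pts / voxel).astype(np.int64)
    regions = {}
    for key, e in zip(map(tuple, keys), err):
        regions.setdefault(key, []).append(e)
    ranked = sorted(regions.items(), key=lambda kv: -np.mean(kv[1]))
    return [key for key, _ in ranked[:top_k]]
```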
6. Multi-exposure fusion for high dynamic range scene
Authors: 申小禾, Liu Jinghong. High Technology Letters, EI CAS, 2017, Issue 4, pp. 343-349 (7 pages).
Due to its limited dynamic range, a camera cannot reveal all the details in a high-dynamic-range scene. To solve this problem, this paper presents a multi-exposure fusion method for obtaining high-quality images of high dynamic range scenes. First, a set of multi-exposure images of the same scene is acquired and their brightness conditions are analyzed. Then the multi-exposure images are decomposed using the dual-tree complex wavelet transform (DT-CWT) to obtain their low- and high-frequency components. Weight maps derived from the brightness analysis are assigned to the low-frequency components for fusion, while the high-frequency components are fused by maximizing the region Sum Modified-Laplacian (SML). Finally, the fused image is obtained by applying the inverse DT-CWT to the fused low- and high-frequency coefficients. Experimental results show that the proposed approach generates high-quality results with uniformly distributed brightness and rich details, and that the method is efficient and robust across various scenes.
Keywords: multi-exposure fusion; high dynamic range scene; dual-tree complex wavelet transform (DT-CWT); brightness analysis
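The paper's pipeline uses the DT-CWT and a region Sum Modified-Laplacian (SML) rule; as a stand-in, the sketch below substitutes a single-level DWT from PyWavelets, weights the lowpass bands by a simple well-exposedness measure, and fuses the highpass bands by maximum magnitude, a crude proxy for the SML criterion. The wavelet choice, `sigma`, and grayscale inputs in [0, 1] are assumptions.

```python
import numpy as np
import pywt

def fuse_exposures(images, wavelet="db2", sigma=0.2):
    """Fuse same-sized grayscale exposures in [0, 1]."""
    lows, highs, weights = [], [], []
    for img in images:
        cA, bands = pywt.dwt2(img, wavelet)
        lows.append(cA)
        highs.append(bands)
        w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))          # well-exposedness
        weights.append(np.clip(pywt.dwt2(w, wavelet)[0], 0, None))  # at lowpass resolution

    # Weighted average of the lowpass bands
    W = np.stack(weights)
    W /= W.sum(axis=0) + 1e-12
    fused_low = (W * np.stack(lows)).sum(axis=0)

    # Max-magnitude selection for each highpass band (cH, cV, cD)
    fused_high = []
    for band in zip(*highs):
        stack = np.stack(band)
        pick = np.abs(stack).argmax(axis=0)
        fused_high.append(np.take_along_axis(stack, pick[None], 0)[0])

    return pywt.idwt2((fused_low, tuple(fused_high)), wavelet)
```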
7. A robust RGB-D visual odometry with moving object detection in dynamic indoor scenes
Authors: Xianglong Zhang, Haiyang Yu, Yan Zhuang. IET Cyber-Systems and Robotics, EI, 2023, Issue 1, pp. 79-88 (10 pages).
Simultaneous localisation and mapping (SLAM) is the basis for many robotic applications. As the front end of SLAM, visual odometry is mainly used to estimate the camera pose. In dynamic scenes, classical methods deteriorate under the influence of dynamic objects and cannot achieve satisfactory results. To improve the robustness of visual odometry in dynamic scenes, this paper proposes a dynamic region detection method based on RGB-D images. First, all feature points on the RGB image are classified as dynamic or static using a triangle constraint and the epipolar geometric constraint in turn. Meanwhile, the depth image is clustered with the K-Means method. The classified feature points are mapped to the clustered depth image, and a dynamic or static label is assigned to each cluster according to the number of dynamic feature points it contains. Subsequently, a dynamic region mask for the RGB image is generated from the dynamic clusters in the depth image, and all feature points covered by the mask are removed. The remaining static feature points are used to estimate the camera pose. Finally, experimental results are provided to demonstrate the feasibility and performance of the method.
Keywords: dynamic indoor scenes; moving object detection; RGB-D SLAM; visual odometry
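The OpenCV sketch below illustrates two steps named in the abstract: the epipolar-constraint test on matched feature points and the K-Means labelling of depth clusters by their share of dynamic points. The triangle constraint is omitted, and the thresholds, cluster count, and function names are assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

def classify_dynamic_points(pts_prev, pts_curr, epi_thresh=1.0):
    """Mark matched feature points (N x 2 arrays) as dynamic when they
    violate the epipolar constraint of the estimated fundamental matrix."""
    F, _ = cv2.findFundamentalMat(pts_prev.astype(np.float32),
                                  pts_curr.astype(np.float32), cv2.FM_RANSAC)
    ones = np.ones((len(pts_curr), 1))
    lines = (F @ np.hstack([pts_prev, ones]).T).T         # epipolar lines in current frame
    num = np.abs(np.sum(lines * np.hstack([pts_curr, ones]), axis=1))
    dist = num / np.linalg.norm(lines[:, :2], axis=1)     # point-to-epipolar-line distance
    return dist > epi_thresh                              # True = dynamic

def dynamic_region_mask(depth, pts, is_dynamic, k=6, ratio=0.3):
    """Cluster the depth image with K-Means and mark clusters containing
    a high share of dynamic feature points."""
    data = depth.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(data, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(depth.shape)
    cluster_of_pt = labels[pts[:, 1].astype(int), pts[:, 0].astype(int)]
    dynamic_clusters = [c for c in range(k)
                        if (cluster_of_pt == c).any()
                        and is_dynamic[cluster_of_pt == c].mean() > ratio]
    return np.isin(labels, dynamic_clusters)              # True inside dynamic regions
```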
8. DFPC-SLAM: A Dynamic Feature Point Constraints-Based SLAM Using Stereo Vision for Dynamic Environment
Authors: Bo Zeng, Chengqun Song, Cheng Jun, Yuhang Kang. Guidance, Navigation and Control, 2023, Issue 1, pp. 46-60 (15 pages).
Visual SLAM methods usually presuppose that the scene is static, so SLAM algorithms for mobile robots in dynamic scenes often suffer a significant decrease in accuracy due to the influence of dynamic objects. In this paper, feature points are divided into dynamic and static using semantic information and multi-view geometry information; static region feature points are then added to the pose optimization, and static scene maps are established for dynamic scenes. Finally, experiments are conducted in dynamic scenes using the KITTI dataset, and the results show that the proposed algorithm achieves higher accuracy in highly dynamic scenes than the visual SLAM baseline.
Keywords: SLAM; stereo vision; semantic segmentation; multi-view geometry; dynamic scenes
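As a small illustration of the final step described above, the sketch below keeps only feature points that fall outside a semantic dynamic-class mask and pass a multi-view geometry check, then estimates the camera pose from the remaining static points with PnP + RANSAC, a stand-in for the paper's stereo pose optimization. The inputs `dyn_mask`, `geom_inlier`, and the landmark/point arrays are assumed to come from earlier pipeline stages.

```python
import cv2
import numpy as np

def estimate_pose_from_static_points(pts3d, pts2d, dyn_mask, geom_inlier, K):
    """pts3d: N x 3 landmarks, pts2d: N x 2 image points, K: 3 x 3 intrinsics,
    dyn_mask: H x W bool semantic mask of dynamic classes,
    geom_inlier: N bool result of an epipolar/reprojection consistency test."""
    in_dynamic = dyn_mask[pts2d[:, 1].astype(int), pts2d[:, 0].astype(int)]
    keep = (~in_dynamic) & geom_inlier                     # static points only
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d[keep].astype(np.float64), pts2d[keep].astype(np.float64),
        K.astype(np.float64), None)                        # None = no lens distortion
    return ok, rvec, tvec
```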