Abstract: Point cloud compression is critical for deploying 3D representations of the physical world in applications such as immersive 3D telepresence, autonomous driving, and cultural heritage preservation. However, point cloud data are distributed irregularly and discontinuously in the spatial and temporal domains, where redundant unoccupied voxels and weak correlations in 3D space make efficient compression challenging. In this paper, we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression. The proposed scheme first divides the point cloud into sliced layers of unit thickness along the longest axis. It then introduces a prediction method, applicable when both intra-frame and inter-frame point clouds are available, that determines correspondences between adjacent layers and estimates the shortest path using a travelling salesman algorithm. Finally, the small prediction residual is efficiently compressed with optimal context-guided and adaptive fast-mode arithmetic coding techniques. Experiments show that the proposed method achieves low-bit-rate lossless compression of point cloud geometry and is suitable for 3D point cloud compression across various types of scenes.
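As a rough illustration of the scheme's first two steps, unit-thickness slicing along the longest axis and path-based traversal within each layer, here is a minimal Python sketch. All function names are illustrative, and a greedy nearest-neighbour heuristic stands in for the paper's travelling-salesman shortest-path estimation:

```python
import numpy as np

def slice_into_layers(points):
    """Split an integer-voxel point cloud into unit-thickness layers
    along its longest axis (the scheme's first step)."""
    extents = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(extents))                  # longest axis
    coords = points[:, axis]
    layers = {}
    for c in np.unique(coords):
        layers[int(c)] = points[coords == c]        # one unit-thick slice
    return axis, layers

def greedy_path_order(layer):
    """Order the points of one layer by a nearest-neighbour heuristic,
    a cheap stand-in for the travelling-salesman shortest path."""
    remaining = list(range(len(layer)))
    order = [remaining.pop(0)]
    while remaining:
        last = layer[order[-1]]
        d = np.linalg.norm(layer[remaining] - last, axis=1)
        order.append(remaining.pop(int(np.argmin(d))))
    return [layer[i] for i in order]
```

A consistent point ordering within and across layers is what makes adjacent-layer correspondences predictable, so the residual left for the arithmetic coder stays small.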
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51906154), the National Science and Technology Major Project (Grant No. 2017-V-0016-0069), and the Natural Science Foundation of Shanghai (Grant No. 21ZR1443700).
Abstract: The spatiotemporal evolution of hairpin vortex structures in a fully developed turbulent boundary layer is investigated qualitatively and quantitatively using two imaging methods. The moving single-frame and long-exposure (MSFLE) imaging method is used to intuitively track the evolution of a hairpin vortex, while the moving particle image velocimetry (moving-PIV) method provides a moving velocity field for quantitative analysis. In accordance with the structural characteristics of the hairpin vortex, an inclined light sheet at an appropriate inclination of 53° is arranged to capture the complete hairpin vortex structure at Re_θ = 97–194. In addition, the core size and rotational strength of a hairpin vortex are defined and quantified with the Liutex vector method. The evolution of a complete hairpin vortex structure observed by MSFLE shows that shear along the wall-normal direction increases the strength of the hairpin vortex, accompanied by a lifting vortex head and a decreasing distance between the two vortex legs during the dissipation period. By combining moving-PIV with Liutex identification, the spatiotemporal evolution of four typical regions of a hairpin vortex projected onto the 53° cross-section is obtained. The results show that the process from generation to dissipation of a single hairpin vortex can be well characterized and recorded by the Liutex-based core size and rotational intensity, and that this evolution is consistent with the MSFLE result. According to the statistics of vortex core size and rotation intensity over time, the evolution of the hairpin vortex necks and legs can be described as enhancement followed by dissipation. The vortex head evolves over a longer time owing to its position far from the wall; its evolution consists of an absolute enhancement stage (stage 1), with increasing rotation strength and a constant core size, and an absolute dissipation stage (stage 2), with decreasing rotation strength and a constant core size.
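The Liutex computation itself is not given in the abstract. As a simplified, hypothetical stand-in for the rotation-strength and core-size quantification described above, the sketch below computes the planar swirling strength λ_ci (the imaginary part of the complex eigenvalues of the in-plane velocity-gradient tensor) from a PIV-like velocity field and thresholds it for a crude core-area count; function names and the threshold are illustrative assumptions:

```python
import numpy as np

def swirling_strength(u, v, dx=1.0, dy=1.0):
    """Planar swirling strength lambda_ci: imaginary part of the complex
    eigenvalues of the in-plane velocity-gradient tensor. Used here only
    as a simplified proxy for a Liutex-style rotation strength."""
    dudy, dudx = np.gradient(u, dy, dx)      # axis 0 = y, axis 1 = x
    dvdy, dvdx = np.gradient(v, dy, dx)
    trace_half = 0.5 * (dudx + dvdy)
    det = dudx * dvdy - dudy * dvdx
    disc = trace_half**2 - det               # eigenvalue discriminant
    return np.sqrt(np.maximum(-disc, 0.0))   # > 0 only where rotation dominates shear

def core_size(strength, threshold):
    """Count grid cells above a rotation-strength threshold as a crude
    measure of the projected vortex-core area."""
    return int(np.count_nonzero(strength > threshold))
```

For a solid-body rotation field (u = −y, v = x), λ_ci is uniform, so the thresholded "core" covers the whole grid, which is a quick sanity check for the implementation.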
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51906154 and 51576130) and the National Science and Technology Major Project (Grant No. 2017-V-0016-0069).
Abstract: The dynamic mechanism of vortex generation and evolution in a fully developed turbulent boundary layer with Re_θ = 97–194 is experimentally investigated. In this study, a moving single-frame and long-exposure (MSFLE) imaging method and a moving particle image velocimetry/particle tracking velocimetry (M-PIV/PTV) method are designed and implemented to measure the temporal and spatial evolution of vortex cores qualitatively and quantitatively, respectively. In addition, the Liutex vector, a recent mathematical definition and identification of the vortex core proposed by Liu's group, is applied for the first time in experiments for structural visualization and quantitative analysis of local fluid rotation. The results show that an intuitive process of vortex evolution can be clearly observed by tracking the vortex with MSFLE, and they verify that the roll-up of the shear layer induced by shear instability is the origin of vortex formation in turbulence. Furthermore, a quantitative investigation of the critical vortex core boundary (size) and its rotation strength is carried out based on Liutex vector field analysis of the M-PIV/PTV data. According to statistics of the relation between vortex core size and rotation strength over the whole process, the physical mechanism of vortex generation and evolution in a low-Reynolds-number turbulent boundary layer can be summarized as a four-dominant-state course: the synchronous linear segment (SL), absolute enhancement segment (AE), absolute diffusion segment (AD), and skewing dissipation segment (SD).
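For reference, the momentum-thickness Reynolds number Re_θ quoted above is defined as Re_θ = U∞·θ/ν, with momentum thickness θ = ∫ (u/U∞)(1 − u/U∞) dy over the boundary layer. A minimal sketch of how it could be estimated from a measured velocity profile (illustrative names; plain trapezoidal integration):

```python
import numpy as np

def momentum_thickness(y, u, u_inf):
    """Momentum thickness: theta = integral of (u/U)(1 - u/U) dy,
    integrated with the trapezoidal rule over the sampled profile."""
    ratio = u / u_inf
    f = ratio * (1.0 - ratio)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

def re_theta(y, u, u_inf, nu):
    """Momentum-thickness Reynolds number: Re_theta = U * theta / nu."""
    return u_inf * momentum_thickness(y, u, u_inf) / nu
```

For a linear profile u = U∞·y/δ on 0 ≤ y ≤ δ, the integral evaluates analytically to θ = δ/6, which gives a quick check of the numerical estimate.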
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61672268).
Abstract: Temporal action localization (TAL) is the task of detecting the start and end timestamps of action instances in an untrimmed video and classifying them. As the number of action categories per video increases, existing weakly-supervised TAL (W-TAL) methods with only video-level labels cannot provide sufficient supervision, so single-frame supervision has attracted the interest of researchers. Existing paradigms model single-frame annotations from the perspective of video snippet sequences, neglect the action discrimination of annotated frames, and pay insufficient attention to their correlations within the same category. Within a category, the annotated frames exhibit distinctive appearance characteristics or clear action patterns. Thus, a novel method is proposed that enhances action discrimination via category-specific frame clustering for W-TAL. Specifically, the K-means clustering algorithm aggregates the annotated discriminative frames of the same category, and the resulting exemplars are regarded as exhibiting the characteristics of that action category. Class activation scores are then obtained by calculating the similarities between a frame and the exemplars of each category. Category-specific representation modeling provides complementary guidance to the snippet-sequence modeling in the main pipeline. Accordingly, a convex combination fusion mechanism is presented for annotated frames and snippet sequences to enhance the consistency of action discrimination, generating a robust class activation sequence for precise action classification and localization. Owing to this supplementary guidance of action-discrimination enhancement for video snippet sequences, the method outperforms existing single-frame-annotation-based methods. Experiments on three datasets (THUMOS14, GTEA, and BEOID) show that the method achieves high localization performance compared with state-of-the-art methods.
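The clustering-and-fusion pipeline above can be sketched schematically. This is not the paper's implementation: the hand-rolled K-means, cosine similarity to exemplars, feature dimensionality, and fusion weight α are all illustrative assumptions:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal K-means: returns k cluster centres, used as the
    exemplars of one action category."""
    rng = np.random.default_rng(seed)
    centres = features[rng.choice(len(features), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)                  # assign each frame to a centre
        for j in range(k):
            if np.any(labels == j):
                centres[j] = features[labels == j].mean(axis=0)
    return centres

def class_scores(frame, exemplars_by_class):
    """Cosine similarity between one frame feature and each category's
    exemplars; the max over a category's exemplars is its activation."""
    scores = []
    for ex in exemplars_by_class:
        sim = ex @ frame / (np.linalg.norm(ex, axis=1) * np.linalg.norm(frame) + 1e-8)
        scores.append(sim.max())
    return np.array(scores)

def fuse(frame_scores, snippet_scores, alpha=0.5):
    """Convex combination of frame-level and snippet-level activations."""
    return alpha * frame_scores + (1.0 - alpha) * snippet_scores
```

The convex weight α trades off the exemplar-based frame evidence against the snippet-sequence branch; α = 0.5 weights them equally.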