Funding: Supported by the National Natural Science Foundation of China (62171088, U19A2052, 62020106011) and the Medico-Engineering Cooperation Funds from the University of Electronic Science and Technology of China (ZYGX2021YGLH215, ZYGX2022YGRH005).
Abstract: Deep neural networks (DNNs) have achieved great success in many data processing applications. However, their high computational complexity and storage cost make deep learning difficult to deploy on resource-constrained devices, and the associated power consumption is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning. In the spatial domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of parameters. In the temporal domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. Together with a series of compatible techniques, such as sparse pruning, quantization, and entropy coding, these methods can be assembled into an integrated framework with lower computational complexity and storage cost. In addition to surveying recent technical advances, we report two findings that motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms conventional sparsity measures such as the ℓ_1 norm for network compression. The other is the spatial-temporal balance of tensorized neural networks: to accelerate their training, it is crucial to exploit redundancy for both model compression and subspace training.
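The effective rank mentioned in the abstract has a compact closed form: normalize the singular values of a weight matrix into a probability distribution, take its Shannon entropy H, and report exp(H). The sketch below is not the authors' code; the toy layer, the rounding of the effective rank into a truncation budget, and the truncated-SVD compression step are illustrative assumptions meant only to show how such a measure can guide low-rank compression of a single layer.

```python
# Minimal sketch (assumed, not the paper's implementation) of the effective rank
# and a truncated-SVD compression of one weight matrix.
import numpy as np

def effective_rank(W: np.ndarray, eps: float = 1e-12) -> float:
    """Effective rank = exp(Shannon entropy of the normalized singular values)."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / (s.sum() + eps)              # normalize singular values to a distribution
    H = -np.sum(p * np.log(p + eps))     # Shannon entropy (natural log)
    return float(np.exp(H))

def low_rank_compress(W: np.ndarray, r: int) -> np.ndarray:
    """Rank-r approximation of W via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "layer": a nearly rank-32 weight matrix of size 256 x 512 plus small noise.
    W = rng.standard_normal((256, 32)) @ rng.standard_normal((32, 512))
    W += 0.01 * rng.standard_normal((256, 512))

    r_eff = effective_rank(W)
    r = max(1, int(round(r_eff)))        # use the effective rank as the truncation budget
    W_hat = low_rank_compress(W, r)

    rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
    stored = r * (W.shape[0] + W.shape[1])
    print(f"effective rank ≈ {r_eff:.1f}, truncation rank {r}")
    print(f"relative error {rel_err:.3f}, stored params {stored} vs {W.size} ({stored / W.size:.1%})")
```

On this toy layer the rank-r factors store far fewer parameters than the dense matrix while keeping the reconstruction error small, which is the storage-saving effect the abstract describes.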
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11301137 and 11371036) and the National Science Foundation of Hebei Province of China (Grant No. A2014205100).
Abstract: Background modeling and subtraction is a fundamental problem in video analysis. Many algorithms have been developed to date, but challenges remain in complex environments, especially dynamic scenes in which the background itself is moving, such as rippling water and swaying trees. In this paper, a novel background modeling method for dynamic scenes is proposed by combining tensor representation with swarm intelligence. We maintain several video patches, naturally represented as higher-order tensors, to capture the patterns of the background, and use tensor low-rank approximation to model its dynamic nature. Furthermore, we introduce an ant colony algorithm to improve performance. Experimental results show that the proposed method is robust and adaptive in dynamic environments, and that moving objects can be cleanly separated from the complex dynamic background.
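To make the low-rank idea concrete, the sketch below is a deliberately simplified matrix analogue, not the authors' method: vectorized frames are stacked as columns, a truncated SVD recovers a slowly varying background, and the residual is thresholded to flag moving objects. The frame sizes, rank, and threshold are invented for illustration, and the paper's higher-order tensor patches and ant colony refinement are not reproduced here.

```python
# Simplified, assumed illustration of low-rank background modeling on a
# pixels x frames data matrix (the paper instead uses tensor patches + ACO).
import numpy as np

def low_rank_background(frames: np.ndarray, rank: int = 1, thresh: float = 0.2):
    """frames: array of shape (T, H, W) with values in [0, 1].
    Returns (background, foreground_mask), each of shape (T, H, W)."""
    T, H, W = frames.shape
    D = frames.reshape(T, H * W).T                   # pixels x frames data matrix
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    B = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]      # low-rank background estimate
    mask = np.abs(D - B) > thresh                    # foreground where residual is large
    return B.T.reshape(T, H, W), mask.T.reshape(T, H, W)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic sequence: a static gradient background plus a small moving block.
    T, H, W = 30, 48, 64
    bg = np.tile(np.linspace(0.2, 0.8, W), (H, 1))
    frames = np.repeat(bg[None], T, axis=0) + 0.02 * rng.standard_normal((T, H, W))
    for t in range(T):
        frames[t, 10:18, 2 * t:2 * t + 8] = 1.0      # moving foreground object

    background, mask = low_rank_background(frames, rank=1, thresh=0.2)
    print("foreground pixels per frame:", mask.reshape(T, -1).sum(axis=1)[:5])
```

A rank-1 background suffices for this static synthetic scene; dynamic backgrounds such as water or foliage need a higher rank, which is the motivation for the tensor low-rank model in the paper.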