Studies show that the encoding technologies in H.264/AVC, including prediction and transform coding, are essential; however, they are considerably more complex than those of MPEG-4, a widely adopted standard method. Consequently, the computational load of H.264/AVC is significantly higher than that of MPEG-4. The present study aims to reduce the computational expense of the international standard moving-image compression coding system H.264/AVC. Inter prediction is the most effective compression technique, accounting for up to 60% of the entire encoding. Accordingly, prediction error and motion vector information are used to simplify the computation of the inter predictive coding technology. In the initial frame, motion compensation is performed in all target modes, and basic information is collected and analyzed. After the initial frame, motion compensation is performed only in the intermediate 8×8 mode, with switching driven by the collected information. To evaluate the effectiveness of the proposed method for motion image compression coding, four types of motion images defined by the International Telecommunication Union (ITU) are employed. Based on the obtained results, it is concluded that the developed method reduces the computation while only slightly degrading image quality and the amount of information.
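The mode-restriction idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the SAD cost metric, the mode names, and the `choose_mode` helper are assumptions chosen for illustration: a full search evaluates every candidate partition mode, while the reduced search after the initial frame evaluates only the 8×8 mode.

```python
import numpy as np

def sad(block, ref):
    """Sum of absolute differences: a common inter-prediction cost metric."""
    return int(np.abs(block.astype(int) - ref.astype(int)).sum())

def choose_mode(block, refs_by_mode, full_search):
    """Pick the inter mode with the lowest SAD cost.

    full_search=True  -> evaluate every candidate mode (initial frame);
    full_search=False -> evaluate only the 8x8 mode (later frames),
    trading a small quality loss for far fewer cost evaluations.
    """
    modes = refs_by_mode if full_search else {"8x8": refs_by_mode["8x8"]}
    costs = {m: sad(block, r) for m, r in modes.items()}
    best = min(costs, key=costs.get)
    return best, costs[best], len(costs)  # mode, cost, number of modes tried
```

A usage note: the number of modes tried is returned so the computational saving (here 3 evaluations versus 1) is directly visible.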
Extraction of traffic information from images or video sequences is a hot research topic in intelligent transportation systems and computer vision. A real-time traffic information extraction method based on compressed video with interframe motion vectors is proposed for speed, density, and flow detection under a fixed camera setting and a well-defined environment. The motion vectors are first separated from the compressed video streams and then filtered to eliminate incorrect and noisy vectors using the well-defined environmental knowledge. By applying a projective transform to the filtered motion vectors, speed is calculated from motion vector statistics, density is estimated from motion vector occupancy, and flow is detected from the combination of speed and density. A prototype system for sky-camera traffic monitoring using MPEG video has been implemented, and experimental results prove the effectiveness of the proposed method.
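The three estimates above (speed from motion vector statistics, density from occupancy, flow from their combination) can be sketched in a few lines. This is a simplified illustration, not the paper's system; the calibration parameter `mv_per_px`, the region size `lane_capacity`, and the assumption that non-zero filtered vectors correspond to vehicles are all hypothetical simplifications.

```python
import numpy as np

def traffic_stats(motion_vectors, mv_per_px, fps, lane_capacity):
    """Estimate speed, density, and flow from filtered interframe motion vectors.

    motion_vectors: per-macroblock displacements in pixels/frame, already
                    filtered and projected to road coordinates.
    mv_per_px:      metres represented by one pixel (camera calibration).
    fps:            frame rate of the compressed video stream.
    lane_capacity:  macroblock count covering the monitored road region.
    """
    mv = np.asarray(motion_vectors, dtype=float)
    moving = mv[np.abs(mv) > 0]                  # non-zero vectors = vehicles
    # speed: mean displacement -> metres/second -> km/h
    speed_kmh = float(np.mean(np.abs(moving))) * mv_per_px * fps * 3.6 if moving.size else 0.0
    # density: fraction of the region occupied by moving macroblocks
    density = moving.size / lane_capacity
    # flow ~ density * speed (the fundamental traffic relation)
    flow = density * speed_kmh
    return speed_kmh, density, flow
```

The flow computation follows the standard fundamental relation of traffic theory, flow = density × speed, which matches the paper's "combination of speed and density".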
Gait representation is an important issue in gait recognition. A simple yet efficient approach, called the Interframe Variation Vector (IVV), is proposed. The IVV considers the spatiotemporal motion characteristics of gait and uses the shape variation between successive frames to represent the gait signature. Unlike other features, the IVV does not condense a gait sequence into a single image, which would lose the temporal order; instead, it records the whole moving process as an IVV sequence. The IVV can encode the essential features of gait and preserve all the movements of the limbs. Experimental results show that the proposed gait representation achieves promising recognition performance.
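A minimal sketch of the idea, under the assumption that each frame is a binary silhouette and that "shape variation between successive frames" is measured as a per-row count of changed pixels (the exact vector definition in the paper may differ):

```python
import numpy as np

def ivv_sequence(silhouettes):
    """Interframe Variation Vector sketch.

    For each pair of successive binary silhouettes, record the per-row
    count of changed pixels, preserving the temporal order of the gait
    cycle instead of collapsing it into a single image.
    """
    seq = []
    for prev, curr in zip(silhouettes, silhouettes[1:]):
        diff = np.logical_xor(prev, curr)   # pixels whose shape changed
        seq.append(diff.sum(axis=1))        # row-wise variation vector
    return np.stack(seq)                    # (T-1, height) IVV sequence
```

Because the output keeps one vector per frame transition, the full movement of the limbs across the sequence is preserved, which is the property the abstract emphasizes.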
With the growth of digital media manipulation in today's era, driven by the availability of readily handy tampering software, the authenticity of records is at high risk, especially for video. There is a dire need to detect such problems and take the necessary actions. In this work, we propose an approach to detect interframe video forgery using deep features obtained from a parallel deep neural network model together with thorough analytical computations. The proposed approach uses only the deep features extracted from the CNN model and then applies a conventional mathematical approach to these features to find the forgery in the video. The correlation coefficient is calculated from the deep features of adjacent frames rather than directly from the frames. We divide the procedure into two phases: video forgery detection and video forgery classification. In the detection phase, the approach determines whether the input video is original or tampered. If the video is not original, it is passed to the classification phase, which examines the forged video for insertion forgery and deletion forgery, and again checks for originality. The proposed work is generalized and tested on two different datasets. The experimental results show that our approach can detect forgery with an accuracy of 91% on the VIFFD dataset and 90% on the TDTV dataset, and classify the type of forgery (insertion or deletion) with an accuracy of 82% on the VIFFD dataset and 86% on the TDTV dataset. This work can help in the analysis of original and tampered video in various domains.
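The core detection signal described above can be sketched as follows. This is an illustrative simplification, not the paper's pipeline: the feature vectors stand in for CNN outputs, and the threshold value is a hypothetical choice; the idea is only that a sudden drop in correlation between adjacent frames' deep features marks a candidate tampering point.

```python
import numpy as np

def detect_tampering(features, threshold=0.9):
    """Flag frame transitions whose deep-feature correlation is anomalous.

    features: (T, D) array-like, one feature vector per frame (e.g. taken
              from a CNN). In an untampered video, adjacent frames yield
    highly correlated features; insertion or deletion breaks that continuity.
    Returns the indices t of suspicious transitions (frame t -> frame t+1).
    """
    feats = np.asarray(features, dtype=float)
    suspects = []
    for t in range(len(feats) - 1):
        r = np.corrcoef(feats[t], feats[t + 1])[0, 1]  # Pearson correlation
        if r < threshold:
            suspects.append(t)
    return suspects
```

Computing the correlation on deep features rather than raw pixels, as the abstract notes, makes the signal less sensitive to compression noise and small illumination changes.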
Funding: supported by the QingLan Project of Jiangsu Province and the National Science Fund of China (Nos. 61806088, 61902160), and by the Changzhou Science and Technology Support Plan (No. CE20185044).
基金National Natural Science Foundation of China ( No.60873179)Shenzhen Technology Fundamental Research Project, China ( No.JC200903180630A)Doctoral Program Foundation of Institutions of Higher Education of China ( No.20090121110032)