Abstract: A 360° video stream gives users the choice of viewing their own point of interest within the immersive content. However, performing head or hand manipulations to reach an interesting scene in a 360° video is tedious, and the user may miss the frame of interest during the movement or lose it entirely. Automatically extracting the user's point of interest (UPI) in a 360° video is also challenging because of subjectivity and differences in viewing comfort. To handle these challenges and provide users with the most visually pleasant view, we propose an automatic approach that utilizes two CNN models: an object detector and an aesthetic-score estimator for the scene. The proposed framework is threefold: pre-processing, the DeepDive architecture, and a view-selection pipeline. In the first stage, an input 360° video frame is divided into three sub-frames, each covering a 120° view. In the second stage, each sub-frame is passed through the CNN models to extract visual features and compute an aesthetic score. Finally, the decision pipeline selects the sub-frame with the salient object, based on the detected objects and the calculated aesthetic score. Unlike other state-of-the-art techniques, which are domain-specific (e.g., supporting only sports 360° videos), our system supports most 360° video genres. Performance of the proposed framework is evaluated on our own dataset, collected from various websites, across different categories of 360° videos.
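The three-stage pipeline described above can be summarized in a minimal sketch: split an equirectangular frame into three 120° sub-frames, score each with the two CNN models, and pick the best view. The model interfaces, the saliency measure, and the additive score combination below are assumptions for illustration; the paper's actual DeepDive architecture and decision rule may differ.

```python
# Hedged sketch of the abstract's view-selection pipeline (not the authors' code).
import numpy as np


def split_into_subframes(frame: np.ndarray) -> list:
    """Pre-processing: divide an equirectangular 360° frame into three
    horizontal sub-frames, each covering roughly a 120° field of view."""
    h, w, _ = frame.shape
    third = w // 3
    return [frame[:, i * third:(i + 1) * third] for i in range(3)]


def select_view(frame: np.ndarray, object_detector, aesthetic_model) -> np.ndarray:
    """Score each sub-frame with the two CNN models and return the sub-frame
    whose detected objects and aesthetic score give the highest combined score."""
    best_view, best_score = None, -np.inf
    for sub in split_into_subframes(frame):
        detections = object_detector(sub)    # hypothetical: list of (label, confidence)
        aesthetic = aesthetic_model(sub)      # hypothetical: scalar aesthetic score
        saliency = max((conf for _, conf in detections), default=0.0)
        score = saliency + aesthetic          # assumed additive combination of the two cues
        if score > best_score:
            best_view, best_score = sub, score
    return best_view
```

In practice, `object_detector` and `aesthetic_model` would be trained CNNs applied per sub-frame; the sketch only illustrates how their outputs could feed the decision pipeline that selects one 120° view per frame.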