Semantically Relevant Scene Detection Using Deep Learning
Conference proceedings article
Publication Details
Author list: Dipanita Chakraborty, Werapon Chiracharit, Kosin Chamnongthai
Publication year: 2021
Title of series: The 13th Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2021)
Start page: 1
End page: 4
Number of pages: 4
Abstract
Automatic detection of semantically relevant scenes makes content-based video browsing, retrieval, and indexing faster and more efficient. Categorizing similar meaningful scenes in a long video is in high demand in video processing for understanding the correlation between different scenes, for example, which scenes contain a fighting action, or in which scenes the same criminal appears. Previous studies have proposed supervised methods for understanding the semantic scenes or concepts of a video; however, these methods cannot determine the semantic relevance between different scenes. In this paper, a CNN-deeper-LSTM based method is proposed that consists of three steps. First, the input video is segmented into shots; second, features are extracted from candidate video frames to serve as feature descriptors; finally, these feature descriptors are analyzed and evaluated to recognize semantically relevant scenes. Experimental results show the efficiency of the proposed method.
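The three-step pipeline described in the abstract (shot segmentation, per-frame feature extraction, relevance analysis) can be sketched in simplified form as follows. This is a minimal illustration, not the paper's method: the `cnn` argument stands in for a learned CNN feature extractor, and a cosine-similarity threshold stands in for the deeper-LSTM relevance model; the function names and the threshold value are assumptions for illustration only.

```python
import numpy as np

def extract_shot_descriptors(shots, cnn):
    # Step 2 (sketch): one descriptor per shot, here the mean of
    # per-frame feature vectors produced by the stand-in `cnn`.
    return [np.mean([cnn(frame) for frame in shot], axis=0) for shot in shots]

def cosine(a, b):
    # Cosine similarity between two descriptor vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def relevant_shot_pairs(descriptors, threshold=0.8):
    # Step 3 (sketch): flag shot pairs whose descriptors are close enough
    # to be considered semantically relevant to each other.
    pairs = []
    for i in range(len(descriptors)):
        for j in range(i + 1, len(descriptors)):
            if cosine(descriptors[i], descriptors[j]) >= threshold:
                pairs.append((i, j))
    return pairs

# Toy usage: frames are 2-D vectors and the "CNN" is the identity map,
# so shots 0 and 1 are near-duplicates while shot 2 differs.
toy_cnn = lambda frame: np.asarray(frame, dtype=float)
shots = [
    [[1.0, 0.0], [1.0, 0.0]],  # shot 0
    [[0.9, 0.1]],              # shot 1, similar to shot 0
    [[0.0, 1.0]],              # shot 2, dissimilar
]
descriptors = extract_shot_descriptors(shots, toy_cnn)
print(relevant_shot_pairs(descriptors))  # → [(0, 1)]
```

In the paper's actual setting, the descriptors would come from a trained CNN and the relevance decision from the deeper-LSTM stage; the fixed threshold here only mimics that final decision.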