Semantically Relevant Scene Detection Using Deep Learning

Conference proceedings article


Publication Details

Author list: Dipanita Chakraborty, Werapon Chiracharit, Kosin Chamnongthai

Publication year: 2021

Title of series: The 13th Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2021)

Start page: 1

End page: 4

Number of pages: 4


Abstract

Automatic detection of semantically relevant scenes makes content-based video browsing, video retrieval, and video indexing faster and more efficient. Grouping similar meaningful scenes of a long video is in high demand in video processing for understanding the correlation between different scenes, for example, which scenes contain a fighting action, or in which scenes the same criminal appears. Previous studies have proposed supervised methods for understanding the semantic scenes or concepts of a video; however, these methods are unable to determine the semantic relevance between different scenes. In this paper, a CNN-deeper-LSTM based method is proposed that consists of three steps. First, the input video is segmented into shots; second, features are extracted from candidate video frames and used as feature descriptors; finally, these feature descriptors are analyzed and evaluated to recognize semantically relevant scenes. Experimental results demonstrate the efficiency of the proposed method.
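
The sketch below illustrates the general shape of the pipeline described in the abstract: a CNN extracts per-frame features from candidate frames of a shot, an LSTM aggregates them into a shot-level descriptor, and descriptors are compared to judge semantic relevance between scenes. The specific choices here (a ResNet-18 backbone, a 2-layer LSTM, cosine similarity as the relevance score) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a CNN + LSTM scene-descriptor pipeline (assumed design,
# not the paper's exact model): CNN per-frame features -> LSTM shot descriptor
# -> cosine similarity between descriptors as a semantic-relevance score.
import torch
import torch.nn as nn
from torchvision import models

class SceneDescriptor(nn.Module):
    """Encode a shot (a sequence of candidate frames) into one descriptor."""
    def __init__(self, hidden_size=256, num_layers=2):
        super().__init__()
        backbone = models.resnet18(weights=None)   # CNN feature extractor
        backbone.fc = nn.Identity()                # keep 512-d pooled features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden_size, num_layers, batch_first=True)

    def forward(self, frames):                     # frames: (T, 3, 224, 224)
        feats = self.cnn(frames).unsqueeze(0)      # (1, T, 512) per-frame features
        _, (h, _) = self.lstm(feats)               # temporal aggregation over the shot
        return h[-1].squeeze(0)                    # (hidden_size,) shot descriptor

def relevance(desc_a, desc_b):
    """Cosine similarity between two shot descriptors; higher means more relevant."""
    return torch.cosine_similarity(desc_a, desc_b, dim=0).item()

if __name__ == "__main__":
    model = SceneDescriptor().eval()
    shot1 = torch.randn(8, 3, 224, 224)            # 8 candidate frames (dummy data)
    shot2 = torch.randn(8, 3, 224, 224)
    with torch.no_grad():
        d1, d2 = model(shot1), model(shot2)
    print(f"semantic relevance: {relevance(d1, d2):.3f}")
```

In practice, shots would first be obtained by shot-boundary detection and candidate frames sampled from each shot; the descriptors could then be clustered or thresholded on the relevance score to group semantically relevant scenes.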



