Semantically Relevant Scene Detection Using Deep Learning

Conference proceedings article


Authors / Editors


Strategic Research Themes


Publication Details

Author list: Dipanita Chakraborty, Werapon Chiracharit, Kosin Chamnongthai

Year of publication (A.D.): 2021

Series title: The 13th Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2021)

First page: 1

Last page: 4

Number of pages: 4


Abstract

Automatic detection of semantically relevant scenes will make content-based video browsing, video retrieval, and video indexing faster and more efficient. Categorizing similar meaningful scenes in a large video is in high demand in video processing for understanding the correlation between different scenes of a video, for example, which scenes contain a fighting action, or in which scenes the same criminal appears. Previous research has presented supervised methods for semantic scene or concept understanding of a video; however, those methods are unable to determine the semantic relevance between different scenes. In this paper, a CNN-deeper-LSTM based method is proposed, which consists of three steps. First, the input video is segmented into shots; second, features are extracted from candidate video frames and serve as feature descriptors; finally, these feature descriptors are analyzed and evaluated to recognize semantically relevant scenes. Experimental results show the efficiency of the proposed method.
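The three-step pipeline in the abstract can be sketched in simplified form. This is a minimal illustration, not the authors' implementation: the CNN feature extractor and deeper LSTM are replaced here by stand-ins (per-frame color histograms as "features" and cosine similarity between shot descriptors as the relevance test); all function names, thresholds, and the toy data are assumptions for illustration only.

```python
# Sketch of the abstract's pipeline: (1) shot segmentation,
# (2) per-shot feature descriptors, (3) relevance between shots.
# Histograms and cosine similarity stand in for the paper's
# CNN features and deeper-LSTM analysis.

import math

def hist_diff(h1, h2):
    """L1 distance between two normalized frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def segment_into_shots(frames, threshold=0.5):
    """Step 1: split the frame sequence at large histogram jumps."""
    shots, start = [], 0
    for i in range(1, len(frames)):
        if hist_diff(frames[i - 1], frames[i]) > threshold:
            shots.append(frames[start:i])
            start = i
    shots.append(frames[start:])
    return shots

def shot_descriptor(shot):
    """Step 2: one descriptor per shot (here: mean of its frames;
    the paper extracts CNN features from candidate frames instead)."""
    n = len(shot)
    return [sum(f[k] for f in shot) / n for k in range(len(shot[0]))]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def relevant_pairs(descriptors, sim_threshold=0.95):
    """Step 3: report shot pairs whose descriptors are highly similar."""
    pairs = []
    for i in range(len(descriptors)):
        for j in range(i + 1, len(descriptors)):
            if cosine(descriptors[i], descriptors[j]) >= sim_threshold:
                pairs.append((i, j))
    return pairs

# Toy "video" of 3-bin histograms: frames 0-2 and 5-6 look alike,
# frames 3-4 form a visually different middle shot.
frames = [
    [0.80, 0.10, 0.10], [0.79, 0.11, 0.10], [0.81, 0.09, 0.10],
    [0.10, 0.80, 0.10], [0.11, 0.79, 0.10],
    [0.80, 0.10, 0.10], [0.80, 0.11, 0.09],
]
shots = segment_into_shots(frames)
descs = [shot_descriptor(s) for s in shots]
print(len(shots), relevant_pairs(descs))  # shots 0 and 2 are relevant to each other
```

On the toy data, the first and third shots receive near-identical descriptors and are flagged as semantically relevant, while the middle shot is not, mirroring the paper's goal of linking visually distant scenes that share content.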


Keywords

No relevant data found


Last updated 2024-02-19 at 23:05