Unsupervised object of interest discovery in multi-view video sequence
Conference proceedings article
Authors/Editors
Strategic research area
No relevant information found
Publication details
Author list: Thummanuntawat T., Kumwilaisak W., Chinrungrueng J.
Year published (A.D.): 2009
Volume number: 3
First page: 1622
Last page: 1627
Number of pages: 6
ISBN: 9788955191387
ISSN: 1738-9445
Language: English-Great Britain (EN-GB)
Abstract
This paper presents a novel algorithm for unsupervised object-of-interest discovery in multi-view video sequences. We classify a multi-view video sequence according to its degree of movement. For a sequence with movement, we first group video frames along and across views into a group of pictures (GOP). Key points, i.e., feature vectors representing the textures in the GOP's video frames, are extracted using the Scale-Invariant Feature Transform (SIFT) and clustered using the K-means algorithm. Each key point is assigned a visual word according to its cluster. Patches representing small textured areas are generated using the Maximally Stable Extremal Regions (MSER) operator. A patch can contain more than one key point, and hence more than one visual word; therefore, a patch can be represented by different visual words to different degrees. A motion detection algorithm determines the movement regions in video frames; patches in these regions are more likely to be parts of the object of interest. Combining the developed spatial and appearance models with the motion detection, we compute the likelihood that each patch belongs to the object of interest. The group of patches with high likelihoods is clustered and labeled as the object of interest. When there is no significant movement, we assume that human subjects are the most important objects in the sequence, and a face detection algorithm is used to locate the object of interest. When there are no human subjects in the sequence, the frequencies of the visual words occurring in the sequence are used to identify the object of interest. This is possible because the patches that form the object of interest can be derived from the visual words.
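The visual-word and patch-scoring steps described above can be sketched in a few lines. This is a minimal pure-Python illustration under stated assumptions, not the paper's implementation: the function names (`kmeans`, `visual_word`, `patch_likelihood`, `motion_mask`) are hypothetical, and the simple multiplicative motion prior stands in for the paper's more elaborate spatial and appearance models.

```python
import random


def dist2(a, b):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def kmeans(points, k, iters=20, seed=0):
    """Cluster descriptor vectors into k visual words (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each descriptor to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[idx].append(p)
        # Recompute each center as the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = [sum(dim) / len(c) for dim in zip(*c)]
    return centers


def visual_word(desc, centers):
    """A key point's visual word is the index of its nearest cluster center."""
    return min(range(len(centers)), key=lambda i: dist2(desc, centers[i]))


def motion_mask(prev_frame, frame, thresh=10):
    """Toy motion detector: mark pixels whose intensity change exceeds thresh."""
    return [[abs(a - b) > thresh for a, b in zip(r1, r2)]
            for r1, r2 in zip(prev_frame, frame)]


def patch_likelihood(word_hist, word_weights, in_motion, motion_boost=2.0):
    """Score a patch: an appearance term (weighted visual-word histogram)
    scaled by a motion prior, so patches in moving regions score higher."""
    appearance = sum(word_weights.get(w, 0.0) * n for w, n in word_hist.items())
    return appearance * (motion_boost if in_motion else 1.0)
```

In practice the descriptors would be 128-dimensional SIFT vectors and the movement regions would come from a real motion detector; the toy version only shows how visual words and a motion prior combine into a per-patch likelihood, after which the high-likelihood patches would be clustered into the object of interest.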
Experimental results on various types of multi-view video sequences show that the proposed algorithm discovers the objects of interest correctly more than 80% of the time on average.
Keywords
Maximally stable extremal regions, Motion detection, Multi-view video, Spatial and appearance modeling