Deep learning-based Multimodal Intention Retrieval for Human-Robot Collaboration

Conference proceedings article


Publication details

Author list: Patipon Buason, Orapadee Joochim

Publication year (AD): 2024

Language: English-Great Britain (EN-GB)


Abstract

The world is on the cusp of entering the Fifth Industrial Revolution, which will focus on integrating human creativity with the efficiency, intelligence, and precision of machines. Achieving this objective requires support systems capable of handling data from diverse sources to obtain comprehensive information across all dimensions. These systems must also adapt to changing environments and conditions. These characteristics are the goals of this research. We propose a system called multimodal intention retrieval, which uses a multimodal fusion network to transform input data from any modality into vector representations. These representations are then classified with the k-nearest neighbor algorithm, and the resulting classes are used to label the intentions of data stored in a database, enabling the system to respond to user queries regardless of the modality of the query input. The results demonstrate the potential for integrating this system with traditional robot control systems.
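
The abstract describes the pipeline at a high level: every input, whatever its modality, is mapped to a vector by a fusion network and the vector is then classified with k-nearest neighbor to obtain an intention label. The sketch below illustrates only that flow; the `encode` stub, the 128-dimensional embedding size, and scikit-learn's `KNeighborsClassifier` are illustrative assumptions standing in for the authors' fusion network and classification stage, not their implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder encoder: a random projection keyed on the input. In the paper's
# system, the multimodal fusion network would map text, image, or other
# modality inputs into a shared embedding space instead.
def encode(sample, modality, dim=128):
    rng = np.random.default_rng(abs(hash((str(sample), modality))) % (2**32))
    return rng.standard_normal(dim)

# Labelled examples: (raw input, modality, intention class) -- toy data.
train = [
    ("pick up the red block", "text",  "grasp_object"),
    ("frame_0042.png",        "image", "grasp_object"),
    ("move to the left bin",  "text",  "relocate"),
]

X = np.stack([encode(sample, modality) for sample, modality, _ in train])
y = [label for _, _, label in train]

# k-nearest-neighbor classification over the shared embeddings,
# as described in the abstract.
knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# A query in any modality is encoded the same way and assigned an intention
# label, which can then be used to retrieve matching records from a database.
query_vec = encode("grab that cube", "text").reshape(1, -1)
print(knn.predict(query_vec))  # predicted intention label (arbitrary here,
                               # since the stub encoder is random)
```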


Keywords

No related data found


Last updated 2024-11-29 at 12:00