Evaluation of small-scale deep learning architectures in Thai speech recognition
Conference proceedings article
Authors/Editors
Strategic research themes
No relevant information found
Publication details
Author list: Kaewprateep J., Prom-On S.
Publisher: Hindawi
Publication year (A.D.): 2018
First page: 60
Last page: 64
Number of pages: 5
ISBN: 9781509052097
ISSN: 0146-9428
eISSN: 1745-4557
Language: English-Great Britain (EN-GB)
Abstract
This paper presents a performance evaluation study of small-scale deep learning neural networks for a Thai speech recognition task. Convolutional neural network and long short-term memory networks were built with a relatively small dataset and small constructs. The aim of this study is to determine which method would be suitable for a small-scale deep learning study. A relatively small speech corpus was used to build deep learning neural networks with two different architectures: a convolutional neural network (CNN) model and a long short-term memory (LSTM) model. The models were evaluated using a cross-validation technique and compared to one another. The results show that the CNN outperformed the LSTM for small-scale deep learning. This suggests that, with a limited dataset and a small-scale architecture, the CNN is the more suitable choice for the speech recognition study. © 2018 IEEE.
Keywords
Long short-term memory network, Thai speech recognition
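To illustrate the kind of comparison described in the abstract, the sketch below builds a small CNN and a small LSTM in Keras and scores each with k-fold cross-validation. It is a minimal, hypothetical example only: the feature shape, layer sizes, number of classes, fold count, and the synthetic stand-in data are assumptions for illustration, not the corpus or configuration used in the paper.

```python
# Illustrative sketch only: hyperparameters, feature shapes, and the synthetic
# data below are assumptions, not the configuration reported in the paper.
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras import layers, models

NUM_CLASSES = 10         # assumed number of Thai speech classes
FRAMES, COEFFS = 98, 13  # assumed MFCC feature shape (frames x coefficients)

def build_cnn():
    # Small 2-D CNN over the time-frequency "image" of MFCC features.
    return models.Sequential([
        layers.Input(shape=(FRAMES, COEFFS, 1)),
        layers.Conv2D(16, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_lstm():
    # Small LSTM reading the MFCC frames as a time sequence.
    return models.Sequential([
        layers.Input(shape=(FRAMES, COEFFS)),
        layers.LSTM(64),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# Synthetic stand-in for a small speech corpus (random features and labels).
X = np.random.randn(200, FRAMES, COEFFS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=200)

for name, builder, reshape in [("CNN", build_cnn, lambda a: a[..., None]),
                               ("LSTM", build_lstm, lambda a: a)]:
    scores = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                     random_state=0).split(X):
        model = builder()
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(reshape(X[train_idx]), y[train_idx], epochs=3, verbose=0)
        _, acc = model.evaluate(reshape(X[test_idx]), y[test_idx], verbose=0)
        scores.append(acc)
    print(f"{name}: mean cross-validation accuracy = {np.mean(scores):.3f}")
```

With a real corpus, the random arrays would be replaced by extracted MFCC features and labels; the per-fold mean accuracy then gives the kind of head-to-head CNN vs. LSTM comparison the abstract describes.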