Automated Essay Scoring of Thai Engineering Students Using Large Language Models
Conference proceedings article
Authors/Editors
Strategic Research Area Group
Publication details
Author list: Sirinaovakul B.; Techadisai K.
Publisher: Institute of Electrical and Electronics Engineers Inc.
Publication year (AD): 2025
ISBN: 9798331522230
Language: English-Great Britain (EN-GB)
Abstract
This study investigates the effectiveness of Large Language Models (LLMs) in Automated Essay Scoring (AES) by comparing teacher-assigned scores with those generated by zero-shot and few-shot prompting approaches. The dataset comprises forty student essays, each evaluated by a teacher, a zero-shot approach, and a few-shot approach. The analysis focuses on the correlation between the teacher's scores and the model-generated scores to determine the alignment and reliability of LLMs in AES. The objective is to compare the performance of the zero-shot and few-shot approaches in automated essay grading using LLMs and to evaluate their alignment with human grading. The methodology includes dataset preparation, model configuration, comparison with human grading, and evaluation of results. The findings indicate that both the zero-shot and few-shot approaches exhibit only a weak positive correlation with the teacher's scores.
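The correlation analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual code or data: the score lists below are hypothetical, and the paper does not specify which correlation coefficient was used (Pearson's r is assumed here).

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for illustration only (not the study's data);
# the real dataset contains forty essays.
teacher   = [7, 5, 8, 6, 9, 4, 7, 6]
zero_shot = [6, 6, 7, 5, 7, 6, 8, 5]
few_shot  = [7, 5, 7, 6, 8, 5, 7, 6]

print(f"zero-shot vs teacher: r = {pearson(teacher, zero_shot):.2f}")
print(f"few-shot  vs teacher: r = {pearson(teacher, few_shot):.2f}")
```

A coefficient near 0 indicates weak alignment with the human grader, consistent with the paper's reported finding of a weak positive correlation for both prompting approaches.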
Keywords
No relevant data found.