Automated Essay Scoring of Thai Engineering Students Using Large Language Models

Conference proceedings article


Authors/Editors


Strategic Research Themes


Publication Details

Author list: Sirinaovakul B.; Techadisai K.

Publisher: Institute of Electrical and Electronics Engineers Inc.

Publication year (A.D.): 2025

ISBN: 9798331522230

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-105014425686&doi=10.1109%2FECTI-CON64996.2025.11101071&partnerID=40&md5=aebbf7b9b7ec1767a7a3a146bf07ab03

Language: English-Great Britain (EN-GB)


View on publisher's website


Abstract

This study investigates the effectiveness of Large Language Models (LLMs) in Automated Essay Scoring (AES) by comparing teacher-assigned scores with those generated by zero-shot and few-shot approaches. The dataset comprises the scores of forty students' essays, each evaluated by a teacher, a zero-shot approach, and a few-shot approach. The analysis focuses on the correlation between the teacher's scores and the model-generated scores to determine the alignment and reliability of LLMs in AES. The objective is to compare the performance of zero-shot and few-shot approaches in automated essay grading using LLMs and to evaluate their alignment with human grading. The methodology includes dataset preparation, model configuration, comparison with human grading, and evaluation of results. The findings indicate that both the zero-shot and few-shot approaches exhibit a weak positive correlation with the teacher's scores. © 2025 IEEE. All rights reserved.
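The abstract's evaluation step, comparing teacher-assigned scores against model-generated ones via correlation, can be sketched as below. This is a minimal illustration only: the score lists are invented placeholders, not data from the study, and the study does not specify which correlation coefficient was used (Pearson is assumed here).

```python
# Hedged sketch: measuring alignment between teacher and LLM essay scores
# with Pearson correlation. All scores below are hypothetical examples.
from math import sqrt


def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term and the two standard-deviation terms (unnormalised;
    # the 1/n factors cancel in the ratio).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)


# Invented example: teacher vs. zero-shot model scores for eight essays
teacher = [7, 5, 8, 6, 9, 4, 7, 6]
zero_shot = [6, 6, 7, 5, 7, 6, 8, 5]
r = pearson(teacher, zero_shot)
print(f"Pearson r = {r:.3f}")  # a weak positive r would mirror the paper's finding
```

In practice one would compute this for both the zero-shot and few-shot score lists against the teacher's scores and compare the two coefficients.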


Keywords

No relevant data found.


Last updated 2025-02-12 at 00:00