Examining the Efficacy of Transformer Models in Radiology Report Labeling within a Thai Hospital

Conference proceedings article


Authors / Editors


Strategic Research Themes


Publication details

Authors: Larpkiattaworn W., Promwisat T., Chamveha I., Chaisangmongkon W.

Publisher: Institute of Electrical and Electronics Engineers Inc.

Publication year: 2024

First page: 226

Last page: 231

Number of pages: 6

ISBN: 9798350344349

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85189942610&doi=10.1109%2fICAIIC60209.2024.10463290&partnerID=40&md5=a7f5753f86587f3b519b898c04a9d651

Language: English-Great Britain (EN-GB)




Abstract

This study explores the application of Transformer models, specifically CheXbert and CharacterBERT, in extracting labels from radiology reports in a real-world clinical setting of a Thai hospital. This setting presents unique challenges, such as spelling errors, grammar mistakes, and diverse report formats, leading to 'noisy labels'. Previous natural language processing systems, including rule-based algorithms and Transformers, have been used for this task, but they face difficulties in such environments. Despite these challenges, our research demonstrates that training Transformers on a small dataset is sufficient to outperform rule-based labelers. The study also reveals that increasing dataset size and applying data augmentation do not necessarily enhance accuracy, due to the potential increase in noise. Further, a comparison between CharacterBERT and CheXbert is made, showing that despite CharacterBERT's ability to handle misspellings, its accuracy does not consistently surpass that of CheXbert. The paper concludes with a case study demonstrating how CheXbert, in collaboration with rule-based labelers, can assist in identifying and rectifying potentially noisy reports, thereby aiding in label purification. © 2024 IEEE.
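The label-extraction task described in the abstract can be illustrated with a minimal sketch of transformer-based multi-label classification of free-text reports. The checkpoint path, label set, and decision threshold below are illustrative assumptions, not the authors' released artifacts or the exact CheXbert/CharacterBERT setup.

# Minimal sketch: multi-label classification of a radiology report with a
# fine-tuned BERT-style encoder. The checkpoint name, label set, and 0.5
# sigmoid threshold are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["Cardiomegaly", "Pleural Effusion", "Pneumonia", "No Finding"]  # illustrative label set

CHECKPOINT = "path/to/finetuned-report-labeler"  # hypothetical fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",
)
model.eval()

def label_report(report_text: str, threshold: float = 0.5) -> dict:
    """Return {label: 0/1} predictions for one free-text radiology report."""
    inputs = tokenizer(report_text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)  # independent per-label probabilities
    return {label: int(p >= threshold) for label, p in zip(LABELS, probs.tolist())}

print(label_report("Cardiomegally noted. No pleural effusion."))

In this framing, robustness to misspellings such as "Cardiomegally" depends on the tokenizer and training data; character-level encoders like CharacterBERT target exactly this failure mode, which is the comparison the study examines.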


Keywords

chest X-ray report labeling


Last updated 2024-04-11 at 12:00