Examining the Efficacy of Transformer Models in Radiology Report Labeling within a Thai Hospital

Conference proceedings article



Publication Details

Author list: Larpkiattaworn W., Promwisat T., Chamveha I., Chaisangmongkon W.

Publisher: Institute of Electrical and Electronics Engineers Inc.

Publication year: 2024

Start page: 226

End page: 231

Number of pages: 6

ISBN: 9798350344349

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85189942610&doi=10.1109%2fICAIIC60209.2024.10463290&partnerID=40&md5=a7f5753f86587f3b519b898c04a9d651

Languages: English-Great Britain (EN-GB)


Abstract

This study explores the application of Transformer models, specifically CheXbert and CharacterBERT, to extracting labels from radiology reports in a real-world clinical setting at a Thai hospital. This setting presents unique challenges, such as spelling errors, grammar mistakes, and diverse report formats, leading to 'noisy labels'. Previous natural language processing systems, including rule-based algorithms and Transformers, have been used for this task, but they face difficulties in such an environment. Despite these challenges, our research demonstrates that training Transformers on a small dataset is sufficient to outperform rule-based labelers. The study also reveals that increasing dataset size and applying data augmentation do not necessarily enhance accuracy, due to the potential increase in noise. Further, a comparison between CharacterBERT and CheXbert shows that, despite CharacterBERT's ability to handle misspellings, its accuracy does not consistently surpass that of CheXbert. The paper concludes with a case study demonstrating how CheXbert, in combination with rule-based labelers, can assist in identifying and rectifying potentially noisy reports, thereby aiding in label purification. © 2024 IEEE.
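The paper itself does not reproduce its training code here; the following is a minimal sketch of the kind of setup the abstract describes, namely fine-tuning a BERT-style encoder to assign condition labels to free-text radiology reports. The checkpoint name (bert-base-uncased), the reduced label set, the toy data, and the hyperparameters are illustrative assumptions, not the authors' implementation; CheXbert itself predicts per-condition classes (positive, negative, uncertain, blank) rather than the simplified binary multi-label scheme shown.

```python
# Minimal sketch (not the authors' code): fine-tuning a BERT-style model for
# multi-label radiology report labeling with a HuggingFace-style setup.
# Checkpoint, label set, data, and hyperparameters are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["Cardiomegaly", "Edema", "Consolidation", "Atelectasis", "Pleural Effusion"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # BCE loss over independent labels
)

# Toy examples standing in for (possibly noisy) report text and reference labels.
reports = ["Mild cardiomegaly. No pleural effusion.",
           "Patchy consolidation in the right lower lobe."]
targets = torch.tensor([[1., 0., 0., 0., 0.],
                        [0., 0., 1., 0., 0.]])

inputs = tokenizer(reports, padding=True, truncation=True, max_length=512,
                   return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the toy batch
    outputs = model(**inputs, labels=targets)  # multi-label BCE-with-logits loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: sigmoid over logits, threshold at 0.5 to obtain report-level labels.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)
predictions = (probs > 0.5).int()
```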


Keywords

chest X-ray report labeling

