Generating Synthetic Images Using Stable Diffusion Model for Skin Lesion Classification

Conference proceedings article


Publication Details

Author list: Parapat Patcharapimpisut, Paisit Khanarsa

Publication year: 2024

Start page: 184

End page: 189

Number of pages: 6

URL: https://ieeexplore.ieee.org/document/10499667




Abstract

This research explores the use of diffusion models for image augmentation, generating an improved image dataset to enhance the performance of deep learning models in diagnosing skin diseases. The study uses the HAM10000 dataset, which covers seven skin disease classes, to train the diffusion model for image synthesis. The resulting synthetic images from the Stable Diffusion model augment the original real-image dataset for training the skin disease classification models, with InceptionResNetV2 with soft attention and ResNet50 as baseline models. The results show that adding synthetic images at 1 to 1.5 times the size of the real dataset raises the models' macro-average recall by 4-5%. Accuracy also improves by roughly 1-2% for both InceptionResNetV2 with soft attention and ResNet50, reaching 90.57% and 89.67%, respectively. Additionally, the per-class AUC score for skin lesions improves by 1% to 28%.
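The abstract reports two quantities worth making concrete: the synthetic-to-real mixing ratio (1 to 1.5 times the real dataset size) and macro-average recall, the headline metric. The sketch below is not from the paper; it is a minimal, self-contained illustration of both ideas, with hypothetical function names and toy class labels.

```python
def augment_with_synthetic(real, synthetic, ratio=1.5):
    """Combine real samples with synthetic ones, capping the synthetic
    contribution at `ratio` times the real dataset size (the abstract
    reports gains for ratios between 1.0 and 1.5)."""
    cap = int(ratio * len(real))
    return real + synthetic[:cap]

def macro_average_recall(y_true, y_pred):
    """Macro-average recall: per-class recall averaged with equal weight
    per class, so rare lesion classes count as much as common ones."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        true_pos = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        support = sum(1 for t in y_true if t == c)
        recalls.append(true_pos / support)
    return sum(recalls) / len(classes)

# Toy usage with made-up labels (mel = melanoma, nv = nevus, bkl = keratosis):
combined = augment_with_synthetic(["r"] * 4, ["s"] * 10, ratio=1.5)  # 4 real + 6 synthetic
score = macro_average_recall(
    ["mel", "mel", "nv", "nv", "nv", "bkl"],
    ["mel", "nv", "nv", "nv", "bkl", "bkl"],
)
```

Macro averaging (rather than plain accuracy) matters here because HAM10000 is heavily imbalanced across its seven classes, which is why the paper reports it alongside accuracy.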




Last updated on 2024-04-23 at 23:05