A Framework for Synthetic Audio Conversations Generation using Large Language Models

Conference proceedings article



Publication Details

Author list: Kaung Myat Kyaw and Jonathan Hoyin Chan

Publication year: 2024

Start page: 1

End page: 5

Number of pages: 5

URL: https://wi-iat2024.sit.kmutt.ac.th/register/program-event-2024.html

Languages: English-United States (EN-US)


Abstract

In this paper, we introduce ConversaSynth, a framework for generating synthetic conversation audio using large language models (LLMs) with multiple persona settings. The framework first creates diverse, coherent text-based dialogues across various topics, which are then converted into audio using text-to-speech (TTS) systems. Our experiments demonstrate that ConversaSynth effectively generates high-quality synthetic audio datasets, which can significantly enhance the training and evaluation of models for audio tagging, audio classification, and multi-speaker speech recognition. The results indicate that the synthetic datasets generated by ConversaSynth exhibit substantial diversity and realism, making them suitable for developing robust, adaptable audio-based AI systems.
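The two-stage pipeline described in the abstract (persona-conditioned dialogue generation, followed by per-speaker TTS rendering) can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`Persona`, `generate_dialogue`, `synthesize`, the voice identifiers) are hypothetical stand-ins, and the LLM and TTS calls are replaced with placeholder logic.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    voice_id: str  # hypothetical identifier for a TTS voice preset

def generate_dialogue(personas, topic):
    """Stage 1 stand-in: a real system would prompt an LLM with the
    persona settings and topic to produce a multi-turn dialogue script."""
    return [(p.name, f"[{p.name}'s remark on {topic}]") for p in personas]

def synthesize(dialogue, personas):
    """Stage 2 stand-in: a real system would send each turn to a TTS
    engine; here we just pair each utterance with its speaker's voice."""
    voices = {p.name: p.voice_id for p in personas}
    return [(voices[speaker], text) for speaker, text in dialogue]

# Example run with two personas on one topic.
personas = [Persona("Alice", "voice-a"), Persona("Bob", "voice-b")]
dialogue = generate_dialogue(personas, "renewable energy")
audio_jobs = synthesize(dialogue, personas)
```

Keeping the text-generation and audio-rendering stages decoupled, as sketched here, is what lets the same dialogue script be re-rendered with different voices when building a diverse audio dataset.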


Keywords

Conversational AI; Large Language Models (LLM); Synthetic Audio Generation; Text-to-Speech (TTS)


Last updated on 2025-02-25 at 12:00