A Framework for Synthetic Audio Conversations Generation using Large Language Models
Conference proceedings article
Publication Details
Author list: Kaung Myat Kyaw and Jonathan Hoyin Chan
Publication year: 2024
Start page: 1
End page: 5
Number of pages: 5
URL: https://wi-iat2024.sit.kmutt.ac.th/register/program-event-2024.html
Language: English (United States)
Abstract
In this paper, we introduce ConversaSynth, a framework designed to generate synthetic conversation audio using large language models (LLMs) with multiple persona settings. The framework first creates diverse and coherent text-based dialogues across various topics, which are then converted into audio using text-to-speech (TTS) systems. Our experiments demonstrate that ConversaSynth effectively generates high-quality synthetic audio datasets, which can significantly enhance the training and evaluation of models for audio tagging, audio classification, and multi-speaker speech recognition. The results indicate that the synthetic datasets generated by ConversaSynth exhibit substantial diversity and realism, making them suitable for developing robust, adaptable audio-based AI systems.
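The abstract describes a two-stage pipeline: an LLM produces multi-persona text dialogues, and a TTS system renders each turn to audio, which is then assembled into one conversation track. A minimal, self-contained sketch of that pipeline structure is shown below. Note that the paper does not publish its implementation here, so the function names (`mock_llm_dialogue`, `mock_tts`, `synthesize_conversation`) are hypothetical, and the LLM and TTS stages are replaced with trivial stand-ins (template text and sine tones) so the skeleton runs without external models.

```python
import math
import struct
import wave

SAMPLE_RATE = 16000  # mono, 16 kHz, a common rate for speech datasets

def mock_llm_dialogue(topic, personas):
    # Hypothetical stand-in for the LLM stage: in ConversaSynth this would be
    # an LLM prompted with persona settings; here it emits template turns.
    return [(p, f"{p} talks about {topic}.") for p in personas]

def mock_tts(text, pitch_hz):
    # Hypothetical stand-in for the TTS stage: a sine tone whose duration
    # scales with the word count (0.05 s per word), distinct pitch per voice.
    n = int(SAMPLE_RATE * 0.05 * len(text.split()))
    return [math.sin(2 * math.pi * pitch_hz * i / SAMPLE_RATE) for i in range(n)]

def synthesize_conversation(topic, personas):
    # Assign each persona a distinct "voice" (here, a pitch), then render
    # every dialogue turn and concatenate with 0.2 s of inter-turn silence.
    voices = {p: 180 + 60 * i for i, p in enumerate(personas)}
    samples = []
    for speaker, text in mock_llm_dialogue(topic, personas):
        samples.extend(mock_tts(text, voices[speaker]))
        samples.extend([0.0] * int(SAMPLE_RATE * 0.2))
    return samples

def write_wav(samples, path):
    # Serialize float samples in [-1, 1] to a 16-bit mono WAV file.
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767)) for s in samples
        )
        w.writeframes(frames)
```

In a real instantiation, `mock_llm_dialogue` would call an LLM with per-persona system prompts and `mock_tts` would call a multi-voice TTS model, but the assembly logic (per-turn synthesis, distinct voice per speaker, silence between turns) follows the pipeline the abstract outlines.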
Keywords
Conversational AI, Large Language Models (LLM), Synthetic Audio Generation, Text-to-Speech (TTS)