AI-Powered Guitar Learning Platform: Advancing Digital Music Education for Diversity, Equity, and Inclusion

Conference proceedings article


Publication Details

Author list: Awirut Phusaensaart, Punnapa Thianchai, Jittapat Chanyarungroj, Pirun Dilokpatpongsa, Warin Wattanapornprom, Thitaporn Ganokratanaa

Publication year: 2025

Start page: 226

End page: 244

Number of pages: 19

URL: https://www.elfasia.org/2025/

Languages: English-United States (EN-US)


Abstract

This paper presents an AI-powered guitar learning platform that leverages computer
vision and deep learning to deliver inclusive, accessible, and personalized music instruction.
Traditional methods of learning the guitar, such as static chord charts, in-person lessons, or
prerecorded videos, often fail to meet the needs of beginners, especially those with limited
access to expert instruction, diverse learning preferences, or physical constraints. To overcome
these limitations, the proposed web-based platform integrates hand pose recognition using
MediaPipe and chord classification via TensorFlow to detect user-formed chord shapes in real
time. When a chord is recognized, the system provides immediate visual and auditory
feedback using WebAudio technology, reinforcing correct technique and offering guidance
when errors occur. This dual-modality feedback loop supports both self-correction and
motivation, enabling learners to progress independently. Designed to be lightweight,
device-agnostic, and scalable, the platform was tested with ten users, including students, music
instructors, and a live performer, who evaluated its usability, engagement, clarity, and
technical responsiveness. Results showed high user satisfaction (average score: 4.2/5) and
consistent positive outcomes in terms of confidence, engagement, and perceived skill
improvement. The system also includes a real-time progress tracking dashboard, gamified
learning mechanics, and a responsive chord library, making it suitable for a wide range of
learners and contexts. Technically, the model was trained on a dataset of 2,400 labeled images
spanning seven chord classes (None, A, C, D, E, F, G), with an 80:20 train-test split and
optimized parameters for dropout, batch size, and learning rate decay. This platform
contributes to the advancement of AI in education by combining pedagogical insight with
real-time interactivity, supporting informal and formal learning environments alike. Beyond
music instruction, it demonstrates how accessible AI technologies can be used to reduce
educational inequities, promote digital literacy, and support skill-based learning across
socioeconomic and geographic boundaries. The project aligns with global educational goals
emphasizing diversity, equity, and inclusion (DEI), offering a blueprint for future intelligent
learning tools that prioritize user-centric design and educational fairness. By bridging
technical innovation with meaningful pedagogy, the platform sets a foundation for future work
in AI-enhanced creative education.
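The training setup reported above (2,400 labeled images, seven chord classes, an 80:20 train-test split) implies a fixed partition of the dataset. The following minimal Python sketch makes that arithmetic explicit; the per-class balance is an assumption, since the abstract does not state how images are distributed across classes.

```python
# Dataset partition implied by the abstract: 2,400 labeled images
# across seven chord classes, split 80:20 into train and test sets.
CHORD_CLASSES = ["None", "A", "C", "D", "E", "F", "G"]
TOTAL_IMAGES = 2400
TRAIN_FRACTION = 0.8

train_count = int(TOTAL_IMAGES * TRAIN_FRACTION)  # training images
test_count = TOTAL_IMAGES - train_count           # held-out test images

# Assumption (not stated in the abstract): if classes were balanced,
# each of the seven classes would hold roughly 343 images.
per_class_estimate = round(TOTAL_IMAGES / len(CHORD_CLASSES))

print(len(CHORD_CLASSES), train_count, test_count)
```

Under these numbers the split yields 1,920 training images and 480 test images, consistent with the 80:20 ratio the abstract describes.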
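The dual-modality feedback loop described above (immediate visual and auditory feedback on a recognized chord, guidance when errors occur) can be sketched as a small decision function. This is an illustrative assumption, not the platform's code: the function name, message strings, and audio cue labels are hypothetical, and the actual system drives this logic from MediaPipe hand landmarks, a TensorFlow classifier, and WebAudio playback, none of which are reproduced here.

```python
def feedback(target_chord: str, predicted_chord: str) -> dict:
    """Map one frame of chord recognition to visual and auditory feedback.

    target_chord: the chord the learner is asked to form (e.g. "A").
    predicted_chord: the classifier's output for the current hand pose;
    "None" means no recognizable chord shape was detected.
    """
    if predicted_chord == "None":
        # No shape detected: prompt visually, stay silent.
        return {"visual": "No chord shape detected", "audio": None}
    if predicted_chord == target_chord:
        # Correct shape: reinforce with confirmation and chord playback.
        return {"visual": f"Correct: {target_chord}", "audio": "play_chord"}
    # Wrong shape: name the mismatch and cue an error tone for guidance.
    return {
        "visual": f"Detected {predicted_chord}, expected {target_chord}",
        "audio": "error_tone",
    }
```

In the deployed system this function would run once per recognition frame, so that self-correction happens in real time rather than after a practice session ends.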


Keywords

AI-Powered Music Education, Computer Vision, Inclusive Learning, Real-Time Chord Detection


Last updated on 2025-12-11 at 12:00