Scheduling Deep Learning Training in GPU Cluster Using the Model-Similarity-Based Policy

Conference proceedings article




Publication Details

Author list: Thanapol P.; Lavangnananda K.; Leprévost F.; Schleich J.; Bouvry P.

Publisher: Springer Science and Business Media Deutschland GmbH

Publication year: 2023

Volume number: 13996 LNAI

Start page: 363

End page: 374

Number of pages: 12

ISBN: 978-981-99-5836-8

ISSN: 0302-9743

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85173575623&doi=10.1007%2f978-981-99-5837-5_30&partnerID=40&md5=3f0a32644833b07180699b661811e641

Languages: English-Great Britain (EN-GB)




Abstract

Training large neural networks with huge amounts of data using multiple Graphics Processing Units (GPUs) has become widespread with the emergence of Deep Learning (DL) technology. Such training is usually carried out in datacenters featuring multiple GPU clusters, which are shared amongst users. However, different GPU architectures co-exist on the market and differ in training performance. To maximise the utilisation of a GPU cluster, the scheduler plays an important role in managing the resources by dispatching jobs to GPUs. An efficient scheduling strategy should take into account that the training performance of each GPU architecture varies according to the DL model being trained. In this work, an original model-similarity-based scheduling policy is introduced that matches DL models to the GPU architectures that suit them. The results show that using the model-similarity-based scheduling policy for distributed training of a DL model with a large batch size across multiple GPUs can reduce the makespan. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023.
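For illustration only (the paper's own algorithm and profiling data are not reproduced here), the following minimal Python sketch shows the general idea behind similarity-based dispatch as described in the abstract: an incoming job is compared against a small set of profiled reference models, and the free GPU whose architecture performs best on the most similar reference model is chosen. All model names, feature choices, and throughput numbers below are hypothetical assumptions, not values from the paper.

from dataclasses import dataclass

# Hypothetical profiling table: measured throughput (samples/s) of a few
# reference models on each GPU architecture. Numbers are illustrative only.
THROUGHPUT = {
    "V100": {"resnet50": 410.0, "bert-base": 95.0},
    "A100": {"resnet50": 870.0, "bert-base": 240.0},
}

# Hypothetical numeric features describing each profiled reference model.
REFERENCE_FEATURES = {
    "resnet50": {"params_M": 25.6, "flops_G": 4.1, "batch": 256},
    "bert-base": {"params_M": 110.0, "flops_G": 22.5, "batch": 32},
}

@dataclass
class Job:
    name: str
    features: dict  # same feature keys as REFERENCE_FEATURES

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity over the features both dicts share."""
    keys = sorted(set(a) & set(b))
    dot = sum(a[k] * b[k] for k in keys)
    na = sum(a[k] ** 2 for k in keys) ** 0.5
    nb = sum(b[k] ** 2 for k in keys) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def dispatch(job: Job, free_gpus: list) -> str:
    """Find the profiled model most similar to the job, then pick the
    free GPU whose architecture has the best throughput on that model."""
    nearest = max(REFERENCE_FEATURES,
                  key=lambda m: similarity(job.features, REFERENCE_FEATURES[m]))
    return max(free_gpus, key=lambda g: THROUGHPUT[g][nearest])

job = Job("my-cnn", {"params_M": 23.0, "flops_G": 3.8, "batch": 256})
print(dispatch(job, ["V100", "A100"]))  # -> "A100" with these toy numbers

With these toy numbers, a small CNN resembling resnet50 is routed to the A100, since the profile predicts higher throughput there; the paper's actual policy, similarity measure, and profiling methodology are described in the full text.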


Keywords

Deep learning; Distributed Training; GPU Cluster; scheduling; Scheduling Policy; Similarity Measurement

