Scheduling Deep Learning Training in GPU Cluster Using the Model-Similarity-Based Policy
Conference proceedings article
Publication Details
Author list: Thanapol P.; Lavangnananda K.; Leprévost F.; Schleich J.; Bouvry P.
Publisher: Springer Science and Business Media Deutschland GmbH
Publication year: 2023
Volume number: 13996 LNAI
Start page: 363
End page: 374
Number of pages: 12
ISBN: 978-981-99-5836-8
ISSN: 0302-9743
Languages: English-Great Britain (EN-GB)
Abstract
Training large neural networks on huge amounts of data using multiple Graphics Processing Units (GPUs) has become widespread with the emergence of Deep Learning (DL) technology. Such training is usually carried out in datacenters featuring multiple GPU clusters, which are shared amongst users. However, different GPU architectures co-exist on the market and differ in training performance. To maximise the utilisation of a GPU cluster, the scheduler plays an important role in managing the resources by dispatching jobs to GPUs. An efficient scheduling strategy should take into account that the training performance of each GPU architecture varies for different DL models. In this work, an original model-similarity-based scheduling policy is introduced that matches DL models to the GPU architectures suited to them. The results show that using the model-similarity-based scheduling policy for distributed training of a DL model with a large batch size across multiple GPUs can reduce the makespan. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023.
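The abstract does not specify how model similarity is measured or how dispatch is performed, so the following is only a minimal illustrative sketch of the general idea: jobs are described by feature vectors, matched via cosine similarity to profiled reference models whose best GPU architecture is known, and greedily dispatched. The feature choices, the `REFERENCE_MODELS` table, and the `preferred_gpu`/`schedule` helpers are hypothetical, not the authors' implementation.

```python
# Illustrative sketch of a model-similarity-based scheduling policy.
# Assumption: each DL model is summarised by a feature vector (e.g.
# parameters in millions, GFLOPs per sample, batch size), and a few
# reference models have been profiled to find their fastest GPU.
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two model feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical profiling data: feature vectors of reference models and
# the GPU architecture on which each trained fastest.
REFERENCE_MODELS = {
    "resnet50":  {"features": [25.6, 4.1, 256.0],  "best_gpu": "V100"},
    "vgg16":     {"features": [138.0, 15.5, 128.0], "best_gpu": "P100"},
    "mobilenet": {"features": [4.2, 0.6, 512.0],   "best_gpu": "K80"},
}


def preferred_gpu(job_features):
    """Match an incoming job to the best GPU architecture of its most
    similar profiled reference model."""
    best = max(
        REFERENCE_MODELS.values(),
        key=lambda ref: cosine_similarity(job_features, ref["features"]),
    )
    return best["best_gpu"]


def schedule(jobs, free_gpus):
    """Greedy dispatch: give each job its preferred architecture when
    one is free, otherwise fall back to any available GPU."""
    placement = {}
    for name, features in jobs.items():
        want = preferred_gpu(features)
        pool = free_gpus.get(want) or next(
            (g for g in free_gpus.values() if g), None)
        if pool:
            placement[name] = pool.pop()
    return placement


if __name__ == "__main__":
    jobs = {"job-a": [30.0, 5.0, 256.0], "job-b": [5.0, 0.8, 512.0]}
    gpus = {"V100": ["v100-0"], "P100": ["p100-0"], "K80": ["k80-0"]}
    print(schedule(jobs, gpus))  # {'job-a': 'v100-0', 'job-b': 'k80-0'}
```

In this toy run, job-a is most similar to the profiled resnet50 and lands on the V100, while job-b resembles mobilenet and lands on the K80; the paper's actual policy and similarity measurement may differ.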
Keywords
Deep Learning, Distributed Training, GPU Cluster, Scheduling, Scheduling Policy, Similarity Measurement