With the rising popularity of deep learning, GPUs have become increasingly prevalent in recent years. Modern GPUs are extremely powerful and come with a lot of resources. A key challenge in this context is making sure that these devices are highly utilized. Although there has been a lot of research on improving GPU efficiency at the cluster level (e.g., our Tiresias in NSDI’19), little is known about how well individual GPUs are being utilized today. Worse, even when they are underutilized, little can be done because GPUs are opaque black boxes without any primitives for sharing them. Existing mechanisms for GPU sharing, such as NVIDIA MPS, are coarse-grained and cannot leverage application-specific information. Salus is our foray into the GPU sharing domain: it provides two key sharing primitives that allow one to develop a variety of algorithms and improve GPU efficiency for training, inference, and hyperparameter tuning workloads.
Unlike traditional resources such as CPU or the network, modern GPUs do not natively support fine-grained sharing primitives. Consequently, implementing common policies such as time-sharing and preemption is expensive. Worse, when a deep learning (DL) application cannot completely use a GPU’s resources, the GPU cannot be efficiently shared between multiple applications, leading to GPU underutilization.
We present Salus, which enables two GPU sharing primitives: fast job switching and memory sharing, to achieve fine-grained GPU sharing among multiple DL applications. Salus is an efficient, consolidated execution service that exposes a GPU to different DL applications and enforces fine-grained sharing by performing iteration scheduling and addressing associated memory management issues. We show that these primitives can then be used to implement flexible sharing policies. Our integration of Salus with TensorFlow and evaluation on popular DL jobs show that, with small overhead, Salus can improve the average completion time of DL training jobs by 3.19X, GPU utilization for hyperparameter tuning by 2.38X, and GPU utilization of DL inference applications by 42X over not sharing the GPU and 7X over NVIDIA MPS.
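To give a flavor of what iteration scheduling means, here is a minimal, hypothetical Python sketch (not the actual Salus implementation, which lives inside the TensorFlow execution engine): each job yields control at iteration boundaries, and a scheduler decides which job's next iteration occupies the GPU, which is what makes fast job switching possible.

```python
# Hypothetical sketch of iteration-level time-sharing, NOT the real Salus code.
# Each training job is modeled as a generator that yields once per iteration;
# the scheduler round-robins among jobs so only one iteration uses the GPU
# at a time, switching jobs at iteration boundaries.
from collections import deque

def make_job(name, num_iterations):
    """A stand-in for a DL training job: yields once per iteration."""
    for i in range(num_iterations):
        # ... one forward/backward pass would run on the GPU here ...
        yield f"{name}: iteration {i}"

def schedule(jobs):
    """Round-robin iteration scheduling across jobs sharing one GPU."""
    queue = deque(jobs)
    log = []
    while queue:
        job = queue.popleft()
        try:
            log.append(next(job))   # run exactly one iteration
            queue.append(job)       # job still has work; requeue it
        except StopIteration:
            pass                    # job finished; drop it
    return log

log = schedule([make_job("jobA", 3), make_job("jobB", 2)])
print("\n".join(log))
```

A real implementation would also have to manage GPU memory across jobs (the second primitive, memory sharing), since switching is only fast if each job's state can stay resident.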
Salus has long been in the making and is my first project in systems for AI and GPU resource management. Peifeng has been diligently working on it since 2017! While it took a long time, I’m excited that it has found a great home, and I look forward to building on top of it. This is Peifeng’s first major paper, and the future is even brighter.
This year’s MLSys has 34 accepted papers and remained as highly competitive as its previous iteration.