XEngine: Optimal Tensor Rematerialization for Neural Networks in Heterogeneous Environments


Memory efficiency is crucial when training deep neural networks on resource-restricted devices. During backpropagation, forward tensors are used to calculate gradients. Instead of keeping all of these dependencies in memory until they are reused in backpropagation, some forward tensors can be discarded and later recomputed from saved tensors, so-called checkpoints. In particular, this allows resource-constrained heterogeneous environments to make use of all available compute devices. Unfortunately, choosing these checkpoints is a non-trivial problem and poses a challenge to the programmer: improper or excessive recomputation negates the benefit of checkpointing.
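The trade-off behind checkpointing can be seen in a toy example (this is only an illustrative sketch in plain NumPy, not XEngine's implementation): a two-layer network either stores its forward activation for the backward pass, or discards it and recomputes it from the saved input, trading extra compute for lower peak memory.

```python
import numpy as np

# Toy two-layer net: y = W2 @ relu(W1 @ x). Backprop needs the forward
# activation h = relu(W1 @ x). Checkpointing discards h after the forward
# pass and recomputes it from the saved checkpoint (here: the input x).

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
x = rng.standard_normal(3)

def forward(x, keep_activations=True):
    h = np.maximum(W1 @ x, 0.0)       # forward tensor needed in backprop
    y = W2 @ h
    return (y, h) if keep_activations else (y, None)

def backward(x, h, dy):
    if h is None:                      # checkpointed path:
        h = np.maximum(W1 @ x, 0.0)    # recompute h (extra time, less memory)
    dh = W2.T @ dy
    dh *= (h > 0)                      # ReLU gradient
    dW1 = np.outer(dh, x)
    dW2 = np.outer(dy, h)
    return dW1, dW2

dy = np.ones(2)
_, h = forward(x, keep_activations=True)
dW1_kept, dW2_kept = backward(x, h, dy)          # activation stored

_, _h = forward(x, keep_activations=False)
dW1_ckpt, dW2_ckpt = backward(x, None, dy)       # activation recomputed

# Both strategies yield identical gradients; only memory/compute differ.
assert np.allclose(dW1_kept, dW1_ckpt)
assert np.allclose(dW2_kept, dW2_ckpt)
```

The gradients are bit-for-bit equivalent; the only difference is whether `h` occupies memory between the forward and backward pass or is recomputed on demand.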

In this paper, we present XEngine, an approach that schedules network operators onto heterogeneous devices in low-memory environments by determining checkpoints and recomputations of tensors. Our approach selects suitable resources per timestep and operator and optimizes the end-to-end time for neural networks while taking the memory limitation of each device into account. For this, we formulate a mixed integer quadratic program (MIQP) to schedule operators of deep learning networks onto heterogeneous systems. We compare our MIQP solver XEngine against Checkmate, a mixed integer linear programming (MILP) approach that solves recomputation on a single device. Our solver finds solutions that are up to 22.5% faster than the fastest Checkmate schedule, which computes the network exclusively on a single device. Moreover, we also find valid schedules for networks that use both CPU and GPU when memory limitations do not allow scheduling exclusively onto the GPU.
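The decision space the MIQP explores can be sketched with a brute-force toy (a hypothetical illustration with made-up runtimes, memory sizes, and transfer costs, not the paper's actual formulation, which also models recomputation): each operator is assigned to a device, device switches incur a transfer cost, and per-device memory caps rule out some assignments.

```python
from itertools import product

# Hypothetical operators: runtime per device (cpu_time, gpu_time) and
# output-tensor memory footprint. Numbers are illustrative only.
ops = [
    {"name": "conv1", "time": (8.0, 1.0), "mem": 4},
    {"name": "conv2", "time": (9.0, 1.5), "mem": 4},
    {"name": "fc",    "time": (2.0, 0.5), "mem": 1},
]
MEM_CAP = {"cpu": 64, "gpu": 6}   # GPU too small to hold both convs
TRANSFER = 2.0                     # cost of moving a tensor across devices
DEVICES = ("cpu", "gpu")

best = None
for assign in product(range(2), repeat=len(ops)):
    mem = {"cpu": 0, "gpu": 0}
    time, prev = 0.0, None
    for op, d in zip(ops, assign):
        dev = DEVICES[d]
        mem[dev] += op["mem"]          # tensor lives on its device
        time += op["time"][d]
        if prev is not None and prev != dev:
            time += TRANSFER           # device switch: copy the dependency
        prev = dev
    if all(mem[d] <= MEM_CAP[d] for d in DEVICES):
        if best is None or time < best[0]:
            best = (time, [DEVICES[d] for d in assign])

print(best)  # -> (12.0, ['cpu', 'gpu', 'gpu'])
```

Here the all-GPU schedule is infeasible (8 memory units > 6), so the best schedule offloads `conv1` to the CPU and pays one transfer, which is the kind of trade-off an MIQP solver resolves optimally on real networks, where exhaustive enumeration is intractable.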

@article{10.1145/3568956,
  author          = {Schuler, Manuela and Membarth, Richard and Slusallek, Philipp},
  title           = {{XEngine}: Optimal Tensor Rematerialization for Neural Networks in Heterogeneous Environments},
  year            = {2022},
  issue_date      = {March 2023},
  publisher       = {Association for Computing Machinery},
  address         = {New York, NY, USA},
  volume          = {20},
  number          = {1},
  issn            = {1544-3566},
  url             = {https://doi.org/10.1145/3568956},
  doi             = {10.1145/3568956},
  journal         = {ACM Transactions on Architecture and Code Optimization (TACO)},
  month           = {dec},
  articleno       = {17},
  numpages        = {25},
  keywords        = {Rematerialization, heterogeneous computing, memory management, neural networks, integer linear programming}
}