Efficient Mapping of Streaming Applications for Image Processing on Graphics Cards

In the last decade, there has been a dramatic growth in research and development of massively parallel commodity graphics hardware, both in academia and industry. Graphics card architectures provide an optimal platform for the parallel execution of many number-crunching loop programs from fields like image processing or linear algebra. However, it is hard to efficiently map such algorithms to the graphics hardware, even with detailed insight into the architecture. This paper presents a multiresolution image processing algorithm and shows the efficient mapping of this class of algorithms to graphics hardware, as well as double buffering concepts to hide memory transfers. Furthermore, the impact of the execution configuration is illustrated, and a method is proposed to determine the best configuration offline. Using CUDA as the programming model, it is demonstrated that the image processing algorithm is significantly accelerated and that a speedup of more than 145x can be achieved on NVIDIA's Tesla C1060 compared to a parallelized implementation on a Xeon Quad Core. For deployment in a streaming application with steadily arriving new data, it is shown that the memory transfer overhead to the graphics card is reduced by a factor of six using double buffering.
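The double buffering idea mentioned in the abstract can be illustrated with CUDA streams: while one frame is being processed on the device, the next frame is asynchronously copied into a second buffer, so host-to-device transfers overlap with kernel execution. The following is a minimal sketch under that assumption; the kernel, buffer names, and sizes are illustrative and not taken from the paper.

```cuda
#include <cuda_runtime.h>

// Placeholder per-pixel kernel standing in for the actual multiresolution filter.
__global__ void process_frame(unsigned char *frame, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        frame[idx] = 255 - frame[idx];
}

int main() {
    const int FRAME_BYTES = 1024 * 1024;  // hypothetical frame size
    const int num_frames  = 16;           // hypothetical stream length

    unsigned char *h_frames[2], *d_frames[2];
    cudaStream_t streams[2];
    for (int i = 0; i < 2; ++i) {
        // Pinned host memory is required for truly asynchronous copies.
        cudaMallocHost(&h_frames[i], FRAME_BYTES);
        cudaMalloc(&d_frames[i], FRAME_BYTES);
        cudaStreamCreate(&streams[i]);
    }

    for (int f = 0; f < num_frames; ++f) {
        int buf = f % 2;  // alternate between the two buffers/streams
        // ... fill h_frames[buf] with the next incoming frame ...
        // Upload frame f in one stream while the previous frame is still
        // being processed in the other stream.
        cudaMemcpyAsync(d_frames[buf], h_frames[buf], FRAME_BYTES,
                        cudaMemcpyHostToDevice, streams[buf]);
        process_frame<<<(FRAME_BYTES + 255) / 256, 256, 0, streams[buf]>>>(
            d_frames[buf], FRAME_BYTES);
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < 2; ++i) {
        cudaFreeHost(h_frames[i]);
        cudaFree(d_frames[i]);
        cudaStreamDestroy(streams[i]);
    }
    return 0;
}
```

With two buffers and two streams, the copy for frame f+1 can proceed while the kernel for frame f runs, which is the kind of transfer hiding the paper reports for the streaming scenario.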

BibTeX
@article{membarth2019efficientmapping,
  author          = {Membarth, Richard and Dutta, Hritam and Hannig, Frank and Teich, Jürgen},
  title           = {Efficient Mapping of Streaming Applications for Image Processing on Graphics Cards},
  journal         = {Transactions on High-Performance Embedded Architectures and Compilers (Transactions on HiPEAC)},
  pages           = {1--20},
  volume          = {V},
  %year            = 2019,
  %month           = feb,
  date            = {2019-02},
  doi             = {10.1007/978-3-662-58834-5_1},
  publisher       = {Springer}
}