
Proceedings Paper

Optimization techniques for OpenCL-based linear algebra routines

Paper Abstract

The OpenCL standard for general-purpose parallel programming allows a developer to target highly parallel computations towards graphics processing units (GPUs), CPUs, co-processing devices, and field programmable gate arrays (FPGAs). The computationally intense domains of linear algebra and image processing have shown significant speedups when implemented in the OpenCL environment. A major benefit of OpenCL is that a routine written for one device can be run across many different devices and architectures; however, a kernel optimized for one device may not exhibit high performance when executed on a different device. For this reason, kernels must typically be hand-optimized for every target device family. Due to the large number of parameters that can affect performance, hand tuning for every possible device is impractical and often produces suboptimal results. For this work, we focused on optimizing the general matrix multiplication routine. General matrix multiplication is used as a building block for many linear algebra routines and often comprises a large portion of the run time. Prior work has shown this routine to be a good candidate for high-performance implementation in OpenCL. We selected several candidate algorithms from the literature that are suitable for parameterization. We then developed parameterized kernels implementing these algorithms using only portable OpenCL features. Our implementation queries device information supplied by the OpenCL runtime and uses this, together with user input, to generate a search space that satisfies device and algorithmic constraints. Preliminary results from our work confirm that optimizations are not portable from one device to the next, and show the benefits of automatic tuning. Using a standard set of tuning parameters seen in the literature for the NVIDIA Fermi architecture achieves a performance of 1.6 TFLOPS on an AMD 7970 device, while automatic tuning achieves a peak of 2.7 TFLOPS.
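The abstract describes generating a tuning search space by filtering candidate kernel parameters against device limits reported by the OpenCL runtime. The sketch below illustrates that idea for square GEMM tiles in Python; the device limits are hard-coded to plausible values (a real implementation would query them via `clGetDeviceInfo`, e.g. `CL_DEVICE_MAX_WORK_GROUP_SIZE` and `CL_DEVICE_LOCAL_MEM_SIZE`), and the parameter names and constraint model are illustrative assumptions, not the authors' actual code.

```python
from itertools import product

# Illustrative device limits; in practice these would be read from the
# OpenCL runtime with clGetDeviceInfo for the target device.
MAX_WORK_GROUP_SIZE = 256      # stand-in for CL_DEVICE_MAX_WORK_GROUP_SIZE
LOCAL_MEM_BYTES = 32 * 1024    # stand-in for CL_DEVICE_LOCAL_MEM_SIZE
FLOAT_BYTES = 4                # size of a single-precision element

def candidate_space(tile_sizes=(8, 16, 32, 64)):
    """Enumerate (tile_m, tile_n) pairs whose work-group size and
    local-memory footprint fit within the device limits above."""
    space = []
    for tile_m, tile_n in product(tile_sizes, repeat=2):
        work_group = tile_m * tile_n  # one work-item per output element
        # Assume one tile of A and one tile of B staged in local memory.
        local_mem = 2 * tile_m * tile_n * FLOAT_BYTES
        if work_group <= MAX_WORK_GROUP_SIZE and local_mem <= LOCAL_MEM_BYTES:
            space.append((tile_m, tile_n))
    return space

space = candidate_space()
```

An auto-tuner would then compile and time a kernel for each surviving point in `space`, keeping the fastest configuration for the device at hand.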

Paper Details

Date Published: 13 June 2014
PDF: 6 pages
Proc. SPIE 9095, Modeling and Simulation for Defense Systems and Applications IX, 90950D (13 June 2014); doi: 10.1117/12.2050673
Author Affiliations:
Stephen Kozacik, Univ. of Delaware (United States)
Paul Fox, EM Photonics, Inc. (United States)
John Humphrey, EM Photonics, Inc. (United States)
Aryeh Kuller, EM Photonics, Inc. (United States)
Eric Kelmelis, EM Photonics, Inc. (United States)
Dennis W. Prather, Univ. of Delaware (United States)

Published in SPIE Proceedings Vol. 9095:
Modeling and Simulation for Defense Systems and Applications IX
Eric J. Kelmelis, Editor(s)

© SPIE