
12. Accelerator Computing

Hardware accelerators of various kinds offer the potential for massive performance gains in applications that can exploit their high degree of parallelism and customization. Examples include graphics processing units (GPUs), manycore co-processors, and more specialized devices such as customizable FPGA-based systems and streaming data-flow architectures.

The research challenge for this topic is to explore new avenues for realizing this potential in practice. We encourage submissions in all areas related to accelerators: architectures, algorithms, languages, compilers, libraries, runtime systems, coordination of accelerators and CPUs, and debugging and profiling tools. Application-oriented submissions that contribute new insights into fundamental problems or solution approaches in this domain are welcome as well.


  • New accelerator architectures
  • Languages, compilers, and runtime environments for accelerator programming
  • Programming techniques for clusters of accelerators
  • Tools for debugging, profiling, and optimizing programs on accelerators
  • Hybrid and heterogeneous computing combining several, possibly different, types of accelerators
  • Parallel algorithms for accelerators
  • Models and benchmarks for accelerators
  • Manual optimization and auto-tuning
  • Library support for accelerators


Chair: Enrique S. Quintana-Orti (Universidad Jaume I, Spain)
Local chair: Samuel Thibault (University of Bordeaux, France)

Taisuke Boku (University of Tsukuba, Japan)
Esteban Clua (Universidade Federal Fluminense, Brazil)
Hatem Ltaief (King Abdullah University of Science and Technology, Saudi Arabia)
Jeff Hammond (Intel, USA)
John Stone (University of Illinois at Urbana-Champaign, USA)
Robert Strzodka (Universität Heidelberg, Germany)
