Accelerating divergent applications on SIMD architectures using neural networks
Beayna Grigorian, Glenn Reinman
Publication type: Proceedings Article
Publication date: 2014-10-01
Abstract
In this work, we investigate neural-network-based solutions to the well-known problem of branch divergence in Single Instruction Multiple Data (SIMD) architectures. Our approach isolates code regions whose performance degrades due to branch divergence, trains neural networks (NNs) offline to approximate these regions, and replaces the regions with their NN approximations. By directly manipulating source code, this platform-agnostic methodology translates control flow into non-divergent computation, trading off precision for performance and energy gains. We present the Neuralizer (our automated software flow) and evaluate our approach on various divergent GPU applications, achieving average performance gains of 13.6× and energy savings of 14.8× while maintaining 96% accuracy.
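The core idea in the abstract can be illustrated with a small sketch: train a neural network offline to mimic a branchy per-element function, then substitute the trained network, whose evaluation is pure branch-free arithmetic, for the divergent code. The kernel, network size, and hyperparameters below are arbitrary illustrations, not the paper's actual workloads or the Neuralizer pipeline.

```python
import numpy as np

# Hypothetical divergent kernel: a per-element branch of the kind that
# causes SIMD lane divergence (illustrative stand-in only).
def divergent_kernel(x):
    return np.where(x > 0.0, np.sin(x), 0.5 * x)

# Offline training phase: fit a tiny one-hidden-layer MLP to the kernel
# over its expected input range. All sizes/rates are arbitrary choices.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(2048, 1))
Y = divergent_kernel(X)

H = 16                                  # hidden units
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)            # forward pass: only FMAs + tanh
    pred = h @ W2 + b2
    err = pred - Y
    # backprop for mean-squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Deployment phase: the NN approximation is a drop-in, divergence-free
# replacement -- every SIMD lane executes the same instruction stream.
def nn_kernel(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

test_x = rng.uniform(-2.0, 2.0, size=(512, 1))
mae = np.abs(nn_kernel(test_x) - divergent_kernel(test_x)).mean()
print(f"mean abs error: {mae:.3f}")
```

This mirrors the precision-for-performance trade-off the abstract describes: the approximation is inexact (nonzero error), but its evaluation contains no data-dependent branches, so all SIMD lanes stay converged.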