Accelerating divergent applications on SIMD architectures using neural networks

Publication type: Proceedings Article
Publication date: 2014-10-01
Abstract
In this work, we investigate neural-network-based solutions to the well-known problem of branch divergence in Single Instruction Multiple Data (SIMD) architectures. Our approach isolates code regions whose performance degrades due to branch divergence, trains neural networks (NNs) offline to approximate these regions, and replaces the regions with their NN approximations. By directly manipulating source code, this platform-agnostic methodology translates control flow into non-divergent computation, trading off precision for performance and energy gains. We present the Neuralizer (our automated software flow) and evaluate our approach on various divergent GPU applications, achieving average performance gains of 13.6× and energy savings of 14.8× with 96% accuracy.
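To illustrate the core idea of the abstract, the toy sketch below contrasts a divergent per-lane computation with a branch-free approximation. This is not the paper's Neuralizer flow: the function names, the example kernel, and the use of a smooth sigmoid gate (standing in for a small trained NN) are all illustrative assumptions. The point is only that control flow can be replaced by uniform arithmetic that every SIMD lane executes identically, at a small cost in precision.

```python
import math

# Divergent kernel body: on a SIMD machine, lanes with x > 0 and lanes
# with x <= 0 would take different paths and serialize.
def divergent(x):
    if x > 0.0:
        return math.sin(x)
    else:
        return 0.5 * x

# Branch-free stand-in: a smooth logistic gate (a stand-in for a trained
# NN approximation) blends both paths, so all lanes run the same
# instructions. Precision is traded for uniform control flow.
def branch_free(x):
    gate = 1.0 / (1.0 + math.exp(-50.0 * x))  # ~1 for x > 0, ~0 for x < 0
    return gate * math.sin(x) + (1.0 - gate) * (0.5 * x)

xs = [i / 10.0 for i in range(-30, 31)]
max_err = max(abs(divergent(x) - branch_free(x)) for x in xs)
print(f"max abs error on [-3, 3]: {max_err:.4f}")
```

In the paper's actual flow, the substitute computation is a trained NN rather than a hand-written gate, and the transformation is applied automatically at the source-code level; the accuracy/performance trade-off reported (96% accuracy for 13.6× speedup) reflects the same principle shown here in miniature.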