Approximate Cache in GPGPUs


Publisher: Association for Computing Machinery
Copyright: © 2020 ACM
ISSN: 1539-9087
eISSN: 1558-3465
DOI: 10.1145/3407904

Abstract

There is a growing number of application domains, ranging from multimedia to machine learning, where a certain level of inexactness can be tolerated. For these applications, approximate computing is an effective technique that trades off some loss in output data integrity for energy and/or performance gains. In this article, we present the approximate cache, which approximates similar values and saves energy in the L2 cache of general-purpose graphics processing units (GPGPUs). The L2 cache is a critical component in the memory hierarchy of GPGPUs, as it accommodates data from thousands of simultaneously executing threads. Simply increasing the size of the L2 cache is not a viable solution to keep up with the growing data sizes of many-core applications. This work is motivated by the observation that threads within a warp write values into memory that are arithmetically similar. We exploit this property and propose low-cost, implementation-efficient hardware that trades accuracy for energy. The approximate cache identifies similar values at runtime and, when similarity is detected, allows only one thread to write into the cache. Since the approximate cache packs more data into a smaller space, it enables downsizing of the data array with negligible impact on cache misses and lower-level memory. The approximate cache reduces both dynamic and static energy. Because similar values are stored only once, each memory instruction accesses fewer cache cells, reducing dynamic energy. In addition, the approximate cache increases the frequency of bank idleness; by power gating idle banks, static energy is reduced. Our evaluations reveal that the approximate cache reduces energy by 52% with minimal quality degradation while maintaining performance across a diverse set of GPGPU applications.
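The core idea of the abstract — detecting arithmetically similar values written by threads of a warp and storing only one representative copy — can be illustrated with a small software sketch. This is a hypothetical simulation, not the paper's hardware design: the greedy packing scheme, the `rel_tol` similarity threshold, and the function names are all assumptions made for illustration.

```python
def pack_warp_values(values, rel_tol=0.05):
    """Greedy similarity packing for one warp's store values.

    Each value either joins an existing representative (if within a
    relative tolerance, the hypothetical similarity criterion) or
    becomes a new representative. Only `reps` would occupy data-array
    cells; `index` is the per-thread pointer to its representative.
    """
    reps = []    # distinct representative values actually stored
    index = []   # per-thread index into reps
    for v in values:
        for i, r in enumerate(reps):
            if abs(v - r) <= rel_tol * max(abs(r), 1e-12):
                index.append(i)   # similar: reuse the stored copy
                break
        else:
            reps.append(v)        # dissimilar: store a new copy
            index.append(len(reps) - 1)
    return reps, index

def unpack(reps, index):
    """Reconstruct the (approximated) per-thread values on a read."""
    return [reps[i] for i in index]
```

For example, packing `[1.00, 1.02, 0.99, 5.0]` stores only two representatives instead of four values; the readback returns `[1.0, 1.0, 1.0, 5.0]`, showing both the space saving and the bounded approximation error the abstract trades for energy.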

Journal

ACM Transactions on Embedded Computing Systems (TECS), Association for Computing Machinery

Published: Sep 26, 2020

Keywords: Approximate computing