Links for fulltext
(May Require Subscription)
- Publisher Website: 10.1145/3394885.3431554
- Scopus: eid_2-s2.0-85100574970
- WOS: WOS:000668583700069
Conference Paper: Mixed Precision Quantization for ReRAM-based DNN Inference Accelerators
Title | Mixed Precision Quantization for ReRAM-based DNN Inference Accelerators |
---|---|
Authors | Huang, S; Ankit, A; Silveira, P; Antunes, R; Chalamalasetti, SR; Hajj, IE; Kim, DE; Aguiar, G; Bruel, P; Serebryakov, G; Xu, C; Li, C; Faraboschi, P; Strachan, JP; Chen, D; Roy, K; Hwu, WW; Milojicic, D |
Keywords | Mixed precision quantization; ReRAM; DNN inference accelerators |
Issue Date | 2021 |
Publisher | Association for Computing Machinery. The Proceedings' web site is located at https://dl.acm.org/conference/aspdac/proceedings (http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000194) |
Citation | Proceedings of the 26th Asia and South Pacific Design Automation Conference (ASP-DAC), Virtual Conference, Tokyo, Japan, 18-21 January 2021, p. 372–377 |
Abstract | ReRAM-based accelerators have shown great potential for accelerating DNN inference because ReRAM crossbars can perform analog matrix-vector multiplication (MVM) operations with low latency and energy consumption. However, these crossbars require the use of ADCs, which constitute a significant fraction of the cost of MVM operations. The overhead of ADCs can be mitigated via partial sum quantization. However, prior quantization flows for DNN inference accelerators do not consider partial sum quantization, which is not relevant to traditional digital architectures. To address this issue, we propose a mixed precision quantization scheme for ReRAM-based DNN inference accelerators where weight quantization, input quantization, and partial sum quantization are jointly applied to each DNN layer. We also propose an automated quantization flow powered by deep reinforcement learning to search the large design space for the best quantization configuration. Our evaluation shows that the proposed mixed precision quantization scheme and quantization flow reduce inference latency and energy consumption by up to 3.89× and 4.84×, respectively, while losing only 1.18% in DNN inference accuracy. |
Persistent Identifier | http://hdl.handle.net/10722/305214 |
ISBN | 9781450379991 |
ISSN | 2153-6961 |
ISI Accession Number ID | WOS:000668583700069 |
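The abstract describes jointly quantizing weights, inputs, and the crossbar partial sums that pass through ADCs. The following is a minimal illustrative sketch of that idea, not the paper's implementation: it simulates an MVM split across fixed-height crossbars, with uniform symmetric quantization at each of the three points and the ADC modeled as quantization of each crossbar's partial sum. All names, bit widths, and the 64-row crossbar size are assumptions for illustration.

```python
import numpy as np

def quantize(x, bits, x_max):
    # Uniform symmetric quantization of x to the given bit width,
    # with the scale derived from the dynamic range x_max.
    levels = 2 ** (bits - 1) - 1
    scale = x_max / levels
    return np.clip(np.round(x / scale), -levels, levels) * scale

def crossbar_mvm(weights, inputs, b_w=4, b_in=4, b_ps=6, xbar_rows=64):
    # Simulate one layer's MVM on ReRAM crossbars with weight,
    # input, and partial-sum (ADC) quantization applied jointly.
    w_q = quantize(weights, b_w, np.abs(weights).max())
    x_q = quantize(inputs, b_in, np.abs(inputs).max())
    out = np.zeros(weights.shape[0])
    # A large weight matrix is split across crossbars of xbar_rows
    # rows; each crossbar's analog partial sum is digitized by an
    # ADC, modeled here as quantization to b_ps bits.
    for start in range(0, weights.shape[1], xbar_rows):
        ps = w_q[:, start:start + xbar_rows] @ x_q[start:start + xbar_rows]
        out += quantize(ps, b_ps, np.abs(ps).max() + 1e-12)
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 128))
x = rng.standard_normal(128)
y = crossbar_mvm(W, x)
```

In the paper's scheme the three bit widths (`b_w`, `b_in`, `b_ps`) are chosen per layer by a deep-reinforcement-learning search over the design space; the sketch above only shows the forward computation such a search would evaluate.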
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Huang, S | - |
dc.contributor.author | Ankit, A | - |
dc.contributor.author | Silveira, P | - |
dc.contributor.author | Antunes, R | - |
dc.contributor.author | Chalamalasetti, SR | - |
dc.contributor.author | Hajj, IE | - |
dc.contributor.author | Kim, DE | - |
dc.contributor.author | Aguiar, G | - |
dc.contributor.author | Bruel, P | - |
dc.contributor.author | Serebryakov, G | - |
dc.contributor.author | Xu, C | - |
dc.contributor.author | Li, C | - |
dc.contributor.author | Faraboschi, P | - |
dc.contributor.author | Strachan, JP | - |
dc.contributor.author | Chen, D | - |
dc.contributor.author | Roy, K | - |
dc.contributor.author | Hwu, WW | - |
dc.contributor.author | Milojicic, D | - |
dc.date.accessioned | 2021-10-20T10:06:15Z | - |
dc.date.available | 2021-10-20T10:06:15Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Proceedings of the 26th Asia and South Pacific Design Automation Conference (ASP-DAC), Virtual Conference, Tokyo, Japan, 18-21 January 2021, p. 372–377 | - |
dc.identifier.isbn | 9781450379991 | - |
dc.identifier.issn | 2153-6961 | - |
dc.identifier.uri | http://hdl.handle.net/10722/305214 | - |
dc.description.abstract | ReRAM-based accelerators have shown great potential for accelerating DNN inference because ReRAM crossbars can perform analog matrix-vector multiplication (MVM) operations with low latency and energy consumption. However, these crossbars require the use of ADCs, which constitute a significant fraction of the cost of MVM operations. The overhead of ADCs can be mitigated via partial sum quantization. However, prior quantization flows for DNN inference accelerators do not consider partial sum quantization, which is not relevant to traditional digital architectures. To address this issue, we propose a mixed precision quantization scheme for ReRAM-based DNN inference accelerators where weight quantization, input quantization, and partial sum quantization are jointly applied to each DNN layer. We also propose an automated quantization flow powered by deep reinforcement learning to search the large design space for the best quantization configuration. Our evaluation shows that the proposed mixed precision quantization scheme and quantization flow reduce inference latency and energy consumption by up to 3.89× and 4.84×, respectively, while losing only 1.18% in DNN inference accuracy. | - |
dc.language | eng | - |
dc.publisher | Association for Computing Machinery. The Proceedings' web site is located at https://dl.acm.org/conference/aspdac/proceedings (http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000194) | - |
dc.relation.ispartof | Asia and South Pacific Design Automation Conference Proceedings | - |
dc.rights | Asia and South Pacific Design Automation Conference Proceedings. Copyright © Association for Computing Machinery. | - |
dc.subject | Mixed precision quantization | - |
dc.subject | ReRAM | - |
dc.subject | DNN inference accelerators | - |
dc.title | Mixed Precision Quantization for ReRAM-based DNN Inference Accelerators | - |
dc.type | Conference_Paper | - |
dc.identifier.email | Li, C: canl@hku.hk | - |
dc.identifier.authority | Li, C=rp02706 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3394885.3431554 | - |
dc.identifier.scopus | eid_2-s2.0-85100574970 | - |
dc.identifier.hkuros | 328219 | - |
dc.identifier.spage | 372 | - |
dc.identifier.epage | 377 | - |
dc.identifier.isi | WOS:000668583700069 | - |
dc.publisher.place | United States | - |