Conference Paper: A Hardware-Aware Neural Architecture Search Pareto Front Exploration for In-Memory Computing

Title: A Hardware-Aware Neural Architecture Search Pareto Front Exploration for In-Memory Computing
Authors: Guan, Ziyi; Zhou, Wenyong; Ren, Yuan; Xie, Rui; Yu, Hao; Wong, Ngai
Issue Date: 25-Oct-2022
Publisher: IEEE
Abstract

Traditional neural networks deployed on CPU/GPU architectures have achieved impressive results on various AI tasks. However, growing model sizes and intensive computation present stringent challenges for deployment on edge devices with limited compute and storage resources. This paper proposes a one-shot training-evaluation framework to solve the neural architecture search (NAS) problem for in-memory computing, targeting the emerging resistive random-access memory (RRAM) analog AI platform. We test the inference accuracy and hardware performance of subnets sampled along different dimensions of a pretrained supernet. Experiments show that the proposed one-shot hardware-aware NAS (HW-NAS) framework can effectively explore the Pareto front of accuracy versus hardware performance, and generate better-performing models by morphing a standard backbone model.


Persistent Identifier: http://hdl.handle.net/10722/339479

 

DC Field: Value
dc.contributor.author: Guan, Ziyi
dc.contributor.author: Zhou, Wenyong
dc.contributor.author: Ren, Yuan
dc.contributor.author: Xie, Rui
dc.contributor.author: Yu, Hao
dc.contributor.author: Wong, Ngai
dc.date.accessioned: 2024-03-11T10:36:58Z
dc.date.available: 2024-03-11T10:36:58Z
dc.date.issued: 2022-10-25
dc.identifier.uri: http://hdl.handle.net/10722/339479
dc.description.abstract: Traditional neural networks deployed on CPU/GPU architectures have achieved impressive results on various AI tasks. However, growing model sizes and intensive computation present stringent challenges for deployment on edge devices with limited compute and storage resources. This paper proposes a one-shot training-evaluation framework to solve the neural architecture search (NAS) problem for in-memory computing, targeting the emerging resistive random-access memory (RRAM) analog AI platform. We test the inference accuracy and hardware performance of subnets sampled along different dimensions of a pretrained supernet. Experiments show that the proposed one-shot hardware-aware NAS (HW-NAS) framework can effectively explore the Pareto front of accuracy versus hardware performance, and generate better-performing models by morphing a standard backbone model.
dc.language: eng
dc.publisher: IEEE
dc.relation.ispartof: IEEE 16th International Conference on Solid-State & Integrated Circuit Technology (ICSICT), 25/10/2022-28/10/2022, Nanjing
dc.title: A Hardware-Aware Neural Architecture Search Pareto Front Exploration for In-Memory Computing
dc.type: Conference_Paper
dc.identifier.doi: 10.1109/ICSICT55466.2022.9963263
dc.identifier.scopus: eid_2-s2.0-85143977797
