Article: In situ training of feed-forward and recurrent convolutional memristor networks

Title: In situ training of feed-forward and recurrent convolutional memristor networks
Authors: Wang, Z; Li, C; Lin, P; Rao, M; Nie, Y; Song, W; Qiu, Q; Li, Y; Yan, P; Strachan, JP; Ge, N; McDonald, N; Wu, Q; Hu, M; Wu, H; Williams, RS; Xia, Q; Yang, JJ
Issue Date: 2019
Publisher: Nature Research (part of Springer Nature). The Journal's web site is located at https://www.nature.com/natmachintell/
Citation: Nature Machine Intelligence, 2019, v. 1, n. 9, p. 434-442
Abstract: The explosive growth of machine learning is largely due to the recent advancements in hardware and architecture. The engineering of network structures, taking advantage of the spatial or temporal translational isometry of patterns, naturally leads to bio-inspired, shared-weight structures such as convolutional neural networks, which have markedly reduced the number of free parameters. State-of-the-art microarchitectures commonly rely on weight-sharing techniques, but still suffer from the von Neumann bottleneck of transistor-based platforms. Here, we experimentally demonstrate the in situ training of a five-level convolutional neural network that self-adapts to non-idealities of the one-transistor one-memristor array to classify the MNIST dataset, achieving similar accuracy to the memristor-based multilayer perceptron with a reduction in trainable parameters of ~75% owing to the shared weights. In addition, the memristors encoded both spatial and temporal translational invariance simultaneously in a convolutional long short-term memory network—a memristor-based neural network with intrinsic 3D input processing—which was trained in situ to classify a synthetic MNIST sequence dataset using just 850 weights. These proof-of-principle demonstrations combine the architectural advantages of weight sharing and the area/energy efficiency boost of the memristors, paving the way to future edge artificial intelligence.
Persistent Identifier: http://hdl.handle.net/10722/291116
ISSN: 2522-5839
2021 Impact Factor: 25.898
2020 SCImago Journal Rankings: 4.894
ISI Accession Number ID: WOS:000571252900007
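The abstract's ~75% parameter reduction comes from weight sharing: a convolutional kernel reuses the same few weights at every spatial position, whereas a fully connected (perceptron) layer assigns a distinct weight to every input-output pair. Below is a minimal sketch of that arithmetic using hypothetical layer sizes, not the paper's actual five-level architecture, so the exact percentage differs.

```python
# Illustrative parameter counting for weight sharing. The layer sizes
# below are hypothetical and chosen only to show the mechanism; they
# are not the architecture used in the paper.

def conv_params(in_ch: int, out_ch: int, k: int) -> int:
    """A conv layer shares one k x k kernel per (input, output) channel
    pair across all spatial positions, plus one bias per output channel."""
    return in_ch * out_ch * k * k + out_ch

def fc_params(n_in: int, n_out: int) -> int:
    """A fully connected layer has one weight per input-output pair,
    plus one bias per output unit."""
    return n_in * n_out + n_out

# Hypothetical small CNN for 28x28 MNIST: two 3x3 conv layers
# (1 -> 8 and 8 -> 16 channels) and a classifier head on a pooled
# 7x7x16 feature map.
cnn = conv_params(1, 8, 3) + conv_params(8, 16, 3) + fc_params(7 * 7 * 16, 10)

# Hypothetical MLP baseline with no weight sharing: 784 -> 128 -> 10.
mlp = fc_params(784, 128) + fc_params(128, 10)

print(f"CNN: {cnn} trainable parameters")              # 9,098
print(f"MLP: {mlp} trainable parameters")              # 101,770
print(f"Reduction from sharing: {1 - cnn / mlp:.0%}")  # 91% with these toy sizes
```

With these toy sizes the reduction is about 91%; the ~75% figure reported in the abstract reflects the paper's specific architecture and its memristor-based multilayer perceptron baseline.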

 

DC Field                   Value
dc.contributor.author      Wang, Z
dc.contributor.author      Li, C
dc.contributor.author      Lin, P
dc.contributor.author      Rao, M
dc.contributor.author      Nie, Y
dc.contributor.author      Song, W
dc.contributor.author      Qiu, Q
dc.contributor.author      Li, Y
dc.contributor.author      Yan, P
dc.contributor.author      Strachan, JP
dc.contributor.author      Ge, N
dc.contributor.author      McDonald, N
dc.contributor.author      Wu, Q
dc.contributor.author      Hu, M
dc.contributor.author      Wu, H
dc.contributor.author      Williams, RS
dc.contributor.author      Xia, Q
dc.contributor.author      Yang, JJ
dc.date.accessioned        2020-11-04T08:41:35Z
dc.date.available          2020-11-04T08:41:35Z
dc.date.issued             2019
dc.identifier.citation     Nature Machine Intelligence, 2019, v. 1 n. 9, p. 434-442
dc.identifier.issn         2522-5839
dc.identifier.uri          http://hdl.handle.net/10722/291116
dc.description.abstract    The explosive growth of machine learning is largely due to the recent advancements in hardware and architecture. The engineering of network structures, taking advantage of the spatial or temporal translational isometry of patterns, naturally leads to bio-inspired, shared-weight structures such as convolutional neural networks, which have markedly reduced the number of free parameters. State-of-the-art microarchitectures commonly rely on weight-sharing techniques, but still suffer from the von Neumann bottleneck of transistor-based platforms. Here, we experimentally demonstrate the in situ training of a five-level convolutional neural network that self-adapts to non-idealities of the one-transistor one-memristor array to classify the MNIST dataset, achieving similar accuracy to the memristor-based multilayer perceptron with a reduction in trainable parameters of ~75% owing to the shared weights. In addition, the memristors encoded both spatial and temporal translational invariance simultaneously in a convolutional long short-term memory network—a memristor-based neural network with intrinsic 3D input processing—which was trained in situ to classify a synthetic MNIST sequence dataset using just 850 weights. These proof-of-principle demonstrations combine the architectural advantages of weight sharing and the area/energy efficiency boost of the memristors, paving the way to future edge artificial intelligence.
dc.language                eng
dc.publisher               Nature Research (part of Springer Nature). The Journal's web site is located at https://www.nature.com/natmachintell/
dc.relation.ispartof       Nature Machine Intelligence
dc.title                   In situ training of feed-forward and recurrent convolutional memristor networks
dc.type                    Article
dc.identifier.email        Wang, Z: zrwang@hku.hk
dc.identifier.email        Li, C: canl@hku.hk
dc.identifier.authority    Wang, Z=rp02714
dc.identifier.authority    Li, C=rp02706
dc.description.nature      link_to_subscribed_fulltext
dc.identifier.doi          10.1038/s42256-019-0089-1
dc.identifier.hkuros       700003891
dc.identifier.volume       1
dc.identifier.issue        9
dc.identifier.spage        434
dc.identifier.epage        442
dc.identifier.isi          WOS:000571252900007
dc.publisher.place         United Kingdom
dc.identifier.issnl        2522-5839

The record can be exported via the OAI-PMH interface in XML formats, or exported to other non-XML formats.
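As a hedged illustration of the OAI-PMH export path, the sketch below issues a standard GetRecord request for this item's Dublin Core metadata. The endpoint URL and OAI identifier are assumptions modelled on common DSpace conventions, not confirmed values for this repository.

```python
# Minimal OAI-PMH GetRecord sketch. BASE and IDENTIFIER are assumptions
# (typical DSpace-style values built from the item's handle), not
# confirmed for this repository; verb and metadataPrefix are standard
# OAI-PMH protocol parameters.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://hub.hku.hk/oai/request"      # hypothetical endpoint
IDENTIFIER = "oai:hub.hku.hk:10722/291116"   # hypothetical OAI identifier

params = urlencode({
    "verb": "GetRecord",         # standard OAI-PMH verb
    "metadataPrefix": "oai_dc",  # unqualified Dublin Core, as in the record above
    "identifier": IDENTIFIER,
})

with urlopen(f"{BASE}?{params}") as resp:
    print(resp.read().decode("utf-8"))  # raw oai_dc XML for the record
```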