Article: In situ training of feed-forward and recurrent convolutional memristor networks
| Title | In situ training of feed-forward and recurrent convolutional memristor networks |
|---|---|
| Authors | Wang, Z; Li, C; Lin, P; Rao, M; Nie, Y; Song, W; Qiu, Q; Li, Y; Yan, P; Strachan, JP; Ge, N; McDonald, N; Wu, Q; Hu, M; Wu, H; Williams, RS; Xia, Q; Yang, JJ |
| Issue Date | 2019 |
| Publisher | Nature Research (part of Springer Nature). The Journal's web site is located at https://www.nature.com/natmachintell/ |
| Citation | Nature Machine Intelligence, 2019, v. 1 n. 9, p. 434-442 |
| Abstract | The explosive growth of machine learning is largely due to the recent advancements in hardware and architecture. The engineering of network structures, taking advantage of the spatial or temporal translational isometry of patterns, naturally leads to bio-inspired, shared-weight structures such as convolutional neural networks, which have markedly reduced the number of free parameters. State-of-the-art microarchitectures commonly rely on weight-sharing techniques, but still suffer from the von Neumann bottleneck of transistor-based platforms. Here, we experimentally demonstrate the in situ training of a five-level convolutional neural network that self-adapts to non-idealities of the one-transistor one-memristor array to classify the MNIST dataset, achieving similar accuracy to the memristor-based multilayer perceptron with a reduction in trainable parameters of ~75% owing to the shared weights. In addition, the memristors encoded both spatial and temporal translational invariance simultaneously in a convolutional long short-term memory network—a memristor-based neural network with intrinsic 3D input processing—which was trained in situ to classify a synthetic MNIST sequence dataset using just 850 weights. These proof-of-principle demonstrations combine the architectural advantages of weight sharing and the area/energy efficiency boost of the memristors, paving the way to future edge artificial intelligence. |
| Persistent Identifier | http://hdl.handle.net/10722/291116 |
| DOI | 10.1038/s42256-019-0089-1 |
| ISSN | 2522-5839 |
| 2023 Impact Factor | 18.8 |
| 2023 SCImago Journal Rankings | 5.940 |
| ISI Accession Number ID | WOS:000571252900007 |
| DC Field | Value | Language |
|---|---|---|
dc.contributor.author | Wang, Z | - |
dc.contributor.author | Li, C | - |
dc.contributor.author | Lin, P | - |
dc.contributor.author | Rao, M | - |
dc.contributor.author | Nie, Y | - |
dc.contributor.author | Song, W | - |
dc.contributor.author | Qiu, Q | - |
dc.contributor.author | Li, Y | - |
dc.contributor.author | Yan, P | - |
dc.contributor.author | Strachan, JP | - |
dc.contributor.author | Ge, N | - |
dc.contributor.author | McDonald, N | - |
dc.contributor.author | Wu, Q | - |
dc.contributor.author | Hu, M | - |
dc.contributor.author | Wu, H | - |
dc.contributor.author | Williams, RS | - |
dc.contributor.author | Xia, Q | - |
dc.contributor.author | Yang, JJ | - |
dc.date.accessioned | 2020-11-04T08:41:35Z | - |
dc.date.available | 2020-11-04T08:41:35Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Nature Machine Intelligence, 2019, v. 1 n. 9, p. 434-442 | - |
dc.identifier.issn | 2522-5839 | - |
dc.identifier.uri | http://hdl.handle.net/10722/291116 | - |
dc.description.abstract | The explosive growth of machine learning is largely due to the recent advancements in hardware and architecture. The engineering of network structures, taking advantage of the spatial or temporal translational isometry of patterns, naturally leads to bio-inspired, shared-weight structures such as convolutional neural networks, which have markedly reduced the number of free parameters. State-of-the-art microarchitectures commonly rely on weight-sharing techniques, but still suffer from the von Neumann bottleneck of transistor-based platforms. Here, we experimentally demonstrate the in situ training of a five-level convolutional neural network that self-adapts to non-idealities of the one-transistor one-memristor array to classify the MNIST dataset, achieving similar accuracy to the memristor-based multilayer perceptron with a reduction in trainable parameters of ~75% owing to the shared weights. In addition, the memristors encoded both spatial and temporal translational invariance simultaneously in a convolutional long short-term memory network—a memristor-based neural network with intrinsic 3D input processing—which was trained in situ to classify a synthetic MNIST sequence dataset using just 850 weights. These proof-of-principle demonstrations combine the architectural advantages of weight sharing and the area/energy efficiency boost of the memristors, paving the way to future edge artificial intelligence. | - |
dc.language | eng | - |
dc.publisher | Nature Research (part of Springer Nature). The Journal's web site is located at https://www.nature.com/natmachintell/ | - |
dc.relation.ispartof | Nature Machine Intelligence | - |
dc.title | In situ training of feed-forward and recurrent convolutional memristor networks | - |
dc.type | Article | - |
dc.identifier.email | Wang, Z: zrwang@hku.hk | - |
dc.identifier.email | Li, C: canl@hku.hk | - |
dc.identifier.authority | Wang, Z=rp02714 | - |
dc.identifier.authority | Li, C=rp02706 | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1038/s42256-019-0089-1 | - |
dc.identifier.hkuros | 700003891 | - |
dc.identifier.volume | 1 | - |
dc.identifier.issue | 9 | - |
dc.identifier.spage | 434 | - |
dc.identifier.epage | 442 | - |
dc.identifier.isi | WOS:000571252900007 | - |
dc.publisher.place | United Kingdom | - |
dc.identifier.issnl | 2522-5839 | - |