Article: Reinforcement learning with analogue memristor arrays

Title: Reinforcement learning with analogue memristor arrays
Authors: Wang, Zhongrui; Li, Can; Song, Wenhao; Rao, Mingyi; Belkin, Daniel; Li, Yunning; Yan, Peng; Jiang, Hao; Lin, Peng; Hu, Miao; Strachan, John Paul; Ge, Ning; Barnell, Mark; Wu, Qing; Barto, Andrew G.; Qiu, Qinru; Williams, R. Stanley; Xia, Qiangfei; Yang, J. Joshua
Issue Date: 2019
Citation: Nature Electronics, 2019, v. 2, n. 3, p. 115-124
Abstract: © 2019, The Author(s), under exclusive licence to Springer Nature Limited. Reinforcement learning algorithms that use deep neural networks are a promising approach for the development of machines that can acquire knowledge and solve problems without human input or supervision. At present, however, these algorithms are implemented in software running on relatively standard complementary metal–oxide–semiconductor digital platforms, where performance will be constrained by the limits of Moore’s law and von Neumann architecture. Here, we report an experimental demonstration of reinforcement learning on a three-layer 1-transistor 1-memristor (1T1R) network using a modified learning algorithm tailored for our hybrid analogue–digital platform. To illustrate the capabilities of our approach in robust in situ training without the need for a model, we performed two classic control problems: the cart–pole and mountain car simulations. We also show that, compared with conventional digital systems in real-world reinforcement learning tasks, our hybrid analogue–digital computing system has the potential to achieve a significant boost in speed and energy efficiency.
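The core analogue operation behind a memristor-array neural network is computing a layer's matrix-vector product directly in a crossbar: input voltages drive the rows, and Ohm's and Kirchhoff's laws sum the resulting currents down each column. The sketch below is a minimal software model of that idea, not the paper's implementation; all names (`encode_weights`, `crossbar_matvec`), the conductance range, and the differential-pair encoding scheme are illustrative assumptions.

```python
import numpy as np

# Illustrative model of a memristor crossbar matrix-vector multiply.
# Each signed weight is encoded as the difference of two non-negative
# device conductances (g_pos - g_neg); applying a voltage vector v to
# the rows yields column currents i = v @ G by Ohm's/Kirchhoff's laws.

rng = np.random.default_rng(0)

def encode_weights(W, g_min=1e-6, g_max=1e-4):
    """Map signed weights onto two non-negative conductance arrays."""
    scale = (g_max - g_min) / np.abs(W).max()
    g_pos = g_min + scale * np.clip(W, 0.0, None)   # positive part
    g_neg = g_min + scale * np.clip(-W, 0.0, None)  # negative part
    return g_pos, g_neg, scale

def crossbar_matvec(g_pos, g_neg, v):
    """Differential column currents for input voltage vector v."""
    return v @ g_pos - v @ g_neg

W = rng.standard_normal((4, 3))   # a small weight matrix
v = rng.standard_normal(4)        # input voltages
g_pos, g_neg, scale = encode_weights(W)
y = crossbar_matvec(g_pos, g_neg, v) / scale  # rescale to weight units
print(np.allclose(y, v @ W))  # → True
```

The differential pair keeps every stored conductance non-negative, as physical devices require, while the common `g_min` offset cancels in the subtraction, so the analogue result matches the digital dot product up to the fixed scale factor.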
Persistent Identifier: http://hdl.handle.net/10722/286986
ISI Accession Number ID: WOS:000463819800011

 

DC Field                     Value
dc.contributor.author        Wang, Zhongrui
dc.contributor.author        Li, Can
dc.contributor.author        Song, Wenhao
dc.contributor.author        Rao, Mingyi
dc.contributor.author        Belkin, Daniel
dc.contributor.author        Li, Yunning
dc.contributor.author        Yan, Peng
dc.contributor.author        Jiang, Hao
dc.contributor.author        Lin, Peng
dc.contributor.author        Hu, Miao
dc.contributor.author        Strachan, John Paul
dc.contributor.author        Ge, Ning
dc.contributor.author        Barnell, Mark
dc.contributor.author        Wu, Qing
dc.contributor.author        Barto, Andrew G.
dc.contributor.author        Qiu, Qinru
dc.contributor.author        Williams, R. Stanley
dc.contributor.author        Xia, Qiangfei
dc.contributor.author        Yang, J. Joshua
dc.date.accessioned          2020-09-07T11:46:11Z
dc.date.available            2020-09-07T11:46:11Z
dc.date.issued               2019
dc.identifier.citation       Nature Electronics, 2019, v. 2, n. 3, p. 115-124
dc.identifier.uri            http://hdl.handle.net/10722/286986
dc.description.abstract      © 2019, The Author(s), under exclusive licence to Springer Nature Limited. Reinforcement learning algorithms that use deep neural networks are a promising approach for the development of machines that can acquire knowledge and solve problems without human input or supervision. At present, however, these algorithms are implemented in software running on relatively standard complementary metal–oxide–semiconductor digital platforms, where performance will be constrained by the limits of Moore’s law and von Neumann architecture. Here, we report an experimental demonstration of reinforcement learning on a three-layer 1-transistor 1-memristor (1T1R) network using a modified learning algorithm tailored for our hybrid analogue–digital platform. To illustrate the capabilities of our approach in robust in situ training without the need for a model, we performed two classic control problems: the cart–pole and mountain car simulations. We also show that, compared with conventional digital systems in real-world reinforcement learning tasks, our hybrid analogue–digital computing system has the potential to achieve a significant boost in speed and energy efficiency.
dc.language                  eng
dc.relation.ispartof         Nature Electronics
dc.title                     Reinforcement learning with analogue memristor arrays
dc.type                      Article
dc.description.nature        link_to_subscribed_fulltext
dc.identifier.doi            10.1038/s41928-019-0221-6
dc.identifier.scopus         eid_2-s2.0-85063037929
dc.identifier.volume         2
dc.identifier.issue          3
dc.identifier.spage          115
dc.identifier.epage          124
dc.identifier.eissn          2520-1131
dc.identifier.isi            WOS:000463819800011
dc.identifier.issnl          2520-1131