
Article: Learning by reusing previous advice: a memory-based teacher–student framework

Title: Learning by reusing previous advice: a memory-based teacher–student framework
Authors: Zhu, C; Cai, Y; Hu, S; Leung, H; Chiu, KWD
Issue Date: 2022
Publisher: Springer. The Journal's web site is located at http://springerlink.metapress.com/openurl.asp?genre=journal&issn=1387-2532
Citation: Autonomous Agents and Multi-Agent Systems, 2022, v. 37 n. 1, p. 14
Abstract: Reinforcement Learning (RL) has been widely used to solve sequential decision-making problems. However, it often suffers from slow learning speed in complex scenarios. Teacher–student frameworks address this issue by enabling agents to ask for and give advice so that a student agent can leverage the knowledge of a teacher agent to facilitate its learning. In this paper, we consider the effect of reusing previous advice, and propose a novel memory-based teacher–student framework such that student agents can memorize and reuse the previous advice from teacher agents. In particular, we propose two methods to decide whether previous advice should be reused: Q-Change per Step that reuses the advice if it leads to an increase in Q-values, and Decay Reusing Probability that reuses the advice with a decaying probability. The experiments on diverse RL tasks (Mario, Predator–Prey and Half Field Offense) confirm that our proposed framework significantly outperforms the existing frameworks in which previous advice is not reused.
Persistent Identifier: http://hdl.handle.net/10722/324859
ISI Accession Number ID: WOS:000905425700001
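The two reuse rules named in the abstract can be sketched in code. This is a minimal illustration based only on the abstract's description, not the paper's actual implementation; the class and method names (`AdviceMemory`, `reuse_q_change`, `reuse_decay`) and the decay parameter are assumptions for the sake of the example.

```python
import random

class AdviceMemory:
    """Sketch of a memory-based advice-reuse policy for a student agent.

    Illustrative only: names and structure are assumptions, not the
    authors' code.
    """

    def __init__(self, decay=0.99):
        self.memory = {}       # state -> action previously advised by the teacher
        self.reuse_prob = 1.0  # used by the Decay Reusing Probability rule
        self.decay = decay

    def store(self, state, action):
        # Remember the teacher's advice for this state.
        self.memory[state] = action

    def reuse_q_change(self, state, q_before, q_after):
        # Q-Change per Step: reuse remembered advice only if following it
        # led to an increase in the Q-value; otherwise fall back to the
        # student's own policy (signalled here by returning None).
        if state in self.memory and q_after > q_before:
            return self.memory[state]
        return None

    def reuse_decay(self, state):
        # Decay Reusing Probability: reuse remembered advice with a
        # probability that decays after each reuse, so the student
        # gradually shifts to its own learned policy.
        if state in self.memory and random.random() < self.reuse_prob:
            self.reuse_prob *= self.decay
            return self.memory[state]
        return None
```

In both rules, returning `None` leaves the decision to the student's own (e.g. epsilon-greedy) action selection, so reused advice only overrides the policy when the rule's condition holds.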

 

DC Field | Value | Language
dc.contributor.author | Zhu, C | -
dc.contributor.author | Cai, Y | -
dc.contributor.author | Hu, S | -
dc.contributor.author | Leung, H | -
dc.contributor.author | Chiu, KWD | -
dc.date.accessioned | 2023-02-20T01:39:19Z | -
dc.date.available | 2023-02-20T01:39:19Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Autonomous Agents and Multi-Agent Systems, 2022, v. 37 n. 1, p. 14 | -
dc.identifier.uri | http://hdl.handle.net/10722/324859 | -
dc.description.abstract | Reinforcement Learning (RL) has been widely used to solve sequential decision-making problems. However, it often suffers from slow learning speed in complex scenarios. Teacher–student frameworks address this issue by enabling agents to ask for and give advice so that a student agent can leverage the knowledge of a teacher agent to facilitate its learning. In this paper, we consider the effect of reusing previous advice, and propose a novel memory-based teacher–student framework such that student agents can memorize and reuse the previous advice from teacher agents. In particular, we propose two methods to decide whether previous advice should be reused: Q-Change per Step that reuses the advice if it leads to an increase in Q-values, and Decay Reusing Probability that reuses the advice with a decaying probability. The experiments on diverse RL tasks (Mario, Predator–Prey and Half Field Offense) confirm that our proposed framework significantly outperforms the existing frameworks in which previous advice is not reused. | -
dc.language | eng | -
dc.publisher | Springer. The Journal's web site is located at http://springerlink.metapress.com/openurl.asp?genre=journal&issn=1387-2532 | -
dc.relation.ispartof | Autonomous Agents and Multi-Agent Systems | -
dc.rights | This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature's AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/s10458-022-09595-1 | -
dc.title | Learning by reusing previous advice: a memory-based teacher–student framework | -
dc.type | Article | -
dc.identifier.email | Chiu, KWD: dchiu88@hku.hk | -
dc.identifier.doi | 10.1007/s10458-022-09595-1 | -
dc.identifier.hkuros | 343753 | -
dc.identifier.volume | 37 | -
dc.identifier.issue | 1 | -
dc.identifier.spage | 14 | -
dc.identifier.epage | 14 | -
dc.identifier.isi | WOS:000905425700001 | -
