File Download
There are no files associated with this item.
Links for fulltext (May Require Subscription)
- Publisher Website (DOI): 10.1007/s00146-023-01748-4
- Scopus: eid_2-s2.0-85168370049
- Web of Science: WOS:001050722400002
Article: Language agents reduce the risk of existential catastrophe
Title | Language agents reduce the risk of existential catastrophe |
---|---|
Authors | Goldstein, Simon; Kirk-Giannini, Cameron Domenico |
Keywords | Existential risk; Goal misgeneralization; Interpretable AI; Language agents; Reward misspecification |
Issue Date | 2023 |
Citation | AI and Society, 2023 |
Abstract | Recent advances in natural-language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make and update plans to pursue their desires given their beliefs. We argue that the rise of language agents significantly reduces the probability of an existential catastrophe due to loss of control over an AGI. This is because the probability of such an existential catastrophe is proportional to the difficulty of aligning AGI systems, and language agents significantly reduce that difficulty. In particular, language agents help to resolve three important issues related to aligning AIs: reward misspecification, goal misgeneralization, and uninterpretability. |
Persistent Identifier | http://hdl.handle.net/10722/336392 |
ISSN | 0951-5666 (2023 Impact Factor: 2.9; 2023 SCImago Journal Rankings: 0.976) |
ISI Accession Number ID | WOS:001050722400002 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Goldstein, Simon | - |
dc.contributor.author | Kirk-Giannini, Cameron Domenico | - |
dc.date.accessioned | 2024-01-15T08:26:28Z | - |
dc.date.available | 2024-01-15T08:26:28Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | AI and Society, 2023 | - |
dc.identifier.issn | 0951-5666 | - |
dc.identifier.uri | http://hdl.handle.net/10722/336392 | - |
dc.description.abstract | Recent advances in natural-language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make and update plans to pursue their desires given their beliefs. We argue that the rise of language agents significantly reduces the probability of an existential catastrophe due to loss of control over an AGI. This is because the probability of such an existential catastrophe is proportional to the difficulty of aligning AGI systems, and language agents significantly reduce that difficulty. In particular, language agents help to resolve three important issues related to aligning AIs: reward misspecification, goal misgeneralization, and uninterpretability. | - |
dc.language | eng | - |
dc.relation.ispartof | AI and Society | - |
dc.subject | Existential risk | - |
dc.subject | Goal misgeneralization | - |
dc.subject | Interpretable AI | - |
dc.subject | Language agents | - |
dc.subject | Reward misspecification | - |
dc.title | Language agents reduce the risk of existential catastrophe | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/s00146-023-01748-4 | - |
dc.identifier.scopus | eid_2-s2.0-85168370049 | - |
dc.identifier.eissn | 1435-5655 | - |
dc.identifier.isi | WOS:001050722400002 | - |