Article: Group prioritarianism: why AI should not replace humanity

Title: Group prioritarianism: why AI should not replace humanity
Authors: Hong, Frank
Keywords: AI; AI Safety; AI-Wellbeing; Population ethics; Prioritarianism; Super-beneficiaries; Utility monsters
Issue Date: 13-Jul-2024
Publisher: Springer
Citation: Philosophical Studies, 2024
Abstract

If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AIs and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not always create the world with the most value, or non-welfarist theories that tell us that the best world may not be the world with the most welfare, I propose a new theory that justifies giving some resources to humanity in the face of overwhelming AI well-being. I call this new theory "Group Prioritarianism."


Persistent Identifier: http://hdl.handle.net/10722/357314
ISSN: 0031-8116
2023 Impact Factor: 1.1
2023 SCImago Journal Rankings: 1.203
ISI Accession Number ID: WOS:001270794200002

 

DC Field: Value
dc.contributor.author: Hong, Frank
dc.date.accessioned: 2025-06-23T08:54:40Z
dc.date.available: 2025-06-23T08:54:40Z
dc.date.issued: 2024-07-13
dc.identifier.citation: Philosophical Studies, 2024
dc.identifier.issn: 0031-8116
dc.identifier.uri: http://hdl.handle.net/10722/357314
dc.description.abstract: If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AIs and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not always create the world with the most value, or non-welfarist theories that tell us that the best world may not be the world with the most welfare, I propose a new theory that justifies giving some resources to humanity in the face of overwhelming AI well-being. I call this new theory "Group Prioritarianism."
dc.language: eng
dc.publisher: Springer
dc.relation.ispartof: Philosophical Studies
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: AI
dc.subject: AI Safety
dc.subject: AI-Wellbeing
dc.subject: Population ethics
dc.subject: Prioritarianism
dc.subject: Super-beneficiaries
dc.subject: Utility monsters
dc.title: Group prioritarianism: why AI should not replace humanity
dc.type: Article
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.1007/s11098-024-02189-5
dc.identifier.scopus: eid_2-s2.0-85198420711
dc.identifier.eissn: 1573-0883
dc.identifier.isi: WOS:001270794200002
dc.identifier.issnl: 0031-8116
