Links for fulltext (may require subscription)
- Publisher Website: https://doi.org/10.1007/s11098-024-02189-5
- Scopus: eid_2-s2.0-85198420711
- WOS: WOS:001270794200002

Article: Group prioritarianism: why AI should not replace humanity
| Title | Group prioritarianism: why AI should not replace humanity |
|---|---|
| Authors | Hong, Frank |
| Keywords | AI; AI Safety; AI-Wellbeing; Population ethics; Prioritarianism; Super-beneficiaries; Utility monsters |
| Issue Date | 13-Jul-2024 |
| Publisher | Springer |
| Citation | Philosophical Studies, 2024 |
| Abstract | If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AI and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not always create the world with the most value, or non-welfarist theories that tell us that the best world may not be the world with the most welfare, I propose a new theory that justifies giving some resources to humanity in the face of overwhelming AI well-being. I call this new theory "Group Prioritarianism." |
| Persistent Identifier | http://hdl.handle.net/10722/357314 |
| ISSN | 0031-8116 (2023 Impact Factor: 1.1; 2023 SCImago Journal Rankings: 1.203) |
| ISI Accession Number ID | WOS:001270794200002 |
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Hong, Frank | - |
| dc.date.accessioned | 2025-06-23T08:54:40Z | - |
| dc.date.available | 2025-06-23T08:54:40Z | - |
| dc.date.issued | 2024-07-13 | - |
| dc.identifier.citation | Philosophical Studies, 2024 | - |
| dc.identifier.issn | 0031-8116 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/357314 | - |
| dc.description.abstract | If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AI and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we ought not always create the world with the most value, or non-welfarist theories that tell us that the best world may not be the world with the most welfare, I propose a new theory that justifies giving some resources to humanity in the face of overwhelming AI well-being. I call this new theory "Group Prioritarianism." | - |
| dc.language | eng | - |
| dc.publisher | Springer | - |
| dc.relation.ispartof | Philosophical Studies | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | AI | - |
| dc.subject | AI Safety | - |
| dc.subject | AI-Wellbeing | - |
| dc.subject | Population ethics | - |
| dc.subject | Prioritarianism | - |
| dc.subject | Super-beneficiaries | - |
| dc.subject | Utility monsters | - |
| dc.title | Group prioritarianism: why AI should not replace humanity | - |
| dc.type | Article | - |
| dc.description.nature | published_or_final_version | - |
| dc.identifier.doi | 10.1007/s11098-024-02189-5 | - |
| dc.identifier.scopus | eid_2-s2.0-85198420711 | - |
| dc.identifier.eissn | 1573-0883 | - |
| dc.identifier.isi | WOS:001270794200002 | - |
| dc.identifier.issnl | 0031-8116 | - |
