Links for fulltext (may require subscription)
- Publisher Website: https://doi.org/10.1080/0020174x.2023.2261493
- Scopus: eid_2-s2.0-85172789581
- WOS: WOS:001071526100001

Article: Predicting and preferring
| Title | Predicting and preferring |
|---|---|
| Authors | Sharadin, Nathaniel |
| Keywords | AI; medical ethics; patient preference predictors; PPP; preference shaping |
| Issue Date | 25-Sep-2023 |
| Publisher | Taylor and Francis Group |
| Citation | Inquiry, 2023 |
| Abstract | The use of machine learning, or “artificial intelligence” (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue that this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion, I connect this concern to broader issues in AI safety. |
| Persistent Identifier | http://hdl.handle.net/10722/340930 |
| ISSN | 0020-174X (2023 Impact Factor: 1.0; 2023 SCImago Journal Rankings: 0.769) |
| ISI Accession Number ID | WOS:001071526100001 |
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Sharadin, Nathaniel | - |
| dc.date.accessioned | 2024-03-11T10:48:23Z | - |
| dc.date.available | 2024-03-11T10:48:23Z | - |
| dc.date.issued | 2023-09-25 | - |
| dc.identifier.citation | Inquiry, 2023 | - |
| dc.identifier.issn | 0020-174X | - |
| dc.identifier.uri | http://hdl.handle.net/10722/340930 | - |
| dc.description.abstract | The use of machine learning, or “artificial intelligence” (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue that this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion, I connect this concern to broader issues in AI safety. | - |
| dc.language | eng | - |
| dc.publisher | Taylor and Francis Group | - |
| dc.relation.ispartof | Inquiry | - |
| dc.subject | AI | - |
| dc.subject | medical ethics | - |
| dc.subject | patient preference predictors | - |
| dc.subject | PPP | - |
| dc.subject | preference shaping | - |
| dc.title | Predicting and preferring | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1080/0020174x.2023.2261493 | - |
| dc.identifier.scopus | eid_2-s2.0-85172789581 | - |
| dc.identifier.eissn | 1502-3923 | - |
| dc.identifier.isi | WOS:001071526100001 | - |
| dc.identifier.issnl | 0020-174X | - |
