Links for fulltext (may require subscription):
- Publisher website (DOI): 10.1007/s11098-023-02006-5
- Scopus: eid_2-s2.0-85168905641
- Web of Science: WOS:001060437400001
Article: Algorithmic fairness and resentment
Title | Algorithmic fairness and resentment |
---|---|
Authors | Babic, Boris; Johnson King, Zoë |
Keywords | Algorithmic ethics; Bias; Fairness; Priors; Resentment; Statistical evidence |
Issue Date | 2023 |
Citation | Philosophical Studies, 2023 |
Abstract | In this paper we develop a general theory of algorithmic fairness. Drawing on Johnson King and Babic’s work on moral encroachment, on Gary Becker’s work on labor market discrimination, and on Strawson’s idea of resentment and indignation as responses to violations of the demand for goodwill toward oneself and others, we locate attitudes to fairness in an agent’s utility function. In particular, we first argue that fairness is a matter of a decision-maker’s relative concern for the plight of people from different groups, rather than of the outcomes produced for different groups. We then show how an agent’s preferences, including in particular their attitudes to error, give rise to their decision thresholds. Tying these points together, we argue that the agent’s relative degrees of concern for different groups manifest in a difference in decision thresholds applied to these groups. |
Persistent Identifier | http://hdl.handle.net/10722/334978 |
ISSN | 0031-8116 (2023 Impact Factor: 1.1; 2023 SCImago Journal Rankings: 1.203) |
ISI Accession Number ID | WOS:001060437400001 |
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Babic, Boris | - |
dc.contributor.author | Johnson King, Zoë | - |
dc.date.accessioned | 2023-10-20T06:52:10Z | - |
dc.date.available | 2023-10-20T06:52:10Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Philosophical Studies, 2023 | - |
dc.identifier.issn | 0031-8116 | - |
dc.identifier.uri | http://hdl.handle.net/10722/334978 | - |
dc.description.abstract | In this paper we develop a general theory of algorithmic fairness. Drawing on Johnson King and Babic’s work on moral encroachment, on Gary Becker’s work on labor market discrimination, and on Strawson’s idea of resentment and indignation as responses to violations of the demand for goodwill toward oneself and others, we locate attitudes to fairness in an agent’s utility function. In particular, we first argue that fairness is a matter of a decision-maker’s relative concern for the plight of people from different groups, rather than of the outcomes produced for different groups. We then show how an agent’s preferences, including in particular their attitudes to error, give rise to their decision thresholds. Tying these points together, we argue that the agent’s relative degrees of concern for different groups manifest in a difference in decision thresholds applied to these groups. | - |
dc.language | eng | - |
dc.relation.ispartof | Philosophical Studies | - |
dc.subject | Algorithmic ethics | - |
dc.subject | Bias | - |
dc.subject | Fairness | - |
dc.subject | Priors | - |
dc.subject | Resentment | - |
dc.subject | Statistical evidence | - |
dc.title | Algorithmic fairness and resentment | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1007/s11098-023-02006-5 | - |
dc.identifier.scopus | eid_2-s2.0-85168905641 | - |
dc.identifier.eissn | 1573-0883 | - |
dc.identifier.isi | WOS:001060437400001 | - |