Algorithmic Fairness Revisited

Florian Tramèr

Master's Thesis, EPFL, 2015



Abstract

We study fairness in algorithmic decision making, with the goal of proposing formal and robust definitions and measures of an algorithm's bias towards sensitive user features. The main contribution of this work is a statistical framework for reasoning about equity in algorithmic decisions, while also considering various constraints on the utilities of users and algorithm vendors. We first revisit previous notions of fairness from the literature that are based on different measures of the dependence between sensitive features and algorithmic decisions. We illustrate several limitations of these measures, such as their failure to generalize to non-binary sensitive features or algorithm outputs, and we propose a more general and robust fairness measure based on mutual information, which has received little attention so far. In particular, we show that our fairness measure characterizes the statistical significance of an algorithm's bias significantly better than the notion of statistical parity introduced by Dwork et al.
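
The contrast between the two measures can be made concrete on a small contingency table. The following sketch (illustrative only, not code from the thesis, with a hypothetical table of counts for a binary sensitive feature S and binary decision O) computes both the statistical-parity gap in the sense of Dwork et al. and a plug-in estimate of the mutual information I(S;O):

```python
import numpy as np

# counts[s, o]: hypothetical number of users with sensitive value s and outcome o
counts = np.array([[70, 30],    # S = 0: 30% positive decisions
                   [55, 45]])   # S = 1: 45% positive decisions

joint = counts / counts.sum()            # empirical joint distribution P(S, O)
p_s = joint.sum(axis=1, keepdims=True)   # marginal P(S)
p_o = joint.sum(axis=0, keepdims=True)   # marginal P(O)

# Statistical parity gap: |P(O=1 | S=1) - P(O=1 | S=0)|
pos_rate = counts[:, 1] / counts.sum(axis=1)
parity_gap = abs(pos_rate[1] - pos_rate[0])

# Plug-in mutual information I(S; O) in nats (0 iff decisions are independent of S)
nz = joint > 0
mi = np.sum(joint[nz] * np.log(joint[nz] / (p_s @ p_o)[nz]))

print(f"parity gap = {parity_gap:.3f}, I(S;O) = {mi:.4f} nats")
```

Unlike the parity gap, the mutual-information estimate extends unchanged to sensitive features and outputs with more than two values.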
We further discuss the inadequacy of previously considered fairness measures, namely their inability to detect large-scale discriminatory practices that arise when algorithms with small biases are applied at a global scale. We initiate a discussion of statistical hypothesis tests, which, despite being standard tools in legal practice, have received little attention in the context of algorithmic fairness. In this regard, we present another advantage of mutual information over other proposed fairness measures: it is directly linked to a popular statistical goodness-of-fit test known as the G-test.
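
The link is the standard identity G = 2N · I(S;O), where N is the number of observations and I is the plug-in mutual information in nats. The sketch below (reusing the hypothetical table from the previous example) checks this against SciPy's power-divergence form of the contingency-table test:

```python
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([[70, 30],
                   [55, 45]])
N = counts.sum()

joint = counts / N
p_s = joint.sum(axis=1, keepdims=True)
p_o = joint.sum(axis=0, keepdims=True)
nz = joint > 0
mi_nats = np.sum(joint[nz] * np.log(joint[nz] / (p_s @ p_o)[nz]))

# G-test via scipy's power-divergence family (lambda_="log-likelihood" gives G)
g_stat, p_value, dof, _ = chi2_contingency(counts, lambda_="log-likelihood",
                                            correction=False)

print(f"2*N*I = {2 * N * mi_nats:.4f}, G = {g_stat:.4f}, p-value = {p_value:.4f}")
```

A small per-decision bias thus translates, through the factor N, into an arbitrarily significant test statistic once the algorithm is applied at scale.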
We further reason about situations where absolute parity of an algorithm may be prohibitively at odds with the utility of its vendor or its users. We generalize our fairness definitions to include various utilitarian constraints, with a particular focus on discriminatory practices that are considered acceptable because of genuine business necessity requirements. We describe a framework, mirroring legal practice, that allows businesses to differentiate between users based on genuine task-specific qualification levels, in order to safeguard the organization's well-being.
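
One way to make this idea concrete, shown below as an illustration rather than the thesis's exact formalization, is to measure bias only after conditioning on a genuinely task-related qualification attribute Q, e.g. via the conditional mutual information I(S;O | Q). The qualification levels and counts here are hypothetical:

```python
import numpy as np

def mutual_information(counts):
    """Plug-in mutual information (nats) of a 2-D contingency table."""
    joint = counts / counts.sum()
    p_row = joint.sum(axis=1, keepdims=True)
    p_col = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log(joint[nz] / (p_row @ p_col)[nz]))

# counts[q][s, o]: hypothetical counts per qualification level q;
# within each level, both groups receive the same rate of positive decisions,
# but group S = 1 is more often qualified (150 vs. 50 users).
counts_per_q = {
    "qualified":   np.array([[10, 40], [30, 120]]),   # 80% positive for both groups
    "unqualified": np.array([[135, 15], [45, 5]]),    # 10% positive for both groups
}

total = sum(c.sum() for c in counts_per_q.values())
# I(S; O | Q) = sum_q P(Q = q) * I(S; O | Q = q)
cond_mi = sum((c.sum() / total) * mutual_information(c) for c in counts_per_q.values())

# The unconditional bias can be large even when the conditional bias vanishes
pooled = sum(counts_per_q.values())
print(f"I(S;O) = {mutual_information(pooled):.4f} nats, I(S;O|Q) = {cond_mi:.4f} nats")
```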
Finally, we consider practical issues related to the detection of algorithmic biases from empirical data, and propose a generic methodology, relying on cluster analysis techniques and robust statistical testing, to reason about discrimination in different subsets of the user population. We evaluate our methods on small artificial datasets, as well as on the Berkeley Graduate Admissions and Adult Census datasets, and illustrate how our techniques can either discover discrimination hidden in particular user subsets, or reveal potential business-necessary requirements that may account for an observed algorithmic bias.
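
A rough sketch of one possible instantiation of such a pipeline is given below. The data, the choice of k-means, the number of clusters, and the Bonferroni correction are all assumptions for illustration and need not match the thesis's exact methodology; the idea is simply to cluster users on their non-sensitive features and run a per-cluster test of independence between the sensitive feature and the decision:

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 5))              # hypothetical non-sensitive user features
s = rng.integers(0, 2, size=n)           # hypothetical binary sensitive feature
# Hypothetical decisions: biased against s == 1, but only for users with X[:, 0] > 1
p = np.where((X[:, 0] > 1) & (s == 1), 0.2, 0.5)
y = rng.random(n) < p

k = 8
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

alpha = 0.05
for c in range(k):
    mask = clusters == c
    # 2x2 contingency table of sensitive feature vs. decision within the cluster
    table = np.array([[np.sum((s[mask] == a) & (y[mask] == b)) for b in (0, 1)]
                      for a in (0, 1)])
    if (table.sum(axis=1) == 0).any() or (table.sum(axis=0) == 0).any():
        continue                          # skip degenerate clusters
    g, p_val, _, _ = chi2_contingency(table, lambda_="log-likelihood",
                                       correction=False)
    flag = "biased?" if p_val < alpha / k else ""   # Bonferroni-corrected threshold
    print(f"cluster {c}: n={mask.sum():4d}, G={g:7.2f}, p={p_val:.3g} {flag}")
```

On data like this, a bias confined to one region of the feature space can be invisible in the aggregate test yet show up clearly in the clusters that cover that region.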


BibTeX
@mastersthesis{Tra15,
  author   =   {Tram{\`e}r, Florian},
  title   =   {Algorithmic Fairness Revisited},
  year   =   {2015},
  school   =   {EPFL},
  url   =   {https://floriantramer.com/docs/papers/masterthesis.pdf}
}