November 12, 2019 (13:30 | SR 9): CT-Talk with Gilles Tredan

on "The Bouncer Problem: Challenges to Remote Explainability?"

I will present some work revolving around the question of algorithmic transparency. These works share a very similar system model, in which we study whether the user of a software black box can learn information about the internals of this black box through sequences of requests. I will then focus on explainability in this context.

The concept of explainability is envisioned to satisfy society’s demands for transparency on machine learning decisions. The concept is simple: like humans, algorithms should explain the rationale behind their decisions so that their fairness can be assessed. While this approach is promising in a local context (e.g., to explain a model during debugging at training time), we argue that this reasoning cannot simply be transposed to a remote context, where a model trained by a service provider is only accessible through its API. This is problematic, as the remote setting is precisely the target use case requiring transparency from a societal perspective. Through an analogy with a club bouncer (who may provide untruthful explanations when rejecting a customer), we show that providing explanations cannot prevent a remote service from lying about the true reasons leading to its decisions.
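The bouncer analogy can be sketched in a few lines of Python. This is a hypothetical illustration, not the implementation from the talk: a decision function applies a hidden discriminatory rule, while the explanation returned to the customer cites an innocuous reason.

```python
# Minimal sketch of the lying bouncer (hypothetical illustration):
# the decision uses a hidden discriminatory rule, but the explanation
# handed back to the customer never mentions it.

def decide(customer):
    # True (discriminatory) rule, never revealed to the customer.
    return not customer["protected_group"]

def bouncer(customer):
    """Return (admitted, explanation), as a remote API would."""
    if decide(customer):
        return True, "welcome in"
    # Untruthful explanation hiding the true reason for the rejection.
    return False, "sorry, the club is at full capacity"

admitted, why = bouncer({"protected_group": True})
```

From a single (decision, explanation) pair, the rejected customer has no way to tell that the stated reason is not the true one.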

More precisely, we prove the impossibility of remote explainability for single explanations by constructing an attack on explanations that hides discriminatory features from the querying user. We provide an example implementation of this attack. We then show that the probability that an observer spots the attack, using several explanations to search for incoherences, is low in practical settings. This undermines the very concept of remote explainability in general.
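The detection argument can be illustrated with a toy experiment. This is a hypothetical sketch under simplifying assumptions (not the paper's implementation): the provider decides with a discriminatory model but explains its decisions via an innocuous surrogate model, and because the two models agree on correlated data for most inputs, an observer querying at random rarely hits an input whose decision contradicts the stated explanation.

```python
import random

random.seed(0)

def true_model(x):
    # Hidden discriminatory rule: decide on the protected feature.
    return x["protected"] == 0

def surrogate_model(x):
    # Innocuous surrogate, used only to justify decisions via income.
    return x["income"] > 0.5

def remote_api(x):
    # The provider answers with the true model but explains with the
    # surrogate's (untruthful) rule.
    return true_model(x), "admitted iff income > 0.5"

def sample():
    # Correlated data: the protected group tends to have lower income,
    # so the surrogate agrees with the true model on most inputs.
    p = random.randint(0, 1)
    income = random.uniform(0.0, 0.55) if p else random.uniform(0.45, 1.0)
    return {"protected": p, "income": income}

queries = [sample() for _ in range(10_000)]
# The observer can only expose the lie on inputs where the decision
# contradicts the explanation's stated rule:
caught = sum(remote_api(x)[0] != surrogate_model(x) for x in queries)
print(f"fraction of queries exposing the attack: {caught / len(queries):.3f}")
```

With this feature correlation, fewer than one query in ten reveals an incoherence, so a budget-limited observer is unlikely to catch the provider; the exact rate depends on the assumed data distribution.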

Short bio: Gilles has been a CNRS researcher since 2011. He enjoys algorithms and graph problems, applied to networking and algorithmic transparency.