About Fair EVA

Voice technologies should work equally for all users, irrespective of their demographic attributes.

Motivation


Voice assistants (VAs) are a prominent AI technology in our everyday lives: they are embedded in smartphones and smart speakers, incorporated in cars and home appliances, and are becoming the backbone of call centres that not only resolve customer queries but also deliver financial and government services. It is thus important that voice assistants work equally well for all users, irrespective of their demographic attributes. If this is the case, we can consider the voice assistant to be fair.


In practice, voice assistants cannot be assumed to be fair: the human voice is shaped by our demographic attributes, our social context and our economic circumstances. Moreover, the performance of the deep learning technologies that drive most modern voice assistants is known to be sensitive to precisely these differences between humans. This makes it necessary to evaluate the fairness of voice assistant technologies, and in particular of their deep learning components.


Currently, the fairness challenges associated with voice technologies have received only limited attention, and tools for auditing the fairness of voice assistants are scarce. With Fair EVA we aim to address this gap.

Project Goals for 2022


Fair EVA targets the fairness of speaker verification, a form of voice-based biometric authentication that is widely used in voice assistants to confirm a speaker's identity. Speaker verification systems are often deployed in sensitive domains: financial services, proof-of-life verification of pensioners, and voice-based interfaces in smart speakers for health and elderly care. While bias in facial recognition is well researched and understood, speaker verification has received little attention.
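
To make the audit target concrete: at its core, a speaker verification system compares a voice embedding extracted from a test utterance against the speaker's enrolled voiceprint, and accepts the claimed identity only if the two are similar enough. The sketch below illustrates this decision step; the cosine-similarity comparison, the threshold value and the stand-in embeddings are illustrative assumptions, not the behaviour of any particular deployed system.

```python
import numpy as np

def verify(enrolled_embedding: np.ndarray,
           test_embedding: np.ndarray,
           threshold: float = 0.7) -> bool:
    """Accept the claimed identity only if the test utterance's embedding is
    close enough to the enrolled voiceprint. Cosine similarity and the 0.7
    threshold are illustrative choices, not the settings of a real system."""
    similarity = float(np.dot(enrolled_embedding, test_embedding) /
                       (np.linalg.norm(enrolled_embedding) *
                        np.linalg.norm(test_embedding)))
    return similarity >= threshold

# Stand-in embeddings for illustration only; a real system would obtain them
# by running a trained speaker-encoder model on audio recordings.
rng = np.random.default_rng(0)
enrolled, test = rng.normal(size=192), rng.normal(size=192)
print(verify(enrolled, test))
```

Whether this decision works equally well for everyone depends on how the underlying embeddings and the chosen threshold behave across different groups of speakers, and that is what a fairness audit has to examine.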


With the Fair EVA project we aim to achieve two goals:

  1. Release an open-source library that developers can use to test for bias in speaker verification models during product development (a minimal sketch of such a test follows this list)

  2. Increase public understanding and awareness of voice biometrics
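
As an illustration of what such a bias test could look like (a generic sketch, not the Fair EVA API), the snippet below compares a model's equal error rate (EER), a standard speaker verification metric, across demographic subgroups. The trial labels, verification scores and subgroup tags are assumed to come from the developer's own evaluation set.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """EER: the error rate at the operating point where the false accept
    rate and the false reject rate are (approximately) equal."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = int(np.nanargmin(np.abs(fnr - fpr)))
    return float((fpr[idx] + fnr[idx]) / 2)

def eer_by_group(labels: np.ndarray, scores: np.ndarray,
                 groups: np.ndarray) -> dict:
    """Compute the EER separately for each demographic subgroup.

    labels: 1 for same-speaker trials, 0 for different-speaker trials.
    scores: the verification scores the model assigned to those trials.
    groups: a subgroup tag for each trial (e.g. speaker gender or accent).
    """
    return {g: equal_error_rate(labels[groups == g], scores[groups == g])
            for g in np.unique(groups)}
```

A model whose EER is, say, twice as high for one subgroup as for another is not working equally well for all of its users, and that is exactly the kind of disparity an internal fairness audit should surface before a product ships.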

Fair EVA aims to raise awareness of fairness challenges in voice technologies.

Intended Impact

The project has three contributor and user groups: speaker verification researchers and developers, technology activists, and voice assistant technology users.

For researchers and developers, the intended impact of the project is the uptake of internal fairness audits during speaker verification development. This uptake may be directly linked to Fair EVA's open source software, in which case package downloads and installs can serve as a proxy for reach into the research and developer community. We also want to see fairness evaluations included in speaker verification challenges, and fairer datasets used as evaluation benchmarks. Here the impact of Fair EVA could be measured directly by the challenges that use our evaluation tool and datasets, and indirectly by those that build their own but acknowledge ours as inspiration or a starting point.


For technology activists, the intended impact is that sufficient knowledge resources are available to investigate the fairness of existing commercial and public speaker verification deployments. Similarly, for voice assistant technology users, the intended impact is that sufficient knowledge resources are available to understand how their voice is used in voice technologies, what risks this presents to their privacy, how the technology ought to behave when they use it, and how to identify when a system is biased against them. Web traffic to the database and video will be a way of measuring the reach of the content we produce, which indirectly indicates potential impact. More directly, we hope to see more public scrutiny of, and a growing number of societal investigations into, fairness in speaker verification systems.