About Fair EVA

Voice technologies should work equally well for all users, irrespective of their demographic attributes.

Project Purpose

Voice technologies have become part of modern life. They are integrated into every smartphone, drive smart speakers, automate call centres, inform forensic investigations, and enable hands-free interaction with products and services.


We believe that voice technologies should work reliably for all users, and are concerned about the potential for bias and discrimination presented by the unchecked use of data and AI in their development.
This is why we launched the Fair EVA project.

Motivation


Voice technologies are a prominent AI technology in our everyday lives: they are embedded in smartphones and smart speakers, incorporated into cars and home appliances, and are becoming the backbone of call centres that not only resolve customer queries but also deliver financial and government services. Voice technologies should work equally well for all users, regardless of their age, gender, origin or ethnicity.


In practice, this is not the case. Our voices are shaped by our bodies and are sensitive to where we come from, our social context and our economic circumstances. Yet the performance of the deep learning and AI systems that drive most modern voice technologies is known to be sensitive to these differences between people. This makes it necessary to evaluate the fairness of voice technologies.


Voice technologies are often hidden from view. Their fairness challenges are not well known, and tools for auditing bias in them are scarce. With Fair EVA we aim to address this gap.

Fair EVA aims to raise awareness of fairness challenges in voice technologies.

Project Goals


Fair EVA targets the fairness of speaker verification, a voice-based biometric technology that is widely used in voice assistants to recognise who is speaking. Speaker verification systems are often deployed in sensitive domains: financial services, proof-of-life verification of pensioners, and voice-based interfaces in smart speakers for health and elderly care. While bias in facial recognition is well researched and understood, bias in speaker verification has received little attention.


With the Fair EVA project we aim to achieve two goals:

  1. Release an open-source library for developers to test bias in speaker verification models during product development (see the sketch after this list)

  2. Increase public understanding and awareness of voice biometrics
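
To make the first goal concrete, here is a minimal sketch of the kind of bias check such a library could support: computing a verification error rate (here, the equal error rate) separately for each demographic subgroup and comparing the results. The function names and the synthetic trial data below are illustrative assumptions only and do not reflect the actual Fair EVA API.

import numpy as np

def equal_error_rate(scores, labels):
    """Approximate the equal error rate (EER) of a verification system.

    scores: similarity score for each trial (higher means more likely the same speaker)
    labels: 1 for genuine (same-speaker) trials, 0 for impostor trials
    """
    genuine = scores[labels == 1]
    impostor = scores[labels == 0]
    best_eer, best_gap = 1.0, float("inf")
    for threshold in np.unique(scores):
        far = np.mean(impostor >= threshold)  # false accept rate at this threshold
        frr = np.mean(genuine < threshold)    # false reject rate at this threshold
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

def eer_by_group(scores, labels, groups):
    """Compute the EER separately for each demographic subgroup."""
    return {g: equal_error_rate(scores[groups == g], labels[groups == g])
            for g in np.unique(groups)}

# Illustrative synthetic data: scores for one subgroup are noisier,
# so its EER comes out higher than the other's.
rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, n)
groups = rng.choice(["group_a", "group_b"], n)
noise = np.where(groups == "group_b", 0.8, 0.4)
scores = labels + rng.normal(0.0, noise, n)
print(eer_by_group(scores, labels, groups))

A large gap between subgroup error rates signals that a model is less reliable for some users than for others, which is exactly the kind of disparity a fairness audit should surface before deployment.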

Intended Impact

The project has three contributor and user groups: speaker verification researchers and developers, technology activists, and voice assistant technology users.

For researchers and developers, the intended impact of the project is the uptake of internal fairness audits during speaker verification development. This uptake may be directly linked to Fair EVA's open-source software, in which case package downloads and installs can serve as a proxy for reach into the research and developer community. We also want to see fairness evaluations included in speaker verification challenges, and fairer datasets used as evaluation benchmarks. In that case the impact of Fair EVA could be measured directly by the challenges that use our evaluation tool and datasets, and indirectly by those that build their own but acknowledge inspiration from ours or use our work as a starting point.


For technology activists, the intended impact is that sufficient knowledge resources are available to investigate the fairness of existing commercial and public speaker verification deployments. Similarly, for voice assistant technology users the intended impact is that sufficient knowledge resources are available to understand how their voice is used in voice technologies, what risks this presents to their privacy, how the technology ought to behave when they use it, and how to identify when a system is biased against them. Web traffic to the database and video will be one way of measuring the reach of the content we produce, which indirectly indicates potential impact. More directly, we hope to see greater public scrutiny of, and a growing number of societal investigations into, the fairness of speaker verification systems.