About Fair EVA
Voice technologies should work equally for all users, irrespective of their demographic attributes.
Voice technologies have become part of modern life. They are integrated into every smartphone, drive smart speakers, automate call centers, inform forensic investigations, and enable hands-free interactions with products and services.
We believe that voice technologies should work reliably for all users, and are concerned about the potential for bias and discrimination presented by the unchecked use of data and AI in their development.
This is why we launched the Fair EVA project.
Fair EVA aims to raise awareness of fairness challenges in voice technologies.
The project has three contributor and user groups: speaker verification researchers and developers, technology activists, and voice assistant users.
For researchers and developers, the intended impact of the project is the uptake of internal fairness audits during speaker verification development. This uptake may be linked directly to Fair EVA's open source software, in which case package downloads and installs can serve as a proxy for reach into the research and developer community. We also want to see fairness evaluations included in speaker verification challenges, and fairer datasets used as evaluation benchmarks. Here the impact of Fair EVA can be measured directly by the challenges that use our evaluation tool and datasets, and indirectly by those that build their own but acknowledge ours as inspiration or a starting point.
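To illustrate what an internal fairness audit of a speaker verification system might involve, the following is a minimal sketch in Python. It is not the Fair EVA tool itself; the function name, data layout, and threshold are illustrative assumptions. The idea is simply that verification scores are broken down by demographic group, and error rates are compared across groups:

```python
# Hypothetical sketch of a per-group fairness audit for speaker
# verification scores. Names and data are illustrative, not the
# actual Fair EVA API.

def group_error_rates(trials, threshold):
    """Compute false acceptance/rejection rates per demographic group.

    trials: list of (score, is_same_speaker, group) tuples, where a
    higher score means the system is more confident the two
    utterances come from the same speaker.
    Returns {group: (far, frr)}.
    """
    rates = {}
    for g in {grp for _, _, grp in trials}:
        sub = [(s, same) for s, same, grp in trials if grp == g]
        impostor = [s for s, same in sub if not same]
        genuine = [s for s, same in sub if same]
        # False acceptance: impostor trial scored above the threshold.
        far = sum(s >= threshold for s in impostor) / len(impostor)
        # False rejection: genuine trial scored below the threshold.
        frr = sum(s < threshold for s in genuine) / len(genuine)
        rates[g] = (far, frr)
    return rates

# Toy scores for two demographic groups "A" and "B".
trials = [
    (0.9, True, "A"), (0.8, True, "A"), (0.3, False, "A"), (0.2, False, "A"),
    (0.7, True, "B"), (0.4, True, "B"), (0.6, False, "B"), (0.1, False, "B"),
]
rates = group_error_rates(trials, threshold=0.5)
# A large gap in error rates between groups signals a fairness problem.
far_gap = max(f for f, _ in rates.values()) - min(f for f, _ in rates.values())
```

In this toy data, group B has higher false acceptance and false rejection rates than group A at the same threshold, which is exactly the kind of disparity an audit of this sort is meant to surface before deployment.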