Fairness EVAluation of Voice Technologies
Fair EVA is an open-source project that gathers resources and builds tools to help researchers, developers, technology activists, and voice technology users evaluate and audit bias and discrimination in voice technologies.
Enabled through the Mozilla Technology Fund Award!
We are proud to be part of Mozilla's first cohort of projects supported with a Mozilla Technology Fund Award.
News and Updates
The bt4vt library has been released!
After a year of hard work, bt4vt (bias tests for voice tech), a Python library, is ready with bias evaluations for speaker verification. Check it out: https://github.com/wiebket/bt4vt
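To give a flavour of the kind of analysis such a library automates, here is a minimal sketch of a subgroup bias evaluation for speaker verification: computing false accept and false reject rates per demographic group at a shared decision threshold. The scores, group names, and helper function below are illustrative assumptions, not bt4vt's actual API or data.

```python
# Illustrative sketch (NOT the bt4vt API): comparing speaker-verification
# error rates across demographic subgroups at a single global threshold.

def error_rates(scores, labels, threshold):
    """False accept rate (FAR) and false reject rate (FRR) at a threshold.

    labels: 1 = genuine trial (same speaker), 0 = impostor trial.
    """
    fa = sum(1 for s, l in zip(scores, labels) if l == 0 and s >= threshold)
    fr = sum(1 for s, l in zip(scores, labels) if l == 1 and s < threshold)
    impostors = sum(1 for l in labels if l == 0)
    genuines = sum(1 for l in labels if l == 1)
    return fa / impostors, fr / genuines

# Hypothetical verification scores (higher = more likely the same speaker).
trials = {
    "group_a": ([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]),
    "group_b": ([0.7, 0.4, 0.6, 0.1], [1, 1, 0, 0]),
}

threshold = 0.5  # one threshold applied to all groups
for group, (scores, labels) in trials.items():
    far, frr = error_rates(scores, labels, threshold)
    print(f"{group}: FAR={far:.2f} FRR={frr:.2f}")
# → group_a: FAR=0.00 FRR=0.00
# → group_b: FAR=0.50 FRR=0.50
```

A gap like the one above, where one group sees far more errors than another under the same threshold, is exactly the kind of disparity a bias audit aims to surface.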
Preprint coming soon...
Keep an eye out for our longitudinal study on speaker recognition dataset dynamics.
INTERSPEECH '22: Design Guidelines for Speaker Verification Evaluation
We have published a technical analysis of design considerations for robust and inclusive speaker verification evaluation. Read it here: Design Guidelines for Inclusive Speaker Verification Evaluation Datasets
The Proposed EU AI Act and The Case of Biometrics
Fair EVA project lead, Wiebke Hutiri, shares her perspectives on voice biometrics and the proposed EU AI Act in the Mozilla Blog. Read it here: The Proposed EU AI Act and The Case of Biometrics
Presentation to the European Association on Biometrics (EAB)
The EAB invited us to present our work on voice recognition at the Artificial Intelligence Act Workshop 2022.
We'll be at ACM FAccT 2022 in Korea
The research that forms the foundation of the Fair EVA project will be presented at the ACM FAccT 2022 conference. Take a look:
Bias in Automated Speaker Recognition
Fair EVA Featured on the Mozilla Blog
Read more about Fair EVA and why we exist in this blog post: Access Denied! This Doesn’t Sound Like You
Seen at virtual MozFest 2022
We hosted a session on Fair Voice: What happens when your voice becomes your ID? at MozFest 2022, with panelists Halsey Burgund, Johann Diedrick, Kathleen Siminyu and Wiebke Toussaint Hutiri.