Estimating Group Fairness in Speech Recognition

Affiliation: MARG
Presenter: 김응범
Time: II / 10:10~10:25
Subject: Fairness in Automatic Speech Recognition

Abstract

Recently, the importance of algorithmic fairness has been emphasized in machine learning. In automatic speech recognition (ASR), however, fairness has been measured incorrectly because ASR differs from the traditional classification problem. In this work, we address two remaining challenges in defining a fairness metric for ASR and aim to estimate that metric in ASR systems. The first challenge originates from the large text space of ASR: conditional-independence-based metrics such as equalized odds or disparate treatment require paired speech with identical text from different groups, but such pairing is not a common property of ASR datasets. Instead, we estimate fairness using a generative model and show that the estimation error is bounded. Second, we introduce a fairness metric for continuous attributes, so that continuous human attributes can be handled in ASR. We believe this is the first theoretical work on a fairness metric for ASR systems.
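To make the paired-text requirement concrete, the sketch below (not the talk's actual method) computes an equalized-odds-style group gap for ASR: the difference in mean word error rate (WER) between two groups, evaluated on utterances that share the same reference text. The group labels and example transcripts are hypothetical, chosen only to illustrate why matched text across groups is needed before such a gap is well-defined.

```python
# Illustrative sketch, not the paper's method: an equalized-odds-style
# fairness gap for ASR, computed as |mean WER(group A) - mean WER(group B)|
# over utterances whose reference texts are paired across groups.
# Group labels ("A", "B") and the toy transcripts are hypothetical.

def wer(ref: str, hyp: str) -> float:
    """Word error rate via Levenshtein distance over word tokens."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

def group_gap(samples):
    """samples: list of (group, reference, hypothesis) triples.
    Assumes exactly two groups with text paired across them."""
    by_group = {}
    for g, ref, hyp in samples:
        by_group.setdefault(g, []).append(wer(ref, hyp))
    means = [sum(v) / len(v) for v in by_group.values()]
    return abs(means[0] - means[1])

# Toy paired data: both groups read the same two reference sentences.
data = [
    ("A", "hello world", "hello world"),    # WER 0.0
    ("A", "open the door", "open a door"),  # WER 1/3
    ("B", "hello world", "hello word"),     # WER 0.5
    ("B", "open the door", "open the door"),# WER 0.0
]
print(round(group_gap(data), 3))  # gap between per-group mean WERs
```

Note that the gap is only meaningful here because each reference sentence appears in both groups; with real ASR corpora, where speakers read different texts, this pairing assumption fails, which is exactly the gap the abstract's generative-model estimate is meant to close.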