Membership Inference for Speech Recognition
Legal regulations such as the GDPR govern the processing and storage of personal information of individuals in the European Union and are an important step toward protecting private data. However, compliance with these regulations must also be verifiable. While there is a large body of research on the vulnerability of automatic speech recognition (ASR) systems to attacks such as adversarial examples, attacks that compromise the confidentiality of sensitive information in the training data of an ASR system have so far been neglected. One example of such a violation is a membership inference attack, which aims to extract information about a machine learning model’s training data and may thus conflict with regulations like the GDPR.
This work aims to investigate whether it is possible to retrieve sensitive information about an ASR system’s training data. To this end, we first want to answer the question of whether it is possible to probe specific recordings and tell whether they have been used to train a black-box ASR system. To extend the attack, we further investigate whether a specific speaker is part of the training data set, using voice fingerprints derived from samples of that speaker. Moreover, we examine the effects of different assumptions about the shadow training data set, e.g., whether parts of the target’s training data are known, or whether attributes of the speakers, such as demographic information, are revealed.
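The core idea behind such an attack can be illustrated with a minimal, self-contained sketch. The snippet below is not the method proposed in this work; it is a toy loss-threshold membership inference attack on a deliberately overfit logistic-regression "target model" trained on random labels. All names (`per_sample_loss`, the threshold `tau`, the data dimensions) are illustrative assumptions. In a realistic setting the attacker would calibrate the threshold on shadow models trained on a shadow data set, and the target would be a black-box ASR system queried through its outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 30  # feature dimension > number of training points, to force memorization

# "Members": training points with random labels; the target model memorizes them.
X_mem = rng.normal(size=(n, d))
y_mem = rng.integers(0, 2, size=n).astype(float)
# "Non-members": drawn from the same distribution, never seen in training.
X_non = rng.normal(size=(n, d))
y_non = rng.integers(0, 2, size=n).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Overfit a logistic-regression "target model" on the member set via gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X_mem @ w)
    w -= 0.5 * X_mem.T @ (p - y_mem) / n

def per_sample_loss(X, y):
    """Cross-entropy loss of the target model on each sample."""
    p = np.clip(sigmoid(X @ w), 1e-9, 1.0 - 1e-9)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

loss_mem = per_sample_loss(X_mem, y_mem)
loss_non = per_sample_loss(X_non, y_non)

# Attack: predict "member" when the loss falls below a threshold.
# Here the threshold is simply the midpoint of the two mean losses;
# a real attacker would calibrate it on shadow models instead.
tau = 0.5 * (loss_mem.mean() + loss_non.mean())
acc = 0.5 * ((loss_mem < tau).mean() + (loss_non >= tau).mean())
print(f"balanced attack accuracy: {acc:.2f}")
```

Because the model memorizes its training points, member samples incur a much lower loss than unseen samples, and even this crude threshold separates the two groups well above chance. The speaker-level variant sketched above would aggregate such scores over several utterances of one speaker rather than deciding per recording.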