Speech Self-Supervised Representations Benchmarks: Are we Doing it Right?
Published
Conference of the International Speech Communication Association (INTERSPEECH)
Abstract
Self-supervised learning (SSL) has recently made it possible to leverage large datasets of unlabeled speech signals to reach low downstream error rates with only small amounts of annotated data. The growing number of proposed approaches has fostered the need for, and the rise of, extended benchmarks that evaluate their performance on a set of downstream tasks probing various aspects of the speech signal. However, while the number of considered tasks has been growing, most benchmarks rely on a single decoding architecture that maps the frozen SSL representations to the downstream labels. In this work, we investigate the robustness of benchmarking results to changes in the decoder architecture. Interestingly, we find that, except for speech recognition, changing the downstream decoder leads to significant variations in the leaderboards of all the considered tasks. More concerning, our study reveals that benchmarking with tiny decoders may cause a counterproductive increase in the sizes of the developed SSL models.
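To make the benchmarking setup described above concrete, here is a minimal sketch, not taken from the paper, of what "mapping frozen SSL representations to downstream labels" looks like in PyTorch. The feature dimensions, the `BiLSTMDecoder` class, and the random features standing in for a pretrained encoder's output are all illustrative assumptions; the point is only that the decoder placed on top of the frozen representations can be a tiny linear probe or a heavier architecture, and swapping one for the other is the variation studied in this work.

```python
import torch
import torch.nn as nn

# Hypothetical frozen SSL features: (batch, time, feature_dim).
# In practice these would come from a pretrained encoder (e.g. wav2vec 2.0 or HuBERT)
# whose weights stay frozen, so gradients never flow into the SSL model.
batch, time, feat_dim, n_classes = 4, 200, 768, 10
frozen_features = torch.randn(batch, time, feat_dim)

# Decoder A: a tiny linear probe, the kind of minimal head often used in benchmarks.
linear_probe = nn.Linear(feat_dim, n_classes)

# Decoder B: a larger recurrent decoder; replacing A with B is the kind of
# downstream-architecture change whose effect on leaderboards is investigated here.
class BiLSTMDecoder(nn.Module):
    def __init__(self, feat_dim: int, hidden: int, n_classes: int):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)
        return self.out(h)

bilstm_decoder = BiLSTMDecoder(feat_dim, hidden=256, n_classes=n_classes)

# Frame-level logits from each decoder; only the decoder parameters would be trained.
logits_linear = linear_probe(frozen_features)    # (batch, time, n_classes)
logits_bilstm = bilstm_decoder(frozen_features)  # (batch, time, n_classes)
print(logits_linear.shape, logits_bilstm.shape)
```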