Textually Pretrained Speech Language Models

Michael Hassid1,2,*, Tal Remez1,*, Tu Anh Nguyen1, Itai Gat1, Alexis Conneau3, Felix Kreuk1, Jade Copet1,
Alexandre Defossez1, Gabriel Synnaeve1, Emmanuel Dupoux1, Roy Schwartz2,*, Yossi Adi1,2,*

1FAIR, Meta AI

2The Hebrew University of Jerusalem

3OpenAI

*Core contributors

[paper] [code] [bib]

Abstract:


Speech language models (SpeechLMs) process and generate acoustic data only, without textual supervision. In this work, we propose TWIST, a method for training SpeechLMs using a warm-start from a pretrained textual language model. We show, using both automatic and human evaluations, that TWIST outperforms a cold-start SpeechLM across the board. We empirically analyze the effect of different model design choices such as the speech tokenizer, the pretrained textual model, and the dataset size. We find that both model and dataset scale play an important role in constructing better-performing SpeechLMs. Based on these observations, we present the largest (to the best of our knowledge) SpeechLM in terms of both number of parameters and training data. We additionally introduce two spoken versions of the StoryCloze textual benchmark to further improve model evaluation and advance future research in the field.
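The core idea of TWIST — initializing a SpeechLM from a pretrained text LM, then training it on discrete speech tokens — can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the model sizes, vocabulary sizes, and the `TinyLM`/`twist_warm_start` names are all hypothetical, and a toy randomly initialized model stands in for a real pretrained text LM.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a TWIST-style warm-start: reuse the transformer
# blocks of a pretrained *text* LM, but swap its text vocabulary for a
# (much smaller) discrete speech-unit vocabulary produced by a speech
# tokenizer. All names and sizes below are hypothetical.

TEXT_VOCAB = 32000    # assumed text vocabulary size
SPEECH_VOCAB = 500    # assumed number of discrete speech units
D_MODEL = 512

class TinyLM(nn.Module):
    """A toy language model: embedding -> transformer -> output head."""
    def __init__(self, vocab_size: int, d_model: int = D_MODEL):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(self.embed(tokens)))

def twist_warm_start(text_lm: TinyLM, speech_vocab: int) -> TinyLM:
    """Build a SpeechLM whose transformer weights come from the text LM.

    The embedding table and LM head are freshly initialized for speech
    units, since speech tokens have no correspondence to text tokens;
    only the transformer backbone is inherited.
    """
    speech_lm = TinyLM(speech_vocab)
    speech_lm.backbone.load_state_dict(text_lm.backbone.state_dict())
    return speech_lm

text_lm = TinyLM(TEXT_VOCAB)          # stands in for a pretrained text LM
speech_lm = twist_warm_start(text_lm, SPEECH_VOCab if False else SPEECH_VOCAB)
units = torch.randint(0, SPEECH_VOCAB, (1, 16))  # dummy speech-token prompt
logits = speech_lm(units)
print(tuple(logits.shape))  # (1, 16, 500): next-unit logits per position
```

The contrast with the paper's "cold init" baseline is the single `load_state_dict` call: a cold-start model would skip it and train all weights from random initialization.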

Natural speech examples:

[Audio samples: speech continuations generated from a natural speech prompt, comparing Cold Init.-1.3B, TWIST-1.3B, and TWIST-7B.]

LibriLight:

[Audio samples: prompt, generated continuation, and ground truth, resynthesized from speech tokens using a vocoder.]

LibriSpeech:

[Audio samples: prompt, generated continuation, and ground truth, resynthesized from speech tokens using a vocoder.]

BibTex:

    @article{hassid2024textually,
        title={Textually pretrained speech language models},
        author={Hassid, Michael and Remez, Tal and Nguyen, Tu Anh and Gat, Itai and Conneau, Alexis and Kreuk, Felix and Copet, Jade and Defossez, Alexandre and Synnaeve, Gabriel and Dupoux, Emmanuel and Schwartz, Roy and Adi, Yossi},
        journal={Advances in Neural Information Processing Systems},
        volume={36},
        year={2024}
    }