Slamming: Training a Speech Language Model on One GPU in a Day


Abstract

We introduce Slam, a recipe for training high-quality Speech Language Models (SLMs) on a single academic GPU in 24 hours. We do so through empirical analysis of model initialisation and architecture, synthetic training data, preference optimisation with synthetic data, and careful tuning of all other components. We empirically demonstrate that the resulting training recipe scales up with more compute, achieving results on par with leading SLMs at a fraction of the compute cost. We hope these insights will make SLM training and research more accessible. In the context of SLM scaling laws, our results far outperform the predicted compute-optimal performance, giving an optimistic view of SLM feasibility. We open-source our code, data, and models.
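
As a rough illustration of the kind of setup the recipe builds on, the sketch below shows TWIST-style initialisation: a pretrained text decoder LM is reused as the starting point for a speech LM over discrete HuBERT units and trained with standard next-token prediction. The base model name, unit vocabulary size, and batch shapes are illustrative assumptions, not the exact configuration from the paper.

    # Sketch: TWIST-style initialisation of a unit-based speech LM (assumed values noted inline).
    import torch
    from transformers import AutoModelForCausalLM

    N_UNITS = 500    # number of HuBERT k-means clusters (assumed value)
    N_SPECIAL = 2    # e.g. BOS / PAD tokens for unit sequences (assumed)

    # Start from a pretrained text LM so the speech LM inherits its weights.
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # illustrative base model

    # Swap the text vocabulary for a speech-unit vocabulary.
    model.resize_token_embeddings(N_UNITS + N_SPECIAL)

    # Standard next-token prediction over a (dummy) batch of unit IDs.
    units = torch.randint(0, N_UNITS, (2, 128))
    loss = model(input_ids=units, labels=units).loss
    loss.backward()
    print(f"next-unit prediction loss: {loss.item():.3f}")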

Generation Examples

We compare different textless speech language models and test their generation capabilities by completing a prefix prompt. We compare Slam and Slam (scaled) with the official TWIST-7B on in-the-wild samples. Note that all models use the same single-speaker vocoder and the same HuBERT tokeniser, so the voice changes in re-synthesis. We indicate the number of GPU days used to train each model.
[Audio comparison table: Prompt · Slam (1×A5000) · Slam (scaled) (4×A100) · TWIST-7B (160×V100)]
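
A minimal sketch of how such prefix-prompted completions might be produced, assuming the prompt audio has already been encoded into HuBERT unit IDs; the generated units would then be passed to the single-speaker vocoder (omitted here) to synthesise the waveform. The model path and sampling parameters are placeholders, not the released checkpoints' exact settings.

    # Sketch: prefix-prompted continuation with a unit-based SLM (placeholder checkpoint path).
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("path/to/speech-lm")  # hypothetical path
    model.eval()

    prompt_units = torch.tensor([[17, 203, 41, 388, 99, 12]])  # illustrative HuBERT unit IDs

    with torch.no_grad():
        out = model.generate(
            prompt_units,
            max_new_tokens=200,
            do_sample=True,
            top_k=50,
            temperature=0.8,
        )

    # `out` holds prompt + generated units; the continuation alone would be vocoded for playback.
    new_units = out[:, prompt_units.shape[1]:]
    print(new_units.shape)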

BibTeX


@misc{maimon2025slamming,
      title={Slamming: Training a Speech Language Model on One GPU in a Day}, 
      author={Gallil Maimon and Avishai Elmakies and Yossi Adi},
      year={2025},
      eprint={2502.15814},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2502.15814}, 
}