%0 Journal Article
%T ProtMamba: a homology-aware but alignment-free protein state space model
%V 41
%N 6
%P btaf348
%* https://creativecommons.org/licenses/by/4.0/
%U https://academic.oup.com/bioinformatics/article/doi/10.1093/bioinformatics/btaf348/8161314
%X Motivation: Protein language models are enabling advances in elucidating the sequence-to-function mapping, and have important applications in protein design. Models based on multiple sequence alignments efficiently capture the evolutionary information in homologous protein sequences, but multiple sequence alignment construction is imperfect. Results: We present ProtMamba, a homology-aware but alignment-free protein language model based on the Mamba architecture. In contrast with attention-based models, ProtMamba efficiently handles very long context, comprising hundreds of protein sequences. It is also computationally efficient. We train ProtMamba on a large dataset of concatenated homologous sequences, using two GPUs. We combine autoregressive modeling and masked language modeling through a fill-in-the-middle training objective. This makes the model adapted to various protein design applications. We demonstrate ProtMamba's usefulness for sequence generation, motif inpainting, fitness prediction, and modeling intrinsically disordered regions. For homolog-conditioned sequence generation, ProtMamba outperforms state-of-the-art models. ProtMamba's competitive performance, despite its relatively small size, sheds light on the importance of long-context conditioning. Availability and implementation: A Python implementation of ProtMamba is freely available in our GitHub repository: https://github.com/Bitbol-Lab/ProtMamba-ssm and archived at https://doi.org/10.5281/zenodo.15584634.
%G en
%J Bioinformatics
%A Sgarbossa, Damiano
%A Malbranke, Cyril
%A Bitbol, Anne-Florence
%E Cheng, Jianlin
%D 2025-06-02