Apertus: Democratizing Open and Compliant LLMs for Global Language Environments
Alejandro Hernández-Cano, Alexander Hägele, Allen Hao Huang, Angelika Romanou, Antoni-Joan Solergibert, Barna Pasztor, Bettina Messmer, Dhia Garbaya, Eduard Frank Ďurech, Ido Hakimi, Juan García Giraldo, Mete Ismayilzada, Negar Foroutan, Skander Moalla, Tiancheng Chen, Vinko Sabolčec, Yixuan Xu, Michael Aerni, Badr AlKhamissi, Inés Altemir Mariñas, Mohammad Hossein Amani, Matin Ansaripour, Ilia Badanin, Harold Benoit, Emanuela Boros, Nicholas Browning, Fabian Bösch, Maximilian Böther, Niklas Canova, Camille Challier, Clement Charmillot, Jonathan Coles, Jan Deriu, Arnout Devos, Lukas Drescher, Daniil Dzenhaliou, Maud Ehrmann, Dongyang Fan, Simin Fan, Silin Gao, Miguel Gila, María Grandury, Diba Hashemi, Alexander Hoyle, Jiaming Jiang, Mark Klein, Andrei Kucharavy, Anastasiia Kucherenko, Frederike Lübeck, Roman Machacek, Theofilos Manitaras, Andreas Marfurt, Kyle Matoba, Simon Matrenok, Henrique Mendonça, Fawzi Roberto Mohamed, Syrielle Montariol, Luca Mouchel, Sven Najem-Meyer, Jingwei Ni, Gennaro Oliva, Matteo Pagliardini, Elia Palme, Andrei Panferov, Léo Paoletti, Marco Passerini, Ivan Pavlov, Auguste Poiroux, Kaustubh Ponkshe, Nathan Ranchin, Javi Rando, Mathieu Sauser, Jakhongir Saydaliev, Muhammad Ali Sayfiddinov, Marian Schneider, Stefano Schuppli, Marco Scialanga, Andrei Semenov, Kumar Shridhar, Raghav Singhal, Anna Sotnikova, Alexander Sternfeld, Ayush Kumar Tarun, Paul Teiletche, Jannis Vamvas, Xiaozhe Yao, Hao Zhao, Alexander Ilic, Ana Klimovic, Andreas Krause, Caglar Gulcehre, David Rosenthal, Elliott Ash, Florian Tramèr, Joost VandeVondele, Livio Veraldi, Martin Rajman, Thomas Schulthess, Torsten Hoefler, Antoine Bosselut, Martin Jaggi and Imanol Schlag
Annual Meeting of the Association for Computational Linguistics (ACL) 2026
We present Apertus, a fully open suite of large language models (LLMs) designed to address two systemic shortcomings in today’s open model ecosystem: data compliance and multilingual representation. Unlike many prior models that release weights without reproducible data pipelines or regard for content-owner rights, Apertus models are pretrained exclusively on openly available data, retroactively respecting robots.txt exclusions and filtering for non-permissive, toxic, and personally identifiable content. To mitigate risks of memorization, we adopt the Goldfish objective during pretraining, strongly suppressing verbatim recall of data while retaining downstream task performance. The Apertus models also expand multilingual coverage, training on 15T tokens from over 1800 languages, with 40% of pretraining data allocated to non-English content. Released at 8B and 70B scales, Apertus approaches state-of-the-art results among fully open models on multilingual benchmarks, rivalling or surpassing open-weight counterparts. Beyond model weights, we release all scientific artifacts from our development cycle with a permissive license, including data preparation scripts, checkpoints, evaluation suites, and training code, enabling transparent audit and extension.
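For readers unfamiliar with the Goldfish objective mentioned in the abstract: it excludes a pseudorandom subset of token positions from the next-token prediction loss, so no training sequence is ever supervised in full and the model cannot learn to reproduce it verbatim. Below is a minimal PyTorch sketch of the idea; the function name goldfish_cross_entropy, the seed-based mask, and the drop rate k are illustrative assumptions rather than the paper's implementation, which (following the published goldfish loss) derives the mask from a hash of the local token context so that duplicated documents drop the same positions.

    import torch
    import torch.nn.functional as F

    def goldfish_cross_entropy(logits, targets, k=4, seed=0):
        """Next-token loss with roughly 1/k of target positions dropped.

        logits:  (batch, seq_len, vocab) model outputs
        targets: (batch, seq_len) next-token labels
        """
        batch, seq_len, vocab = logits.shape
        # Per-position cross-entropy, no reduction yet.
        per_token = F.cross_entropy(
            logits.reshape(-1, vocab), targets.reshape(-1), reduction="none"
        ).reshape(batch, seq_len)
        # Illustrative pseudorandom mask: keep a position with prob (1 - 1/k).
        # The actual goldfish loss hashes the preceding token context instead,
        # so duplicate documents mask identical positions.
        gen = torch.Generator().manual_seed(seed)
        keep = (torch.rand(batch, seq_len, generator=gen) >= 1.0 / k).float()
        # Average over supervised positions only; dropped tokens contribute
        # no gradient, so no sequence is ever memorized end to end.
        return (per_token * keep).sum() / keep.sum().clamp(min=1.0)

Because only the loss mask changes, the technique adds essentially no training cost, consistent with the abstract's claim that downstream task performance is retained.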
@inproceedings{HHHR+26,
  author       = {Hern{\'a}ndez-Cano, Alejandro and H{\"a}gele, Alexander and Huang, Allen Hao and Romanou, Angelika and Solergibert, Antoni-Joan and Pasztor, Barna and Messmer, Bettina and Garbaya, Dhia and {\v D}urech, Eduard Frank and Hakimi, Ido and Giraldo, Juan Garc{\'\i}a and Ismayilzada, Mete and Foroutan, Negar and Moalla, Skander and Chen, Tiancheng and Sabol{\v c}ec, Vinko and Xu, Yixuan and Aerni, Michael and AlKhamissi, Badr and Mari{\~n}as, In{\'e}s Altemir and Amani, Mohammad Hossein and Ansaripour, Matin and Badanin, Ilia and Benoit, Harold and Boros, Emanuela and Browning, Nicholas and B{\"o}sch, Fabian and B{\"o}ther, Maximilian and Canova, Niklas and Challier, Camille and Charmillot, Clement and Coles, Jonathan and Deriu, Jan and Devos, Arnout and Drescher, Lukas and Dzenhaliou, Daniil and Ehrmann, Maud and Fan, Dongyang and Fan, Simin and Gao, Silin and Gila, Miguel and Grandury, Mar{\'\i}a and Hashemi, Diba and Hoyle, Alexander and Jiang, Jiaming and Klein, Mark and Kucharavy, Andrei and Kucherenko, Anastasiia and L{\"u}beck, Frederike and Machacek, Roman and Manitaras, Theofilos and Marfurt, Andreas and Matoba, Kyle and Matrenok, Simon and Mendon{\c c}a, Henrique and Mohamed, Fawzi Roberto and Montariol, Syrielle and Mouchel, Luca and Najem-Meyer, Sven and Ni, Jingwei and Oliva, Gennaro and Pagliardini, Matteo and Palme, Elia and Panferov, Andrei and Paoletti, L{\'e}o and Passerini, Marco and Pavlov, Ivan and Poiroux, Auguste and Ponkshe, Kaustubh and Ranchin, Nathan and Rando, Javi and Sauser, Mathieu and Saydaliev, Jakhongir and Sayfiddinov, Muhammad Ali and Schneider, Marian and Schuppli, Stefano and Scialanga, Marco and Semenov, Andrei and Shridhar, Kumar and Singhal, Raghav and Sotnikova, Anna and Sternfeld, Alexander and Tarun, Ayush Kumar and Teiletche, Paul and Vamvas, Jannis and Yao, Xiaozhe and Zhao, Hao and Ilic, Alexander and Klimovic, Ana and Krause, Andreas and Gulcehre, Caglar and Rosenthal, David and Ash, Elliott and Tram{\`e}r, Florian and VandeVondele, Joost and Veraldi, Livio and Rajman, Martin and Schulthess, Thomas and Hoefler, Torsten and Bosselut, Antoine and Jaggi, Martin and Schlag, Imanol},
  title        = {{Apertus: Democratizing Open and Compliant LLMs for Global Language Environments}},
  booktitle    = {Annual Meeting of the Association for Computational Linguistics (ACL)},
  year         = {2026},
  howpublished = {arXiv preprint arXiv:2509.14233},
  url          = {https://arxiv.org/abs/2509.14233}
}