Preprint, Working paper, AO, Distributed, Parallel, and Cluster Computing, Computer Science

Optimizing Distributed Training on Frontier for Large Language Models

Sajal Dash, Isaac Lyngaas, Junqi Yin, Xiao Wang, Romain Egele, et al. Optimizing Distributed Training on Frontier for Large Language Models. 2024. ⟨hal-04393799⟩
