Why is the storage cluster used to train Llama 3 so slow? (fbcdn.net)
2 points by 1a1a11a on July 31, 2024 | 1 comment


"Tectonic (Pan et al., 2021), Meta’s general-purpose distributed file system, is used to build a storage fabric (Battey and Gupta, 2024) for Llama 3 pre-training. It offers 240 PB of storage out of 7,500 servers equipped with SSDs, and supports a sustainable throughput of 2 TB/s and a peak throughput of 7 TB/s"

I would expect at least tens, if not hundreds, of TB/s from a cluster of 7,500 servers with SSDs. What is the bottleneck?
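As a rough sanity check on the quoted figures (the server count and throughput numbers come from the quote above; the per-SSD bandwidth comparison is an assumption about typical NVMe drives):

```python
# Back-of-the-envelope check of the quoted Tectonic cluster numbers.
# 7,500 servers, 2 TB/s sustained, 7 TB/s peak are from the quote;
# the NVMe comparison below is an illustrative assumption.

SERVERS = 7_500
SUSTAINED_TB_PER_S = 2.0
PEAK_TB_PER_S = 7.0

per_server_sustained = SUSTAINED_TB_PER_S * 1_000 / SERVERS  # GB/s per server
per_server_peak = PEAK_TB_PER_S * 1_000 / SERVERS

print(f"sustained per server: {per_server_sustained:.3f} GB/s")
print(f"peak per server:      {per_server_peak:.3f} GB/s")

# A single modern NVMe SSD can sustain several GB/s of sequential reads,
# so ~0.27 GB/s per server is far below raw SSD bandwidth. The limit is
# presumably elsewhere: the network fabric, replication/erasure-coding
# overhead, metadata, or a deliberate provisioning choice.
```

At roughly 0.27 GB/s sustained per server, the cluster delivers a small fraction of what its SSDs alone could supply, which is what prompts the question.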



