A Globally Distributed AI Training Milestone

In May 2025, the Prime Intellect research team unveiled INTELLECT-2, a 32-billion-parameter large language model (LLM) trained not on a single data-center supercomputer but on a globally distributed "swarm" of volunteer GPUs. This makes INTELLECT-2 the first LLM of its scale to be trained via fully asynchronous…