A Globally Distributed AI Training Milestone

In May 2025, the Prime Intellect research team unveiled INTELLECT-2, a 32-billion-parameter large language model (LLM) that was trained not on a single data-center supercomputer, but on a globally distributed “swarm” of volunteer GPUs. This makes INTELLECT-2 the first LLM of its scale to be trained via fully asynchronous…
Apple is weighing a dramatic shift away from its “build-everything-in-house” doctrine by licensing large-language models (LLMs) from Anthropic and OpenAI to power a rebuilt Siri—an about-face that underscores both the urgency of Apple’s AI catch-up plan and the scale of its recent missteps. Bloomberg first broke the story, and follow-on reports reveal protracted delays to…