Open Source News 18 June 2025 - 11 November 2025

Meta’s Chief AI Scientist Yann LeCun to Exit and Launch New AI Startup — What It Means for Meta, Open‑Source AI, and the Race to AGI (Nov. 11, 2025)

Meta’s longtime chief AI scientist, Yann LeCun, is planning to leave the company to found a new artificial‑intelligence startup, according to reports published today. The move, which comes amid a sweeping reorganization of Meta’s AI efforts, would mark the most significant leadership change in Meta’s research ranks since the creation of its FAIR lab a decade ago. Few researchers have shaped modern AI as profoundly as LeCun: alongside Geoffrey Hinton and Yoshua Bengio, he helped propel deep learning from an academic curiosity to the engine behind image recognition, speech, and …
Mistral AI’s Meteoric Rise: Inside the $14B Open-Source Gambit Challenging OpenAI

The Making of an Open-Source AI Juggernaut

Founded in Paris in spring 2023, Mistral AI burst onto the scene with a bold ambition: to challenge the likes of OpenAI and Google by doing things differently. Its three co-founders, Arthur Mensch (CEO, ex-DeepMind), Timothée Lacroix (CTO, ex-Meta AI), and Guillaume Lample (Chief Science Officer, ex-Meta), left Big Tech research jobs to build foundation models on their own terms. The company’s very name (a “mistral” is a strong wind in Southern France) hints at its mission to blow fresh air into AI development. Mensch and team saw that “proprietary [AI] was becoming …
14 September 2025
RISC-V vs ARM vs x86: The 2025 Silicon Architecture Showdown

RISC-V’s Edge & IoT Takeover: Billions of Open-Source Cores Drive 2025 Tech Revolution

RISC-V Shipments Soar in Edge & IoT Devices

Once a niche academic project, RISC-V is now shipping in billions of devices, especially at the edge. RISC-V International reports the ISA is implemented in over 13 billion cores on the market as of end-2023, a stunning rise driven largely by IoT and embedded use cases. In fact, some estimates suggest over 10 billion of those cores were shipped by 2023, primarily for IoT sensors, microcontrollers, storage controllers, and wireless chips. The momentum is accelerating: according to the CEO of RISC-V International, “the ecosystem is expanding rapidly… over two …
28 August 2025
The Open-Source GPT Revolution: How Free LLMs Are Reshaping AI

March 2021: EleutherAI released GPT-Neo (2.7 billion parameters), the first free open alternative to GPT-3, under the MIT license. June 2021: EleutherAI released GPT-J (6 billion parameters) under the Apache 2.0 license, giving startups a foundation for building on open GPT models. May 2022: Meta AI released OPT (175 billion parameters) under a non-commercial research license, including a logbook documenting the training process. July 2022: The BigScience project released BLOOM (176 billion parameters), covering 46 languages, under the Responsible AI License. February 2023: Meta AI unveiled LLaMA in 7B, 13B, 33B, and 65B sizes, and the weights leaked …
8 August 2025
32B AI Model Trained by a Swarm of Volunteer GPUs – Inside INTELLECT-2’s Decentralized Revolution

In May 2025, Prime Intellect unveiled INTELLECT-2, a 32B-parameter LLM trained on a globally distributed swarm of volunteer GPUs. INTELLECT-2 is the first model of its scale trained via fully asynchronous reinforcement learning across hundreds of heterogeneous, permissionless machines. PRIME-RL coordinates the training loop by separating experience generation on inference workers from policy updates on trainer nodes, eliminating traditional synchronization bottlenecks. SHARDCAST distributes the 32B model weights to hundreds of volunteer GPUs via tree-based, pipelined transfers, accelerating weight propagation. TOPLOC provides a probabilistic verification fingerprint based on locality-sensitive hashing to detect tampering or incorrect results from untrusted nodes.
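The asynchronous split described above, experience generation decoupled from policy updates, can be sketched in miniature. This is a hypothetical simplification (the queue, worker, and trainer names are invented for illustration, not Prime Intellect's actual PRIME-RL code): inference workers push rollouts into a shared buffer at their own pace, while the trainer applies updates from whatever experience has already arrived, so no global synchronization barrier is needed.

```python
import queue
import random
import threading

# Shared experience buffer: workers produce into it, the trainer consumes.
experience_q = queue.Queue()

def inference_worker(worker_id, n_rollouts):
    # Each worker generates rollouts at its own pace, modeling
    # heterogeneous volunteer hardware that never waits on peers.
    for _ in range(n_rollouts):
        rollout = {"worker": worker_id, "reward": random.random()}
        experience_q.put(rollout)

def trainer(n_updates, batch_size=4):
    # The trainer pulls whatever experience has arrived and applies a
    # policy update, without a barrier across all workers.
    policy_version = 0
    for _ in range(n_updates):
        batch = [experience_q.get() for _ in range(batch_size)]
        policy_version += 1  # stand-in for a real gradient step
    return policy_version

# Four workers producing 8 rollouts each (32 total) feed a trainer that
# performs 8 updates of batch size 4 (consuming all 32).
workers = [threading.Thread(target=inference_worker, args=(i, 8)) for i in range(4)]
for w in workers:
    w.start()
version = trainer(n_updates=8)
for w in workers:
    w.join()
print(version)  # 8 policy updates applied asynchronously
```

In the real system the buffer spans the network rather than a process, and weight broadcast (SHARDCAST) plus result verification (TOPLOC) sit between the two halves, but the core design choice is the same: neither side blocks on the other.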
AI Shake-Up: Pentagon’s $200M Contracts, Meta’s Mega Investment, and an Open-Source AI Outsmarting GPT-4

The U.S. Department of Defense awarded contracts worth up to $200 million each to four AI firms: OpenAI, Google, Anthropic, and xAI. The contracts fund development of agentic AI systems to tackle critical national security challenges, according to DoD official Doug Matty. Musk’s Grok chatbot will be adapted for government use as part of a new “Grok for Government” suite under the DoD deal. President Trump announced plans for $70 billion in AI and energy investments, to be unveiled on July 14 at a Pittsburgh innovation summit. The plan includes funding for new data centers and power-grid upgrades to support …
Everything You Need to Know About Google Gemini CLI: Features, News, and Expert Insights

Google’s Gemini CLI Just Dropped—Here’s Why This Free, Open‑Source AI Agent Could Replace Your Favorite Coding Tool

Gemini CLI is a command‑line AI agent that passes natural-language prompts to Gemini 2.5 Pro and returns structured responses, code, or multimedia directly in the terminal. Gemini 2.5 Pro supports a 1,000,000-token context window, roughly 50–100× larger than mainstream LLMs, enabling repository-scale reasoning. The CLI integrates with Veo for video, Imagen for images, and Google Search via the Model Context Protocol (MCP) to handle multimodal tasks. During the preview, individual users can issue up to 60 requests per minute and 1,000 per day. Google released the entire Gemini CLI codebase under the Apache 2.0 license on GitHub, inviting pull requests and forks. Taylor Mullen, …
Inside DeepSeek AI: The Chinese Foundation Model Powerhouse Revolutionizing Open-Source AI in 2025

DeepSeek AI was founded in 2023 as a spin-off from Hangzhou‑based High-Flyer Capital Management, following a March 2023 announcement that the fund would devote resources to AGI research. High-Flyer built two private AI clusters before export restrictions: one with 1,100 NVIDIA A100 GPUs in 2020, and a roughly 10,000-A100 cluster announced in 2021 at a cost of about ¥1 billion that came online by 2022. In May 2024, DeepSeek‑V2 was released as a 236‑billion‑parameter Mixture-of-Experts model that ranked in the top three on AlignBench and was described as rivaling GPT‑4‑Turbo at low cost. Liang Wenfeng, born in 1985, is DeepSeek’s founder and CEO, owning about …
Why iOS Isn’t Open Source: The Secrets Behind Apple’s Walled Garden

Apple introduced the iPhone in 2007, originally calling the OS “iPhone OS” and presenting it as running OS X with desktop-class applications, emphasizing hardware–software integration. The App Store launched in 2008, with every native app required to use Apple’s SDK and pass Apple’s review against its guidelines. Apple released the Darwin core under the Apple Public Source License, and in 2017 posted the ARM64 iOS kernel source publicly, while higher‑level iOS remains closed. In 2015 Apple open-sourced the Swift programming language under an Apache license. iOS is tightly integrated with Apple hardware, including the Neural Engine and Secure Enclave, enabling optimizations …
18 June 2025