AI Hardware News 5 August 2025 - 28 August 2025

Chiplets & Advanced Packaging Market Report 2025: AI Demand Fuels 2.5D/3D Integration Boom

Key Facts — Introduction: Chiplets and 3D Integration Come of Age. Semiconductor design is undergoing a paradigm shift from giant monolithic chips toward chiplet-based and multi-die architectures. In a chiplet approach, a processor is disaggregated into multiple smaller dies (chiplets) that are integrated in a single package, rather than fabricated as one large die. This strategy improves manufacturing yield and cost — smaller dies are easier to produce without defects — and allows mixing different process nodes and functions in one package. As traditional 2D scaling hits physical and economic limits, chiplets offer a practical path to keep improving performance and …
Self-Driving Supercomputer Showdown: NVIDIA Drive Thor vs Tesla FSD Hardware 4 vs Qualcomm Snapdragon Ride Flex

NVIDIA Drive Thor delivers up to 1,000 TOPS (INT8 sparse) on a single chip and can be paired via NVLink-C2C for about 2,000 TOPS for high-end Level 4–5 autonomy. Thor combines a Blackwell GPU with 2,560 CUDA cores and 96 Tensor Cores and a 14-core Arm Neoverse V3 CPU complex on a 4nm TSMC process, and supports up to 128 GB of LPDDR5X memory. Thor is designed as a centralized automotive computer unifying autonomous driving, ADAS, digital cockpit, and infotainment on one platform. Production of Drive Thor is slated to begin in 2025, with design wins at Li Auto, ZEEKR, BYD, Hyper …
NVIDIA Blackwell B200 vs AMD MI350 vs Google TPU v6e – 2025’s Ultimate AI Accelerator Showdown

NVIDIA’s Blackwell B200 features 180 GB of HBM3e memory per GPU with up to 8 TB/s bandwidth, 18 PFLOPS FP4 tensor throughput, 9 PFLOPS FP8, and 4.5 PFLOPS FP16, plus a second-generation Transformer Engine. NVIDIA claims DGX B200 delivers about 3× the training performance and 15× the inference performance of DGX H100 in end-to-end workflows. Google’s TPU v6e, codenamed Trillium, delivers 918 TFLOPS BF16 per chip, 1.836 PFLOPS INT8, 32 GB of HBM per chip, and 1.6 TB/s bandwidth per chip, with a 256-chip pod delivering about 234.9 PFLOPS BF16. AMD Instinct MI350X/MI355X offer 288 GB of HBM3e, up to …
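The pod-level TPU v6e figure quoted above can be sanity-checked from the per-chip number. A minimal sketch, assuming simple linear scaling across the 256-chip pod (no interconnect or efficiency overhead, which real deployments would incur):

```python
# Sanity check of the TPU v6e (Trillium) pod-level BF16 figure.
# Assumption: pod throughput = per-chip throughput x chip count (ideal scaling).
per_chip_tflops_bf16 = 918   # BF16 TFLOPS per TPU v6e chip (from the article)
chips_per_pod = 256          # chips in a full v6e pod (from the article)

pod_pflops_bf16 = per_chip_tflops_bf16 * chips_per_pod / 1000  # TFLOPS -> PFLOPS
print(f"{pod_pflops_bf16:.1f} PFLOPS BF16 per pod")  # ~235, in line with the ~234.9 quoted
```

The small gap between the computed ~235 PFLOPS and the quoted 234.9 PFLOPS is consistent with rounding in the published per-chip figure.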