Tags: Broadcom

Anthropic Mulls Custom AI Chip Design as Claude Revenue Tops $30 B Run Rate

The Next Web
San Francisco‑based Anthropic is weighing the development of its own artificial‑intelligence chips, according to three sources familiar with the effort. The move comes as the company’s annualized revenue run rate for its Claude models surged past $30 billion, up from roughly $9 billion at the end of 2025. Anthropic currently runs workloads on a mix of Google TPUs (built with Broadcom), Amazon custom silicon, and Nvidia GPUs, and has just secured a long‑term deal for 3.5 gigawatts of TPU capacity beginning in 2027. The firm has not yet formed a dedicated chip team and may continue buying off‑the‑shelf silicon.

Anthropic expands compute partnership with Google and Broadcom to boost Claude AI

TechCrunch
Anthropic announced a new agreement with Google Cloud and Broadcom that will add roughly 3.5 gigawatts of compute capacity, primarily in the United States, to power its Claude models. The expansion builds on a 2025 deal and is slated to be operational by 2027, reflecting surging demand from enterprise customers despite recent concerns raised by the U.S. Defense Department. CFO Krishna Rao called the move the company’s “most significant compute commitment to date.”

Anthropic Secures 3.5 GW of Google TPU Capacity via Broadcom, Revenue Run Rate Tops $30 B

The Next Web
Anthropic announced on April 6 that it will tap roughly 3.5 gigawatts of next‑generation Google Tensor Processing Unit (TPU) compute through Broadcom starting in 2027, adding to the 1 GW already supplied for 2026. The move backs the AI lab’s $50 billion pledge to expand U.S. AI infrastructure and comes as the company reports a revenue run rate exceeding $30 billion, more than triple its figure at the end of 2025. Broadcom’s role as the silicon‑to‑workload bridge and the scale of the deal underscore the accelerating compute arms race among AI firms.