Amazon is starting to talk about a part of its AI stack that has so far lived mostly behind the scenes. In his April 2026 shareholder letter, CEO Andy Jassy revealed that the company's internal semiconductor unit - which designs Graviton CPUs, Trainium AI accelerators and Nitro networking chips for AWS - is already running at more than $20 billion in annualized revenue and growing at triple-digit rates, and that, sold more broadly as a standalone product line, it could plausibly support around $50 billion a year. Until now, that silicon has been offered primarily as a service baked into AWS instances. But Jassy said Amazon is actively considering selling full chip racks to external customers, a move that would turn AWS from a closed ecosystem into a direct supplier of AI compute to enterprises that do not necessarily want to move everything into Amazon's cloud.

At the same time, Amazon is finally putting hard numbers on AI revenue inside AWS itself. Jassy disclosed that AI services in the cloud unit crossed a $15 billion annual run-rate in the first quarter of 2026, while overall AWS revenue is tracking at roughly $142 billion a year after reaccelerating in late 2025. To feed that demand, Amazon plans record capital expenditures of about $200 billion in 2026, largely on AI infrastructure, but argues this is not a blind bet: management points to a swelling AWS backlog and long-term customer commitments that already cover a substantial share of that budget, including what market reports describe as more than $100 billion of contracted capacity tied to OpenAI and other frontier-model customers.
Amazon's chip division: $20 billion today, $50 billion in potential
According to Jassy's letter, Amazon's $AMZN chip business is running at annualized revenue of over $20 billion, double the roughly $10 billion the company reported as recently as the fourth quarter results of the previous year. The division includes three key products: Graviton CPUs for general computing tasks, Trainium AI accelerators for model training and inference, and Nitro network cards that improve the efficiency and security of AWS servers.
Jassy writes that demand for these chips is so high that it is quite possible Amazon will sell entire racks of them to third parties in the future. In the letter, he adds that if the chip division were to operate as a standalone company selling semiconductors to both AWS customers and external clients, its annual revenue run rate would be around $50 billion. That implicitly puts the chip unit in the league of the semiconductor market's big players - and also suggests that Amazon has a "hidden" business inside the company with the parameters of another megacap stock.
Today, this division exists exclusively inside AWS: customers access both Trainium and Graviton through EC2 instances, not by buying chips directly. Opening up direct sales would make Amazon a hybrid between a cloud provider and an AI hardware vendor, similar to what Google is doing with TPUs via Google Cloud, but with a potentially broader reach toward on-premises deployments at large customers.
Trainium3 sold out, Trainium4 reserved. AWS AI at $15 billion a year
A key driver of growth in the chip division is the Trainium generation. In December 2025, AWS announced the availability of Trainium3 UltraServers, which deliver up to 4.4x higher compute performance, four times the energy efficiency, and nearly four times the memory bandwidth versus Trainium2-based configurations. Trainium3 can scale up to 144 chips in a single system, with up to 362 FP8 PFLOPs of performance, allowing larger models to be trained faster and cheaper.
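As a quick back-of-the-envelope check on the figures above (a rough sketch: the 362 FP8 PFLOPs and 144-chip numbers come from AWS's announcement, and the per-chip throughput is simply derived by dividing one by the other, assuming the system-level figure covers a fully populated configuration):

```python
# Rough arithmetic on the Trainium3 UltraServer figures quoted above.
# Assumption: the 362 FP8 PFLOPs apply to a full 144-chip system.

TOTAL_FP8_PFLOPS = 362   # peak FP8 performance of one UltraServer system
CHIPS_PER_SYSTEM = 144   # maximum Trainium3 chips in a single system

per_chip_pflops = TOTAL_FP8_PFLOPS / CHIPS_PER_SYSTEM
print(f"~{per_chip_pflops:.2f} FP8 PFLOPs per Trainium3 chip")
```

That works out to roughly 2.5 FP8 PFLOPs per chip, which gives a sense of the scale-out (rather than single-chip) nature of the performance claim.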
According to Jassy, Trainium3 is almost completely sold out, with customers that have moved their AI jobs to it including Uber $UBER. A significant portion of the next-generation Trainium4, which is roughly 18 months away from wide availability, is already booked by key AWS customers. This pre-booking demonstrates both the global shortage of high-end AI accelerators and the confidence of large clients in Amazon's roadmap.
At the same time, Amazon for the first time released a direct number for AWS's AI business: AI services reached an annual revenue run rate of over $15 billion in Q1 2026 and are "growing rapidly." By comparison, AWS's total revenue was around $128.7 billion in 2025, and the cloud division is tracking toward roughly $142 billion this year. Jassy notes that AWS's growth would be even faster if the entire industry were not running into infrastructure capacity constraints.
Two large customers, he says, have even asked for the option to buy all available Graviton chip capacity for 2026, which Amazon has declined in order to preserve CPU capacity for other users. This again confirms that demand for Amazon's own silicon exceeds current production capacity - and increases the attractiveness of a potential direct sale of the chips.
Duel with Nvidia in the $200 billion AI chip market
Amazon is entering the next phase of the battle over AI hardware at a time when Nvidia $NVDA dominates the market with around 85% share in AI accelerators and 60-75% in inference, thanks to a combination of GPUs and the CUDA software ecosystem. The AI chip market is estimated to exceed $200 billion by 2026, although Nvidia's share could gradually decline to around 75% due to the rise of hyperscalers' in-house chips and competition from AMD.
AMD holds about 7% share of the fast-growing AI market, with the rest coming from the proprietary chips of big cloud players such as Google's TPU, Microsoft's Maia project and Amazon's Trainium. In his letter, Jassy openly says that Amazon will continue to use Nvidia's chips, but customers "want a better price/performance ratio", which is exactly the area where Trainium is expected to deliver an advantage.
By considering selling racks with its own chips directly, Amazon is entering - at least in part - Nvidia and AMD territory. The difference is that Amazon can sell its own hardware "backed" by the AWS cloud and software, so customers get both on-premises performance and the ability to integrate easily with cloud services. If this model takes hold, it could erode some of the traditional GPU manufacturers' business while making large clients more directly dependent on hyperscalers.
$200 billion capex: "We don't invest on a hunch"
The big question for investors is how Amazon is funding such an expansive AI strategy. In the letter, Jassy reminds us that the company is planning around $200 billion in capital expenditures in 2026, the vast majority of which is going into AI infrastructure - data centers, chip manufacturing, and networking capabilities. Some in the market have been spooked by these sums, but Jassy assures that Amazon is "not investing on a hunch" and that much of this capex is covered by long-term customer commitments.
According to information leaked to the media, those commitments include, among other things, a more than $100 billion contract with OpenAI to use AWS to train and run its models. At the same time, Jassy has previously hinted that AI should help AWS reach up to $600 billion in annual revenue in the longer term, roughly double its previously communicated goal.
These numbers show that Amazon sees AI infrastructure as the next "backbone layer" of its business after e-commerce, logistics and traditional cloud. Combined with the chip division's potential to become a standalone business with $50 billion in annual sales, this is one of the most ambitious investment theses within the big five tech companies.
How the market is responding to Jassy's AI vision
Following the release of the letter to shareholders and a new set of numbers for the AI and chip businesses, Amazon shares rose roughly 1.5-3.5%, depending on the source and time of measurement. Investors particularly appreciated the transparency around the $20 billion figure for the chip division and $15 billion for AI services, which gives a more concrete outline of the "hidden" components of Amazon's valuation.
It's a signal to the market that Amazon doesn't just want to play the role of "infra add-on" for OpenAI and others in AI, but that it potentially contains another big semiconductor company and a standalone AI cloud giant at the same time. How quickly this story translates into actual profits and margins will depend on Amazon's ability to keep scaling adoption of its Trainium and Graviton chips and to open up direct sales to third parties, while managing its giant capex plan in a disciplined manner.