NVIDIA - Earnings Call - Q1 2026
May 28, 2025
Executive Summary
- NVIDIA delivered Q1 FY26 revenue of $44.1B, up 69% YoY and 12% QoQ; non-GAAP EPS was $0.81 and GAAP EPS $0.76, with gross margin depressed by a $4.5B H20 charge tied to new China export licensing requirements.
- Results beat Wall Street consensus on revenue ($44.06B actual vs $43.25B estimate*) and EPS ($0.81 non-GAAP actual vs $0.75 estimate*), while gross margin missed due to the H20 charge (60.5% GAAP vs 67.1% estimate*). Excluding the charge, non-GAAP GM would have been 71.3% and EPS $0.96 (see the arithmetic check after this list).
- Data Center revenue reached $39.1B (+73% YoY, +10% QoQ) on accelerating Blackwell ramp, with networking at $5.0B (+64% QoQ) and Gaming at a record $3.8B (+48% QoQ).
- Q2 FY26 guidance: revenue $45.0B ±2%, GAAP/non-GAAP GM ~71.8%/72.0%, GAAP/non-GAAP OpEx ~$5.7B/$4.0B, OI&E ~$450M, tax rate ~16.5%; outlook reflects ~$8B H20 revenue loss from export limits and the offsetting ramp in Blackwell.
Values retrieved from S&P Global*.
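As a back-of-envelope check on the ex-charge gross margin above (our reconstruction from the reported figures, not a company-provided bridge), adding the $4.5B charge back to the reported 61.0% non-GAAP gross margin on $44.06B of revenue gives

$$\text{GM}_{\text{ex-charge}} \approx \frac{0.610 \times \$44.06\text{B} + \$4.5\text{B}}{\$44.06\text{B}} \approx 71.2\%,$$

consistent with the 71.3% the company disclosed, with the small gap attributable to rounding.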
What Went Well and What Went Wrong
What Went Well
- Blackwell ramp and AI factory demand: “Global demand for NVIDIA’s AI infrastructure is incredibly strong… AI inference token generation has surged tenfold… and as AI agents become mainstream, the demand for AI computing will accelerate.” — Jensen Huang.
- Segment outperformance: Data Center $39.1B (+73% YoY, +10% QoQ) with compute $34.2B (+76% YoY, +5% QoQ) and networking $5.0B (+56% YoY, +64% QoQ); Gaming a record $3.8B (+42% YoY, +48% QoQ).
- Cash generation and returns: Operating cash flow $27.4B (Q1) and free cash flow $26.1B; returned $14.3B to shareholders via $14.1B buybacks and $244M dividends.
What Went Wrong
- H20 export-control shock: $4.5B charge on excess inventory and purchase obligations; unable to ship an additional $2.5B of H20 revenue in Q1, with ~$8B of lost revenue embedded in the Q2 outlook (see the tally after this list).
- Gross margin compression: GAAP GM fell to 60.5% (vs 73.0% prior quarter), primarily due to the H20 charge and initial ramp of more sophisticated systems in Data Center.
- China headwind: Data Center revenue from China decreased sequentially and is expected to be meaningfully lower in Q2; management has limited options to supply compliant products under revised rules.
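For context, the disclosed H20 figures roughly tally with the ~$15B total China impact referenced later in the Q&A (again our reconstruction from stated figures, not a company-provided bridge):

$$\underbrace{\$4.6\text{B}}_{\text{Q1 recognized}} + \underbrace{\$2.5\text{B}}_{\text{Q1 unshipped}} + \underbrace{\$8\text{B}}_{\text{Q2 foregone}} \approx \$15.1\text{B}$$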
Transcript
Operator (participant)
Good afternoon. My name is Sarah, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's First Quarter Fiscal 2026 Financial Results Conference Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. If you would like to ask a question during this time, simply press star one on your telephone keypad. If you would like to withdraw your question, please press star one again. Thank you. Toshiya Hari, you may begin your conference.
Toshiya Hari (Head of Investor Relations)
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2026. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2026. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially.
For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 28th, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
Colette Kress (EVP and CFO)
Thank you, Toshiya. We delivered another strong quarter with revenue of $44 billion, up 69% year-over-year, exceeding our outlook in what proved to be a challenging operating environment. Data center revenue of $39 billion grew 73% year-on-year. AI workloads have transitioned strongly to inference, and AI factory buildouts are driving significant revenue. Our customers' commitments are firm. On April 9, the U.S. government issued new export controls on H20, our data center GPU designed specifically for the China market. We sold H20 with the approval of the previous administration. Although our H20 has been in the market for over a year and does not have a market outside of China, the new export controls on H20 did not provide a grace period to allow us to sell through our inventory.
In Q1, we recognized $4.6 billion in H20 revenue, which occurred prior to April 9th, but also recognized a $4.5 billion charge as we wrote down inventory and purchase obligations tied to orders we had received prior to April 9th. We were unable to ship $2.5 billion in H20 revenue in the first quarter due to the new export controls. The $4.5 billion charge was less than what we initially anticipated as we were able to reuse certain materials. We are still evaluating our limited options to supply data center compute products compliant with the U.S. government's revised export control rules. Losing access to the China AI accelerator market, which we believe will grow to nearly $50 billion, would have a material adverse impact on our business going forward and benefit our foreign competitors in China and worldwide.
Our Blackwell ramp, the fastest in our company's history, drove a 73% year-on-year increase in data center revenue. Blackwell contributed nearly 70% of data center compute revenue in the quarter, with the transition from Hopper nearly complete. The introduction of GB200 NVL was a fundamental architectural change to enable data center-scale workloads and to achieve the lowest cost per inference token. While these systems are complex to build, we have seen a significant improvement in manufacturing yields, and rack shipments to end customers are ramping at strong rates. GB200 NVL racks are now generally available for model builders, enterprises, and sovereign customers to develop and deploy AI. On average, major hyperscalers are each deploying nearly 1,000 NVL72 racks, or 72,000 Blackwell GPUs, per week and are on track to further ramp output this quarter.
Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers. Key learnings from the GB200 ramp will allow for a smooth transition to the next phase of our product roadmap, Blackwell Ultra. Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter. GB300 will leverage the same architecture, same physical footprint, and the same electrical and mechanical specifications as GB200. The GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields. B300 GPUs with 50% more HBM will deliver another 50% increase in dense FP4 inference compute performance compared to the B200.
We remain committed to our annual product cadence, with our roadmap extending through 2028, tightly aligned with the multiple-year planning cycles of our customers. We are witnessing a sharp jump in inference demand. OpenAI, Microsoft, and Google are seeing a step-function leap in token generation. Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis. This exponential growth in Azure OpenAI is representative of strong demand for Azure AI Foundry, as well as other AI services across Microsoft's platform. Inference-serving startups are now serving models using B200, tripling their token generation rate and corresponding revenues for high-value reasoning models such as DeepSeek R1, as reported by Artificial Analysis. NVIDIA Dynamo on Blackwell NVL72 turbocharges AI inference throughput by 30x for the new reasoning models sweeping the industry.
Developer engagements increased with adoption ranging from LLM providers such as Perplexity to financial services institutions such as Capital One, who reduced agentic chatbot latency by 5x with Dynamo. In the latest MLPerf inference results, we submitted our first results using GB200 NVL72, delivering up to 30x higher inference throughput compared to our eight GPU H200 submission on the challenging Llama 3.1 benchmark. This feat was achieved through a combination of tripling the performance per GPU as well as 9x more GPUs, all connected on a single NVLink domain. While Blackwell is still early in its life cycle, software optimizations have already improved its performance by 1.5x in the last month alone. We expect to continue improving the performance of Blackwell through its operational life, as we have done with Hopper and Ampere.
For example, we increased the inference performance of Hopper by four times over two years. This is the benefit of NVIDIA's programmable CUDA architecture and rich ecosystem. The pace and scale of AI factory deployments are accelerating with nearly 100 NVIDIA-powered AI factories in flight this quarter, a twofold increase year-over-year, with the average number of GPUs powering each factory also doubling in the same period. More AI factory projects are starting across industries and geographies. NVIDIA's full-stack architecture is underpinning AI factory deployments by industry leaders like AT&T, BYD, Capital One, Foxconn, MediaTek, and Telenor, as well as by strategically vital sovereign clouds like those recently announced in Saudi Arabia, Taiwan, and the UAE. We have a line of sight to projects requiring tens of GW of NVIDIA AI infrastructure in the not-too-distant future.
The transition from generative to agentic AI, AI capable of perceiving, reasoning, planning, and acting, will transform every industry, every company, and country. We envision AI agents as a new digital workforce capable of handling tasks ranging from customer service to complex decision-making processes. We introduced the Llama Nemotron family of open reasoning models designed to supercharge agentic AI platforms for enterprises. Built on the Llama architecture, these models are available as NIMs or NVIDIA Inference Microservices with multiple sizes to meet diverse deployment needs. Our post-training enhancements have yielded a 20% accuracy boost and a 5x increase in inference speed. Leading platform companies, including Accenture, Cadence, Deloitte, and Microsoft, are transforming work with our reasoning models. NVIDIA NeMo Microservices are generally available across industries and are being leveraged by leading enterprises to build, optimize, and scale AI applications.
With NeMo, Cisco increased model accuracy by 40% and improved response time by 10x in its code assistant. Nasdaq realized a 30% improvement in accuracy and response time in its AI platform's search capabilities. Shell's custom LLM achieved a 30% increase in accuracy when trained with NVIDIA NeMo. NeMo's parallelism techniques accelerated model training time by 20% when compared to other frameworks. We also announced a partnership with Yum Brands, the world's largest restaurant company, to bring NVIDIA AI to 500 of its restaurants this year, expanding to 61,000 restaurants over time, to streamline order taking, optimize operations, and enhance service across its restaurants. For AI-powered cybersecurity, leading companies like Check Point, CrowdStrike, and Palo Alto Networks are using NVIDIA's AI security and software stack to build, optimize, and secure agentic workflows, with CrowdStrike realizing 2x faster detection triage with 50% less compute cost.
Moving to networking, sequential growth in networking resumed in Q1, with revenue up 64% quarter-over-quarter to $5 billion. Our customers continue to leverage our platform to efficiently scale up and scale out AI factory workloads. We created the world's fastest switch, NVLink. For scale-up, our NVLink compute fabric in its fifth generation offers 14x the bandwidth of PCIe Gen 5. NVLink 72 carries 130 TB per second of bandwidth in a single rack, equivalent to the entirety of the world's peak internet traffic. NVLink is a new growth vector and is off to a great start, with Q1 shipments exceeding $1 billion. At Computex, we announced NVLink Fusion. Hyperscale customers can now build semi-custom CPUs and accelerators that connect directly to the NVIDIA platform with NVLink.
We are now enabling key partners, including ASIC providers such as MediaTek, Marvell, Alchip Technologies, and Astera Labs, as well as CPU suppliers such as Fujitsu and Qualcomm, to leverage NVLink Fusion to connect our respective ecosystems. For scale-out, our enhanced Ethernet offerings deliver the highest throughput, lowest latency networking for AI. SpectrumX posted strong sequential and year-on-year growth and is now annualizing over $8 billion in revenue. Adoption is widespread across major CSPs and consumer internet companies, including CoreWeave, Microsoft Azure, Oracle Cloud, and xAI. This quarter, we added Google Cloud and Meta to the growing list of SpectrumX customers. We introduced SpectrumX and QuantumX silicon photonics switches featuring the world's most advanced co-packaged optics. These platforms will enable next-level AI factory scaling to millions of GPUs by improving power efficiency by 3.5x and network resiliency by 10x while accelerating customer time to market by 1.3x.
Transitioning to a quick summary of our revenue by geography. China, as a percentage of our data center revenue, was slightly below our expectations and down sequentially due to H20 export licensing controls. For Q2, we expect a meaningful decrease in China data center revenue. As a reminder, while Singapore represented nearly 20% of our Q1 billed revenue, many of our large customers use Singapore for centralized invoicing, and our products are almost always shipped elsewhere. Note that over 99% of H100, H200, and Blackwell data center compute revenue billed to Singapore was for orders from U.S.-based customers. Moving to gaming and AI PCs. Gaming revenue was a record $3.8 billion, increasing 48% sequentially and 42% year-on-year. Strong adoption by gamers, creatives, and AI enthusiasts has made Blackwell our fastest ramp ever.
Against a backdrop of robust demand, we greatly improved our supply and availability in Q1 and expect to continue these efforts in Q2. AI is transforming the PC for creators and gamers. With a 100 million user installed base, GeForce represents the largest footprint for PC developers. This quarter, we added to our AI PC laptop offerings, including models capable of running Microsoft's Copilot+. This past quarter, we brought the Blackwell architecture to mainstream gaming with the launch of the GeForce RTX 5060 and 5060 Ti, starting at just $299. The RTX 5060 also debuted in laptops, starting at $1,099. These systems double frame rates and slash latency. The GeForce RTX 5060 and 5060 Ti desktop GPUs and laptops are now available.
In console gaming, the recently unveiled Nintendo Switch 2 leverages NVIDIA's neural rendering and AI technologies, including next-generation custom RTX GPUs with DLSS technology, to deliver a giant leap in gaming performance to millions of players worldwide. Nintendo has shipped over 150 million Switch consoles to date, making it one of the most successful gaming systems in history. Moving to professional visualization. Revenue of $509 million was flat sequentially and up 19% year-on-year. Tariff-related uncertainty temporarily impacted Q1 systems demand; demand for our AI workstations remains strong, and we expect sequential revenue growth to resume in Q2. NVIDIA DGX Spark and Station revolutionize personal computing by putting the power of an AI supercomputer in a desktop form factor. DGX Spark delivers up to one petaflop of AI compute, while DGX Station offers an incredible 20 petaflops and is powered by the GB300 superchip.
DGX Spark will be available in calendar Q3 and DGX Station later this year. We have deepened Omniverse's integration and adoption into some of the world's leading software platforms, including Databricks, SAP, and Schneider Electric. New Omniverse blueprints such as Mega for at-scale robotic fleet management are being leveraged by Kion Group, Pegatron, Accenture, and other leading companies to enhance industrial operations. At Computex, we showcased Omniverse's great traction with technology manufacturing leaders, including TSMC, Quanta, Foxconn, and Pegatron. Using Omniverse, TSMC saves months of work by designing fabs virtually, Foxconn accelerates thermal simulations by 150x, and Pegatron reduced assembly line defect rates by 67%. Lastly, automotive revenue was $567 million, down 1% sequentially but up 72% year-on-year. Year-on-year growth was driven by the ramp of self-driving across a number of customers and robust end demand for NEVs.
We are partnering with GM to build the next-gen vehicles, factories, and robots using NVIDIA AI simulation and accelerated computing. We are now in production with our full-stack solution for Mercedes-Benz, starting with the new CLA, hitting roads in the next few months. We announced Isaac GR00T N1, the world's first open, fully customizable foundation model for humanoid robots, enabling generalized reasoning and skill development. We also launched new open NVIDIA Cosmos world foundation models. Leading companies including 1X, Agility Robotics, Figure AI, Uber, and Waabi have begun integrating Cosmos into their operations for synthetic data generation, while Agility Robotics, Boston Dynamics, and XPeng Robotics are harnessing Isaac simulation to advance their humanoid efforts. GE Healthcare is using the new NVIDIA Isaac for Healthcare platform. Built on NVIDIA Omniverse and NVIDIA Cosmos, the platform speeds development of robotic imaging and surgery systems.
The era of robotics is here. Billions of robots, hundreds of millions of autonomous vehicles, and hundreds of thousands of robotic factories and warehouses will be developed. All right, moving to the rest of the P&L. GAAP gross margins and non-GAAP gross margins were 60.5% and 61%, respectively. Excluding the $4.5 billion charge, Q1 non-GAAP gross margins would have been 71.3%, slightly above our outlook at the beginning of the quarter. Sequentially, GAAP operating expenses were up 7%, and non-GAAP operating expenses were up 6%, reflecting higher compensation and employee growth. Our investments include expanding our infrastructure capabilities and AI solutions, and we plan to grow these investments throughout the fiscal year. In Q1, we returned a record $14.3 billion to shareholders in the form of share repurchases and cash dividends. Our capital return program continues to be a key element of our capital allocation strategy.
Let me turn to the outlook for the second quarter. Total revenue is expected to be $45 billion, plus or minus 2%. We expect modest sequential growth across all of our platforms. In data center, we anticipate the continued ramp of Blackwell to be partially offset by a decline in China revenue. Note, our outlook reflects a loss in H20 revenue of approximately $8 billion for the second quarter. GAAP and non-GAAP gross margins are expected to be 71.8% and 72%, respectively, plus or minus 50 basis points. We expect better Blackwell profitability to drive modest sequential improvement in gross margins. We are continuing to work towards achieving gross margins in the mid-70s range late this year.
GAAP and non-GAAP operating expenses are expected to be approximately $5.7 billion and $4 billion, respectively, and we continue to expect full-year fiscal 2026 operating expense growth to be in the mid-30% range. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $450 million, excluding gains and losses from non-marketable and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 16.5%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website, including a new Financial Information AI agent. Let me highlight upcoming events for the financial community.
We will be at the B of A Global Technology Conference in San Francisco on June 4th, the Rosenblatt Virtual AI Summit and Nasdaq Investor Conference in London on June 10th, and GTC Paris at VivaTech on June 11th in Paris. We look forward to seeing you at these events. Our earnings call to discuss the results of our second quarter of fiscal 2026 is scheduled for August 27th. Now let me turn it over to Jensen to make some remarks.
Jensen Huang (President and CEO)
Thanks, Colette. We've had a busy and productive year. Let me share my perspective on some topics we're frequently asked. On export control, China is one of the world's largest AI markets and a springboard to global success. With half of the world's AI researchers based there, the platform that wins China is positioned to lead globally.
Today, however, the $50 billion China market is effectively closed to U.S. industry. The H20 export ban ended our Hopper data center business in China. We cannot reduce Hopper further to comply. As a result, we are taking a multi-billion dollar write-off on inventory that cannot be sold or repurposed. We are exploring limited ways to compete, but Hopper is no longer an option. China's AI moves on with or without U.S. chips. It has to compute to train and deploy advanced models. The question is not whether China will have AI. It already does. The question is whether one of the world's largest AI markets will run on American platforms. Shielding Chinese chip makers from U.S. competition only strengthens them abroad and weakens America's position. Export restrictions have spurred China's innovation and scale. The AI race is not just about chips. It's about which stack the world runs on.
As that stack grows to include 6G and quantum, U.S. global infrastructure leadership is at stake. The U.S. has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it's clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers wins AI. Export controls should strengthen U.S. platforms, not drive half of the world's AI talent to rivals. On DeepSeek: DeepSeek and Qwen from China are among the best open-source AI models. Released freely, they've gained traction across the U.S., Europe, and beyond. DeepSeek R1, like ChatGPT, introduced reasoning AI that produces better answers the longer it thinks. Reasoning AI enables step-by-step problem-solving, planning, and tool use, turning models into intelligent agents. Reasoning is compute-intensive, requiring hundreds to thousands of times more tokens per task than previous one-shot inference.
Reasoning models are driving a step-function surge in inference demand. AI scaling laws remain firmly intact, not only for training, but now inference too requires massive-scale compute. DeepSeek also underscores the strategic value of open-source AI. When popular models are trained and optimized on U.S. platforms, it drives usage, feedback, and continuous improvement, reinforcing American leadership across the stack. U.S. platforms must remain the preferred platform for open-source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure. Regarding onshore manufacturing, President Trump has outlined a bold vision to reshore advanced manufacturing, create jobs, and strengthen national security. Future plants will be highly computerized and robotic. We share this vision. TSMC is building six fabs and two advanced packaging plants in Arizona to make chips for NVIDIA.
Process qualification is underway, with volume production expected by year-end. SPIL and Amkor are also investing in Arizona, constructing packaging, assembly, and test facilities. In Houston, we're partnering with Foxconn to construct a million-square-foot factory to build AI supercomputers. Wistron is building a similar plant in Fort Worth, Texas. To encourage and support these investments, we've made substantial long-term purchase commitments, a deep investment in America's AI manufacturing future. Our goal: from chip to supercomputer, built in America within a year. Each GB200 NVLink 72 rack contains 1.2 million components and weighs nearly 2 tons. No one has produced supercomputers on this scale. Our partners are doing an extraordinary job. On the AI diffusion rule, President Trump rescinded the rule, calling it counterproductive, and proposed a new policy to promote U.S. AI tech with trusted partners. On his Middle East tour, he announced historic investments.
I was honored to join him in announcing a 500 MW AI infrastructure project in Saudi Arabia and a 5 GW AI campus in the UAE. President Trump wants U.S. tech to lead. The deals he announced are wins for America: creating jobs, advancing infrastructure, generating tax revenue, and reducing the U.S. trade deficit. The U.S. will always be NVIDIA's largest market and home to the largest installed base of our infrastructure. Every nation now sees AI as core to the next industrial revolution, a new industry that produces intelligence and essential infrastructure for every economy. Countries are racing to build national AI platforms to elevate their digital capabilities. At Computex, we announced Taiwan's first AI factory in partnership with Foxconn and the Taiwan government. Last week, I was in Sweden to launch its first national AI infrastructure.
Japan, Korea, India, Canada, France, the U.K., Germany, Italy, Spain, and more are now building national AI factories to empower startups, industries, and societies. Sovereign AI is a new growth engine for NVIDIA. Toshiya, back to you. Thank you.
Toshiya Hari (Head of Investor Relations)
Operator, we will now open the call for questions. Would you please poll for questions?
Operator (participant)
Thank you. At this time, I would like to remind everyone, in order to ask a question, press star, then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. Your first question comes from the line of Joe Moore with Morgan Stanley. Your line is open.
Joe Moore (Analyst)
Great. Thank you. You guys have talked about this scaling up of inference around reasoning models for at least a year now, and we've really seen that come to fruition, as you talked about. We've heard it from your customers.
Can you give us a sense for how much of that demand you're able to serve? Give us a sense for maybe how big the inference business is for you guys, and do we need full-on NVL72 rack scale solutions for reasoning inference going forward?
Jensen Huang (President and CEO)
We would like to serve all of it, and I think we're on track to serve most of it. Grace Blackwell, NVLink 72, is the ideal engine today, the ideal computer thinking machine, if you will, for reasoning AI. There's a couple of reasons for that. The first reason is that the token generation amount, the number of tokens reasoning goes through, is 1000 times more than a one-shot chatbot. It's essentially thinking to itself, breaking down a problem step by step. It might be planning multiple paths to an answer.
It could be using tools, reading PDFs, reading web pages, watching videos, and then producing a result, an answer. The longer it thinks, the better the answer, the smarter the answer is. What we would like to do, and the reason why Grace Blackwell was designed to give such a giant step up in inference performance, is so that you could do all this and still get a response as quickly as possible. Compared to Hopper, Grace Blackwell delivers some 40 times higher speed and throughput. This is going to be a huge benefit in driving down the cost while improving the quality of response with excellent quality of service at the same time. That is the fundamental reason. That was the core driving reason for Grace Blackwell NVLink 72.
Of course, in order to do that, we had to reinvent, literally redesign the entire way that these supercomputers are built. Now we're in full production. It's going to be exciting. It's going to be incredibly exciting.
Operator (participant)
The next question comes from Vivek Arya with Bank of America Securities. Your line is open.
Vivek Arya (Managing Director and Senior Analyst)
Thanks for the question. Just a clarification for Colette first. On the China impact, I think previously it was mentioned at about $15 billion. You had the $8 billion in Q2. Is there still some left as a headwind for the remaining quarters? Colette, how to model that? Question, Jensen, for you. Back at GTC, you had outlined a path towards almost a trillion dollars of AI spending over the next few years. Where are we in that build-out?
Do you think it's going to be uniform, where every type of spender, whether it's CSPs, sovereigns, or enterprises, continues to build out? Should we expect some periods of digestion in between? Just what are your customer discussions telling you about how to model growth for next year?
Colette Kress (EVP and CFO)
Yes, Vivek, thanks so much for the question regarding H20. Yes, we recognized $4.6 billion H20 in Q1. We were unable to ship $2.5 billion. The total for Q1 should have been $7 billion. When we look at our Q2, our Q2 is going to be meaningfully down in terms of China data center revenue. We had highlighted in terms of the amount of orders that we had planned for H20 in Q2, and that was $8 billion. Now, going forward, we did have other orders going forward that we will not be able to fulfill.
That is what was incorporated, therefore, in the amount that we wrote down of the $4.5 billion. That write-down was about inventory and purchase commitments. And our purchase commitments were aligned with what we expected regarding the orders that we had received. Going forward, though, it's a bigger issue regarding the amount of the market that we will not be able to serve. We assess that TAM to be close to about $50 billion in the future, as we don't have a product to serve the China market.
Jensen Huang (President and CEO)
Vivek, probably the best way to think through it is that AI is several things. Of course, we know that AI is this incredible technology that's going to transform every industry, from, of course, the way we do software to healthcare and financial services to retail to, I guess, every industry, transportation, manufacturing. And we're at the beginning of that.
Maybe another way to think about that is, where do we need intelligence? Where do we need digital intelligence? Every country, every industry. Because of that, we recognize that AI is also an infrastructure. It is a way of delivering a technology that requires factories. These factories produce tokens. They, as I mentioned, are important to every single industry and every single country. On that basis, we are really at the very beginning of it because the adoption of this technology is really kind of in its early stages. Now, we have reached an extraordinary milestone with AIs that are reasoning, are thinking, what people call inference-time scaling. Of course, it created a whole new era. We have entered an era where inference is going to be a significant part of the compute workload. Anyhow, it is going to be a new infrastructure.
We're building it out in the clouds. The United States is really the early starter and available in U.S. clouds. This is our largest market, our largest installed base, and we're going to continue to see that happening. Beyond that, we're going to see AI go into enterprise, which is on-prem, because so much of the data is still on-prem. Access control is really important. It's really hard to move every company's data into the cloud. We're going to move AI into the enterprise. You saw that we announced a couple of really exciting new products, our RTX Pro enterprise AI server that runs everything enterprise and AI, our DGX Spark and DGX Station, which is designed for developers who want to work on-prem. Enterprise AI is just taking off. Telcos.
Today, a lot of the telco infrastructure will be, in the future, software-defined and built on AI. 6G is going to be built on AI. That infrastructure needs to be built out. It is at its very, very early stages. Of course, every factory today that makes things will have an AI factory that sits with it. The AI factory is going to be creating AI and operating AI for the factory itself, but also to power the products and the things that are made by the factory. It is very clear that every car company will have AI factories. Very soon, there will be robotics companies, robot companies, and those companies will be also building AIs to drive the robots. We are at the beginning of all of this build-out.
Operator (participant)
The next question comes from CJ Muse with Cantor Fitzgerald.
Your line is open.
CJ Muse (Senior Managing Director)
Yeah, good afternoon. Thank you for taking the question. There have been many large GPU cluster investment announcements in the last month, and you alluded to a few of them with Saudi Arabia, the UAE, and then also we've heard from Oracle and xAI, just to name a few. My question, are there others that have yet to be announced of the same kind of scale and magnitude? Perhaps more importantly, how are these orders impacting your lead times for Blackwell and your current visibility sitting here today, almost halfway through 2025?
Jensen Huang (President and CEO)
We have more orders today than we did at the last time I spoke about orders at GTC. However, we're also increasing our supply chain and building out our supply chain. They're doing a fantastic job.
We're building it here on shore in the United States, but we're going to keep our supply chain quite busy for many more years coming. With respect to further announcements, I'm going to be on the road next week through Europe. Just about every country needs to build out AI infrastructure, and there are umpteen AI factories being planned. I think in the remarks, Colette mentioned there's 100 AI factories being built. There's a whole bunch that haven't been announced. I think the important concept here, which makes it easier to understand, is that like other technologies that impact literally every single industry, of course, electricity was one, and it became infrastructure. Of course, the information infrastructure, which we now know as the internet, affects every single industry, every country, every society. Intelligence is surely one of those things.
I don't know any company, industry, country who thinks that intelligence is optional. It's essential infrastructure. We've now digitalized intelligence. I think we're clearly in the beginning of the build-out of this infrastructure. Every country will have it. I'm certain of that. Every industry will use it. That I'm certain of. What's unique about this infrastructure is that it needs factories. It's a little bit like the energy infrastructure, electricity. It needs factories. We need factories to produce this intelligence. The intelligence is getting more sophisticated. We were talking about earlier that we had a huge breakthrough in the last couple of years with reasoning AI. Now there are agents that reason, and there are super agents that use a whole bunch of tools. There's clusters of super agents where agents are working with agents, solving problems.
You could just imagine, compared to one-shot chatbots and the agents that are now using AI built on these large language models, how much more compute-intensive they really need to be and are. I think we're in the beginning of the build-out. There should be many, many more announcements in the future.
Operator (participant)
Your next question comes from Ben Reitzes with Melius. Your line is open.
Ben Reitzes (Managing Director and Head of Technology Research)
Yeah, hi. Thanks for the question. I wanted to ask first to Colette just a little clarification around the guidance and maybe putting it in a different way. The $8 billion for H20 just seems like it's roughly $3 billion more than most people thought with regard to what you'd be foregoing in the second quarter.
That would mean that with regard to your guidance, the rest of the business, in order to hit 45, is doing $2 billion-$3 billion or so better. I was wondering if that math made sense to you. In terms of the guidance, that would imply the non-China business is doing a bit better than the street expected. Wondering what the primary driver was there in your view. The second part of my question, Jensen: I know you guide one quarter at a time. With regard to the AI diffusion rule being lifted and this momentum with sovereigns, there have been times in your history where you guys have sat on calls like this with more conviction in sequential growth throughout the year, etc.
Given the unleashing of demand with AI diffusion being revoked and the supply chain increasing, does the environment give you more conviction in sequential growth as we go throughout the year? First one for Colette, and then the next one for Jensen. Thanks so much.
Colette Kress (EVP and CFO)
Thanks, Ben, for the question. When we look at our Q2 guidance and our commentary that we provided that had the export controls not occurred, we would have had orders of about $8 billion for H20. That's correct. That was a possibility for what we would have had in our outlook for this quarter in Q2. What we also have talked about here is the growth that we've seen in Blackwell, Blackwell across many of our customers, as well as the growth that we continue to have in terms of supply that we need for our customers.
Putting those together, that's where we came through with the guidance that we provided. I'm going to turn the rest over to Jensen to see how he wants to.
Jensen Huang (President and CEO)
Yeah, thanks. Thanks, Ben. I would say compared to the beginning of the year, compared to the GTC timeframe, there are four positive surprises. The first positive surprise is the step-function demand increase of reasoning AI. I think it is fairly clear now that AI is going through an exponential growth. Reasoning AI really busted through concerns about hallucination or its ability to really solve problems. I think a lot of people are crossing that barrier and realizing how incredibly effective agentic AI is and reasoning AI is. Number one is inference reasoning and the exponential growth there, demand growth. The second one, you mentioned AI diffusion.
It's really terrific to see that the AI diffusion rule was rescinded. President Trump wants America to win. He also realizes that we're not the only country in the race. He wants the United States to win and recognizes that we have to get the American stack out to the world and have the world build on top of American stacks instead of alternatives. AI diffusion happened. The rescinding of it happened at almost precisely the time that countries around the world are awakening to the importance of AI as an infrastructure, not just as a technology of great curiosity and great importance, but infrastructure for their industries and startups and society. Just as they had to build out infrastructure for electricity and internet, you got to build out infrastructure for AI. I think that that's an awakening, and that creates a lot of opportunity.
The third is enterprise AI. Agents work. And these agents are really quite successful. Much more than generative AI, agentic AI is game-changing. Agents can understand ambiguous and rather implicit instructions and are able to problem-solve and use tools and have memory and so on. I think this enterprise AI is ready to take off. It's taken us a few years to build a computing system that is able to integrate and run enterprise AI stacks, run enterprise IT stacks, but add AI to it. This is the RTX Pro enterprise server that we announced at Computex just last week. Just about every major IT company has joined us and are super excited about that. Computing is one part of it. Remember, enterprise IT is really three pillars. It's compute, storage, and networking.
We have now put all three of them together finally, and we are going to market with that. Lastly, industrial AI. Remember, one of the implications of the world reordering, if you will, is regions onshoring manufacturing and building plants everywhere. In addition to AI factories, of course, there is new electronics manufacturing and chip manufacturing being built around the world. All of these new plants and factories are being created at exactly the right time, when Omniverse and AI and all the work that we are doing with robotics are emerging. This fourth pillar is quite important. Every factory will have an AI factory associated with it. In order to create these physical AI systems, you really have to train on a vast amount of data. Back to more data, more training, more AIs to be created, more computers.
These four drivers are really kicking into turbocharge.
Operator (participant)
Your next question comes from Timothy Arcuri with UBS. Your line is open.
Timothy Arcuri (Managing Director)
Thanks a lot. Jensen, I wanted to ask about China. It sounds like the July guidance assumes there's no SKU replacement for the H20. If the president wants the U.S. to win, it seems like you're going to have to be allowed to ship something into China. I guess I had two points on that. First of all, have you been approved to ship a new modified version into China? Are you currently building it but just unable to ship it in fiscal Q2? You were sort of run-rating $7 billion-$8 billion a quarter into China. Can we get back to those sorts of quarterly run rates once you get something that you're allowed to ship back into China?
I think we're all trying to figure out how much to add back to our models and when. Whatever you can say there would be great. Thanks.
Jensen Huang (President and CEO)
The president has a plan. He has a vision, and I trust him. With respect to our export controls, it's a set of limits. The new set of limits pretty much makes it impossible for us to reduce Hopper any further for any productive use. The new limits are kind of the end of the road for Hopper. We have limited options. The key is to understand the limits and see if we can come up with interesting products that could continue to serve the Chinese market. We don't have anything at the moment, but we're considering it. We're thinking about it.
Obviously, the limits are quite stringent at the moment. We have nothing to announce today. When the time comes, we'll engage the administration and discuss that.
Operator (participant)
Your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.
Jake On (Analyst)
Hi. This is Jake On for Aaron. Thanks for taking the question and congrats on the great quarter. I was wondering if you could give some additional color around the strength you saw within the networking business, particularly around the adoption of your Ethernet solutions at CSPs, as well as any change you're seeing in network attach rates.
Jensen Huang (President and CEO)
Yeah, thank you for that. We now have three networking platforms, maybe four. The first one is the scale-up platform to turn a computer into a much larger computer. Scaling up is incredibly hard to do.
Scaling out is easier to do, but scaling up is hard to do. That platform is called NVLink. NVLink comes with chips and switches and NVLink spines. It is really complicated. Anyways, that is our new platform, the scale-up platform. In addition to InfiniBand, we also have SpectrumX. We have been fairly consistent that Ethernet was designed for a lot of traffic flows that are independent. In the case of AI, you have a lot of computers working together. The traffic of AI is insanely bursty. Latency matters a lot because the AI is thinking, and it wants to get work done as quickly as possible. You have a whole bunch of nodes working together. We enhanced Ethernet, adding capabilities like extremely low latency, congestion control, and adaptive routing, the type of technologies that were available only in InfiniBand, to Ethernet.
As a result, we improved the utilization of Ethernet in these gigantic clusters from as low as 50% to as high as 85%, 90%. The difference is, if you had a cluster that's $10 billion and you improved its effectiveness by 40%, that's worth $4 billion. It's incredible. SpectrumX has been really, quite frankly, a home run in this last quarter. As we said in the prepared remarks, we added two very significant CSPs to the list of SpectrumX adopters. The last one is BlueField, which is our control plane.
The control-plane network is used for storage, for security, and for multi-tenant clusters that want to achieve isolation among their users while still delivering extremely high-performance bare-metal compute. BlueField is ideal for that and is used in a lot of these cases. We have these four networking platforms. They are all growing. We are doing really well. I am very proud of the team.
Operator (participant)
That is all the time we have for questions. Jensen, I will turn the call back to you.
Jensen Huang (President and CEO)
Thank you. This is the start of a powerful new wave of growth. Grace Blackwell is in full production. We are off to the races. We now have multiple significant growth engines. Inference, once a light workload, is surging with revenue-generating AI services.
AI is growing faster and will be larger than any platform shifts before, including the internet, mobile, and cloud. Blackwell is built to power the full AI lifecycle from training frontier models to running complex inference and reasoning agents at scale. Training demands continue to rise with breakthroughs in post-training and reinforcement learning and synthetic data generation. Inference is exploding. Reasoning AI agents require orders of magnitude more compute. The foundations of our next growth platforms are in place and ready to scale. Sovereign AI nations are investing in AI infrastructure like they once did for electricity and internet. Enterprise AI must be deployable on-prem and integrated with existing IT. Our RTX Pro, DGX Spark, and DGX Station enterprise AI systems are ready to modernize the $500 billion IT infrastructure on-prem or in the cloud. Every major IT provider is partnering with us.
Industrial AI, from training to digital twin simulation to deployment: NVIDIA Omniverse and Isaac GR00T are powering next-generation factories and humanoid robotic systems worldwide. The age of AI is here. Across AI infrastructure, inference at scale, sovereign AI, enterprise AI, and industrial AI, NVIDIA is ready. Join us at GTC Paris. I'll keynote at VivaTech on June 11, talking about quantum GPU computing, robotic factories, and robots, and celebrating our partnerships building AI factories across the region. The NVIDIA band will tour France, the U.K., Germany, and Belgium. Thank you for joining us on the earnings call today. See you in Paris.
Operator (participant)
This concludes today's conference call. You may now disconnect.