Earnings summaries and quarterly performance for NVIDIA.
Executive leadership at NVIDIA.
Jen-Hsun Huang
President and Chief Executive Officer
Ajay K. Puri
Executive Vice President, Worldwide Field Operations
Colette M. Kress
Executive Vice President and Chief Financial Officer
Debora Shoquist
Executive Vice President, Operations
Timothy S. Teter
Executive Vice President, General Counsel and Secretary
Board of directors at NVIDIA.
A. Brooke Seawell
Director
Aarti Shah
Director
Dawn Hudson
Director
Harvey C. Jones
Director
John O. Dabiri
Director
Mark A. Stevens
Director
Melissa B. Lora
Director
Persis S. Drell
Director
Robert K. Burgess
Director
Stephen C. Neal
Lead Independent Director
Tench Coxe
Director
Research analysts who have asked questions during NVIDIA earnings calls.
Joseph Moore
Morgan Stanley
7 questions for NVDA
Timothy Arcuri
UBS
7 questions for NVDA
Aaron Rakers
Wells Fargo
6 questions for NVDA
Vivek Arya
Bank of America Corporation
6 questions for NVDA
Ben Reitzes
Melius Research LLC
4 questions for NVDA
CJ Muse
Cantor Fitzgerald
4 questions for NVDA
Stacy Rasgon
Bernstein Research
4 questions for NVDA
Benjamin Reitzes
Melius Research
3 questions for NVDA
Christopher Muse
Cantor Fitzgerald
3 questions for NVDA
Jim Schneider
Goldman Sachs
3 questions for NVDA
Atif Malik
Citigroup Inc.
2 questions for NVDA
Toshiya Hari
Goldman Sachs Group, Inc.
2 questions for NVDA
Harlan Sur
JPMorgan Chase & Co.
1 question for NVDA
Jake Wilhelm
Wells Fargo Securities, LLC
1 question for NVDA
Mark Lipacis
Evercore ISI
1 question for NVDA
Matthew Ramsay
TD Cowen
1 question for NVDA
Pierre Ferragu
New Street Research
1 question for NVDA
Stacy Rasgon
Bernstein Research
1 question for NVDA
Vivek Arya
Bank of America Securities
1 question for NVDA
Recent press releases and 8-K filings for NVDA.
- Trainium3 delivers 2.52 petaflops of compute and supports 144 GB HBM3e memory with 4.9 TB/s bandwidth for complex AI workloads.
- Trainium3 UltraServers scale to 144 chips per server and can be combined into clusters of up to one million chips for massive AI training workloads.
- AWS will offer AI Factories services, enabling clients to deploy dedicated AI infrastructure within their own data centers.
- Amazon announced Trainium4 chips integrating Nvidia’s NVLink Fusion technology for high-speed rack-scale interconnects, boosting AI performance and reducing deployment risks.
- Nvidia is negotiating a $100 billion investment deal with OpenAI, with terms yet to be finalized.
- Under the agreement, at least 10 gigawatts of Nvidia systems would be deployed for OpenAI, enough to power more than 8 million U.S. homes (a rough sanity check of this figure appears after this group of bullets).
- This potential deal would be in addition to Nvidia's existing $500 billion in chip bookings through 2026, as confirmed by CFO Colette Kress.
- Nvidia shares rose approximately 2.6% following the announcement, reflecting investor confidence in the company's AI investments.
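As a rough cross-check of the household comparison above, the sketch below converts 10 GW of continuous capacity into an equivalent number of average U.S. homes. The ~10,500 kWh/year household consumption figure is an assumption used for illustration, not a number taken from the announcement.

```python
# Rough cross-check of the "10 GW ~ 8 million U.S. homes" comparison above.
# Assumption (not from the filing): an average U.S. household uses roughly
# 10,500 kWh of electricity per year, i.e. about 1.2 kW of continuous draw.
AVG_HOME_KWH_PER_YEAR = 10_500
HOURS_PER_YEAR = 365 * 24

avg_home_draw_kw = AVG_HOME_KWH_PER_YEAR / HOURS_PER_YEAR  # about 1.2 kW
deployment_kw = 10 * 1_000_000                             # 10 GW expressed in kW
homes_powered = deployment_kw / avg_home_draw_kw

print(f"Average household draw: {avg_home_draw_kw:.2f} kW")
print(f"10 GW covers roughly {homes_powered / 1e6:.1f} million average homes")  # ~8.3 million
```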
- NVIDIA forecasts $3–4 trillion data center infrastructure overhaul by 2030, driven by CPU-to-GPU transition and AI/agentic AI workloads.
- The company introduced Grace Blackwell rack-scale systems with seven-chip co-design, maintaining its competitive lead via full-stack hardware and CUDA software; the Vera Rubin GPU has taped out for a 2H26 ramp with a significant performance uplift.
- NVIDIA confirmed it can maintain mid-70s percent gross margins into next year through improved yields, cycle times, and cost management despite rising HBM costs.
- Inventory and purchase commitments rose by $25 billion, supporting anticipated revenue growth; current $500 billion Blackwell and Vera Rubin orders for 2025–26, with additional potential from direct OpenAI and Anthropic deals.
- Capital allocation prioritizes supply capacity funding, shareholder returns (buybacks, dividends), and strategic ecosystem investments, with selective M&A.
- NVIDIA expects a $3 trillion–$4 trillion data center infrastructure market by 2030 driven by the transition from CPUs to GPUs for accelerated computing.
- The company highlighted its Grace Blackwell rack-scale systems (200, Ultra and 300 series) and emphasized that its full-stack CUDA software ecosystem sustains its competitive lead.
- NVIDIA detailed a $500 billion planned deployment of Blackwell/Vera Rubin through 2026 and noted a 10 GW OpenAI LOI (~$400 billion)—with definitive terms still under negotiation and current fulfillment via CSP partners.
- CFO Colette Kress reaffirmed mid-70s gross margins for next year as Vera Rubin ramps and cited a $25 billion increase in inventory and purchase commitments as indicative of strong near-term revenue growth.
- CFO Colette Kress outlined a multi-phase shift from CPU-based to GPU-accelerated computing, forecasting $3–4 trillion of data-center infrastructure spending moving to AI by the end of the decade.
- NVIDIA has released Grace Blackwell 200/Ultra/300 series for full rack-scale deployment and maintains its performance lead through a tightly integrated hardware/software stack.
- Over 50 percent of NVIDIA’s revenue comes from cloud service providers; the company is negotiating a 10 GW LOI with OpenAI (≈ $400 billion) and continues collaborations with Anthropic via Microsoft’s CSPs.
- The company achieved and expects to sustain mid-70s percent gross margins, driven by improved cycle times, yields, and cost management ahead of Vera Rubin’s second-half ramp.
- Inventory and purchase commitments grew by $25 billion, reflecting robust demand; supply-demand balance is managed daily through purchase orders and capital planning.
- NVIDIA and Synopsys have formed a multi-year strategic partnership to integrate Synopsys’ engineering software with NVIDIA’s accelerated computing, AI technology and Omniverse digital twins to transform system-level design, EDA and CAE workflows.
- The collaboration targets 10x to over 1,000x speed-ups in workloads such as computational lithography, logic and circuit simulation, fluid dynamics and AI-based physics emulation.
- NVIDIA will commit $2 billion in equity to Synopsys as a demonstration of long-term commitment; both companies are allocating engineering teams to co-develop CUDA-accelerated and AI-infused tools.
- The non-exclusive partnership leverages Synopsys’ expanded customer reach (post-Ansys acquisition) and NVIDIA’s presence across all major clouds and OEM systems, aiming to expand the market from chip design to a multi-trillion-dollar product-design industry.
- NVIDIA and Synopsys unveiled a multi-year, non-exclusive partnership to integrate Synopsys’ EDA/IP tools with NVIDIA’s accelerated computing, AI technologies (CUDA, physical AI) and Omniverse digital twins to transform system-level design workflows.
- The collaboration aims to deliver 10× to over 1,000× speed-ups in core engineering simulations—ranging from computational lithography and logic simulation to fluid dynamics and AI-driven physics emulation—so that weeks-long tasks can complete in hours.
- NVIDIA will make a $2 billion equity investment in Synopsys to demonstrate commitment and provide optionality for joint R&D, without exclusivity on GPU procurement.
- The partnership leverages NVIDIA’s global GPU footprint—across clouds, OEMs, on-prem and edge—to enable Synopsys solutions to run wherever customers require accelerated compute.
- By extending from semiconductor design to sectors such as automotive, industrial, aerospace and drug discovery, the alliance targets a multi-trillion dollar total addressable market beyond traditional chip R&D.
- NVIDIA and Synopsys entered a non-exclusive, multi-year strategic partnership to integrate Synopsys’ engineering software with NVIDIA’s CUDA-accelerated computing, AI technology, and Omniverse digital twins.
- NVIDIA will invest $2 billion in Synopsys to jointly accelerate EDA, SDA, CAE, computer-aided drug discovery, and digital twin workflows.
- The collaboration targets 10×–1,000× speed-ups in core engineering workloads—shrinking simulations from weeks to hours and enabling system-level, real-time digital twins.
- It expands the total addressable market from the several-hundred-billion-dollar semiconductor sector to a multi-trillion-dollar global R&D industry, with key tool accelerations prioritized by 2026.
- Morgan Stanley maintains an Overweight rating on Nvidia while raising its price target from $235 to $250, reflecting confidence in the company’s AI-driven growth.
- The consensus from 56 analysts shows an average price target of $250.57 (high of $432.78, low of $138.00), implying 39.65% upside from the current $179.43 share price (the arithmetic is reproduced after this group of bullets).
- Morgan Stanley deferred raising its revenue forecasts despite Nvidia’s management projecting $500 billion in revenue over five quarters, choosing to validate the outlook through independent checks first.
- Industry observers stress Nvidia’s AI dominance, with Wedbush’s Dan Ives remarking, “it’s Nvidia’s world, everyone else is paying rent,” underscoring the company’s leading market position.
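For reference, the implied-upside percentage quoted above follows directly from the consensus average target and the quoted share price; the minimal sketch below simply reproduces that arithmetic using the figures from the bullet.

```python
# Reproducing the implied-upside figure quoted in the consensus bullet above.
avg_price_target = 250.57   # consensus average price target (per the bullet)
current_price = 179.43      # share price quoted in the bullet

upside_pct = (avg_price_target - current_price) / current_price * 100
print(f"Implied upside: {upside_pct:.2f}%")  # prints 39.65%
```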
- SoftBank founder Masayoshi Son sold the company's entire stake of 32.1 million Nvidia shares in October to raise capital for AI investments, saying he "was crying to sell Nvidia shares".
- The sale generated $5.8 billion, used to support SoftBank’s $22.5 billion financing commitment to OpenAI due by year-end.
- The firm reported its strongest quarter since summer 2022, with profits more than doubling to 2.5 trillion yen ($16.6 billion) primarily from paper gains on its OpenAI stake.
- As part of its deepening AI strategy, SoftBank is acquiring Ampere Computing and collaborating with Hon Hai on a large-scale Stargate data center project.
Quarterly earnings call transcripts for NVIDIA.