Earnings summaries and quarterly performance for NVIDIA.
Executive leadership at NVIDIA.
Jen-Hsun Huang
President and Chief Executive Officer
Ajay K. Puri
Executive Vice President, Worldwide Field Operations
Colette M. Kress
Executive Vice President and Chief Financial Officer
Debora Shoquist
Executive Vice President, Operations
Timothy S. Teter
Executive Vice President, General Counsel and Secretary
Board of directors at NVIDIA.
A. Brooke Seawell
Director
Aarti Shah
Director
Dawn Hudson
Director
Harvey C. Jones
Director
John O. Dabiri
Director
Mark A. Stevens
Director
Melissa B. Lora
Director
Persis S. Drell
Director
Robert K. Burgess
Director
Stephen C. Neal
Lead Independent Director
Tench Coxe
Director
Research analysts who have asked questions during NVIDIA earnings calls.
Joseph Moore
Morgan Stanley
7 questions for NVDA
Timothy Arcuri
UBS
7 questions for NVDA
Aaron Rakers
Wells Fargo
6 questions for NVDA
Vivek Arya
Bank of America Corporation
6 questions for NVDA
Ben Reitzes
Melius Research LLC
4 questions for NVDA
CJ Muse
Cantor Fitzgerald
4 questions for NVDA
Stacy Rasgon
Bernstein Research
4 questions for NVDA
Benjamin Reitzes
Melius Research
3 questions for NVDA
Christopher Muse
Cantor Fitzgerald
3 questions for NVDA
Jim Schneider
Goldman Sachs
3 questions for NVDA
Atif Malik
Citigroup Inc.
2 questions for NVDA
Toshiya Hari
Goldman Sachs Group, Inc.
2 questions for NVDA
Harlan Sur
JPMorgan Chase & Co.
1 question for NVDA
Jake Wilhelm
Wells Fargo Securities, LLC
1 question for NVDA
Mark Lipacis
Evercore ISI
1 question for NVDA
Matthew Ramsay
TD Cowen
1 question for NVDA
Pierre Ferragu
New Street Research
1 question for NVDA
Stacy Rasgon
Bernstein Research
1 question for NVDA
Vivek Arya
Bank of America Securities
1 question for NVDA
Recent press releases and 8-K filings for NVDA.
- NVIDIA and Dassault Systèmes announced their largest collaboration in over 25 years to integrate CUDA-X acceleration, NVIDIA AI, and Omniverse into Dassault Systèmes’ 3D design and simulation tools, enabling real-time virtual twins at 100×–1,000× scale and beyond.
- The partnership spans industries—from life sciences (BIOVIA-powered virtual protein design at Bel Group) to automotive engineering (real-time crash and aero simulation at Lucid) and factory automation (software-defined AI factories at Omron)—demonstrating cross-sector virtual twin applications.
- Introduction of AI “virtual companions” will let engineers seamlessly convert unstructured inputs (e.g., photos, sketches) into structured 3D models, automate compliance and manufacturability checks, and codify individual and organizational knowledge for design “shift-left” workflows.
- The joint vision underpins the coming reindustrialization—an estimated $85 trillion global infrastructure buildout over the next decade—and aims to accelerate software-defined products and factories with continuous, AI-driven virtual twin operations.
- NVIDIA and Dassault Systèmes announced their largest-ever collaboration, embedding CUDA-X acceleration, NVIDIA AI, and Omniverse into the 3DEXPERIENCE platform for real-time virtual twin capabilities.
- The integration spans life sciences (AI-driven protein and drug/material discovery with BIOVIA, e.g., Bel Group’s non-dairy protein R&D), engineering design (upstream crash and aerodynamic optimization for Lucid Motors), and fully software-defined factory automation (e.g., OMRON’s autonomous production lines).
- Jensen Huang characterized AI as foundational infrastructure akin to electricity and the internet, anticipating $85–100 trillion in industrial AI and digital twin investments over the next decade.
- AI-powered virtual companions will convert unstructured data (e.g., sketches, images) into structured 3D models, codify user expertise, automate design workflows, and keep proprietary knowledge on local systems.
- NVIDIA and Dassault Systèmes announce their largest-ever collaboration, integrating NVIDIA CUDA-X, AI, and Omniverse into CATIA, SOLIDWORKS, SIMULIA and other Dassault tools to enable real-time, generative virtual twins across industries.
- The partnership embeds physics-aware AI models (e.g., PhysicsNeMo) to accelerate simulations by orders of magnitude and shift compliance, manufacturability, and lifecycle checks upstream in the design process.
- Joint focus on sector-specific use cases—from BIOVIA-powered food-science virtual twins with Bel Group to automotive generative optimization for Lucid and fully software-defined smart factories with OMRON.
- NVIDIA highlights the scale of the coming AI-industrial infrastructure buildout, estimating $85–100 trillion in virtual twin-driven industrialization investment over the next decade.
- Cisco and NVIDIA detailed their AI factory vision, aiming to reinvent the enterprise computing stack across compute, storage, networking (via Cisco Nexus control plane), and security to support large-scale AI deployments.
- Jensen Huang urged companies to “let a thousand flowers bloom,” encouraging broad AI experimentation before curating and concentrating on core workflows such as chip design, software engineering, and systems integration.
- Emphasized an AI sensibility of abundance, applying orders-of-magnitude speedups (real-time instead of annual cycles) and zero-mass computing to the most impactful business problems.
- Outlined a five-layer AI stack—energy, chips, infrastructure (hardware & software), AI models, and applications—stressing that successful enterprise AI hinges on driving real-world applications, not just infrastructure.
- Advised firms to build on-premises AI infrastructure to safeguard proprietary IP (notably the questions their own people ask) and integrate “AI in the loop” for continuous organizational learning and value capture.
- Cisco and NVIDIA announced a collaboration to develop AI factories, aiming to reinvent enterprise computing across processing, storage, networking, and security for AI workloads.
- Jensen Huang urged enterprises to “let a thousand flowers bloom” by enabling broad AI experimentation in core business areas before curating and scaling the most impactful projects.
- The CEOs highlighted the shift from explicit to implicit programming—treating computing as infinitely fast and leveraging AI’s “abundance” mindset to tackle large, context-dependent problems.
- They detailed a five-layer AI stack—energy, chips, infrastructure, models, and applications—and stressed that success now hinges on rapid application development rather than infrastructure alone.
- Recommended building on-prem AI systems to protect proprietary IP—keeping AI “in the loop” so organizational questions and insights become enduring company assets.
- NVIDIA CEO Jensen Huang and Cisco’s leadership presented the AI factory partnership, aiming to reinvent the entire computing stack—processing, storage, networking and security—to accelerate enterprise AI deployment.
- Huang advocated an enterprise strategy of “let a thousand flowers bloom,” encouraging broad AI experimentation before curating efforts around a company’s core value drivers like chip design and software engineering, with partners such as Synopsys, Cadence, Siemens and Dassault.
- The talk underscored AI’s shift from explicit programming to implicit, self-supervised learning, scaling models from millions to trillions of parameters and enabling tasks that once took years to be done in real time.
- A five-layer AI “cake” was described—energy, chips, infrastructure, AI models and applications—emphasizing that applications and embedding “AI in the loop” will define future corporate IP and competitiveness.
- NVIDIA and Dassault Systèmes entered a long-term strategic partnership to integrate Dassault’s Virtual Twin and 3DEXPERIENCE platforms with NVIDIA’s Omniverse, open models and accelerated software libraries, aiming to create science-validated Industry World Models for real-time physics-grounded simulations.
- The reciprocal deal will see Dassault deploy NVIDIA-powered “AI factories” via its OUTSCALE cloud while NVIDIA adopts Dassault’s systems-engineering tools for its Rubin platform, positioning NVIDIA’s infrastructure as the backbone for scaled industrial AI.
- At its Feb. 2 SOLIDWORKS and 3DEXPERIENCE summit, Dassault unveiled three AI assistants built on these World Models, with the CEO projecting potential tenfold productivity gains for adopters.
- The partners emphasized training on decades of validated scientific and engineering data to produce physics-grounded AI, targeting a market opportunity as high as $9 trillion.
- EPRI is partnering with Prologis, NVIDIA, and InfraPartners to develop micro data centers (5–20 MW) for distributed inference, aiming for at least five U.S. pilot sites by end-2026.
- The project will co-locate inference compute at or near utility substations to leverage underused grid capacity, reduce transmission congestion, and enhance grid reliability.
- NVIDIA will provide its GPU-accelerated computing platform and architectural guidance, while Prologis sources sites and manages development and InfraPartners supplies high-density AI data center modules.
- NVIDIA’s AI supercomputer architecture combines scale-up NVLink, scale-out Spectrum-X Ethernet, context memory storage via BlueField DPUs, and Scale Across for multi-data center connectivity.
- Spectrum-X Ethernet delivers 3× higher expert dispatch performance for inference and 1.4× faster, fully synchronous training by eliminating jitter through RDMA, fine-grain adaptive routing, and SuperNIC injection control.
- Co-packaged optics shifts the optical engine into the switch package, cutting scale-out network power by 5×, boosting signal integrity by 64×, and improving data-center resilience by encapsulating lasers in liquid-cooled modules.
- NVIDIA products include a 102 Tb/s Spectrum-X switch (128× 800 GbE or 512× 200 GbE), a 409 Tb/s Spectrum-X variant, and a 115 Tb/s Quantum-X InfiniBand switch (144× 800 Gb/s), all fully liquid-cooled for maximum energy efficiency and million-GPU scalability.
- NVIDIA outlined a four-tier AI supercomputer infrastructure comprising NVLink for rack-scale GPUs, Spectrum-X Ethernet for scale-out, ConnectX with BlueField for storage, and Spectrum-X scale-across for multi-site deployments.
- Introduced co-packaged optics in Spectrum-X Ethernet and Quantum-X InfiniBand switches, embedding the optical engine with the switch ASIC to reduce network power by up to 5×, boost signal integrity 64×, and increase reliability 13×.
- Unveiled Spectrum-X Photonics switches: a 102 Tbps model (128 × 800 Gbps or 512 × 200 Gbps) and a 409 Tbps variant (512 × 800 Gbps or 2,048 × 200 Gbps); plus Quantum-X InfiniBand photonics: 115 Tbps with 144 × 800 Gbps ports (the port-count arithmetic behind these totals is sketched after this list).
- Announced early 2026 deployments of Quantum-X InfiniBand CPO with CoreWeave, Lambda and TACC, with Spectrum-X Ethernet CPO shipping later in the year.
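The aggregate-bandwidth figures quoted above follow directly from port count multiplied by per-port line rate. A minimal sketch of that arithmetic is below; the pairing of each port configuration with a specific model mirrors the summaries and is illustrative, not an official spec sheet.

```python
# Sanity-check the switch throughput figures quoted in the summaries above:
# aggregate bandwidth is simply port count x per-port line rate.
# Port configurations are taken from the summaries; model pairings are assumptions.

def aggregate_tbps(ports: int, gbps_per_port: int) -> float:
    """Total switch throughput in Tb/s for one port configuration."""
    return ports * gbps_per_port / 1_000

configs = [
    ("Spectrum-X Photonics (~102 Tb/s), 128 x 800 GbE", 128, 800),
    ("Spectrum-X Photonics (~102 Tb/s), 512 x 200 GbE", 512, 200),
    ("Spectrum-X Photonics (~409 Tb/s), 512 x 800 GbE", 512, 800),
    ("Quantum-X InfiniBand (~115 Tb/s), 144 x 800 Gb/s", 144, 800),
]

for label, ports, rate in configs:
    print(f"{label}: {aggregate_tbps(ports, rate):.1f} Tb/s")
```

Each product lands within rounding of the quoted total (102.4, 102.4, 409.6, and 115.2 Tb/s), which is how the two port options for the 102 Tb/s model reconcile with a single aggregate figure.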
Fintool News
In-depth analysis and coverage of NVIDIA.

NVIDIA and Dassault Systèmes Forge 'Largest Collaboration' in 25 Years to Build Industrial AI Platform

Nvidia Cancels RTX 50 Super, Delays RTX 60 to 2028: AI Chips Win, Gamers Lose

Cerebras Raises $1B at $23B Valuation, Nearly Tripling in Five Months

ElevenLabs Raises $500 Million at $11 Billion Valuation, Becoming UK's Most Valuable AI Startup

Jensen Huang at Cisco AI Summit: Physical AI Unlocks $100 Trillion Economy

NVIDIA Confirms CoreWeave, Lambda, TACC as First Co-Packaged Optics Customers in Webinar
Quarterly earnings call transcripts for NVIDIA.