
Broadcom - Earnings Call - Q2 2025

June 5, 2025

Executive Summary

  • Broadcom delivered record revenue of $15.00B (+20% y/y) and non-GAAP EPS of $1.58 in Q2 FY25, modestly beating S&P Global consensus on revenue and EPS; company-reported Adjusted EBITDA was $10.00B (67% of revenue).
  • AI momentum remained the key catalyst: AI semiconductor revenue grew 46% y/y to >$4.4B, with Ethernet-based AI networking representing ~40% of AI revenue; management guides AI semiconductor revenue to $5.1B in Q3 (+60% y/y), marking the 10th consecutive quarter of AI growth.
  • Guidance: Q3 FY25 revenue ~$15.8B (+21% y/y) and Adjusted EBITDA margin ≥66% of revenue; segment guides called for semis ~$9.1B, software ~$6.7B; gross margin guided down ~130bps q/q on higher XPU mix; non-GAAP tax rate maintained at 14%.
  • Capital return remained robust: $2.785B cash dividends ($0.59/share) and $4.216B share repurchases/eliminations in Q2; Board authorized $10B buyback program in April, supporting share-count moderation despite dilution from employee vestings.
  • Stock narrative catalyst: accelerating AI networking and XPU programs (Tomahawk 6 launch at 102.4Tbps), sustained AI revenue trajectory into FY26, and continued VMware conversion to VCF (VCF adoption at >87% of top 10,000 customers), though mix shifts to XPUs pressure margins sequentially.

What Went Well and What Went Wrong

What Went Well

  • AI growth and mix: “Q2 AI revenue grew 46% y/y to over $4.4 billion… robust demand for AI networking,” with AI networking ~40% of AI revenue and strong traction for Ethernet scale-up and scale-out fabrics.
  • VMware/Infrastructure software execution: Infrastructure software revenue rose to $6.596B (+25% y/y), above outlook in Q2; VCF adoption exceeded 87% among top 10,000 customers, driving double-digit ARR growth.
  • Cash generation and returns: Record free cash flow of $6.411B (+44% y/y), dividends of $2.785B, and $4.216B of repurchases/eliminations returned ~$7B to shareholders; DSO improved to 34 days vs. 40 a year ago.

What Went Wrong

  • Margin dilution from XPU mix: Q3 consolidated gross margin guided down ~130bps sequentially primarily on higher XPU mix; management reiterated XPU margins are “slightly lower” than rest of semis (ex-wireless).
  • Non-AI semis sluggish: Non-AI semiconductor revenue hovered around $4B, with wireless down on seasonality and industrial also down; management views the business as near the bottom, with a slow recovery.
  • Regulatory/geopolitical uncertainty: Management cannot provide comfort on export controls given rapidly changing rules; emphasizes uncertainty around bilateral agreements.

Transcript

Operator (participant)

Welcome to Broadcom's second quarter fiscal year 2025 financial results conference call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom.

Jihyung Yoo (Head of Investor Relations)

Thank you, Operator, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the second quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the investor section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the investor section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our second quarter fiscal year 2025 results, guidance for our third quarter of fiscal year 2025, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments.

Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I will now turn the call over to Hock.

Hock Tan (CEO)

Thank you, Ji, and thank you, everyone, for joining us today. In our fiscal Q2 2025, total revenue was a record $15 billion, up 20% year on year. This 20% year-on-year growth was all organic, as Q2 last year was the first full quarter with VMware. Revenue was driven by continued strength in AI semiconductors and the momentum we have achieved in VMware. Reflecting excellent operating leverage, Q2 consolidated adjusted EBITDA was $10 billion, up 35% year on year. Now, let me provide more color. Q2 semiconductor revenue was $8.4 billion, with growth accelerating to 17% year on year, up from 11% in Q1. Driving this growth was AI semiconductor revenue of over $4.4 billion, up 46% year on year, continuing a trajectory of nine consecutive quarters of strong growth.

Within this, custom AI accelerators grew double digits year on year, while AI networking grew over 170% year on year. AI networking, which is based on Ethernet, was robust and represented 40% of our AI revenue. As a standards-based open protocol, Ethernet enables one single fabric for both scale-out and scale-up, and remains the preferred choice by our hyperscale customers. Our networking portfolio of Tomahawk switches, Jericho routers, and NICs is what's driving our success within AI clusters in hyperscalers. The momentum continues with our breakthrough Tomahawk 6 switch just announced this week. This represents the next generation 102.4 terabits per second switch capacity. Tomahawk 6 enables clusters of more than 100,000 AI accelerators to be deployed in just two tiers instead of three.

This flattening of the AI cluster is huge because it enables much better performance in training next-generation frontier models through lower latency, higher bandwidth, and lower power. Turning to XPUs, or custom accelerators, we continue to make excellent progress on the multi-year journey of enabling our three customers and four prospects to deploy custom AI accelerators. As we articulated over six months ago, we eventually expect at least three customers to each deploy clusters of 1 million AI accelerators in 2027, largely for training their frontier models. We forecast, and continue to forecast, that a significant percentage of these deployments will be custom XPUs. These partners remain unwavering in their plan to invest despite this uncertain economic environment. In fact, what we've seen recently is that they are doubling down on inference in order to monetize their platforms.
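The two-tier claim can be sanity-checked with a back-of-envelope leaf-spine calculation. The radix-512 figure assumes the 102.4 Tb/s switch capacity is split into 200 Gb/s ports (an assumption; the call does not state a port speed), and the radix²/2 formula assumes half of each leaf switch's ports face accelerators:

```python
# Back-of-envelope check on "100,000+ accelerators in two tiers".
# Assumptions (not from the call): 200 Gb/s per port, and a two-tier
# leaf-spine fabric where half of each leaf's ports face hosts.

SWITCH_CAPACITY_GBPS = 102_400   # Tomahawk 6: 102.4 Tb/s
PORT_SPEED_GBPS = 200            # assumed per-port speed

radix = SWITCH_CAPACITY_GBPS // PORT_SPEED_GBPS   # ports per switch
two_tier_hosts = radix * radix // 2               # endpoints in a 2-tier fabric

print(radix)           # 512
print(two_tier_hosts)  # 131072 -> comfortably above 100,000 accelerators
```

Under these assumptions a two-tier fabric reaches ~131k endpoints, consistent with the "more than 100,000 accelerators in two tiers instead of three" remark.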

Reflecting this, we may actually see an acceleration of XPU demand into the back half of 2026 to meet urgent demand for inference on top of the demand we have indicated from training. Accordingly, we do anticipate now our fiscal 2025 growth rate of AI semiconductor revenue to sustain into fiscal 2026. Turning to our Q3 outlook, as we continue our current trajectory of growth, we forecast AI semiconductor revenue to be $5.1 billion, up 60% year on year, which would be the 10th consecutive quarter of growth. Turning to non-AI semiconductors in Q2, revenue of $4 billion was down 5% year on year. Non-AI semiconductor revenue is close to the bottom and has been relatively slow to recover. There are bright spots. In Q2, broadband, enterprise networking, and server storage revenues were up sequentially.

However, industrial was down, and as expected, wireless was also down due to seasonality. In Q3, we expect enterprise networking and broadband to continue to grow sequentially, but server storage, wireless, and industrial are expected to be largely flat. Overall, we forecast non-AI semiconductor revenue to stay around $4 billion. Now, let me talk about our infrastructure software segment. Q2 infrastructure software revenue of $6.6 billion was up 25% year on year, above our outlook of $6.5 billion. As we have said before, this growth reflects our success in converting our enterprise customers from perpetual vSphere to the full VCF software stack subscription. Customers are increasingly turning to VCF to create a modernized private cloud on-prem, which will enable them to repatriate workloads from public clouds while being able to run modern container-based applications and AI applications. Of our 10,000 largest customers, over 87% have now adopted VCF.

The momentum from strong VCF sales over the past 18 months since the acquisition of VMware has created annual recurring revenue, or ARR, growth of double digits in our core infrastructure software. In Q3, we expect infrastructure software revenue to be approximately $6.7 billion, up 16% year on year. In total, we're guiding Q3 consolidated revenue to be approximately $15.8 billion, up 21% year on year. We expect Q3 adjusted EBITDA margin to be at least 66% of revenue. With that, let me turn the call over to Kirsten.

Kirsten Spears (CFO)

Thank you, Hock. Let me now provide additional detail on our Q2 financial performance. Consolidated revenue was a record $15 billion for the quarter, up 20% from a year ago. Gross margin was 79.4% of revenue in the quarter, better than we originally guided on product mix. Consolidated operating expenses were $2.1 billion, of which $1.5 billion was related to R&D. Q2 operating income of $9.8 billion was up 37% from a year ago, with operating margin at 65% of revenue. Adjusted EBITDA was $10 billion, or 67% of revenue, above our guidance of 66%. This figure excludes $142 million of depreciation. Now, a review of the P&L for our two segments, starting with semiconductors. Revenue for our semiconductor solution segment was $8.4 billion, with growth accelerating to 17% year on year, driven by AI. Semiconductor revenue represented 56% of total revenue in the quarter.

Gross margin for our semiconductor solution segment was approximately 69%, up 140 basis points year on year, driven by product mix. Operating expenses increased 12% year on year to $971 million on increased investment in R&D for leading-edge AI semiconductors. Semiconductor operating margin of 57% was up 200 basis points year on year. Now, moving on to infrastructure software. Revenue for infrastructure software of $6.6 billion was up 25% year on year and represented 44% of total revenue. Gross margin for infrastructure software was 93% in the quarter compared to 88% a year ago. Operating expenses were $1.1 billion in the quarter, resulting in infrastructure software operating margin of approximately 76%. This compares to operating margin of 60% a year ago. This year-on-year improvement reflects our disciplined integration of VMware. Moving on to cash flow. Free cash flow in the quarter was $6.4 billion and represented 43% of revenue.

Free cash flow as a percentage of revenue continues to be impacted by increased interest expense from debt related to the VMware acquisition and increased cash taxes. We spent $144 million on capital expenditures. Days sales outstanding were 34 days in the second quarter compared to 40 days a year ago. We ended the second quarter with inventory of $2 billion, up 6% sequentially in anticipation of revenue growth in future quarters. Our days of inventory on hand were 69 days in Q2, as we continue to remain disciplined in how we manage inventory across the ecosystem. We ended the second quarter with $9.5 billion of cash and $69.4 billion of gross principal debt. Subsequent to quarter end, we repaid $1.6 billion of debt, resulting in gross principal debt of $67.8 billion.

The weighted average coupon rate and years to maturity of our $59.8 billion in fixed-rate debt is 3.8% and 7 years, respectively. The weighted average interest rate and years to maturity of our $8 billion in floating-rate debt is 5.3% and 2.6 years, respectively. Turning to capital allocation, in Q2, we paid stockholders $2.8 billion of cash dividends based on a quarterly common stock cash dividend of $0.59 per share. In Q2, we repurchased $4.2 billion, or approximately 25 million shares of common stock. In Q3, we expect the non-GAAP diluted share count to be approximately 4.97 billion shares, excluding the potential impact of any share repurchases. Now, moving on to guidance. Our guidance for Q3 is for consolidated revenue of $15.8 billion, up 21% year on year. We forecast semiconductor revenue of approximately $9.1 billion, up 25% year on year.

Within this, we expect Q3 AI semiconductor revenue of $5.1 billion, up 60% year on year. We expect infrastructure software revenue of approximately $6.7 billion, up 16% year on year. For modeling purposes, we expect Q3 consolidated gross margin to be down approximately 130 basis points sequentially, primarily reflecting a higher mix of XPUs within AI revenue. As a reminder, consolidated gross margins through the year will be impacted by the revenue mix of infrastructure software and semiconductors. We expect Q3 adjusted EBITDA margin to be at least 66% of revenue. We expect the non-GAAP tax rate for Q3 and fiscal year 2025 to remain at 14%. That concludes my prepared remarks. Operator, please open up the call for questions.
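The debt figures in the CFO's remarks imply a blended borrowing cost; the blend below is derived arithmetic on the reported principal and rate figures, not a company-reported number:

```python
# Blended cost of gross principal debt after the post-quarter repayment,
# from the fixed/floating figures in the CFO's remarks. The blended rate
# itself is my computation, not a company-reported figure.

fixed_principal, fixed_rate = 59.8, 3.8       # $B, % weighted avg coupon
floating_principal, floating_rate = 8.0, 5.3  # $B, % weighted avg rate

total = fixed_principal + floating_principal
blended = (fixed_principal * fixed_rate
           + floating_principal * floating_rate) / total

print(round(total, 1))    # 67.8 -> matches the $67.8B gross principal debt
print(round(blended, 2))  # ~3.98 -> blended rate just under 4%
```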

Operator (participant)

Thank you. To ask a question, you will need to press star 11 on your telephone. To withdraw your question, please press star 11 again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. Our first question will come from the line of Ross Seymore with Deutsche Bank. Your line is open.

Ross Seymore (Managing Director)

Hi, guys. Thanks for letting me ask a question. Hock, I wanted to jump onto the AI side and specifically some of the commentary you had about next year. Can you just give a little bit more color on the inference commentary you gave? Is it more the XPU side, the connectivity side, or both that's giving you the confidence to talk about the growth rate that you have this year being matched next fiscal year?

Hock Tan (CEO)

Thank you, Ross. Good question. What we are seeing, and what we increasingly have quite a bit of visibility into, is increased deployment of XPUs next year, much more than we originally thought, and hand in hand with that, of course, more and more networking. It is a combination of both.

Ross Seymore (Managing Director)

The inference side of things?

Hock Tan (CEO)

Yeah. We're seeing much more inference now.

Ross Seymore (Managing Director)

Thank you.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of Harlan Sur with JPMorgan. Your line is open.

Harlan Sur (Executive Director of Equity Research)

Good afternoon. Thanks for taking my question and great job on the quarterly execution. Hock, good to see the positive growth inflection quarter to quarter, year over year growth rates in your AI business. As the team has mentioned, the quarters can be a bit lumpy. If I smooth out kind of first three quarters of this fiscal year, your AI business is up 60% year over year. It is kind of right in line with your three-year kind of SAM growth figure. Given your prepared remarks and knowing that your lead times remain at 35 weeks or better, do you see the Broadcom team sustaining the 60% year over year growth rate exiting this year?

I assume that that potentially implies that you see your AI business sustaining the 60% year over year growth rate into fiscal 2026, again, based on your prepared commentary, which again is in line with your SAM growth figure. Is that kind of a fair way to think about the trajectory this year and next year?

Hock Tan (CEO)

Harlan, that's a very insightful set of analysis. That's exactly what we're trying to do here, because over six months ago, we gave you guys a point, a year: 2027. As we come into the second half of 2025, with improved visibility and the updates we're seeing in the way our hyperscale partners are deploying data centers and AI clusters, we are providing some level of guidance on what we are seeing and what the trajectory of 2026 might look like. I'm not giving you any update on 2027; we're simply standing by the 2027 outlook we gave six months ago. What we're doing now is giving you more visibility into where we see 2026 headed.

Harlan Sur (Executive Director of Equity Research)

Is the framework that you laid out for us, like second half of last year, which implies 60% kind of growth figure in your SAM opportunity, is that kind of the right way to think about it as it relates to the profile of growth in your business this year and next year?

Hock Tan (CEO)

Yes.

Harlan Sur (Executive Director of Equity Research)

Okay. Thank you, Hock.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of Ben Reitzes with Melius Research. Your line is open.

Ben Reitzes (Managing Director and Head of Technology Research)

Hey, how you doing? Thanks, guys. Hey, Hock. AI networking was really strong in the quarter, and it seemed like it must have beat expectations. I was wondering if you could just talk about the networking in particular, what caused that, and how much of that is your acceleration into next year? When do you think you see Tomahawk kicking in as part of that acceleration? Thanks.

Hock Tan (CEO)

I think AI networking, as you probably know, goes pretty much hand in hand with the deployment of AI accelerator clusters. It does not deploy on a timetable that is very different from the way the accelerators get deployed, whether they are XPUs or GPUs. They deploy a lot in scale-out, where Ethernet, of course, is the protocol of choice, but it is also increasingly moving into what we all call scale-up within those data centers, where you have much higher consumption or density of switches, more than we originally thought, than you have in the scale-out scenario. In fact, the switch density in scale-up is 5-10 times that of scale-out.

That is the part that pleasantly surprised us, which is why this past quarter, Q2, the AI networking portion continued at about 40%, consistent with what we reported a quarter ago for Q1. At that time, I said I expected it to drop. It has not.

Ben Reitzes (Managing Director and Head of Technology Research)

Your thoughts on Tomahawk driving acceleration for next year and when it kicks in?

Hock Tan (CEO)

Oh, Tomahawk 6. Oh, yeah. There's extremely strong interest. Now, we're not shipping big orders, or any orders other than basic proofs of concept out to customers, but there is tremendous demand for these new 102.4 terabit per second Tomahawk switches.

Ross Seymore (Managing Director)

Thanks, Hock.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of Blayne Curtis with Jefferies. Your line is open.

Blayne Curtis (Managing Director)

Hey. Thanks. Great results. I just wanted to ask maybe following up on the scale-out opportunity. Today, I guess your main customer is not really using an NVLink switch-style scale-up. I'm just kind of curious your visibility or the timing in terms of when you might be shipping a switch Ethernet scale-up network to your customers.

Hock Tan (CEO)

You're talking scale-up?

Blayne Curtis (Managing Director)

Scale-up.

Hock Tan (CEO)

Yeah. Scale-up is very rapidly converting to Ethernet now, very much so. For our fairly narrow band of hyperscale customers, scale-up is very much Ethernet.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of Stacy Rasgon with Bernstein. Your line is open.

Stacy Rasgon (Managing Director and Senior Analyst of U.S. Semiconductors and Semiconductor Capital Equipment)

Hi, guys. Thanks for taking my questions. Hock, I still wanted to follow up on that AI 2026 question. I want to just put some numbers on it just to make sure I've got it right. So you did 60% in the first three quarters of this year. If you grow 60% year over year in Q4, it'd put you at, I don't know, $5.8 billion, something like $19 billion or $20 billion for the year. And then are you saying you're going to grow 60% in 2026, which would put you $30 billion plus in AI revenues for 2026? I just want to make sure. Is that the math that you're trying to communicate to us directly?
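The analyst's arithmetic can be reproduced directly; all inputs below are Stacy Rasgon's approximations from the question itself, not company guidance:

```python
# Reproducing the analyst's back-of-envelope math. The Q4 estimate and
# FY25 total are the analyst's approximations, not guided figures.

growth = 0.60         # ~60% y/y growth rate discussed on the call
q4_fy25_est = 5.8     # $B, analyst's Q4 estimate at that growth rate
fy25_est = 19.5       # $B, midpoint of the "$19-20 billion" in the question

fy26_est = fy25_est * (1 + growth)
print(round(fy26_est, 1))  # 31.2 -> the "$30 billion plus" figure cited
```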

Hock Tan (CEO)

I think you're doing the math. I'm giving you the trend. I did answer that question, which I think Harlan asked earlier. The rate we are seeing so far in fiscal 2025 will presumably continue; we don't see any reason why it wouldn't, given lead-time visibility into 2025. What we're seeing today, based on the visibility we have into 2026, is that we'll be able to ramp up this AI revenue on the same trajectory. Yes.

Stacy Rasgon (Managing Director and Senior Analyst of U.S. Semiconductors and Semiconductor Capital Equipment)

Is the SAM going up? Is the SAM going up as well? Because now you have inference on top of training. Is the SAM still $60-$90 billion, or is the SAM higher now as you see it?

Hock Tan (CEO)

I'm not playing a SAM game here. I'm just giving a trajectory towards where we drew the line on 2027 before. So I have no response to is the SAM going up or not. Stop talking about SAM now. Thanks.

Stacy Rasgon (Managing Director and Senior Analyst of U.S. Semiconductors and Semiconductor Capital Equipment)

Okay. Thank you.

Operator (participant)

One moment for our next question. That will come from the line of Vivek Arya with Bank of America. Your line is open.

Vivek Arya (Managing Director)

Thanks for taking my question. I had a near and then a longer-term question on the XPU business. Hock, for near term, if your networking upside in Q2 and overall AI was in line, it means XPU was perhaps not as strong. I realize it's lumpy, but anything more to read into that, any product transition or anything else? Just a clarification there. Longer term, you have outlined a number of additional customers that you're working with. What milestones should we look forward to, and what milestones are you watching to give you the confidence that you can now start adding that addressable opportunity into your 2027 or 2028 or other numbers? How do we get the confidence that these projects are going to turn into revenue in some reasonable timeframe from now? Thank you.

Hock Tan (CEO)

Okay. On the first part you're asking, it's like you're trying to count how many angels are on the head of a pin, whether it's XPU or networking. Networking is hot, but that doesn't mean XPU is any softer. It's very much along the trajectory we expect it to be. There's no lumpiness. There's no softening. It's pretty much the trajectory we expect so far, into next quarter, and probably beyond. We have, in our view, fairly clear visibility on the short-term trajectory. In terms of going on to 2027, no, we are not updating any numbers here. Six months ago, we drew a sense for the size of the SAM based on clusters of 1 million XPUs for each of three customers.

That outlook is still very much valid. We have not provided any further updates, nor are we intending to at this point. When we get better visibility and a clearer sense of where we are, which probably will not happen until 2026, we will be happy to give the audience an update. Right now, though, in today's prepared remarks and in answering a couple of questions, we are intending to give you more visibility into what we are seeing: the growth trajectory in 2026.

Vivek Arya (Managing Director)

Thank you, Hock.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of CJ Muse with Cantor Fitzgerald. Your line is open.

CJ Muse (Senior Managing Director)

Yeah. Good afternoon. Thank you for taking the question. I was hoping to follow up on Ross's question regarding inference opportunity. Can you discuss workloads that are optimal that you're seeing for custom silicon? And that over time, what percentage of your XPU business could be inference versus training? Thank you.

Hock Tan (CEO)

I think there's no differentiation between training and inference in using merchant accelerators versus custom accelerators. I think the whole premise behind going towards custom accelerators continues, which is not a matter of cost alone. It is that as custom accelerators get used and get developed on a roadmap with any particular hyperscaler, there's a learning curve, a learning curve on how they could optimize the way their algorithms on their large language models get written and tied to silicon. That ability to do so is a huge value added in creating algorithms that can drive their LLMs to higher and higher performance, much more than basically a segregation approach between hardware and the software. You literally combine end-to-end hardware and software as they take that journey. It is a journey. They don't learn that in one year.

Do it a few cycles, get better and better at it, and then realize the fundamental value in creating your own hardware versus using a third-party merchant silicon. You are able to optimize your software to the hardware and eventually achieve way higher performance than you otherwise could. We see that happening.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of Karl Ackerman with BNP Paribas. Your line is open.

Karl Ackerman (Managing Director and Equity Research of Semiconductors and IT Hardware)

Yes. Thank you. Hock, you spoke about the much higher content opportunity in scale-up networking. I was hoping you could discuss how important adoption of co-packaged optics is in achieving this 5-10x higher content for scale-up networks. Or should we anticipate much of the scale-up opportunity will be driven by Tomahawk switches and Thor NICs? Thank you.

Hock Tan (CEO)

I'm trying to decipher this question of yours. Let me try to answer it in the way I think you want me to clarify. First and foremost, most of the scaling up that's going on today, which means a lot of XPU-to-XPU or GPU-to-GPU interconnects, is done on copper. Because the size of these scale-up clusters is still not that huge yet, you can get away with using copper interconnects, and mostly that is what they're doing today. At some point soon, I believe, when you start trying to go beyond maybe 72 GPU-to-GPU interconnects, you may have to push to a different medium, from copper to optical.

When we do that, yeah, perhaps then exotic things like co-packaging silicon with optics might become relevant. Truly, what we are really talking about is that at some stage, as the clusters get larger, which means scale-up becomes much bigger, you need to interconnect GPUs or XPUs to each other in scale-up, many more than just 72 or 100, maybe even 128, and you keep going up from there. You want to use optical interconnects simply because of distance. That is when optical will start replacing copper. When that happens, the question is, what's the best way to deliver on optical? One way is co-packaged optics, but it's not the only way. You can simply use conventional, perhaps pluggable, low-cost optics, in which case you can use the full bandwidth, the radix, of a switch.

Our switch now has a radix of 512 connections. You can connect all these XPUs or GPUs, 512 of them, in scale-up. That would be huge. The move to optical is going to happen, in my view, within a year or two, and we will be right at the forefront of it. It may be co-packaged optics, which we very much have in development, or it could just be, as a first step, pluggable optics. Whatever it is, the bigger question is when GPU-to-GPU connectivity goes from copper to optical. The step in that move will be huge. It is not necessarily co-packaged optics, though that is definitely one path we are pursuing.

Karl Ackerman (Managing Director and Equity Research of Semiconductors and IT Hardware)

Very clear. Thank you.

Operator (participant)

One moment for our next question. That will come from the line of Joshua Buchalter with TD Cowen. Your line is open.

Joshua Buchalter (Director of Equity Research Semiconductors)

Hey, guys. Thank you for taking my question. I realize it's a bit nitpicky, but I wanted to ask about gross margins in the guide. Your revenue guide implies roughly an $800 million sequential increase, but gross profit up, I think, only $400 million-$450 million, which is pretty well below the corporate-average fall-through. I appreciate that semis is dilutive and custom is probably dilutive within semis. Is anything else going on with margins that we should be aware of? How should we think about the margin profile of custom longer term as that business continues to scale and diversify? Thank you.
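The fall-through implied in the question can be checked against the figures from the call (Q2 revenue of $15.0B at 79.4% gross margin; Q3 revenue guided to ~$15.8B with gross margin down ~130bps sequentially). The incremental-margin computation itself is illustrative arithmetic, not a company-reported figure:

```python
# Incremental gross-margin "fall-through" implied by the Q3 guide,
# using Q2 actuals and the guided sequential margin step-down.

q2_rev, q2_gm = 15.0, 0.794          # $B, Q2 gross margin from the call
q3_rev, q3_gm = 15.8, q2_gm - 0.013  # guided revenue, margin down ~130bps

delta_gp = q3_rev * q3_gm - q2_rev * q2_gm    # incremental gross profit, $B
fall_through = delta_gp / (q3_rev - q2_rev)   # margin on incremental revenue

print(round(delta_gp, 2))      # ~0.43 -> within the $400-450M range cited
print(round(fall_through, 2))  # ~0.54 -> well below the 79.4% corporate margin
```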

Kirsten Spears (CFO)

Yeah. We've historically said that the XPU margins are slightly lower than the rest of the business, other than wireless. There is really nothing else going on other than that. It is just exactly what I said, that the majority of it, quarter over quarter, the 130 basis point decline is being driven by more XPUs.

Hock Tan (CEO)

There are more moving parts here than your simple analysis captures. I think your simple analysis is totally wrong in that regard.

Joshua Buchalter (Director of Equity Research Semiconductors)

All right then. Thank you.

Operator (participant)

One moment for our next question. That will come from the line of Timothy Arcuri with UBS. Your line is open.

Timothy Arcuri (Managing Director)

Thanks a lot, Hock. I also wanted to ask about scale-up. There are a lot of competing ecosystems. There's UALink, which of course you left. Now the big GPU company is opening up NVLink. They're both trying to build ecosystems, and there's an argument that you're an ecosystem of one. What would you say to that debate? Does opening up NVLink change the landscape? How do you view your AI networking growth next year? Do you think it's going to be primarily driven by scale-up, or will it still be pretty scale-out heavy?

Hock Tan (CEO)

People do like to create platforms and new protocols and systems. The fact of the matter is scale-up can be done easily with what is currently available: open-standards, open-source Ethernet. You do not need to create new systems for the sake of doing something that could easily be done with Ethernet networking. Yeah, I hear of a lot of these interesting new protocols and standards that people are trying to create. Most of them, by the way, are proprietary, much as they like to call it otherwise. What is really open source and open standards is Ethernet. We believe Ethernet will prevail, as it has for the last 20 years in traditional networking. There is no reason to create a new standard for something that can already easily be done in transferring bits and bytes of data.

Timothy Arcuri (Managing Director)

Got it, Hock. Thank you.

Operator (participant)

One moment for our next question. That will come from the line of Christopher Rolland with Susquehanna. Your line is open.

Christopher Rolland (Senior Equity Analyst Semiconductors)

Thanks for the question. Yeah. My question is for you, Hock. It's kind of a bigger picture one here. And this kind of acceleration that we're seeing in AI demand, do you think that this acceleration is because of a marked improvement in ASICs or XPUs closing the gap on the software side at your customers? Do you think it's these tokenomics around inference, test-time compute driving that? For example, what do you think is actually driving the upside here? And do you think it leads to a market share shift faster than we were expecting towards XPU from GPU? Thanks.

Hock Tan (CEO)

Yeah. Interesting question. No, none of the foregoing that you outlined. It's very simple. The way inference has come out very, very hot lately is, remember, we're only selling to a few customers, hyperscalers with platforms and LLMs. That's it. There are not that many. We told you how many we have, and we haven't increased any. What is happening is these hyperscalers and those with LLMs need to justify all this spending they're doing. Doing training makes your frontier models smarter. That's no question. It's almost like research and science. You make your frontier models smarter by creating very clever algorithms that consume a lot of compute for training. Training makes it smarter. You want to monetize inference. That's what's driving it. Monetizing, as I indicated in my prepared remarks, is how you justify a return on investment. A lot of that investment is training.

That return on investment is by creating use cases, a lot of AI use cases, AI consumption out there through availability of a lot of inference. That is what we are now starting to see among a small group of customers.

Christopher Rolland (Senior Equity Analyst Semiconductors)

Excellent. Thank you.

Operator (participant)

One moment for our next question. That will come from the line of Vijay Rakesh with Mizuho. Your line is open.

Vijay Rakesh (Managing Director)

Yeah. Thanks. Hey, Hock. Just going back on the AI server revenue side, I know you said fiscal 2025 is kind of tracking to that up 60%-ish growth. If you look at fiscal 2026, you have new customers ramping, like a Meta, and probably you have the four or the six hyperscalers that you're talking about. Would you expect that growth to accelerate into fiscal 2026 above that kind of 60% you talked about?

Hock Tan (CEO)

My prepared remarks, which I clarified, still stand: the rate of growth we are seeing in 2025 will sustain into 2026 based on improved visibility and the fact that we're seeing inference coming in on top of the demand for training as the clusters get bigger and bigger and bigger. I don't think we are getting very far by trying to parse through my words or data here.

Vijay Rakesh (Managing Director)

Got it.

Hock Tan (CEO)

We see that going from 2025 into 2026 as the best forecast we have at this point.

Vijay Rakesh (Managing Director)

Got it. On the NVLink, the NVLink Fusion versus the scale-up, do you expect that market to go the route of top of the rack where you're seeing some move to the Ethernet side in kind of the scale-out? Do you expect scale-up to kind of go the same route? Thanks.

Hock Tan (CEO)

Broadcom does not participate in NVLink, so I'm really not qualified to answer that question, I think.

Vijay Rakesh (Managing Director)

Got it. Thank you.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of Aaron Rakers with Wells Fargo. Your line is open.

Aaron Rakers (Managing Director and Technology Analyst)

Yeah. Thanks for taking the question. I think all my questions on scale-up have been asked. I guess, Hock, given the execution that you guys have been able to do with the VMware integration, looking at the balance sheet, looking at the debt structure, I'm curious if you could give us your thoughts on how the company thinks about capital return versus the thoughts on M&A and the strategy going forward. Thank you.

Hock Tan (CEO)

Okay. That's an interesting question. I agree. Not untimely, I would say, because, yeah, we have done a lot of the integration of VMware now. You can see that in the level of free cash flow we're generating from operations. As we said, our use of capital has always been very measured and upfront, with a return through dividends, which is half our free cash flow of the preceding year. Frankly, as Kirsten mentioned three months ago and six months ago during the last two earnings calls, the first choice typically for the other part of the free cash flow is to bring down our debt to a level we feel is closer to a ratio of no more than two times debt to EBITDA.

That does not mean that we may not opportunistically go out there and buy back our shares, as we did last quarter, when, as Kirsten indicated, we did $4.2 billion of stock buyback. Now, part of that is used when employee RSUs vest: we basically buy back part of the shares to pay the taxes on the vested RSUs. The other part of it, I do admit, we used opportunistically last quarter; when we see a situation where we think it is a good time to buy some shares back, we do. Having said all that, our use of cash outside of dividends would at this stage go towards reducing our debt. I know you are going to ask, what about M&A?

The kind of M&A we will do in our view would be significant, would be substantial enough that we need debt in any case. It is a good use of our free cash flow to bring down debt to, in a way, expand, if not preserve, our borrowing capacity if we have to do another M&A deal.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of Srini Pajjuri with Raymond James. Your line is open.

Srini Pajjuri (Managing Director)

Thank you. Hock, a couple of clarifications. First, on your 2026 expectation, are you assuming any meaningful contribution from the four prospects that you talked about?

Hock Tan (CEO)

No comment. We don't talk on prospects. We only talk on customers.

Srini Pajjuri (Managing Director)

Okay. Fair enough. My other clarification is that I think you talked about networking being about 40% of the mix within AI. Is that the right kind of mix that you expect going forward, or is that going to materially change as we, I guess, see XPUs ramping going forward?

Hock Tan (CEO)

No. I've always said, and expect that to be the case going forward in 2026 as we grow, that the ratio of networking to XPU should be closer to the range of less than 30%, not the 40%.

Operator (participant)

Thank you. One moment for our next question. That will come from the line of Joe Moore with Morgan Stanley. Your line is open.

Joe Moore (Managing Director)

Great. Thank you. You've said you're not going to be impacted by export controls on AI. I know there's been a number of changes in the industry since the last time you made the call. Is that still the case? Can you give people comfort that there's no impact from that down the road?

Hock Tan (CEO)

Nobody can give anybody comfort in this environment, Joe. You know that. Rules are changing quite dramatically as bilateral trade agreements continue to be negotiated in a very, very dynamic environment. I'll be honest, I don't know. I know very little; you probably know more than I do, maybe, about this whole thing of whether there will be any export control or how the export control would take place. We're guessing. I'd rather not answer that because, no, I don't know whether there will be.

Operator (participant)

Thank you. We do have time for one final question. That will come from the line of William Stein with Truist Securities. Your line is open.

William Stein (Managing Director and Senior Analyst Technology)

Yeah. Great. Thank you for squeezing me in. I wanted to ask about VMware. Can you comment as to how far along you are in the process of converting customers to the subscription model? Is that close to complete, or is there still a number of quarters that we should expect that that conversion continues?

Hock Tan (CEO)

That's a good question. Let me start off by saying a good way to measure it is that most of our VMware contracts are typically three years. That was what VMware did before we acquired them, and that's pretty much what we continue to do: three years, very traditional. Based on that, we are more than halfway through the renewals, approaching two-thirds of the way. We probably have at least another year, maybe a year and a half, to go.

Operator (participant)

Thank you. And with that, I'd like to turn the call over to Ji Yoo for closing remarks.

Ji Yoo (Head of Investor Relations)

Thank you, operator. Broadcom currently plans to report its earnings for the third quarter of fiscal year 2025 after close of market on Thursday, September 4, 2025. A public webcast of Broadcom's earnings conference call will follow at 2:00 P.M. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.
