Broadcom - Earnings Call - Q4 2025

December 11, 2025

Transcript

Operator (participant)

Welcome to Broadcom Inc's fourth quarter and fiscal year 2025 financial results conference call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.

Ji Yoo (Head of Investor Relations)

Thank you, Cherie, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO, Kirsten Spears, Chief Financial Officer, and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the fourth quarter and fiscal year 2025. If you did not receive a copy, you may obtain the information from the investor section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the investor section of Broadcom's website. During the prepared remarks, Hock and Kirsten will be providing details of our fourth quarter and fiscal year 2025 results, guidance for our first quarter of fiscal year 2026, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments.

Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hock.

Hock Tan (President and CEO)

Thank you, Ji. And thank you, everyone, for joining us today. We just ended our Q4, fiscal 2025, and before I get into details of that quarter, let me recap the year. In our fiscal 2025, consolidated revenue grew 24% year-over-year to a record $64 billion, and it's driven by AI semiconductors and VMware. AI revenue grew 65% year-over-year to $20 billion, driving the Semiconductor revenue for this company to a record $37 billion for the year. In our Infrastructure Software business, strong adoption of VMware Cloud Foundation, or VCF, as we call it, drove revenue growth of 26% year-on-year to $27 billion. In summary, 2025 was another strong year for Broadcom, and we see the spending momentum by our customers for AI continuing to accelerate in 2026. Now, let's move on to the results of our fourth quarter 2025.

Total revenue was a record $18 billion, up 28% year-on-year, and above our guidance on better-than-expected growth in AI Semiconductors, as well as Infrastructure Software. Q4 consolidated adjusted EBITDA was a record $12.2 billion, up 34% year-on-year. So let me give you more color on our two segments. In semiconductors, revenue was $11.1 billion, as year-on-year growth accelerated to 35%. And this robust growth was driven by the AI Semiconductor revenue of $6.5 billion, which was up 74% year-on-year. And this represents a growth trajectory exceeding 10x over the 11 quarters we have reported this line of business. Our custom accelerator business more than doubled year-over-year, as we see our customers increase adoption of XPUs, as we call those custom accelerators, in training their LLMs and monetizing their platforms through inferencing APIs and applications.
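
A rough sketch of the arithmetic behind that "10x over 11 quarters" trajectory, using only the figures quoted above; treating the growth as evenly compounded is an illustrative assumption, not something stated on the call:

```python
# Back-of-envelope on the ">10x over 11 quarters" AI revenue trajectory.
q4_ai_revenue_b = 6.5               # $B, Q4 FY2025 AI revenue (stated on the call)
multiple, steps = 10, 10            # 10x across 11 reported quarters = 10 QoQ steps
implied_start_b = q4_ai_revenue_b / multiple   # ~$0.65B in the first reported quarter
cqgr = multiple ** (1 / steps) - 1             # implied compound quarterly growth rate
print(f"implied starting quarter: ${implied_start_b:.2f}B")    # $0.65B
print(f"implied compound quarterly growth: {cqgr:.1%}")        # ~25.9%
```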

These XPUs, I might add, are not only being used to train and inference internal workloads by our customers. The same XPUs, in some situations, have been extended externally to other LLM peers. This is best exemplified at Google, where the TPUs used in creating Gemini are also being used for AI cloud computing by Apple, Cohere, and SSI, as a sample, and the scale at which we see this happening could be significant. As you are aware, last quarter, Q3 2025, we received a $10 billion order to sell the latest TPU, Ironwood, to Anthropic, and in this quarter, Q4, we received an additional $11 billion order from this same customer for delivery in late 2026. But that does not mean our other two customers are using TPUs.

In fact, they prefer to control their own destiny by continuing to drive their multi-year journey to create their own custom AI accelerators, or XPUs, as we call them. I'm pleased today to report that during this quarter, we acquired a fifth XPU customer through a $1 billion order placed for delivery in late 2026. Now, moving on to AI networking. Demand here has been even stronger as we see customers build out their data center infrastructure ahead of deploying AI accelerators. Our current order backlog for AI switches exceeds $10 billion, as our latest 102 Tb per second Tomahawk 6 switch, the first and only one of its capability out there, continues to book at record rates. This is just a subset of what we have.

We have also secured record orders on DSPs, optical components like lasers, and PCI Express switches to be deployed in AI data centers. All these components, combined with our XPUs, bring our total orders on hand to in excess of $73 billion today, which is almost half of Broadcom's consolidated backlog of $162 billion. We expect this $73 billion in AI backlog to be delivered over the next 18 months, and in Q1, fiscal 2026, we expect our AI revenue to double year-on-year to $8.2 billion. Turning to non-AI semiconductors, Q4 revenue of $4.6 billion was up 2% year-on-year and up 16% sequentially on favorable wireless seasonality. Year-on-year, broadband showed solid recovery, wireless was flat, and all the other end markets were down as enterprise spending continued to show limited signs of recovery.

Accordingly, in Q1, we forecast non-AI Semiconductor revenue to be approximately $4.1 billion, flat from a year ago and down sequentially due to wireless seasonality. Let me now talk about our Infrastructure Software segment. Q4 Infrastructure Software revenue of $6.9 billion was up 19% year-on-year and above our outlook of $6.7 billion. Bookings continued to be strong, as total contract value booked in Q4 exceeded $10.4 billion versus $8.2 billion a year ago. We ended the year with $73 billion of Infrastructure Software backlog, up from $49 billion a year ago. We expect renewals to be seasonal in Q1 and forecast Infrastructure Software revenue to be approximately $6.8 billion. We continue to expect, however, Infrastructure Software revenue for fiscal 2026 to grow by a low double-digit percentage. Here's what we see in 2026.

Directionally, we expect AI revenue to continue to accelerate and drive most of our growth, and non-AI Semiconductor revenue to be stable. Infrastructure Software revenue will continue to be driven by VMware growth at low double digits. And for Q1 2026, we expect consolidated revenue of approximately $19.1 billion, up 28% year-on-year. And we expect adjusted EBITDA to be approximately 67% of revenue. And with that, let me turn the call over to Kirsten.
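
A rough sketch of the run-rate arithmetic implied by the backlog and Q1 figures above; the even-shipment split is a reader assumption, not company guidance:

```python
# Run-rate arithmetic implied by the AI backlog commentary (illustrative only).
ai_backlog_b = 73.0       # $B of AI orders on hand, to ship over ~18 months
quarters = 6              # 18 months = 6 fiscal quarters
q1_ai_guide_b = 8.2       # $B, Q1 FY2026 AI revenue guidance

avg_quarterly_b = ai_backlog_b / quarters   # ~$12.2B average per quarter
yoy_base_b = q1_ai_guide_b / 2              # "double year-on-year" implies ~$4.1B a year ago
print(f"average quarterly shipment implied by backlog: ${avg_quarterly_b:.1f}B")
print(f"implied Q1 FY2025 AI revenue: ${yoy_base_b:.1f}B")
# With the ~$12.2B average sitting well above the $8.2B Q1 guide, the backlog
# implies shipments weighted toward the later quarters of the 18-month window.
```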

Kirsten Spears (CFO)

Thank you, Hock. Let me now provide additional detail on our Q4 financial performance. Consolidated revenue was a record $18 billion for the quarter, up 28% from a year ago. Gross margin was 77.9% of revenue in the quarter, better than we originally guided on higher software revenues and product mix within semiconductors. Consolidated operating expenses were $2.1 billion, of which $1.5 billion was research and development. Q4 operating income was a record $11.9 billion, up 35% from a year ago. Now, on a sequential basis, even as gross margin was down 50 basis points on Semiconductor product mix, operating margin increased 70 basis points sequentially to 66.2% on favorable operating leverage. Adjusted EBITDA of $12.2 billion, or 68% of revenue, was above our guidance of 67%. This figure excludes $148 million of depreciation. Now, a review of the P&L for our two segments, starting with Semiconductors.

Revenue for our Semiconductor Solutions segment was a record $11.1 billion, with growth accelerating to 35% year-on-year, driven by AI. Semiconductor revenue represented 61% of total revenue in the quarter. Gross margin for our Semiconductor Solutions segment was approximately 68%. Operating expenses increased 16% year-on-year to $1.1 billion on increased investment in R&D for leading-edge AI semiconductors. Semiconductor operating margin of 59% was up 250 basis points year-on-year. Now, moving to Infrastructure Software. Revenue for Infrastructure Software of $6.9 billion was up 19% year-on-year and represented 39% of total revenue. Gross margin for Infrastructure Software was 93% in the quarter, compared to 91% a year ago. Operating expenses were $1.1 billion in the quarter, resulting in Infrastructure Software operating margin of 78%. This compares to operating margin of 72% a year ago, reflecting the completion of the integration of VMware.
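
As a consistency check on the Q4 figures Kirsten walks through above (reader arithmetic; the 61%/39% revenue mix and the ~68% and 93% segment margins are rounded as stated):

```python
# Tie-out of the Q4 non-GAAP P&L figures quoted above (all $B).
revenue, gross_margin, opex = 18.0, 0.779, 2.1
op_income = revenue * gross_margin - opex       # ~$11.9B, matching the stated record
print(f"operating income: ${op_income:.1f}B ({op_income / revenue:.1%} of revenue)")

# Consolidated gross margin is roughly the revenue-weighted blend of the segments:
blend = 0.61 * 0.68 + 0.39 * 0.93               # semis 61% @ ~68%, software 39% @ 93%
print(f"blended segment gross margin: {blend:.2%}")   # ~77.75%, close to the 77.9%
# reported; the small gap reflects rounding in the stated inputs.
```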

Moving on to cash flow. Free cash flow in the quarter was $7.5 billion and represented 41% of revenue. We spent $237 million on capital expenditures. Days sales outstanding were 36 days in the fourth quarter, compared to 29 days a year ago. We ended the fourth quarter with inventory of $2.3 billion, up 4% sequentially. Our days of inventory on hand were 58 days in Q4, compared to 66 days in Q3, as we continue to remain disciplined on how we manage inventory across the ecosystem. We ended the fourth quarter with $16.2 billion of cash, up $5.5 billion sequentially on strong cash flow generation. The weighted average coupon rate and years to maturity of our $67.1 billion in gross principal fixed-rate debt are 4% and 7.2 years, respectively. Turning to capital allocation.

In Q4, we paid stockholders $2.8 billion of cash dividends based on a quarterly common stock cash dividend of $0.59 per share. In Q1, we expect the non-GAAP diluted share count to be approximately 4.97 billion shares, excluding the potential impact of any share repurchases. Now, let me recap our financial performance for fiscal year 2025. Our revenue hit a record $63.9 billion, with organic growth accelerating to 24% year-on-year. Semiconductor revenue was $36.9 billion, up 22% year-over-year. Infrastructure Software revenue was $27 billion, up 26% year-on-year. Fiscal 2025 adjusted EBITDA was $43 billion and represented 67% of revenue. Free cash flow grew 39% year-on-year to $26.9 billion. For fiscal 2025, we returned $17.5 billion of cash to shareholders in the form of $11.1 billion of dividends and $6.4 billion in share repurchases and eliminations.

Aligned with our ability to generate increased cash flows in the preceding year, we are announcing an increase in our quarterly common stock cash dividend in Q1 fiscal 2026 to $0.65 per share, an increase of 10% from the prior quarter. We intend to maintain this target quarterly dividend throughout fiscal 2026, subject to quarterly board approval. This implies our fiscal 2026 annual common stock dividend to be a record $2.60 per share, an increase of 10% year-on-year. I would like to highlight that this represents the 15th consecutive increase in annual dividends since we initiated dividends in fiscal 2011. The board also approved an extension of our share repurchase program, of which $7.5 billion remains through the end of calendar year 2026. Now, moving to guidance. Our guidance for Q1 is for consolidated revenue of $19.1 billion, up 28% year-on-year.

We forecast Semiconductor revenue of approximately $12.3 billion, up 50% year-on-year. Within this, we expect Q1 AI Semiconductor revenue of $8.2 billion, up approximately 100% year-on-year. We expect Infrastructure Software revenue of approximately $6.8 billion, up 2% year-on-year. For your modeling purposes, we expect Q1 consolidated gross margin to be down approximately 100 basis points sequentially, primarily reflecting a higher mix of AI revenue. As a reminder, consolidated gross margins through the year will be impacted by the revenue mix of Infrastructure Software and Semiconductors, and also product mix within Semiconductors. We expect Q1 adjusted EBITDA to be approximately 67% of revenue. We expect the non-GAAP tax rate for Q1 and fiscal year 2026 to increase from 14% to approximately 16.5% due to the impact of the global minimum tax and a shift in the geographic mix of income compared to that of fiscal year 2025.
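
The Q1 guidance pieces tie together as follows (reader arithmetic from the numbers just given, not additional disclosure):

```python
# Q1 FY2026 guidance pieces, tied together (all $B, as guided above).
semis_b, software_b, total_b = 12.3, 6.8, 19.1
assert abs((semis_b + software_b) - total_b) < 0.05   # segments sum to the total

ai_b = 8.2
non_ai_b = semis_b - ai_b            # ~$4.1B, matching Hock's non-AI forecast
ebitda_b = 0.67 * total_b            # guidance of ~67% of revenue
print(f"implied non-AI semiconductor revenue: ${non_ai_b:.1f}B")
print(f"implied adjusted EBITDA: ${ebitda_b:.1f}B")   # ~$12.8B
```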

That concludes my prepared remarks. Operator, please open up the call for questions.

Operator (participant)

Thank you. To ask a question, you will need to press star one one on your telephone. To withdraw your question, press star one one again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. Our first question will come from the line of Vivek Arya with Bank of America. Your line is open.

Vivek Arya (Managing Director and Senior Analyst)

Thank you. Just wanted to clarify. Hock, you said $73 billion over 18 months for AI. That's roughly $50+ billion for fiscal 2026 for AI. I just wanted to make sure I got that right. And then the main question, Hock, is that there is sort of this emerging debate about customer-owned tooling, your hyperscale customers potentially wanting to do more things on their own. How do you see your XPU content and share at your largest customer evolve over the next one or two years? Thank you.

Hock Tan (President and CEO)

To answer your first question, what we said is correct: as of now, we have $73 billion of backlog in place, comprising XPUs, switches, DSPs, and lasers for AI data centers, that we anticipate shipping over the next 18 months. Obviously, this is as of now. I mean, we fully expect more bookings to come in over that period of time. So don't take that $73 billion as the revenue we will ship over the next 18 months. We're just saying we have that now, and the bookings have been accelerating. Frankly, we see that in bookings not just of XPUs, but of switches, DSPs, and all the other components that go into AI data centers. We have never seen bookings of the nature we have seen over the past three months, particularly with respect to Tomahawk 6 switches.

This is one of the fastest-growing products in terms of deployment that we have ever seen of any switch product we have put out there. It is pretty interesting, partly because it's the only one of its kind out there at this point at 102 Tb per second, and that's the exact product needed to expand the clusters of the latest GPUs and XPUs out there. But as far as your broader question on the future of XPUs, my answer to you is don't take what you hear out there as gospel. It's a trajectory. It's a multi-year journey, and many of the players, and there are not too many players doing LLMs, want to do their own custom AI accelerators for very good reasons. You can put in hardware what, if you use a general-purpose GPU, you can only do in kernels and software.

Performance-wise, you can achieve so much more in a purpose-designed, hardware-driven XPU. And we see that in the TPU, and we see that in all the accelerators we are doing for our other customers. Much, much better in areas of Sparse Core, training, inference, reasoning, all that stuff. Now, will that mean that over time they all want to go do it themselves? Not necessarily, in fact, because the technology in silicon keeps updating, keeps evolving. And if you are an LLM player, where do you put your resources in order to compete in this space, especially when you have to compete at the end of the day against merchant GPUs that are not slowing down in their rate of evolution? So I see this concept of customer-owned tooling as an overblown hypothesis, which, frankly, I don't think will happen.
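
For reference, Vivek's "roughly $50+ billion" presumably pro-rates the backlog evenly across the window; the split below is a reader assumption, and Hock's answer frames the result as a floor rather than a forecast:

```python
# Pro-rating the $73B backlog into fiscal 2026 (a floor, per Hock's answer).
ai_backlog_b = 73.0
fy26_quarters, window_quarters = 4, 6    # FY2026 holds 4 of the next 6 quarters
floor_b = ai_backlog_b * fy26_quarters / window_quarters
print(f"FY2026 AI revenue floor under even shipment: ~${floor_b:.0f}B")   # ~$49B
```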

Vivek Arya (Managing Director and Senior Analyst)

Thank you.

Operator (participant)

One moment for our next question, and that will come from the line of Ross Seymore with Deutsche Bank. Your line is open.

Ross Seymore (Managing Director)

Hi. Thanks for asking the question. Hock, I wanted to go to something you touched on earlier about the TPUs going a little bit more to a merchant go-to-market to other customers. Do you believe that's a substitution effect for customers who otherwise would have done ASICs with you, or do you think it's actually broadening the market? And so what are kind of the financial implications of that from your perspective?

Hock Tan (President and CEO)

That's a very good question, Ross. What we see right now, the most obvious move, is that it goes to the people who use TPUs where the alternative is GPUs on a merchant basis. That's the most common thing that happens. Because to do that substitution for another custom part, it's different. To make an investment in a custom accelerator is a multi-year journey. It's a strategic, directional thing. It's not necessarily a very transactional or short-term move. Moving from GPU to TPU is a transactional move. Going to an AI accelerator of your own is a long-term strategic move, and nothing will deter you from continuing to make that investment toward that end goal of successfully creating and deploying your own custom AI accelerator. That's the motion we see.

Ross Seymore (Managing Director)

Thank you.

Operator (participant)

And that will come from the line of Harlan Sur with JPMorgan.

Harlan Sur (Executive Director of Equity Research)

Yeah. Good afternoon. Thanks for taking my question, and congratulations on the strong results, guidance, and execution, Hock. I just want to verify this, right? So you talked about total AI backlog of $73 billion over the next six quarters, right? This is just a snapshot of your order book right now. But given your lead times, I think customers can and still will place orders for AI in quarters four, five, and six. So as time moves forward, that backlog number for more shipments in the second half of 2026 will probably still go up, right? Is that the correct interpretation? And then given the strong and growing backlog, right, the question is, does the team have 3 nm and 2 nm wafer supply, CoWoS, substrate, and HBM supply commitments to support all of the demand in your order book?

And I know one of the areas where you are trying to mitigate this is in advanced packaging, right? You're bringing up your Singapore facility. Can you guys just remind us what part of the advanced packaging process the team is focusing on with the Singapore facility? Thanks.

Hock Tan (President and CEO)

Thanks. Well, to answer your first, simpler question, you're right. You can say that $73 billion is the backlog we have today to ship over the next six quarters. You might also say that, given our lead time, we expect more orders to be absorbed into our backlog for shipment over the next six quarters. So take it that we expect revenue, a minimum revenue, one way to look at it, of $73 billion over the next six quarters, but we do expect much more as more orders come in for shipment within those next six quarters. Our lead time, depending on the particular product, can be anywhere from six months to a year. With respect to supply chain, what you're asking about is the critical supply chain on silicon and packaging?

Harlan Sur (Executive Director of Equity Research)

Yes.

Hock Tan (President and CEO)

Yeah. That's an interesting challenge that we have been addressing constantly and continue to address. With the strength of the demand and the need for more innovative packaging, advanced packaging, because you're talking about multiple chips in creating every custom accelerator now, the packaging becomes a very interesting technical challenge. Building our Singapore fab is really about partially insourcing that advanced packaging. We believe we have enough demand that we can literally insource, not just from the viewpoint of cost, but from the viewpoint of supply chain security and delivery. We're building up a fairly substantial facility for advanced packaging in Singapore, as you indicated, purely for that purpose, to address the advanced packaging side. Silicon-wise, no, we go back to the same process source in TSMC, and so we keep going for more and more capacity in 2 nm and 3 nm.

And so far, we do not have that constraint. But again, time will tell as we progress and as our backlog builds up.

Harlan Sur (Executive Director of Equity Research)

Thank you, Hock.

Operator (participant)

One moment for our next question. The next question will come from the line of Blayne Curtis with Jefferies. Your line is open.

Blayne Curtis (Managing Director)

Hey, good afternoon. Thanks for taking my question. I wanted to ask, with the original $10 billion deal, you talked about a rack sale. With the follow-on order as well as the fifth customer, can you maybe describe how you're going to deliver those? Is it an XPU, or is it a rack? And then maybe you can walk us through the math and what the deliverable is. Obviously, Google uses its own networking. So I'm kind of curious too, would it be a copy exact of what Google does now, for the customer you can't name? Or would you have your own networking in there as well? Thanks.

Hock Tan (President and CEO)

That's a very complicated question, Blayne. Let me tell you what it is. It's a system sale. It's a real system sale. We have so many components beyond XPUs, beyond custom accelerators, in any AI system used by hyperscalers that, yeah, we believe it begins to make sense to do it as a system sale and be fully responsible for the entire system, or rack, as you call it. I think people understand it better as a system sale. And so, on this customer number four, we are selling it as a system with our key components in it. And that's no different than selling a chip. We certify the final ability to run as part of the whole selling process.

Blayne Curtis (Managing Director)

Okay. Thanks, Hock.

Operator (participant)

One moment for our next question, and that will come from the line of Stacy Rasgon with Bernstein. Your line is open.

Stacy Rasgon (Managing Director and Senior Analyst)

Hi, guys. Thanks for taking my question. I wanted to touch on gross margins, and maybe it feeds a little bit into the prior question. So I understand why the AI business is somewhat dilutive to gross margins. We have the HBM pass-through, and then presumably the system sales will be more dilutive. And you've hinted at this in the past, but I was wondering if you could be a little more explicit. As this AI revenue starts to ramp, as we start to get system sales, how should we be thinking about that gross margin number, say if we're looking out four quarters or six quarters? Is it low 70s? I mean, could it start with a six at the corporate level? And I guess I'm also wondering, I understand how that comes down, but what about the operating margins?

Do you think you get enough operating leverage on the OpEx side to keep operating margins flat, or do they need to come down as well?

Hock Tan (President and CEO)

I'll let Kirsten give you the details, but let me broadly, at a high level, explain it to you, Stacy, and it's a good question. You don't see that impacting us right now, and we have already started that process with some system sales. You don't see it in our numbers, but it will show up, and we have said that openly. The AI revenue has a lower gross margin than the rest of our business, obviously, including software, of course. But we expect the rate of growth, as we do more and more AI revenue, to be such that we get operating leverage on our operating spending, and operating margin will deliver dollars that still grow at a high rate from where they have been. So we expect operating leverage to benefit us at the operating margin level, even as gross margin starts to deteriorate. That's the high level.

Kirsten Spears (CFO)

Yeah. I think Hock said that fairly well. In the second half of the year, when we do start shipping more systems, the situation is straightforward. We'll be passing through more components that are not ours, so think of it as similar to the XPUs, where we have memory on those XPUs and we're passing through those costs. We'll be passing through more costs within the rack, and so those gross margins will be lower. However, overall, the way Hock said it, gross margin dollars will go up while the margin percentage comes down. On operating margins, because we have leverage, operating margin dollars will go up, but the margin itself as a percentage of revenue will come down a bit. But we'll guide closer to the end of the year on that.
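
A hypothetical numeric illustration of the pass-through effect Hock and Kirsten describe; every figure below is invented for illustration, not a Broadcom number:

```python
# Reselling third-party content (memory, rack components) at a thin markup grows
# gross-profit and operating-profit dollars while diluting both percentages.
def pnl(own_rev, own_gm, pass_rev, pass_gm, opex):
    revenue = own_rev + pass_rev
    gross_profit = own_rev * own_gm + pass_rev * pass_gm
    return revenue, gross_profit, gross_profit - opex

# Chip-only sales vs. the same chips shipped inside systems/racks (made-up numbers):
for label, args in [("chips only", (100, 0.70, 0, 0.00, 20)),
                    ("as systems", (100, 0.70, 60, 0.10, 22))]:
    rev, gp, op = pnl(*args)
    print(f"{label}: gross ${gp:.0f} ({gp / rev:.1%}), operating ${op:.0f} ({op / rev:.1%})")
# chips only: gross $70 (70.0%), operating $50 (50.0%)
# as systems: gross $76 (47.5%), operating $54 (33.8%)  -> dollars up, percentages down
```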

Stacy Rasgon (Managing Director and Senior Analyst)

Got it. Thank you, guys.

Operator (participant)

One moment for our next question. That will come from the line of Jim Schneider with Goldman Sachs. Your line is open.

Jim Schneider (Senior Equity Analyst)

Good afternoon. Thanks for taking my question. Hock, I was wondering if you might care to calibrate your expectations for AI revenue in fiscal 2026 a little more closely. I believe you talked about acceleration in fiscal 2026 off of the 65% growth rate you did in fiscal 2025, and then you're guiding to 100% growth for Q1. So I'm just wondering if Q1 is a good jumping-off point for the growth rate you expect for the full year, or something maybe a little less than that. And then maybe you could separately clarify whether your $1 billion of orders for the fifth customer is indeed OpenAI, which you made a separate announcement about. Thank you.

Hock Tan (President and CEO)

Wow. There's a lot of questions here. But let me start off with 2026. Our backlog is very dynamic these days, as I said. It's continuing to ramp up. And you're right. We originally, six months ago, said maybe AI revenue would grow 60% to 70% year-on-year in 2026. Today, for Q1 2026, we're saying it doubled. And we're looking at it because all the fresh orders keep coming in, and we give you a milestone of where we are today, which is $73 billion of backlog to be shipped over the next 18 months. And we do fully expect, as I answered in the earlier question, that $73 billion over the 18 months to keep growing. Now, it's a moving target. It's a moving number as we move in time, but it will grow.

It's hard for me to pinpoint what 2026 is going to look like precisely. So I'd rather not give you guys any guidance. That's why we don't give it beyond Q1, but we do give it for Q1. Give it time. We'll give it for Q2. You're right, though, that the question for us is, is it an accelerating trend? My answer is it's likely to be an accelerating trend as we progress through 2026. Hope that answers your question.

Jim Schneider (Senior Equity Analyst)

Yes. Thank you.

Operator (participant)

One moment for our next question, and that will come from the line of Ben Reitzes with Melius Research. Your line is open.

Ben Reitzes (Managing Director)

Yeah. Hey, guys. Thanks a lot. Hey, Hock. I wanted to ask, and I'm not sure if the last caller asked something on it, but I didn't hear it in the answer, about the OpenAI contract that's supposed to start in the second half of the year and go through 2029 for 10 GW. I'm going to assume that's the fifth customer order there. And I was just wondering if you're still confident in that being a driver. Are there any obstacles to making that a major driver? And when do you expect that to contribute, and what's your confidence in it? Thanks so much, Hock.

Hock Tan (President and CEO)

You didn't hear that answer to the last caller, Jim's question, because I didn't answer it. I did not answer it, and I'm not answering it either. It's the fifth customer, and it's a real customer, and it will grow. They are on their multi-year journey to their own XPUs. And let's leave it at that. As far as the OpenAI view that you have, we appreciate the fact that it is a multi-year journey that will run through 2029, as our press release with OpenAI showed. 10 GW, not in 2026, Ben. It's more like 2027, 2028, 2029 for that 10 GW. That was the OpenAI discussion. And I call it an agreement, an alignment of where we're headed with respect to a very respected and valued customer, OpenAI. But we do not.

Ben Reitzes (Managing Director)

Okay. That's real interesting.

Hock Tan (President and CEO)

We do not expect much in 2026.

Ben Reitzes (Managing Director)

Okay. Thanks for clarifying that. That's real interesting. Appreciate it.

Operator (participant)

One moment for our next question. That will come from the line of CJ Muse with Cantor Fitzgerald. Your line is open.

CJ Muse (Senior Managing Director)

Yeah. Good afternoon. Thank you for taking the question. I guess, Hock, I wanted to talk about custom silicon and maybe have you speak to how you expect compute to grow for Broadcom generation to generation. And as part of that, your competitor announced an XPU offering, essentially an accelerator for massive context windows. I'm curious if you see that broadening the opportunity for your existing five customers to have multiple XPU offerings. Thanks so much.

Hock Tan (President and CEO)

Thank you. Yeah, you hit it right on. I mean, the nice thing about a custom accelerator is you try not to do one size fits all, and generationally, each of these five customers can now create their own version of an XPU custom accelerator for training and for inference. And basically, it's almost two parallel tracks going on simultaneously for each of them. So we have plenty of versions to deal with. We don't need to create any more versions. We've got plenty of different content out there just on the basis of creating these custom accelerators. And by the way, when you do custom accelerators, you tend to put in more hardware features that are unique and differentiated, versus trying to make it work in software by creating kernels.

I know that's very tricky too, but think about the difference when you can create in hardware those Sparse Core data routers versus the dense matrix multipliers, all in the same chip. And that's just one example of what creating custom accelerators lets us do. Or, for that matter, a variation in how much memory capacity or memory bandwidth the same customer wants from chip to chip, because even in inference, you want to do more reasoning versus decoding versus something else like prefill. So you literally start to create different hardware for different aspects of how you want to train or inference and run your workloads. It's a very fascinating area, and we are seeing a lot of variations and multiple chips for each of our customers.

CJ Muse (Senior Managing Director)

Thank you.

Operator (participant)

One moment for our next question, and that will come from the line of Harsh Kumar with Piper Sandler. Your line is open.

Harsh Kumar (Managing Director and Senior Research Analyst)

Yeah. Hock and team, first of all, congratulations on some pretty stunning numbers. I've got an easy one and a more strategic one. The easy one is your guide in AI, Hock and Kirsten, is calling for almost $1.7 billion of sequential growth. I was curious, maybe you could talk about the diversity of the growth between the three existing customers. Is it pretty well spread out, all of them growing, or is one sort of driving much of the growth? And then, Hock, strategically, one of your competitors bought a photonic fabric company recently. I was curious about your take on that technology and if you think it's disruptive or you think it's just gimmickery at this point in time.
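
Harsh's sequential figure is just the difference between the Q1 AI guide and the Q4 print (reader arithmetic):

```python
# Guided sequential AI growth: Q1 FY2026 guide minus the Q4 FY2025 actual ($B).
q1_guide_b, q4_actual_b = 8.2, 6.5
print(f"sequential AI growth implied by guidance: ${q1_guide_b - q4_actual_b:.1f}B")  # $1.7B
```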

Hock Tan (President and CEO)

I like the way you addressed that question to me. It's almost hesitant. Thank you. I appreciate that. But on your first part, yeah, we are driving growth, and it begins to feel like this thing never ends. And it's a real mixed bag of existing customers on existing XPUs, and a big part of what we're seeing is XPUs. But that's not to downplay the fact that, as I indicated in my remarks, the demand for switches, not just Tomahawk 6 but also Tomahawk 5 switches, and the demand for our latest 1.6 Tb per second DSPs that enable optical interconnects, particularly for scale-out, are just very, very strong. And by extension, demand for the optical components like lasers and PIN diodes is just going nuts. It all comes together.

Now, all of that is relatively smaller dollars when compared to XPUs, as you can probably guess. I mean, to give you a sense, let me look at it on the backlog side. Of the $73 billion of AI revenue backlog over the next 18 months I talked about, maybe $20 billion of it is everything else. The rest is XPUs. Hope that gives you a sense of what the mix is. But that's not to say the rest is small; $20 billion is not small by any means. So we value that. Now, to your next question on silicon photonics as a means to create basically much better, more efficient, lower-power interconnects, not just in scale-out but hopefully in scale-up, yeah, I could see a point in time in the future when silicon photonics matters as the only way to do it.

We're not quite there yet, but we have the technology, and we continue to develop the technology. Each time, we develop it first, for 400 Gb bandwidth, then going on to 800 Gb bandwidth, and the market was not ready for it yet even though we have the product. We're now doing it for 1.6 Tb bandwidth to create silicon photonics switches, silicon photonics interconnects. I'm not even sure it will get fully deployed, because our engineers and the peers we have out there will somehow try to find a way to still do scale-up within a rack in copper as long as possible, and after that, scale-up in pluggable optics. The final, final straw is when you can't do it well in pluggable optics. And of course, when you can't do it even in copper, then you're right, you go to silicon photonics, and it will happen.

We're ready for it. Just saying not anytime soon.
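
The mix Hock sketches above works out as follows (round numbers from the call):

```python
# XPU vs. everything-else split of the AI backlog, per Hock's round numbers ($B).
total_b, other_b = 73.0, 20.0     # total AI backlog; networking/optics/"everything else"
xpu_b = total_b - other_b
print(f"XPU portion: ~${xpu_b:.0f}B ({xpu_b / total_b:.0%} of the AI backlog)")  # ~$53B, ~73%
```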

Harsh Kumar (Managing Director and Senior Research Analyst)

Thank you, Hock.

Operator (participant)

One moment for our next question. That will come from the line of Karl Ackerman with BNP Paribas. Your line is open.

Karl Ackerman (Managing Director)

Yes. Thank you. Hock, could you speak to the supply chain resiliency and visibility you have with your key materials suppliers, particularly CoWoS, as you support not only your existing customer programs but also the two new custom compute processors that you announced this quarter? I guess what I'm getting at is that you also happen to address a very large subset of the networking and compute AI supply chains. You talked about record backlog. If you were to pinpoint some of the bottlenecks that you have and the areas you're aiming to address and mitigate, what would they be, and how do you see that ameliorating into 2026? Thank you.

Hock Tan (President and CEO)

It's across the board, typically. I mean, we are very fortunate in some ways that we have the product technology and the operating business lines to create multiple key leading-edge components that enable today's state-of-the-art AI data centers. I mean, our DSP, as I said earlier, is now at 1.6 Tb per second. That's the leading-edge connectivity bandwidth for the top-of-the-heap XPUs and even GPUs. And we intend to stay that way. And we have the lasers, the EMLs, VCSELs, and CW lasers that go with it. So it's fortunate that we have all this and the key active components that go with it. And we see it very early, and we expand the capacity as we do the design to match it.

This is a long answer to what I'm trying to get at, which is that I think, of any of these suppliers of data center system racks, not counting the power shell and all that, we probably see the picture best. Now, the power shell and the transformers and the gas turbines, that starts to get beyond us. But if you just look at the racks, the AI systems, we probably have a good handle on where the bottlenecks are, because sometimes we are part of the bottlenecks, which we then work to resolve. We feel pretty good about that through 2026.

Karl Ackerman (Managing Director)

Thank you.

Operator (participant)

One moment for our next question. That will come from the line of Christopher Rolland with Susquehanna. Your line is open.

Christopher Rolland (Semiconductor Analyst)

Hi. Thanks for the question. Just first a clarification and then my question. And sorry to come back to this issue, but if I understand you correctly, Hock, I think you were saying that OpenAI would be a general agreement, so it's not binding, maybe similar to the agreements with both NVIDIA and AMD. And then secondly, you talked about flat non-AI Semiconductor revenue. Maybe you can speak to what's going on there: is there still an inventory overhang, and what do we need to get that going again? Do you see growth eventually in that business? Thank you.

Hock Tan (President and CEO)

On the non-AI semiconductors, we see broadband literally recovering very well. In the others, no, we see stability. We don't see a sharp recovery that is sustainable yet. I guess give it a couple more quarters. We don't see any further deterioration in demand. It's more, I think, that AI is sucking a lot of the oxygen out of enterprise spending elsewhere and hyperscaler spending elsewhere. We don't see it getting any worse. We don't see it recovering very quickly, with the exception of broadband. That's a simple summary of non-AI. With respect to OpenAI, without diving into depth, I'm just telling you what that 10 GW announcement is all about. Separately, the journey with them on the custom accelerator is progressing at a very advanced stage and will happen very, very quickly. We will have a committed element to this whole thing.

And that will come. But what I was articulating earlier was the 10 GW announcement. And that 10 GW announcement is an agreement to be aligned on developing 10 GW for OpenAI over the 2027 to 2029 timeframe. That's it. That's different from the XPU program we're developing with them.

Christopher Rolland (Semiconductor Analyst)

I see. Thank you very much.

Operator (participant)

Thank you. And we do have time for one final question. And that will come from the line of Joe Moore with Morgan Stanley. Your line is open.

Joe Moore (Semiconductor Industry Analyst)

Great. Thank you very much. So if you have $21 billion of rack revenue in the second half of 2026, I guess, do we stay at that run rate? Beyond that, are you going to continue to sell racks, or does that type of business mix shift over time? And I'm really just trying to figure out the percentage of your 18-month backlog that's actually full systems at this point.

Hock Tan (President and CEO)

Well, it's an interesting question. And that question basically comes down to how much compute capacity is needed by our customers over, as I say, the period beyond 18 months. And your guess is probably as good as mine based on what we all know out there, which is really what it relates to. But if they need more, then you will see that continuing, even larger. If they don't need it, then probably it won't. But what we're trying to indicate is that's the demand we're seeing over that period of time right now.

Joe Moore (Semiconductor Industry Analyst)

Thank you.

Operator (participant)

I would now like to turn the call back over to Ji Yoo for any closing remarks.

Ji Yoo (Head of Investor Relations)

Thank you, operator. This quarter, Broadcom will be presenting at the New Street Research Virtual AI Big Ideas Conference on Monday, December 15th, 2025. Broadcom currently plans to report its earnings for the first quarter of fiscal year 2026 after close of market on Wednesday, March 4th, 2026. A public webcast of Broadcom's earnings conference call will follow at 2:00 P.M. Pacific. That will conclude our earnings call today. Thank you all for joining. Operator, you may end the call.

Operator (participant)

This concludes today's program. Thank you all for participating. You may now disconnect.