
Credo Technology Group - Q4 2024

May 29, 2024

Transcript

Operator (participant)

Ladies and gentlemen, thank you for standing by. At this time, all participants are in a listen-only mode. Later, we will conduct a question and answer session. At that time, if you have a question, you will need to press star one-one on your telephone keypad. I would now like to hand the conference over to Dan O'Neil. Please go ahead, sir.

Dan O'Neil (VP of Corporate Development and Investor Relations)

Good afternoon, and thank you all for joining us today for our fiscal 2024 fourth quarter and year-end earnings call. I'm joined today by Bill Brennan, Credo's Chief Executive Officer, and Dan Fleming, Credo's Chief Financial Officer. I'd like to remind everyone that certain comments made in this call today may include forward-looking statements regarding expected future financial results, strategies and plans, future operations, the markets in which we operate, and other areas of discussion. These forward-looking statements are subject to risks and uncertainties that are discussed in detail in our documents filed with the SEC. It's not possible for the company's management to predict all risks, nor can the company assess the impact of all factors on its business or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statement.

Given these risks, uncertainties, and assumptions, the forward-looking events discussed during this call may not occur, and actual results could differ materially and adversely from those anticipated or implied. The company undertakes no obligation to publicly update forward-looking statements for any reason after the date of this call to conform these statements to actual results or to changes in the company's expectations, except as required by law. Also, during this call, we will refer to certain non-GAAP financial measures, which we consider to be important measures of the company's performance. These non-GAAP financial measures are provided in addition to, and not as a substitute for or superior to, financial measures prepared in accordance with U.S. GAAP.

A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed using the Investor Relations portion of our website. With that, I'll now turn the call over to our CEO.

Bill Brennan (President and CEO)

Thank you for joining our fourth quarter fiscal 2024 earnings call. I'll start by reviewing our results, and then I'll provide highlights of what we see for fiscal 2025. Our CFO, Dan Fleming, will then follow with a detailed discussion of our Q4 and fiscal year 2024 results and provide our outlook for Q1. Credo is a pure-play, high-speed connectivity company delivering a range of optimized and innovative connectivity solutions to meet the needs of global data center operators and service providers. We leverage our core SerDes technology and unique customer-focused design approach to deliver a differentiated suite of solutions, including active electrical cables, or AECs, optical DSPs, line card PHYs, SerDes chiplets, and SerDes IP licenses for Ethernet port speeds ranging from 100 Gb up to 1.6 Tb.

We target the most difficult connectivity challenges facing the market, where a combination of architecture, system-level approach, power, and performance is most differentiated. Credo is in a market environment of steadily increasing demand for optimized solutions with higher bandwidth and improved power efficiency, driven by the accelerating connectivity requirements of leading-edge AI deployments. I'm pleased to say that during both fiscal Q4 and fiscal 2024, Credo achieved record revenue. In Q4, we delivered revenue of $60.8 million and non-GAAP gross margin of 66.1%. In fiscal 2024, Credo achieved revenue of $193 million and non-GAAP gross margin of 62.5%. The workloads supported by our solutions changed significantly during the fiscal year, and our growth was primarily driven by AI deployments across our entire portfolio. In fiscal Q4, roughly 3/4 of our revenue was driven by AI workloads.

The year was also notable as we diversified our revenue across additional customers and products. I'm proud of the team for delivering solid results across a shifting landscape and also for executing a strong quarterly sequential ramp throughout the year. In our AEC product line, we continued our market leadership and delivered our customers a range of solutions for port speeds ranging from 100 Gb to 800 Gb. Furthermore, our approach of delivering system-level solutions with customized hardware and software features has enabled us to build close, collaborative relationships with our customers. Over many design cycles across numerous customers, we have dramatically improved our speed-to-market in designing and qualifying our solutions, and this remains a key aspect of our competitive advantage. We believe this positions Credo with a unique value to the market that is difficult to replicate.

With this, our AECs have quickly become the leading solution for in-rack cabled connectivity for single-lane speeds of 50 Gbps and above. In addition to the advantages of AECs that include signal integrity, power, form factor, and reliability, our customers have embraced the opportunity to innovate with Credo as a design partner to optimize system-level features that make their AI clusters more efficient. From a customer engagement perspective, fiscal 2024 was fruitful as we saw the successful ramp at a new hyperscaler customer, qualification at another, and expansion of our AEC engagements with hyperscalers, Tier 2 data centers, and service providers. AECs have quickly transitioned from a new concept to a de facto solution across many data center environments. Based on customer feedback and forecasts, we continue to expect an inflection point in our AEC revenue growth during the second half of fiscal 2025.

Fiscal 2024 was also a strong year for our optical DSP products. During the year, we achieved material production revenue with significant wins at domestic and international hyperscalers. Additionally, we continue to gain traction with optical module partners and end customers due to our attractive combination of performance, power efficiency, and system costs. AI back-end network deployments are a strong volume driver for the optical transceiver and the AOC market, specifically for leading-edge 100 Gb per lane solutions. As power efficiency has become a more critical factor, Credo has responded with innovative architectural solutions that drastically reduce DSP power while maintaining interoperability and signal integrity at the optical module level. We've made great progress with our linear receive optics DSPs. In November, we announced our LRO DSP solutions, and by March at OFC, we demonstrated production designs with three 800 Gb module partners.

In the few months since OFC, we've seen continued market acceptance and design activity. These products enable a significant power reduction versus a traditional 800 Gb solution. The LRO architecture is the only way to achieve a sub-10-W, 800 Gb module that meets existing industry optical standards and facilitates multi-vendor interoperability. We expect the benefits of LRO solutions to become even more impactful in next-generation 1.6 Tb optical modules. I feel confident saying that the LPO architecture with no DSP has lost nearly all momentum in the market, and that the LRO architecture is showing great promise. I'm encouraged by our customer traction in Q4. We were pleased to kick off multiple new optical DSP design engagements with the leading optical module manufacturers, both with our new LRO DSP and our traditional full DSP solutions.

Given our results to date and our customer engagements, we are on track to achieve our Optical DSP revenue goal of 10% of our fiscal 2025 revenue, and we are enthusiastic about future growth prospects in this category. Regarding our Line Card PHYs business, our leadership in this market continues as we transition to more advanced process nodes that deliver improved product performance and power efficiency. During the year, we continued to add to our customer base and have multiple 100 Gb per lane wins at industry-leading tier one OEMs and ODMs that serve the global data center market. These include 800 Gb and 1.6 Tb gearbox, retimer, and MACsec PHY products. As we've discussed in the past, AI deployments are the driving force behind our growth for these leading-edge devices.

In the fourth quarter, we had success with both 50 Gb and 100 Gb per lane line card products. While 50 Gb per lane solutions will continue to have lengthy life cycles, our 100 Gb per lane solutions will also start adding to our revenue in fiscal 2025. We expect the Line Card PHY business will continue to grow and contribute nicely to our overall business in fiscal 2025 and beyond, as we continue to invest and innovate in this market. Lastly, I'll discuss our SerDes IP licensing and chiplet businesses. In Q4, our SerDes licensing business delivered solid results, owing to our combination of speed, signal integrity, leading power efficiency, and breadth of offerings. During fiscal 2024, we won licensing business across a range of applications, process nodes, and lane rates.

Our wins range from 28 nanometer down to 4 nanometer, at lane rates ranging from 28 Gb to 112 Gb. Our chiplet business saw significant growth led by our largest customer, who deploys our SerDes chiplets in a massive AI cluster. This customer also engaged us to develop a next-generation chiplet for future deployments, which is a testament to our leading technology and customer-centric focus. We are entering fiscal 2025 with a strong and diverse funnel of SerDes licensing and chiplet opportunities. In summary, the shift towards generative AI accelerated during our fiscal 2024, and we see that continuing into the foreseeable future. Industry data and market forecasters point towards continued and growing demand for high bandwidth, energy-efficient connectivity solutions that are application-optimized. Credo benefits from this demand due to our focus on innovative, low-power, customer-centric connectivity solutions for the most demanding applications.

Our view into fiscal 2025 and beyond has remained consistent for a number of quarters now, and this has only been reinforced by recent wins, production ramps, and customers' forecasts as they continue to formalize their AI deployment plans. With that, Dan Fleming, our CFO, will now provide additional details.

Dan Fleming (CFO)

Thank you, Bill, and good afternoon. I will first provide a financial summary of our fiscal year 2024, then review our Q4 results, and finally, discuss our outlook for Q1 and provide some color on our expectations for fiscal year 2025. Revenue for fiscal year 2024 was a record at $193 million, up 5% year-over-year, driven by product revenue that grew by 8%. Gross margin for the year was 62.5%, up 448 basis points year-over-year. Our operating margin declined by 208 basis points as we continued to invest in R&D to support product development, focused on numerous opportunities across our hyperscale customers. We reported earnings per share of $0.09 for the year, a $0.04 improvement over the prior year. Moving on to the fourth quarter.

In Q4, we reported record revenue of $60.8 million, up 15% sequentially and up 89% year-over-year. Our IP business generated $16.6 million of revenue in Q4, up 193% year-over-year. IP remains a strategic part of our business, but as a reminder, our IP results may vary from quarter-to-quarter, driven largely by specific deliverables under pre-existing or new contracts. While the mix of IP and Product revenue will vary in any given quarter, our revenue mix in Q4 was 27% IP, above our long-term expectation for IP of 10%-15% of revenue. Our Product business generated $44.1 million of revenue in Q4, up 15% sequentially and up 67% year-over-year.

Our Product business, excluding Product Engineering Services, generated $40.8 million of revenue in Q4, up 2% sequentially. Our top 4 end customers were each greater than 10% of our revenue in Q4. Our team delivered Q4 non-GAAP gross margin of 66.1%, above the high end of our guidance range and up 391 basis points sequentially, enabled by a strong IP contribution in the quarter. Our IP non-GAAP gross margin generally hovers near 100% and was 99.2% in Q4. Our product non-GAAP gross margin was 53.7% in the quarter, down 783 basis points sequentially and up 392 basis points year-over-year. The sequential decline was due to a change in Product Engineering Services revenue.

Total non-GAAP operating expenses in the fourth quarter were $32.7 million, below the midpoint of our guidance range and up 7% sequentially. Our OpEx increase was a result of a 17% year-over-year increase in R&D, as we continue to invest in the resources to deliver innovative solutions for our hyperscale customers, and a 26% year-over-year increase in SG&A, as we continue to invest in public company infrastructure. Our non-GAAP operating income was a record $7.5 million in Q4, compared to non-GAAP operating income of $2.4 million last quarter, due to strong gross margin performance coupled with top-line leverage.

Our non-GAAP operating margin was also a record at 12.3% in the quarter, compared to a non-GAAP operating margin of 4.6% last quarter, a sequential increase of 771 basis points. We reported non-GAAP net income of $11.8 million in Q4, compared to non-GAAP net income of $6.3 million last quarter. Cash flow from operations in the fourth quarter was $4.2 million. CapEx was $3.2 million in the quarter, driven by R&D equipment spending, and free cash flow was $1 million, an increase of $16.7 million year-over-year. We ended the quarter with cash and equivalents of $410.0 million, an increase of $0.9 million from the third quarter.

We remain well-capitalized to continue investing in our growth opportunities while maintaining a substantial cash buffer. Our accounts receivable balance increased 33% sequentially to $59.7 million, while days sales outstanding increased to 89 days, up from 77 days in Q3. Our Q4 ending inventory was $25.9 million, down $5.6 million sequentially. Now, turning to our guidance. We currently expect revenue in Q1 of fiscal 2025 to be between $58 million and $61 million, down 2% sequentially at the midpoint. We expect Q1 non-GAAP gross margin to be within a range of 63%-65%. We expect Q1 non-GAAP operating expenses to be between $35 million and $37 million.

The first quarter of fiscal 2025 is a 14-week quarter, so included in this forecast is approximately $2 million in expenses for the extra week. We expect Q1 diluted weighted average share count to be approximately 180 million shares. We were pleased to see fiscal year 2024 play out as expected. The rapid shift to AI workloads drove new and broad-based customer engagement, and we executed well to deliver the sequential growth we had forecast throughout the year. Our revenue mix transitioned swiftly through the year. In Q4, we estimate that AI revenue was approximately 3/4 of total revenue, up dramatically from the prior year. As we move forward through fiscal year 2025, we expect sequential growth to accelerate in the second half of the year.

From Q4 of fiscal 2024 to Q4 of fiscal 2025, we expect AI revenue to double year-over-year, as programs across a number of customers reach production scale. We expect fiscal year 2025 non-GAAP gross margin to be within a range of 61%-63%, as product gross margins expand due to increasing scale. We expect fiscal year 2025 non-GAAP operating expenses to grow at half the rate of top-line growth. As a result, we look forward to driving operating leverage throughout the year. And with that, I will open it up for questions.

Operator (participant)

Thank you. At this time, I would like to remind everyone, in order to ask a question, press star followed by one-one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. Our first question comes from the line of Quinn Bolton from Needham. Your line is open.

Quinn Bolton (Managing Director)

Hey, guys. Congratulations on the results, and nice to see you're quantifying the AI revenue. I guess, Bill or Dan, wanted to start with just sort of a couple of housekeeping questions. Can you give us sort of the percent of revenue from your largest four customers, and were they all across different product lines, or are you starting to see consolidation back to AECs among that top four customer base?

Dan Fleming (CFO)

Yeah, Quinn, let me comment on that. This is Dan. Yeah, so as I mentioned in our prepared remarks, we had four 10% end customers in Q4. They were our two AEC hyperscalers that we've discussed previously, plus a large consumer company and our lead chiplet customer. So by that list, you can kind of see a broad diversification of products represented. And I'll add to that by saying when our 10-K is filed in the next few weeks, you'll see that we had three 10% end customers for the full year, and I'll lay those out for you since you'll see them soon enough. Our largest customer was our first AEC hyperscale customer, which we've talked about over the last few years, which is Microsoft at 26%.

Then the second was our second AEC hyperscaler. They came in at 20% for the year, and the third 10% customer was our lead chiplet customer, at 15%. The key takeaway this year, though, and we've been saying this for the last few quarters, is that if you go back to FY 2023 versus FY 2024, FY 2024 was really the year in which revenue diversification materialized for us, both from a customer perspective and from a product perspective as well. Hopefully, that gives you some additional color, Quinn.

Quinn Bolton (Managing Director)

Yeah, that's great, Dan. Thank you. Then I guess, Bill, if I caught your prepared remarks, you talked about ramping a second AEC customer and then qualifying a third hyperscaler. Wondering if you could just give us a little bit of detail on the third hyperscaler. You know, is it a sort of AI application? Is it NIC to ToR? Is it within a switch? Can you give us any sense of the per-lane speed or total cable speed on that third engagement?

Bill Brennan (President and CEO)

Sure. So we've talked about in the past that the first program with this hyperscaler is a switch rack. It's a 50 Gb per lane design with 400 Gb ports. And we've seen this relationship develop in a really similar way to the first two: start with a single program, and after the first experience with AECs, we're now engaged with additional programs on the roadmap. As I mentioned, the first program is a switch rack, and now we're working on additional programs for AI appliance racks, and these are at 100 Gb per lane. And I'll mention that the plan we're getting from them at this point is that we'll see this third customer ramp in the second half of the fiscal year.

That'll contribute to the inflection point that Dan has referenced.

Quinn Bolton (Managing Director)

Perfect. Thank you.

Operator (participant)

One moment for our next question. Our next question comes from the line of Tore Svanberg from Stifel. Your line is open.

Tore Svanberg (Managing Director)

Yes, thanks, and congratulations on the record revenue. I had a question on, Dan, your comment about AI revenue for Q4 fiscal 2025. So based on my math, you know, sounds like AI revenue would be about $90 million. How should we think about the non-AI revenue over the next 12 months? Or in other words, that $16 million in non-AI revenue for Q4 2024, how will that progress over the next 12 months?

Dan Fleming (CFO)

Yes. So, you know, we didn't provide specific revenue guidance for the full year, but we wanted to provide you a framework to understand a little bit more definitively how we've been framing our revenue growth throughout the year. And as you say, we expect AI revenue to grow 100% Q4 to Q4, fiscal 2024 to fiscal 2025. If you look at the non-AI revenue piece, what I would say is you can assume modest year-over-year growth. That's not what's driving our growth in the year; it's really these AI programs that are ramping largely in the second half of the year. So that's why we framed things as we did.

The other comment I'll add is that our overall fiscal year 2025 outlook has not changed. We're just giving a little bit more specificity. So we continue to expect meaningful growth in the year, and that second-half inflection point, driven by these AI programs, will be fast upon us here shortly.

Tore Svanberg (Managing Director)

Yeah, no, that's great, Dan. And perhaps a question for you, Bill. So it looks like your PAM4 DSP business is finally starting to take off. You're targeting 10% of fiscal 2025. First of all, how much was that revenue in fiscal 2024? And could you talk a little bit about the diversified customer base that you have for the PAM4 DSP business? You talked about growth internationally and in North America, but in North America, are we talking about, you know, several hyperscalers driving that growth?

Bill Brennan (President and CEO)

Yeah. So first of all, we hadn't had that 10% number as an objective in fiscal 2024, but we came pretty close to it. So we feel like things are lined up well for fiscal 2025 and beyond. I would say there are multiple drivers. Of course, we've got the first U.S. hyperscaler in production. We've got a second in qualification. We're seeing a return in spending with non-U.S. hyperscalers, and we've commented that we're very well-positioned for when that spending turns back on. I'd say these are the primary contributors in fiscal 2025.

I will mention that we've got a lot of promising new engagements with optical module partners, and these new partners are really considered leaders in the industry. And of course, that will, you know, that bridges to additional hyperscalers that are, you know, interested in looking at solutions with Credo. I will also say that we've spent a lot of time talking about the LRO architecture in really the last six months, and we see growing momentum with that LRO architecture for sure. And that's in addition to the full DSP momentum that we're building. So hopefully that gives you the color that you're looking for.

Operator (participant)

Thank you. One moment for our next question. Our next question comes from the line of Thomas O'Malley from Barclays. Your line is open.

Thomas O'Malley (Director of Equity Research)

Hey, what's going on, guys? Thanks for taking my question. I wanted to follow up on the AI guidance. Obviously, if you take the comment that 3/4 of the revenue was related to AI in Q4 and extrapolate that to next year, it gets to pretty big numbers. But I wanted to just kind of zoom in on this quarter. You had four 10% customers, one of which was a consumer customer, who we know is non-AI related. That would kind of imply that the rest of your business was entirely AI, if you just do that math. Could you just help me walk through? Is it kind of just rounding 3/4 AI revenue, or how should I be thinking about, like, the dollar amount?

Because it obviously sounds like that's growing quite nicely, but just wanna understand the base if you're giving some color on what that should grow for the entire year.

Dan Fleming (CFO)

Yeah, we didn't give a precise number because it's hard for us to estimate, in some cases, how our end customers utilize our products. But we have a fair amount of certainty that 3/4, or 75%, is where we ended for Q4, so take that with that caveat. You would use that, and it's really looking at this framework as we exited fiscal 2024 versus how we expect to exit fiscal 2025. As you know, production ramps at these large customers can take time, and they can pull in or push out a quarter or so. That's why we're framing it fiscal year-end to fiscal year-end.

Thomas O'Malley (Director of Equity Research)

Gotcha. And then I just wanted to ask, I know you guys don't guide by product segment, but just a little color on the product in the IP side, because it's swung pretty drastically over the last couple of quarters. So with the July guidance, with the gross margins being a bit better than expected, you would just assume that maybe the IP side is kind of staying higher sequentially. Could you give us any color on the mix there into July? Is there any product growth, or is most of the—well, obviously, with the midpoint of revenue a bit down, but is there any IP color that you can give us? Does it stay at these kind of elevated rates after the big fiscal Q4? Thank you.

Dan Fleming (CFO)

Yeah, sure. So just to reiterate our guidance for Q1: gross margin of 63%-65%, so really just a modest sequential decline from Q4. It's driven by a few things. One is that IP revenue will decline sequentially, quarter-over-quarter. However, if you look at NRE, you should assume we're at historic averages, which we were in Q4, so roughly flat quarter-over-quarter.

So it's really the product gross margin. There's a bit of a revenue mix dynamic there as well. And part of the theme of fiscal 2025 will be increasing product margin, exclusive of Product Engineering Services, due to increasing scale, as we return to that roadmap where we really do drive operating margin and gross margin leverage as we increase in scale.

Operator (participant)

Thank you. One moment for our next question. Our next question will come from the line of Vijay Rakesh from Mizuho. Your line is open.

Vijay Rakesh (Managing Director)

Yeah. Hi, Bill and Dan. Good quarter here. Just a quick question on the LRO side. You mentioned the 800 Gb LRO with sub-10-W power consumption, and that you had an engagement on it. Just wondering how many CSPs you're working with on shipping that product, and how do you see those revenues ramping into 2025, I guess, calendar 2025?

Bill Brennan (President and CEO)

So the work is primarily being done right now with optical module manufacturers, and we've got more than a handful that are working on designs now. We have delivered first samples to the first hyperscaler potential customer, and we see that continuing throughout this quarter. As far as fiscal 2025 goes, there's a possibility that we'll ramp first significant revenue in this fiscal year, but we don't have much of that forecasted yet; it's not something that's really built in.

Vijay Rakesh (Managing Director)

Got it. So that should be incremental. And on the chiplet side, your chiplet customer is obviously increasing CapEx quite a bit. Do you see your traction growing proportionately? Is that starting to pick up as well? Thanks.

Bill Brennan (President and CEO)

Yeah. So the first customer that we've got in production, the one we've talked about, we see that business really ongoing. Now, if they have a big increase in the spend on the cluster that's designed in-house, we'll definitely participate in that. But they've got multiple different paths that they're pursuing right now. Generally speaking, we continue to be bullish on chiplets. We've got additional customers that will be coming online in the future. Again, not much is built in for fiscal 2025, but we're bullish on the segment.

Operator (participant)

Thank you. One moment for our next question. Our next question comes from the line of Vivek Arya from Bank of America. Your line is open.

Duksan Jang (Research Analyst)

Hi, thank you for taking our question. This is Duksan on behalf of Vivek. I just wanna go back to the AEC product line. Obviously, you're ramping your second customer, a third customer also in qualification in the second half. How are you seeing the competitive dynamic, just because Marvell and Astera are also launching their product here?

Bill Brennan (President and CEO)

At a high level, we have not seen a significant change in the competitive environment. Our objective has always been to be first to deliver and first to qualify, and I think we've done a really good job on this objective with all of our customers. I would say that one big advantage we have competitively is the way we're organized. We have more than 100 people on our team dedicated to the AEC business, including hardware and software development, qualification, production, fulfillment, and support. This really drives success on that objective: to deliver first and qualify first. So I would say that, as we go deeper with each customer relationship, we really see an increasing number of requests for customized hardware and software.

From the standpoint of SKUs, we have more than 20 in active development today. So, competitively, I would say we're unique in the sense that we're the single point of contact, and we take full responsibility for all aspects of the relationships with our customers, and this really drives their satisfaction. When I talk more specifically about competition, we're really competing with groups of companies that need to do the same work that we're doing.

But there's really no shortcut to it, and when you've got the complexity of multiple suppliers involved and responsible for different aspects of one solution, it's far greater complexity than having one party, like Credo, ultimately responsible. With that said, the market's growing quickly, and we do expect to see second sourcing in the future. This is natural, and ultimately our goal is to always be raising the competitive bar, serving our customers very well, and driving their satisfaction. But I don't have specific feedback regarding the two potential competitors that you mentioned.

Duksan Jang (Research Analyst)

Of course. And then as a follow-up, just given NVIDIA is also entering this Ethernet switch market, and, you know, that could potentially have some implications on AEC as a standard for connectivity. So I was wondering if you have any color there or if you've done any interoperability testing with the NVIDIA solutions as well. Thank you.

Bill Brennan (President and CEO)

Sure. We've been really clear when we talk about the U.S. hyperscalers, there is a desire to move to Ethernet, you know, long term. And so I think it comes as no surprise that we've seen a lot of discussion around NVIDIA and Ethernet. We view this as a positive for us and our business. We've done testing with everybody that's out there from the standpoint of NICs and switches. And so we feel really, you know, quite confident that there will be an opportunity for our AECs for in-rack connectivity. And again, we don't view this as really a surprise that the U.S. hyperscalers are driving in this direction.

Operator (participant)

Thank you. One moment for our next question. Our next question comes from the line of Richard Shannon from Craig-Hallum. Your line is open.

Richard Shannon (Senior Research Analyst)

Well, hi, guys. Thanks for taking my question. I apologize if this has been touched on before; I got on the call late here. But, Bill, just following up on your comments regarding custom cables and the increasing requests there: maybe you can characterize your business now and what you expect going forward in terms of its profile of custom versus more commodity or standard products. Is there much of that going on now, or do you expect it to be a material contributor soon?

Bill Brennan (President and CEO)

Well, what we've seen is that every time we engage deeply with a customer, we open the door for innovation. Basically, we're open to special requests from a hardware standpoint and from a firmware or software standpoint, and what we're seeing is that customers really respond positively. So we've organized to be able to receive these requests and really deliver on them. I think that, more and more, as we look into the future, a very large percentage of what we ship will be customer specific.

There will be a smaller market for standard connectivity solutions, say an 800 Gb to 800 Gb AEC with just two connectors and nothing special. But we see the large majority of the volume being somewhat customer specific.

Richard Shannon (Senior Research Analyst)

Okay, thanks for that clarification, Bill. My second question is following up on your comments about AI revenues doubling from this past fourth quarter to the next fourth quarter. Maybe you can characterize the degree to which back-end network revenues are built into this at all, versus front-end and the kind of Ethernet and other applications you've been in historically.

Bill Brennan (President and CEO)

Yeah, I would say you're right on from the standpoint that the back-end networks are really driving the increase in revenue, and that's a general statement about AI. Of course, AI networks are also connected to the front-end network, but the number of connections is small in comparison. So I'll say that we're seeing a continued increase in the density of connections in AI clusters, driven by the combination of increased GPU performance generally, as those in that market execute on their roadmaps, and a desire to increase GPU utilization.

Some out there, like OpenAI, have published a document saying that the average utilization of a GPU is roughly 33%. So there's a big opportunity in going with more parallelism, and that drives an increased density in the number of connections, specifically in the back-end scale-up networks. So we've talked about scale out and scale up, and what that means from the standpoint of how many AEC connections are possible per GPU. Some of the back-end clusters that we're ramping in the second half will have two or four AECs per GPU.

And we're working on next-generation platforms that will increase that number of connections to eight or even higher per GPU. So if you take it to a rack level, say an AI appliance rack, we're seeing a density today of between 56 and 64 AECs per rack, and we expect this number to likely reach close to 200 AECs per rack in the future. This is something that will fuel the growth as well.

Operator (participant)

Thank you. One moment for our next question. Our next question comes from the line of Karl Ackerman from BNP Paribas. Your line is open.

Karl Ackerman (Managing Director)

Yes, thank you, gentlemen. I have two. I suppose for the first question, Dan, could you put a finer point on IP licensing revenue in the July quarter? Like, is it cut in half? And do you see IP revenue remaining toward the upper end of your long-term 10%-15% range for fiscal 2025?

Dan Fleming (CFO)

Yeah, so for fiscal 2025, we internally expect it to be near the high end of that long-term model, which again is 10%-15% of overall revenue. If I were in your shoes modeling this, I would assume that Q1 is near a quarter of that annual amount. If you do that, you should solve to within our gross margin range for Q1, if that's helpful.

Karl Ackerman (Managing Director)

I see. Thank you for that. Perhaps a question for you, Bill. There has been much discussion and confusion about where half-retimed DSPs can be used in the network. For example, active copper cables are being used for in-rack connectivity, while AOCs and AECs appear to be the primary use case for connecting NICs to ToR and/or middle-of-row switches. My question is, do all AI networks require a full-retimer DSP for either AOC or AEC connections? Thank you.

Bill Brennan (President and CEO)

So this is a much-discussed topic in the industry. A year ago at OFC, there was a big discussion about the idea of eliminating the DSP, and that really started a lot of effort in pursuing that solution. Many optical module companies pursued designs with no DSP, and I think generally the jury has come in: there's really no momentum in the market now for solutions with no DSP. So in the optical world that we see right now, the LRO solution is quite feasible, and we're showing that with multiple partners.

We demonstrated three at OFC, Lumentum being the largest of the three partners. What we're seeing is that the solution successfully reduces power; by the way, reducing the power of these connections was the entire objective of LPO. So we've shown that we can deliver 800 Gb modules with partners that go sub-10-W, which is probably a 30%-40% reduction compared to what's typical in the market for fully retimed solutions. The key with LRO is that we're able to maintain industry standards as well as interoperability.

You can literally use an LRO solution, and there's nothing special that you need to do. So we see that, especially for clusters, these are shorter connections, for AOCs and likely also transceivers. But for these shorter connections of, say, 10 m to 20 m, and especially in the cluster, power is so critical that we see the entire market could be addressed by LRO solutions. Now, it's obviously going to be up to a given customer and their strategy.

But the idea that there will be a large volume of solutions with no DSP, I think that really no longer exists among the customers that we're talking to.

Operator (participant)

Thank you. One moment for our next question. Our next question comes from the line of Suji Desilva from Roth. Your line is open.

Suji Desilva (Managing Director)

Hi, Bill. Hi, Dan. Congrats on the progress here. Bill, on your comments about the number of AECs per GPU increasing, I'm just curious: in general, is that increasing the forecasting needs of your customers in the AI area versus traditional cloud, compared to three months ago? Or is the AI line stable and already anticipated? And is the traditional cloud part coming back?

Bill Brennan (President and CEO)

Yeah, I would say there's really no change in the programs that we talked about three months ago. I'd say the additional information that we're sharing today is that this need for more performance and more bandwidth is really something we're seeing as we look at next-generation AI cluster designs. So that's a bit of new information: the number of connections per GPU is doubling or even more than doubling, and that will obviously drive growth. Now, as it relates to front-end networks and back-end networks, of course, AI clusters are all connected to the front-end network.

As that relates to, say, general compute versus AI, it's hard for us to see from a forecasting standpoint how that breaks out, because the same AECs are used for both from a front-end network perspective. But I would say generally that the momentum in the market for AI, there's no question, is still a huge amount of momentum, and we see that for the foreseeable future. If I talk about the trade-off for us if there's a real return from a general compute market share perspective, of course we'll benefit from that; we're really used in both.

When we talk about the third swim lane: we have AI appliances and general compute server racks, and these are both server racks. The third swim lane is switch racks, and we see that growing in popularity as well, especially as the market moves toward 100 Gb per lane speeds.

Suji Desilva (Managing Director)

Got it. That's very helpful color, Bill. Thanks. And then, staying on this increase in AECs per GPU, does that introduce a customization opportunity as well? I'm thinking kind of the old Y rack opportunity, things like that. Or are those more standard cables, just more of them?

Bill Brennan (President and CEO)

Yeah, I would say none of these are standard. In the AI appliance application, what we're seeing is that there are maybe little to zero standard products being designed right now; all of them have special features. I will say that we're delivering cables with two connectors, three connectors, four connectors, and even five connectors. So when you give these really creative designers the entire AI appliance rack to work with, it's fun to see what they're going to come up with, and we're very much open to making their rack designs more efficient.

Operator (participant)

Thank you. One moment for our next question. We have a follow-up from the line of Quinn Bolton from Needham. Your line is open.

Quinn Bolton (Managing Director)

Hey, Bill. Wondering if you could address the AEC versus ACC debate that seems to have popped up after OFC, as NVIDIA is looking to use ACCs in its NVLink fabric. Do you see, perhaps as a result of that, growing adoption of ACCs, or do you think ACCs are going to be really use-case limited going forward?

Bill Brennan (President and CEO)

Use case limited. We don't see ACCs anywhere in the market, other than what you described at NVIDIA.

Quinn Bolton (Managing Director)

Perfect. Very simple. Thank you.

Operator (participant)

Thank you. One moment for our last question. Our next question comes from the line of Tore Svanberg from Stifel. Your line is open.

Tore Svanberg (Managing Director)

Yes, Tore Svanberg here, just two follow-ups. First of all, Bill, in your prepared remarks, you talked about accelerating speed-to-market pretty meaningfully. Is that a result of your engagements with customers, or have you implemented any new technologies internally to get products to market quicker?

Bill Brennan (President and CEO)

I think it's really due to a number of things in the way that we've organized and also the way that we work with our customers. From the standpoint of our ability to collaborate, it's really on a different level; we've got weekly, if not daily, interaction between our engineering teams at Credo and our customers. That relates directly to our ability to deliver first samples. Then, when we talk about moving something into production, there are many different levels of qualification, and we've taken complete ownership of that. When we think about that, what does that mean?

It means that we've got more than 10 thermal chambers in constant use. What are we doing there? Our customers ship us switches or appliances with the configuration that they want qualified, that they're planning on taking into production. We run the qualification tests for them: live traffic, varying temperature, varying voltage. We're doing a lot of the work for them upfront, so when they go into a final qualification mode, they know that what we're delivering is highly predictable, because we've already delivered data based on the prescriptive tests that they give us, with the equipment that they ship us.

So from the standpoint of delivering first, it's about being organized to respond quickly. On qualifying first, we're doing a lot of the work for our customers, and that's really taking it up a notch.

Tore Svanberg (Managing Director)

Great. And just my last question is a clarification. I just want to make sure I got this right, and that it's clear to everybody. So AI revenue in Q4, that includes product and licensing revenue, and that is the number that you expect to double year-over-year by Q4 fiscal 2025?

Bill Brennan (President and CEO)

That is correct.

Tore Svanberg (Managing Director)

Great. Thank you.

Operator (participant)

Thank you. There are no further questions at this time. Mr. Brennan, I turn the call back over to you.

Bill Brennan (President and CEO)

So thanks to everybody for the questions. We really appreciate the participation, and we look forward to the continued conversations on the callbacks. Thank you.

Operator (participant)

This concludes today's conference call. You may now disconnect. Everyone, have a great day.