
Astera Labs - Earnings Call - Q2 2025

August 5, 2025

Executive Summary

  • Q2 2025 delivered material upside: revenue $191.9M (+20% QoQ, +150% YoY), GAAP diluted EPS $0.29, and non‑GAAP diluted EPS $0.44; operating cash flow reached a record $135.4M, establishing a higher base led by PCIe 6 and Scorpio fabric switch ramp.
  • Results beat Wall Street consensus: revenue by ~$19.5M and non‑GAAP EPS by ~$0.12; strength driven by Scorpio P-Series volume production and signal conditioning across PCIe scale‑up and Ethernet scale‑out on custom ASIC platforms.
  • Guidance: Q3 revenue $203–$210M, GAAP GM ~75%, GAAP EPS $0.23–$0.24; non‑GAAP GM ~75%, non‑GAAP EPS $0.38–$0.39; OpEx rising on R&D investment for next‑gen fabrics (including UALink).
  • Strategic catalysts: Scorpio exceeded 10% of total revenue in Q2, with 10+ customers engaged on X-Series scale‑up fabrics; management reiterated Scorpio’s path to be the largest product line over the next several years; expanding ecosystem with NVIDIA (NVLink Fusion), AMD (Advancing AI), Alchip (ASIC integration).


What Went Well and What Went Wrong

What Went Well

  • Scorpio ramp and mix: Scorpio exceeded 10% of revenue, ramping P‑Series into volume production for PCIe 6 scale‑out; 10+ engagements on X‑Series for scale‑up fabrics, positioning anchor sockets and higher per‑accelerator dollar content.
  • Beat on revenue and EPS: Sequential growth +20% to $191.9M with non‑GAAP EPS $0.44; strength from PCIe 6 portfolio (retimers, fabric switches) and signal conditioning across custom ASIC platforms.
  • Margin execution and cash generation: Non‑GAAP gross margin 76.0% and operating margin 39.2%; operating cash flow $135.4M, ending cash and securities at ~$1.065B.

Management quotes:

  • “Strong sequential revenue growth of 20 percent, driving meaningful upside to earnings and cash flow from operations.” – CEO Jitendra Mohan.
  • “Scorpio exceeded 10% of total revenue…fastest ramping product line in the history of Astera Labs.” – CEO Jitendra Mohan.

What Went Wrong

  • Gross margin expected to ease near‑term: Q3 guide to ~75% reflects growing Taurus hardware module contribution (lower margin vs standalone silicon); long‑term target remains ~70% GM.
  • Tax rate volatility: Non‑GAAP tax rate jumps to ~20% in Q3 due to a law change catch‑up, normalizing to ~15% in Q4 and ~13% longer‑term; near‑term EPS impact versus prior lower rate.
  • Scale‑up timing: X‑Series scale‑up shipments largely preproduction in 2025; full volume production expected through 2026, pushing out the largest-dollar attach opportunity.

Transcript

Speaker 2

Good afternoon, my name is Rebecca and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Second Quarter Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After management remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you would like to withdraw your question, press the pound key. Thank you. I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.

Thank you, Rebecca. Good afternoon, everyone, and welcome to the Astera Labs second quarter 2025 earnings conference call.

Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder, Sanjay Gajendra, President, Chief Operating Officer and Co-Founder, and Mike Tate, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent annual report on Form 10-K and our upcoming filing on Form 10-Q.

It is not possible for the Company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today and the Company undertakes no obligation to update such statements after the date of this call except as required by law. Also during this call we will refer to certain non-GAAP financial measures which we consider to be an important measure of the Company's performance.

These non-GAAP financial measures are provided in addition to, and not as a substitute for, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the investor relations portion of our website. With that I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs.

Speaker 3

Thank you, Leslie. Good afternoon everyone and thanks for joining our second quarter conference call for fiscal year 2025. Today I'll provide an overview of our Q2 results followed by a discussion around our rack scale connectivity vision. I will then turn the call over to Sanjay to walk through Astera Labs' near and long term growth profile. Finally, Mike will give an overview of our Q2 2025 financial results and provide details regarding our financial guidance for Q3. Astera Labs delivered strong results in Q2 with all financial metrics coming in favorable to our guidance. Quarterly revenue of $191.9 million was up 20% from the prior quarter and up 150% versus Q2 of last year. Growth within the quarter was driven by both our signal conditioning and switch fabric product lines, establishing a meaningful new revenue baseline for the company to build upon.

This quarter we achieved a key milestone with our market leading Scorpio P Series switches supporting PCIe 6 scale-out applications ramping into volume production to support the deployment and general availability of customized rack scale AI system designs based on merchant GPUs. Strong demand for our PCIe 6 solutions helped to drive material top line upside. During the quarter, Scorpio exceeded 10% of total revenue, making it the fastest ramping product line in the history of Astera Labs. Furthermore, we continue to see strong activity and engagement across both our Scorpio P Series and X Series PCIe fabric switches, and we are pleased to report that we won new designs across multiple new customers during the quarter. We remain on track for Scorpio to exceed 10% of total revenue in 2025 while becoming the largest product line for Astera Labs over the next several years.

Our Aries product family grew during the quarter and continues to diversify across both GPU and custom ASIC-based systems for a variety of applications including scale-up and scale-out connectivity. Additionally, our first-to-market Aries 6 solutions supporting PCIe 6 began their volume ramp during the quarter within rack-scale merchant GPU-based systems. Our Taurus product family demonstrated strong growth driven by AEC demand supporting the latest merchant GPUs, custom AI accelerators, as well as general purpose compute platforms. LEO continues to ship in pre-production quantities as customers expand their development rack clusters to qualify new systems leveraging the recently introduced CXL-capable data center CPU platforms. In addition to strong financial and operational performance during Q2, we continued to expand our strategic relationships across both customers and ecosystem partners as the industry pushes forward with innovative new technologies.

First, we broadened our collaboration with NVIDIA to support NVLink Fusion, providing additional optionality for customers to deploy NVIDIA AI accelerators by leveraging high performance scale-up networks based on NVLink technology. Next, we announced a partnership with Alchip Technologies to advance the silicon ecosystem for AI rack scale infrastructure by combining our comprehensive connectivity portfolio with their custom ASIC development capabilities. Industry progress continues within the CXL ecosystem, with SAP recently highlighting their collaboration with Microsoft featuring Intel's Xeon 6 processors to optimize SAP HANA database performance by utilizing CXL memory expansion. Lastly, we joined AMD on stage during their Advancing AI 2025 keynote presentation as a trusted partner to showcase UA Link, which is the only truly open, memory semantic-based scale-up fabric purpose-built for AI workloads.

To continue the relentless pursuit of AI model performance, data center infrastructure providers are beginning a transformation to what we call AI Infrastructure 2.0. We define this AI Infrastructure 2.0 transition as the proliferation of open, standards-based AI rack-scale platforms that leverage broad innovation, interoperability, and a diverse multi-vendor supply chain. This transition is in its early stages and we are strategically crafting our roadmaps to help lead these secular connectivity trends over the coming years. The transition to AI Infrastructure 2.0 is especially significant at the rack level as modern AI workloads demand ultra-low-latency communication between hundreds of tightly integrated accelerators over a scale-up network. Astera Labs is well positioned to support this infrastructure transformation as an anchor solution partner with expertise across the entire connectivity stack.

First, we support a variety of interconnect protocols including UA Link and PCIe for scale-up, Ethernet for scale-out, and CXL for memory. We are very excited about the momentum behind the UA Link scale-up connectivity standard, which exemplifies the open ecosystem approach by combining the low latency of PCIe and the fast data rates of Ethernet to deliver best-in-class end-to-end latency and bandwidth. Next, we provide a broad suite of intelligent connectivity products to address the entire rack across both purpose-built silicon and hardware solutions, all featuring our Cosmos software for best-in-class fleet monitoring and management. Lastly, our deep partnerships across the entire ecosystem continue to expand as we work closely with ASIC and GPU vendors to align features, interoperability, and roadmaps to solve the rack-scale connectivity challenges of tomorrow.

In summary, Astera Labs has demonstrated strong momentum in our business, and the prospects for continued diversification and scale are driving our roadmaps and R&D investment. We are in the early stages of the AI Infrastructure 2.0 transformation, which Astera Labs is uniquely positioned to help proliferate over the coming years. Scale-up connectivity for rack-scale AI infrastructure alone will add close to $5 billion of market opportunity for us by 2030, and we remain committed to supporting our customers as they choose the architectures and technologies that best suit their AI performance goals and business objectives. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to outline our vision for growth over the next several years.

Speaker 0

Thanks Jitendra and good afternoon everyone. Today I want to provide an update on our recent execution, followed by an overview of the meaningful market opportunities and growth catalysts that Astera Labs will address within the forthcoming transition to AI Infrastructure 2.0. Our goal is to deliver a purpose-built connectivity platform that includes silicon, hardware, and software solutions for rack-scale AI deployments. To achieve this goal, our approach has been to increase our addressable dollar content in AI servers by rapidly expanding our product lines to provide a comprehensive connectivity platform and capture higher-value sockets that include smart cable modules, gearboxes, and fabric solutions. We also see increasing attach rates driven by higher-speed interconnects in platforms deployed by customers who are collectively investing hundreds of billions of dollars on AI infrastructure annually.

Starting in Q2 of 2025, Astera Labs executed the next step in its high-growth evolution by ramping our PCIe Scorpio fabric switches and Aries 6 retimers into volume production. This latest wave of growth has further diversified our overall business as we now have three product lines contributing above 10% of total sales. During this transition, our silicon dollar content opportunity has expanded into the range of multiple hundreds of dollars per AI accelerator, which has effectively established a new revenue baseline for the company. Looking ahead, we are excited about the opportunities enabled by scale-up interconnect topologies. Given the extreme importance of scale-up connectivity to overall AI infrastructure performance and productivity, we see Scorpio X Series solutions as the anchor socket within next-generation AI racks.

We are engaged with over 10 unique AI platform and cloud infrastructure providers who are looking to utilize our fabric solution for their scale-up networking requirements. We look for Scorpio X Series to begin shipping for customized scale-up architectures in late 2025, with a shift to high-volume production over the course of 2026. With the ramp of Scorpio X Series for scale-up connectivity topologies next year, we expect our overall silicon dollar content opportunity per AI accelerator to increase significantly. Overall, we expect this to be another step up from a baseline revenue standpoint. Also, given the size of the scale-up connectivity opportunity, we expect our Scorpio X Series revenue to quickly outgrow Scorpio P Series revenue in 2026 and beyond. Cloud platform providers and hyperscalers will begin to deploy next-generation platforms as the industry transitions to AI Infrastructure 2.0.

We believe the fastest path to this transformation lies in purpose-built solutions developed within open ecosystems with a multi-vendor supply chain. For Astera Labs, this transformation will be the catalyst for the next wave of overall market opportunity and revenue growth. Our expertise and support for major interconnect protocols including PCIe, Ethernet, CXL, and UA Link puts us in an excellent position to participate in these next-generation design conversations. UA Link represents the cleanest and most optimized scale-up strategy for AI accelerator providers given its robust performance potential, open ecosystem, diverse supply chain, and purpose-built approach. Early industry momentum has been very encouraging with multiple hyperscalers and several compute platform providers looking to incorporate UA Link into their accelerator roadmap and engaging with RFPs as an indication of strong interest.

As a leading promoter of UA Link, Astera Labs is committed to developing and commercializing a broad portfolio of UA Link connectivity solutions ranging from AI fabrics to signal conditioning solutions and other IO components. Proliferation of UA Link in 2027 and beyond will represent a long-term growth vector for Astera Labs. In conclusion, we are proud of our execution over the past several years, demonstrating strong and profitable revenue growth, diversification of customers and applications, and exposure to a broadening range of AI infrastructure applications and use cases. We believe this momentum is in its early stages as we fully embrace an industry transition to AI Infrastructure 2.0 which will expand our opportunity across even more customers and platforms over the next several years.

We look to build upon this newly established baseline of business as we partner tightly with our customers and the broader ecosystem to deliver and deploy best-in-class rack-scale solutions to fuel the next wave of AI evolution. With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q2 financial results and our Q3 outlook.

Speaker 1

Thanks Sanjay and thanks to everyone for joining the call. This overview of our Q2 financial results and Q3 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q3 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call. For Q2 of 2025, Astera Labs delivered quarterly revenue of $191.9 million, which was up 20% versus the previous quarter and 150% higher than the revenue in Q2 of 2024. During the quarter, we enjoyed revenue growth from both our Aries and Taurus product lines, supporting both scale-up and scale-out PCIe and Ethernet connectivity for AI rack-level configurations.

Scorpio smart fabric switches transitioned to volume production in Q2 with our P Series product line for PCIe 6 scale-out applications deployed within leading GPU customized rack-scale systems. LEO CXL controllers shipped in pre-production volumes as customers continue to work towards qualifying platforms ahead of volume deployment. Q2 non-GAAP gross margin was 76% and was up 110 basis points from March quarter levels, with product mix remaining largely constant across higher volumes. Non-GAAP operating expenses for Q2 of $70.7 million were up roughly $5 million from the previous quarter as we continue to scale our R&D organization to expand and broaden our long-term market opportunity. Within Q2 non-GAAP operating expenses, R&D expenses were $48.9 million, sales and marketing expenses were $9.4 million, and general and administrative expenses were $12.4 million. Non-GAAP operating margin for Q2 was 39.2%, up 550 basis points from the previous quarter.

Interest income in Q2 was $10.9 million. Our non-GAAP tax rate for Q2 was 9.4%. Non-GAAP fully diluted share count for Q2 was 178.1 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.44. Cash flow from operating activities for Q2 was $135.4 million, and we ended the quarter with cash, cash equivalents, and marketable securities of $1.07 billion. Now turning to our guidance for Q3 of fiscal 2025, we expect Q3 revenues to increase to within a range of $203 million to $210 million, up roughly 6% to 9% from the second quarter levels. For Q3, we expect Aries, Taurus, and Scorpio to provide growth in the quarter. For Aries, we are seeing growth from a number of end customer platforms where we support scale-up and scale-out connectivity. Taurus growth is driven by new designs going into volume production for scale-out connectivity.

Scorpio will primarily be driven by the continued deployment of our P Series solutions for scale-out applications on third-party GPU platforms. We expect non-GAAP gross margins to be approximately 75%, with the mix between our silicon and hardware module businesses remaining largely consistent with Q2. We expect third quarter non-GAAP operating expenses to be in the range of approximately $76 million to $80 million. Operating expense growth in Q3 is driven by the continued investment in our research and development function as we look to expand our product portfolio and grow our addressable market opportunity. Interest income is expected to be $10 million. Our non-GAAP tax rate should be approximately 20%. The increase in our non-GAAP Q3 tax rate reflects the impact of the recent change in the tax law passed in July, with the expectation that our full-year non-GAAP tax rate for 2025 will now be approximately 15%.

Following this tax law change, our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share in the range of $0.38 to $0.39. This concludes our prepared remarks and once again we appreciate everyone joining the call and now we will open the line for questions.
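The guided EPS range is internally consistent with the other guided figures. As a quick reader's sketch (not part of the call), assuming the midpoints of the guided revenue and operating-expense ranges:

```python
# Sanity check of the Q3 2025 non-GAAP guidance quoted above.
# Midpoints of the guided ranges are my assumption; every input
# number comes from the call itself.
rev = (203e6 + 210e6) / 2        # revenue guide: $203M-$210M
gross = rev * 0.75               # ~75% non-GAAP gross margin
opex = (76e6 + 80e6) / 2         # opex guide: $76M-$80M
pretax = (gross - opex) + 10e6   # plus ~$10M expected interest income
net = pretax * (1 - 0.20)        # ~20% non-GAAP tax rate for Q3
eps = net / 180e6                # ~180M fully diluted shares

growth_low = 203e6 / 191.9e6 - 1   # ~5.8%, the "roughly 6%" low end
growth_high = 210e6 / 191.9e6 - 1  # ~9.4%, the "roughly 9%" high end
print(round(eps, 2))               # 0.39, inside the $0.38-$0.39 guide
```

At the midpoints, pretax income of roughly $86.9M taxed at 20% across ~180M shares lands at about $0.39, the high end of the guided range.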

Speaker 2

At this time I would like to remind everyone, in order to ask a question, press star then the number one on your telephone keypad. We'll pause for just a moment to compile the Q&A roster. Your first question comes from the line of Harlan Sur with JP Morgan. Your line is open.

Speaker 3

Good afternoon.

Speaker 0

Congratulations on the very strong results. You know, within your Scorpio family of switching products, it's good to see the strong ramp of Scorpio P this past quarter. Within the same portfolio, it looks like the team is qualified and set to ramp its Scorpio X Series for XPU-to-XPU ASIC connectivity. We talked about 10 platform wins. What's been the biggest differentiator? Is it performance, that is, latency, throughput? Is it being fully optimized with your signal conditioning products? Is that a consideration, and how much does familiarity with Cosmos software play a role? You guys have always called this an anchor product which pulls in more of your solutions alongside your Cosmos software suite. Is this how it's playing out with your ASIC XPU customers? You lead with Scorpio X, and you've been successful at driving higher attach of other products.

Speaker 3

Thank you so much for the question. You're absolutely right. The success that we have enjoyed so far is rooted primarily in, I would say, three things. First is just our closeness to our customers. Over this time period, we have earned a kind of trusted-partner status with our customers. We get a ringside view of what their plans are, what it is that they're planning to deploy, and when. The second part of that is really our execution track record. We have shown time and again that our team executes with purpose, and we deliver on our promises. With both of these, we get the first call for developing new products, for going into new product platforms at our customers. That's where the Cosmos software suite comes in. Cosmos, for the audience here, is our software suite that unites all of our products together.

This is how we allow our products to be customized and optimized for unique applications, as well as collect a lot of very rich diagnostics information that allows our customers to really see how their connectivity infrastructure is operating. With the use of Cosmos, we can customize our products to deliver higher performance, which translates to sometimes lower latency, sometimes higher throughput, sometimes different diagnostics features for our customers. As a result, we've been able to use Scorpio as an anchor socket in these applications because it is something that gets designed in upfront, and then we figure out signal conditioning opportunities with our Aries and Taurus products in these platforms. With Scorpio X in particular, because the customers use derivatives of PCIe, we have been able to customize Scorpio X to deliver this lower latency and higher throughput.

Speaker 0

Thank you for that. Very insightful. For my second question, just over the past 90 days there's been a lot of focus and announcements on scale-up networking connectivity and UA Link, as you mentioned. The team did the Wall Street teach-in back in May. Obviously the team is a key member of the UA Link consortium. AMD recently fully endorsed UA Link as its scale-up networking architecture of choice for all future generations of its rack-scale solutions. We know of at least one other ASIC XPU vendor that's going to be moving to UA Link as well. Beyond this, what's been the reception and interest level on UA Link? Can or will the Astera Labs team speed up its time to market on UA Link-based products, or is the timing still to sample products next year with volume deployments in calendar 2027?

Yeah, Harlan, this is Sanjay here.

Thank you for the question. To your point, absolutely. We see a tremendous amount of interest with UA Link. There are obviously the technical advantages that you get with low latency and familiarity with how the transport layer works based on its roots, which is PCIe. Also, the fact that it supports memory semantics natively is a strong reason why customers are liking that interface. The big upside, of course, is the physical layer, which now has been upgraded to support up to 200 gig on the Ethernet side. There are several technical reasons that are going in favor of UA Link. Customers that were using PCIe or PCIe-lite fabrics see this as a natural progression in order to support the AI infrastructure needs going forward.

What we'll also note is that it's not just about technical stuff, it's about ecosystem and the broad availability of components that are required for scale-up. That's again where UA Link shines in the sense that it's truly an open standard, it's truly a multi-vendor supply chain. Those are additional reasons why customers tend to gravitate towards UA Link. We do have, like noted, several customers—we're counting 10 plus right now—that are looking at leveraging some of the open standards, whether it's PCIe in the short term, a combination of PCIe and UA Link in the midterm, and transitioning perhaps to a broader UA Link deployment in 2027 and later. Overall, I think the momentum is shifting positively and we are excited to be in the middle of it and driving the adoption of open and scalable supply chain in the market. Great, thank you.

Speaker 2

Your next question comes from the line of Ross Seymore with Deutsche Bank. Your line is open.

Speaker 1

Hi guys, thanks for letting me ask a couple questions and congrats on the strong results and guidance. Maybe to no surprise, I wanted to stay on the Scorpio family. The diversity of engagements is also interesting to me. As far as you're talking about it as an anchor tenant, I wondered if you could go into a little bit of the profile, the types of customers, how it's changed from your initial customer, and then perhaps how much incremental business and interest those customers are showing in other products as they realize as well it's an anchor tenant of sorts. How are you leveraging that Scorpio relationship to bring in more business? Any sort of illustrations of that would be helpful.

Speaker 0

Yeah, absolutely. Again, thank you for that question. Just to kind of remind everyone, we have two product series within Scorpio. One is the Scorpio P Series that just started ramping to production to support some of the third-party GPUs that are ramping. The P Series is designed for scale-out connectivity, a very broad use case, from interconnecting GPUs to storage and things like that. For Scorpio P Series, we have a broad base of customers that are leveraging that solution, designing in, going to production, deep in technical evaluations and so on. That would be a broad play for us with PCIe-based scale-out interconnect and storage types of interconnect. Scorpio X Series is designed for scale-up networking, to interconnect the GPUs and accelerators. This we see, like you noted, as an anchor socket because that is truly the socket that holds all the GPUs together.

Today, as we noted, we have 10+ customers that we are engaging when it comes to scale-up networking using Scorpio X Series. This is also pulling in the rest of our products, both because of the advantages that Cosmos brings to the table by unifying all of our products, plus at the same time the fact that someone using a fabric solution would also need a gearbox or a retimer or other controller-type products. Those all play into having that first call with the customer, or having that early access at an architectural stage, which translates into an opportunity for us where we can not only offer the fabric device but also the surrounding components that come along with it as a connectivity platform.

Speaker 1

Thanks for that color. I guess as my second question, one for Mike, I think the first one's going to be pretty quick, so I might have a clarification in there as well. The gross margin has beaten, and you're staying solidly above your 70% long-term target; is there anything that slows down your trajectory to the 70%? The clarification would be the tax rate at 20%. Is that this year, but not next year? Which is the number we should think of going forward, the 15, the 20, or the 10? Thank you.

Speaker 1

Okay, thanks Ross. I'll start with the taxes. The 20% is specific to Q3 because that was the quarter that the tax law changed; we have to catch up for the previous two quarters. For Q4, you should expect it to normalize around 15%. Longer term, with this new tax law in place, it is probably in the 13% range. For the gross margins, when we have an inflection up in revenues like we did, you do have the benefit of higher revenues over fixed operating costs. That was the incremental benefit for us. We do expect to see some pretty good growth from our hardware modules going into the back half of this year into 2026. As we make it through 2026, we still encourage people to think of our long-term target model, 70%, as something that we'll be delivering.

Speaker 0

Thank you.

Speaker 2

Your next question comes from the line of Blayne Curtis with Jefferies. Your line is open.

Speaker 1

I'll echo the congrats on the results. I guess I want to ask on the Scorpio products. I mean, I think 10% in the June quarter was ahead of what many people were looking at. Maybe you could just help us with the shape of that product line. You still said 10% for the year; I'm assuming it's greater than 10%, but I'm sure it's much greater than that. Can you help us a little bit with, as you look to September, you know you have $15 million of growth, how to think about Aries versus Scorpio, and any thoughts on how to guide us to model this Scorpio product line this year?

Speaker 1

Yeah, this is Mike Tate. For Q2, the Scorpio P launched into volume production a little ahead of what we anticipated, so it provided the upside in the quarter from this base level. Now it continues to grow in Q3 and Q4. We have more P Series designs coming into play that will layer on top of that; that's more in 2026. For the X Series, we do have pre-production volumes here, but really that starts to go into high volume production during the course of 2026 and layers in even more growth. Ultimately, what we called out is that the X Series is going to grow to be bigger than the P Series. It's a very exciting opportunity just given that the dollar value of the design opportunities is much higher than the P Series, given the use cases of scale-up connectivity. Both will grow.

We did reiterate that it will exceed 10% of our revenues for the year, which is quite an accomplishment for the first year out of a product line. It is poised to be our largest product line of the company as we make it through the following two years. Thanks.

Speaker 1

I just want to ask, I think, in terms of the scale-up opportunity. Clearly you were clear that X will be more material next year, kind of pre-production this year. I just want to ask this because there were a lot of rumors out there in terms of whether there are any opportunities for scale-up with Scorpio P. Are you going to be shipping anything material this year for scale-up versus the scale-out? You already talked about it a bit.

Speaker 1

Scale-up this year is predominantly pre-production volumes, and these are pretty complex systems that they're shipping into, so we try to be conservative in how we telegraph those going forward. The volume opportunity, scale-up connectivity for switching, is a much bigger dollar opportunity for us as we look forward. Those designs really will start to enter full volume production during the course of 2026, so it's not a driver in the next couple of quarters. Thanks, Mike.

Speaker 2

Your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open.

Speaker 0

Great, thank you. I wonder if you could talk about UA Link versus other architectures, and I guess your involvement with NVLink Fusion. Are you agnostic to those various solutions, or are you more favorable towards open source or proprietary? Just walk us through the potential outcomes for you with these battles that are being fought.

Speaker 3

Yeah, this is Jitendra, happy to do that. Let's start with NVLink, just because NVLink is perhaps the most widely deployed scale-up architecture available today, and we are very happy to be part of the NVLink Fusion ecosystem. If you look at the history of NVLink, it really is a fabric built from the ground up for AI. It uses memory semantics to make sure that all of the GPUs can be addressed as if they are one large GPU, and it has low latencies. It uses Ethernet-based SerDes to get the higher speeds, and of course NVIDIA has popularized that with their NVL72 deployment. If you go from there to, let's say, UA Link, you find many similarities. UA Link also has its genesis in PCIe, and it is a memory-semantics-based protocol.

It uses lossless networking and several other technical advancements suitable for AI workloads, and the whole protocol is custom-built to optimize throughput for AI-type traffic. I think it does offer several advantages over other, more proprietary protocols, some of which happen to be Ethernet-based and some of which are completely proprietary. The other advantage of UA Link is that it's an open ecosystem. Many hyperscalers are promoter board members, as well as many vendors, frankly, who are working to deploy solutions for UA Link. As a result, we expect to see a very vibrant ecosystem of vendors and customers around UA Link. I think that will be a defining characteristic, and it is why we believe UA Link will be adopted widely over time.

As promoter members of the UA Link consortium ourselves, we are very happy not only to participate in this standard, but to come up with a full portfolio of solutions, including switches, retimers, cables, and what have you, to enable our customers to build a full UA Link system. To answer your question: with UA Link we have a lot of dollar-content opportunity, but at the same time we will continue to service our customers who today use PCIe, where we have a huge opportunity, as well as Ethernet for scale-out and cabling applications, and over time also NVLink Fusion.

Speaker 0

That's very helpful, thank you. I get this question a lot: can you size your exposure to merchant GPU platforms versus ASICs? I know there's probably a little bit higher content opportunity for you on the ASIC side. Any sense for what that split looks like and where that may be going over time?

Speaker 3

Yeah, Joseph, we do address both of these opportunities. Our opportunity on the merchant GPU platform comes when our customers customize the rack design. This is the opportunity for both our Aries and Scorpio P Series products that Sanjay and Mike touched upon earlier, and we saw a lot of ramp happening there this last quarter. In addition, we are also shipping the Taurus Ethernet cables for scale-out applications. When you go to scale-up, that becomes a very big opportunity for us just because of the density of interconnect when you're trying to connect all of these GPUs together. When that network happens to be based on PCIe, we have an even larger attach rate, which drives our dollar content on these XPU platforms into several hundreds of dollars per XPU.

Over time, we do see the Scorpio X Series as our largest revenue contributor and largely deployed on XPUs.

Speaker 0

Great, thank you very much.

Speaker 2

Your next question comes from the line of Thomas O'Malley with Barclays. Your line is open.

Speaker 1

Hey guys, thanks for taking my question. You mentioned that you were engaged with 10+ customers on the X Series. Could you just give us a picture of how many of those are engaged on PCIe today and how many are engaged on the UA Link side? If you're engaged with one on PCIe, are you often engaged with them on UA Link as well? Can you maybe talk about that split right now?

Speaker 0

Yeah. This is Sanjay here. What we can note is that the 10+ opportunities we highlighted are both hyperscalers as well as AI platform providers, and these are all today based on PCIe. These are nearer-term opportunities that we're tracking. Having noted that, as Jitendra highlighted, UA Link is an open standard that contemplates the requirements of scale-up networking in terms of speed and other capabilities going forward. Many of the customers that we're engaging with today on PCIe are also looking at UA Link. Some of them might continue to stay with PCIe; some of them will transition to UA Link in the midterm. Longer term, as the UA Link ecosystem develops and matures, we do expect that UA Link will be a solution that both merchant GPU and custom accelerator providers standardize on. Helpful.

Speaker 3

As my follow-up, I'm curious: there have obviously been a lot of news articles intra-quarter about switching attach rates with XPUs and also general-purpose silicon. If you look at the large guy in the market, in a 72 array there are nine switch trays, a couple of switches per tray, so something like a 25% switching attach rate to a single XPU or general-purpose piece of silicon. In that instance, when you're ramping an XPU with a custom silicon customer, can you maybe walk us through, specifically with the X switch, whether that attach rate is higher or lower, and what the reason for that is? That'd be super helpful. Thank you.

Speaker 0

We don't comment on individual platforms and customer deployment scenarios, but in general the Scorpio X Series switches interconnect GPUs, and depending on the platform there are different configurations for the number of GPUs in a pod. The product portfolio we are developing within Astera is designed in a way that addresses a variety of different use cases, and the attach rate varies, so that is probably going to be a broad answer to your question. In general, we have the engagements and we have the design wins. Now it's a matter of all of these platforms getting qualified and ramping to production. In due course, as they get into production, we'll be able to add more color on how that's shaping our revenue and our growth.

Speaker 2

Your next question comes from the line of Tore Svanberg with Stifel Financial Corp. Your line is open.

Speaker 0

Yes, thank you. Let me add my congratulations as well. My first question is on this new revenue base you talked about. You now have three product lines in production, which obviously doubled your revenue base. Now you're talking about AI Infrastructure 2.0 and Scorpio P Series, or X Series really, sort of creating a new revenue level. Should we infer from that that you will double the run rate again as X Series starts to ramp? Is that the way we should look at it?

Yeah, great question. I always like to make this correction: it's not "retiment," it's "retimer." Just to keep our engineering folks happy. You make a great point. That's exactly what we believe is the beauty of our business model, where we have approached the business in a series of growth steps.

We started the journey on all the NVIDIA-based platforms with the PCIe retimers, which got the company off the ground from a revenue growth standpoint. The second step was to expand our PCIe retimer and Ethernet retimer business to go after custom ASICs; this transition happened in Q3 of last year. Where we are now is the third step in that growth journey, where we have ramped up our Scorpio P Series PCIe-based fabric switch products along with our Gen 6 retimers. That's going on all the third-party, NVIDIA-based GPU platforms that are ramping up. The fourth step that we are highlighting as part of the call today is the Scorpio X Series, which is designed for scale-up networking.

That transition is currently underway in the sense that we are still in pre-production, and, like we highlighted, throughout 2026 we expect that wave to transition to high-volume production, providing us a new baseline for revenue. These are of course higher-value sockets, meaning the dollar content with the Scorpio X Series switches is significantly higher than what we have done so far. You could expect that to play into the overall revenue projections that we would have as we get toward 2026. The fifth step that we called out as part of the communication is UA Link.

That is going to be a growth story in 2027, and that is a greenfield application for us, with a much broader deployment of scale-up networking along with a variety of other products that we intend to develop for UA Link. That is going to be the fifth step that we are executing towards.

Yeah, thank you for walking through all that, Sanjay. I really appreciate it. As my follow-up, and related to UA Link, it does feel like the standard is sort of regaining a lot of traction. I'm just curious why that is. Is it because of AI moving more into inferencing? Is it because of the 128-gig version? It just feels like there's been a little bit of a change in the last few months. Any color you can add on that would be great.

If you don't mind, could you repeat your question? We didn't quite get the question that you asked.

Yeah, I was asking about UA Link sort of regaining a lot of traction. At least that's the way it feels to us, and I'm just wondering why that is. Is it because of AI moving more towards inferencing? Is it because of the 128-gig version? Or is there anything else going on there?

Speaker 3

Thank you for clarifying that. UA Link is actually gaining a lot of traction. Just as a reminder, the UA Link specification was only introduced towards the end of Q1 of this year. Since then, it has gained a tremendous amount of traction. AMD talked about it very recently in Taipei as part of the OCP Summit, and several of the hyperscalers are very closely engaged in figuring out what their roadmap intercepts will look like for UA Link, for all the reasons that we talked about earlier in the call. I will also say that the majority of these engagements are at 200 gigabits per second per lane, and not at 128.

Speaker 0

Perfect. Thank you.

Speaker 2

Your next question comes from the line of Sebastian Nagy with William Blair. Your line is open.

Speaker 1

Good afternoon. Thank you for taking the questions. A lot of the focus is rightfully on the AI tailwinds, but could you maybe comment on what you're seeing in non-AI adoption, and in particular what you might be seeing on Gen 5 PCIe adoption in general-purpose servers? Could that be a meaningful contributor to Aries growth going forward?

Speaker 0

Yeah, absolutely. Thanks for highlighting that. General compute often gets overlooked nowadays, but to your point, that's a transition that we're tracking. AMD released their Venice CPU, which supports PCIe Gen 6 as well. We do see that playing out in terms of design opportunities and a new set of production ramps for our Aries product line, both on the retimer-class devices as well as other sockets that we develop, whether it is the Taurus modules or Gearbox devices. In general, those are additional opportunities for us to grow our business, and we're tracking them as part of our overall outlook. Let's not forget the LEO products, which are our CXL controllers. These are designed for memory expansion for CPUs in particular. Finally, we have CPUs that support CXL technology and are ready for deployment.

We are excited about the opportunities that we're tracking between all the three product lines, Aries, Taurus, and LEO, going into the general compute use cases. Great.

Speaker 1

Okay, that's really helpful. If I could, a second question: I want to ask about the use of Ethernet in scale-up going forward. You have Broadcom positioning itself to address both the scale-out and scale-up parts of the network with its latest generation of Ethernet chips. I'm wondering, how do you see scale-up Ethernet potentially eating into that PCIe part of the market where Astera Labs has such a strong position?

Speaker 3

This is Jitendra. Maybe I'll take this question. If you look at our customers today, they are deploying the scale-up network with the technologies that are available to them, which is NVLink for NVIDIA designs, of course, PCIe for several of the customers that we touched upon earlier in the call. Some of the customers are also using Ethernet. Largely this has to do with the availability of the switching infrastructure. The two protocols, PCIe as well as NVLink, are basically kind of custom built for memory access, for memory semantics. You can use that to make your multiple GPUs in a cluster look like one large GPU. Ethernet is a fantastic protocol, but it was never designed for scale-up. It was designed for kind of large-scale Internet traffic and it is very, very good at that.

However, because of the availability of the switches, some customers have tried to run RDMA and other proprietary protocols over Ethernet to do scale-up. In that scenario it does suffer from higher latencies and lower throughput. I think what you are referring to is scale-up Ethernet, where Broadcom has tried to borrow several of the same features that are present in PCIe and UA Link, such as memory semantics, lossless networking, and so on, and put them on top of Ethernet. At that point, it looks like something quite different from Ethernet, and the switching infrastructure as well as the XPU infrastructure has to evolve for somebody to use it. I believe that the real differentiation between the two has to do with the openness of the ecosystem.

The SUE (Scale-Up Ethernet) ecosystem is still dominated by Broadcom, whereas if you look at UA Link, it's a very open, very vibrant ecosystem, with multiple vendors working on products and multiple hyperscalers looking to really take their destiny into their own hands by relying on UA Link over time.

Speaker 1

Great, that's really helpful.

Speaker 0

Thank you so much and congrats on the quarter. Thank you.

Speaker 2

Your next question comes from the line of Quinn Bolton with Needham and Company. Your line is open.

Speaker 1

Hey, Jitendra. I just wanted to follow up on that question about SUE. Broadcom introduced their Tomahawk Ultra switch recently with a 250 nanosecond latency, which seems like it significantly reduces the latency problems that Ethernet has traditionally had. Can you give us some sense of how that 250 nanosecond latency compares to what you're able to achieve on PCIe and UA Link? And I have a follow-up.

Speaker 3

Yes, we are able to achieve even lower latencies with some of the products that we have and with other products that we have in development. It comes back to designing something that is purpose-built for AI. It is not just about the point-to-point latency; if you look at the end-to-end latency in the system, we believe that UA Link, and indeed PCIe today, is going to be lower latency. The second point is utilization of bandwidth. The current offering from Broadcom uses 100 gigabits per second per lane, but over time every standard will migrate towards 200 gigabits per second per lane; UA Link and Ethernet, as well as NVLink, are already there today. However, how efficiently you use that raw data rate varies from protocol to protocol.

UA Link has been designed to be extremely efficient with that and really achieve very high utilization of the data pipe that is available. On a technical basis, I do think that UA Link will be superior to other protocols. The big advantage of UA Link is in its openness, that it's an open standard, that our customers, the hyperscalers, can build their infrastructure once and then ideally plug in whichever GPU or XPU they want that supports an open interoperable ecosystem like UA Link.

Speaker 1

My follow-up question: I think in the script you talked about an expansion in the opportunities with Taurus, and I'm wondering if you could expand on that. Are you seeing adoption of higher per-lane speeds on the Taurus product, and adoption of 800-gig cables? Are you seeing adoption beyond your lead customer in Taurus? Any additional color you could provide on Taurus would be helpful.

Speaker 0

Thank you. Yeah. Like you correctly said, and as we have shared in the past as well, we expect broader adoption of AECs when the Ethernet data rate transitions to 800 gig. That's starting to happen, and we expect most of the deployments to be ramping in volume in 2026. To that point, we're tracking and engaged with the customers that are deploying it. One point to keep in mind is that our business model for AECs is designed for scale. In other words, we develop these cable modules so that they fit into the cable assemblies of existing cable vendors, and there are a variety of them that service the data center market. Our business model is to go after the ramp, and not necessarily the initial low volumes that might be deployed. To that point, we're tracking and engaged with the right customers.

As the volume starts ramping, we do expect to have a significant diversification and growth in our Taurus module business. Most of this we are modeling in 2026 versus this year. Got it.

Speaker 1

It sounds like the volume this year continues to be more 50 gig per lane, and then you see that diversification in 2026 as 100 gig per lane sees wider adoption.

Speaker 0

Exactly. Our business model, as noted, is designed for that multi-vendor cable supply chain. We do believe that's the right strategy; that's what hyperscalers look for. For the initial POC, limited-volume deployments, they might go with one vendor, but very quickly each one of these hyperscalers wants the diversity as well as the supply chain capacity to drive volume. That has essentially been our focus when it comes to our business model on the AEC side. Got it. Thank you.

Speaker 2

Your next question comes from the line of Suji Desilva with Citi. Your line is open.

Speaker 0

Thank you for taking my question, and congrats on the great result. My first question follows your recent announcement of a partnership with a high-performance ASIC leader. Can you touch a little more on the extent of that collaboration? Is it more at a chip level, an IO-chip type of partnership, or is it more at a device level with your Aries and Scorpio portfolio?

Yeah, I'll answer that question by sharing the vision and goal that we're executing towards. Our vision is to provide a purpose-built connectivity platform for AI infrastructure that includes silicon products, hardware products, and software products. Of course, the focus for us has been on the connectivity side of the AI rack.

When you think of an AI rack, there are other components that go in, primarily the compute nodes, whether based on third-party merchant GPUs and CPUs or the custom ASICs that Alchip and others develop for hyperscalers. We are a strong believer that the AI rack, the way it's defined today, is not scalable, in the sense that it's more proprietary. As the industry transitions to what we are calling AI Infrastructure 2.0, the entire AI rack has to be based on an open, scalable, multi-vendor type of approach. To that point, we are not only developing the connectivity products addressing the various aspects of an AI rack, whether it's scale-up, scale-out, or other connectivity. At the same time, we are partnering with third-party GPU vendors; we talked about the announcement that we did with AMD.

We're also engaging with custom ASIC providers, including Alchip, so that at the end of the day the hyperscalers, who are our common customers, get a rack that is well tested and interoperable, where the software is all consistent, and so on, to ensure that it delivers the highest level of performance. That is the scope of the collaboration we're having with Alchip and other providers. Over time you will see us announce more partnerships as we seek to establish the open rack that we believe is critical for deploying AI at scale.

Speaker 3

Got it.

Speaker 0

No, that's very helpful. If I can squeeze just one more, and this might be more for Mike. On the gross margin, it seems like over the last two quarters, particularly since the Scorpio announcement, gross margin keeps going up. In the September quarter, you are guiding it to 75%, which at the very least at the midpoint seems to be down a little bit. I'm just curious on any additional color on that because it seems like by all indications Scorpio will continue to go up and the mix trend we are seeing currently seems to be moving in the same direction in September as well. We're just curious on that guide down in gross margin in the September quarter.

Speaker 1

Yeah, we do see growth from Scorpio, but we also see good, solid growth in Taurus during the quarter. Taurus is a module, it's hardware, so it carries a little bit lower gross margin than standalone silicon. You'll see that dynamic play out to a smaller extent in the quarter. As we move into 2026, we still want people to think of us as trending towards our longer-term model of 70%.

Speaker 2

Your next question comes from the line of Suji Desilva with Roth Capital. Your line is open.

Speaker 0

Hi Jitendra, Sanjay, Mike, congrats on the strong quarter here. Maybe you could give us a framework on the retimer content for a link, for scale-out versus scale-up. Maybe it's similar, but maybe there are some differences. I'd be curious to understand what the unit opportunities might be and how they might be different.

Speaker 3

Yeah, so when you look at the retimers, you know, the contrast with the switches is the following, which is the switches get designed in right at the inception at the architecture stage. Customers will think about how they're going to connect either their GPU to other GPUs in a scale up or the GPU to NICs or storage as part of that scale-out system. Once the switch is designed in and as the rack starts to get put together, we look at the question of reach, and sometimes you find that you need retimers in a link, other times actually you don't need retimers in the link. Sometimes the retimers go on the board as a kind of a chip down format. At other times they are better suited to be put in cables in an AEC format.

The good news with Astera Labs is that we provide this full portfolio of devices for our customers to choose from. From switches to gearboxes to chip-down retimers to retimers in active electrical cables, they can look to one company, one Astera Labs, for all the solutions at the rack level.

Speaker 0

Just trying to clarify, neither one would be higher than the other necessarily. Just to be clear.

Speaker 3

Can you repeat that? Neither one will be higher than the other.

Speaker 0

Scale up versus scale out necessarily.

Speaker 3

Yeah, it really depends upon the system architecture. In scale-up there are many, many more links than there are in scale-out. However, it is prohibitive from a power standpoint to put retimers on all the links. Typically, the links that are shorter, where you are able to go from the switch to the GPU over a shorter distance, will not use retimers, while the links that are longer will potentially use retimers. Sometimes we have scale-up domains that exceed one rack; you might have two racks side by side that are part of one scale-up domain, in which case you end up with a cable solution and you need retimers in the scale-up domain in those scenarios.

Speaker 0

Helpful. Thanks.

Speaker 3

My follow-up's on Scorpio. You talked about 10 customer engagements. I'm wondering if that implies multiple programs per customer, if they're going to think about using you as standard in their platforms. Any color on how those are shaping up, in programs versus customers, would be helpful.

Speaker 0

Yes, the 10+ we noted are unique customers. Now, within each customer, there are multiple opportunities that we're tracking. Some of them are design wins, and some of them are ramping to production. Some of them are design-ins going through qualification, and some of those are early engagements. In general, we are very pleased with the amount of traction that we're seeing for our Scorpio family. Excellent. Thanks, Sanjay. Thanks, everybody. Thank you. Thanks.

Speaker 2

There are no further questions at this time. I will turn the call back over to Leslie Green for closing remarks.

Thank you, everyone, for your participation today and your questions. Please refer to our investor relations website for information regarding upcoming financial conferences and events. Thanks so much.

This concludes today's conference call. You may now disconnect.