Arista Networks - Earnings Call - Q2 2025
August 5, 2025
Executive Summary
- Arista delivered a clean beat-and-raise quarter: revenue $2.205B (+10.0% q/q, +30.4% y/y), non-GAAP EPS $0.73, and non-GAAP gross margin 65.6%, all above internal guidance and Wall Street consensus; Q3 guide: revenue ~$2.25B, non-GAAP GM ~64%, OM ~47%.
- Management raised FY25 revenue growth guidance to ~25% (targeting ~$8.75B) from ~17% (~$8.2B), citing accelerating demand across AI, cloud, and enterprise; non-GAAP operating income crossed $1B for the first time in company history.
- Strategic momentum: announced acquisition of the VeloCloud SD-WAN portfolio to strengthen the branch/enterprise WAN and embrace MSP channels; front-end and back-end AI networking progress with the Etherlink portfolio and 7800 AI spine highlighted on the call.
- Near-term catalysts: the raised FY guide, a strong AI narrative (back-end networking target ~$750M, aggregate AI networking >$1.5B in 2025), and a Q3 revenue guide of ~$2.25B; watch deferred revenue volatility and customer concentration as swing factors (implied second-half math sketched below).
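A quick sanity check of the implied second-half trajectory, using only figures cited above (Q2 revenue of $2.205B, ~10% q/q growth, the raised ~$8.75B FY25 target, and the ~$2.25B Q3 guide); the derived Q1, H2, and Q4 values below are back-of-the-envelope estimates, not company disclosures:

```python
# Back-of-the-envelope check of the implied second half, in $B.
# Derived values (q1, h2, q4) are estimates inferred from cited figures,
# not company-reported numbers.
q2 = 2.205                # Q2 FY25 revenue (reported)
q1 = q2 / 1.10            # implied Q1 from ~10.0% q/q growth (~$2.00B)
fy25_target = 8.75        # raised FY25 revenue target
q3_guide = 2.25           # Q3 revenue guide

h1 = q1 + q2                        # ~$4.21B first half
h2_implied = fy25_target - h1       # ~$4.54B implied second half
q4_implied = h2_implied - q3_guide  # ~$2.29B implied Q4

print(f"Implied H2 ~${h2_implied:.2f}B; implied Q4 ~${q4_implied:.2f}B")
```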
What Went Well and What Went Wrong
What Went Well
- Revenue/EPS/margins beat: $2.205B revenue vs Q2 guide of $2.1B and consensus; non-GAAP GM 65.6% (guide 63%), OM 48.8% (guide 46%); CFO noted operating income of $1.08B crossed $1B for the first time in company history.
- AI networking traction: CEO reaffirmed back-end AI networking revenue objective of ~$750M in 2025 and aggregate AI networking ahead of ~$1.5B; progress with four top AI “Titans” and 25–30 enterprise/neo cloud customers.
- Strategic expansion in enterprise WAN: Acquisition of VeloCloud SD-WAN; COO: leveraging MSP motion to cross-sell Arista across portfolio; plan to partner for full SASE overlay.
What Went Wrong
- Deferred revenue volatility and acceptance clauses: Product deferred revenue increased ~$687M q/q; management cautioned that the balance can move significantly quarter to quarter independent of underlying business drivers (rough q/q decomposition sketched after this list).
- Customer concentration risk persists: At least two largest customers contributing ≥10% each; balance improving but Titans still meaningful to annual results.
- Sovereign AI uncertainty: One sovereign AI customer fell out; while opportunity remains, management is “cautiously optimistic” and not factoring sovereign AI into 2025 numbers.
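A rough decomposition of the quarter's deferred revenue move, using figures cited on the call (total deferred up from ~$3.1B to ~$4.1B; product deferred up ~$687M); the services-related piece below is inferred as a residual and is an estimate, not a reported figure:

```python
# Rough split of the q/q deferred revenue increase, in $B.
# The services-related increase is a residual estimate, not a reported figure.
total_deferred_q1 = 3.1   # end of Q1 (reported)
total_deferred_q2 = 4.1   # end of Q2 (reported)
product_increase = 0.687  # product deferred increase q/q (reported)

total_increase = total_deferred_q2 - total_deferred_q1      # ~$1.0B
services_increase_est = total_increase - product_increase   # ~$0.31B (estimate)

print(f"Total deferred +${total_increase:.2f}B; "
      f"implied services-related +${services_increase_est:.2f}B")
```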
Transcript
Speaker 2
Welcome to the second quarter 2025 Arista Networks financial results earnings conference call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question and answer session. Instructions will be provided at that time. If you need to reach an operator at any time during the conference, please press the star key followed by zero. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section of the Arista website following this call. I will now turn the call over to Mr. Rudolph Araujo, Arista's Head of Investor Advocacy. You may begin.
Speaker 0
Thank you, Regina. Good afternoon everyone and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer, and Chantelle Breithaupt, Arista Networks' Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal second quarter ended June 30, 2025. If you would like a copy of the release, you can access it online at our website.
During the course of this conference call, Arista Networks' management will make forward-looking statements including those relating to our financial outlook for the third quarter of the 2025 fiscal year, longer-term business model and financial outlooks for 2025 and beyond, our total addressable market and strategy for addressing these market opportunities including AI customer demand trends, tariffs and trade restrictions, supply chain constraints, component costs, manufacturing output, inventory management, and inflationary pressures on our business, lead times, product innovation, working capital optimization, and the benefits of acquisitions which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements.
These forward-looking statements apply as of today and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Our analysis of Q2 results and our guidance for Q3 2025 is based on non-GAAP financial measures and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.
Speaker 1
Thank you, Rudy, and thank you, everyone, for joining us this afternoon for our second quarter 2025 earnings call. Arista Networks is experiencing momentum in our business, as demonstrated in our record Q2 2025 results. We achieved $2.2 billion in revenue this quarter, surpassing our plan by $100 million. Software and service renewals contributed approximately 16.3% of revenue. Our non-GAAP gross margin of 65.6% was helped by efficient supply chain execution and an inventory benefit, with a non-material tariff impact in the quarter. International contributions for the quarter registered strongly at 21.8%, with the Americas at 78.2%. Reviewing our mid-year inflection point, our conviction with AI and cloud titans and enterprise customers has only strengthened. We began the year with a pragmatic guide of 17%, or $8.2 billion, in annual revenue. As the year has progressed, we have recognized the potential to build a truly transformational networking company addressing a massive total available market.
This feels to us like a unique, once-in-a-lifetime opportunity. We are therefore raising our 2025 annual growth to 25%, now targeting $8.75 billion in revenue, an incremental $550 million. Given the increased momentum we are experiencing across the AI, cloud, and enterprise sectors, it is important to appreciate that Arista Networks' AI center strategy is complementing our data center focus to drive some of this increase. AI centers consist of both a scale-out front end and a combination of scale-up and scale-out backend networks. Scale-up backend networks consist of high-bandwidth, low-latency interconnects that tightly link multiple accelerators within a single rack as a unified compute system with workload parallelism. Today, this is predominantly constructed with NVLink as a compute-attached I/O, but we do expect a move to open standards such as Ethernet or UALink in the next few years.
The scale-out backend network consists of dedicated spines interconnecting XPUs across racks, engineered for high bandwidth and minimal latency, resulting in efficient parallel processing of massive training models. Here, InfiniBand is rapidly migrating to Ethernet based on the Ultra Ethernet Consortium specification released in June of 2025. The scale-out front end connects the backend clusters to external clouds, compute resources, storage, wide area networks, and data center interconnect to handle data ingestion and orchestration for AI and cloud traffic in a leaf-spine network topology. Arista's flagship Etherlink and EOS are key hallmarks of scale-out networking, with a wide breadth and depth of network protocol support. Introduced in 2024, Arista's Etherlink portfolio now spans 20+ products, the most comprehensive and complete solution in the industry, especially for scale-out backend and scale-out frontend networking.
It highlights our accelerated networking approach, bringing a single point of network control, visibility differentiation, and improved GPU utilization. Poor networks and bottlenecks lead to idle cycles on GPUs, wasting both capital, in the form of GPU costs, and operational expenses such as power and cooling. With 30% to 50% of processing time spent exchanging data between GPUs over the network, the economic impact of building an efficient GPU cluster with good networking to improve utilization is paramount. Our stated goal of $750 million in backend AI networking is well on track, growing from nearly zero revenue three years ago in 2022 to production deployments this year in 2025. As a reminder to you all, the backend AI is all incremental revenue and incremental market share for Arista.
As large language models continue to expand into distributed training and inference use cases, we expect to see the backend and the frontend converge and coalesce more closely together. This will make it increasingly difficult to parse the backend and the frontend precisely in the future, but we do expect aggregate AI networking revenue to be ahead of $1.5 billion in 2025 and growing for many years to come. We will elaborate more on this at Analyst Day in September, including our AI strategy and forecast. What is crystal clear to us and our customers is that Arista continues to be the premier and preferred AI networking platform of choice for all flavors of AI accelerators.
While the majority today is NVIDIA GPUs, we are entering early pilots connecting with alternate AI accelerators, including startup XPUs, the AMD MI series, and AI Titan customers who are building their own XPUs. As we continue to progress with our four top AI Titan customers, AI is also spreading its wings into the enterprise and neo cloud sectors, where we have won approximately 25 to 30 customers to date. The rise in agentic AI ensures any-to-any conversations with bidirectional bandwidth utilization. Such AI agents are pushing the envelope of LAN and WAN traffic patterns in the enterprise. Speaking of WAN, we are very pleased to announce the purchase of SD-WAN leader VeloCloud to offer modern branch solutions in the agentic AI era. VeloCloud's secure, AI-optimized WAN portfolio offers seamless application-aware solutions to connect customer branch sites, complementing Arista's leading spines in the data center and campus.
In a classic leaf-spine architecture, we are enabling multipathing, encryption, in-band network telemetry, segmentation, application identification, and traffic engineering across distributed enterprise sites. We are so excited to fill this void in our distributed enterprise puzzle and bring that holistic branch solution. This also increases our foothold with managed service providers (MSPs) as an important route to market for our distributed campus and branch offerings. We also intend to work closely with best-of-breed security partners to enable SASE overlays. Please do note that Velo is not material in 2025, and we have some work to do to restore annual revenue back to pre-Broadcom levels. Last quarter I shared the development and internal promotions of several tenured executives at Arista to bolster our leadership.
They display that strong cultural synergy and a mission to ignite innovation and delight our customers as we enter the next phase of Arista 2.0, growing from $5.8 billion in 2023 to a forecasted $10 billion in revenue in 2026. We rely on this trifecta foundation of great customers, innovative products, and great next-gen leaders to achieve this. I am so thrilled to welcome Todd Nightingale as Arista's President and Chief Operating Officer. Todd brings incredible passion for networking and over two decades of technical leadership at Meraki and Cisco, and most recently as CEO of Fastly. In just a month, he is epitomizing the Arista way, and I'm really looking forward to his impactful contributions to boost Arista's overall campus and enterprise operations. Todd, welcome to your first ANET earnings call. How does it feel to be here?
Speaker 0
It's amazing. It's only been a month, but I can't tell you how impressed I am with the passion and focus of the team, the trust that Arista customers have in the technology, and the enormous opportunity we have ahead of us in data center AI and in the campus. I'll be primarily focused on our enterprise customer engagement, bringing new customers to Arista and operational excellence across the organization. Personally, I'm so incredibly excited to be back in networking and I'm truly, truly honored to be here. Thank you so much, Jayshree.
Speaker 1
Thank you, Todd. It's going to be a fun journey here with us. You know, it's really an unprecedented time in networking where Arista Networks is so uniquely positioned to enable the modern network transformation. With that, my dear friend Chantelle, over to you, our CFO, for the financial specifics.
Speaker 2
Thank you Jayshree. With that as the backdrop of our strong business outlook, let me now take us through the metrics that underscore our momentum. Total revenues in Q2 were $2.2 billion, up 30.4% year over year, above our guidance of $2.1 billion. This was supported by strong growth across all of our product sectors. International revenues for the quarter came in at $481 million, or 21.8% of total revenue, up from 20.3% in the prior quarter. This quarter over quarter increase was driven by a relatively stronger performance in our EMEA region. The overall gross margin in Q2 was 65.6%, above our guidance of 63%, up from 64.1% last quarter and up from 65.4% in the prior year. The quarter over quarter gross margin improvement was primarily driven by improved inventory management and related excess and obsolescence reserves.
Operating expenses for the quarter were $370.6 million, or 16.8% of revenue, up from $327.4 million last quarter. R&D spending came in at $243.3 million, or 11% of revenue, up from $209.4 million in the last quarter, primarily reflecting higher new product introduction costs in the period. Sales and marketing expense was $105.3 million, or 4.8% of revenue, compared to $94.3 million last quarter, inclusive of a continued focus on our partner programs. G&A costs came in at $22 million, or 1% of revenue, down from $23.7 million last quarter. Our operating income for the quarter was $1.08 billion, crossing $1 billion for the first time in Arista Networks' history and landing at 48.8% of revenue. Other income and expense for the quarter was a favorable $88.6 million, which, together with our effective tax rate, resulted in net income for the quarter of $923.5 million, or 41.9% of revenue.
Our diluted share number was 1.271 billion shares, resulting in diluted earnings per share for the quarter of $0.73, up 37.7% from the prior year. Now onto the balance sheet. Cash, cash equivalents, and investments ended the quarter at $8.8 billion. In the quarter, we repurchased $196 million of our common stock at an average price of $80.70 per share. Of the $1.5 billion repurchase program approved in May 2025, $1.4 billion remains available for repurchase in future quarters. The actual timing and amount of future repurchases will depend on market and business conditions, stock price, and other factors. Now turning to operating cash performance for the second quarter, we generated approximately $1.2 billion in cash from operations in the period, the highest in Arista Networks' history, reflecting strong business model performance. DSOs came in at 67 days, up from 64 days in Q1, driven by billing linearity.
Inventory turns were 1.4 times, flat to last quarter. Inventory increased to $2.1 billion in the quarter, up from $2 billion in the prior period, reflecting an increase in our finished goods inventory, which is an outcome of our global tariff and supply chain management. Our purchase commitments and inventory at the end of the quarter totaled $5.7 billion, up from $5.5 billion at the end of Q1. We expect this number to stabilize as supplier lead times improve, but it will continue to have some variability in future quarters as a reflection of demand for our new product introductions. Our total deferred revenue balance was $4.1 billion, up from $3.1 billion in Q1. The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis.
Our product deferred revenue increased approximately $687 million versus last quarter. We remain in a period of ramping our new products, winning new customers, and expanding new use cases including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis independent of underlying business drivers. Accounts payable days were 65 days, up from 49 days in Q1, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $24 million. In October 2024, we began our initial construction work to build expanded facilities in Santa Clara, and we expect to incur approximately $100 million in CapEx during fiscal year 2025 for this project. Now turning to guidance. Building on this strong Q2 and first-half performance, we expect continued momentum in the quarters ahead. Let's first start with our outlook for fiscal year 2025. As Jayshree mentioned, revenue growth is now estimated to be approximately 25%, or $8.75 billion.
This is fueled by demand across the AI, cloud, and enterprise sectors and demonstrates that Arista Networks' focus on pure-play networking is meeting the innovation needs of the market. One item to note with this revenue guide raise: we are now increasing our campus revenue target to between $750 million and $800 million, inclusive of the minimal amount expected from the VeloCloud acquisition in FY25. We are excited to welcome VeloCloud to our team, and as stated earlier, we are working through integrating and enhancing the business model to better serve our customers. For gross margin, we expect a range of approximately 63% to 64%, inclusive of possible known tariff scenarios and benefiting from improved inventory management. For operating margin, the outlook is approximately 48%, a testament to the ability of Arista Networks to scale efficiently and effectively given the strength of our business and visibility into customer demand.
Here is our guidance for Q3: revenue of approximately $2.25 billion, as we continue to serve our customers and win new logos across AI, data center, WAN, and campus centers; gross margin of approximately 64%, inclusive of possible tariff scenarios; operating margin of approximately 47%; and an effective tax rate expected to be approximately 21.5%, with approximately 1.275 billion diluted shares. In closing, this is a great time to be an innovative networking leader. We are halfway through the year with solid momentum and are clear on our execution priorities. This makes us confident and excited in our ability to finish the year strongly. I would also like to wish Todd a very warm welcome to the Arista Networks team. I will now turn the call back to Rudy for Q and A.
Speaker 0
Thank you, Chantelle. We will now move to the Q and A portion of the Arista Networks earnings call to allow for greater participation. I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Regina, please take it away.
Speaker 2
We will now begin the Q and A portion of the Arista Networks earnings call. To ask a question during this time, simply press STAR and the number one on your telephone keypad. If you'd like to withdraw your question, press STAR one again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of George Notter with Wolfe Research. Please go ahead.
Speaker 0
Hi guys. Thanks very much. Appreciate it. The results are terrific, certainly, but I wanted to ask about the competitive environment. I know many investors in recent weeks and months have been looking at some of the growth at Celestica, and certainly NVIDIA's networking business, and projecting some of that strength, you know, as being negative for Arista Networks. I was just curious about your perspective on that, how you see the competitive environment, and how you see your differentiation. Anything you can say there would be great. Thanks.
Speaker 1
Sure, George. Welcome to Wolfe. Thank you for the wishes. Look, we've always lived in a very competitive industry, whether it was Cisco or specific networking vendors. We acknowledge NVIDIA's participation, both with InfiniBand and, you know, bundling with the GPUs. We've always acknowledged the coexistence with white box. From our perspective, the competitive landscape has not changed; it's more of the same. I recognize that the chatter was louder, and we understand that, you know, given the volatility of some of our customers, some years and some quarters are better. I think some of the chatter was louder because our Meta share wasn't growing the same way as it did year over year in prior years. From our perspective, our innovation and differentiation have never been stronger at a platform performance level and at a feature level. I want to add a third one, which is at a customer intimacy level.
They are so appreciative of the support, the quality, and the way we approach how to solve their problems. No change in our environment and innovation, but plenty of chatter outside. I appreciate that, I understand that, and I hope we prove the naysayers wrong.
Speaker 0
Super, thank you.
Speaker 1
Thank you, George.
Speaker 2
Our next question comes from the line of Meta Marshall with Morgan Stanley. Please go ahead.
Speaker 0
Great, thanks. Appreciate the question. I guess just on some of the strength that you were seeing in terms of cloud, I know it's getting increasingly hard to differentiate front end and back end, but do you attribute some of the upside that you're seeing this year to starting to see front-end upgrades maybe quicker than expected, or is this just back-end demand being stronger than expected? Thanks.
Speaker 1
Thanks, Meta. If you recall two, three years ago, maybe it's hard to remember all of that, I was actually very worried that cloud spending had frozen a little bit and all of the excitement and enthusiasm was going towards GPUs and how big is your GPU cluster, that kind of thing. We now see it coming back, the pendulum swinging to a more balanced deployment of both cloud and AI.
I think, as a result of all these AI deployments, as I've often said, the traffic patterns of cloud and AI are very different: the diversity of the flows, the distribution of the flows, the fidelity of the flows, the duration, the size, and the intensity. All of the AI traffic and deployments we and others have done are now putting pressure on the front-end cloud as well. That's why it's going to get harder to measure ourselves purely on the native GPU connections. Going forward, we see a much more distributed topology of cloud and AI combining together. It's not like HPC clusters, where they'll build one and tear it down. When they build an AI cluster, it's very expensive, it's like diamonds, and they want to take advantage of that and bring it forward to other cloud resources as well.
To the point of the question you were asking, our increased $550 million had a little bit of VeloCloud, not material, as Chantelle Breithaupt reminds me, but a lot of cloud and AI as well as enterprise campus.
Speaker 0
Great, thank you.
Speaker 2
Our next question comes from the line of Ryan Koontz with Needham and Company. Please go ahead.
Speaker 0
Great, thanks. I want to ask maybe a question for Todd and Jayshree, just about the fit of VeloCloud with your traditional go-to-market motion, which has been heavily direct. Do you expect VeloCloud to really beef up your channel efforts and your work with these MSPs? Can you expand on that a little bit? Thank you. Yeah. I think it's incredibly complementary on two fronts. One is it fills an enormous hole in the enterprise campus portfolio for the distributed branch. Being able to bring VeloCloud technology through our traditional Arista channel gives us an opportunity to cross-sell SD-WAN into so many existing campus accounts with the existing Arista go-to-market, which is amazing. VeloCloud also had a really strong MSP motion.
We are pushing really hard right now and really embracing that, not just to continue the VeloCloud success, but to now bring all of Arista's portfolio through that same channel, through those same partners, and really embrace that MSP motion and use the VeloCloud intellectual property in their business operation in order to learn from that and bring that MSP motion to all of the rest of this portfolio.
Speaker 1
Well said, Todd. You know, sometimes we're the engine and sometimes we're the caboose. In the case of the managed service providers, you know, we're definitely going to leverage the strength of VeloCloud.
Speaker 0
It makes sense. Jayshree, thanks so much.
Speaker 2
Our next question comes from the line of Antoine Chkaiban with New Street Research. Please go ahead.
Speaker 0
Hi, thank you very much for the question. Can you please remind us what's required for scale-up topologies and how that differs from scale-out? Are traffic patterns more predictable and easier to manage, and how do you see the competitive dynamic evolving? Does this create more differentiation for Arista Networks or more room for white box compared to scale-out?
Speaker 1
Yeah. First of all, I would say that scale-up is a new and unique requirement, and it is particularly going to come in as people start building more and more AI racks. When you're building an AI rack and you want to boost the radix and performance of an individual rack or cluster, and your XPU radix gets bigger and bigger, you also need a very simple interconnect, right? This interconnect in the past has been PCI Express and CXL, and now you're seeing a lot of NVIDIA NVLink, where you can really collapse your system board and XPU socket into an I/O. It's almost not a network, it's an I/O. It's a backend to a backend, if I can call it that. Scale-up networks will be an incremental new market as Arista pursues it today.
The majority of that market lies inside the compute structure and isn't something Arista is participating in today. We are very encouraged by the standards for scale-up Ethernet that Broadcom has initiated, and we're big fans of that effort. We think that Ethernet as a transport protocol is going to favor Arista and Broadcom very much. Any little bit of incremental share we get there will be better than the zero we have right now. We also think UALink is another spec that's coming out, and that may run as an overlay on top of an Ethernet underlay. There need to be some firm standards there, because today scale-up is frankly all proprietary and NVLink. We're encouraged because, just as we worked hard as a founding member of the Ultra Ethernet Consortium for backend Ethernet, where the migration from InfiniBand to Ethernet is literally happening in three to five years, we expect the same phenomenon on scale-up.
Speaker 0
Thank you, Jayshree.
Speaker 1
Thank you.
Speaker 2
Our next question comes from the line of Amit Daryanani with Evercore. Please go ahead.
Speaker 0
Thanks a lot. Congrats on a nice set of numbers here. Jayshree, as I think about the 25% guide that you folks are talking about for the full year, which is really impressive, can you just talk about what you are seeing specifically that's enabling you to raise your guide from 17% to 25%? What markets or what vectors do you see that make you feel better about it? Do you see the potential for this higher growth to be more durable as Arista Networks realizes the once-in-a-lifetime opportunity you talked about in your prepared remarks? Thank you.
Speaker 1
Okay, thank you for the wishes, Amit. As you know, it's not easy to execute on large numbers, so durable growth gets harder and harder as the numbers get bigger and bigger. We've always believed in a CAGR of mid-teens. We've always believed in double digits. We hope we will continue to grow for many years to come at those kinds of numbers. Coming back to your question, I think when we guided the year pragmatically back in February, what Chantelle and I saw was a lot of activity but not a lot of confirmation. Sitting here in August, that activity has now translated into a lot of confirmation in all three sectors. On enterprise campus, I couldn't be more bullish. We had a record quarter in terms of demand. Obviously we have to ship, but it's the strongest we've felt.
As Todd might appreciate, since he built a lot of Meraki, as you look at the campus, this is going to be very strategic, very large, and very important because it's a large TAM of $25 to $30 billion. Getting new logos, and getting our value and our differentiation understood, especially in the post-pandemic era, in terms of wired, wireless, IoT segmentation, security, zero trust, and zero-touch provisioning, was critical. We saw that shift happen in the first half of this year. On AI, I don't need to tell you that we lost one of our key anchor customers; the fifth customer was a sovereign AI customer, and that's pretty much out of these numbers. We still believe we will be able to achieve the $750 million backend target revenue and exceed $1.5 billion for the year. The exact numbers we'll know when we finally ship.
We can't give you those specifics now, but despite losing one customer, we're seeing a lot of activity in the four big ones. It's a pleasant surprise to us to see the advent of enterprise and even some neo clouds. The numbers are small, not as big as the large titans, but it's all adding up. The third thing, as I was telling Meta, was the cloud itself. When you start putting that kind of pressure on performance, bandwidth, and capacity on the back end of the network, eventually you have to go refresh the front-end cloud. We're seeing many more migrations from 100 to 400 gig and even 800 gig, and that's helping. All three are contributing to this new growth we are projecting for the year.
Speaker 0
Perfect, thanks.
Speaker 2
Our next question comes from the line of Michael Ng with Goldman Sachs. Please go ahead.
Speaker 1
Hey, good afternoon.
Speaker 0
I just have one question and one follow-up, I guess, for Chantelle. I'm just wondering if you could comment a little bit more on the deferred revenue, or the billings growth. What was the primary driver there? I was wondering if that was a contributor to the magnitude of the guidance increase that we saw. Second, Jayshree, you mentioned the path towards $10 billion in 2026. That's quite a ways out, but maybe you can speak to some of the things that you're seeing. I know you talked a lot through it, but maybe the confidence in that number and things that could break the right way and result in that number being even better. Thank you.
Speaker 2
Yeah, thank you for your question. For the first one, in the sense of the deferred balance between products and services, this is indicative of new products, new use cases, and AI, as I mentioned in my remarks, and it's across those categories. As far as this year, the deferred is going to come out this year and next year, because some of the use cases we have in there run 12, 18, or 24 months, to your point. It's always helpful to have the deferred revenue balance growing. It does move, and it does move with volatility, given the size of so many new use cases and new product introductions. More to come; we don't guide it, but I would say that those are the factors that go into it. That covers your first question, and then over to Jayshree.
Speaker 1
Yeah, you know, thank you, Michael. A couple of things, even on 2025. I think there is a parallel for me here, because I'm such a historian and I have so many years behind me: if you look at the cloud era, we had a lot of deferred then too, and on one hand I can tell you guys not to pay attention to it; it'll eventually come out and something else will come in. There is a very high level of experimentation with new GPUs: the number of GPUs, the location of the GPUs, the distribution of the GPUs, and the traffic patterns. There's a lot more work going on there and customers are experimenting; customers are seeing new GPUs every 18 months, and we have to adapt to that. We have to look at performance, high availability, automation, and visibility.
It's a nontrivial amount of complex work. By the way, often these are not brand new builds; it's part of a brownfield where we're trying to do all this. You know, it's a the-car-is-running-at-100-miles-an-hour-and-we're-trying-to-add-AI-to-it kind of problem. Don't underestimate that this deferred activity, which has been going on since last year, will continue this year and perhaps next as well; the deferred cycle is getting longer. On one hand, I'll say don't pay too much attention to it: sometimes it'll come out, sometimes it'll go in. On the other hand, this is definitely more because of AI than anything else. In terms of 2026, I think it's only fair we save the details for Analyst Day. That said, we did announce it, and I would be remiss if I didn't tell you I'm very proud of the team: we're looking to achieve $10 billion in revenue in 2026, two years ahead of schedule. I had promised you guys that for 2028 back at the last Analyst Day. There you go. That's the headline.
Speaker 0
That's very clear. Thank you, Jayshree. Thank you, Chantelle.
Speaker 1
Thank you. Thank you.
Speaker 2
Our next question comes from the line of Simon Leopold with Raymond James. Please go ahead.
Speaker 0
Thank you very much. I know you don't like to give us specifics on customer contributions. What I'm really trying to see, if you could help us, is how that mix might be shifting, given that it does seem like there should be some broadening with the neo clouds, the sovereigns, and enterprise. At the same time, some of your biggest customers are growing their spending significantly. Any clues, hints, or quantification you can offer to help us better appreciate what your concentration and largest-customer mix looks like and how it's trending?
Speaker 1
Yeah, I'll try my best. The minute we call them a titan, and you know we have now included some customers in the titans that previously weren't there and moved others out of the titans into the specialty providers, they have a big spend, so you should not be surprised to see at least 10% concentration from our two favorite customers, and we will get greater contribution from our other customers, even if they're not at 10%, because of the AI investments. Our AI titans, if you will, and our cloud titans are going to make a meaningful, really high contribution to the year. That's one half of the coin. The other half is, you know, why we're so excited to have Todd. Don't underestimate the power of all these customers coming together as an aggregate, adding up to a very high number.
No one of them is your single titan; as a collection, they're a titan, but each one of them by themselves is a meaningful contributor. What Todd's team, along with Pushmit, Chris, Belmer, and Ashwin, is doing is just fantastic. We're really going to have a balanced approach of two very meaningful businesses contributing together. Definitely, AI is going to create large investments. The large CapEx that you have all seen from our customers is going to translate into some investment in us too. We're equally excited about the enterprise.
Speaker 0
Thank you.
Speaker 2
Our next question comes from the line of Tal Liani with Bank of America. Please go ahead.
Speaker 0
I want to talk about the sustainability of growth. Tomahawk 6 was delayed, and the question is whether there is any correlation between the delays in Tomahawk 6 and your growth. Are customers buying more now than before? Maybe they waited for it. Another follow-up on the sustainability of growth is also the sustainability of margins at 49%, almost 49% operating margin. When do you start to upset your customers, your big customers, because they have an alternative to buy white boxes and it's cheaper? Do you have that much of a differentiation that justifies paying a lot more for a product versus white boxes, assuming that on white boxes the manufacturer doesn't make 49% margin?
Speaker 1
Okay, Tal, that's a loaded question on sustainability. Have we had sustainability over the last 15 years? Have we had white box over the last 15 years? I'm asking these questions rhetorically. I'm not expecting you to answer them. Look, it's going to be competitive. There's a set of, you know, throwaway white boxes that some set of ODM manufacturers will build where they don't need all the value. Particularly in the leaf situation, we can see that if you don't need features, you don't need value, then you probably won't pay the premium price. I also want to add that 49% operating margin is not a function of just our value, it's a function of our efficiency. This company knows how to do more with less. We don't just throw thousands of marketing people, salespeople, or engineers on one problem. We architect it correctly and we've always been efficient.
I challenge you to find somebody, some other company, that does it more efficiently. That's not a white box problem. That's efficiency. Our customers appreciate that. We don't have layers and hierarchies and big company corporate stuff. We do this efficiently. They're kind of two different things. No doubt we will coexist with white box. No doubt a set of customers will appreciate our support, our quality, our innovation, and would be willing to pay the premium. As I've often said to you as well, Tal, you can trade CapEx for OpEx and vice versa. You can buy a cheap box and then you can support it yourself. You're going to need hundreds of engineers to do that.
That's one model and the other is Arista, where we'll put in the buffers, the congestion control, the value, the EOS, and hopefully you will need less support staff to do that.
Speaker 0
What about the?
Speaker 1
You did ask me about Tomahawk 6. I mean, Broadcom's been a fantastic partner. I don't think they're late on it; this is very complex silicon. Tomahawk 6 is in our labs. Stay tuned for new products next year.
Speaker 0
Thank you.
Speaker 1
Thank you.
Speaker 2
Our next question comes from the line of James Fish with Piper Sandler. Please go ahead.
Speaker 0
Hey, great quarter for you, Jayshree. We've talked a little bit in the past about blue box instead of white box. What are you seeing on the blue box side versus full systems with some of your main customers? Todd, sorry, you can't escape me here. On the VeloCloud side of things, obviously that space has evolved into SASE. Jayshree, you even said, hey, we're going to partner with our security partners for full SASE, but do you need to think about that more directly? Just because it is becoming a world where customers are looking for a full SASE offering from one throat to choke, as they say. How are you thinking about a broader SASE offering as opposed to just having the SD-WAN part? Thanks, guys.
Speaker 1
Todd, do you want to take the second one while I figure out the first question? Yeah.
Speaker 0
We are looking very carefully at how we support customers with a fully integrated SASE SD-WAN solution. It's the secure WAN that matters, and delivering that solution with great assurance is certainly top of mind for us. I think we have a real opportunity to do that through partnership. There are so many amazing cloud security vendors out there right now, and we have so many customers that use the VeloCloud solution along with those partners. I think that's the way we're going to be leaning in moving forward. Certainly, we'll be talking more about that at Investor Day later this year.
Speaker 1
Yeah. Just to add to what Todd said, James, we see the bifurcation of SD-WAN, sort of there's a fork in the road in two ways. One is where there's a security angle on it, and if it's just simple security, encryption, segmentation, a firewall, we can do that. If it's really the cloud security like Zscaler or Palo Alto do, we will absolutely work with best-of-breed partners and not pretend to be something we're not. Our branch infrastructure to support security is very much an Arista priority. Our branch infrastructure, as Todd said, to become a SASE or secure WAN is an overlay on top of that that we work with our partners. We really see, like I said, that fork in the road where SD-WAN isn't just a SASE solution, it's also a branch solution.
When you have all these campuses with large headquarters, and then your home is a branch, your retail site is a branch, your library is a branch, you need a mini-me version of our campus. This is where I think VeloCloud will really shine with Arista products, with our wired and wireless, and bringing all of that CloudVision and VeloCloud Orchestrator together for seamless provisioning is a big goal of ours, all the way from the multi-domain CloudVision to the cognitive unified edge and experience down to the branch. We're excited about that fork in the road, one where we'll partner and one where we'll build more integration ourselves. Blue box is very much an important part of our strategy, still in strategy form. We expect to see that evolve in the next few years.
We haven't had to build that muscle yet because we're still in crazy AI mode, but we absolutely will complement and coexist with white box to offer an Arista blue box. What do I mean by that? It means very battle-tested, well-designed hardware, the way Andy Bechtolsheim and his team know how to do it, that can be delivered, you know, as an upgrade or as a better, hardened white box. We fully plan to do that, and in fact we do that with bundled software today.
Speaker 0
Thanks, guys.
Speaker 1
Thank you, James.
Speaker 2
Our next question comes from the line of Samik Chatterjee with J.P. Morgan.
Speaker 1
Please go ahead.
Speaker 0
Yep. Hi, thanks for taking my question, Jayshree. Strong set of results here, and congratulations on the strong outlook as well. If I go back to your comments about the ability to meet the $750 million AI backend number even as the fifth customer is absent now, is that largely stemming from bigger cluster size deployments from your existing tier one customers, or is something else moving around in terms of timing of those deployments related to expectations? Just as a follow-up, I think the fourth customer you had earlier referenced was much slower in terms of activity. Can you just give us an update on that front? Yeah, thank you.
Speaker 1
That's a good question, Samik. Thank you for the wishes as well. I think two of our customers have already approached, or are going to quickly approach, 100,000 GPUs. I don't think it's just about how big anymore; you know, we used to talk about a million GPUs and all that. Increasingly, what we're seeing is more and more distributed GPU clusters for training and inference. Two customers have reached that goal, the third one might reach that goal, and the fourth one, which I said we are just beginning with, is probably too early to reach that 100,000. That's probably a goal for next year. That's the composition: two are strong, one is medium, and the other is still developing.
To make that number, or actually to exceed it, you may have noticed that I pointed out we now have, in aggregate, I think last time we said 15 and now we're saying 25 to 30, enterprise and neo cloud customers. They're not big individually, but together they add up to compensate for the loss of the fifth customer and the slowness of the fourth. We believe that, within the $550 million increase, AI will be a contributor. Exactly how it will shape up will depend on what we ship. We're feeling really good, and I won't measure it anymore just on the number of GPUs. I think there's a lot more to do with locality, distribution, radix, and also the choice of multi-tenant optimizations, collective libraries, level of resilience, et cetera. We're seeing a lot more complexity going into this than the straight number of GPUs.
Speaker 0
Thank you.
Speaker 1
Thank you.
Speaker 2
Our next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.
Speaker 0
Yeah, thanks for taking a question. Also, congrats on the quarter. This probably builds on a few other earlier questions, but you know, Jayshree, I'm curious as we think about the sovereign AI opportunity, whether or not that's factoring at all into kind of what you're seeing currently. I know you alluded to a fifth customer which was a sovereign falling out, but I'm curious how you think about that opportunity set, what you're seeing as far as customer engagement and if we should kind of think about that as becoming a more material, incremental driver as we look through 2026 and beyond. Thank you.
Speaker 1
Yeah, no, Aaron, that's a good point. We've been once bitten, twice shy. Since our fifth customer was a sovereign AI and it didn't work out, we're certainly not factoring it into our numbers this year, but we haven't lost faith or hope that it could be an important segment for us in the next several years. I think there's going to be a lot of expanded build-outs. In fact, one of the neo clouds is a sovereign AI, which is a non-NVIDIA cluster that we're working with right now. That may factor in in 2026, but having said that, it's still early days and we're cautiously optimistic.
Speaker 0
Thank you.
Speaker 2
Our next question comes from the line of Atif Malik with Citi.
Speaker 1
Please go ahead.
Speaker 0
Hi. Thank you for taking my question. Jayshree, you talked about scale-up Ethernet to be incremental to your TAM. Curious if you have any sense how big this TAM is in three years.
Speaker 1
Atif, I don't know yet. In terms of port density, in terms of units, if I look at the ratio within a rack versus outside the rack, it's quite high, 8 to 1, 10 to 1. In terms of dollars, I don't think it's nearly as much because the level of functionality required is much simpler. How about we hold that question for September, when we'll know more?
Speaker 0
That's the deal. Thank you.
Speaker 1
Okay, thank you. I owe you one answer.
Speaker 2
Our next question will come from the line of Carl Ackerman with BNP Paribas. Please go ahead.
Speaker 0
Yes, thank you. Jayshree, you noted you are seeing good activity with the top four hyperscalers. While you indicated that your backend revenue this year will be primarily driven by two of them, would you expect that all four cloud providers would adopt Arista switches for backend deployments in 2026? Where are you seeing the most opportunities with these neo cloud providers? That certainly could be a big opportunity as you see it. Thanks.
Speaker 1
Carl, the short answer would be yes; we've got some work to do, but the answer is absolutely all four of them. Two of them already have large deployments, and the other two will be deployed in the backend as well. It'll also fuel the front end. In terms of the neo clouds, almost always the neo cloud is a combination of back and front; it's never just one or the other. Definitely, the neo clouds also have a backend component.
Speaker 0
Thank you. Regina, we have one last question.
Speaker 1
Thank you.
Speaker 2
Our final question will come from the line of David Vogt with UBS. Please go ahead.
Speaker 0
Great. Thanks, guys, for squeezing me in. Jayshree, I just wanted to maybe pick your brain a little bit. You mentioned scale-up, but can we talk about the competitive, or maybe the technical, opportunities in scale-out with Jericho 4, which was announced today or yesterday, and how you're thinking about what that means for your technology position with regard to distributed AI going forward? I know scale-up is an incremental opportunity, but maybe just share your thoughts on where you stand on scale-out.
Speaker 1
This is a really good, thoughtful question because this is our bread and butter. Arista is the premier scale-out spine platform. The 7800 spine, our AI spine, is a real flagship franchise platform. It takes advantage of all of the virtual output queuing, the congestion control, the per-flow queuing, the buffering, et cetera, in a way that nobody else in the industry is able to demonstrate. By the way, besides being a great AI spine, it's also a great routing platform for the WAN. This product is the anchor for a lot of what we do at scale-out, both on the backend and the frontend; it has been our workhorse for some time and is only getting better. Much of what we've done so far is 400 gig. With Jericho 4, congratulations Broadcom, we're looking forward to 800 gig and then beyond for others as well.
Thank you for reminding us that we're continuing to push the envelope of innovation, and we fully expect the series that we started with R1, R2, R3 to evolve to R4, all in the context of a very consistent software and platform architecture. Thank you.
Speaker 0
This concludes Arista Networks' second quarter 2025 earnings call. We have posted a presentation that provides additional information on our results, which you can access in the Investor section of our website. Thank you for joining us today and for your interest in Arista.
Speaker 2
Thank you for joining, ladies and gentlemen. This concludes today's call. You may now disconnect.