Arista Networks - Q4 2025
February 12, 2026
Transcript
Operator (participant)
Welcome to the Fourth Quarter and 2025 Arista Networks Financial Results Earnings Conference Call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question-and-answer session. Instructions will be provided at that time. If you need to reach an operator at any time during the conference, please press the star key followed by zero. As a reminder, this conference is being recorded and will be available for replay from the investor relations section on the Arista website following this call. Mr. Rudolph Araujo, Arista's VP of Investor Advocacy, you may begin.
Rudolph Araujo (VP of Investor Advocacy)
Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks Chairperson and Chief Executive Officer, and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal fourth quarter, ended December 31, 2025. If you want a copy of the release, you can access it online on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the first quarter of the 2026 fiscal year, longer-term business model, and financial outlooks for 2026 and beyond.
Our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, tariffs and trade restrictions, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization, and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call.
This analysis of our Q4 results and our guidance for Q1 2026 is based on non-GAAP results and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.
Jayshree Ullal (Chairperson and CEO)
Thank you, Rudy, and thank you everyone for joining us this afternoon for our fourth quarter and full-year 2025 earnings call. Well, 2025 has been another defining year for Arista. With the momentum of generative AI and cloud and enterprise, we have achieved well beyond our goal, with 28.6% growth driving a record revenue of $9 billion, coupled with a non-GAAP gross margin of 64.6% for the year and a non-GAAP operating margin of 48.2%. The Arista 2.0 momentum is clear as we surpassed 150 million cumulative ports shipped in Q4 2025. International growth was a good milestone, with both Asia and Europe growing north of 40% annually.
As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion, as well as $1.5 billion in AI center networking. Shifting to annual customer sector revenue for 2025: cloud and AI titans contributed significantly at 48%; enterprise and financials recorded 32%; while AI and specialty providers, which now includes Apple, Oracle, and their initiatives, as well as emerging neo clouds, performed strongly at 20%. In terms of customer concentration, we had two greater-than-10% customers in 2025: Customers A and B drove 16% and 26% of our overall business, respectively. We cherish our privileged partnerships that have spanned 10 to 15 years of collaborative engineering. With our ever-increasing AI momentum, we anticipate a diversified customer base in 2026, including one, maybe even two, additional 10% customers.
In terms of annual 2025 product lines, our core cloud, AI, and data center products, built upon a highly differentiated Arista EOS stack, are successfully deployed across 10 Gigabit to 800 Gigabit Ethernet speeds, with 1.6 Terabit migration imminent. This includes our portfolio of Etherlink AI and our 7000 series platforms for best-in-class performance, power efficiency, high availability, automation, and agility across front-end and back-end compute, storage, and all of the interconnect zones. Of course, we interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, Arm, Broadcom, OpenAI, Pure Storage, and VAST Data, to name a few, that create the modern AI stack of the 21st century.
Arista is clearly emerging as the gold standard terabit network to run these intense training and inference models, processing tokens at teraflops. Arista's core sector drove approximately 65% of revenue. We are confident of our number one position in market share in high-performance switching, according to most major industry analysts. We launched our Blue Box initiative, offering enriched diagnostics of our hardware platforms, dubbed Netdi, that can run across both our flagship EOS and our open NOS platforms. We saw an excellent uptick in 800 gig adoption in 2025, gaining greater than 100 customers cumulatively for our Etherlink products, and we are co-designing several AI rack systems with 1.6T switching emerging this year. With our increased visibility, we are now doubling our AI networking revenue from 2025 to 2026, to $3.25 billion.
Our network adjacencies market comprises routing, replacing routers, and our cognitive AI-driven AVA Campus. Our investments in cognitive wired and wireless, zero-touch operations, network identity, scale, and segmentation have received several accolades in the industry. Our open modern stacking with SWAG, Switch Aggregation Group, and our recent VESPA for layer two and layer three wired and wireless scale are compelling campus differentiators. Together with our recent VeloCloud acquisition in July 2025, we are driving that homogeneous, secure client-to-branch-to-campus solution with unified management domains. Looking ahead, we are committed to our aggressive goal of $1.25 billion for 2026 for the cognitive campus and branch. We have also successfully deployed in many routing edge, core spine, and peering use cases.
In Q4 2025, Arista launched our flagship 7800R4 spine for many routing use cases, including DCI and AI spines, with that massive 460 terabits of capacity to meet the demanding needs of multi-service routing, AI workloads, and switching use cases. The combined campus and routing adjacencies together contribute approximately 18% of revenue. Our third and final category is network software and services based on subscription models, such as A-Care, CloudVision, Observability, Advanced Security, and even some branch edge services. We added another 350 CloudVision customers, almost one new customer a day, and have deployed an aggregate of 3,000 customers with CloudVision over the past decade. Arista's subscription-based network services and software revenue contributed approximately 17%; please note that this does not include perpetual software licenses, which are otherwise included in core or adjacent markets.
Arista 2.0 momentum is clear. We find ourselves at the epicenter of mission-critical network transactions. We are becoming the preferred network innovator of choice for client-to-cloud and AI networking, with a highly differentiated software stack and a uniform CloudVision software foundation. We are proud to power Warner Bros. streaming distribution across 47 markets in 21 languages for the Pan-European Winter Olympics that is happening as I speak. We are now north of 10,000 cumulative customers, and I'm particularly impressed with our traction in the $5-$10 million customer category, as well as the $1 million customer category, in 2025. Arista's 2.0 vision resonates with our customers, who value us for leading that transformation from incongruent silos to reliable centers of data. The data can reside in campus centers, data centers, WAN centers, or AI centers, regardless of location.
Networking for AI has achieved production scale with an all-Ethernet-based Arista AI Center. In 2025, we were a founding member of the Ethernet-based standards for both scale-up, with ESUN, as well as completing the Ultra Ethernet Consortium 1.0 specification for scale-out AI networking. These AI centers seamlessly connect the back-end AI accelerators to the front end of compute, storage, WAN, and classic cloud networking. Our AI-accelerated networking portfolio, consisting of three families of Etherlink spine-leaf fabric, is successfully deployed in scale-up, scale-out, and scale-across networks. Network architectures must handle both training and inference frontier models to mitigate congestion. For training, the key metric is obviously job completion time: the time between admitting a training job to an AI accelerator cluster and the end of the training run. For inference, the key metric is slightly different.
It's the time to first token, basically the latency between a user submitting a query and receiving the first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, size of traffic flows, and all the patterns associated with them. Our AI for networking strategy, based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our publish-subscribe state foundation in EOS and NetDL, our Network Data Lake, we instrument our customers' networks to deliver proactive, predictive, and prescriptive features for enhanced security, observability, and agentic AI operations. Coupled with the Arista validated designs for network simulation, digital twin, and validation functionality, Arista platforms are perfectly optimized and suited for network as a service. Our global relevance with customers and channels is increasing.
In 2025 alone, we conducted three large customer events across three continents, in Asia, Europe, and the United States, and many other smaller ones, of course. We touched 4,000-5,000 strategic customers and partners in the enterprise. While many customers are struggling with their legacy incumbents, Arista is deeply appreciated for redefining the future of networking. Customers have long appreciated our network innovation and quality, demonstrated by our highest Net Promoter Score of 93% and the lowest security vulnerabilities in the industry. We now see the pace of acceptance and adoption accelerating in the enterprise customer base. Our leadership team, including our newly appointed co-presidents, Ken Duda and Todd Nightingale, has driven strategic and cohesive execution. Tyson Lamoreaux, our newest senior vice president, who joined us with deep cloud operations experience, has ignited our hypergrowth across our AI and cloud titan customers.
Exiting 2025, we are now at approximately 5,200 employees, which also includes the recent VeloCloud acquisition. I am incredibly proud of the entire Arista A Team, and thank you to all employees for your dedication and hard work. Of course, our top-notch engineering and leadership team has always steadfastly prioritized our core Arista Way principles of innovation, culture, and customer intimacy. Well, I think you would agree that 2025 has indeed been a memorable year, and we expect 2026 to be a fantastic one as well. We are amid unprecedented networking demand with a massive and growing TAM of $100+ billion. And so despite all the news on mounting supply chain allocation and rising costs of memory and silicon fabrication, we are increasing our 2026 guidance to 25% annual growth, now accelerating to $11.25 billion. With that happy news, I turn it over to Chantelle, our CFO.
Chantelle Breithaupt (CFO)
Thank you, Jayshree, and congratulations to you and our employees on a terrific 2025. As you outlined, this was an outstanding year for the company, and that strength is clearly reflected in our financial results. Let me walk through the details. To start off, total revenues in Q4 were $2.49 billion, up 28.9% year-over-year and above the upper end of our guidance of $2.3 billion-$2.4 billion. It was great to see that all geographies achieved strong growth within the quarter. Services and subscription software contributed approximately 17.1% of revenue in the fourth quarter, down from 18.7% in Q3, which reflects the normalization following some non-recurring VeloCloud service renewals in the prior quarter.
International revenues for the quarter came in at $528.3 million, or 21.2% of total revenue, up from 20.2% last quarter. This quarter-over-quarter increase was driven by a stronger contribution from our large global customers across our international markets. The overall gross margin in Q4 was 63.4%, slightly above the guidance of 62%-63% and down from 64.2% in the prior year. This year-over-year decrease is due to the higher mix of sales to our cloud and AI Titan customers in the quarter. Operating expenses for the quarter were $397.1 million, or 16% of revenue, up from the last quarter at $383.3 million.
R&D spending came in at $272.6 million, or 11% of revenue, up from 10.9% last quarter. Arista continued to demonstrate its commitment and focus on networking innovation with a fiscal year 2025 R&D spend at approximately 11% of revenue. Sales and marketing expense was $98.3 million, or 4% of revenue, down from $109.5 million last quarter. FY 2025 closed the year with sales and marketing at 4.5%, representative of the highly efficient Arista go-to-market model. Our G&A costs came in at $26.3 million, or 1.1% of revenue, up from $22.4 million last quarter, reflecting continued investment in systems and processes to scale Arista 2.0.
For fiscal year 2025, G&A expense held at 1% of revenue. Our operating income for the quarter was $1.2 billion, or 47.5% of revenue. This strong Q4 finish contributed to an operating income result for fiscal year 2025 of $4.3 billion or 48.2% of revenue. Other income and expense for the quarter was a favorable $102 million, and our effective tax rate was 18.4%. This lower than normal quarterly tax rate reflected the release of statutory tax reserves due to the expiration of the statute of limitations. Overall, this resulted in net income for the quarter of $1.05 billion or 42% of revenue. It is exciting to see Arista delivering over $1 billion in net income for the first time.
Congratulations to the Arista team on this impressive achievement. Our diluted share number was 1.276 billion shares, resulting in a diluted earnings per share for the quarter of $0.82, up 24.2% from the prior year. For fiscal year 2025, we are pleased to have delivered a diluted earnings per share of $2.98, a 28.4% increase year-over-year. Now, turning to the balance sheet. Cash, cash equivalents, and marketable securities ended the quarter at approximately $10.74 billion. In the quarter, we repurchased $620.1 million of our common stock at an average price of $127.84 per share.
Within fiscal 2025, we repurchased $1.6 billion of our common stock at an average price of $100.63 per share. Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remains available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price, and other factors. Now, turning to operating cash performance for the fourth quarter, we generated approximately $1.26 billion of cash from operations in the period. This result was an outcome of strong earnings performance, with an increase in deferred revenue, offset by an increase in accounts receivable driven by higher shipments and end-of-quarter service renewals.
DSOs came in at 70 days, up from 59 days in Q3, driven by renewals and the timing of shipments in the quarter. Inventory turns were 1.5 times, up from 1.4 last quarter. Inventory increased marginally to $2.25 billion, reflecting diligent inventory management across raw and finished goods. Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases of chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing, such as the supply constraint on DDR4 memory, and the lead times from our key suppliers.
Our total deferred revenue balance was $5.4 billion, up from $4.7 billion in the prior quarter. In Q4, the majority of the deferred revenue balance is product related. Our product deferred revenue increased approximately $469 million versus last quarter. We remain in a period of ramping our new products, winning new customers, and expanding new use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses, and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days were 66 days, up from 55 days in Q3, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $37 million.
In October 2024, we began our initial construction work to build expanded facilities in Santa Clara and incurred approximately $1 million in CapEx during fiscal year 2025 for this project. As we have moved through 2025, we have gained visibility and confidence for fiscal year 2026. As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 25% revenue growth, delivering approximately $11.25 billion. We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI centers goal from $2.75 billion to $3.25 billion. For gross margin, we reiterate the range for the fiscal year of 62%-64%, inclusive of mix and anticipated supply chain cost increases for memory and silicon.
In terms of spending, we expect to continue to invest in innovation, sales, and scaling the business to ensure our status as a leading pure-play networking company. With our increased revenue guidance, we are now confident in raising the operating margin outlook to approximately 46% for 2026. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments. Our structural tax rate is expected to be 21.5%, back to the usual historical rate, up from the seasonally lower rate of 18.4% experienced last quarter, Q4 2025.
With all of this as a backdrop, our guidance for the first quarter is as follows: revenues of approximately $2.6 billion, gross margin between 62%-63%, and operating margin at approximately 46%. Our effective tax rate is expected to be approximately 21.5%, with approximately 1.275 billion diluted shares. In closing, at our September Analyst Day, we had a theme of building momentum, and we are doing just that. In campus, WAN, data, and AI centers, we are uniquely positioned to deliver what customers need. We will continue to deliver both our world-class customer experience and innovation. I am enthusiastic about our fiscal year ahead. Now back to you, Rudy, for Q&A.
Rudolph Araujo (VP of Investor Advocacy)
Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Regina, please take it away.
Operator (participant)
We will now begin the Q&A portion of the Arista earnings call. To ask a question during this time, simply press star and then the number one on your telephone keypad. If you would like to withdraw your question, press star and the number one again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of Meta Marshall with Morgan Stanley. Please go ahead.
Meta Marshall (Executive Director)
Great, and congratulations on the quarter. I guess in terms of the commentary you had, Jayshree, on the one or two additional 10% customers, just digging more into that, what are the puts and takes? You know, is it bottlenecks in terms of their building? What would make or break whether those become two new additional 10% customers? Thank you.
Chantelle Breithaupt (CFO)
Thank you, Meta, for the good wishes. So obviously, if I didn't have confidence, I wouldn't dare to say that, would I? But there are always variables. Some of it may be sitting in deferred, so there are acceptance criteria that we have to meet, and there's also timing associated with meeting the acceptance criteria. Some of it is demand that is still underway, and, you know, in this age of all the supply chain allocation and inflation, we've got to be sure we can ship. So we don't know if it's exactly 10% or high single digits or low double digits; a lot of variables will decide that final number, but certainly the demand is there.
Meta Marshall (Executive Director)
Great. Thank you.
Chantelle Breithaupt (CFO)
Thank you.
Operator (participant)
Our next question will come from the line of Samik Chatterjee with JP Morgan. Please go ahead.
Samik Chatterjee (Managing Director and Senior Equity Research Analyst)
Hi. Thanks for taking my question, and Jayshree, congrats on the quarter and the outlook. I don't want to say that the 25% growth is not impressive, but since the 1Q guidance implies roughly 30% growth, maybe help me understand what's leading to the somewhat more cautious visibility for the rest of the year. Is it these one or two new customers and their ramps that you're more cautious about? Or is it availability of supply of some of the components or memory that's giving you a bit more cautiousness about the visibility for the remainder of the year? If you could help us understand the drivers there.
Chantelle Breithaupt (CFO)
Yeah.
Samik Chatterjee (Managing Director and Senior Equity Research Analyst)
Thank you.
Chantelle Breithaupt (CFO)
No.
Jayshree Ullal (Chairperson and CEO)
Thank you, Samik. First, I don't think I'm being cautious. I think I went all out to give you a high dose of reality, but I understand your views on caution, given all the CapEx numbers you see from customers. That's an important thing to understand: we don't track the CapEx. The first thing that happens with that CapEx is they have to build the data centers, get the power, and get all of the GPUs and accelerators, and the network lags a little. So demand is going to be very good, but whether the shipments exactly fall into 2026 or 2027 (Todd, you can clarify when they really fall in), there are a lot of variables there. That's one issue.
The second, as I said, is that a large amount of these are new products, new use cases, highly tied to AI, where customers are still in their first innings. So again, I'm giving you the greatest visibility I can, fairly early in the year, on the reality of what we can ship, not what the demand might be. It might be a multi-year demand that ships over multiple years. So let's hope it continues, but of course, you must understand that we're also facing the law of large numbers. So 25% on a base of now $9 billion, when we started last year at $8.25 billion, is a really good early start.
Samik Chatterjee (Managing Director and Senior Equity Research Analyst)
Thank you.
Operator (participant)
Our next question will come from the line of David Vogt with UBS. Please go ahead.
David Vogt (Managing Director and Senior Equity Research Analyst)
Great. Thanks, guys, for taking my question. Maybe Chantelle and Jayshree, can you help quantify both the revenue impact and the potential gross margin impact embedded in your guide from the memory dynamics and the constraints? I know last quarter, and you even mentioned this quarter, the supply chain does have some constraints. When you think about, Jayshree, what you just called the real outlook that you see, maybe can you help parameterize what you think could hold you back-
Jayshree Ullal (Chairperson and CEO)
Right.
David Vogt (Managing Director and Senior Equity Research Analyst)
If that's the way to phrase it, and just give us a sense for what upside could be, you know, in a perfect world, effectively, if you could share that.
Jayshree Ullal (Chairperson and CEO)
I'm going to give some general commentary, and Chantelle, if you don't mind, add to it. You know, our peers in the industry have been facing this probably longer than we have, because I think the server industry saw it first, because they're more memory intensive. Add to that, we're expecting increases from silicon fabrication, as all the chips are made, as you know, centrally with one company, Taiwan Semiconductor. So Arista has taken a very thoughtful approach, being aware of this since 2025, and frankly absorbed a lot of the costs we were incurring in 2025. However, in 2026, the situation has worsened significantly. We're having to smile and take it at just about any price we can get, and the prices are horrendous. They're an order of magnitude higher.
So clearly, with the situation worsening and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see reflected in our purchase commitments, we are planning for this. And I know that memory is now the new gold for the AI and automotive sectors, but clearly, it's not going to be easy; it's going to favor those who planned and those who can spend the money for it. Chantelle?
Chantelle Breithaupt (CFO)
Yeah, I think the only thing I'd add to your question, David, and thank you for that, is that we're comfortable with the guide; that's why we have the guide and why we raised the numbers that we did. So we're comfortable that we have a path there within the numbers we provided. The range of 62%-64%, I think we are pleased to hold despite this kind of pressure coming into it. You know, this has been our guide since September at our Analyst Day, so we're pleased to hold that guide and find ways to mitigate this journey. Now, whether it ends up being 62.5% versus 63.5% within that range, that's where we'll continue to update you, but the range we're comfortable with.
David Vogt (Managing Director and Senior Equity Research Analyst)
Understood. Thanks, guys.
Jayshree Ullal (Chairperson and CEO)
Thank you, David.
Operator (participant)
Our next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.
Aaron Rakers (Managing Director and Senior Equity Analyst)
Yeah, thanks for taking the question, and congrats as well on the quarter and the guide. I guess when we think about the $3.25 billion guide for the AI contribution this year, I'm curious, Jayshree, how much you're factoring in, if at all, from the scale-up networking opportunity-
Jayshree Ullal (Chairperson and CEO)
Yeah.
Aaron Rakers (Managing Director and Senior Equity Analyst)
Is that still more of a '27? And also, can you unpack, ex the AI and ex the campus contribution? It appears that you're still guiding pretty muted, low-single-digit growth on non-AI. Just curious how you see the-
Jayshree Ullal (Chairperson and CEO)
Oh!
Aaron Rakers (Managing Director and Senior Equity Analyst)
the non-AI, non-campus growth.
Jayshree Ullal (Chairperson and CEO)
Yeah. Okay. Yeah. Well, you know, rising tide rises all boats, but some go higher and some go lower. But to answer your specific question, what was it, Aaron?
Aaron Rakers (Managing Director and Senior Equity Analyst)
How much scale up? Scale up.
Jayshree Ullal (Chairperson and CEO)
Oh, how about scale-up? We have consistently described that today's configurations are mostly a combination of scale-out and scale-up, largely based on 800 gig and smaller radix. Now the ESUN specification is well underway, and, Ken Duda, you can clarify, but I think the spec will be done within a year, or this year for sure. Ken and Hugh Holbrook are actively involved in that. We need a good solid spec; otherwise, we'll be shipping proprietary products like some people in the world do today. And so we will tie our scale-up commitment greatly to the availability of new products and a new ESUN spec, which we expect at the earliest to be Q4 this year.
Therefore, for the majority of this year, we'll be in trials; Andy Bechtolsheim and the team are working on a lot of active AI racks with scale-up in mind, but the real production level will be in 2027, primarily centered around not just 800 gig, but 1.6T. And I think that-
Aaron Rakers (Managing Director and Senior Equity Analyst)
Thank you.
Jayshree Ullal (Chairperson and CEO)
Oh, okay. Thank you, Aaron.
Operator (participant)
Our next question will come from the line of Amit Daryanani with Evercore ISI. Please go ahead.
Amit Daryanani (Senior Managing Director)
Yep, thanks a lot, and congrats from my end as well for some really good numbers here. Jayshree, if I think of some of these model builders like Anthropic, that I think you folks have talked about, you know, they're starting to build these multi-billion, multi-billion dollar clusters on their own now. Can you just talk about your ability to participate in some of these build-outs as they happen, be that on the DCI side or maybe even beyond that? And by extension, does this give you an opportunity to ramp up with some of the larger cloud companies that these model builders are partnering with over time as well, as they build out TP or training clusters? I'd love to just understand how that kind of business scales with you folks. Thank you.
Jayshree Ullal (Chairperson and CEO)
Yeah, no, Amit, that's a very thoughtful question, and I think you're absolutely right. The network infrastructure is playing a critical role with these model builders in a number of ways. If you look at us, initially we were largely working with, you know, one or two model builders and one or two accelerators, NVIDIA and AMD, and OpenAI was the primarily dominant one. But today, we see that there are really, you know, multiple layers in a cake, where you've got the GPU accelerators.
Of course, you've got power as the most difficult thing to get, but Arista needs to deal with multiple domains and model builders appropriately, whether it is Gemini or, you know, xAI or Claude or OpenAI, and many more coming. These models, and the multi-protocol nature of their algorithms, are something we have to make sure we build the network correctly for. So that's one. And then to your second point, you're absolutely right. I think the biggest issue is that the model builders are no longer in silos in one data center; you're going to see them across multiple colos and multiple locations, and in multiple partnerships with our cloud titan customers, in ways we've historically not worked.
So I think you'll see more co-pilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans in bringing the cloud and AI together.
Amit Daryanani (Senior Managing Director)
Thank you.
Jayshree Ullal (Chairperson and CEO)
Thank you, Amit.
Operator (participant)
Our next question comes from the line of George Notter with Wolfe Research. Please go ahead.
George Notter (Managing Director and Senior Research Analyst)
Hi, guys. Thanks very much. I was just curious about the product deferred revenue and how you see that, you know, coming off the balance sheet ultimately. Obviously, it's just been stacking up here quarter after quarter after quarter. So a few questions here: Does that come off in big chunks that we'll see, you know, in different quarters in the future? Does it come off more gradually? Does it continue to build? Like, what does the profile look like for that product deferred coming off the balance sheet and flowing through the P&L? And then also, I'm curious about how much product deferred do you have in the full year revenue guidance, the 25%? Thanks a lot.
Chantelle Breithaupt (CFO)
Yeah. Hey, George. Thanks for the questions. Not much has changed in the sense of how we have this conversation. What goes into deferred is new product, new customers, new use cases. The great new use case is AI. The acceptance criteria for that, for the larger deployments, is 12-18 months. Some can be as short as 6 months, so there's a wide variety that goes in. Deferred has balances coming in and out every quarter. We don't guide deferred, and we don't say product specific. What I can tell you in your questions is that there will be times where there are larger deployments that will feel a little lumpier as we go through. But again, it's a net release of a balance, so it depends what comes in at that same quarter timing.
George Notter (Managing Director and Senior Research Analyst)
Got it. Okay. Any sense for what's in the full year guide, then? I assume not much. Is that fair to say?
Jayshree Ullal (Chairperson and CEO)
Yeah. It's super hard, George. It's when the acceptance criteria happens. You know, if it happens December 32nd, it's a different situation. If it all happens in, you know, Q2, Q3, Q4, that's the difference. So that's something we really have to work with the customer. So-
George Notter (Managing Director and Senior Research Analyst)
Thank you.
Jayshree Ullal (Chairperson and CEO)
Sorry that we're not able to be clairvoyant on that.
George Notter (Managing Director and Senior Research Analyst)
Makes sense. Thank you.
Jayshree Ullal (Chairperson and CEO)
Thank you.
Chantelle Breithaupt (CFO)
Thank you.
Operator (participant)
Our next question comes from the line of Ben Reitzes with Melius Research. Please go ahead.
Ben Reitzes (Managing Director, Partner, and Head of Technology Research)
Hey, thanks a lot, and I guess my congrats to you guys. You know, this execution and guide is really something. So, I wanted to-
Jayshree Ullal (Chairperson and CEO)
Thank you, Ben.
Ben Reitzes (Managing Director, Partner, and Head of Technology Research)
You're welcome. I wanted to ask about two things that I just was wondering if you could talk a little bit more about your Neocloud momentum, and what that is looking like in terms of materiality. And then also, if you don't mind touching on AMD, with the launch. We're kind of hearing about you getting a lot of networking attached to the 450 type product or their new chips. I'm wondering if that is a catalyst or not, as you go throughout the year. Thanks so much.
Jayshree Ullal (Chairperson and CEO)
Yeah. So Ben, as you can imagine, the specialty cloud providers have historically been a cacophony of many types of providers. We are definitely seeing AI as one of the clear impacts. It used to be content providers and tier two cloud providers, but AI is clearly driving that section now. And it's a suite of customers, some of whom have real financial strength and are looking now to invest, increase, and pivot to AI. So the rate at which they pivot to AI will greatly define how well we do there. And, you know, they're not yet titans, but they want to be or could be titans, is the way to look at it. And we're going to invest with them, and these are healthy customers. It's nothing like the dotcom era, so we feel good about that.
There is a set of neo clouds that we watch more carefully, because some of them are, you know, oil money converted into AI or crypto money converted into AI. And over there, we are going to be much more careful, because some of those neo clouds are looking at Arista as the preferred partner, but we would also be looking at the health of the customer, or they may just be a one-time purchase. We don't know the exact nature of their business, and those will be smaller businesses that don't contribute in large dollars, but they are becoming increasingly plentiful in quantity, even if not yet large in dollars.
So I think you're seeing three types in that category: the classic CDN, security specialty, and tier two cloud providers; the AI specialty providers, who are going to lean in and invest; and then the neo clouds in different geographies.
Ben Reitzes (Managing Director, Partner, and Head of Technology Research)
The AMD?
Jayshree Ullal (Chairperson and CEO)
Oh, yes, the AMD question. You know, a year ago, I think I said this to you, but I'll repeat it. A year ago, it was pretty much 99% NVIDIA, right? Today, when we look at our deployments, we see about 20%, maybe a little more, 20% to 25%, where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred, because they're building best-of-breed building blocks for the NIC, for the network, for the IO, and they want open standards as opposed to a full-on vertical stack from one vendor. So you're right to point out AMD, and in particular, it's a joy to work with Lisa and Forrest and the whole team, and we do very well in that multi-vendor open configuration.
Ben Reitzes (Managing Director, Partner, and Head of Technology Research)
Thank you.
Operator (participant)
Our next question will come from the line of Tim Long with Barclays. Please go ahead.
Tim Long (Managing Director and Senior Equity Research Analyst)
Thank you. Yeah, appreciate all the color. Jayshree, maybe we could touch a little bit on scale across. It's obviously gotten a lot of attention, particularly on the optics layer from some others in the industry. Obviously, you guys have been in DCI, which is kind of a similar type technology, but curious what you think as far as Arista's participation in more of these next gen scale across networks. And is this something that would be good for, like, a Blue Box type of product, or would that more be in the scale up? So if you could give a little color there, that would be great.
Jayshree Ullal (Chairperson and CEO)
Right. Okay. So... You know, we thought most of our participation today would be scale out, but what we are finding, due to the distributed nature of where and how they can get the power and the bisectional bandwidth growth, is that scale out or scale across is essentially all about how much data you can move, right? As the workloads become more and more complex, you have to make them more and more distributed, because you just can't fit them in one data center, from a power, bandwidth, and throughput capacity standpoint. Also, these GPUs are trying to minimize the collective degradation.
So as you scale up or out, the communication patterns become very much a bottleneck, and one way to solve that is to extend across data centers, both through fiber and, as you rightly pointed out, very high injection bandwidth DCI routing. And then there's the sustained real-world utilization you need across all of these. So for all these reasons, we are pleasantly surprised with the role of coherent long-haul optics, which we don't build, but we have worked very closely in the past with companies that do, and they're seeing the lift. And the 7800 spine chassis is the flagship platform and preferred choice, designed by our engineering team over several years for this robust configuration.
So less Blue Box there, and much, much more of a full-on Arista flagship box, with EOS and all of the virtual output queuing and buffering, to interconnect regional data centers with extremely high levels of routing and high availability, too. So this really lends into everything Arista stands for, coming all together in a universal AI spine.
Tim Long (Managing Director and Senior Equity Research Analyst)
Okay, excellent. Thank you, Jayshree.
Jayshree Ullal (Chairperson and CEO)
Thank you.
Operator (participant)
Our next question will come from the line of Karl Ackerman with BNP Paribas. Please go ahead.
Karl Ackerman (Managing Director of Semiconductors and Networking Hardware)
Yes, thank you. Agentic AI should support an uptake in conventional server CPUs, where your switches have high share within data centers. And so given your upwardly revised outlook of 25% growth for this year, could you speak to the demand prospects you are seeing for front-end high-speed switching products that address agentic AI products? Thank you.
Jayshree Ullal (Chairperson and CEO)
Yeah. Exactly, Karl. Well, let's just go back in history; it's not that long ago. Three years ago, we had no AI. We were staring at InfiniBand being deployed everywhere in the back end, and we pretty much characterized our AI as back end only, just to be pure about it, right? Three years later, I'm actually telling you we might do north of $3 billion this year and growing, right? That number definitely includes the front end as it's tied to the back-end GPU clusters, and it's an all-Ethernet, all-AI system for agentic AI applications. Now, a lot of the agentic AI applications are mostly running with some of our largest cloud AI and specialty providers.
But I don't rule out the possibility, and you can see this in our numbers, with north of 8,800 great customers, that much of that is going to feed into the enterprise as well, as agentic AI applications come for genomic sequencing, science, you know, automation of software. I don't think, Ken, any of us believe that AI is eating software, but AI is definitely enabling better software, right? And we're certainly seeing that in Ken's team as well in our adoption of it. So the rise of agentic AI will only increase not just the GPU, but all gradations of XPU that can be used in the back end and front end.
Karl Ackerman (Managing Director of Semiconductors and Networking Hardware)
Thank you.
Jayshree Ullal (Chairperson and CEO)
Thank you, Karl.
Operator (participant)
Our next question comes from the line of Simon Leopold with Raymond James. Please go ahead.
Simon Leopold (Managing Director and Senior Equity Analyst)
Thank you very much for taking the question. I wanted to come back to what's going on with the memory market. Two aspects to this: one, I'm wondering how much of a role price hikes have played, you raising your prices to customers; and two, within the substantial amount of purchase commitments you have, whether there's a significant memory component, so that you've effectively pre-purchased memory at much lower prices than the spot market today? Thank you.
Jayshree Ullal (Chairperson and CEO)
Thank you. Okay, I wish I could tell you we purchased all the memory we needed. No, we didn't. But while our peers in the industry have done multiple price hikes already, especially those in the server market or with memory-intensive switches, we have clearly been absorbing it. Memory is in our purchase commitments, but so is everything else; the entire silicon portfolio is in our purchase commitments. Due to some of the supply chain reactions, Todd and I have been reviewing this, and we do believe there will be a one-time increase on selected, especially memory-intensive, SKUs to deal with it. We cannot absorb it if prices keep going up the way they have in January and February.
I would tell you that all the purchase commitments in my current, in Chantelle's current, commitments are not enough. We need more memory.
Simon Leopold (Managing Director and Senior Equity Analyst)
Thank you.
Operator (participant)
Our next question will come from the line of James Fish with Piper Sandler. Please go ahead.
James Fish (Managing Director and Senior Research Analyst)
Hey, ladies, great quarter, great end of the year. Jayshree, are hyperscalers getting nervous now at all and ordering ahead? What's your sense of potential pull-in of demand here, including for your own Blue Box initiative? And Chantelle, for you, just going back to George's question: I know it's difficult to answer, but are you anticipating that that product deferred revenue is going to continue to grow through the year? Or is it way too difficult to predict, and you've got customers that could just say, "You know, we accept, great, ship them all now," and so we end up with a big quarter but product deferred down?
Jayshree Ullal (Chairperson and CEO)
I'm gonna let Chantelle answer this difficult question over and over again.
Chantelle Breithaupt (CFO)
Sure.
Jayshree Ullal (Chairperson and CEO)
Go ahead, Chantelle.
Chantelle Breithaupt (CFO)
Yeah. Thank you, James. I appreciate it. So for deferred, generally, we don't guide deferred, but to try to give you more insight: back to George's question, there will be certain deployments that get accepted and released, but the part that's difficult is what comes into the balance, right, James? So I can't guide; that would be a wild guess on what's going to go in, which is not prudent, I think, from my perspective. So we'll continue to mention what's in it, we'll continue to show you the balances, and we'll talk about the movement in the script, but that's probably as much as I can tell you with a responsible answer looking forward.
Jayshree Ullal (Chairperson and CEO)
James, this is one of those times, no matter how many times you ask us this question in several different ways, the answer doesn't change. Okay.
James Fish (Managing Director and Senior Research Analyst)
I mean, we're all-
Jayshree Ullal (Chairperson and CEO)
So-
James Fish (Managing Director and Senior Research Analyst)
... Insanity is doing the same thing over and over again.
Chantelle Breithaupt (CFO)
Yeah.
Jayshree Ullal (Chairperson and CEO)
I know. I know. So on the hyperscaler, are they getting nervous? I don't think they're getting nervous. They, you know, you've seen what a strong business they have, how much cash they put out, and how successful they are. But I do think they are working more closely with us. Typically, we had a 3-6-month visibility. We're getting greater visibility.
Operator (participant)
Our next question will come from the line of Tal Liani with Bank of America. Please go ahead.
Tal Liani (Managing Director and Senior Research Analyst)
Hi, guys. I almost have the same question I asked you last quarter, because you grouped-
Jayshree Ullal (Chairperson and CEO)
We understand that.
Tal Liani (Managing Director and Senior Research Analyst)
You increased the guidance.
Jayshree Ullal (Chairperson and CEO)
We understand the Tal Liani question.
Tal Liani (Managing Director and Senior Research Analyst)
Yeah, no, it's... I'll explain. You increased the guidance, but the entire increase in the guidance is basically the cloud. And if I look at it, it's very simple to dissect your numbers. If I remove campus and I remove cloud, and you provide these two numbers for both '25 and '26, the rest of the business, which is 60% of the business, you guide to grow zero. And in previous years it was, I can make estimates, anywhere from 10% to 30% growth. So the question is, why are you guiding this way, that 60% of the business is not gonna grow? Is it because the-
Jayshree Ullal (Chairperson and CEO)
Okay, can I-
Tal Liani (Managing Director and Senior Research Analyst)
It's just conservatism?
Jayshree Ullal (Chairperson and CEO)
Hiya. No, can I pause you there? Because I know you like to dissect our math several different ways and come up with conclusions. We're not guiding that our business is gonna be flat, or that we're not gonna grow here or grow there. But generally, when something is very fast-paced and growing, then other things grow less. And exactly whether it will be flat or grow double digits or single digits, Tal, it's February. I don't know what the rest of the year will be, okay? So I take-
Tal Liani (Managing Director and Senior Research Analyst)
No, but that's the question.
Jayshree Ullal (Chairperson and CEO)
Take-
Tal Liani (Managing Director and Senior Research Analyst)
The question is-
Jayshree Ullal (Chairperson and CEO)
Uh, uh
Tal Liani (Managing Director and Senior Research Analyst)
... is there allocation here? Meaning, let's say you have only a set number of memory slots, so you allocate it to cloud, and then the rest of the business doesn't get it. Or is it just conservatism and lack of ability to-
Jayshree Ullal (Chairperson and CEO)
It's, it's-
Tal Liani (Managing Director and Senior Research Analyst)
Lack of ability.
Jayshree Ullal (Chairperson and CEO)
It's neither, it's neither of the above. We don't allocate to our customers. It's first in, first served, and in fact, the enterprise customers get a very high sense of priority, as do our cloud customers. Customers come first. But memory allocation may put us in a situation where the demand is greater than our ability to supply. We don't know; it's too early in the year.
Tal Liani (Managing Director and Senior Research Analyst)
Got it.
Jayshree Ullal (Chairperson and CEO)
We're confident that we could guide, you know, six months after our Analyst Day to a higher number, but we don't know what the next four quarters will look like to the precision you're asking for.
Tal Liani (Managing Director and Senior Research Analyst)
Got it. Thank you.
Jayshree Ullal (Chairperson and CEO)
Thank you.
Operator (participant)
Our next question comes from the line of Atif Malik with Citi. Please go ahead.
Adrienne Colby (Equity Research Analyst)
Hi, it's Adrienne Colby for Atif. Thank you for taking my question. I was hoping to ask for an update on the ramps of the large AI customers. I know that the fourth customer you talked about was a bit slower to ramp to 100,000 GPUs. Just wondering if you can update us on their progress there, and perhaps what's next for the other three customers that have already crossed that threshold? And lastly, is there any indication that the fifth customer that ran into funding challenges might come back to you?
Jayshree Ullal (Chairperson and CEO)
Okay. Adrienne, I'll give you some update. I'm not sure I have precise updates, but we are deploying AI with Ethernet in all four customers, so that's the good news. Three of them have already deployed a cumulative 100,000 GPUs and are now growing from there, clearly migrating beyond pilots and production to other centers, with power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it's still below 100,000 GPUs at this time. But I fully expect them to get there this year, and then we shall see how they go beyond that.
Operator (participant)
Our next question will come from the line of Michael Ng with Goldman Sachs. Please go ahead.
Michael Ng (Managing Director and Senior Equity Research Analyst)
Hey, good afternoon. Thank you for the question. I just have one and one follow-up. First, I was wondering if you could talk a little bit about the new customer segmentations that you unveiled, with cloud and AI, and AI and specialty. What's the philosophy around that? And does that signal more opportunity in places like Oracle and the neo clouds? And then second, with cloud and AI at 48% of revenue and A and B at a combined 36%, you have 12% leftover. Is that a hyperscale customer? Does it imply that you have a new hyperscaler that is approaching 10%?
Because obviously, you know, we thought that, you know, the next biggest one would've been Oracle, but that's moved out of cloud now. So any thoughts there would be great. Thank you.
Jayshree Ullal (Chairperson and CEO)
Yeah. Yeah, sure, Michael. So, well, first of all, my math is 26 + 16, so it's 42, so I don't have 12%, unless you had 58. It's really only 6%. So on the cloud and AI titans, the way we classified that is significantly large-scale customers with greater than 1 million servers, greater than 100,000 GPUs, and an R&D focus on models and sometimes even their own XPUs. And this can, of course, change. Some others may come into it, but it's a very select few set of customers, you know, about 5 or fewer. That's the way to think of it, right? On the change to the specialty cloud, as I said, we're noticing that some customers are really, really focused solely on AI with some cloud, as opposed to cloud with some AI.
So when it's heavily AI-centric, especially with Oracle's AI Acceleron and the multi-tenant partnerships they've created, they naturally have a dual personality: some of it is OCI, the Oracle Cloud, but some of it is really fully AI-based. So the shift in their strategy made us shift the category and bifurcate the two.
Michael Ng (Managing Director and Senior Equity Research Analyst)
Thank you, Jayshree.
Jayshree Ullal (Chairperson and CEO)
Thank you.
Kenneth Duda (Founder and CTO)
You know, we have time for one last question.
Operator (participant)
Our final question will come from the line of Ryan Koontz with Needham and Company. Please go ahead.
Ryan Koontz (Managing Director)
Great. Thanks for squeezing me in. Jayshree, in your prepared remarks, you talked about your telemetry capabilities, and I wonder if you could expand on that and discuss where are you seeing that key differentiation, what sorts of use cases you're able to really seize the upper hand competitively with your telemetry capabilities? Thank you.
Jayshree Ullal (Chairperson and CEO)
Yeah. I'm gonna say some, and I think Ken, who's been designing this and working on it, will say even more. Ken Duda, our President and CTO. So telemetry is at the heart of both our EOS software stack as well as our CloudVision for enterprise customers. We have real-time streaming telemetry that has been with us since the beginning of time, constantly keeping track of all our switches. It isn't just a pretty management tool. And at the same time, our cloud customers and AI customers are seeking some of that visibility, too, so we have developed some deeper AI capabilities for telemetry as well. Over to you, Ken, for some more detail.
Kenneth Duda (Founder and CTO)
Yeah, thanks for that question. That's great. Look, the EOS architecture is based on state orientation: the idea that we capture the state of the network and then stream that state out from the system database on the switches into CloudVision or whatever system can receive it. And we're extending that capability for AI with a combination of in-network data sources, related to flow control, RDMA counters, and buffering and congestion counters, and also host-level information, including what's going on in the RDMA stack on the host, what's going on with collectives and latencies, and any flow control or buffering problems in the host NIC. Then we pull all that information together in CloudVision and give the operator a unified view of what's happening in the network and what's happening in the host.
This greatly aids our customers in building an overall working solution, because the interactions between the network and the host can be complicated and difficult to debug when different systems are collecting them.
Jayshree Ullal (Chairperson and CEO)
Great job, Ken.
Kenneth Duda (Founder and CTO)
That's right.
Jayshree Ullal (Chairperson and CEO)
I can't wait for that product.
Kenneth Duda (Founder and CTO)
Really helpful.
Ryan Koontz (Managing Director)
Thank you.
Rudolph Araujo (VP of Investor Advocacy)
This concludes Arista Networks' Fourth Quarter 2025 Earnings Call. We have posted a presentation that provides additional information on our results, which you can access on the investor section of our website. Thank you for joining us today and for your interest in Arista.
Operator (participant)
Thank you for joining, ladies and gentlemen. This concludes today's call. You may now disconnect.