
DigitalOcean - Earnings Call - Q2 2025

August 5, 2025

Executive Summary

  • Q2 2025 delivered solid top-line and profitability with revenue $218.7M (+14% YoY) and adjusted EBITDA $89.5M (41% margin); non-GAAP diluted EPS was $0.59.
  • Guidance raised across the board: FY 2025 revenue to $888–$892M, adjusted EBITDA margin to 39–40%, adjusted FCF margin to 17–19%, and non-GAAP EPS to $2.05–$2.10.
  • AI momentum accelerated: AI/ML revenue more than doubled YoY; incremental ARR reached $32M, the highest since Q4 2022; Scalers+ revenue grew 35% YoY to 24% of total.
  • Consensus beats: revenue ($218.7M vs $216.6M), non-GAAP EPS ($0.59 vs $0.468), and adjusted EBITDA ($89.5M vs $85.2M), marking sustained estimate outperformance from Q4 2024 through Q2 2025. Consensus values retrieved from S&P Global.

What Went Well and What Went Wrong

  • What Went Well
    • “We delivered another quarter of solid performance across both AI and core cloud… we more than doubled our AI/ML revenue year-over-year.” — CEO Paddy Srinivasan.
    • Strong customer mix shift: Scalers+ revenue +35% YoY to 24% of total; count +23% YoY; ARPU $111.70 (+12% YoY) and NDR improved to 99%.
    • Profitability and cash flow: adjusted EBITDA $89.5M (41% margin); adjusted FCF $57.0M (26% margin); CFO reaffirmed confidence in maintaining attractive FCF while accelerating growth.
  • What Went Wrong
    • Gross margin dipped to 60% (vs 61% in Q1 and 62% in Q4) amid capacity investments; management expects margins to remain around current levels near term.
    • Net Dollar Retention held at 99% (down from 100% in Q1); management highlighted mixed expansion behavior among larger long-tail customers and lag in AI contribution to NDR.
    • Capacity constraints remain a “way of life” in AI; while manageable, scaling requires ongoing investment in GPUs, power, and cooling.

Transcript

Speaker 5

Ladies and gentlemen, thank you for standing by. My name is Krista, and I will be your conference operator today. At this time, I would like to welcome everyone to DigitalOcean's second quarter 2025 earnings conference call. All lines have been placed on mute to prevent any background noise. After the speaker's remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star followed by the number one on your telephone keypad. If you would like to withdraw your question, press star one again. Thank you. I would now like to turn the conference over to Melanie Strate, Head of Investor Relations. Melanie, you may begin.

Speaker 3

Thank you, and good morning. Thank you all for joining us today to review DigitalOcean's second quarter 2025 financial results. Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer, and Matt Steinfort, our Chief Financial Officer. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information. Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook. I direct your attention to the risk factors contained in our filings with the SEC, as well as those referenced in today's press release that is posted on our website. DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today.

Additionally, non-GAAP financial measures will be discussed on this conference call, and reconciliation to the most directly comparable GAAP financial measures can be found in today's earnings press release, as well as in our investor presentation that outlines the financial discussion on today's call. A webcast of today's call is also available in the IR section of our website. With that, I will turn the call over to Paddy.

Speaker 6

Thank you, Melanie. Good morning, everyone, and thank you for joining us today as we review our second quarter 2025 results. We continue to make meaningful progress on the strategy we laid out at our investor day back in April. This is evidenced by our strong second quarter results and supported by the fact that we are raising our full-year guidance on both revenue and profitability metrics. My comments today will include a recap of our Q2 financial results and an update on both our progress in product innovation and our enhanced go-to-market strategy across both core cloud and AI, which are enabling over 174,000 digital native enterprise customers to scale on our platform. Let me start with the second quarter financial results highlighted on slide 10 of our earnings deck.

The growth momentum from Q1 continued into the second quarter, with revenue of $219 million growing 14% year over year. We saw excellent strength in our AI/ML business, with revenue growing north of 100% year over year. Revenue from our Scaler Plus customers, those at a $100,000-plus annual run rate during the quarter, continued to see strong growth at 35% year over year and increased to 24% of total revenue. Finally, we achieved incremental ARR in the second quarter of $32 million, our highest incremental ARR since Q4 of 2022, and the highest organic incremental ARR in over three years. Given our strong top-line performance in the first half of the year and our confidence in the second half outlook, we are raising our full-year revenue guidance range to $888 million to $892 million.

We are also excited about the traction we are getting with larger customers and an increase in committed contracts. I spoke last quarter about a multi-year $20 million plus committed deal, and this was a contributor to the material growth in our remaining performance obligation balance as we continue to seek and secure large multi-year deals with our higher spend customers and key strategic partners. Not only did our momentum carry over into the second quarter, but the growth continued to come with healthy profitability, including adjusted free cash flow of $57 million, which is 26% of revenue. As a result of this performance, we are raising our full-year free cash flow guide to 17% to 19% of revenue, demonstrating our ability to accelerate revenue while maintaining attractive free cash flow margins.

Turning to the balance sheet, we continue to make progress on our capital allocation priorities and remain on track to address the outstanding 2026 convertible debt prior to the end of this calendar year. Matt will go into further details on this front in his prepared remarks. Now, let me give you some updates on the product innovation that we continue to deliver for our digital native enterprise customers, which you can see highlighted on slides 11 and 12 in the earnings presentation. During the quarter, we released more than 60 new products and features addressing the needs of our higher spend customers, which includes builders, scalers, and Scaler Plus customers who now drive 89% of our revenue.

Notably, 64 of our top 100 customers have adopted a product or a feature released within the last year, and 26 of the top 100 customers have adopted a new capability released within the last quarter. Both are clear proof points of the impact product innovation is having on our digital native enterprise customers. Let me now provide a few product highlights from the quarter, starting with core cloud. This past quarter, we officially announced our Atlanta data center, and its resources are now available to all customers. As a reminder, this is our newest and largest data center, and it is purpose-built to deliver high-density GPU infrastructure optimized for AI inferencing, which requires a lot more than just GPUs.

This data center has our core cloud stack, including compute, storage, and other cloud features that are critical to enabling AI-native customers to run full-stack applications powered by AI and not just the training or inference part of their software. This agentic cloud data center infrastructure is a key differentiating factor for us over other neoclouds, as it provides a complete stack for running sophisticated AI applications that have comprehensive needs beyond GPUs. More on that a little later. During the quarter, we continue to build capabilities for larger digital native enterprises. These customers typically require high-quality storage, especially for AI workloads. To support that requirement, we enabled NFS for GPU storage so that customers can run the most demanding GPU applications with access to higher performance object storage to meet the demands of enterprise workloads such as video streaming and data lakes.

We also introduced two advanced networking features in public preview: Bring Your Own IP Address, or BYOIP, and Network Address Translation Gateways, or NAT Gateways. These are critical capabilities that will enable more and larger digital native enterprise workloads to migrate to DigitalOcean. BYOIP allows customers to use their existing publicly available IP addresses on DigitalOcean rather than having to acquire new DigitalOcean-specific IP addresses. This makes it easy for customers to lift and shift their workloads to our platform without requiring extensive changes to their applications, while NAT Gateway allows a customer's resources to securely access the internet from within their virtual private cloud on the DigitalOcean platform. These innovations on the core cloud platform are enabling us to scale and win more workloads from our digital native enterprise customer base.

To leverage that traction, we are complementing our industry-leading product-led growth motion with a small dedicated migrations team to support customers moving existing workloads from hyperscalers and other clouds to DigitalOcean's platform, and we facilitated 76 of these migrations during the quarter. One example of this is a company called Exetium, a next-generation cybersecurity provider delivering innovative, no-cost incident response as part of its fully managed security operations center, or SOC, offering. Designed for businesses and managed service providers, or MSPs, Exetium's managed SOC provides real-time threat detection, threat hunting, and incident response, all without the high costs typically associated with legacy solutions. Exetium signed an 18-month contract with DigitalOcean, selecting the platform to migrate from other cloud providers due to our compelling total cost of ownership, performance, and ease of use, enabling Exetium to deliver its cutting-edge cybersecurity solutions more efficiently and at scale.

ServeMe.host, a Scaler Plus customer that offers managed hosting specifically tailored for the Craft content management system, has already adopted our newly released Network Address Translation Gateway, enabling their customers to securely access the internet within their DigitalOcean virtual private cloud. We're also very excited about the progress we're making on our AI/ML platform, which we now call the DigitalOcean Gradient AI Agentic Cloud, which complements our full-stack general-purpose cloud. Slide eight in the earnings presentation shows the power of having these two platforms side by side, enabling our customers to take full advantage of the integrated stack that is required to build and run AI-powered applications in the future. The Gradient AI Agentic Cloud has three components: Gradient AI Infrastructure, Gradient AI Platform, and Gradient AI Agents.

Let me start with the Gradient AI Infrastructure, where we expanded our GPU droplets lineup significantly to now include eight major types, including the H, L, and RTX series GPUs from NVIDIA, and the latest Instinct series GPUs from AMD. Another major update that makes Gradient AI Infrastructure great for inferencing is a new inference-optimized GPU droplet, which simplifies the setup and deployment of LLMs by leveraging Docker. This new GPU droplet comes preconfigured with vLLM and includes built-in optimizations like multi-GPU parallelism, smart batching, faster and higher token generation, built-in support for Hugging Face model downloads, speculative decoding, prompt caching, and multi-model concurrency, so that customers can go from deployment to serving tokens in minutes on any GPU droplet without having to do all these steps manually.

We recently announced a collaboration with AMD that provides DigitalOcean customers with access to AMD Instinct MI325X GPU droplets, in addition to MI300X droplets. These GPUs deliver high-level performance at lower TCO and are ideal for large-scale AI inferencing workloads. Another example of this growing collaboration between the two companies is the Gradient AI Infrastructure powering the recently announced AMD Developer Cloud, which enables developers and open-source contributors to test drive AMD Instinct GPUs instantly in a fully managed environment powered by our Gradient AI Infrastructure. This enables developers to start AI development with zero hardware investment and accelerate the time to value in tasks like benchmarking and inference scaling. This further advances our mission of democratizing access to AI while maintaining the quality, performance, and flexibility our customers have come to expect from DigitalOcean. Let's look at how customers are taking advantage of our Gradient AI Infrastructure.

Featherless.ai is a serverless AI inference platform offering API access to an expansive and growing catalog of open-weight models, primarily Hugging Face models like Llama, Mistral, Qwen, DeepSeek, RWKV, and more. Featherless.ai leverages DigitalOcean for its simplicity and price performance, and they were an early adopter of our AMD MI300X GPU Droplets, which offer industry-leading price performance and ease of use for inference workloads. Another GPU Droplet customer is ScribeAI, a digital native enterprise specializing in AI-generated documentation, which is used by 94% of the Fortune 500 companies. ScribeAI migrated their AI/ML training workloads to DigitalOcean from competitive cloud providers and is now leveraging DigitalOcean's GPU Droplets to build and train their process documentation and knowledge sharing platform.

Moving on to the next layer of our Gradient AI Agentic Cloud, we recently announced the general availability of DigitalOcean Gradient AI Platform, which provides the industry's easiest and most cost-effective platform for developing production-grade AI agents with automated safety and security guardrails. The Gradient AI Platform, as shown on the right side of slide eight of the earnings deck, is a one-of-a-kind platform that caters to the end-to-end agent development lifecycle, or ADLC for short, enabling AI-native SaaS and any software application customer to build, test, deploy, monitor, and operate agentic AI software. Customers can use a rich set of proprietary and open-source foundation models, including OpenAI, Anthropic, Mistral, DeepSeek, and Llama, as high-performance serverless endpoints. These serverless endpoints automatically scale to meet real-time application demands, thus freeing customers from having to manage compute resources on their own.

The Gradient AI Platform provides built-in guardrails that verify AI behavior and new best-in-class agent evaluation frameworks to drive high accuracy and relevance of AI results and a robust experimentation capability to deliver optimal AI performance. Over 14,000 agents have been created since announcing this platform, which is almost double the number of agents last quarter. More than 6,000 customers have leveraged this platform since January, with 30% of these customers being new to DigitalOcean. One of the customers leveraging our new Gradient AI Platform is Quickest, with a Q, a leading AI-powered collaborative workspace product that helps product, marketing, and sales teams generate strategy documents, campaigns, and playbooks using shared AI personas. Quickest leverages the Gradient AI Platform to create persona-generating agents, enabling model comparisons and orchestrating tasks on the Gradient AI Platform to fetch and summarize Markdown content.

Quickest chose DigitalOcean because they needed a flexible and scalable infrastructure to support complex AI workflows, and they valued the simplicity of deploying agents and integrating them into the Quickest product line with very little coding involved. Moving on to the Gradient AI Agents layer, our first commercial AI agent is the Cloudways Copilot, which continuously monitors critical server components like the web stack, disk space, inodes, and host health to detect issues in real time, diagnose root causes, and deliver actionable recommendations faster than traditional alerting systems. An example of a customer leveraging this product is Mint Media, a full-service media and marketing company specializing in video production and digital marketing. Mint Media uses our Cloudways Copilot GenAI agents to automatically detect and remediate web hosting issues.

Mint Media manages over 180 websites and saw significant time savings by leveraging Cloudways Copilot and the associated AI-powered insights and automated issue resolution. What previously required hours of manual debugging is now handled in minutes through the agent's detailed actionable recommendations. In addition to the product innovations we delivered, we also made material progress on the go-to-market front during this quarter. From a new customer acquisition perspective, we saw meaningful progress in the top of the funnel from our product-led growth enhancements, with revenue from core cloud customers in their first 12 months significantly outpacing growth of prior years, which is a great leading indicator of future growth potential. Our direct sales motion and the strong ecosystem partnerships are driving more AI-native customers with large-scale inferencing requirements than we have ever seen in the past.

Our growing success with these marquee customers is evident in the increased RPO that I mentioned earlier in my comments, and we anticipate this trend to continue as we scale out our AI capabilities. In closing, I'm pleased both by the results of the second quarter and by the progress we are making on the strategy that we articulated at our investor day back in April. We maintained our top-line growth momentum from Q1 to Q2 while maintaining healthy profitability metrics, enabling us to raise our guidance across both revenue and profitability metrics for the fiscal year 2025. We delivered continued product innovation and both drove improved performance in our industry-leading product-led growth engine and continue to get traction with our direct sales go-to-market motion, especially for AI.

We recently launched the Gradient AI Platform into full general availability, a significant step in offering our customers a twin stack of cloud capabilities, as outlined in slide eight of the earnings slide deck. On one stack, we provide a mature, complete general-purpose cloud, and on the other, a modern agentic AI cloud. These integrated stacks enable AI-native customers to run inferencing at scale while taking advantage of the core cloud modules, and digital native customers to build AI directly into their software applications without having to do the heavy lifting of dealing with AI infrastructure. With this unique twin cloud and AI stack, we are gaining increasing momentum with AI-native companies with larger scale inferencing workloads, and we are expanding our partnerships with key ecosystem players in the AI domain.

We're also making good progress on our balance sheet and refinancing priorities, positioning us for a strong 2026. Thank you, and I'll now turn it over to Matt.

Speaker 1

Thanks, Paddy. Good morning, everyone, and thanks for joining us today. As Paddy discussed, we are very pleased with our Q2 2025 performance, and we are confident in our ability to sustain and build on this momentum in the latter half of the year. In my comments, I'll walk through our Q2 results in detail, provide an update on our balance sheet and capital allocation strategy, and share our third quarter and full-year 2025 financial outlook. Starting with the top line, revenue in the second quarter was $219 million, up 14% year over year. Our annual run rate revenue, or ARR, was $875 million, which was $32 million above Q1. This incremental ARR of $32 million was the highest incremental ARR since Q4 of 2022 and the highest organic incremental ARR achieved in over three years.

We continue to build and strengthen our relationships with our higher spend customers and key strategic partners. This is evidenced by the material increase in our remaining performance obligation balance as we continue to secure large multi-year deals with our digital native enterprise customers, which is an early but promising new go-to-market motion for the company. Our product innovation and go-to-market enhancements are resonating with this target customer base. In Q2, revenue from our Scaler Plus customers, or customers whose annualized run rate revenue in the quarter was greater than $100,000 and who represent 24% of overall revenue, grew 35% year over year with a 23% increase in customer count. This is clear evidence of the increasing traction that we are getting with our largest customers as they expand their use of our core cloud products and adopt our new AI offering.

Q2 revenue growth was primarily driven by improvements in customer acquisition across both core cloud and AI, as well as strong customer adoption of our AI/ML products. As Paddy mentioned, revenue from core cloud customers in their first 12 months significantly outpaced growth in prior years, which is a great leading indicator of future growth as these stronger recent cohorts not only drive up revenue from customer acquisition, but also they should positively contribute to net dollar retention when they reach their 13th month and become part of our NDR cohort. Our Q2 net dollar retention was 99%, up from 97% in the same quarter last year and within the expected range that we communicated on the prior quarter's call.

We also delivered strong AI/ML revenue growth in Q2 as we continue to see a robust demand environment, particularly for inferencing workloads, with AI revenue growing north of 100% year over year. Turning to the P&L, we delivered strong performance on all of our key profitability metrics. Gross margin for the second quarter was 60%, which was 100 basis points higher than the prior year. Adjusted EBITDA was $89 million, an increase of 10% year over year. Adjusted EBITDA margin was 41% in the second quarter, approximately 100 basis points lower than the prior year. Non-GAAP diluted net income per share was $0.59, a 23% increase year over year. This increase is a direct result of expanding per share profitability by driving durable revenue growth while exercising ongoing cost discipline.

GAAP diluted net income per share was $0.39, a 95% increase year over year as we continue to grow revenue, drive operating leverage, and prudently manage stock-based compensation. Q2 adjusted free cash flow was $57 million, or 26% of revenue, up significantly from our front-loaded Q1, which included a large portion of the upfront investment required to bring the Atlanta data center online. As I'll detail later in my comments, we remain confident in our ability to deliver attractive adjusted free cash flow margins for the full year, although the timing of capital investment payments will continue to create quarter-to-quarter variations in adjusted free cash flow margins, hence our highlighting of the trailing 12-month adjusted free cash flow margin on slide 15. Our balance sheet continues to be strong as we continue to maintain material cash and cash equivalents and ended the quarter with $388 million in cash.

We also continued to execute our share repurchase program in the quarter, with $20 million of repurchases in Q2, buying back approximately 691,000 shares. This brings our cumulative share repurchases since IPO to $1.6 billion and 34.8 million shares through June 30, 2025. At the end of Q2, we had $3.4 million remaining on our current share repurchase authorization. On the debt front, we continue to actively evaluate the market and our financing alternatives and remain committed to fully addressing the 2026 convert over the balance of this calendar year. We have multiple attractive financing options available to us, including convertible debt, bank debt, and bonds, and we plan to tap into these markets as needed to optimize our long-term cost of capital. Before we move on to guidance, I'll highlight one non-cash item related to both the balance sheet and the P&L.

We continue to evaluate the necessity of our valuation allowance on certain existing deferred tax assets each quarter in accordance with U.S. GAAP. While the valuation allowance is still necessary for Q2, in the latter half of fiscal 2025 we may release all or a portion of our valuation allowance of $109 million, which was discussed in our most recent 10-K as well as in our most recent 10-Q. When released, we estimate this would decrease our non-cash tax expense by the amount of the release, resulting in a corresponding increase in net income. When this occurs, it will be a positive non-cash event and will have no impact on non-GAAP financial metrics. Moving on to guidance, for the third quarter of 2025, we expect revenue to be in the range of $226 to $227 million, representing approximately 14.1% year-over-year growth at the midpoint.

For the full year 2025, we are raising our annual revenue guidance to the range of $888 to $892 million, representing approximately 14% year-over-year growth at the midpoint. Given our strong Q2 performance, visibility into our customers' usage trends, and the strength of the AI/ML demand environment, we are able to raise our full-year guide with confidence. For the third quarter of 2025, we expect our adjusted EBITDA margins to be in the range of 39% to 40%. For the full year, we are raising our adjusted EBITDA margin guide to the range of 39% to 40%. For the third quarter of 2025, we expect non-GAAP diluted earnings per share to be $0.45 to $0.50, based on approximately 102 to 103 million in weighted average fully diluted shares outstanding.

For the full year 2025, we expect non-GAAP diluted earnings per share to be $2.05 to $2.10, based on approximately 103 to 104 million in weighted average fully diluted shares outstanding. Turning to adjusted free cash flow, we are raising our guided adjusted free cash flow margins for the full year to 17% to 19%. We are increasing our projected cash flow margins at the same time as we are accelerating our revenue growth outlook, which speaks to the confidence we have in our ability to maintain attractive free cash flow margins while we accelerate our top-line growth. Consistent with our historical guidance practice, we are not providing adjusted free cash flow guidance on a quarter-by-quarter basis, given it is heavily influenced by working capital timing, as you saw in our year-to-date results. That concludes our prepared remarks, and we'll now open the call to Q&A.

Speaker 5

Thank you. We will now begin the question and answer session. If you would like to ask a question, please press star one on your telephone keypad to raise your hand and join the queue. If you'd like to withdraw your question, simply press star one again. We also ask that you limit yourself to one question and one follow-up. Your first question comes from Patrick Walravens, Citizens. Please go ahead.

Oh, great. Thank you very much, and congratulations. Paddy, could you talk a little bit more about the AI/ML revenue and the over 100% increase there and maybe walk us through a little bit the history of this offering and why the current version is really starting to kick in?

Speaker 6

Thank you, Patrick. Good morning. Good way to get started. The AI/ML revenue, as I mentioned in the call, grew more than 100% year over year. If you remember, last Q2 is when we brought a lot of H100 NVIDIA gear online. More than doubling that this quarter was a significant step for us. What is different is, as I explained, we have a three-layer AI stack. On the foundational level is our Gradient AI Infrastructure stack, which is a network of GPUs, both from AMD as well as NVIDIA. In the middle layer is our Gradient AI Platform that we just took from private and public preview all the way to general availability. On the topmost layer is agents. The type of customers that use these three layers are slightly different at this point.

AI infrastructure is consumed typically by AI-native companies that have their own model or have taken an open-source model and are doing some tweaks to it and hosting those models and scaling them, especially in the inferencing mode, are typically consuming the AI infrastructure. A majority of our revenue comes from the Gradient AI Infrastructure stack. That is not very dissimilar from the rest of the industry. The Gradient AI Platform that we recently pushed out to GA is where any software application, like a SaaS provider, for example, can start consuming AI into their own applications without having to do the heavy lifting of building and managing their own GPU infrastructure. We have serverless endpoints for these LLMs, for example, and we have a bunch of other tools and modules that are critical building blocks for consuming AI into your own application.

It becomes very, very easy to build AI into your existing application. That is what is powering the growth of our AI revenue, predominantly on the infrastructure side, but we are driving a lot of adoption and mindshare with developers with the AI platform. On the agentic layer, the first commercial application of that is the Cloudways Copilot that is typically adopted by end customers as a way to automate some of the manual tasks that they are seeing in managing and operating cloud-based applications.

That's very helpful. Thank you.

Speaker 5

Your next question comes from the line of Mike Sikos with Needham & Company. Please go ahead.

Hey, guys. Thanks for taking the questions here. Just to further the conversation on the AI/ML, good to see the north-of-100% revenue growth reflecting some of the more recent trends you guys have seen on the ARR front. I know historically you guys have given us more color on the underlying components of that net new ARR; I think last quarter you had cited north of 160% year-on-year. Maybe I missed the data point, but I just wanted to see how that net new is growing on the AI/ML front in the June quarter.

Speaker 1

I think what we said is that the AI ARR was growing north of 160% in prior quarters. That wasn't referring to the incremental ARR; it was the actual ARR. The north of 100% still reflects very strong growth. In fact, if you look at the incremental ARR for this quarter at $32 million, it was a good balance across both AI and core cloud, and it was our highest incremental ARR since Q4 of 2022. The reason that it dropped from 160% to north of 100%, if this is where you were going with the question, is just that, as Paddy had said, we lapped the Q2 when we launched all of our AI capabilities and we had a bunch of pent-up demand. The Q2 growth in the AI business from last year was high. It was just a difficult comp.

If you look at the incremental ARR that we're adding for the business on a go-forward basis, we're accelerating. It's an accelerating business.

Got it. For the NDR, I know that the 99% here is in keeping with that commentary you guys have provided last quarter. Can you just explain what actually acted against that? I would have thought there would have been at least some benefit from you guys lapping that Cloudways price increase in April.

Yeah, I think that when we look at the NDR, and this is the reason that we signaled it'll likely bounce around the current range into this quarter and probably for the next couple of quarters, is that we haven't seen degradation in the market. We haven't really seen any change in the market since the April timeframe. As we look at some of our larger customers in the long tail, there's, I'd say, a mixed impact on customers. It's very individual. Some customers we see are maybe on edge and they're optimizing, or they're a little bit hesitant to expand their business. In the same industry or in the same size of customer, we also see a number of customers that are accelerating the business.

They're doing really well and they're expanding their business with us and they're growing their workloads. You see that in the growth of the customers, the Scalers+ at 35%. We're seeing really strong growth in parts of our customer base, but we're also seeing others that are being cautious and aren't scaling as fast. We think that we're likely to stay around this level. I'd say the good news is, despite the fact that the NDR was just a hair lower at 99%, we were able to raise our guidance. We're delivering the best incremental ARR that we've delivered in a very long time. We're very encouraged by the trends. NDR is such a laggy metric; it's going to be a little stubborn to improve, but that's not going to slow us down from a revenue growth standpoint.

We're doing enough with the new product acquisition on the core cloud, which is doing really, really well, getting really good cohorts and they're coming in. We've got the migration motion, which is a relatively new motion. It doesn't always impact NDR. We've got the growth and acceleration in the AI business. We're very bullish on the growth prospects, and that was what enabled us to raise the guidance for the year.

Great. Thank you, guys.

Speaker 5

Your next question comes from the line of Gabriela Borges with Goldman Sachs. Please go ahead.

Hey, good morning. Thank you. I wanted to touch on the unit economics of the AI business. Matt, I know in the past you've talked about the three-year payback period, but we've both been very consistent in saying as you move from bare metal GPUs to more differentiated services, exactly as you've illustrated in the graphic in the slides, you should be able to command more gross margin, essentially. Maybe give us an update on how those efforts are tracking. How do you feel about the gross margin and the LTV to CAC of the AI business relative to the core business?

Speaker 1

Yeah, we're still, you know, we are very encouraged and comfortable with the margins that we're getting in the AI business. As you said, Gabriela, the higher layers of the stack, the three-layer stack that Paddy describes, have better margins than pure infrastructure. Even at the pure infrastructure level, we're very comfortable with the returns, particularly given the long-term value that we believe we'll generate from those customers. You talked about the LTV; as Paddy has talked about multiple times, inferencing customers, which is what we're seeing more and more of even at the infrastructure layer, will pull other cloud services through. They need databases, they need storage, they need bandwidth, they need standard compute CPU.

This is a bit of, you know, we're still investing ahead: there's a bunch of infrastructure, and the margins on that are lower than the margins at the higher layers of the stack. You need that baseline infrastructure capability to get into higher-layer services. We think it's a very good investment, a very good use of our capital, and we're very encouraged by the returns that we're getting and the promise of higher returns as that business matures and we get more pull-through revenue and we get more of the revenue shifting to the higher layers of the AI stack.

Speaker 6

Just to add to it, Gabriela, this is Paddy. Just to add to what Matt just said, that's why we're also forward investing in making our Gradient AI Agentic Cloud very, very optimized for inferencing. I talked about our inference optimized Droplet. If you look at that right side of slide eight, you will also see that we are investing in model optimization. We are investing in infrastructure optimization at the infrastructure level. Everything is aimed to scale inferencing workloads on our platform, which tend to have very long tails. As Matt mentioned, they also drag through some of the other cloud primitives. They drag the left side along with them as the inferencing workloads scale globally. We feel very good about where we are and some of the early success we are seeing with very marquee customers that are starting to scale up their inferencing footprint on us.

Yeah, that makes sense. Thank you. Paddy and Matt, the follow-up I have here just on these comments on highest incremental ARR, highest organic ARR in over three years in terms of the net new that you're adding. Can we think of this as the new high watermark? I'm looking at what's being implied in guidance. Talk to us about your ability to consistently deliver growth off that metric and whether there's any unevenness, whether because of seasonality or company-specific factors like the timing of new AI capacity coming online that we should be aware of as we think about the forward model.

Yeah, I can start, Matt, and you can fill in. We did not have anything unnatural this last quarter. We didn't bring a bunch of capacity online, and there was no seasonality associated with it. I think we are just, as we mentioned in our prepared remarks, honing our product-led growth motion for our core cloud customers, and that is starting to really produce results. On one hand, our migration motion is bringing in a new type of customer, typically digital-native enterprise customers, and we are starting to grow them. On the AI side, we're just starting to see some scaled-up inferencing customers. It's a combination of all of those. It's not just one big contract or one spike in capacity of GPUs or anything like that.

It's a very secular and durable type of momentum that we are seeing on the new customer acquisition side. Matt?

Speaker 1

I agree with all that, Paddy. Again, the reminder on ARR is that it's not based on a booking. It's not based on a sale. It's based on actual customer revenue and customer utilization. We hope that that's a steady predictor going forward of the exit trajectory that we're on and a good indicator. It's certainly a critical metric for us. As Paddy said, we're encouraged by our ability to increase it. Certainly, like any metric, it'll vary quarter to quarter. I hope it'll always be up and to the right, but we have enough motions going that we're very confident in our ability to improve that metric.
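As an aside on the mechanics Matt describes, a common convention is to annualize the latest month's recognized revenue. A minimal sketch under that assumed convention follows; the monthly figures are hypothetical, chosen only to land near the $32 million incremental ARR cited on the call, and nothing here is DigitalOcean's actual methodology.

```python
# Assumed run-rate convention: ARR = latest month's revenue x 12, and
# incremental ARR = the quarter-over-quarter change in that run rate.
# This is an illustrative sketch, not DigitalOcean's disclosed definition.

def arr(monthly_revenue_m):
    """Annualized run rate from the latest month's revenue (in $M)."""
    return monthly_revenue_m * 12

def incremental_arr(exit_month_now_m, exit_month_prior_q_m):
    """Quarter-over-quarter change in the annualized run rate (in $M)."""
    return arr(exit_month_now_m) - arr(exit_month_prior_q_m)

# Hypothetical exit-month revenue rising from $71.0M to $73.67M produces
# roughly the $32M of incremental ARR discussed on the call.
print(round(incremental_arr(73.67, 71.0), 2))  # 32.04
```

The point of the sketch is Matt's distinction: the metric is driven by actual metered usage in the exit month, not by signed bookings.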

Really nice quarter. Thank you for the detail.

Speaker 5

Your next question comes from the line of Raymond Lenschow with Barclays. Please go ahead.

Speaker 6

Perfect. Thank you. Staying on that AI notion and inferencing: Paddy, you talked about how you try to differentiate, etc. Where is the industry at the moment in terms of capacity constraints? Is that still a factor that's helping you, or is it really now all about differentiation? Thank you. I have one follow-up on that.

Thank you, Raymo. Capacity constraints are a way of life in AI as we are scaling like everyone else. We are trying to stay ahead of it a little bit, but there are just so many factors there in terms of the real estate footprint, the power, the cooling, and the actual gear. There are just a lot of variable factors here.

I think for us, it all boils down to why some of these marquee AI-native customers are starting to choose us over the other alternatives that they have. It is really the twin-stack cloud that we have laid out in slide eight. I don't think there are too many cloud providers that can claim to have both sides of that equation. We certainly feel like we are driving home that point in terms of not only offering a world-class AI infrastructure, but increasingly those same customers are also starting to leverage some of the guardrails, the agent evaluation framework, the agent observability, and things like that, going up stack on the right side of the agentic cloud. As Matt mentioned, they also have very sophisticated storage, data processing, and CPU compute requirements as well.

At the end of the day, these are very sophisticated applications that require the might of a full-stack general-purpose cloud. I think that is the differentiator that we are leaning on. We feel really confident. I've been talking about this for about four quarters. Finally, we have the twin stacks that we have described on slide eight of the earnings deck. We feel really good. We're just getting started. Some of the RPO and the large contracts that we have been talking about have not even started hitting their full stride as we are scaling those customers. We feel really good about the forward momentum that we are building.

That kind of leads into my next question for Matt.

If I think about the second half, you know I've already gotten a good few questions from people saying, actually, you're raising the full year by probably a bit more than you're actually beating in Q1 and Q2. There's obviously a lot of confidence in the second half. Should we think about it as more RPO gives you more visibility, which drives some of that guidance? Because we know you as a conservative person normally. Thank you.

Speaker 1

Thanks, Raymo. I wish that it was all the RPO that was giving us full confidence. If you look at the RPO, we're really encouraged by the increase. It's still a very, very small portion of our business. That's certainly encouraging. I'd say when we look at the performance that we had in the first half, we look at the visibility that we have into the customer usage patterns. We look at the migrations that we're seeing and that motion kind of coming. We look at the traction we're getting with AI and with some of the direct sales and partnerships and some of the conversations that Paddy articulated we're having with large AI-native companies. We just have enough irons in the fire that we're confident in increasing the revenue guide.

What to me is most encouraging, because you do know I am a relatively conservative guy, is that we're able to increase our free cash flow margins at the same time. To me, that we can demonstrate that we can grow revenue, we can accelerate revenue while maintaining attractive free cash flow margins. To me, that's incredibly encouraging as we think about what's in front of us in the second half and how that sets us up for 2026.

Speaker 6

Yeah. Okay. Perfect. Thank you. Congrats.

Speaker 5

Your next question comes from the line of Jason Adder with William Blair. Please go ahead.

Speaker 1

Thank you. Good morning, guys. I just wanted to see if you could give us a little bit of a breakdown of the business right now when we think about the AI side versus the non-AI side. I know you've given the growth rates. Can you tell us, just sort of ballpark? I'm in the neighborhood of like 5% to 10% of revenue now from AI. I don't know if there's any specificity you can give on that, but that would be really helpful.

Jason, we don't break this out. Part of it is because, you know, we believe that a lot of the AI capabilities are going to be pulling through other capabilities.

The impact of the growth is beyond what's represented if you just wrote down the SKUs that we consider AI. You're in the ballpark. I'd say it's increasingly becoming a material chunk of the business. It's still small because it's a business that we just launched a year ago, and we're accelerating. That's a reasonable ballpark for a percentage of revenue. We expect that to increase, and it will become an increasingly meaningful portion of our business in 2026. It'll still be a small portion. The core cloud is still a very healthy and growing portion of our business. The AI business is a great complement to that, is accelerating our growth, and is also opening up entirely new channels and new customers to bring in that will drive that core cloud growth up as well.

Okay. Great.

And then just as a quick follow-up, is it fair to assume that the core cloud business grew at a similar rate in Q2 versus Q1? I mean, that kind of low double digits. Is that accurate?

Yeah. We still see momentum in the core cloud business. While the NDR was a little bit lower in Q2 than it was in Q1, the revenue that we're getting from new customers is ahead of our plan and our expectations. We're doing a really good job there. You've got to remember NDR is a little wonky, laggy metric because the change in revenue from a year ago has as much impact as the change in revenue this year. The core cloud business continues to accelerate. It's in that low double-digit growth rate and is improving.

Most of the upside then was from new customers, it sounds like.

Yeah, correct.

Yeah, because with NDR coming down a little bit, the new customer acquisition plus the growth in AI offset the slight headwind from the NDR. If you look at the incremental ARR, if you look at it on an exit run rate standpoint, there was a very good balance between the core business and AI. We both saw AI at its highest point, but there was still very good core cloud growth on an incremental ARR as well.

Speaker 6

Okay. Awesome. Thanks, guys.

Speaker 5

Your next question comes from the line of Josh Baer with Morgan Stanley. Please go ahead.

Great. Thanks for the question. I just wanted to confirm that in the net dollar retention rate, AI/ML revenue is not in that metric. Is that right?

Speaker 1

Yes, that's right, Josh. That is still the case and will likely be the case for a while. As we've talked about internally, and as we've talked about at Investor Day, it'll eventually contribute to the NDR, and we still believe it will. It'll likely be for more inferencing workloads, where they're steady production workloads. They're not projects where someone comes in, tests something for a month, and then kind of scales it back. If you think about the time lag of someone being in NDR, a customer doesn't count, even in our core cloud, until their 13th month. If you're turning up inferencing workloads now with marquee customers, it'll be a year before they would even hit NDR. We will incorporate at least the inferencing portion of AI at some point, but it's certainly not going to be in the next couple of quarters.

NDR continues to not include AI.
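To make the 13th-month cohort rule concrete, here is a small sketch. The formula is a simplified assumption (current revenue from customers with at least a year of history, divided by that same cohort's revenue a year ago), not DigitalOcean's disclosed methodology, and the customer data is hypothetical.

```python
# Illustrative NDR sketch: a customer only enters the calculation once a
# year-ago baseline exists, i.e. from their 13th month onward. Assumed
# formula, hypothetical data; not DigitalOcean's actual methodology.

def net_dollar_retention(customers, current_month):
    """customers: list of dicts with 'start_month' and a 'revenue' dict
    keyed by month. Returns the cohort ratio, or None if no cohort."""
    base = curr = 0.0
    for c in customers:
        if current_month - c["start_month"] >= 12:  # 13th month or later
            base += c["revenue"].get(current_month - 12, 0.0)
            curr += c["revenue"].get(current_month, 0.0)
    return curr / base if base else None

cohort = [
    # Mature customer: slight contraction drags NDR below 100%.
    {"start_month": 0, "revenue": {12: 100.0, 24: 99.0}},
    # Fast-scaling AI customer, only 6 months old: excluded entirely.
    {"start_month": 18, "revenue": {24: 500.0}},
]
print(net_dollar_retention(cohort, 24))  # 0.99
```

The example shows why the metric lags: the new AI customer's $500 of revenue contributes nothing to NDR yet, exactly the dynamic Matt describes.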

Okay. Got it. I would think so, especially now as it's scaling, but also you have more than 12 months of history. You talked about 100% growth off of the Q2 last year where there was AI revenue, and it's all organic, so that seems like kind of a missing piece to that NDR percentage, just around that expansion from existing customers. I did want to ask you about the large deals: how we should be expecting the potential for large deals in the future, and then also for you, Matt, how you're thinking about it from a guidance perspective, assuming that would be a little bit lumpier or have longer sales cycles, or it's just a new motion for you guys. How do you incorporate the potential for large deals in guidance? Thank you.

Do you want to start and talk about the nature of the large deals, and I can answer Josh's question about the guidance?

Speaker 6

Yeah. The nature of large deals is a very new muscle for us, from a sales, business development, and forecasting standpoint, all of the above. I think what we are driven by is: can we make these customers successful, and do we have enough of a technology edge to attract and retain these customers and get them to scale? That's the number one thing that I'm focused on, that Bratin and Larry are focused on: making sure that we have the ability to articulate our technology differentiation in a durable fashion and have the right engineering expertise on the ground to make these customers successful. I feel fairly encouraged by the couple of early successes that we have had, and we see enough in the pipeline to be quite encouraged with these kinds of deals.

With inferencing, it just takes time to go from winning a customer deal to actually scaling that up with real-world traffic. We are in the process of doing that with some of our customers. Extrapolating that into the future, we'll see how we can do a more predictable job in terms of forecasting how these things fall. I expect this to be lumpy and spiky in the beginning before it starts normalizing because our customers are also new to this, and they get sudden spikes based on some new updates to their models or new updates to their software. Some of them are in the consumer AI space. Some of them are in the B2B AI space. We're learning along with them, and they're learning with us in terms of their business model and how it is scaling out.

I'll let Matt answer how we will start reflecting these things in our financials.

Speaker 1

With that context, Josh, you would expect based on our track record and our history, we'll be conservative in forecasting those. The good news is, as Paddy said, we book revenue, and when we get that revenue, it's not like we're signing massive deals that just turn on right away. We have visibility into the ramps and how those customers are going. Given it's such a new motion and given some of the newness of it for both us and the customers Paddy described, we'll be conservative in terms of including any projected revenue from large deals until we're very comfortable that things are on the right track and we're growing and we have good visibility to that growth. I would expect that you would continue to see us be conservative as it relates to any large deals reflected in our forecast.

Speaker 6

Great, thank you.

Speaker 5

Your next question comes from the line of James Fish with Piper Sandler. Please go ahead.

Speaker 1

Hey, guys. You know, you keep using the word conservative here, but on the guide side, we haven't seen this level of second-half step-up in some time, really going back to the pandemic. You guys deserve credit here doing $32 million of net new organic ARR. Can you just walk us through the linearity you are seeing and what you're expecting from some of the newer solutions in the second half to raise the guide by this much? Any of the other moving parts that help you bridge this larger-than-normal step-up here? If I look at this and say you book similar to slightly better net new ARR in the $30 million to $35 million range over the next two quarters, it really doesn't leave much wiggle room based on how you guys are defining ARR versus revenue now.

Yeah.

I think, Jim, it's a good question. Recall in the last quarter, we didn't raise guidance. We beat Q1. We didn't raise the guide to Q2. We did that intentionally because the market had changed pretty dramatically, and we just didn't know what was going to happen from a macro standpoint. We've now got a full quarter under our belt on that front. We feel good about the visibility we have with the core customers. We've got a bit of the beat from the first quarter and the beat in the second quarter to pass through. As I said, we have enough levers at the moment that we're confident in. We've got the revenue from new customers, the month 1 to month 12 that's doing very well. That's relatively stable and predictable. We're seeing increased volume. We're seeing increased conversion. We're seeing better customers in that cohort.

That's a fairly durable kind of improvement that we've made. We're really confident in that. We've got the migration motion that we turned up that, as Paddy talked about, 70-something migrations during the quarter. That's a very new motion for us. We've got clearly a pipeline of those because those aren't things that you just, like somebody comes in one day and you turn on a migration. You have to be talking to the customer for a period of time. We're managing a pipeline around that. We also have very good visibility into our AI pipeline and are getting increasing traction there. We've got enough things that are going that give us confidence to be able to deliver on that. As I said in the prior question or the answer, we haven't fully reflected the large deal potential in the guide that we have.

That certainly gives us the upside potential beyond what we've even been talking about. We feel good that we're confident in the base, confident enough to raise the guide, and that there are still other things we can be doing and progress we could be making over the balance of this year to give us further room.

Paddy, maybe for you, can you talk about what you're seeing on the GPU pricing dynamic? It seemed like across the space, pricing came down a little bit. How are you thinking about the ability to repurpose any GPUs that migrate from customer to customer, and what are you seeing in terms of utilization at this point across the GPU side? Thanks, guys.

Speaker 6

Thank you, Jim. The utilization is very robust. We are running very lean on our GPU fleets, regardless of the generation of GPUs we are talking about. As we become more and more heavy on the inferencing side, it gives us a lot of degrees of freedom in terms of how we allocate the machines. Typically, what we are seeing with our inferencing customers is, yes, they do care about the generation of GPUs, but they care more about the price performance rather than just the raw throughput of any given generation of technology. Let's say you have 100 units of GPU on the current generation, and if we can deliver the same price performance with 90 units of GPU in the next generation, the customer really doesn't care as long as it's in the same family of GPUs and they don't have to re-engineer or do anything.

We are getting to a point where it's more about the price performance rather than the price alone or the performance alone. That gives us a lot of degrees of freedom in terms of how we allocate which family of GPUs across our inference workload customers. I think this is going to get even more important as we start scaling up many of our customers across geographies and start doing this in multiple data centers. There are a lot of new things to be figured out there, but the pricing dynamics in training workloads are quite a bit different from the ones that we are experiencing in a stack that is predominantly driving inferencing.
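A small illustration of the price-performance point Paddy makes, with entirely hypothetical numbers: if a next-generation GPU delivers enough extra throughput per unit, a smaller fleet of pricier units can yield the same cost per unit of throughput, which is what an inferencing customer actually compares.

```python
# Hypothetical GPU price-performance comparison: what matters to an
# inferencing customer is cost per unit of throughput, not unit count.

def cost_per_throughput(unit_price, units, throughput_per_unit):
    """Total fleet cost divided by total fleet throughput."""
    return (unit_price * units) / (units * throughput_per_unit)

# 100 current-gen units vs. 90 next-gen units that are 25% faster and
# 25% pricier per unit: identical price performance for the customer.
current_gen = cost_per_throughput(unit_price=2.0, units=100, throughput_per_unit=1.0)
next_gen = cost_per_throughput(unit_price=2.5, units=90, throughput_per_unit=1.25)

print(current_gen, next_gen)  # 2.0 2.0
```

This mirrors the "100 units versus 90 units" example from the call: as long as cost per delivered throughput holds, the provider gains freedom in which GPU family serves which workload.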

Speaker 5

We have time for one more question, and that question comes from the line of Brad Reback with Stifel. Please go ahead.

Great. Thanks very much. Matt, as we think about gross margin for the back half of the year as the revenue mix maybe shifts a little bit and you continue to invest in the CapEx, how should we think about the trajectory? Heading into next year, as you lap the change in useful life, what type of impact should we expect then? Thanks.

Speaker 1

Thanks, Brad. We expect the gross margins to be relatively consistent with current levels over the balance of this year. Again, as you said, the AI business is growing fast, but it's still a small part of the business. It's not going to have a material impact on gross margins. If you roll that out to next year, clearly we're not at the point of being ready to give guidance, but we would expect it to be a modest headwind to gross margins. The vast majority of our business is still going to be at the same high margins that we have. We continue to drive efficiencies in the core business: bandwidth optimization, the longer-term data center optimization strategy that we have. We're confident that we can maintain healthy gross margins in the realm that we have right now.

If AI becomes a much, much bigger portion of our business, you'll clearly have visibility into that as we do. At that point, you would see a little bit of margin pressure. At this point, the gross margin we expect to stay right around where it is through the balance of the year.

Speaker 6

That's great. Thanks very much.

Speaker 5

Your next question comes from the line of Mark Zhang with Citi. Please go ahead.

Hey, great. Good morning, guys. Great to be here at the end. Maybe I just want to dig a little bit more into the RPO performance. Very nice to see. Can you give us a sense of the deal characteristics here? What are the average deal sizes and contract durations? I also just wanted to confirm whether AI was the leading contributor here, or whether you saw good contributions from Core Cloud as well? Thanks.

Speaker 6

Go ahead, Matt.

Speaker 1

I'll start in the reverse. The increase in RPO was from both Core Cloud and AI. It wasn't just AI. Clearly, there's some AI deals that are in there. You can see that I think the average duration, and I might be quoting Q1, so I apologize if it's slightly off, but it's like 19 months. You can get the average kind of length of the deal. Say, call it two years on the outside and sometimes one year; somewhere between one and two years is typical for us, because this is a relatively new motion for us. It's great that we're getting customers that are used to the ability to just do straight consumption with us making the commitments, you know, for a minimum level of revenue over some period of time. That's something that's very encouraging and speaks to the product innovation and the improvements we've made in the core cloud and customers' confidence in our ability to continue to meet their needs. Paddy, if you wanted to add something to that.

Speaker 3

No, I think you nailed it, Matt. Yeah, it is definitely a combination of both our core cloud as well as AI. This is not just reflective of just one giant huge deal or anything like that.

Speaker 5

Got it. Thank you. Maybe a quick follow-up just on capital allocation. It seems like you guys have been stepping up share repurchases since, I guess, the end of last year. Now, we're noting the authorization is dwindling down to about $3 million. What's sort of the thought process around, you know, capital allocation going forward? Thanks.

Speaker 6

Yeah, on capital allocation, we actually reduced the amount of repurchases that we've been doing over the last two years. We did almost $500 million in 2023, and then across 2024 and into 2025, it was only $140 million. Our primary objective at the moment, and we articulated this at Investor Day, is all about organic growth and investing to drive organic growth. Secondly, and as important, we're committed to making sure that we've taken care of the balance sheet and we've addressed the outstanding convertible debt. We've said that we're going to do that by the end of this year. We started that process with our $800 million bank facility, $500 million of that as a term loan. We're dialing back the share repurchases just so that we can make sure that we take care of those first two objectives.

As soon as we take care of those two objectives, the first one will be ongoing, but the second being taking care of the outstanding convertible debt, we'll go back to a, I'd say, a reasonable level of share repurchases that are targeted at offsetting dilution. I think priority one is organic growth. Priority two is take care of the convertible debt. Priority three is use the repurchases to offset dilution. Right now, priorities one and two are the bigger focus for the next quarter.

Speaker 1

Your next question comes from the line of Thomas Blakey with Cantor. Please go ahead.

Hey guys, congratulations on the results, and thanks for squeezing me in here. I had a point of clarification first on, I think it was, Jason Ader's question earlier: Matt, did you say that the core cloud accelerated in 2Q? Then for my actual question, I know AI is organic now, growing over 100%. What kind of derivative impact did that have on NDR, if any, Paddy or Matt? You would think there'd be some kind of flow-through of these customers buying more services on the platform. I would just be curious to see what kind of impact that had on that metric. Thank you.

Speaker 6

On the second part of your question, a lot of the AI customers that are coming to us are new customers, right? That's particularly true on the infrastructure side of AI. They're not yet buying a tremendous amount of products on the core cloud side. Even if they did, they haven't been in the cohort long enough to count towards NDR. There's basically not much impact from that. That's the future benefit, which I think you're appropriately pointing out. I'm sorry, could you repeat the first part of your question?

Yeah, I think you said earlier on the call to a question that core cloud, you know, kind of excluding AI/ML, accelerated. I just wanted to make sure I heard that correctly.

Yeah, the year-over-year growth rates in the core cloud continue to improve. When you look at a metric like NDR, it's a function of what happened, the change in revenue last year compared to the change in revenue this year. It's got a lot of kind of laggy components to it. On the core cloud, in terms of the incremental ARR and the overall ARR growth of the core business, that continues to accelerate.

That's good.

Speaker 1

Your next question comes from the line of Wamsley Mohan with Bank of America. Please go ahead.

Yeah, thanks for taking my question here. Firstly, on your AI customers, are you seeing higher volatility or churn in that customer base? Just to clarify, is the penetration of these customers, how would you categorize that between maybe learners, builders, scalers in your traditional way of thinking about the customers? Where are these in their journey? Any thoughts around graduation rates on these customers?

Speaker 3

Yeah, great question, Wamsley. It's good to hear from you. It's a completely different customer acquisition motion. We don't think of them as testers, learners, builders, scalers, because they typically don't go through that journey on our platform. A lot of these customers are, in the initial stages, there were a lot of very early-stage startups. As we are seeing a lot of traction on the inferencing side, these customers, in their own evolution or in their own progression, have crossed some of the chasms in terms of both funding as well as finding product-market fit and customer traction. They're coming to us with inferencing needs that are scaling, which by definition means that they have found product-market fit, and now they have a captive audience that is willing to pay for their inferencing need.

There was a lot of the test-and-leave kind of phenomenon and the fine-tuning on the training side last year. Now, as we have started flipping more and more towards the inferencing side, these customers come, they stay, they expand, and they start leveraging different parts of our stack described in my diagram. It's a very different lifecycle that we're seeing on this side.

Okay, great. Thanks, Paddy. If I could follow up quickly with Matt on the growth CapEx side, any incremental thoughts over here? I know you said organic investments and driving organic growth are sort of highest priorities. Relative to your comments that you made last quarter, how should we be thinking about the growth CapEx profile over the next few quarters or into next year? Thank you so much.

Speaker 6

Yeah, thanks, Wamsley. I think a couple of things. One, I would point to, again, that we've increased the free cash flow margin guidance, and we feel good about that relative to the growth rates that we're articulating. What we said in the last quarter, and I'll say again, is that if we see the opportunity to accelerate growth beyond what we communicated at the Investor Day of 18% to 20% by 2027, we'll certainly do that. We have a lot of tools in our toolkit to be able to do that in a capital-efficient and cash-flow-efficient way. We remain very confident that we can grow revenue while maintaining attractive free cash flow margins.

Speaker 1

Ladies and gentlemen, that does conclude our question and answer session, and it does conclude today's conference call. Thank you for your participation, and you may now disconnect.