Akamai Technologies - Q4 2025
February 19, 2026
Transcript
Operator (participant)
Good day, and welcome to the Q4 2025 Akamai Technologies Inc earnings conference call. Today, all participants will be in a listen-only mode. Should you need assistance during today's call, please signal for a conference specialist by pressing the star key followed by zero. After today's presentation, there will be an opportunity to ask questions. To ask a question, you may press star, then one on your telephone keypad. To withdraw your question, please press star, then two. Please note that today's event is being recorded. I would now like to turn the conference over to Mark Stoutenberg, Head of Investor Relations. Please go ahead, sir.
Mark Stoutenberg (Head of Investor Relations)
Good afternoon, everyone, and thank you for joining Akamai's fourth quarter 2025 earnings call. Speaking today will be Tom Leighton, Akamai's Chief Executive Officer, and Ed McGowan, Akamai's Chief Financial Officer. Please note that today's comments include forward-looking statements, including those regarding revenue and earnings guidance. These forward-looking statements are based on current expectations and assumptions that are subject to certain risks and uncertainties and involve a number of factors that could cause actual results to differ materially from those expressed or implied. The factors include, but are not limited to, any impact from macroeconomic trends, the integration of any acquisition, geopolitical developments, and other risk factors identified in our filings with the SEC. The statements included on today's call represent the company's views on February 19th, 2026, and we assume no obligation to update any forward-looking statements.
As a reminder, we will be referring to certain non-GAAP financial metrics during today's call. A detailed reconciliation of GAAP to non-GAAP metrics can be found under the financial portion of the Investor Relations section of akamai.com. With that, I'll now hand the call off to our CEO, Dr. Tom Leighton.
Tom Leighton (CEO)
Thanks, Mark. I'm pleased to report that Akamai delivered strong fourth quarter results as we continued to make major progress in positioning Akamai for the future. Revenue grew to $1.095 billion, up 7% year-over-year as reported, and up 6% in constant currency. Non-GAAP operating margin was 29%, and non-GAAP earnings per share was $1.84, up 11% year-over-year as reported and in constant currency. Q4 revenue for cloud infrastructure services, or CIS, was $94 million, up 45% year-over-year as reported and up 44% in constant currency. That's an acceleration from the 39% growth rate we achieved in Q3.
The rapid growth was broad-based within CIS, driven by our ISV solutions, by infrastructure as a service and storage customers, and by customers leveraging EdgeWorkers and WebAssembly, which offer improved performance and lower costs for edge-native applications. In each of these areas, we're starting to benefit from AI-related tailwinds as customers make greater use of AI applications and agents across their businesses. Last quarter, Akamai took a major step toward the future with the launch of Akamai Inference Cloud, our platform to support the growing demand to scale AI inference on the internet. Akamai's architecture uniquely positions us to power and protect AI the way we power and protect the web, by bringing AI physically close to users, enabling the faster performance and global scale needed to unlock AI's full potential.
We believe the AI market is entering a critical transition point, the first inning of a long game to come, where inference or the execution of queries against a trained model is the new frontier. This requires purpose-built infrastructure to enable distributed, low latency, globally scalable AI at the edge, with response times measured in a few tens of milliseconds. Akamai Inference Cloud does just that by incorporating NVIDIA Blackwell GPUs into Akamai's distributed cloud infrastructure, with its unparalleled global reach and security at the edge. This enables intelligence to run instantly, securely, and exactly where it's needed, right next to the user, agent, or device. As evidence of our strong momentum, we're delighted to announce that we recently signed a four-year, $200 million commitment for our cloud infrastructure services with a major U.S. tech company at the forefront of the AI revolution.
I've had the privilege to work at Akamai for many years, and I have to say that it's really exciting to see such a pivotal player in the AI ecosystem choosing Akamai Inference Cloud for such a large AI use case. We also signed many other new and expanded contracts for our cloud infrastructure services in Q4. An AI chatbot platform based in India signed a three-year contract for our IaaS and Enhanced Compute Support solutions and saved 45% on compute costs they would have paid to a hyperscaler. A very well-known antivirus software company chose Akamai's cloud for their VPN service, telling us they liked our performance and support better than what they previously got from two of our cloud competitors.
A leading social networking platform that was using us on a pay-as-you-go basis committed to consolidate their multi-vendor stack onto Akamai's cloud platform, providing us with another takeaway from a hyperscaler. Two ad tech companies in China chose us for our significantly lower latency and dramatically reduced egress costs. One of the world's largest retail companies expanded their use of our edge compute platform to improve their digital shopping experience and increase conversion rates. As a result of the strong customer demand that we're seeing and the strong AI tailwinds across the marketplace, we anticipate that the very rapid growth rate for our cloud infrastructure services will accelerate further in 2026. Our security solutions also performed well in Q4, led by continued strong demand for our market-leading API Security and Guardicore Segmentation solutions.
Revenue from these high-growth security products grew 36% year-over-year as reported, and 34% in constant currency. Last month, Akamai was recognized as a Customers' Choice for network security micro-segmentation in the Gartner Peer Insights Report for 2026. Akamai earned a 99% recommendation rate, scoring above market norms for both user adoption and overall experience. Last quarter, we saw continued strong demand for our Guardicore Segmentation platform with both new and existing customers. One of North America's largest financial institutions purchased our segmentation solution to gain visibility and protection across all of their network assets as part of a four-year, $40 million contract. South Korea's largest mobile operator selected Akamai following the well-publicized BPFDoor security incident, which exposed gaps in east-west security and zero trust maturity. The customer chose our solution for workload-level segmentation, deep visibility, and resilient enforcement across hybrid environments.
We also signed deals for segmentation in Q4 with one of the largest carriers in the U.K., a major branch of the U.S. Armed Services, and multinational banks in North and South America and Scandinavia. In Q4, we also saw increased demand for our API Security solution, signing new customers across multiple verticals, including financial services, technology, healthcare, real estate, retail, and travel. Customers who chose Akamai API Security in Q4 included a major European automaker, a telco in the Middle East, as well as airlines serving Asia Pacific and Latin America. We also signed a five-year, $47 million commitment from one of the largest hardware companies in the world in a contract that included API Security and cloud infrastructure services, along with other Akamai offerings.
We had many other customers in Q4 who purchased multiple security products across our portfolio, including one of Asia's largest airlines, which signed a $10 million contract for multilayered protection over five years, and a three-year, $45 million renewal with one of the world's largest financial institutions to migrate nearly 100 critical applications away from hyperscaler security and onto the Akamai platform to ensure best-in-class DDoS and web application protection, high availability, and robust security support from Akamai Security Operations Command Center. Earning the trust of customers is imperative for Akamai. The world's biggest brands trust us to keep their apps performing well, even under peak traffic conditions. They trust us to protect them from myriad attacks and to keep their data safe, and they trust us for our reliability.
We saw how much this trust mattered to customers who relied on us during the recent holiday season, a time when one of our competitors took down their customers with multiple multi-hour outages. Major enterprises know who they can trust, and we're grateful for the trust that our customers place in Akamai. Last quarter, we were honored to be named by Forbes in their list of America's Most Trusted Companies and in their list of America's Best Companies for 2026. Forbes analyzed thousands of the largest public and private companies in the U.S. across 11 dimensions, including financial performance, customer sentiment, employee ratings, reputation for innovation, executive leadership, cybersecurity, and sustainability. We were also honored by The Wall Street Journal, naming Akamai to its list of America's Best Managed Companies, the Management Top 250.
This ranking by the Drucker Institute analyzed publicly traded companies based on customer satisfaction, innovation, financial strength, social responsibility, and employee engagement and development. Before I hand off to Ed, I want to thank our employees and our management team for their achievements in 2025. Together, we're successfully executing on our ongoing transformation of Akamai into the cybersecurity and cloud company that powers and protects business online. We believe that the investments we're making today are enabling Akamai to do for cloud and AI what we've done for security and CDN, and enabling Akamai to grow even faster as a result. Now I'll turn the call over to Ed to say more about our results and our outlook for Q1 and the year. Ed?
Ed McGowan (CFO)
Thank you, Tom. I'm pleased to report that we delivered excellent fourth quarter results, with total revenue of $1.095 billion, up 7% year-over-year as reported, and up 6% in constant currency. We also delivered strong bottom line results, with non-GAAP EPS of $1.84, up 11% year-over-year as reported and in constant currency. Moving now to revenue. Compute revenue, which comprises our high-growth cloud infrastructure services, or CIS, and our other cloud applications, or OCA, was $191 million, up 14% year-over-year as reported and in constant currency. For Q4, CIS revenue was $94 million, accelerating to 45% growth year-over-year as reported, and 44% in constant currency, a nice jump from 39% growth last quarter. CIS now represents approximately 50% of total compute revenue.
Moving to security. Revenue was $592 million, up 11% year-over-year as reported, and 9% in constant currency. Revenue from API Security and Zero Trust enterprise security combined was $90 million, an increase of 36% year-over-year and 34% in constant currency. Notably, API Security grew by more than 100% year-over-year, exiting the year with a revenue run rate exceeding $100 million. Security revenue was driven by the strength of our high-growth product suites and a favorable tailwind from term license revenue. For the fourth quarter, license revenue rose to $18 million, up from $12 million in the same period last year. As a reminder, our term license agreements are generally for one to three years, and we continue to maintain exceptionally high renewal rates in our term license business.
Moving to delivery. Revenue was $311 million, down 2% year-over-year as reported, and down 3% in constant currency. These results highlight the continued steadying trends we have seen in our delivery business throughout 2025. International revenue was $542 million, up 11% year-over-year or up 8% in constant currency, representing 50% of total revenue in Q4. Foreign exchange fluctuations had a negative impact on revenue of $5 million on a sequential basis, and a $12 million positive impact on a year-over-year basis. Moving to profitability. In Q4, we generated non-GAAP net income of $270 million, or $1.84 of earnings per diluted share, up 11% year-over-year as reported and in constant currency. This better-than-expected performance was primarily driven by higher-than-expected top-line revenue in the fourth quarter.
Finally, our Q4 CapEx was $154 million, or 14% of revenue. Moving to cash and our capital allocation strategy. As of December 31st, our cash, cash equivalents, and marketable securities totaled approximately $1.9 billion. During the fourth quarter, we did not repurchase any shares. For the full year of 2025, we spent $800 million to buy back approximately 10 million shares, marking the largest annual buyback in our history. As it relates to the use of capital, our intentions remain the same, to continue buying back shares over time to offset dilution from employee equity programs and to be opportunistic in both M&A and share repurchases. Now, before I provide Q1 and full year 2026 guidance, I want to touch on some housekeeping items. First, as Tom pointed out, we recently signed our largest compute customer contract.
We're very excited that this technology company has committed to a minimum four-year spend of approximately $200 million on our cloud infrastructure services, with a large majority of that spend for our AI inference cloud. We expect to start recognizing revenue from this contract in the fourth quarter of 2026. Second, to capitalize on this transaction and the AI inference cloud pipeline, we intend to invest approximately $250 million of CapEx this year to augment AI inference cloud. Third, we have recently observed significant inflationary pressure within the computer hardware market due to unprecedented industry investment in AI. Specifically, we are seeing a dramatic increase in the price of memory chips, which is driving up the cost of servers. This supply constraint has necessitated an upward adjustment to our CapEx forecast of approximately $200 million for 2026.
Next, I want to remind you of some typical seasonality we experience in operating expenses throughout the year. First, we recently completed a targeted reduction in our workforce to better align our talent with our long-term growth priorities. While this action streamlined certain areas and reduced our OpEx, we do not anticipate it generating net savings for the full year. Instead, we are reinvesting those savings directly back into the business, specifically to scale our go-to-market efforts and to support our colocation and CIS infrastructure requirements to maximize our growth opportunities. In Q4, we took a $55 million restructuring charge that was primarily comprised of severance costs and impairments of certain intangible assets. Second, looking at the first quarter, we typically see a seasonal increase in expense.
This is driven by higher payroll costs resulting from the reset of Social Security taxes for employees who maxed out in 2025, and stock vesting from employee equity programs, which tend to be more heavily concentrated in the first quarter. Third, as we look to the second quarter, we expect operating expenses to remain relatively flat on a sequential basis. The savings realized from our restructuring and the roll-off of the higher Q1 payroll taxes will be offset by our annual merit cycle, which takes effect on April 1st. Moving to FX. Foreign currency markets are expected to remain volatile throughout 2026. As a reminder, we have approximately $1.3 billion in revenue that is denominated in foreign currency.
Our largest currency exposures on revenue include the euro, the yen, and the British pound. Finally, as previously noted, cloud infrastructure services now accounts for approximately 50% of our total compute revenue and is growing rapidly. Recognizing CIS as a primary growth engine and a significant focus of our investments for the compute business, we will begin reporting it as a standalone revenue category effective in the first quarter of 2026. For simplicity, we will consolidate delivery and other cloud apps into a single reporting category starting in Q1. To assist with your year-over-year analysis and financial modeling, we have published eight quarters of revenue history for these revenue categories in the supplemental schedules as part of today's reporting package on our Investor Relations website. In addition, for added transparency, we will disclose quarterly revenue for OCA independently for the remainder of 2026.
Now moving on to guidance. For the first quarter of 2026, we are projecting revenue in the range of $1.06 billion-$1.085 billion, up 4%-7% as reported, or 2%-5% in constant currency over Q1 2025. We expect Q1 revenue to be lower sequentially from Q4, driven by the following factors. First, reduced one-time license revenue in Q1 from Q4 levels. Second, two fewer calendar days in Q1 compared to Q4, and thus two fewer days of usage revenue. And finally, less seasonal traffic in Q1 compared to Q4. At current spot rates, foreign exchange fluctuations are expected to have a +$4 million impact on Q1 revenue compared to Q4 levels and a +$22 million impact year-over-year. At these revenue levels, we expect cash gross margins of approximately 71%-72%.
Q1 non-GAAP operating expenses are projected to be $339 million-$348 million. We anticipate Q1 EBITDA margin of approximately 39%-41%. We expect non-GAAP depreciation expense of $145 million-$147 million, and we expect non-GAAP operating margin of approximately 26%-27%. With the overall revenues and spend configuration I just outlined, we expect Q1 non-GAAP EPS in the range of $1.50-$1.67. The EPS guidance assumes taxes of $57 million-$60 million, based on an estimated quarterly non-GAAP tax rate of approximately 19%. It also reflects a fully diluted share count of approximately 148 million shares.
Moving on to CapEx. For the reasons I highlighted earlier, we expect to spend approximately $254 million-$264 million in the first quarter. This represents approximately 23%-25% of revenue. Looking ahead to the full year for 2026, we expect revenue of $4.4 billion-$4.5 billion, which is up 5%-8% as reported, and 4%-7% in constant currency. Moving on to security. We expect security revenue to grow in the high single digits on a constant currency basis in 2026. For cloud infrastructure services, or CIS, we project revenue growth to accelerate to 45%-50% year-over-year. We expect this momentum to build throughout the second half of 2026, driven mainly by the scaling of our AI inference cloud business.
For delivery and other cloud apps, we expect both will decline in the mid-single digits year-over-year. Specific to delivery, we expect the revenue to decline in mid-single digits for the year, with Q1 being slightly higher due to the wraparound impact of the Edgio transaction from last year. By way of comparison, and for consistency with 2025, using our former compute reporting methodology, we expect the combined growth of CIS and OCA to be at least 20% year-over-year. At current spot rates, our guidance assumes foreign exchange will have a +$36 million impact on revenue in 2026 on a year-over-year basis. Moving on to operating margins for 2026, we are estimating non-GAAP operating margin of approximately 26%-28%, as measured in today's FX rates.
The decline in operating margin for the full year 2026 is due mainly to increased co-location and depreciation expense associated with the continued buildup of our CIS business. We anticipate that full year capital expenditures will be approximately 23%-26% of total revenue, driven by the investments and costs that I mentioned earlier. As a percentage of total revenue, our 2026 CapEx is expected to be roughly broken down as follows: For network-related CapEx, we expect approximately 4% for our delivery and security business, approximately 10%-13% for compute, and for other CapEx, we expect approximately 8% for capitalized software, with the remainder being for IT and facilities-related spending. Excluding the impact of the increased hardware pricing, 2026 CapEx would have trended within the 18%-22% range.
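As a back-of-envelope illustration, the guided CapEx percentages can be translated into rough dollar figures using the midpoint of the guided revenue range. This is a sketch for modeling purposes only; the midpoints and resulting dollar amounts are our illustrative assumptions, not company disclosures:

```python
# Back-of-envelope translation of the guided 2026 CapEx percentages into
# dollar figures. Assumes the midpoint of the $4.4B-$4.5B revenue range;
# purely illustrative, not a company disclosure.
revenue_mid = (4.4e9 + 4.5e9) / 2            # $4.45B midpoint

capex_pct = {                                 # midpoints of the guided ranges
    "delivery & security network": 0.04,
    "compute network": (0.10 + 0.13) / 2,     # 11.5%
    "capitalized software": 0.08,
}
total_capex_pct = (0.23 + 0.26) / 2           # 24.5% of revenue
remainder_pct = total_capex_pct - sum(capex_pct.values())  # IT & facilities

for name, pct in capex_pct.items():
    print(f"{name}: ~${pct * revenue_mid / 1e9:.2f}B")
print(f"IT & facilities remainder: ~{remainder_pct:.1%} of revenue")
```

Under these midpoint assumptions, compute-related network CapEx alone works out to roughly half a billion dollars, with about one point of revenue left over for IT and facilities.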
The impact of increased server costs is mainly included in the compute line item above. Moving to EPS for the full year 2026, we expect non-GAAP earnings per diluted share in the range of $6.20-$7.20. This non-GAAP earnings guidance is based on a non-GAAP effective tax rate of approximately 19% and a fully diluted share count of approximately 147 million shares. With that, I'll wrap things up. Tom and I are happy to take your questions. Operator?
Operator (participant)
Thank you. We will now begin the question and answer session. As a reminder, to ask a question, you may press star then one on your telephone keypad. If you are using a speakerphone, please pick up your handset before pressing the keys. If your question has been addressed and you would like to withdraw it, please press star then two. We will now pause momentarily to assemble our roster. Today's first question comes from Sanjit Singh with Morgan Stanley. Please proceed.
Sanjit Singh (Executive Director)
Thank you for taking the questions, and congrats on very strong Q4 results. Ed, you provided a lot of great detail on the dynamics around CapEx as well as the momentum you're seeing within the CIS business. When I look at the increase in CapEx, it's roughly coming up by, I think, $270 million. Going back to, like, the discussion that we've had in prior quarters, that roughly a dollar of CapEx equals a dollar of revenue, does that still hold? And as we think about this increase in CapEx, how should we think about that translating into revenue from a timing perspective, both this year and then maybe going beyond 2026?
Ed McGowan (CFO)
Yeah. Hey, Sanjit, thanks for the question. So, you know, obviously I talked about having some inflation in memory chips. Hopefully, that is something that doesn't last for a long time. So that obviously skews your CapEx a bit, and as I talked about, most of that is affecting your compute because there's a lot more memory in those servers. So $1 of CapEx for $1 of revenue would not hold true for this particular CapEx buy, but it's not that far off. Generally speaking, we're seeing something roughly like that. Obviously, for larger deals with longer commitments, we will offer volume discounts, but even for some stuff, you might get a slightly better return.
Like, for example, we'll be launching a rental service where you can rent GPU by the hour, starting sometime later this quarter, where the list price for that's $250, so that would work out a little bit higher. But generally speaking, it's a decent number to work with. I'd model it a little bit lower for this year, just given that we've seen higher CapEx costs associated with the memory prices.
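The rule of thumb Ed describes can be sketched as a toy model. The efficiency haircut below is a hypothetical assumption standing in for the memory-driven hardware inflation he mentions, not a disclosed figure:

```python
# Toy model of the "$1 of CapEx ~= $1 of annual revenue" heuristic discussed
# above. The 'efficiency' factor below 1.0 is a hypothetical stand-in for
# memory-price inflation; it is not a company-disclosed number.
def implied_annual_revenue(capex, efficiency=1.0):
    """Annual revenue implied by a CapEx outlay under the 1:1 rule of thumb."""
    return capex * efficiency

normal = implied_annual_revenue(100e6)          # $100M CapEx under normal pricing
inflated = implied_annual_revenue(100e6, 0.85)  # assumed 15% haircut from costlier servers
print(f"normal: ${normal/1e6:.0f}M, inflated-cost case: ${inflated/1e6:.0f}M")
```

The point of the sketch is simply that when server prices rise, the same CapEx dollar buys less revenue-generating capacity, which is why Ed suggests modeling slightly below 1:1 for this year.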
Sanjit Singh (Executive Director)
Understood. And then just one follow-up on the Akamai Inference Cloud opportunity. Really encouraging to see that four-year deal with a major tech company. Can you speak a little bit about the pipeline? I know we have some really big customers looking at the opportunity, but just in terms of the breadth of interest, pipeline, any color you can provide there on, you know, potential more customers signing up for the service?
Tom Leighton (CEO)
Yeah, pipeline, very strong. In fact, you know, the inference cloud offering we announced in the fall, where we deployed the GPUs into 20 cities, that's already sold out, even though it's not generally available yet, just from the beta customers. Now we're ramping up the investment there, as Ed mentioned, and very strong pipeline. In fact, the large customer we talked about is already, you know, committed to take a substantial portion of that. You know, the areas of interest are broad. At a high level, obviously, inference applications, also post-model training, but specifically, you know, things like transcoding, real-time translation, generative media, you know, to generate images and video on the fly. The new Blackwell GPU is very good at doing that with much lower latencies.
Vision, you know, processing what is seen, you know, customer support bots, all sorts of gaming applications, streaming, rendering, modifying characters as you go along in the game. In commerce, you know, virtual fitting room kinds of applications, so it's almost like the buyer is looking at themselves in a mirror, you know, wearing the clothes. Also, making sure the clothes will fit, so you have fewer returns. A lot of robotics and autonomous vehicle kinds of applications, you know, areas that these folks might not, you know, traditionally be Akamai customers, now potentially, you know, large compute customers.
Generally, the field of, you know, local LLMs, you know, as people, you know, or companies do more kinds of things themselves, but wanna operate their own, you know, model, that's great because that's the kind of thing you'd wanna do on inference cloud and have it done, you know, close to where your employees are. So we're very enthused about what we're seeing so far, and a lot of potential for growth for us.
Operator (participant)
The next question comes from Mike Cikos with Needham & Company. Please proceed.
Mike Cikos (Senior Analyst)
Hey, team. Thanks for the questions here, and congrats on the strong end to 2025. The first question I had for you, on that major U.S. tech customer, can you help us think about how this came together? It's great to see the duration. We're talking four years and the $200 million minimum commitment. But was this a new logo to Akamai, or were they a previously existing customer within CIS or another portion of the Akamai portfolio? And then I just have a quick follow-up.
Ed McGowan (CFO)
Sure, I'll take this one, Tom. So, the good news, it was an existing customer. It wasn't one of our largest customers, though. This was somebody who was using us for CDN and security, and then, you know, had discussions with them going for several months now on a pretty exciting workload. We're not at liberty to disclose who it is, but, you know, the good news is, existing customer who has dramatically increased their spend, and we hope there's a lot more business to do with them.
Mike Cikos (Senior Analyst)
That's excellent, and I appreciate that, Ed. I guess the follow-up builds on Sanjit's final question. But when thinking about the capital intensity here, and I really appreciate the disclosure. It sounds like you guys have been busy on your side. But how do we think about the level of CapEx you guys are deploying here? Are you changing in any way how you're sourcing servers or going in and buying hardware versus where we've been previously, just given the heightened price components that we're seeing out there on the market and the feel that this is somewhat different as far as the cycle and persistence of these pricing dynamics? Anything there would be incremental as well. And thank you again.
Ed McGowan (CFO)
Yeah, sure. No problem. And, you know, the capital intensity isn't necessarily increasing for any other reason than we're seeing significant demand for CIS. So that's the major driver. And obviously, you know, making that purchase of $250 million for the inference cloud is very well informed. And as Tom mentioned, it's great to have one customer who's taking, you know, a good chunk of that, and having that committed is just a great opportunity for us to put that capital to use. So I hope we do more of that. So I'm very happy about that. Now, in terms of the complexity of what we're doing or what we're buying changing, no, not really.
We're buying, you know, mostly servers and, you know, networking equipment and things like that. We are looking at, you know, trying to reduce the impact of the memory chip increase in cost. So we're looking at sourcing things differently from different sources, et cetera. But generally speaking, there isn't really any significant change. And as far as our co-location posture, we're still using the, you know, third-party colo providers. At some point, maybe that changes once we get a lot larger, but no real significant change.
Hopefully, as I broke out the different components, if you wanna think about it this way, you know, you take out the $200 million for the price increases, and then if you look at that purchase of the AI inference cloud as sort of something that we did that was a little different than last year, the normalized CapEx is kind of at the lower end of what our range typically would have been. So, you know, this is a good kind of capital intensity increase when you have a chance to fuel a business that's growing as fast as CIS is.
Mike Cikos (Senior Analyst)
Great to hear. Thank you, Ed.
Operator (participant)
The next question comes from Rishi Jaluria with RBC. Please proceed.
Rishi Jaluria (Managing Director of Software Equity Research)
Oh, wonderful. Thanks so much for taking my questions. Nice to see, you know, acceleration in the CIS business at scale. Maybe two questions, if I may. Number one, if I start to think about some of the success that you're having on CIS, it sounds like you're having that with existing Akamai customers that may have used you for delivery and/or security, or a combination thereof. Maybe can you help us understand, you know, as you think about going back to those customers, is the total ACV or, you know, whatever sort of metric you wanna use, with those customers growing meaningfully as a result of this?
In other words, just trying to get a sense that it's not a situation of, you know, money that maybe they would have spent for delivery in the past, and as we think about pricing and DIY, it's money that's going elsewhere, that this is actually being additive to those customers' total bills, if that makes sense. Then I've got a quick follow-up.
Ed McGowan (CFO)
Yeah. Yeah, no, great question. It's certainly, it's additive. You know, we don't—we're not horse trading any delivery for compute or anything like that. As a matter of fact, this particular large customer was done out of cycle, so it wasn't even done as part of a renewal, so it's all 100% additive. And I would say, yeah, we're having good success with existing customers, but also with new customers. You know, Tom talked about the pipeline. What's interesting with that pipeline is we are starting to see verticals we aren't typically strong in from a legacy CDN perspective, and so that's good to see. We see partners bringing us new business, and there's really a mix in that pipeline of new and existing customers.
I've actually seen total new customer count pick up over the last, you know, year and a half or so, and I think a lot of that has to do with having CIS as an offering that's more broad.
Rishi Jaluria (Managing Director of Software Equity Research)
Got it. That's helpful. And then, you know, I'd be a little remiss if I didn't ask about kind of some of the one-time factors going on in calendar year 2026. You know, as you think about your guide for the year, and obviously appreciate the granularity, just can you maybe help us understand. You know, I know this isn't the Akamai of 10 years ago, when maybe, you know, live events were a lot more meaningful, but still just wanna understand, what are kind of your assumptions in terms of, you know, the major events happening between the Winter Olympics going on right now, between the FIFA World Cup in the summer, got some big AAA gaming releases that may or may not happen.
Obviously, release dates get, keep getting pushed out. Maybe just help, help, help us understand kind of the puts and takes, you know, and, and how that ties into your numbers. Thanks so much.
Ed McGowan (CFO)
Yeah, sure, happy to take that one. So if you think about events, they come in different flavors. You've got the small events like a live concert or a Super Bowl. Those tend to be very small revenue. You know, sometimes you might get a capacity reservation fee, so maybe that might be $500,000-$1 million or something. So nothing too dramatic there. Something like the Olympics, three weeks long, it's a few million dollars. Depends on how many rights holders you have, how many different rights holders you sign, et cetera. So it's not a huge jump when you're doing, you know, $1 billion+ a quarter. It's fairly insignificant to the quarter. It's good business, so we'll take it.
Something like the World Cup's a little bit longer, so you'll probably see, you know, maybe 3-5, 5-6, something like that. But again, nothing overly material, although it's nice to have all these events. And then things like an NFL season, much better. You're gonna generate a lot more revenue there from a number of different customers. So it really depends on the length of time and the number of people that have rights. Something like a gaming release, it depends. If it's a really popular release that has a lot of updates to it, that can be popular and can drive some extra revenue. It really depends. Something like Fortnite certainly was a big tailwind for us several years ago.
If you see a new console refresh cycle, that's a much bigger impact for us because you're talking now about hundreds of millions of consoles getting firmware updates and lots of updates. So that's the way to think about the events. So it's nice to have them, but it's not overly material for the year.
Rishi Jaluria (Managing Director of Software Equity Research)
Very helpful. Thank you.
Operator (participant)
Our next question comes from Roger Boyd with UBS. Please proceed.
Roger Boyd (Executive Director)
Great. Thanks for taking the questions and congrats on a good end of the year. I wanted to ask about the handful of larger CIS deals that you had noted last year as being delayed out of the back half of the year. Can you just update us on how those are progressing and maybe how those are embedded into the 2026 guide? I think you mentioned the $200 million deal you signed this quarter will start to ramp in the fourth quarter. At a high level, can you just talk about the typical ramps you're seeing in compute? Is any part of this a result of capacity constraints, and do you expect to see these ramps on the compute deals get shorter over time? Thanks.
Ed McGowan (CFO)
Yeah, it really depends. You know, some we can get up and running pretty quickly. It really just depends on the size of the transaction, and if there's any, you know, specific geo where we may need to get some additional colocation. You know, the colocation market is tight, but we've got—we're a big buyer of colocation, so we're doing pretty well there. We did see some of the larger workloads ramp up at the end of last year, and we've modeled in what we think those will do. And I talked about this particular really large deal will start ramping in Q4. And part of that is we have to, you know, we're ordering all the chips, putting them in place, getting some space. So it just takes a bit to ramp that up.
It is a, you know, obviously, GPUs are a pretty tight supply chain, but, you know, we're able to get those out and launched here. So we've modeled in a variety of different outcomes on that, in terms of our guidance range. But, you know, if the bigger the deal, usually takes a little bit longer to ramp, and in some cases, people can get up and running very shortly.
Roger Boyd (Executive Director)
Very helpful. Thanks, Ed.
Operator (participant)
The next question is from Fatima Boolani with Citi. Please proceed.
Fatima Boolani (Co-Head of US Software Equity Research and Managing Director)
Oh, good afternoon. Thank you so much for taking my question. Excuse me. I wanted to focus on the trajectory of the delivery business. I think this has been asked in a couple of different permutations, but I wanted to ask it at a higher level with respect to the aggregate environment for internet traffic and traffic volumes. You know, you had a bunch of your peers sort of talk to, you know, accelerating or improving traffic trends. I was hoping you could compare and contrast for us what you're seeing on the Connected Cloud network. And then, the flip side of that coin is just the pricing dynamic.
So, you know, to your point, the delivery business has seen a pretty substantive degree of stabilization over calendar 2025, and it seems like that is going to persist. So I just kind of wanted to unpack the P and the Q on the delivery equation. And then I have a follow-up as well, please.
Tom Leighton (CEO)
Yeah, at a high level, the trends, you know, that we're seeing and projecting for this year are pretty comparable to what we, you know, saw towards the latter half of last year. The traffic environment seems, you know, very reasonable. Obviously, there are fewer players in the market than a couple of years ago. The pricing environment remains competitive. We still have folks out there selling, in some cases, at very low prices, which we won't do. In particular, you know, we see some costs rising, as we've talked about, especially in memory, and in some cases, we'll actually be raising, you know, prices to help offset those costs.
But I would say at a high level, what we're expecting this year is pretty comparable to what we saw last year, especially in the back half of the year.
Fatima Boolani (Co-Head of US Software Equity Research and Managing Director)
I appreciate that. And, Tom, you had sort of talked about the rental service that you're going to launch in this upcoming quarter. I wanted to take the opportunity to have you unpack that, you know, what the expected structure is, what the economics look like, and maybe in a broader sense, the type of utilization that you are expecting on your network, as you think about and deploy this $250 million of incremental capital to scale out the inference cloud ahead of the capturable opportunity. Thank you.
Tom Leighton (CEO)
Yeah. So in terms of inference cloud, you know, there are two models. One is the traditional model, where you buy access to the GPU by the VM hour, the token, and that's what will be, you know, going GA later this quarter. The GPUs we deployed into 20 cities are already, though, pretty much sold out. So we're adding an order of magnitude more capacity, and that's what the $250 million investment is for. And in addition to selling by the token or VM hour, we'll be selling clusters, so that you might decide to buy hundreds or thousands of GPUs in certain locations. So that'll be a new model that we're introducing this year and have some very large customers buying CIS in that way.
Ed McGowan (CFO)
Yeah, the one thing I would add, Fatima, is, you know, in terms of the early pipeline, we are seeing a bit more skew to the customers who want to, you know, guarantee the capacity. So they're asking for, whether it's several hundred or 1,000 or whatever, GPUs for a period of time, you know, multiyear time kind of deals, which is obviously a better model. I'd, you know, like to see that. In terms of the usage, we haven't done that yet, so we don't know exactly how that's going to play out. So we've, you know, got a range of various outcomes there. But, you know, certainly there's a lot of early excitement and demand in the pipeline that we're seeing for what we're buying.
Fatima Boolani (Co-Head of US Software Equity Research and Managing Director)
Thanks for that detail, Ed.
Operator (participant)
The next question comes from Frank Louthan with Raymond James. Please proceed.
Speaker 17
Hey, guys. Good evening. This is Rob on for Frank. Hey, congratulations on the strong 4Q. So my question is, what sort of revenue commitments are you guys able to get from customers today relative to before? How prevalent are those now versus previously? You know, what percentage of revenue on the delivery side is under those commitments, and what's your outlook for delivery growth this year, specifically with AI-based traffic, if you can give us a better sense of that? Thank you.
Tom Leighton (CEO)
Yeah, we are seeing longer commits for really, for all of our services. Partly that's by design, and I think customers are also interested in having that take place. With the delivery growth, you know, we're looking at about the same rate, so mid-single digits this year. Ed, do you want to add to that?
Ed McGowan (CFO)
No, I would just say, you know, you'll see, like, the RPO for the total company is growing quite a bit, and that's just a function of what Tom's talking about in terms of folks making, you know, longer-term commitments. You know, we've incentivized our sales force to get longer commitments. As far as the delivery market itself, not a huge change there in terms of commitments. There are some customers that might commit a percentage. Some might give you some type of exclusive for either a part of their business or geographic area, et cetera. So there's really no dynamic change in the delivery business. It's roughly the same in terms of committed versus uncommitted. But since the other parts of the business are growing much faster, security and compute, we're seeing a lot longer and bigger commitments.
Speaker 17
Okay, great. Thank you.
Operator (participant)
Our next question comes from John DiFucci with Guggenheim. Please proceed.
John DiFucci (Senior Research Analyst and Senior Managing Director)
Thanks for taking my question. A lot, a lot of interesting things happening here, Tom and Ed, and especially around the CIS business. And thanks again for breaking that out historically, too. Last year, you announced a very large contract with a social media customer, and I think this is sort of a follow-up to Roger's question. And that company had a lot going on internally, right, and externally, too, and it required the additional build-out of capacity by you. I think, you know, we're a year into that, and I believe the build-out by you is complete, but I still think there's a lot going on with that company. I guess, could you—'cause a lot of this stuff could come in lumpy, and I'm just trying to figure out how to think about this going forward.
And this is, like, the first deal like this, and it's great to hear about that $200 million four-year deal, too. But with this deal, have you started recognizing revenue yet from that customer? And if not, can you share a little bit about when you expect to recognize revenue? And then I guess one other part related to this is, that social networking deal you talked about, that's gonna consolidate on Akamai and take away from a hyperscaler that I think Tom mentioned in his prepared remarks, is that the same customer or is that another customer? Sorry for the long-winded question.
Ed McGowan (CFO)
Yeah. No, no worries, John. I hope by interesting, you mean good interesting. So I'll take that. It's not the same customer, it's a different customer. In terms of the lumpiness you talked about, generally speaking, we don't see lumpiness per se. You know, as I talked about with the new deal we just signed, the $200 million four-year deal, I expect that to be, you know, fairly even. You know, maybe there's some upside as usage ramps, but there's not, you know, like, say, a big chunk of revenue, and then it goes away or whatnot.
But we do expect that to start ramping in Q4, just as we, you know, start deploying, you know, make the purchase, get the GPUs, get them up, the customer has to do their testing, and then they go into a full launch. So that just takes some time. So starting in Q4, we expect that to ramp up and then continue into next year. And then in terms of the large customer we signed last year, that $100 million deal, we did start taking a little bit of revenue in Q4. We expect that to continue to ramp up throughout the year. I will say there is some seasonality.
We do have a little bit of, you know, work in the compute business that might be tied to say, you know, like, a season or something like that, say, a sports season. So you may see a little bit of extra revenue in, say, Q4, and it dips a little bit in Q1. But generally speaking, you don't see big lumpiness, as you, as you said, in the compute business.
John DiFucci (Senior Research Analyst and Senior Managing Director)
Okay. And that, that makes sense. That makes sense. I was thinking sort of like Oracle, but they're bringing on these huge AI training data centers, which are all just coming online, but that's not how your business is. So thank you for that. And I guess just one follow-up, a little bit unrelated here, Ed, it's an accounting question. How much of that fourth quarter restructuring charge of $55 million, was any of that in cash for this quarter? Because cash flow was a little weaker than, than I think people expected, and CapEx was higher, so I get that. But was that?
Ed McGowan (CFO)
Yeah, actually, yeah. Yeah, good question. So most of the cash flow is a timing issue, just in terms of the timing of cash receipts and payments, and we made some pretty big tax payments before the end of the year, so that skews the cash flow. But if you look at last year, I think it's relatively in line with last year. In terms of the restructuring, that cash will go out in Q1. The majority, a little over half, was intangible assets, so there's no cash associated with that. Severance was a little less than half. That'll hit in Q1.
John DiFucci (Senior Research Analyst and Senior Managing Director)
Okay, great. Thank you very much. And a lot going on here, but I, and I actually definitely meant good when I said interesting. So thank you.
Operator (participant)
The next question is from Will Power with Baird. Please proceed.
Will Power (Senior Research Analyst)
Okay, great. Maybe just to switch gears to, you know, security, great to see the continued Guardicore Segmentation and API Security strength, and, you know, API, I guess, topping, you know, $100 million. It'd be great just to get a better sense of the outlook, you know, for growth expectations on those two pieces in 2026. You know, how that folds in. And then probably for you, Tom, it'd be great to get your perspective just on how you're thinking about, you know, any potential AI risk, kind of across your security portfolio, just given some of the market, you know, concerns out there.
Seems like the businesses have been pretty, pretty resilient, but maybe any just comments on what you're maybe seeing competitively from any, you know, other AI entrants or technologies in the marketplace.
Tom Leighton (CEO)
Ed, why don't you take the first, then I'll do the second?
Ed McGowan (CFO)
Sure. Happy to. So, yeah, very happy with what we're seeing with Guardicore and API Security. We had a really good, strong fourth quarter finish in terms of bookings. And the nice thing with both of these businesses is we're seeing a nice mix of new customers versus existing, especially with Guardicore. As a matter of fact, the majority of revenue is coming from new customers associated with Guardicore, which is great. And then with API, both actually are, you know, very low on a penetration rate. Within API Security, less than 10% of our existing customers have purchased that, so there's an enormous amount of runway there. We're seeing big adoption across many, many different verticals, too. So it's not just one vertical, like financial services, it's really across everything.
So we expect, as we go into next year, very similar to last year in terms of API and Guardicore, now at a little bit more scale, driving the majority of the growth. The other product lines, whether it's, you know, bot management and WAF, continuing to grow, albeit slower, and then services continuing to grow as well. So we expect growth in most of those categories. Maybe Prolexic, which tends to be a little bit more mature, maybe that's more flattish, but we do expect growth across the board, and this year to look pretty similar to last year, with the majority coming from API and Guardicore.
Tom Leighton (CEO)
Yeah, and to your second question, that's a great question. You know, we are not seeing risk from, you know, AI-induced SaaS, do-it-yourself kinds of things. And, you know, one of the key reasons for that is for our services, security services, you really need to run it on a large distributed platform, by and large. You know, and one reason for that is if you tried to sort of do it yourself in your data center or in a few locations, you just get overwhelmed with the volume of the traffic, and you don't have any chance to really apply the security because you're flooded. And that's where Akamai's distributed platform makes the critical difference, is we intercept all that traffic, the bad traffic, out where it starts, and we can do that at great scale.
And so, you know, I don't think we have that kind of exposure. Now, the good news is, if the AI-induced risk to SaaS materializes, that's a big tailwind for us on the compute side, because these enterprises are gonna need to run their models that are doing these SaaS tasks, and generally, they're gonna probably wanna run them close to where their employees are, and that's a perfect application, you know, for our inference cloud. So on balance, if that really materializes, that's a tailwind for Akamai, I think, and not a headwind.
Will Power (Senior Research Analyst)
That's helpful. Thank you.
Operator (participant)
The next question comes from Jackson Ader with KeyBanc Capital Markets. Please proceed.
Aidan Daniels (Equity Research Associate)
Hey, this is Aidan Daniels on for Jackson Ader. Thanks for taking our question. I was just curious on the compute side, what are you guys seeing as some of the main reasons for customers choosing Akamai over, whether it's other hyperscalers or other competitors, for compute workloads at the edge? I know cost has been a key element you guys have called out in the past, but I was just looking for some added color on how, Akamai can continue to win some of these deals. Thanks.
Tom Leighton (CEO)
Yeah, great question. It's performance, it's scale, and yeah, cost is generally lower. But just as an example, you know, we talked about on the last call, the three big hyperscalers in the U.S. are all using our compute. And for them, it's not a cost issue, because they have their own clouds, obviously. For them, it's a performance issue, because we can run their logic in a lot more locations than they can do themselves with their clouds. And so that results in better performance for them. They're closer to the users and better scale, especially if you're doing things around video that are, you know, bit intensive. You need to do that in a much more distributed fashion. And then for other customers, you know, cost does come into play.
As we've talked about, some of our customers are getting, you know, really substantial savings as they move out of, you know, the major cloud providers, the hyperscalers, to Akamai. In fact, Akamai achieved major savings as we moved a lot of our applications out of the hyperscalers onto our own cloud. So better performance, better scalability, and better cost in many cases.
Aidan Daniels (Equity Research Associate)
Thank you.
Operator (participant)
The next question comes from Patrick Colville with Scotiabank. Please proceed.
Patrick Colville (Director and Equity Research Analyst for U.S. Software)
Oh, thanks for having me on. This one's for Dr. Tom, please. I guess I just want to go back to the inference cloud. I mean, you talked earlier about some nice use cases for accelerated compute at the edge. And it seems like the common thread is that latency is important for those use cases. But I guess my question is this: I mean, in the CPU world, edge compute was a good market, but it wasn't enormous. You know, most compute happened locally on device or at the hyperscale core. Why would accelerated compute be different, that you're gonna have this, you know, large and, you know, very exciting market at the edge?
Tom Leighton (CEO)
Yeah, good question. And it's not just latency. Latency, of course, matters, but it's scale. You know, when you think about some of the AI applications, you know, generative media, you know, you're generating video, processing video, and just, you don't have the capacity, the bandwidth at a core data center to be, you know, generating or processing millions of personalized videos, you know, concurrently. You gotta do that in a distributed fashion. Just like anything with live sports or anything like that, it's got to be distributed. So it's not just latency. And increasingly, as we're seeing these applications, they are, you know, bandwidth intensive. Also, we talk about doing speech. You know, when you're conversing with your avatar, it does need to be real time. You can't be going far away to a data center, or it's not the same experience.
Now, in the past, you know, the GPUs weren't fast enough to, you know, make that work, but now, they are getting to that point where it is a few tens of milliseconds, and so the latency does matter more now.
Patrick Colville (Director and Equity Research Analyst for U.S. Software)
Can I just ask a quick follow-up there on inference cloud again, actually? I mean, two parts. The first one is, do you need to do any software updates in terms of the software that Akamai has for customers to run inference cloud? And then I guess the second part is, in terms of Akamai's target customers here, it seems like the customer profile is slightly different to the existing customers. Am I interpreting that right, that you'll be able to sell this to existing customers, but also a new cohort and maybe even AI natives?
Tom Leighton (CEO)
It's a broader customer pool. So, you know, our existing customer base, yeah, they, they are good targets for us. But there's also, as we talked about, Ed mentioned, there's a lot of customers we're signing that weren't using Akamai before. Maybe they didn't really have delivery, you know, needs or even, you know, web app firewall at any kind of scale, and so they're new to Akamai. And in terms of software updates, we're always, you know, upgrading the software in our cloud platform, but it's nothing special per se, with the GPUs. It works very much in the way that Akamai cloud has worked, Linode has worked. We are selling an additional model, as Ed talked about, with clusters with a long-term contract, in addition to the traditional model, which is by the VM hour, by the token.
Patrick Colville (Director and Equity Research Analyst for U.S. Software)
Crystal clear. Thank you so much, Tom.
Operator (participant)
The next question is from Jonathan Ho with William Blair. Please proceed.
Jonathan Ho (Technology, Media, and Communications Research Analyst)
Hi, good afternoon. Congratulations on the large AI inferencing deal. I was wondering if you'd give us a little bit more color in terms of, you know, what was unique about Akamai to cause the customer to maybe choose your solution over competitors. And, you know, if you could, maybe give us a sense of, you know, philosophically, whether you're building out capacity to meet that demand, or are you, you know, comfortable investing even above that demand as you're adding capacity? Thank you.
Tom Leighton (CEO)
Yeah, it's what we've been talking about. It's, you know, really good performance, very reasonable cost. And I'd add, you know, for something that's this critical an application, trust matters. And I talked a little bit about that, you know, a few minutes ago. You know, Akamai customers do trust us. We've really earned that, you know, with our delivery and security services, our reliability, our customer support. And for something this big and critical, I think that makes, you know, a big difference. And we do need to build out in this case, and that's part of the large investment that Ed talked about. We're greatly increasing the capacity of inference cloud.
You know, as I mentioned, we pretty much sold out the 20 locations with the GPUs that we have deployed starting in the fall, and now we're, you know, gonna increase that by about an order of magnitude, and part of that will be used, you know, by this large customer that we talked about.
Jonathan Ho (Technology, Media, and Communications Research Analyst)
Got it.
Operator (participant)
The next question comes from Rudy Kessinger with D.A. Davidson. Please proceed.
Rudy Kessinger (Managing Director and Senior Research Analyst)
Hey, great. Thanks for taking my question, guys. Jonathan actually took the main one that I had. But on the $250 million you're spending to augment the AI inference cloud build-out, I guess, you know, by year-end this year, I mean, in how many locations do you intend to have GPU capacity? And I believe the initial announcement last quarter was, like, 17 or 19 locations or something, but how many do you intend to have, you know, GPUs in by the end of this year?
Tom Leighton (CEO)
Yeah, you know, we're at about 20 now, and I don't expect that number to be a lot larger, but the locations we're in themselves will be a lot larger, which enables us to add the model where we can sell clusters of GPUs.
Operator (participant)
The next question comes from Mark Murphy with JPMorgan. Please proceed.
Arti Vula (Equity Research Associate)
Hey, this is Arti Vula on for Mark Murphy. Thanks for taking the question. Ed, I believe you mentioned that you're seeing deals in the pipeline coming from verticals that maybe weren't as prevalent before. Could you help us understand what those newer verticals are, and then are those coming more from the direct sell motion or from the channel? Thanks.
Ed McGowan (CFO)
Yeah, so it's a little of both. We're getting some from the partners that we work with. You know, we announced a relationship with NVIDIA. They refer customers over to us, as an example. And in terms of the verticals, you know, think of things like life sciences, manufacturing, healthcare, you know, different types of industrials. They typically don't have really big websites, but do spend an awful lot on compute, and they're also good security customers as well. So, the direct motion is part of it. The direct side is, you know, doing a good job of introducing this to all of our existing customers. I've gone on a couple of calls, and certainly, it's drumming up a lot of interest. And, you know, customer feedback is that, you know, they believe we have a right to win here.
It makes a lot of sense for us going here, and there's an enormous amount of curiosity, and we're doing a lot of proof of concepts. So, you know, good to see that the demand is coming from, you know, a variety of different sources.
Arti Vula (Equity Research Associate)
And then I think you have at least three, you know, named wins versus hyperscalers now across CIS and security. You know, you guys have always found success there, but do you see any changes in the competitive dynamics there? Is that improving for you guys versus the hyperscalers? Thanks.
Tom Leighton (CEO)
You cut out on the first part of the question, the competitive dynamic in what area?
Arti Vula (Equity Research Associate)
Against the hyperscalers.
Tom Leighton (CEO)
Well, we've competed with the hyperscalers in delivery and security for over a decade. You know, I don't see any fundamental change there. We compete very successfully against them. In fact, two of the three big hyperscalers are large Akamai customers for delivery and security. And of course, now we're adding compute into the mix, and already all three are using us for our compute capabilities. And again, it's not an issue of cost for them, it's an issue of better performance, at least in part, because of our distributed nature. We can get their compute logic closer to their users where they want it.
Ed McGowan (CFO)
Yeah, one thing I'd add, it's not necessarily that the only way we win is by taking business away from them. In a lot of cases, we're seeing new workloads, especially as inference becomes a much bigger part of the equation in AI. You know, we're a good spot to go to when customers have challenges where latency needs to be, you know, very, very low and you need to be super close. You know, we've seen some customers tell us that even being in a different state in the U.S. gives them too much latency. They need to be, you know, within a couple hundred miles, which is, you know, different than what you'd typically see in even the CDN world.
So, it's not a question of a zero-sum game where we win, they lose. It's, you know, we do some from time to time, take some workloads. We do go head-to-head in competition, where we go in a bake off and sometimes we'll perform better, et cetera. So the market is just growing so fast that there's plenty of room here for us. I think we're starting to demonstrate that we're becoming a real player here.
Arti Vula (Equity Research Associate)
Very insightful. Thanks.
Operator (participant)
The next question comes from Jeff Van Rhee with Craig-Hallum. Please proceed.
Vijay Homan (Equity Research Associate)
Hey, guys, this is Vijay Homan on for Jeff. Thanks for taking the question. Just one from me. I know you mentioned the impact of AI on the cloud segment. I was hoping you could just expand on the impacts of AI on security and delivery revenue, maybe to the extent that that's driving traffic, and how it's changing the demands of your customers for your services. Thanks.
Tom Leighton (CEO)
Yeah. So, there's a variety of impacts with AI on security. You know, one of them is that AI really helps enable the attacker. And so we're seeing, you know, much larger botnets out there because the attacker can use AI to take over a lot more devices. They can use the AI to train malware to get around known defenses, and so you see more penetrations. You've seen AI with deepfakes you couldn't possibly know were fake. So in a lot of ways, it's making the attack environment much harder to defend against. Also, as enterprises adopt a lot of AI apps and agents, that's a whole new attack surface, and you need special defenses. Like, for example, our new, you know, firewall for AI. Also, you know, today, enterprises are in tough shape.
They don't even know all the shadow AI they have. And so we have new capabilities there with our API Security, extending it to identify the AI applications they have exposed. So you need to know what AI you've got out there, and you need to defend it with special, you know, firewall capabilities, which we do. So I think AI is having, you know, and will continue to have a positive impact for our security business in terms of our revenue, even though the attack landscape is nastier, and in some ways, that means more need for Akamai services. You know, in terms of delivery, we are, of course, seeing a rise in the scraper bots. And so, you know, if left undefended, that would create a need for more traffic.
Now, for our customers, through our bot management solutions, we actually help them to deflect a lot of the scraper bots, give them visibility into what the various bots are, what they're doing, and then our customers decide, okay, which ones do they want to block, which ones do they want to do special things for? So I'd say on balance, yeah, probably a traffic, you know, increase to an extent, but again, there, it's more creating more of a need for our bot management so that our customers can handle the various scraper bots in the way that makes sense for their business.
Vijay Homan (Equity Research Associate)
Got it. Very helpful. Thank you.
Operator (participant)
This does conclude today's question and answer session, as well as today's conference. Thank you for attending today's presentation. You may now disconnect your line.