Nebius Group - Q3 2024

October 31, 2024

Transcript

Yulia Baumgaertner (Investor Relations Representative)

Hello everyone, and welcome to Nebius Group's Third Quarter 2024 earnings call, our first results call since our return to Nasdaq. My name is Yulia Baumgaertner, and I represent the Investor Relations team. You can find our earnings release, published earlier today, on our IR website. Now, let me quickly walk you through the Safe Harbor statement. Various remarks that we make during the call regarding our financial performance and operations may be considered forward-looking, and such statements involve a number of risks and uncertainties that could cause actual results to differ materially. For more information, please refer to the Risk Factors section in our most recent annual report on Form 20-F filed with the SEC. You can find the full forward-looking statement in our press release. During the call, we'll be referring to certain non-GAAP financial measures.

You can find a reconciliation of Non-GAAP to GAAP measures in the earnings release we published today. With that, let me turn the call over to our host, Tom Blackwell, our Chief Communications Officer.

Tom Blackwell (Chief Communications Officer)

Thanks very much, Yulia, and hello to everyone. So let me quickly introduce our other speakers that we have on the line today. So I'm pleased to have here with me in San Francisco this morning, our CEO, Arkady Volozh, and Chief Business Officer, Roman Chernin. And also joining us from our office in Amsterdam, we have Ophir Nave, our COO, Andrey Korolenko, our Chief Product and Infrastructure Officer, and our CFO, Ron Jacobs. So you've probably seen we put out quite a lot of material on our business a couple of weeks ago ahead of the resumption of trading on Nasdaq, and you've hopefully had a chance to have a quick look at our Q3 results released this morning. So some of you have actually already sent through questions, for which thank you.

And for those who haven't, you can submit any further questions via the special Q&A tab below at any point during our call, and we'll get to them. So my suggestion is that we keep our opening remarks relatively brief to make sure we have plenty of time for Q&A. And so on that note, let me hand straight over to Arkady.

Arkady Volozh (CEO)

Thank you, Tom, and welcome everyone. I'm very excited to be having this first earnings call since our resumption of trading on Nasdaq almost two weeks ago. Let me briefly highlight our key points to set the scene for today's discussion. Our ambition is to build one of the world's largest AI infrastructure companies. This entails building data centers, providing AI compute infrastructure, and a wide range of value-added services to the global AI industry. We have a proven track record and significant expertise in running data centers with power loads of hundreds of megawatts, so we know what we're doing here. We're already pushing full steam ahead. We're securing plots for new data centers, ensuring we have stable power supplies, and confirming orders for the latest GPUs, while also launching new software and other value-added services addressing the needs of the AI industry.

In short, we're working hard to rapidly put in place the infrastructure that will underpin our future success. Let me turn briefly to today's financial and operating results before we go to Q&A. First, I will start with our core infrastructure business. As you can see, we're growing rapidly. Revenue grew 2.7 times compared to the previous quarter. We have a strong cash position. Cash and cash equivalents as of September 30 stood at around $2.3 billion. Capital expenditures totaled around $400 million for the first nine months of 2024. Looking forward, we anticipate capital expenditures in the fourth quarter to exceed this amount, as we plan to accelerate investments in GPU procurement and data center capacity expansion. This includes tripling the capacity of our existing data center in Finland.

We also recently announced a new colocation data center facility in Paris, with more to come and to be announced very soon. We expect to have deployed more than 20,000 GPUs by the end of this year. But really, we're just warming up, as you understand. While our financial performance has been strong, what's even more important at this stage is what we have been doing on the product side. In the last quarter, we introduced a number of strategic product developments. For example, we launched the first cloud computing platform built from scratch specifically for the age of AI. This platform offers increased flexibility and performance and will help us to expand our customer base further. This model is based on selling GPU time by the hour, and customers can now buy both managed services and self-service with the latest H200 GPUs.

We also launched Nebius AI Studio, a high-performance, cost-efficient self-service inference platform for users of foundational models and AI application developers. This allows businesses of all sizes, big and small, to use generative AI quickly and easily. Here, we provide the full-stack solution and use a different business model: we sell access to GenAI models by the token. Outside of our core AI infrastructure business, our other businesses are also performing well. Toloka offers solutions that provide high-quality expert data at scale for the GenAI industry, and they grew revenue around four times year-on-year. Avride is one of the most experienced teams developing autonomous driving technology, both for self-driving cars and delivery robots. Earlier this month, Avride announced a multi-year strategic partnership with Uber in the U.S., and we also just rolled out our new generation of delivery robots, offering improved energy efficiency and maneuverability.
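
As a rough, hypothetical illustration of the two business models just described, selling GPU time by the hour on the cloud platform versus selling access to hosted models by the token in AI Studio, here is a minimal Python sketch. Every price and volume in it is an invented assumption for illustration only, not actual Nebius pricing.

```python
# Illustrative sketch of the two billing models mentioned above.
# All rates and volumes below are hypothetical assumptions, not actual Nebius pricing.

def gpu_hour_cost(hours: float, price_per_gpu_hour: float, num_gpus: int) -> float:
    """Cost of renting raw GPU time by the hour (the cloud platform model)."""
    return hours * price_per_gpu_hour * num_gpus

def token_cost(tokens: int, price_per_million_tokens: float) -> float:
    """Cost of buying hosted-model inference by the token (the AI Studio model)."""
    return tokens / 1_000_000 * price_per_million_tokens

# Hypothetical comparison: a week-long training run on 8 GPUs at $2.50 per GPU-hour
# versus serving 200 million tokens of inference at $1.00 per million tokens.
print(f"GPU-hour model:  ${gpu_hour_cost(7 * 24, 2.50, 8):,.2f}")
print(f"Per-token model: ${token_cost(200_000_000, 1.00):,.2f}")
```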

TripleTen is a leading educational technology player. In the last quarter, they tripled the number of students enrolled in their bootcamps year-over-year across their key markets, the U.S. and Latin America. In summary, we have been busy and have delivered strong results, but this is just the start of our journey. The big opportunities are still to come, and with that, let me wrap up and hand back over to Tom for the Q&A.

Tom Blackwell (Chief Communications Officer)

Thank you very much, Arkady. And so just a reminder to everyone that you can submit your questions through the Q&A function below. But we have a few questions that have come in already, so we'll get going. First question actually relates to the latest status on the buyback and whether that's something that's still under consideration. Ophir, can I suggest that you take that question?

Ophir Nave (COO)

Yes, sure. Thanks, Tom. This is actually a great question because it has a direct impact on our 2025 guidance. But first, maybe it will be a good idea to take a step back and remind all of us where the idea of the buyback actually came from. After the divestment of our Russian business, we viewed a potential buyback as an instrument to provide our legacy shareholders an opportunity to exit our business, especially in the absence of trading. And as we all know, at our latest AGM, our shareholders authorized a potential buyback within certain parameters. One of them is a maximum price of $10.50 per share, which represents the pro rata share of the net cash proceeds of the divestment transaction at closing. It does not ascribe any value whatsoever to the business that we are actually building.

Our shares resumed trading on Nasdaq about two weeks ago, and we are very happy to see investors' interest in our story. We also see strong liquidity levels. We hope that this is a sign that our investors see the great opportunity in our business. And if this is the case and the market for our shares remains strong, a buyback may actually not be required to accomplish the purpose for which it was originally planned. In that case, we may have the opportunity to allocate much more capital to our AI infrastructure and deliver on our plans even faster. But let's try to put this into actual numbers. As probably everyone knows, we originally provided a $500 million-$1 billion ARR guidance range for 2025.

In this scenario, where the buyback is ultimately not required, we estimate that we will be able to deliver above the midpoint of this $500 million-$1 billion ARR range, and we think that this is actually very exciting.

Tom Blackwell (Chief Communications Officer)

Great. Thank you very much for that, Ophir. So the next question is really around competition and sort of how we're seeing the market. And Arkady, I'll come to you on this. And specifically, the question is, so how does Nebius differentiate against the hyperscalers and the other private competitors in the GPU cloud space?

Arkady Volozh (CEO)

Yeah, that's a great question. So really, the question is why we believe Nebius will be among the strongest leading independent GPU cloud providers, right?

Tom Blackwell (Chief Communications Officer)

Yeah, absolutely.

Arkady Volozh (CEO)

And actually, I usually have several answers to this question. First, we provide a full-stack solution, which means that we build data centers, we build the hardware, the servers and racks inside the data centers, we build an AI cloud platform on top of it, a full cloud, and we build services, and we have expertise for those who build and train models and build applications, because we have our own expertise in this area. So we have a full-stack solution, and this translates into better operational efficiency. We believe that our costs may be lower because of this, and our product portfolio is much stronger. So this full stack gives us the first block of differentiation. Then there is the platform, the solution we're providing, which was actually built from scratch in the last several months.

This is the first fully integrated solution for AI infrastructure built anew, without any need to support unnecessary functions or old code. It's a brand new solution, and what is more important, it's specifically targeted at the tasks of a very dynamic AI industry, so we are much more flexible and can offer better pricing and so on. And the third block of differentiation, I would say, comes from the fact that we have a very strong and very good team. This is a team of more than 500 specialists, 400 of them AI specialists, specifically cloud engineers, who are ready to support our growth, who have experience building systems even larger than what we have today, and who have ambitions to build much larger systems.

And this actually translates into faster time to market when we launch new products, better customer support, and a better understanding of our clients, because we are just like them. So these are the three major things we see as our competitive advantage: a full-stack solution; a brand new, specifically tailored platform which we just launched; and us, the people and engineers who really understand the area.

Tom Blackwell (Chief Communications Officer)

Thank you very much, Arkady. So actually, the next question is around data center expansion strategy. So Andrey, I'll come to you on this. And specifically, how do we think about it overall? How do we think about geographic locations and what are potential constraints in terms of rolling out the strategy?

Andrey Korolenko (Chief Product and Infrastructure Officer)

Yep. Thanks, Tom. Hi, everyone. Andrey Korolenko speaking. So we have three ways to source data center capacity. The first is colocation, simply co-locating in someone else's data center. The second is build-to-suit projects, where someone else builds the data center on their site with their CapEx, according to our design, under a long-term lease contract with us. And the third is greenfield projects, where we build the data center from scratch and operate it ourselves. Basically, our preference would be greenfield. This allows us to realize value from the full-stack approach and reach maximum efficiency, as the team has decades of experience designing, building, and operating data centers. But that's subject to the availability of capital, and greenfield projects have longer delivery times.

So we're going to use all three of these ways to get capacity, but we view colocation as a shorter-term solution, mostly for the next few quarters, while the other two kick in. Our first data center is in Finland, where we have already commenced the capacity expansion. We are tripling its capacity during the next year; the first phase will probably come online in the middle of next year, and the last phase of that site later in 2025. I also wanted to mention that it fully supports liquid cooling technologies for the newer GPUs and the newer trends in that area as well. In the midterm, we plan to, and are actively engaging in, build-to-suit arrangements.

It's a less capital-intensive alternative to greenfield that still allows us to stick to our design and maintain most of our operational effectiveness, I would say, while keeping us more flexible in terms of capital. Talking about geographic locations, Finland was our home base and our first data center. A couple of months ago, I believe, we announced the Paris location, which is coming into operation as we speak. The next one to be announced will be in the US. Looking forward, we will be building infrastructure mostly in Europe and in the U.S. That's it.

Tom Blackwell (Chief Communications Officer)

Okay.

Andrey Korolenko (Chief Product and Infrastructure Officer)

I think I covered it.

Tom Blackwell (Chief Communications Officer)

No, I think that's very good. Thank you very much, Andrey. Just for people's reference, Andrey referred to our Finnish data center; there was a press release a few weeks ago with more specific detail around the expansion plans there, and there was also an announcement a few weeks ago about the Paris data center, if people want to refer to those for additional detail. So for the next question, we've actually received a few questions around the NVIDIA partnership and relationship, so I'm going to combine them into one if that's okay. And actually, Andrey, maybe I can stick with you here.

So first of all, the questions are really: what is the history as well as the current status of the NVIDIA partnership and what does it bring to Nebius; what is the current status of NVIDIA orders and shipments; and what is our ability to secure new GPU generations going forward, including the Blackwells. So let me give you that set of questions, if I can.

Andrey Korolenko (Chief Product and Infrastructure Officer)

Yeah, thanks, Tom. So talking about our collaboration with NVIDIA, the team actually has long-term experience working with NVIDIA, more than a decade of building GPU clusters and running them at a pretty significant scale. About the partnership, we are an official NVIDIA cloud partner (NCP) and OEM partner. That helps us to develop the data center design and the rack design, to get all the advantages of the NVIDIA software side, and to collaborate on both the technical and business sides. About future shipments, well, GPU availability is always a tricky one, but we have had a good track record of shipments throughout this year. And we feel confident, talking with NVIDIA, that the Q1 and Q2 shipments of the newer generations are secured for us.

Tom Blackwell (Chief Communications Officer)

Okay. Thank you, Andrey. So Roman, I'll come to you. So we've had a couple of questions around GPU pricing and sort of generally how we see the evolution of GPU pricing and overall how we think about the sustainability of pricing in light of the regular flow of new generation launches and so on. So Roman, perhaps I can come to you to address that.

Roman Chernin (CBO)

Yeah, thank you, Tom. So talking about the pricing, I think it's important to talk in terms of generations. For the next generation, GB200, the Blackwells coming next year, I think the pricing is not fully set, but we can expect that there will be a premium and margin, as normally happens at the beginning of a generation. For Hopper, which is the current active generation everybody is talking about, I would say that for H100s, the most popular model today, pricing has come to a fairly stable situation at the moment. There was obviously a very high premium, which has now come down, but prices are still at levels that allow for healthy unit economics. And we also have H200s, which are not such a big volume on the market, and for them we also see pretty healthy prices.

It's important to mention here that we were a little bit late to the Hopper generation compared to some of our competitors, and next year most of our fleet will be Blackwells. That gives us the advantage of benefiting from being at the beginning of this generation without a large legacy fleet. Looking forward, I would say it's normal that when chip generations shift, prices go down, but here is our angle: we invest a lot in the software layer to prolong the life cycle of the previous generation, so that we can provide the service to the customer not as raw compute on some chips, but as a service.

As Arkady mentioned, we launched our token-as-a-service platform for inference, and down the road there could be new services that hide the specific chip models under the hood while we extract the value for the customers.

Tom Blackwell (Chief Communications Officer)

Great. Thank you very much, Roman. And actually, Roman, I'll stay with you because the next question is really around the clients as of today and what are the sort of typical contract terms and durations for current customers and how we think that will evolve over time.

Roman Chernin (CBO)

Yeah. So just to remind you, we started less than a year ago; our first commercial customer started in Q4 2023. As of now, we have something like 40 managed clients. And it's important to mention that the customer base is pretty diversified; we don't have any single dominating client. If we talk about the customer profile, most of our customers are AI-centric, like GenAI developers, people for whom AI is their bread and butter. And we also see our exposure to more enterprise customers growing step by step. Talking about the contracts, as of today, since the fleet is H100s, most of our contracts are under one year. The reason for that is that, in the market, customers don't feel comfortable committing to H100s for more than a year, which is natural.

Again, I want to remind you that next year the fleet will be mostly Blackwells, and we expect that contracts for Blackwells next year will again move to more long-term arrangements. It's also important to say that for H200s, still Hoppers, we see a lot of discussions in the pipeline for one- to two-year contracts at healthy prices and healthy durations. We also have a lot of on-demand customers, and this is honestly part of our strategy to position ourselves as the most flexible GPU cloud provider today. We really want our customers to benefit from a platform that lets them combine reservations and pay-as-you-go and be more flexible in their capacity planning, because this is a real pain point in the market that we address. And again, with the Blackwells, we anticipate that contract durations will grow significantly.

When most of the fleet shifts to Blackwell, the contract mix in the portfolio will shift toward more long-term arrangements.

Tom Blackwell (Chief Communications Officer)

Great. Thank you very much, Roman. So actually, the next question is about Arkady, but since we have Arkady on the line, I'll let him field it. But the question was, how engaged is Arkady with the business today, and what are his plans for the future? I guess with respect to Nebius.

Arkady Volozh (CEO)

How engaged? Well, I'm fully engaged. If you ask my family, maybe too engaged. But seriously, it's a totally new venture, a new startup. If you look at the team and the enthusiasm and the mood, it's just a nice feeling to be there. But it's not just a startup; it's a very unusual startup. It's a new project on one hand. On the other hand, we are starting with a huge amount of resources. It's not just the team; it's also the platform we have, hardware and software, and a lot of capital. And this is a huge opportunity to build something really, really big here in the AI infrastructure space, which will go on for a long time and will be visible. So I'm engaged. It's a very interesting new game, a new startup.

And aside from the enthusiasm, just to remind you, I personally made a big bet on this. I would say something like maybe 90% of my personal wealth is in this company. So I truly believe that we're building something big and serious here, we have great prospects, we're very enthusiastic, and we want to keep this thing going.

Tom Blackwell (Chief Communications Officer)

Thank you very much, Arkady. So actually, Ophir, I'll come to you. We have a question around sort of margins and unit economics. So specifically, the question is, can you elaborate on Nebius's gross margin and unit economics for the GPUs, and what are the returns on invested capital or payback expectations that we see?

Ophir Nave (COO)

This is actually three questions.

Tom Blackwell (Chief Communications Officer)

Yes, sorry.

Ophir Nave (COO)

So let's start with the unit economics. Our unit economics are actually different from most of the reference points that investors have. We see that investors compare us to data center providers on the one hand, or to plain-vanilla GPU-as-a-service players, bare metal, as we call them, on the other hand. Most of these players are actually sitting on very long-term contracts with fixed unit economics and margins. We are neither of these two. We are actually a truly full-stack provider. So what does that mean for our unit economics? Obviously, we do not disclose specific numbers, but let me try to share with you how we think about this. To start with, we believe we are more efficient than our peers. Why? We have efficient data centers, we have in-house-designed hardware, and we have full-stack capabilities.

But furthermore, we also create value from our core GPU cloud. This is already part of our unit economics, and we anticipate that this part will grow. And there is actually another benefit: it allows us to serve a wider customer base, and we believe that this customer base will drive demand in the space in the future. So those are a few words about the unit economics. On our potential returns: our returns are already solid, but we are yet to get access to the new generations of GPUs, for which we see huge demand. And with our intention to continue developing our software stack and value-added services, we believe that we will be able to further improve our returns on invested capital. And I think you also asked about payback expectations. The payback period obviously depends on the generation of the GPUs.

It can be somewhere around two years for the older generations and much less for the new ones. So it's a little bit premature, I would say, to talk about it, but we will probably be in a much better position to share specifics once we deploy and sell our first GB200s. Fortunately, we expect to be among the first to do so, so hopefully it will not be too long. I hope that I answered the questions.
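
To make the payback logic above concrete, here is a minimal back-of-envelope sketch: the payback period is roughly the upfront cost of a GPU divided by the gross profit it generates per year. Every input in the example is a hypothetical assumption for illustration only, not a disclosed Nebius figure.

```python
# Back-of-envelope GPU payback sketch. All inputs below are hypothetical
# assumptions for illustration, not disclosed Nebius figures.

def payback_years(capex_per_gpu: float,
                  price_per_gpu_hour: float,
                  utilization: float,
                  opex_per_gpu_hour: float) -> float:
    """Years until cumulative gross profit from renting one GPU covers its upfront cost."""
    hours_per_year = 365 * 24
    annual_gross_profit = hours_per_year * utilization * (price_per_gpu_hour - opex_per_gpu_hour)
    return capex_per_gpu / annual_gross_profit

# Hypothetical older-generation GPU: $30,000 installed cost, rented at $2.50/hour
# with 85% utilization and $0.40/hour operating cost -> roughly a two-year payback,
# in line with the ballpark mentioned above; newer generations renting at a premium
# would pay back faster.
print(f"{payback_years(30_000, 2.50, 0.85, 0.40):.1f} years")
```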

Tom Blackwell (Chief Communications Officer)

No, I think you did. There was indeed a lot in the question, but well unpacked. Thank you. So Roman, let me come back to you. There's a question which is basically in the context of our 2025 guidance range: can you help us understand what is already contractually secured versus elements that might still be uncertain? I guess this relates to things like power supply, GPU procurement, client contracts, etc.

Roman Chernin (CBO)

Yeah. Thank you, Tom. I think there are really three lines of things that determine the growth. One is DC capacity, data center capacity, and access to power as a part of it. As Andrey shared before, we have secured the growth for our core facility in Finland. We are now in advanced discussions to add more colocation capacity in the U.S., and I expect that by the end of the year we'll disclose more. On the GPU side, we already mentioned today that there is a long-standing relationship with NVIDIA, and that lets us be at the front of the line to bring the state-of-the-art new NVIDIA Blackwell platform to customers, and we expect to double down on it in 2025. Client-wise, on the demand side, I think we have quite good visibility for the end of this year.

For next year, our forecast is mostly based on the capacity available. We believe that it's still more of a supply-driven model, because given our current size and the total addressable market, we don't see real limitations to securing enough demand during the next year. So that's, again, the three lines: to grow, you need enough DC space, you need secure GPU supply, and you need demand. We feel pretty comfortable on all three lines.

Tom Blackwell (Chief Communications Officer)

No, that's great, and I suppose I could point out that we have one Blackwell already secured, but definitely more to come in 2025. So anyway, the next question is around what remaining links back into Russia there are following the divestment. I can probably take that one. I think the simple answer is that the remaining links are no longer there. In reality, the separation started back in early 2022, when we embarked on the divestment process. But when that divestment came to completion in July of this year, it severed all of the remaining links. So just to put that into some context of what that means: we don't have any assets in Russia, we don't have any revenue in Russia, and we don't have any employees in Russia.

At a technological and data level and so on, all of the links are broken at this stage. Effectively, it was a clean and comprehensive break. I think it's also probably good to point out that this is not just our own self-assessment. First of all, the divestment transaction I referred to, which was the largest corporate exit from Russia since the start of the war, had broad support from Western regulators and so on. Also, the resumption of trading on Nasdaq followed a fairly extensive review process; eventually, they concluded that we were in full compliance with the listing criteria, which, in other words, means the Russian nexus was considered gone at that point.

So our Russia chapter is over, but we look forward to the next chapter, and it's one that we're very excited about. So next question, Ophir, I'll come back to you. So it's, how long will your cash balances last? And what are your investment plans for 2016 and beyond? And will you need to raise more external capital beyond that? And how and in what form? And again, apologies, it's a few questions packed into one.

Ophir Nave (COO)

I guess that you meant 2026, not 2016.

Tom Blackwell (Chief Communications Officer)

Yes. Sorry, 2026 indeed.

Ophir Nave (COO)

We have no plans for 2016, actually. But for the future, it's clear that our first priority, by far, is our CapEx investment into our core Nebius business. For this reason, how long our cash lasts is at the end of the day basically a function of how aggressive we want to be in our investments in data centers and in GPUs. Now, given the strong demand for our products and services that we see in the market, our plan is actually to invest aggressively. But on the other hand, it is important for us to make sure that we maintain sufficient liquidity to cover our cash burn for a reasonable period of time.

I think it's worth mentioning in this context, as we previously disclosed, that we are exploring, together with Goldman Sachs, our financial advisor, different strategic options to accelerate our investment in AI infrastructure even further. Our public status actually provides us with access to a wide range of instruments and options. To summarize, we plan to move aggressively into AI infrastructure while keeping sufficient cash for our burn rate, and while exploring other potential options to move even faster on our plans.

Tom Blackwell (Chief Communications Officer)

That's great. Thank you, Ophir. So actually, the next question is around the ARR guidance range of $500 million-$1 billion for next year. I think Ophir, to some degree, already covered this in his first answer, but let me just add a couple of points on top of that. So again, that guidance took into account a range of possible scenarios, including the timing of the GB200 deliveries, but the key factor here is really the availability of capital. So there are a couple of things to think about here. As of now, we have around $2 billion on the balance sheet, and the question is how much of this we can allocate to CapEx. And here, there are a couple of points. We've made reference to a potential withholding tax that we may have to cover.

Depending on how the discussions with the Dutch tax authorities go, a reasonable share of what has been allocated for a potential tax payment could be reallocated towards CapEx. There's also, as Ophir pointed out earlier, and I think this is the key point here, the scenario in which we don't have any impact from a potential buyback, which would mean we have an opportunity to allocate a lot more capital to AI infrastructure CapEx and deliver on our plans even faster. But again, I think Ophir covered that well in his first answer. In the scenario where we're able to reallocate, we estimate that we'll be able to deliver above the midpoint of the $500 million-$1 billion ARR guidance range that we gave for 2025. Okay. Very good.

So moving on, I think the next question, actually, Ophir, if I can come back to you: it's really around the strategy for the, let's say, non-core business units, and whether there are any monetization plans for the portfolio companies. So again, just for clarity, here we're talking about Avride, TripleTen, and Toloka.

Ophir Nave (COO)

Yes. So first, we truly believe that each one of these businesses is among the leaders in its field, and each one of them has great prospects. That said, as we have said time and again, the majority of our focus and capital is being allocated toward our core AI infrastructure. For this reason, we are very flexible in the strategic development of our other businesses. As one example, with respect to some of the businesses, this may include joining forces with strategic partners or seeking external investments, etc. So again, we truly believe that these businesses will do great and will be profitable for us, but our main focus, in terms of both business attention and capital, is on our core AI infrastructure business.

Tom Blackwell (Chief Communications Officer)

Very good. Thank you. So actually, the next question, which I can take, is around our thinking on investor relations going forward. Specifically, do we expect to see broker research coming out soon, and more generally, what are the plans around investor relations over the coming months as we reintroduce the company to the markets? So indeed, we had a fairly lengthy, slightly strange period where we were dark while we were finishing the divestment and putting in place all of the infrastructure for this new company. We were very pleased to get back onto Nasdaq, and that very much sparks a new chapter and a return to, I would say, slightly more normal life. Exciting, but maybe a bit more normal.

So definitely, we're starting to reengage with the various banks to reinitiate sell-side research coverage. That's a process that's underway right now, so expect to see more coming out over the coming months.

We'll get back into a more normalized sort of IR rhythm, with quarterly reporting going forward, and you can look out for us at investor conferences and so on over the coming months. So yeah, apologies for the blackout period for some time, but we're back, and we'll be doing all of that. We've also had a lot of inbound interest from investors, so we're going to be doing a lot of one-on-one calls over the coming weeks. So feel free to get in touch with us, and we'll engage as much as we can. It's a busy time, but we'll find time. So that's on the IR side.

Ophir, I'll come to you perhaps. There's a question about the ClickHouse stake; I'll remind people that we have an approximately 28% stake in ClickHouse. The question is, can we give more details on that business, how it's performing, what ClickHouse's revenue is, and whether there is a plan to go public? Probably we can't address all of that, but Ophir, perhaps you can comment to the extent that we can.

Ophir Nave (COO)

Yeah, actually, it's very simple. We treat our ClickHouse stake as a passive investment. So first of all, we don't have any immediate plans for it; right now, we continue to own it. Now, as a minority shareholder, we are not in a position to provide any more details on the business, its projections, its business plans, etc. It's not for us to say. But we can say, and we are actually very happy to say, that to our understanding the business is well regarded by partners and other market participants. So we are very happy about that.

Tom Blackwell (Chief Communications Officer)

Thank you, Ophir. So actually, the next question, Andrey, I'll come to you on this one. It's around power supply and power access, and whether we think we have sufficient access to support future computing needs and GPU requirements. Andrey?

Andrey Korolenko (Chief Product and Infrastructure Officer)

Yep. Thanks, Tom. Well, I would say that short term, as I just mentioned, we are relying on rented capacity plus the expansion of our Finnish data center, and then switching to greenfield and build-to-suit projects, again subject to the available capital. But generally, in the midterm, we don't see problems supporting the growth, even if we are talking about growing by orders of magnitude. The only challenging time might be the next three or four quarters, but I truly believe that we are in a very good position not to be blocked by data center capacity availability. That's it in short.

Tom Blackwell (Chief Communications Officer)

Great. Thank you, Andrey. So actually, the next question is about the U.S. And so what expansion plans do you have in the United States? Do you already have corporate customers in the USA? And generally, how do you see development there? Roman, maybe I can come to you to have a crack at that.

Roman Chernin (CBO)

Yeah, thank you. It's super relevant since we are in San Francisco now. I think we can say that we already have a very strong focus on the U.S. We see that, organically, many of our customers are coming from the U.S. We don't have that much awareness here yet; we just started, but already a big portion of our customers are coming from the U.S. And as we said in the previous questions, a big part of our customers are AI-focused companies, and obviously many of them are here. So we are developing the team, and we are planning to expand capacity here, as Andrey already mentioned. So yeah, I think it will be a super important part of the game.

Tom Blackwell (Chief Communications Officer)

Fantastic. And actually, the next question flips to the other side of the pond, to Europe. And so Arkady, maybe I can come to you on this one, which is what's the rationale for building the infrastructure currently in Europe? And just generally, how do we sort of see the opportunity in Europe?

Arkady Volozh (CEO)

First of all, from a corporate perspective, we are a Dutch company traded on Nasdaq, so historically we're a European company. Then, after this big split, we inherited a big data center in Finland, which, as you know, is also in Europe, which we're now tripling and which will be a pretty big facility. We also recently launched Paris, which is also in Europe. And we're discussing several greenfield projects, in particular in Europe, but not only there. So Europe definitely has some advantages for us in terms of competition and easy access to stable and cheap power supplies. But at the same time, although we started our infrastructure in Europe, we are building a global business. First of all, we have global customers; more than half of our customers today, I think, come from outside of Europe, first of all from the US.

Going forward, we are definitely not just looking, we are actually acting to expand our geography and become a truly global AI infrastructure provider, in particular in the U.S. There will be some news very soon about our expansion here in terms of infrastructure, but we also already announced that we follow our customers: we have opened several offices in the U.S., in San Francisco and Dallas, and New York is coming soon. These are mostly sales and services offices. Again, the infrastructure is moving to the U.S., not only Europe, and customer services are moving to the U.S. But again, it's not just Europe and not just the U.S.; it will be a global network of data centers and a global service provider. We are also looking into other regions pretty actively. So yeah.

Tom Blackwell (Chief Communications Officer)

Great. Sorry, Arkady, I didn't mean to cut you off, but.

Arkady Volozh (CEO)

Yeah, no, no, that's actually it. Yeah. So just watch our coming announcements, which will come very soon.

Tom Blackwell (Chief Communications Officer)

Very good. Very good. So actually, Roman, maybe I can come to you. There's a question about how you see customer needs and use cases evolving. This is obviously a rapidly developing industry, so any color that you can add around that would be helpful.

Roman Chernin (CBO)

Yeah, this is actually a brilliant question that I love to answer. So I think the most significant shift we see now is a lot of inference scenarios coming. If some time ago most GPU hours were consumed by large training jobs, that is, developing the products, now we see that a lot of compute is consumed serving customers, which we consider a great development for the market and for how we're moving forward in general as an industry. And for us, I believe this shift is also super important, because since we are very much in the software platform, when it comes to more complex scenarios we can create much more value for our customers. Another thing to mention is that the number of scenarios, the verticals, is also diversifying.

So we see, for example, a lot of customers coming from life sciences, biotech, and health tech. We see a lot of interest in robotics. Other areas, like video generation, are now blooming. So we think that there will be a lot of sectors and niches where AI is penetrating, and our mission here is to support the people who build the products with the infrastructure and to develop the platform together with them.

Tom Blackwell (Chief Communications Officer)

Great. Thank you. And actually, maybe, Roman, let me stay with you because there's a kind of a follow-on from this one, which is sort of how you think about the evolution of the customer base going forward sort of over time.

Roman Chernin (CBO)

Yeah, so I think we mostly already covered it. There are a few dimensions. One is the structure of the contracts: we said that down the road, with the new generation of chips, we expect contracts to become more long-term again, as large training jobs come back and so on. From a scenario perspective, we see that the shift to inference is the most important. And from a market-sectors perspective, again, we see a more and more diversified portfolio of scenarios and types of tasks that people address with AI. And I think, again, the next big thing will be when AI starts to be adopted more in enterprises; right now most of the customers are AI-native, and then we'll see a lot of adoption in enterprises.

It will be an important shift maybe during the next year.

Tom Blackwell (Chief Communications Officer)

Okay. Very good. So Andrey, maybe I'll come to you. There's a specific question here: do you have the capability to support heterogeneous GPUs?

Andrey Korolenko (Chief Product and Infrastructure Officer)

In short, yes. So first of all, I would just like to mention that we are following the demand we see in the market, and as the market develops, we are developing what we can provide. At this point in time, NVIDIA is the state-of-the-art solution, but in our R&D we have a lot of different options in development. We'll follow the demand and try to deliver the best possible solution for the customers.

Tom Blackwell (Chief Communications Officer)

Great. So we're kind of coming up on time here. There were a few remaining questions around the number of GPUs: how many we have in operation now, how many we anticipate having by year-end, and what the outlook is going into 2025. I'll refer people to the various materials we disclosed a couple of weeks ago, because we go into quite a bit of detail around some of the specific capacity numbers there. So I think you'll find the answers to those questions there, but if you have any follow-ups, don't hesitate to come to us. Otherwise, we're coming up on 6:00 A.M. San Francisco time, so let me thank everybody, management and all of our investors, current and potential, for joining us. We're very happy to be back in the public markets. And as I think Arkady said, this is really just the beginning.

We're very excited about continuing the discussion with all of you. With that, thank you very much, and I wish everybody a good rest of the day.