IREN - Earnings Call - Q4 2025

August 28, 2025

Transcript

Speaker 5

Good day and thank you for standing by. Welcome to the Iris Energy FY 2025 results conference call. At this time, all participants are in a listen-only mode. Please be advised that today's conference is being recorded. After the speakers' presentation, there will be a question and answer session. To ask a question, please press star one one on your telephone and wait for your name to be announced. To withdraw your question, please press star one one again. I would now like to hand the conference over to your speaker today, Mike Power, Vice President of Investor Relations.

Speaker 0

Thank you, Operator. Good afternoon and welcome to IREN's FY 2025 results presentation. My name is Mike Power, Vice President of Investor Relations, and with me on the call today are Daniel Roberts, Co-Founder and Co-CEO, Belinda Nucifora, Chief Financial Officer, Anthony Lewis, Chief Capital Officer, and Kent Draper, Chief Commercial Officer. Please note that this call is being webcast live with a presentation; for those that have dialed in via phone, you can elect to ask a question via the moderator after our presentation. Before we begin, I would like to remind you that certain statements that we make during the conference call may constitute forward-looking statements, and IREN cautions listeners that forward-looking information and statements are based on certain assumptions and risk factors that could cause actual results to differ materially from the expectations of the company.

Listeners should not place undue reliance on forward-looking information or statements. Please refer to the disclaimer on slide 2 of the accompanying presentation for more information. Thank you, and I will now turn the call over to Daniel Roberts.

Speaker 3

Thanks, Mike. Good afternoon, everyone, and thank you for joining our FY 2025 earnings call. Today we will provide an update on our financial results for the fiscal year ended June 30 along with some operational highlights and strategic updates across our business verticals. We'll then end the call with Q&A. FY 2025 was a breakout year for us both operationally and financially. We delivered record results across the board, including 10x EBITDA growth year on year and strong net income, which Belinda will discuss shortly. Operationally, we scaled at an unprecedented pace. We increased our contracted grid-connected power by over a third to nearly 3 gigawatts and more than tripled our operating data center capacity to 810 megawatts, all at a time when power, land, and data center shortages continue to persist across the industry.

We expanded our Bitcoin mining capacity 400% to 50 exahash and in the process cemented our position as the most profitable large-scale public Bitcoin miner. At the same time, we made huge strides in AI, scaling GPU deployments to support a growing roster of customers across both training and inference workloads. We also commenced construction of Horizon 1, our first direct-to-chip liquid-cooled AI data center, and Sweetwater, our 2 gigawatt data center hub in West Texas, one of the largest data center developments in the world and a cornerstone of our future growth plans. These achievements underscore the strength of our execution and the earnings potential of our expanding data center and compute platform. We expect this momentum to carry into FY 2026 and beyond as we realize the revenue potential of our 50 exahash platform and advance our core AI growth initiatives.

Reflecting on current operations, our AI cloud business is scaling rapidly, with more than 10,000 GPUs online or being commissioned in the coming months, backed by multiple tranches of non-dilutive GPU financing at single-digit interest rates. This rollout will feature next-generation liquid-cooled NVIDIA GB300 NVL72 systems at our Prince George campus. This strengthens our position as a leading AI cloud provider and a newly designated NVIDIA Preferred Partner. In parallel, while we have paused meaningful mining expansion, our 50 exahash platform continues to generate meaningful cash flow, with over $1 billion in annualized revenue at current economics, supporting our continued growth in AI. Together, these operations are approaching annualized revenue of $1.25 billion. That's the scale we're delivering today. However, the clear visibility to continued growth ahead is something we're quite excited about.

On that note, our strategy is focused on scaling across the full AI infrastructure stack, from the grid-connected transmission line all the way down to compute in the digital world. With a strong track record building power-dense data centers and operating GPU workloads, we are uniquely positioned to serve AI customers end to end, from cloud services to turnkey colocation, capturing a broad and growing addressable market. Beyond the current expansion to 10,000 GPUs, our 160 MW of operating data center capacity in British Columbia provides a path to deploy more than 60,000 NVIDIA GB300s, with Horizon 1 then offering the potential to scale that further to nearly 80,000 as we continue to assess market demand. This gives line of sight to billions in annualized revenue from our AI cloud services business alone.

In terms of AI data centers, we're progressing three major data center projects to drive this revenue growth as well as provide scope for future expansion. At Prince George in British Columbia, we're continuing to transition existing capacity from Bitcoin mining to AI workloads, with retrofits for air-cooled GPUs and the construction of a newly announced liquid-cooled data center underway to support our GB300 NVL72 deployment. At Childress, Texas, Horizon 1 remains on schedule for Q4 2025 given strong demand signals. We've also begun site works and long lead procurement for Horizon 2, a second liquid-cooled facility. Together these projects can support over 38,000 NVIDIA GB300s. At Sweetwater, our flagship 2 GW data center hub in West Texas, Sweetwater 1 remains on track for energization in April 2026. Construction is progressing well with key long lead equipment either on site already or on order.

Upgrades to the utility substation have now commenced. In summary, we've delivered record performance this year. We've got a clear AI growth path with near-term milestones and most excitingly, we continue to position our platform ahead of the curve to monetize substantial opportunities in the AI infrastructure and compute markets. I'll now hand over to Belinda, who will walk through the FY2025 results in more detail.

Speaker 1

Thank you, Dan. Good morning to those in Sydney and good afternoon to those in North America. As noted in our recent disclosures, we completed our transition to U.S. domestic issuer status from July 1 this year, and as such we've reported our full year results for the period ended June 30, 2025, under U.S. GAAP and the required SEC regulations. For the fourth quarter of FY25, we delivered record revenue of $187 million, an increase of $42 million from the previous quarter, primarily due to record Bitcoin mining revenue of $180 million as we operate at 50 exahash. During the quarter, we also delivered AI cloud revenue of $7 million. Our Bitcoin mining business continues to perform strongly, supported by best-in-class fleet efficiency of 15 joules per terahash and low net power costs of $0.035 per kilowatt hour in Q4.

Whilst our operating expenses increased to $114 million, primarily due to overheads and depreciation costs associated with our expanded data center platform and increased Bitcoin mining and GPU hardware, we delivered a strong bottom line of $177 million. High margin revenues from our Bitcoin mining operations were a key driver of this profitability, with an all-in cash cost of $36,000 per Bitcoin mined versus an average realized price of $99,000, noting that these all-in costs incorporate expenses across our entire business, including the AI verticals. Underscoring the strength of our platform, we closed the financial year with approximately $565 million of cash and $2.9 billion in total assets, giving us a strong balance sheet to support the next stage of growth. I'll now hand back to Dan to discuss the exciting growth opportunities that continue for Iris Energy.
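
For reference, here is a rough back-of-envelope sketch of the electricity cost per Bitcoin implied by the efficiency and power figures above. The network hashrate, block subsidy, and blocks per day are illustrative assumptions (they are not from the call), and transaction fees are ignored, so treat the output as indicative only.

```python
# Illustrative check of the mining economics discussed above.
# From the call: 15 J/TH fleet efficiency, $0.035/kWh net power cost,
# ~$36,000 all-in cash cost per Bitcoin, ~$99,000 average realized price.
# Assumed for illustration: ~950 EH/s network hashrate, 3.125 BTC block
# subsidy, 144 blocks per day, transaction fees ignored.

EFFICIENCY_J_PER_TH = 15.0      # fleet efficiency (joules per terahash)
POWER_COST_PER_KWH = 0.035      # net power cost (USD per kWh)
NETWORK_HASHRATE_TH = 950e6     # assumed network hashrate (TH/s)
BTC_PER_DAY = 144 * 3.125       # assumed daily issuance (subsidy only)

# Energy a fleet at this efficiency spends per Bitcoin mined (independent of
# fleet size): efficiency * network hashrate * seconds in a day / daily BTC.
joules_per_btc = EFFICIENCY_J_PER_TH * NETWORK_HASHRATE_TH * 86_400 / BTC_PER_DAY
kwh_per_btc = joules_per_btc / 3.6e6
electricity_cost_per_btc = kwh_per_btc * POWER_COST_PER_KWH

print(f"~{kwh_per_btc:,.0f} kWh per BTC")                        # ~760,000 kWh
print(f"~${electricity_cost_per_btc:,.0f} electricity per BTC")  # ~$27,000
# The gap up to the ~$36,000 all-in figure reflects the overheads across the
# entire business that the speaker notes are included in that number.
```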

Speaker 3

Thanks, Belinda. I think it's fair to say that the market backdrop for our AI cloud business is pretty compelling. Industry reports demonstrate accelerating enterprise adoption of AI solutions and services, with the percentage of organizations leveraging AI in more than one business function growing from 55% to 78% in the last 12 months alone. As almost all of us would know, demand is accelerating faster than supply. New model development, sovereign AI programs, and enterprise adoption are driving a step up in GPU needs, and the constraint is infrastructure and compute, not customer interest. Power availability and GPU-ready, high-density data center capacity remain scarce, with customers prioritizing speed to deploy and the ability to scale. IREN is uniquely positioned to meet this demand. Our vertical integration gives us control over the key bottlenecks: significant near-term grid-connected power, with data centers engineered for next-generation power-dense compute.

This enables accelerated delivery timelines and rapid, low-risk scaling. Because we own and operate the full end-to-end stack, we are able to deliver superior customer service, with tighter control over efficiency, uptime, and service quality translating directly into a better customer experience. We are leading with a bare metal service because it gives sophisticated developers, cloud providers, and hyperscalers what they want most: direct access to compute and the flexibility to bring their own orchestration as and when customer needs evolve. We also have the flexibility to layer in software solutions to provide additional options to the customer. Our new status as an NVIDIA Preferred Partner is helpful in that regard. It enhances supply access and helps broaden our customer pipeline, supporting expansion across both existing relationships and new end users, platforms, and demand partners.

The market is large, it's accelerating, supply is constrained, and we have the platform to meet market demand for AI cloud, and to meet it reasonably quickly. That is why we're immediately scaling to more than 10,000 GPUs, but also now importantly focusing on what comes next. The 10,000 GPU expansion is underway. With it, we will be positioned at the front of the Blackwell demand curve, delivering first-to-market benefits. We saw this with our initial B200 deployment several weeks ago: upon commissioning, it was immediately contracted on a multi-year basis. Importantly, we are funding growth in a CapEx-efficient way. In the past week alone, we have secured two new tranches of financing which have funded 100% of the purchase price of new GPUs at single-digit rates. Anthony will touch on this shortly, as well as what's next in terms of revenue.

These GPUs will be delivered and progressively commissioned over the coming months, targeting $200 million to $250 million of annualized revenue by December this year. Approximately one exahash of ASICs will be displaced as a result, which we plan to reallocate to sites with available capacity, minimizing any impact to the overall 50 exahash installed hash rate. Finally, we also expect the strong margin profile of our AI cloud business to continue, underpinned by low power costs and, importantly, full ownership of our AI data centers, eliminating any third-party colocation fees from our cost base. Our Prince George campus will anchor this next phase of our AI cloud growth. As I alluded to earlier, we're pleased to announce today that construction is well underway on a new 10 MW liquid-cooled data center at Prince George, designed to support more than 4,500 Blackwell GB300 GPUs.
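
Purely as an illustration, the annualized revenue target above can be translated into an implied rate per GPU-hour. The sketch below assumes the roughly 10,900-GPU fleet size mentioned elsewhere on the call and 100% utilization; it is not company guidance.

```python
# Implied revenue per GPU-hour from the $200-250M annualized target,
# assuming ~10,900 GPUs (figure cited later on the call) at full utilization.

GPUS = 10_900
HOURS_PER_YEAR = 8_760

for annual_revenue in (200e6, 250e6):
    rate = annual_revenue / (GPUS * HOURS_PER_YEAR)
    print(f"${annual_revenue / 1e6:.0f}M per year -> ~${rate:.2f} per GPU-hour")

# Roughly $2.10-$2.60 per GPU-hour at 100% utilization; lower utilization
# would imply a higher effective contracted rate.
```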

Following this buildout, half of Prince George's capacity will be dedicated to AI cloud services. There is then clear runway to double capacity to more than 20,000 GPUs at this site alone. Procurement is also in progress to equip every GPU deployment at Prince George with backup generators and UPS systems. Beyond Prince George, our data center campuses at Mackenzie and Canal Flats create an even larger opportunity. With existing powered shells designed to the same architecture as Prince George, these sites offer a straightforward and replicable pathway to more than 60,000 GB300s. Horizon 1 and our broader portfolio of data center sites in Texas open up a further path to continued AI cloud growth. It's fair to say we're incredibly excited by the AI cloud opportunity.

It's a business line that many are simply unable to pursue due to the significant technical expertise and requirements involved. With two to three year payback periods and the low-cost GPU financing structures we are securing, we see this as a highly attractive pathway to continue compounding shareholder value. Our ability to build and operate world-class AI services all the way from the transmission line down to the compute layer uniquely positions IREN at the forefront of this digital AI transformation. Now onto the major projects driving our AI expansion. Childress continues to show strong on-the-ground momentum, with Horizon 1 construction progressing according to schedule and remaining on track for this year. As you can see in the progress photos, the data center buildings are nearing completion, and the installation of the liquid cooling plant on the south side of the halls is underway.

Based on customer feedback, we've also upgraded certain specifications, including introducing full Tier 3 equivalent redundancy across all critical power and cooling systems. Due to the expected timing gap before NVIDIA's next-generation GPUs are available, we have also reconfigured the design to accommodate a wider range of rack densities while preserving the flexibility to accommodate next-generation systems when they are available. Even with these adjustments, we expect to remain at a very competitive build cost target, reflecting the efficiencies of our in-house design, procurement, and construction model. Finally, we're also moving ahead with certain tenant scope work to de-risk delivery timelines and provide additional flexibility, including the potential to monetize the capacity via our own AI cloud service. In that regard, engagement remains active with both hyperscaler and non-hyperscaler customers across both cloud and colocation opportunities. Site visits, diligence, commercial discussions, and documentation are ongoing.

Building on this strong customer traction at Horizon 1 and general overall market momentum, we're pleased to announce that we've commenced early works and long lead procurement for Horizon 2, a potential second 50 MW IT load liquid-cooled facility at Childress. Together, Horizons 1 and 2 will have capacity to support over 38,000 liquid-cooled GB300s, creating one of the largest clusters in the U.S. market. In saying that, it's still modest compared to the capacity of our Sweetwater hub, which could support over 600,000 GB300s, and that is a good segue to Sweetwater. Both construction and commercial momentum continue to build at the 1.4 GW Sweetwater 1 site, which remains scheduled for energization in April 2026. As you can see in the progress photo, construction of the high voltage bulk substation is underway, and key long lead equipment continues to arrive at site.

On the commercial front, we're advancing discussions with prospective customers for different structures. The campus is inherently flexible by design, so we can meet demand across the entire AI infrastructure stack: powered shells for partners who want to self-operate, turnkey colocation for customers seeking speed, and cloud services for those who would like us to run it end to end. While we have a multitude of other exciting growth opportunities preceding this, Sweetwater's combination of scale, certainty, and flexibility positions it as yet another growth engine for Iris Energy in the accelerating wave of AI computing. Where do we sit today? Industry estimates call for more than 125 gigawatts of new AI data center capacity over the next five years, with hyperscale CapEx forecasts supporting the credibility of that trajectory.

Yet as most of us know, existing grid capacity is well documented as being far from sufficient to meet this demand. Against that backdrop, we have expanded our secured power capacity more than 100x since IPO. We've built over 810 megawatts of operational next-generation data centers in the process, demonstrating our ability not only to secure valuable powered land, but also to deliver next-generation data centers and compute at scale in some of the most demanding markets. It's a really exciting time for the industry and it's a really exciting time for us. With that hopefully providing a reasonably comprehensive overview of the opportunity in front of us, I'll hand over to our newly appointed Chief Capital Officer, Anthony Lewis, to discuss financing.

Speaker 2

Thanks, Dan, and good morning or good evening everyone, as the case may be. This slide highlights how we are funding growth across our AI verticals through a combination of strategic financings and strong cash flows from existing operations. The table to the right, which many of you will be familiar with, shows the illustrative cash flows from our existing Bitcoin mining operations. At the current network hash rate and a $115,000 Bitcoin price, we show over $1 billion in mining revenue, and after subtracting all costs and overheads of our entire business, we arrive at close to $650 million of adjusted EBITDA. There is then a further $200 to $250 million of annualized revenue on top of this expected to come from the AI cloud business expansion, with an increasing contribution from that business over time.
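
For reference, a short sketch reproducing the "over $1 billion" annualized mining revenue figure from the stated 50 exahash and $115,000 Bitcoin price. The network hashrate range and block subsidy are assumptions for illustration (they are not from the call), and transaction fees are ignored.

```python
# Illustrative annualized mining revenue at 50 EH/s and a $115,000 BTC price.
# Assumed for illustration: network hashrate of 900-950 EH/s, 3.125 BTC block
# subsidy, 144 blocks per day, transaction fees ignored.

FLEET_EH = 50.0
BTC_PRICE = 115_000
NETWORK_BTC_PER_DAY = 144 * 3.125   # assumed daily issuance (subsidy only)

for network_eh in (900.0, 950.0):   # assumed network hashrate range (EH/s)
    fleet_btc_per_day = (FLEET_EH / network_eh) * NETWORK_BTC_PER_DAY
    annual_revenue = fleet_btc_per_day * BTC_PRICE * 365
    print(f"{network_eh:.0f} EH/s network -> ~${annual_revenue / 1e9:.2f}B per year")

# Roughly $0.99B-$1.05B per year before fees, consistent with the >$1B figure
# in the illustrative table referenced above.
```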

There is clearly some sensitivity to the relevant assumptions here, but the key message is that we expect significant operating cash flow to invest in our growth initiatives over a range of operating conditions, with our position enhanced by low-cost power and best-in-class hardware performance. These cash flows, together with existing cash and the capital raising and financing initiatives which I'll touch on shortly, fully fund our near-term CapEx, including the cloud expansion discussed, with liquid cooling and power redundancy at Prince George, taking GPUs to 10,900, completing Horizon 1, and energizing the Sweetwater 1 substations. Let me now turn to our funding strategy more generally. As a capital-intensive business growing quickly, we are clearly focused on diversifying our sources of capital so that we maintain a resilient and efficient balance sheet. The $200 million of GPU financings we announced this week are a recent example of that.

These transactions had 100% of the upfront GPU CapEx financed, allowing us to accelerate the growth flywheel for our AI cloud business at an attractive cost of capital, and they pay down over two to three years, matching well against the accelerated paybacks on the underlying hardware. End-of-lease term options in structures like these also give us added operational flexibility. We're also seeing strong institutional demand for asset-backed and infrastructure lending in the AI sector, and with our existing portfolio of assets and the growth opportunities in front of us, we think IREN is well placed to access that capital. We are currently advancing a range of financing work streams which could support further growth. This could include further asset-backed financing, as well as project-level and corporate-level debt.

We've also proven good access to the convertible bond market with two well-supported transactions over the course of the financial year, and that remains a further source of funding potential for us. Of course, we'll also be focused on maintaining a prudent level of equity capital as we continue to scale, ensuring continued balance sheet resilience. In closing, with a foundation in strong operating cash flow from existing operations and a broad range of capital sources available to us, we feel we are well placed to fund the next stage of growth. With that, I'll now turn the call over to Q and A.

Speaker 5

Thank you. As a reminder to ask a question, please press star one one on your telephone and wait for your name to be announced. To withdraw your question, please press star one one again. One moment for questions. Our first question comes from Paul Golding with Macquarie. You may proceed.

Thanks so much. I wanted to ask about efficiency at these sites. I noticed that PUE at the British Columbia sites is down at 1.1, which is a very impressive efficiency ratio versus Sweetwater being about 1.4. Those may be peak numbers as opposed to average, but I was wondering if you could give some color around how that might influence the thought process around the rollout, or the concentration of sites receiving GPUs initially versus others, as you think about efficiency. Then also, along the lines of this infrastructure being developed with a PUE that low being cited, how are you thinking about backup generation for the existing pods that you have? I only ask that question in relation to the on-demand versus contracted customer dynamic and how you're seeing that evolve. Thank you so much.

Speaker 4

Hi Paul, happy to jump in and take that one. As you mentioned, across the British Columbia sites we're operating at a PUE of 1.1, and that's on an air-cooled basis. Once we install the liquid-cooled facilities there, we expect that to be operating on average slightly higher than that, but still well under 1.2 PUE across the year. At Childress, for the Horizon 1 liquid-cooled installation, the number that you mentioned is much closer to a peak PUE number, although we actually expect it to be less than 1.4 and the average PUE over the year to be around 1.2 in all cases. I think those are extremely competitive numbers across the industry. We are more led in terms of our deployments across the different sites by what our customers are ultimately demanding.

Within British Columbia, the ability to scale extremely quickly on an air-cooled basis has been a significant driver of demand for us, and that PUE level is extremely competitive regardless. That is where we are seeing some of the primary interest from our customer base. At Horizon 1, that liquid-cooled capacity in particular is extremely scarce in the industry at the moment, and the ability to locate a single cluster of just over 19,000 GB300s is significantly attractive and driving high levels of customer interest. So I think deployments are less driven by PUE overall and more driven by the customer side of the equation. To your question on redundancy, as Daniel Roberts mentioned in his remarks, we're introducing redundancy across the entire fleet of GPUs that we have in our existing operating business, as well as for the new GPUs that we've purchased.

While we believe that for many of the applications these clusters are used for, redundancy isn't necessarily required, we have seen some of our customers wanting that redundancy. For us, we ultimately want to be driven by providing the best customer service, and that's really what's driving us to install that redundancy across the fleet.

Thanks, Kent. If I could ask one quick follow-up on the GB300 NVL72 capability that has been incorporated or retrofitted into the original plan for Horizon 1, could you give us any incremental color around what that entailed, and any impact it may have on financing availability or future financing plans as you think about the incremental costs for that density? In particular, maybe as you plan for Rubin, given this Preferred Partner status now. Thank you.

Yes, I think what you're referring to is Dan's comments around introducing flexibility for a wider range of densities, and for us that actually comes more towards lower densities: being able to operate at densities that are under what the Vera Rubins would require. The base design we had could handle up to 200 kilowatts a rack, easily able to accommodate the next iteration of GPUs. What we're seeing in the market today is that many customers actually want flexibility to be able to operate not only at the rack densities for GB300s, which are around 135 kilowatts a rack, but actually at even lower densities to accommodate additional types of compute in the data center infrastructure. What we've done is gone back and reworked some of the electrical and mechanical equipment to be able to accommodate lower rack densities.

As it relates to accommodating Rubins in the future, you know, no change from our perspective.
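
As a quick sanity check of the rack-level arithmetic above, purely for illustration: the 72-GPU NVL72 rack configuration is taken from NVIDIA's published specifications rather than from the call.

```python
# Rack-level power implied by the densities discussed above.

GB300_RACK_KW = 135          # GB300 NVL72 rack density quoted on the call (kW)
BASE_DESIGN_KW = 200         # original Horizon 1 design limit (kW per rack)
GPUS_PER_NVL72_RACK = 72     # NVL72 configuration (assumed from NVIDIA specs)

print(f"~{GB300_RACK_KW / GPUS_PER_NVL72_RACK:.2f} kW per GPU at the rack")     # ~1.88
print(f"~{BASE_DESIGN_KW / GB300_RACK_KW:.2f}x headroom over GB300 rack load")  # ~1.48x

# Site-level power per GPU (discussed later on the call) comes out higher once
# networking, storage, and cooling overheads are layered on top.
```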

Great, thanks so much and congratulations.

Speaker 5

Thank you. Our next question comes from John Todaro at Needham. You may proceed.

Hey guys, thanks for taking my question and congrats on a very strong quarter. First question on the cloud business, and apologies if I missed this, but just the average duration of the contracts; I'm trying to determine, given the three-year payback on the GPUs and infrastructure, the overlap there with the customer contract duration. I'll also follow up on the HPC side of things.

Speaker 4

Yeah, we've got a range of contract lengths across our existing asset base today, all the way from one-month rolling contracts out to three-year contracts for the newer gen equipment, including the Blackwell purchases that we've made. We're typically seeing demand at slightly longer contract lengths whilst those Blackwells are new equipment on the market. A good indication of that is the initial portion of our B200s: as Dan mentioned, as soon as they were installed, we were able to contract them on a multi-year basis. We do have contracts across the spectrum, but for newer gen equipment we are often seeing longer-term contracts being available.

Got it. That's great. With the success you're having so far in the cloud business, you could take a step back and think, you know, do we need to sign HPC colo capacity, or would you be more comfortable continuing with this at an even bigger scale? And as it relates to the CapEx to get you there, any targeted leverage ratio or threshold on debt too?

Yeah, we're constantly evaluating the opportunities as it relates to both colocation and cloud. I think we're uniquely positioned in the sense that we are able to take advantage of both opportunities, which we think is quite differentiated versus a number of others in the industry. They obviously have very different profiles in terms of the risk-adjusted returns. Colocation has longer-dated contracts, typically in the range of 5 to 20 years, but longer payback periods, often more than 7 years before you can get your capital back. In many cases, because of the nature of the debt financing associated with those, there's very little actual cash flow coming out of the business during that debt finance period. Cloud has shorter-dated contracts but much stronger margins and a shorter overall payback period.

We typically see around 2-year payback periods on the GPUs alone and 3 to 4 years on the GPUs plus data center infrastructure. It is something that we're constantly evaluating, and overall we're looking to maximize risk-adjusted returns across both models. I think you can tell from the comments today, as it stands, we do find the cloud opportunity extremely compelling. Anthony, did you want to touch on the comments around financing?
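
A minimal sketch of the payback arithmetic being described, using normalized placeholder figures rather than actual costs; the data-center-capex-per-GPU ratio below is hypothetical and chosen only to show how adding infrastructure capex stretches the combined payback toward the 3 to 4 year range mentioned.

```python
# Simple undiscounted payback arithmetic; all inputs are hypothetical
# placeholders, not company figures.

def payback_years(capex: float, annual_net_cash_flow: float) -> float:
    """Undiscounted payback period in years."""
    return capex / annual_net_cash_flow

gpu_capex = 1.0                  # normalized GPU capex
dc_capex = 0.75                  # hypothetical data center capex per GPU (ratio)
annual_cash = gpu_capex / 2.0    # implied by a ~2-year GPU-only payback

print(payback_years(gpu_capex, annual_cash))             # 2.0 years (GPUs alone)
print(payback_years(gpu_capex + dc_capex, annual_cash))  # 3.5 years (GPUs + DC)
```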

Sure.

Speaker 2

Thanks, Kent. Yeah, I think obviously we have very modest debt servicing requirements today, and I guess as we scale the business, how those opportunities develop and the nature and security of those cash flows will ultimately determine what an appropriate level of leverage is for the business. The capital structure will continue to evolve as we continue to grow, but we'll obviously be focused on maintaining a strong and resilient balance sheet as well as an efficient cost of capital.

Understood. Thank you, gentlemen.

Speaker 5

Thank you. Our next question comes from Darren Aftahi with ROTH. You may proceed.

Hey guys, good afternoon. Thanks for taking my questions and congrats on all the progress. If I may, on Horizon 1 and 2, I guess there's commentary in the press release about what theoretically Horizon 1 could support in terms of GPUs, but you kind of left the door open that there may be other uses. I'm kind of curious on the strategic thinking there. Then on Horizon 2, I think if my math is right, you guys only have 25 MW left at Childress and you're talking about, I guess, 50 MW of critical load. Will you be borrowing from your Bitcoin mining business to get there, and are there expansion opportunities beyond that? Second question: on slide nine, you have Fluidstack as one of your demand partners.

I'm more curious on the AI cloud side and maybe that entity in particular, given one of your peers signed a deal with them and another partner there; just what the demand drivers are with Fluidstack in particular. Thanks.

Speaker 3

Thanks Darren, appreciate that. Three questions I hear in there. On Horizon 1, we mentioned just over 19,000 based on the NVL72 configuration of GB300s. The project has been engineered specifically for liquid-cooled GPUs. There is no other use case as an end market other than that. In saying that, there are a couple of different ways we might monetize that capacity. One is through different types of GPUs. As we mentioned during the presentation and Kent reiterated, we've now introduced the flexibility to accommodate a wider range of rack densities. We actually discovered building this that the issue is we're building rack densities that are too dense for where the industry is today. We've had to dial it back a little bit.

Accommodating lower rack density gives us the ability to accommodate a wider range of different GPUs whilst preserving the ability to service the Vera Rubins as and when they're released, and potentially beyond that. That's exciting in terms of monetizing the capacity. There's then colocation versus cloud. We may buy, own, and operate the 19,000 GPUs, and we're having conversations with a variety of potential partners for that, including hyperscale customers. We're progressing financing work streams in parallel. That's a real option. If the risk-return balance is right, as Kent mentioned, absolutely; we're in a unique position where not many people can build, own, and operate a cloud service. We're pursuing that and we're excited about that. Equally, we're seeing a lot of demand for colocation, and that would deliver more of an infrastructure return on capital, and we'll remain open to that structure.

We want to see a risk-return framework that is compelling, and to date, I guess we haven't yet seen that. In terms of potentially displacing additional mining capacity, you referenced 25 MW potentially being displaced from Childress. Look, that's a cost of doing business. As we said seven years ago when we started this business, Bitcoin mining will help bootstrap the business and help us build out the data center capacity. As and when higher, better value use cases come along, we have the ability and the flexibility to swap those in and monetize our data center capacity differently. In saying that, we don't envisage stopping building new data centers. We've got 2 gigawatts at Sweetwater, so it may simply be the case that we just reallocate capacity across different sites, and perhaps there's some relocation of mining capacity to Sweetwater at some point, but that's something we're working through.

Finally, Fluidstack. Yes, we've known the Fluidstack guys for quite some time. We've got a good relationship there. We speak to them, we speak to Google. We know what deals are being done, and we look at the deals. As of today, we find a three-year payback on data center and GPU infrastructure pretty compelling, particularly when Anthony's lining up 100% GPU financing at single-digit interest rates. We'll remain open to colocation opportunities, but the devil is in the detail, and the high level is not always what you end up carrying over a longer period of time. I might leave that there.

Great, thanks for the insight. Best of luck.

Speaker 5

Thank you. Our next question comes from Joseph Vafi with Canaccord Genuity. You may proceed.

Hey everyone, good morning and congrats on all the progress here in fiscal Q4 and quarter to date. Really great progress. Just really one question for me, maybe a two-part but single question. I want to drill down a little bit more on the financing on the Blackwells. I know that you mentioned there's some optionality at the end of the lease financing period. I thought maybe we could go into what you're thinking at the end of that period and what may be a factor in deciding what to do next with those GPUs. Then, just as a follow-up, it does seem like, at least initially, building your own clusters with this financing does look attractive on a payback and time-value-of-money basis.

Just wondering how much financing do you think is available in this market versus the kind of project financing that maybe yourself and others have discussed for a broader colocation type project.

Thanks a lot.

Speaker 2

Yeah, thanks for the question. In terms of the structures, you're probably familiar with the various types of leasing arrangements you can see in the market. Some of them are structured as more classic full payout finance leases. Others are more tech rotation style, where you have fixed committed lease payments and then an FMV option to acquire at the end, often capped at a percentage of the day one price. That obviously allows us the flexibility to potentially return the equipment if we wanted to reinvest in, for example, the next generation of GPUs at that time, or to continue to own and operate the equipment, depending on the conditions that we see. Sorry, could you just remind me of the second part of your question?

Just, you know, the amount of financing capacity you see out there on the GPU side versus colocation.

Yeah, I think they're obviously quite different asset profiles, and the amount of leverage and the cost of that leverage depends greatly on the specific situation. On the cloud side, there's focus on the underlying portfolio of customers, the diversity in the customer mix, the credit quality, the duration of the contracts; that will all drive both the sort of pricing and leverage that you can secure. Similarly, on the colocation side, you can obtain very attractive cost of funds and very meaningful leverage against high quality offtakes such as hyperscale offtakes. As you come down the credit spectrum or the duration of the contract, that will flow through into the cost of the finance and the leverage that you can obtain.

Speaker 3

Maybe just to add to that, Anthony, the two are not mutually exclusive, cloud and colocation, in the sense that we are arranging these 100% financing lease structures, as Anthony mentioned, over the GPUs, but that doesn't preclude us then financing the asset base and the infrastructure base at a data center level, similar to how you would finance a colocation. It just happens to be the case that the colocation partner is an internalized IREN entity. That market is open. We're talking to a vast number of potential providers of capital for that. As Anthony mentioned, we're looking up and down the entire capital stack to optimize cost of capital at a group level. You've got these asset-level options, but then you've got corporate options as well. We mentioned the buoyant convertible note market that continues to look quite prospective.

We've been prosecuting bond-type structures at a corporate level as well. There's a whole array of options, and every week, depending on the level of demand, our revenue profile, and how we're building out different elements of the business, the jigsaw puzzle from a financing perspective kind of falls into place and helps support that. It's that reflexive wheel of sources and uses of capital. That's the benefit of now having Anthony on board and dedicated full time to optimizing cost of capital while Kent runs around North America looking to deploy it.

Great. Thanks, Daniel. Thanks, Anthony.

Speaker 5

Thank you. Our next question comes from Reggie Smith with JPMorgan. You may proceed.

Hey everyone, this is Charlie on for Reggie. Thanks for taking the question. Can you talk a bit more about some of the key hires you've made in building out the cloud and colocation businesses, and where, if anywhere, there is still some room to go? As a follow up, digging in a bit more on the sales side, can you provide a bit more on how you were getting in front of and winning some of the AI clients that you called out in the slides? Thanks.

Speaker 4

Yeah, happy to jump in there on the resourcing question. We've been hiring across the stack, as Dan made clear. At the level of vertical integration that we have, we continue to need resources across all areas, including data center operations, networking, InfiniBand experts, and developers on the software side. We also continue to build out our go-to-market function. That consists of hiring additional sales executives as well as solutions architects, and we're also expanding the marketing team in parallel with that. There is an ongoing level of hiring across the business to support the additional customer-facing work that we're doing. Sorry, there was a last part to your question that I missed. It was breaking up a little.

Yeah, just more on the sales side, like how you're getting in front of a client, what are you competing on, why are they choosing Iris Energy? Things like that.

Yeah. We get a mix of inbound and outbound customer demand drivers. We have been active recently in the conference space, so we have been getting out, telling our story, and showing why we are differentiated. As I mentioned, we've been expanding the marketing team and our efforts there to help drive inbound. Our activities across social platforms in particular have been ramping over the past 12 months, and we're seeing a high degree of interest there. As that gets out into the public sphere, along with our ongoing provision of cloud services and customer word of mouth, we are starting to see more inbound inquiries as well around both our cloud services platform and the potential colocation platform. So it is a bit of a mix there in terms of what we're seeing.

Speaker 3

I think maybe just to add to that as well, this is exactly the point. The whole demand-supply equation in this industry is imbalanced. There is little supply, and on the demand side, when people need something, they tend to find it, particularly when it's scarce. Through word of mouth, demand brokers, conferences, and existing customers, word does get out. We do have three pretty unique competitive advantages compared to other competitors in AI cloud services at scale. First, we control the infrastructure end to end: we can scale capacity up and down across our existing data center footprint, let alone the new footprint building into that growth. Second, performance: vertical integration is really important because it gives us direct oversight of every single layer in the stack. We've got tighter control over performance, reliability, and service, and customers get higher uptime as a result.

There are no colocation partners and no SLAs with third-party data centers that restrain and constrain your ability to get your hands on GPUs and update them. Finally, from a cost perspective, we've got no colocation fees and greater operational efficiency as a result. We're in a really good spot. This also translates to salesforce and marketing support and general cloud support. Because we are in the industry, we are doing stuff, and we've got available capacity, there's significant interest in joining IREN, because we have capacity to sell, as distinct from other providers who have no capacity and whose salespeople are sitting there with not a lot to do.

Perfect. Thank you for the question and congrats again.

Speaker 5

Thank you. Our next question comes from Brett Knoblauch with Cantor Fitzgerald. You may proceed.

Hi guys, thanks for taking my question. Maybe on the cloud services front, is the strategy to go out and order or purchase GPUs with a customer already in mind, or are you buying these GPUs and then trying to find a customer? Could you maybe just elaborate on the power dynamics per GPU? I think the 19,000 GB300s for Horizon 1 implies it can be 380 of them per MW of critical IT load. Do you have maybe a similar metric for the B300s or B200s? If you could provide any color there, that'd be helpful as well.

Speaker 3

I might take the first half, if you want to do the second half. The question of ordering GPUs before or after a contract is the nature of the industry. When companies want compute, they want it now; they don't want to wait two to three months. Think about an enterprise that's made the decision, or an AI scale-up or startup that's raised a bunch of capital. Very few companies are in a position where they can plan out and map out a two to three year timeline of GPU needs. Often it's: we need GPUs, we need them for a project, and we need them today. The world wants on-demand compute, and we almost use this as a universal motherhood statement to guide what we do. The world doesn't really want data center infrastructure.

The world at its core wants compute, and it wants it now, when it needs it. That's the first element. The second element is, I feel like it's Groundhog Day; we're back in this world. It takes me back to Bitcoin mining, where every man and their dog promises certain amounts of capacity online by a certain date. No one does it, no one hits the schedules. Everyone revises them downwards, stretches them out, has cost blowouts, et cetera, because the real world is hard when you're dealing with large-scale infrastructure projects, large-scale workforces, complex project delivery, and safety. It takes a lot of work and systems and structures to deliver that. This is why we're in such a good position. We never missed a milestone on Bitcoin mining. We're the most profitable, if not the only profitable, Bitcoin miner because we did things properly from the start.

We're now sitting here, and as I said, it's Groundhog Day with the cloud business, where again all these companies, neoclouds and otherwise, promise capacity online by a certain date and rarely hit it, and as a result customers get a bit gun-shy. The best thing you can do is to continue ordering the hardware. If it's snapped up as soon as it's commissioned, that's a pretty good sign that you're doing the right thing. As and when we install hardware and the sales cycle starts slowing down, then, you know, okay, maybe we've just got to slow down on the orders. Each incremental order from here is a relatively small portion of our overall risk, so we can afford to take it.

Speaker 4

Thanks Dan. With respect to the power question, yes, we do continue to see the overall power usage per GPU ticking up with each incremental release from NVIDIA and the other manufacturers. Using some of the numbers presented earlier in the presentation: on an air-cooled basis for B200s, we can fit over 20,000 GPUs into the Prince George site, which is 50 MW. At Horizon 1, with 50 MW of IT load, you're looking at around 19,000 GB300s. It's not exact math, but it does give you an idea of what we're seeing in terms of the amount of power per GPU going up over time.
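
The back-of-envelope power per GPU implied by those figures is sketched below; as the answer notes, it is not exact math, since one number is site power and the other is IT load, and PUE differs between the sites.

```python
# Approximate power per GPU implied by the figures above.

PRINCE_GEORGE_MW, B200_COUNT = 50, 20_000   # air-cooled B200s (site power, MW)
HORIZON1_IT_MW, GB300_COUNT = 50, 19_000    # liquid-cooled GB300s (IT load, MW)

print(f"~{PRINCE_GEORGE_MW * 1000 / B200_COUNT:.2f} kW per B200 (site power)")  # ~2.50
print(f"~{HORIZON1_IT_MW * 1000 / GB300_COUNT:.2f} kW per GB300 (IT load)")     # ~2.63

# Consistent with per-GPU power requirements ticking up generation over
# generation, as described above.
```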

Perfect. Thank you guys, really appreciate it.

Speaker 5

Thank you. Our next question comes from Nick Giles with B. Riley Securities. You may proceed.

Yeah, hi guys, thanks for taking my questions. I wanted to go back to how the Horizon 1 capacity will be utilized as you're closing in on that 4Q completion. At what point would you make the decision to fill Horizon 1 with your own GPUs versus pursue colocation? Maybe said differently, and I think Dan alluded to this from a financing perspective, if you were to fill it with GPUs, should we expect that to be the case for the entire capacity, or could we see a mix of your own GPUs and a third party's? Thanks very much.

Speaker 4

Yeah, I think one of the advantages of where we're at is that they're not mutually exclusive options for us. As we mentioned earlier, we are in a unique position where we can monetize that data center capacity in a number of ways. It doesn't have to be ones or zeros; we don't need to do all of it as cloud or all of it as colocation. It could be a combination within Horizon 1. As Dan mentioned, we've started building out Horizon 2. Again, that gives us significant optionality, where we could potentially do Horizon 1 under one methodology and one type of monetization, and Horizon 2 under another. What we will continue to do over time is try to maximize the risk-adjusted returns for how we monetize the assets, and that may fluctuate over time.

We're in an obviously incredibly dynamic industry here, and at different points in time we may see very different risk-reward propositions in colocation versus cloud, but we do have significant flexibility as to how we utilize the capacity.

Thanks for that, Kent. You know, just on the AI cloud services, you're focused on bare metal today, but I think you did make some comments that you could expand your software offerings or integrate if needed. What should we be looking for there? What would the incremental revenue opportunities be if you were to integrate?

Today, as Dan mentioned, the vast majority of the customers that we are dealing with, which make up the majority of the compute market, are highly experienced AI players, hyperscalers, and developers. They are for the most part demanding bare metal because it actually suits them better to be able to bring their own orchestration layer. Where we see benefits over time from adding incrementally to the software layer is in being able to serve a slightly different customer class, which might be smaller AI startups or enterprise customers who are looking for a simpler single-click, spin-up, spin-down type service. Today, given the demand-supply imbalance, the bare metal offering that we have has a significant level of demand for it, and we feel like we're well positioned where we're at today.

Speaker 3

I think, again, just to push back on this notion that software is required and that these large, sophisticated end users of GPUs want a third-party provider to staple on its own software and make them use it: these guys are sophisticated; they just want compute, they want to run their own stuff, and at the end of the day, software is eating the world. We know that software is not difficult to overlay. The large customers don't want your software; they want their own software. We are also hearing firsthand from executives and employees at some of these companies that offer their own software that it's a nightmare, because every time the GPUs change, they need to update and rewrite the software.

It's this constant evolution of code bugs, rewriting, updating, et cetera, all for an area of the market that, yes, might seem good as a narrative, but is, fundamentally and substantially in terms of revenue opportunity, quite small today.

Great. Thanks for all the color, and keep up the good work.

Speaker 5

Thank you. Our next question comes from Stephen Glagola with JonesTrading. You may proceed.

Hi. Thanks for the question. Iris Energy is now recognized as an NVIDIA Preferred Partner on NVIDIA's website. I was hoping, Daniel or Kent, maybe you could provide more detail on your participation in the DGX Cloud Lepton marketplace. Specifically, how do the economics of working through the Lepton marketplace compare to operating your own independent cloud offering? What advantages does Iris Energy get from being on that platform? Any insights into NVIDIA's fee structure or take rate for participants there? Appreciate it. Thank you.

Speaker 4

Yeah, happy to give some more color there. We're not currently participating in the Lepton marketplace, but as an NVIDIA Preferred Partner, we continue to evaluate platforms like that which could expand how we're able to get customers access to our infrastructure. It may offer us broader reach into developer communities and simpler onboarding. To come back to the previous comments that I made on software, it may open up some of the smaller areas of the market with smaller AI startups and enterprise customers who are looking for a simpler solution. We continue to monitor this. We are seeing an increasing number of these types of offerings coming to market, and for us, we think it'll be an additional demand driver for the underlying compute layer that we're providing.

Thank you, Kent. If I could just ask one more: on Horizon 1, has the growth of your AI cloud services business influenced which partners you're willing to consider for colocation there, given that arguably they can be competitors?

Yeah, I mean it's something that we continue to evaluate in terms of the mix. I think what you're probably referring to is AI cloud customers on the colocation side. Now, the majority of AI cloud customers have a very different profile to hyperscalers in terms of colocation, and even within the broader colocation market, there is a significant degree of differentiation. Hyperscalers are typically looking for longer-term contracts, often 10 to 20 years, and are extremely creditworthy, but drive a hard bargain in terms of the financials and the economic returns that you're able to achieve. With AI cloud customers, we often see shorter-term requirements, typically 5 to 15 years, and they're less creditworthy than the hyperscalers. It's all something that we factor in in terms of that risk-reward element that we discussed earlier.

We have heard from a number of people asking whether the fact that we're offering a cloud service limits our ability to do colocation, and I would actually say quite the opposite. Most of the colocation customers that we're talking to significantly value the fact that we understand how to operate these clusters at scale, that we have the data center knowledge, that we know how to design data centers to operate these clusters, and that we've proved out through our own cloud service that we can operate them at a very efficient level. I don't see any kind of conflict there, and it hasn't been a particular issue for us over time.

Appreciate it, Kent, thank you.

Speaker 3

To jump in on that marketplace point as well, it hasn't really been live functionally. NVIDIA's been working through a number of items in relation to making it available. I think some of it's now live in early access, and we're in direct conversation with them about integration at the moment. It is a demand partner that we can absolutely envisage using.

Speaker 5

Thank you. Our next question comes from Ben Summers with BTIG. You may proceed.

Hey, good morning. Good afternoon, guys, and thanks for taking my question. On the colocation side, I'm just curious what went into the decision to start developing Horizon 2, and whether that was because a lot of potential customers were thinking about scaling beyond the initial 50 MW of Horizon 1. Then, kind of bigger picture, as we progress towards getting Sweetwater online, what's the different customer profile, if any, for larger-scale sites versus those just wanting 50 MW or 100 MW? And just any color on the counterparties that you're having conversations with. Thank you.

Speaker 3

We haven't committed the full CapEx to building out Horizon 2. Importantly, over the last seven years our whole business model has been built around cheap optionality. Sitting here right now, looking at the bigger picture, and I can drill into that, it just makes sense to order long lead items and start moving the ball ahead on a potential commissioning of a Horizon 2 facility. The way the CapEx S-curve works for these facilities is that you've got a long lead time and smaller cash outlays that build up over time before the larger CapEx commitments come in. It makes sense to put down deposits on long lead items and get the ball rolling so that we can maintain a really competitive, fast time to power.

For Horizon 2, sitting here today relative to three to six months ago, we're seeing further validation of the decision to commit a relatively small amount of capital. We are seeing demand take-up for AI cloud, we're seeing the number of inbounds for colocation, and we're seeing better visibility on the overall demand-supply imbalance for liquid-cooled chips. It's a bit of a no-brainer, to be honest. In terms of committing full CapEx to that, we've got time and we'll just continue to monitor the market live, because things are changing week to week in this industry. That flexibility is really important: having a governance structure that's founder-led, the ability to make quick decisions, to work with the board, and to adapt to where the market's going, because it is super dynamic.

Awesome. Thank you, guys.

Speaker 5

Thank you. I would now like to turn the call back over to Daniel Roberts for any closing remarks.

Speaker 3

Thank you very much. Thanks everyone for dialing in. It's obviously been an exciting quarter and an exciting year. We're thrilled about expanding to 10,900 GPUs in the coming months and really putting our AI cloud services further on the map. For us, most of our time is now focused on what lies beyond that. We're working hard on expanding our 3 GW power portfolio. That's exciting. That's many years away, but the 3 gigawatts was also many years away when we started seven years ago. Continuing to position ourselves ahead of the curve in every respect is just critical. It's really important when you're fighting this real-world versus digital-world imbalance, where digital demand increases overnight and goes exponential, while your ability to service that demand with real-world infrastructure and compute works in a linear fashion. It's harder, it takes longer.

The ability to preempt those digital demands and build for tomorrow, to position for tomorrow rather than where we are today, is a key competitive advantage and something we'll maintain. It manifests itself in us building 200 kilowatt racks when the industry can't support 200 kilowatt racks today; at Horizon 1, we're having to reconfigure for lower densities. We'll continue to keep that in mind. We're excited about the future. We appreciate all of your support and can't wait for the next quarterly earnings call. Thanks, everyone.

Speaker 5

Thank you. This concludes the conference. Thank you for your participation. You may now disconnect.