
Intel - Q2 2023

July 27, 2023

Transcript

Operator (participant)

Thank you for standing by, and welcome to Intel Corporation's Q2 2023 Earnings Conference Call. At this time, all participants are in listen-only mode. After the speaker's presentation, there will be a question-and-answer session. To ask a question during the session, you'll need to press star one one on your telephone. To remove yourself from the queue, simply press star one one again. As a reminder, today's program is being recorded. Now I'd like to introduce your host for today's program, Mr. John Pitzer, Corporate Vice President of Investor Relations.

John Pitzer (Corporate VP of Investor Relations)

Thank you, Jonathan. By now, you should have received a copy of the Q2 Earnings Release and Earnings Presentation, both of which are available on our investor website, intc.com. For those joining us online, the Earnings Presentation is also available in our webcast window. I am joined today by our CEO, Pat Gelsinger, and our CFO, David Zinsner. In a moment, we will hear brief comments from both, followed by a Q&A session. Before we begin, please note that today's discussion does contain forward-looking statements based on the environment as we currently see it, and, as such, these statements are subject to various risks and uncertainties. Our discussion also contains references to non-GAAP financial measures that we believe provide useful information to our investors.

Our earnings release, most recent annual report on Form 10-K, and other filings with the SEC provide more information on specific risk factors that could cause actual results to differ materially from our expectations. They also provide additional information on our non-GAAP financial measures, including reconciliations where appropriate, to our corresponding GAAP financial measures. With that, let me turn things over to Pat.

Pat Gelsinger (CEO)

Thank you, John, and good afternoon, everyone. Our strong Q2 results exceeded expectations on both the top and bottom line, demonstrating continued financial improvement and confirmation of our strategy in the marketplace. Effective execution across our process and product roadmaps is rebuilding customer confidence in Intel. Strength in client and data center, and our efforts to drive efficiencies and cost savings across the organization all contributed to the upside in the quarter and a return to profitability. We remain committed to delivering on our strategic roadmap, achieving our long-term goals, and maximizing shareholder value. In Q2, we began to see real benefits from our accelerating AI opportunity. We believe we are in a unique position to drive the best possible TCO for our customers at every node on the AI continuum.

Our strategy is to democratize AI, scaling it and making it ubiquitous across the full continuum of workloads and usage models. We are championing an open ecosystem with a full suite of silicon and software IP to drive AI from cloud to enterprise, network, edge, and client across data prep, training, and inference in both discrete and integrated solutions. As we have previously outlined, AI is one of our five superpowers, along with pervasive connectivity, ubiquitous compute, cloud-to-edge infrastructure, and sensing, underpinning a $1 trillion semiconductor industry by 2030. Intel Foundry Services, or IFS, positions us to further capitalize on the AI market opportunity, as well as the growing need for a secure, diversified, and resilient global supply chain. IFS is a significant accelerant to our IDM 2.0 strategy. Every day of geopolitical tension reinforces the correctness of our strategy.

IFS expands our scale, accelerates our ramps at the leading edge, and creates long tails at the trailing edge. More importantly, for our customers, it provides choice: leading-edge capacity outside of Asia and, at 18A and beyond, what we believe will be leadership performance. We are executing well on Intel 18A as a key foundry offering and continue to make substantial progress against our strategy. In July, we announced that Boeing and Northrop Grumman will join the RAMP-C program, along with IBM, Microsoft, and NVIDIA. Rapid Assured Microelectronics Prototypes - Commercial, or RAMP-C, is a program created by the US Department of Defense in 2021 to assure domestic access to next-generation semiconductors, specifically by establishing and demonstrating a US-based foundry ecosystem to develop and fabricate chips on Intel 18A.

RAMP-C continues to build on recent customer and partner announcements by IFS, including MediaTek, Arm, and the leading cloud, edge, and data center solutions provider. We also made good progress on two significant 18A opportunities this quarter. We are strategically investing in manufacturing capacity to further advance our IDM 2.0 strategy and overarching foundry ambitions while adhering to our Smart Capital strategy. In Q2, we announced an expanded investment to build two leading-edge semiconductor facilities in Germany, as well as plans for a new assembly and test facility in Poland. The building out of Silicon Junction in Magdeburg is an important part of our go-forward strategy, and with our investment in Poland and the Ireland sites, we already operate at scale in the region. We are encouraged to see the passage of the EU Chips Bill supporting our building out an unrivaled capacity corridor in Europe.

In addition, a year after being signed into law, we submitted our first application for U.S. CHIPS funding for the on-track construction of our fab expansion in Arizona, working closely with the US Department of Commerce. It all starts with our process and product roadmaps, and I am pleased to report that all our programs are on or ahead of schedule. We remain on track to 5 nodes in 4 years and to regain transistor performance and power performance leadership by 2025. Looking specifically at each node, Intel 7 is done, and with the second half launch of Meteor Lake, Intel 4, our first EUV node, is essentially complete with production ramping. For the remaining three nodes, I would highlight that Intel 3 met defect density and performance milestones in Q2, released PDK 1.1, and is on track for overall yield and performance targets.

We will launch Sierra Forest in first half 2024, with Granite Rapids following shortly thereafter, our lead vehicles for Intel 3. On Intel 20A, our first node using both RibbonFET and PowerVia, Arrow Lake, a volume client product, is currently running its first stepping in the fab. In Q2, we announced that we will be the first to implement backside power delivery in silicon 2-plus years ahead of the industry, enabling power savings, area efficiency, and performance gains for increased compute demands, ideal for use cases like AI, CPUs, and graphics. In addition, backside power improves ease of design, a major benefit not only for our own products, but even more so for our foundry customers. On Intel 18A, we continue to run internal and external test chips and remain on track to being manufacturing-ready in the second half of 2024.

Just this week, we were pleased to have announced an agreement with Ericsson to partner broadly on their next-generation optimized 5G infrastructure. Reinforcing customer confidence in our roadmap, Ericsson will be utilizing Intel's 18A process technology for its future custom 5G SoC offerings. Moving to products, our client business exceeded expectations and gained share yet again in Q2 as the group executed well, seeing a modest recovery in the consumer and education segments, as well as strength in premium segments where we have leadership performance. We have worked closely with our customers to manage client CPU inventory down to healthy levels. As we continue to execute against our strategic initiatives, we see a sustained recovery in the second half of the year as inventory has normalized.

Importantly, we see the AI PC as a critical inflection point for the PC market over the coming years that will rival the importance of Centrino and Wi-Fi in the early 2000s, and we believe that Intel is very well positioned to capitalize on the emerging growth opportunity. In addition, we remain positive on the long-term outlook for PCs as household density is stable to increasing across most regions and usage remains above pre-pandemic levels. Building on strong demand for our 13th-gen Intel processor family, Meteor Lake is ramping well in anticipation of a Q3 PRQ and will maintain and extend our performance leadership and share gains over the last four quarters.

Meteor Lake will be a key inflection point in our client processor roadmap as the first PC platform built on Intel 4, our first EUV node, and the first client chiplet design, enabled by Foveros advanced 3D packaging technology, delivering improved power, efficiency, and graphics performance. Meteor Lake will also feature a dedicated AI engine, Intel AI Boost. With AI Boost, our integrated neural VPU, enabling dedicated low-power compute for AI workloads, we will bring AI use cases to life through key experiences people will want and need for hybrid work, productivity, sensing, security, and creator capabilities, many of which were previewed at Microsoft's Build 2023 Conference. Finally, while we made the decision to end direct investment in our Next Unit of Computing, or NUC, business, this well-regarded brand will continue to scale effectively with our recently announced Asus partnership.

In the data center, our 4th Gen Xeon Scalable processor is showing strong customer demand despite the mixed overall market environment. I am pleased to say that we are poised to ship our 1 millionth 4th Gen Xeon unit in the coming days. This quarter, we also announced the general availability of 4th Gen cloud instances by Google Cloud. We also saw great progress with 4th Gen's AI acceleration capabilities, and we now estimate more than 25% of Xeon data center shipments are targeted for AI workloads. Also in Q2, we saw third-party validation from MLCommons when they published MLPerf training performance benchmark data, showing that 4th Gen Xeon and Habana Gaudi 2 are two strong open alternatives in the AI market that compete on both performance and price versus the competition.

End-to-end AI-infused applications like DeepMind's AlphaFold and algorithm areas such as graph neural networks show our 4th Gen Xeon outperforming other alternatives, including the best published GPU results. Our strengthening positioning within the AI market was reinforced by our recent announcement of our collaboration with Boston Consulting Group to deliver enterprise-grade, secure, and responsible generative AI, leveraging our Gaudi and 4th Gen Xeon offerings to unlock business value while maintaining high levels of security and data privacy. Our data center CPU roadmap continues to get stronger and remains on or incrementally ahead of schedule, with Emerald Rapids, our 5th Gen Xeon Scalable, set to launch in Q4 of 2023. Sierra Forest, our lead vehicle for Intel 3, will launch in first half of 2024. Granite Rapids will follow shortly thereafter. For both Sierra Forest and Granite Rapids, volume validation with customers is progressing ahead of schedule.

Multiple Sierra Forest customers have powered on their boards, and silicon is hitting all power and performance targets. Clearwater Forest, the follow-on to Sierra Forest, will come to market in 2025 and be manufactured on Intel 18A. While we performed ahead of expectations, the Q2 consumption TAM for servers remained soft on persistent weakness across all segments, particularly in the enterprise and rest of world, where the recovery is taking longer than expected across the entire industry. We see the server CPU inventory digestion persisting in the second half, additionally impacted by the near-term wallet-share focus on AI accelerators rather than general-purpose compute in the cloud. We expect Q3 server CPUs to modestly decline sequentially before recovering in Q4.

Longer term, we see AI as TAM expansive to server CPUs, and more importantly, we see our accelerator product portfolio as well positioned to gain share in 2024 and beyond. The surging demand for AI products and services is expanding the pipeline of business engagements for our accelerator products, which includes our Gaudi, Flex, and Max product lines. Our pipeline of opportunities through 2024 is rapidly increasing and is now over $1 billion and continuing to expand, with Gaudi driving the lion's share. The value of our AI products is demonstrated by the public instances of Gaudi at AWS and the new commitments to our Gaudi product line from leading AI companies such as Hugging Face and Stability AI, in addition to emerging AI leaders, including Indian Institute of Technology Madras' Pravartak and Genesis Cloud.

In addition to building near-term momentum with our family of accelerators, we continue to make key advancements in next-generation technologies, which present significant opportunities for Intel. In Q2, we shipped our test chip, Tunnel Falls, a 12-qubit silicon-based quantum chip, which uniquely leverages decades of transistor design and manufacturing investments and expertise. Tunnel Falls fabrication achieved a 95% yield rate with voltage uniformity similar to chips manufactured in a standard CMOS process, with a single 300 millimeter wafer providing 24,000 quantum dot test chips. We strongly believe our silicon approach is the only path to true cost-effective commercialization of quantum computing. A silicon-based qubit approach is a million times smaller than alternative approaches. Turning to PSG, NEX, and Mobileye, demand trends are relatively stronger across our broad-based markets like industrial, auto, and infrastructure.

Although, as anticipated, NEX did see a Q2 inventory correction, which we expect to continue into Q3. In contrast, IFS and Mobileye continue on a solid growth trajectory, and we see the collection of these businesses in total growing year-on-year in calendar year 2023, much better than third-party expectations for a mid-single-digit decline in the semiconductor market, excluding memory. Looking specifically at our programmable solutions group, we delivered record results for a third consecutive quarter. In Q2, we announced that the Intel Agilex 7 with the R-Tile chiplet is shipping production-qualified devices in volume to help customers accelerate workloads with seamless integration and the highest-bandwidth processor interfaces. We have now PRQ'd 11 of the 15 new products we expected to bring to market in calendar year 2023.

For NEX, during Q2, Intel, Ericsson and HPE successfully demonstrated the industry's first vRAN solution running on the 4th Gen Intel Xeon Scalable processor with Intel vRAN Boost. In addition, we will enhance the collaboration we announced at Mobile World Congress to accelerate industry scale Open RAN, utilizing standard Intel Xeon-based platforms as telcos transform to a foundation of programmable software-defined infrastructure. Mobileye continued to generate strong profitability in Q2 and demonstrated impressive traction with their advanced product portfolio by announcing a Supervision eyes-on, hands-off design win with Porsche and a mobility-as-a-service collaboration with Volkswagen Group that will soon begin testing in Austin, Texas. We continue to drive technical and commercial engagement with them, co-developing leading FMCW LiDAR products based on Intel Silicon Photonics technology, and partnering to drive the software-defined automobile vision that integrates Mobileye's ADAS technology with Intel's cockpit offerings.

Additionally, in the Q2, we executed the secondary offering that generated meaningful proceeds as we continue to optimize our value creation efforts. In addition to executing on our process and product roadmaps during the quarter, we remain on track to achieve our goal of reducing costs by $3 billion in 2023 and $8 billion-$10 billion exiting 2025. As mentioned during our internal foundry webinar, our new operating model establishes a separate P&L for our manufacturing group, inclusive of IFS and TD, which enables us to facilitate and accelerate our efforts to drive a best-in-class cost structure, de-risk our technology for external foundry customers, and fundamentally change incentives to drive incremental efficiencies. We have already identified numerous gains in efficiency, including factory loading, test and sort time reduction, packaging cost improvements, litho field utilization improvements, reductions in steppings, expedites, and many more.

It is important to underscore the inherent sustained value creation due to the tight connection between our business units and TD manufacturing and IFS. Finally, as we continue to optimize our portfolio, we agreed to sell a minority stake in our IMS nanofabrication business to Bain Capital, who brings a long history of partnering with companies to drive growth and value creation. IMS has created a significant market position with multi-beam mask writing tools that are critical to the semiconductor ecosystem for enabling EUV technology and is already providing benefit on our 5 nodes, 4 years efforts. Further, this capability becomes even more critical with the adoption of high NA EUV in the second half of the decade. As we continue to keep Moore's Law alive and very well, IMS is a hidden gem within Intel, and the business's growth will be exposed and accelerated through this transaction.

While we still have work to do, we continue to advance our IDM 2.0 strategy. 5 nodes in 4 years remains well on track. Our product execution and roadmap is progressing well. We continue to build out our foundry business, and we are seeing early signs of success as we work to truly democratize AI from cloud to enterprise, network, edge, and client. We also saw strong momentum on our financial discipline and cost savings as we return to profitability. We are executing our internal foundry model by 2024 and are leveraging our Smart Capital strategy to effectively and efficiently position us for the future. With that, I will turn it over to Dave.

David Zinsner (CFO)

Thank you, Pat, and good afternoon, everyone. We drove stronger than expected business results in the Q2, comfortably beating guidance on both the top and bottom line. While we expect continued improvement in global macroeconomic conditions, the pace of recovery remains moderate. We will continue to focus on what we can control, prioritizing investments critical to our IDM 2.0 transformation, prudently and aggressively managing expenses near term, and driving fundamental improvements to our cost structure longer term. Q2 revenue was $12.9 billion, more than $900 million above the midpoint of our guidance. Revenue exceeded our expectations in CCG, DCAI, IFS, and Mobileye, partially offset by continued demand softness and elevated inventory levels in the network and edge markets, which impacted NEX results. Gross margin was 39.8%, 230 basis points better than guidance on stronger revenue.

EPS for the quarter was $0.13, beating guidance by $0.17 as our revenue strength, better gross margin, and disciplined OpEx management resulted in a return to profitability. Q2 operating cash flow was $2.8 billion, up $4.6 billion sequentially. Inventory was reduced by $1 billion, or 18 days, in the quarter, and accounts receivable declined by $850 million, or 7 days, as we continue to focus on disciplined cash management. Net CapEx was $5.5 billion, resulting in adjusted free cash flow of negative $2.7 billion, and we paid dividends of $500 million in the quarter.

Our actions in the last few weeks, the completed secondary offering of Mobileye shares and the upcoming investment in our IMS Nanofabrication business by Bain Capital, will generate more than $2.4 billion of cash and help to unlock roughly $35 billion of shareholder value. These actions further bolster our strong balance sheet and investment-grade profile with cash and short-term investments of more than $24 billion exiting Q2. We'll continue to focus on avenues to generate shareholder value from our broad portfolio of assets in support of our IDM 2.0 strategy. Moving to Q2 business unit results, CCG delivered revenue of $6.8 billion, up 18% sequentially and ahead of our expectations for the quarter as the pace of customer inventory burn slowed.

As anticipated, we see the market moving toward equilibrium and expect shipments to more closely align to consumption in the second half. ASPs declined modestly in the quarter due to higher education shipments and sell-through of older inventory. CCG showed outstanding execution in Q2, generating operating profit of $1 billion, an improvement of more than $500 million sequentially on higher revenue, improved unit costs, and reduced operating expenses, offsetting the impact of pre-PRQ inventory reserves in preparation for the second half launch of Meteor Lake. DCAI revenue was $4 billion, ahead of expectations and up 8% sequentially, with the Xeon business up double digits sequentially. Data center CPU TAM contracted meaningfully in the first half of 2023.

While we expect the magnitude of year-over-year declines to diminish in the second half, a slower-than-anticipated TAM recovery in China and across enterprise markets has delayed a return of CPU TAM growth. CPU market share remained relatively stable in Q2, and the continued ramp of Sapphire Rapids contributed to CPU ASP improvement of 3% sequentially and 17% year-over-year. DCAI had an operating loss of $161 million, improving sequentially on higher revenue and ASPs and reduced operating expenses. Within DCAI, our FPGA products delivered a third consecutive quarter of record revenue, up 35% year-over-year, along with another record quarterly operating margin. We expect this business to return to more natural demand profile in the second half of the year as we work down customer backlog to normalized levels.

NEX revenue was $1.4 billion, below our expectations in the quarter, and down significantly in comparison to a record Q2 2022. Network and edge markets are slowly working through elevated inventory levels, elongated by a sluggish China recovery, while telcos have delayed infrastructure investments due to macro uncertainty. We see demand remaining weak through at least the Q3. Q2 NEX operating loss of $187 million improved sequentially on lower inventory reserves and reduced operating expenses. Mobileye continued to perform well in Q2. Revenue was $454 million, roughly flat sequentially and year-over-year, with operating profit improving sequentially to $129 million. This morning, Mobileye increased its fiscal year 2023 outlook for adjusted operating income by 9% at the midpoint.

Intel Foundry Services revenue was $232 million, up 4x year-over-year, nearly doubling sequentially on increased packaging revenue and higher sales of IMS nanofabrication tools. Operating loss was $143 million, with higher factory startup costs offsetting stronger revenue. Q2 was another strong quarter of cross-company spending discipline, with operating expenses down 14% year-over-year. We're on track to achieve $3 billion of spending reductions in 2023. With the decision to stop direct investment in our client NUC business earlier this month, we have now exited nine lines of business since Pat rejoined the company with a combined annual savings of more than $1.7 billion.

Through focused investment prioritization and austerity measures in the first half of the year, some of which are temporary in nature, OpEx is tracking $200 million better than our $19.6 billion 2023 committed goal. Turning to Q3 guidance. We expect Q3 revenue of $12.9 billion-$13.9 billion. At the midpoint of $13.4 billion, we expect client CPU shipments to more closely match sell-through. Data center, network, and edge markets continue to face mixed macro signals and elevated inventory levels in the Q3, while IFS and Mobileye are well positioned to generate strong sequential and year-over-year growth. We're forecasting gross margin of 43%, a tax rate of 13%, and EPS of $0.20 at the midpoint of revenue guidance.

We expect sequential margin improvement on higher sales and lower pre-PRQ inventory reserves. While we're starting to see some improvement in factory underload charges, most of the benefit will take some time to run through inventory and positively impact cost of sales. Investment in manufacturing capacity continues to be guided by our Smart Capital framework, creating flexibility through proactive investment in shells and aligning equipment purchases to customer demand. In the last few weeks, we have closed agreements with governments in Poland and Germany, which include significant capital incentives, and we're well positioned to meet the requirements of funding laid out by the U.S. CHIPS Act. Looking at capital requirements and offsets made possible by our Smart Capital strategy, we expect net capital intensity in the mid-30s as a percentage of revenue across 2023 and 2024 in aggregate.

While our expectations for growth CapEx have not changed, the timing of some capital offsets is uncertain and could land in either 2023 or 2024, depending on a number of factors. Having said that, we're confident in the level of capital offsets we will receive over the next 18 months and expect offsets to track to the high end of our previous range of 20%-30%. Our financial results in Q2 reflect improved execution and improving macro conditions. Despite a slower-than-expected recovery in key consumption markets like China and the enterprise, we maintain our forecast of sequential revenue growth throughout the year. Accelerating AI use cases will drive increased demand for compute across the AI continuum, and Intel is well positioned to capitalize on the opportunity in each of our business units.

We remain focused on the execution of our year and long-term product, process, and financial commitments, and the prioritization of our owners' capital to generate free cash flow and create value for our stakeholders. With that, let me turn the call back over to John.

John Pitzer (Corporate VP of Investor Relations)

Thank you, Dave. We will now transition to the Q&A portion of our earnings presentation. As a reminder, we would ask each of you to ask one question with a brief follow-up question where appropriate. With that, Jonathan, can we have the first caller, please?

Operator (participant)

Certainly. Our first question comes from the line of Ross Seymore from Deutsche Bank. Your question, please.

Ross Seymore (Managing Director)

Hi, guys. Thanks for letting me ask the question. Congrats on the strong results. I wanted to focus, Pat, on the data center, the DCAI side of things. Strong upside in the quarter, but it sounds like there's still some mixed trends going forward. I guess a two-part question: Can you talk about what drove the upside and where the concern is going forward? As part of that concern, the crowding-out potential that you just discussed with accelerators versus CPUs, how is that playing out, and when do you expect it to end?

Pat Gelsinger (CEO)

Yeah, thanks, Ross, and, you know, thanks for the, congrats on the quarter as well. I'm super proud of my team for the great execution this quarter. Top, bottom line beats, raise, you know, and just great execution across every aspect of the business, both financially as well as roadmap execution. You know, with regard to the data center, you know, obviously, the good execution, I'll just say we executed well. You know, winning designs, fighting hard, in the market, regaining our momentum, good execution. As you said, we'll see the Sapphire Rapids hit the millionth unit, in the next couple days, our Xeon Gen4. Overall, it's feeling good. Roadmap's in very good shape, so we're feeling very good about the future outlook of the business as well.

You know, as we look to 5th-gen, E-core, P-core with Sierra Forest and Granite Rapids, all of those, I'll just say, are performing well. You know, that said, we do think that the next quarter, at least, will show some softness. There's some inventory burn that we're still working through. We do see that big cloud customers in particular have put a lot of energy into building out their high-end AI training environments, and that is putting more of their budgets focused or prioritized into the AI portion of their build-out. You know, that said, we do think this is a near-term surge, you know, that we expect will balance over time. We see AI as a workload, not as a market, right?

Which will affect every aspect of the business, whether it's client, whether it's edge, whether it's standard data center, on-premise, enterprise, or cloud. You know, we're also seeing that Gen4 Xeon, then we'll be enhancing that in the future roadmap, has significant AI capabilities. As you heard in the prepared remarks, we expect about 25% today and growing, of our Gen4 is being driven by AI use cases. Obviously, we're gonna be participating more in the accelerator portion of the market with our Gaudi, Flex and Max product lines. Particularly, Gaudi is gaining a lot of momentum. In my formal remarks, we said we now have, you know, over $1 billion of pipeline, 6x in the last quarter. We're gonna participate in the accelerator portion of it.

You know, we're seeing, real opportunity for the CPU as that workload balances, over time between CPU and accelerator. Obviously, you know, we have a strong position to democratize AI across our entire portfolio of products.

John Pitzer (Corporate VP of Investor Relations)

Ross, do you have a quick follow-up?

Ross Seymore (Managing Director)

I do. I just wanted to pivot to Dave on a question on the gross margin side. Nice beat in the quarter and, and the sequential increase for the Q3 as well. Beyond the revenue increase side, which I know is important, can you just walk us through some of the pluses and minuses sequentially into the Q3 and even into the back half, some of the pre-PRQ reversals, underutilization, any of those kind of idiosyncratic blocks that we should be aware of as we think about the gross margin in the second half of the year?

David Zinsner (CFO)

Good question, Ross. In the Q2, just to repeat what I said in the prepared remarks, you know, that was largely a function of revenue. We had, obviously beat revenue significantly, and we got a good fall through, given the fixed cost nature of our business, that really was what helped us really outperform significantly on the gross margin side in the Q2. In the Q3, we do, obviously, at the midpoint, see revenue growth sequentially, that will be helpful in terms of gross margin improvement. We expect, again, pretty good fall through, as we get that incremental revenue. We're also gonna see underloadings come down, I would say modestly come down, for two reasons.

One, we get that period charge for some of our underloading, but some of our underloading is actually just a function of the cost of the inventory, so that will take some time to flow through. It'll be a modest decline, but nevertheless, helpful on the gross margin front. Then, as you point out, we will have pre-PRQ reserves in the Q3, but they're meaningfully down from the Q2. You know, Meteor Lake will not be a pre-PRQ reserve in the Q3 because we expect to launch that this quarter. You know, we have Emerald Rapids that will, that will certainly have some impact, then, you know, some of the other SKUs will also impact it. You know, coming down, but, you know, not, not to zero.

We have an opportunity actually to perform better in the Q4, you know, obviously dependent on the revenue and so forth. Given that pre-PRQ reserves are likely to come off again in the Q4, we should improve on the loading front in the Q4 as well. There's some, I think, some good tailwinds on the gross, on the gross margin front. You know, I'll just take an opportunity to talk longer term. You know, we will continue to be weighed down for some quarters on underload because of the nature of just having it cycle through inventory and then come out through cost of sale. For multiple quarters, we'll have some underloading charges that we'll, that we'll see.

As we talked about, you know, since really Pat joined and we, you know, kind of launched into the 5 nodes in 4 years, we're gonna have a significant amount of startup costs that will hit gross margins and affect us, you know, for a couple of years. We're really optimistic about where gross margins are going over the long term. You know, ultimately, we will get back to process parity and leadership, and that will enable us to not have these startup costs be a headwind. Of course, as you bring out products that perform at a high level, in terms of process and in terms of product, that shows up in our margins.

As Pat mentioned, he went through a laundry list in the prepared remarks of areas of benefit that the internal foundry model will give us. You know, we expect a pretty meaningful amount of that to come out by the time we hit 2026, but we won't be done there. I mean, I think there'll be multiple opportunities over the course of multiple years to improve the gross margin. You know, Pat has talked about a pretty significant improvement in gross margins over time. I think, you know, what we're seeing today is the beginnings of that improvement showing up in the P&L.
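The fall-through dynamic Zinsner describes, where a revenue beat drops largely through to gross margin because much of the cost base is fixed, can be sketched with a toy model. All figures below are hypothetical illustrations, not Intel disclosures:

```python
# Toy model of revenue "fall-through" in a high-fixed-cost business.
# All numbers are hypothetical illustrations, not Intel figures.

def gross_margin_pct(revenue, fixed_costs, variable_cost_ratio):
    """Gross margin % when COGS = a fixed block + a variable share of revenue."""
    cogs = fixed_costs + variable_cost_ratio * revenue
    return (revenue - cogs) / revenue * 100

# Guided quarter vs. a revenue beat: same fixed costs, only revenue changes.
guided = gross_margin_pct(revenue=12.0, fixed_costs=5.0, variable_cost_ratio=0.30)
beat = gross_margin_pct(revenue=13.0, fixed_costs=5.0, variable_cost_ratio=0.30)
print(round(guided, 1), round(beat, 1))  # 28.3 31.5 -- the beat lifts margin ~3 points
```

Because the incremental revenue in this sketch carries only 30% incremental cost, about 70% of the beat "falls through," which is why a modest revenue upside can move the gross margin percentage noticeably.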

John Pitzer (Corporate VP of Investor Relations)

Perfect. Ross, thanks for the question. Jonathan, can we have the next one, please?

Operator (participant)

Certainly. One moment for our next question. Our next question comes from the line of Joe Moore from Morgan Stanley. Your question please.

Joe Moore (Semiconductor Industry Analyst)

Great, thank you. Dave, I think you said in your prepared remarks that data center pricing was up 17% year-on-year, and that Sapphire Rapids was a factor there. Can you just talk to that and kind of, obviously, Sapphire Rapids is going to get bigger, you know, can you talk about what you expect to see with platform costs in DCAI?

David Zinsner (CFO)

Platform costs. Okay. Well, first of all, you know, ASPs are obviously improving as we increase core count. You know, as we get more competitive on the product offerings, that enables us to have more confidence in the market in terms of our pricing, so that's certainly helpful. Obviously, the increase in core count affects the cost as well; cost obviously goes up. You know, the larger drivers of our cost structure will be around, you know, what we do in terms of the internal foundry model as we get up in terms of scale and get away from these underloading charges, and as we get past the start-up costs on 5 nodes in 4 years, which, you know, data center is certainly getting hit with.

Those things, I think, longer term, will be the biggest drivers of gross margin improvement. You know, as we launch Sierra Forest in the first half of next year and Granite Rapids thereafter, and start to produce products on the data center side that are really competitive, you know, that enables us to be even stronger in terms of our margin outlook, and it should help improve the overall P&L of data center.

John Pitzer (Corporate VP of Investor Relations)

Joe, do you have a follow-up question?

Joe Moore (Semiconductor Industry Analyst)

Sure. Just also on servers, as you look to Q3, I think you talked about some of the cautious trends there. Can you talk to enterprise versus cloud? Is it different between the two? Also, you know, are you seeing anything different in China for data center versus what you're seeing in North America?

Pat Gelsinger (CEO)

Yeah, thanks for the question, Joe. You know, as we said in the prepared remarks, we do expect, you know, to be seeing the TAM down in Q3, driven a bit by all of it. It's a little bit of data center digestion for the cloud guys, a bit of enterprise weakness, you know, and some of that is more inventory. You know, the China market, I think, as has been well reported, you know, hasn't come back as strongly as people would have expected overall. Then the last factor was the one from Ross's first question, around the pressure from accelerator spend, you know, being stronger. I think those four, somewhat together, right, are leading to a bit of weakness, at least through Q3.

You know, that said, our overall position is strengthening, and we're seeing our products improve, right? We're seeing the benefits of, you know, the AI capabilities in our Gen 4 and beyond products improving. You know, we're also starting to see some of the use cases like, you know, graph neural networks and Google's AlphaFold showing best results on CPUs as well, which is increasingly gaining, you know, momentum in the industry as people look for different aspects of data preparation, data processing, you know, different innovations in AI. All of that taken together, we feel optimistic about the long-term opportunities that we have in data center, and of course, the strengthening accelerator roadmap, with Gaudi 2, Gaudi 3, and Falcon Shores now being well executed. Also, our first wafers are in hand for Gaudi 3.

You know, we see a lot of long-term optimism, even as, near term, we're working through some of the challenging environments of the market not being as strong as we would have hoped.

John Pitzer (Corporate VP of Investor Relations)

Joe, thanks for the question. Jonathan, can we have the next question, please?

Operator (participant)

Certainly. Our next question comes from the line of CJ Muse from Evercore ISI. Your question please.

CJ Muse (Senior Managing Director)

Yeah, good afternoon. Thank you for taking the question. I guess first question: in your prepared remarks, you talked about AI being a TAM expander for servers. I was hoping you could elaborate on that, given, you know, the productivity gains through acceleration. Would love, you know, to hear why you think that will grow units, and particularly if you could bifurcate your commentary across both training and inference.

Pat Gelsinger (CEO)

Yeah, you know, thanks, CJ. You know, generally, there are great analogies here from history that we point to. You know, cases like virtualization, which was going to destroy, you know, the CPU TAM, and then ended up driving new workloads, right? You know, if you think about a DGX platform, the leading-edge AI platform, it includes CPUs, right? Why? Right, head nodes, data processing, data prep, you know, dominate certain portions of the workload. You know, we also see AI as a workload where, you know, you might spend, you know, 10 MW and months training a model, you know, but then you're going to use it very broadly for inferencing.

We do see Meteor Lake ushering in the AI PC generation, where you have tens of watts, you know, responding in one or two seconds, and then AI is going to be in every hearing aid in the future, including mine, where it's, you know, 10 µW and instantaneous. You know, we do see AI driving workloads across the full spectrum of applications. For that, we're going to build AI into every product that we build. You know, whether it's a client, whether it's an edge platform, you know, for retail and manufacturing and industrial use cases, or whether it's an enterprise data center, where they're not going to stand up a dedicated 10-MW farm, but they're also not going to move, you know, their private data off premises, right?

They'll use foundational models that are available in open source, as well as in the big cloud and training environments. You know, we firmly believe this idea of democratizing AI, opening the software stack, creating and participating with this broad industry ecosystem that's emerging, is a great opportunity and one that Intel is well positioned to, you know, participate in. You know, we've seen that the AI TAM, right, is part of the semiconductor TAM. You know, we've always described this $1 trillion semiconductor opportunity, with AI being one of those Superpowers, as I call it, driving it, but it's not the only one, and one that, you know, we're going to participate in broadly across our portfolio.

John Pitzer (Corporate VP of Investor Relations)

CJ, do you have a follow-up question?

CJ Muse (Senior Managing Director)

Yeah, please. You know, you talked a little bit about 18A and backside power. Would love to hear, you know, what you're seeing today in terms of both scaling and power benefits, and how your potential foundry customers, you know, are looking at that technology in particular.

Pat Gelsinger (CEO)

Thank you. You know, so we continue to make good progress on our 5 nodes in 4 years. You know, that culminates in 18A. 18A is, you know, proceeding well, and we got a particularly good response this quarter, you know, to PowerVia, the backside power that we believe is a couple of years ahead, you know, as the industry measures it, of any other alternative in the industry. You know, we're very affirmed by the Ericsson announcement, which is, you know, reinforcing the strong belief they have in 18A.

Over and above that, you know, I mentioned in the prepared remarks the two major, significant opportunities that we made very good progress on as big 18A foundry customers this quarter, and an overall growing pipeline of potential foundry customers, test chips, and process as well. You know, we feel 5 nodes in 4 years is on track; 18A is the culmination of that, with good interest from the industry across the board. You know, I'd also say, as part of the overall strength in the foundry business as well, and maybe tying the first part and the second part of your question together, you know, that our packaging technologies are particularly interesting in the marketplace, an area where Intel never stumbled, right?

You know, this is an area of sustained leadership that we've had. Today, many of the big AI machines are packaging limited, and because of that, we're finding a lot of interest in our advanced packaging; this is an area of immediate strength for the foundry business. You know, we set up a specific packaging business unit within our foundry business, and we're finding a lot of great opportunities for us to pursue there as well.

John Pitzer (Corporate VP of Investor Relations)

CJ, thanks for the questions. Jonathan, can we have the next caller, please?

Operator (participant)

Certainly. Our next question comes from the line of Timothy Arcuri from UBS. Your question, please.

Timothy Arcuri (Managing Director)

Thanks a lot. First, Dave, I had one for you. If I look at the third-party contributions, they were down a little bit, which was a little bit of a surprise. You did say that the Arizona fab is on track. Can you sort of talk about that? I know last quarter you said gross CapEx would be first-half weighted and the offsets would be back-half weighted. Is that still the case?

David Zinsner (CFO)

Yeah. We did, you know, manage CapEx a bit better than I was hoping. We thought it would be more front-end loaded; it's looking like it's going to be a lot more evenly distributed, first half versus second half. We managed CapEx, in particular, this quarter really well, which I think, you know, obviously helped on the free cash flow side. You know, when you manage the CapEx down, you get fewer offsets, and so, you know, that kind of drove the lower capital offsets for the quarter. For the year, we're still on track to get the same amount of capital offsets through SCIP that we had anticipated, and that's really where most of the capital offsets have come from so far.

Obviously, you know, as we get into, you know, CHIPS incentives that should be coming here in the not-too-distant future, you know, that will add to the offsets that we get. As we go into next year, we start getting the investment tax credit, and that will help on the capital offsets. There'll be more things that come, you know, in the future, but right now it's largely SCIP, and it's SCIP 1, and that's a function of, you know, where the spending lands quarter to quarter.
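As a rough sketch of the mechanics Zinsner is describing, net CapEx is gross CapEx less the various capital offsets (SCIP partner contributions, CHIPS incentives, the investment tax credit). The category names and dollar amounts below are hypothetical placeholders, not disclosed figures:

```python
# Hypothetical net-CapEx arithmetic: offsets reduce gross capital spending.
# Categories and amounts are illustrative placeholders only.

def net_capex(gross_capex, offsets):
    """Net CapEx = gross CapEx minus the sum of all capital offsets."""
    return gross_capex - sum(offsets.values())

offsets = {
    "scip_partner_contributions": 4.0,  # e.g. SCIP 1, the largest offset so far
    "chips_incentives": 1.5,            # grants, as and when received
    "investment_tax_credit": 0.8,       # begins flowing the following year
}
print(round(net_capex(25.0, offsets), 1))  # 18.7
```

The point of the sketch is simply that lower gross spending in a quarter also means fewer offsets land in that quarter, which is why offsets can dip even when the full-year offset plan is unchanged.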

Pat Gelsinger (CEO)

Yeah, just maybe to pile onto that a bit. You know, obviously, getting the EU Chips Act approved, we're excited about that for the Germany and Poland projects. You know, we'll go for formal DG COMP approval. We're also very happy we submitted our first proposal, for the on-track Arizona facility, and we'll have 3 more proposals going in for the U.S. CHIPS Act this quarter. We're now at pace for those. Everything there is feeling exactly as we said it would, and we're super happy with the great engagement, both in Europe as well as with the US Department of Commerce, as we're working on those application processes.

John Pitzer (Corporate VP of Investor Relations)

Tim, do you have a follow-up question?

Timothy Arcuri (Managing Director)

I do. Yeah, Pat, you talked about an accelerator pipeline of, you know, more than $1 billion, and I think Sandra's been recently implying that you could do over $1 billion in Gaudi next year. The question is: Is that the commitment? Then also, at the, you know, data center day, you had, you know, talked about merging the GPU and the Gaudi roadmaps into Falcon Shores, but that's not going to come out until, you know, 2025. The question really there is where that leaves customers in terms of their commitment to your roadmap, given those changes?

Pat Gelsinger (CEO)

Yeah, let me take that, and Dave can add. You know, overall, you know, as we said, the accelerator pipeline is now well over $1 billion and growing rapidly, about 6x this past quarter. That's led by, but not exclusively, Gaudi. You know, that also includes the Max and Flex product lines as well. The lion's share of that is Gaudi. Gaudi 2 is, you know, the shipping volume product today. Gaudi 3 will be the volume product for next year, and then Falcon Shores in 2025, and we're already working on Falcon Shores 2 for 2026. We have a simplified roadmap as we bring together our GPU and our accelerators into a single offering. The progress that we're making with Gaudi 2, it becomes more generalized with Gaudi 3.

The software stack, our oneAPI approach, you know, that we're taking, will give customers confidence that they have forward compatibility into Gaudi 3 and Falcon Shores, and we'll just be broadening the flexibility of that software stack. You know, we're adding FP8. We just added PyTorch 2 support. Every step along the way, it gets better: broader use cases, more language models being supported, more programmability being supported in the software stack, and we're building out that full solution set as we deliver on the best of GPU and the best of matrix acceleration in the Falcon Shores timeline. Every step along the way, it just gets better. Every software release gets better, every hardware release gets better, along the way to cover more of the overall accelerator, you know, marketplace.

As I said, we now have Gaudi 3 wafers, first ones are in hand, so that program is looking very good. With this rapidly accelerating pipeline of opportunity, you know, we expect that we'll be giving you, you know, very positive updates there in the future with both customers as well as expanded business opportunities.

John Pitzer (Corporate VP of Investor Relations)

Tim, thanks for the question. Jonathan, can we have the next caller, please?

Operator (participant)

Certainly. Our next question comes from the line of Ben Reitzes from Melius Research. Your question, please.

Ben Reitzes (Partner & Head of Technology Research)

Yeah, thanks a lot. Appreciate the question. Pat, you caught my attention with your comment about PCs next year, with AI having a Centrino moment. Do you mind just talking about that? When Centrino took place, you know, it was very clear we unplugged from the wires, and investors really grasped that. What is the aha moment with AI that's gonna accelerate the client business and benefit Intel?

Pat Gelsinger (CEO)

Yeah, you know, I think, you know, the real question is what applications are gonna become AI-enabled. Today, you're starting to see that, you know, people are going to the cloud and, you know, goofing around with ChatGPT, writing a research paper, and, you know, that's, like, super cool, right? Kids are, of course, you know, simplifying their homework assignments that way. You're not gonna do that in the cloud for every client becoming AI-enabled. It must be done on the client for that to occur, right? You can't round trip to the cloud.

You know, all of the new effects, real-time language translation in your Zoom calls, you know, real-time transcription, automation, inferencing, you know, relevance portrayal, you know, generated content in gaming environments, real-time creator environments being done, you know, through the Adobes and others, all of those happening as part of the client. New productivity tools, you know, being able to do local, you know, legal brief generation on clients, one after the other, right across every aspect of consumer, you know, developer, and enterprise efficiency use cases. We see that there's gonna be a raft of AI enablement, and those will be client-centered. Those will also be at the edge. You can't round trip to the cloud.

You don't have the latency, the bandwidth, or the cost structure to round trip, let's say, inferencing in a local convenience store to the cloud. It will all happen at the edge and at the client. With that in mind, we do see this idea of bringing AI directly into the client and Meteor Lake, right, which we're bringing to the market in the second half of the year, is the first major client product that includes native AI capabilities, you know, the neural engine that we've talked about. This will be a volume delivery, you know, that we will have, and we expect that Intel, as the volume leader, you know, for the client footprint, is the one that's gonna truly democratize AI at the client and at the edge.

We do believe that this will become a driver of the TAM because people will say, "Oh, I want those new use cases. They make me more efficient and more capable, just like Centrino made me more efficient because I didn't have to plug into the wire," right? "Now, I don't have to go to the cloud to get these use cases. I'm gonna have them locally on my PC in real time and cost-effective." We see this as a true AI PC moment, you know, that begins with Meteor Lake in the fall of this year.

John Pitzer (Corporate VP of Investor Relations)

Ben, do you have a follow-up question, please?

Ben Reitzes (Partner & Head of Technology Research)

Yeah. Thanks, John. I wanted to double-click on your sequential guidance in the client business. You know, there's some concerns out there with investors that there was some demand pull-in in the Q2, given some comments from some others. Just wanted to talk about your confidence for sequential growth in that business based on what you're seeing, and if there was any more color there? Thanks.

Pat Gelsinger (CEO)

Yeah. Let me start on that, and Dave can jump in. You know, the biggest change quarter over quarter that we see is that we're now at healthy inventory levels. You know, we worked through inventory in Q4, Q1, and some in Q2. You know, we now see the OEMs and the channel at healthy inventory levels. We continue to see solid demand signals, you know, for the client business from our OEMs, and even some of the end-of-quarter and early-quarter sell-through numbers are clear indicators of, you know, good strength in that business. You know, obviously, we combine that with gaining share again in Q2, you know, so we come into the second half of the year with good momentum and a very strong product line. We feel quite good about the client business outlook.

David Zinsner (CFO)

I'd just add, you know, normally, over the last few quarters, you've seen us identify in the 10-Q strategic sales that we've made, where we've negotiated kind of attractive deals, which have accelerated demand, let's call it. When you look at our 10-Q, which will either be filed late tonight or early tomorrow, you'll see that we don't have a number in there for this quarter, which is an indication of how little we did in terms of strategic sales. To your question of did we pull in demand, I think that probably gives you a pretty good assessment of that.

John Pitzer (Corporate VP of Investor Relations)

Ben, thanks for the questions. Jonathan, can we have the next caller, please?

Operator (participant)

Certainly. Our next question comes from the line of Srini Pajjuri from Raymond James. Your question, please.

Srini Pajjuri (Managing Director and Senior Research Analyst)

Thank you. Pat, I have a question on AI as it relates to custom silicon. It's great to see that you announced a, you know, customer for 18A on custom silicon, but there's a huge demand, it seems like, you know, for custom silicon on the AI front. I think some of your hyperscale customers are already successfully using custom silicon as an AI accelerator. I'm just curious what your strategy for that market is. Is that a focus area for you? If so, do you have any engagements with customers right now?

Pat Gelsinger (CEO)

Yeah, yeah. Thank you, Srini. The simple answer is yes, and we have, you know, multiple ways to play in this market. Obviously, one of those is foundry customers. You know, we have a good pipeline of foundry customers for 18A, foundry opportunities, and several of those opportunities that we're investigating are exactly what you described: you know, people looking to do their own unique versions of their AI accelerator components, and we're engaging with a number of those. Some of those are gonna be variations of Intel standard products, and this is where the IDM 2.0 strength, you know, really comes into play, where they could be using some of our silicon, combining it with some of their silicon designs. Given our advanced packaging, you know, strength, that gives us another way to be participating in those areas.

Of course, that reinforces that some of the near-term opportunities will just be packaging, right? They already have designs with one of the other foundries, but we're going to be able to augment their capacity by immediately being able to engage on packaging opportunities, and we're seeing a pipeline of those opportunities. Overall, we agree that this is, you know, clearly going to be a market. You know, we also see that some of the ones that you've seen most in the press are about, you know, particularly high-end training environments. As you said, we see AI being infused in everything, and there's going to be AI chips for the edge, AI chips for the communications infrastructure, AI chips for sensing devices, for automotive devices.

You know, we see opportunities for us, both as a product provider and as a foundry and technology provider across that spectrum, and that's part of the unique positioning that IDM 2.0 gives us for the future.

John Pitzer (Corporate VP of Investor Relations)

Srini, do you have a follow-up question?

Srini Pajjuri (Managing Director and Senior Research Analyst)

Yeah, it's for Dave. Dave, it's good to see the progress on the working capital front. I think previously you said your expectation is that, you know, free cash flow would turn positive sometime in the second half. Just curious if that's still the expectation. Also, on the gross margin front, are there any, I guess, you know, pre-PRQ charges that we should be aware of as we go into Q4? Thank you.

David Zinsner (CFO)

Let me just take a moment to give the team credit on the Q2 in terms of working capital, because we brought inventory down by $1 billion. Our days sales outstanding on the AR front is down to 24 days, which is exceptional. A lot of what you saw in terms of the improving free cash flow from Q1 to Q2 was working capital. I think the team's done an outstanding job just really focusing on all the elements that drive free cash flow. Our expectation is still, by the end of the year, to get to break-even free cash flow. There's no reason why we shouldn't achieve that.
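For reference, the days-sales-outstanding metric behind the "24 days" comment divides receivables by revenue for the period and scales by the days in that period. The inputs below are hypothetical round numbers chosen only to land near that figure, not reported balances:

```python
# Days sales outstanding (DSO): how quickly receivables convert to cash.
# Input values are hypothetical, chosen only to illustrate the formula.

def dso(accounts_receivable, quarterly_revenue, days_in_quarter=91):
    """DSO = AR / revenue for the period * days in the period."""
    return accounts_receivable / quarterly_revenue * days_in_quarter

print(round(dso(accounts_receivable=3.4, quarterly_revenue=12.9)))  # 24
```

A lower DSO means less cash tied up in receivables, which is one of the working-capital levers behind the quarter's free cash flow improvement.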

You know, obviously, the net CapEx might be a little different this year than we thought coming into the year. As we talked about, it's just the focus on free cash flow, the improved outlook in terms of the business. You know, we think we can get to break even by the end of the year. As it relates to pre-PRQ reserves in the Q4, we're likely to have some. It should be a pretty good quarter-over-quarter improvement from the Q3, which was obviously a good quarter-over-quarter improvement from the Q2.

John Pitzer (Corporate VP of Investor Relations)

Srini, thanks for the questions. Jonathan, I think we have time for one last caller, please.

Operator (participant)

Certainly. Our final question for today then comes from the line of Aaron Rakers from Wells Fargo. Your question, please.

Aaron Rakers (Managing Director and Technology Analyst)

Yeah, thanks for taking the question. I, you know, I do have a quick follow-up as well. Just kind of going back to the gross margin a little bit. You know, Dave, when you guided this quarter, you talked about, just looking backwards, the pre-PRQ impact being about 250 basis points. I think there was also an underload impact that you guided to around 300 basis points. I'm just curious what those numbers were in this most recent quarter, relative to how we should frame the expectation going forward?

David Zinsner (CFO)

Yeah, they were largely as expected, although, you know, they were sized off of a lower revenue number. The absolute dollars were as expected; they had a little bit less of an impact, given the revenue was higher. Both of those numbers, like I said, will be lower in the Q3.
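The arithmetic here is simply that a fixed dollar charge translates into fewer basis points of margin impact when the revenue denominator comes in higher. A sketch with invented numbers:

```python
# A fixed-dollar charge shrinks in basis-point terms as revenue rises.
# Dollar and revenue values are invented for illustration only.

def charge_in_bps(charge, revenue):
    """Margin impact in basis points: charge as a fraction of revenue * 10,000."""
    return charge / revenue * 10_000

at_guided_revenue = charge_in_bps(charge=0.32, revenue=12.0)  # guidance baseline
at_actual_revenue = charge_in_bps(charge=0.32, revenue=12.9)  # same dollars, revenue beat
print(round(at_guided_revenue), round(at_actual_revenue))  # 267 248
```

Same absolute charge, higher revenue, smaller basis-point headwind, which is the relationship Zinsner is pointing to.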

John Pitzer (Corporate VP of Investor Relations)

Aaron, do you have a quick follow-up?

Aaron Rakers (Managing Director and Technology Analyst)

I do. Just real quickly on the AI narrative. You know, we talk about Gaudi a lot in the pipeline build-out. I'm curious, as you look forward, you know, as part of that pipeline, you know, Pat, do you expect to see deployments in some of the hyperscale cloud guys, competing directly against, you know, some of the large competitors on the GPU front with Gaudi in the cloud?

Pat Gelsinger (CEO)

Simple answer, yes, right? You know, everyone is looking for alternatives. You know, clearly, the MLPerf numbers that we posted recently with Gaudi 2, you know, show very competitive numbers and significant TCO benefits for customers. They're looking for alternatives. They're also looking for more capacity, and so we're definitely engaged. You know, we already have Gaudi instances on AWS, available today, and some of the names that we described in our earnings call, Stability AI, Genesis Cloud. You know, so some of these are the proven, I'll say, at-scale, tier-one cloud providers, but some of the next-generation ones are also engaging. Overall, you know, absolutely we expect that to be the case. You know, we're also on our own dev cloud.

We're making it easier for customers to test Gaudi more quickly, and with that, we now have 1,000 customers who are taking advantage of the Intel Developer Cloud. You know, we're building a 1,000-node Gaudi cluster so that they can test very large training environments at scale. Overall, you know, the simple answer is yes, very much so, and we're seeing a good pipeline of those opportunities. With that, let me just wrap up our time together today. You know, thank you. You know, we're grateful that you would join us today, and we're thankful that we have the opportunity to update you on our business. You know, simply put, it was a very good quarter. You know, we exceeded expectations on the top line and the bottom line.

We raised guidance, and we look forward to the continued opportunities that we have of accelerating our business and seeing the margin improvement that comes in the second half of the year. Even more important to me were the operational improvements that we saw, good fiscal discipline, cost-saving discipline, and best of all, the progress that we've made, right, on our execution: our process execution, product execution, the transformational journey, you know, that we're in. You know, I just want to say a big thank you to my team for having a very good quarter that we could tell you about today. We look forward to talking to you more, you know, particularly at our Innovation event in September. You know, we'll be hosting an investor Q&A track, and we hope to see many, if not all, of you there. It'll be a great time. Thank you.

Operator (participant)

Thank you, ladies and gentlemen, for your participation in today's conference. This does conclude the program. You may now disconnect. Good day!