Earnings Call Transcript

ADVANCED MICRO DEVICES INC (AMD)

For Quarter Ended: 2024-03-31

Earnings Call Transcript - AMD Q1 2024

Mitch Haws, Vice President, Investor Relations

Thank you, and welcome to AMD's First Quarter 2024 Financial Results Conference Call. By now, you should have had the opportunity to review a copy of our earnings press release and the accompanying slides. If you have not had the chance to review these materials, they can be found on the Investor Relations page of amd.com. We will refer primarily to non-GAAP financial measures during today's call, and the full non-GAAP to GAAP reconciliations are available in today's press release and the slides posted on our website. Participants on today's call are Dr. Lisa Su, our Chair and Chief Executive Officer; and Jean Hu, our Executive Vice President, Chief Financial Officer and Treasurer. This is a live call and will be replayed via webcast on our website. Before we begin, I would like to note that Mark Papermaster, Executive Vice President and Chief Technology Officer, will attend the TD Cowen Technology, Media and Telecom Conference on May 29; and Jean Hu, Executive Vice President, Chief Financial Officer and Treasurer, will attend the JPMorgan Global Media and Communications Conference on Tuesday, May 21; the Bank of America Global Technology Conference on Wednesday, June 5; and the Jefferies Nasdaq Investor Conference on Tuesday, June 11. Today's discussion contains forward-looking statements based on current beliefs, assumptions and expectations that speak only as of today and, as such, involve risks and uncertainties that could cause actual results to differ materially from our current expectations. Please refer to the cautionary statement in our press release for more information on the factors that could cause actual results to differ materially. With that, I will hand the call over to Lisa.

Lisa Su, CEO

Thanks, Mitch, and good afternoon to all those listening today. This is an incredibly exciting time for the industry as the widespread deployment of AI is driving demand for significantly more compute across a broad range of markets. Against this backdrop, we are executing very well as we ramp our data center business and enable AI capabilities across our product portfolio. Looking at the first quarter, revenue increased to $5.5 billion. We expanded gross margin by more than 2 percentage points and increased profitability as Data Center and Client segment sales each grew by more than 80% year-over-year. Data Center segment revenue grew 80% year-over-year and 2% sequentially to a record $2.3 billion. The substantial year-over-year growth was driven by the strong ramp of AMD Instinct MI300X GPU shipments and a double-digit percentage increase in server CPU sales. We believe we gained server CPU revenue share in the seasonally down first quarter, led by growth in enterprise adoption and expanded cloud deployments. In cloud, while the overall demand environment remains mixed, hyperscalers continued adopting fourth-gen EPYC processors to power more of their internal workloads and public instances. There are now nearly 900 AMD-powered public instances available globally as Amazon, Microsoft and Google all increased their fourth-gen EPYC processor offerings with new instances and regional deployments. In the enterprise, we have seen signs of improving demand as CIOs need to add more general-purpose and AI compute capacity while staying within the physical footprint and power constraints of their current infrastructure. This scenario aligns perfectly with the value proposition of our EPYC processors. Given our high core count and energy efficiency, we can deliver the same amount of compute with 45% fewer servers compared to the competition, cutting initial CapEx by up to half and lowering annual OpEx by more than 40%. 
As a result, enterprise adoption of EPYC CPUs is accelerating, highlighted by deployments with large enterprises, including American Airlines, DBS, Emirates Bank, Shell and STMicro. We're also building momentum with AMD-powered solutions for the most popular ERP and database applications. As one example, the latest generation of Oracle Exadata, the leading database solution used by 76 of the Fortune 100, is now powered exclusively by fourth-gen EPYC processors. Looking ahead, we're very excited about our next-gen Turin family of EPYC processors featuring our Zen 5 core. We're widely sampling Turin and the silicon is looking great. In the cloud, the significant performance and efficiency increases of Turin position us well to capture an even larger share of both first and third-party workloads. In addition, there are 30% more Turin platforms in development from our server partners compared to fourth-gen EPYC platforms, increasing our enterprise SAM with new solutions optimized for additional workloads. Turin remains on track to launch later this year. Turning to our broader Data Center portfolio, we delivered our second straight quarter of record data center GPU revenue as MI300 became the fastest-ramping product in AMD history, passing $1 billion in total sales in less than two quarters. In cloud, MI300X production deployments expanded at Microsoft, Meta and Oracle to power Generative AI training and inferencing for both internal workloads and a broad set of public offerings. For the enterprise, we're working very closely with Dell, HPE, Lenovo, Super Micro and others as multiple MI300X platforms enter volume production this quarter. In addition, we have more than 100 enterprise and AI customers actively developing or deploying MI300X. On the AI software front, we made excellent progress adding upstream support for AMD hardware in the OpenAI Triton compiler, making it even easier to develop highly performant AI software for AMD platforms. 
We also released a major update to our ROCm software stack that expands support for open-source libraries, including vLLM, and frameworks, including JAX, adds new features like video decode, and significantly increases generative AI performance by integrating advanced attention algorithms and support for sparsity and FP8. Our partners are seeing very strong performance in their AI workloads. As we jointly optimize for their models, MI300X GPUs are delivering leadership inferencing performance and substantial TCO advantages compared to H100. For instance, several of our partners are seeing significant increases in tokens per second when running their flagship LLMs on MI300X compared to H100. We're also continuing to enable the broad ecosystem required to power the next generation of AI systems, including as a founding member of the Ultra Ethernet Consortium, working to optimize the widely adopted Ethernet protocol to run AI workloads at data center scale. MI300 demand continues to strengthen. And based on our expanding customer engagements, we now expect data center GPU revenue to exceed $4 billion in 2024, up from the $3.5 billion we guided in January. Longer term, we are working increasingly closely with our cloud and enterprise customers as we expand and accelerate our AI hardware and software roadmaps and grow our data center GPU footprint. Turning to our Client segment, revenue was $1.4 billion, an increase of 85% year-over-year, driven by strong demand for our latest-generation Ryzen mobile and desktop processors with OEMs and in the channel. Client segment revenue declined 6% sequentially. We saw strong demand for our latest-generation Ryzen processors in the first quarter. Ryzen desktop CPU sales grew by a strong double-digit percentage year-over-year, and Ryzen mobile CPU sales nearly doubled year-over-year as new Ryzen 8040 notebook designs from Acer, Asus, HP, Lenovo and others ramped. 
We expanded our portfolio of leadership enterprise PC offerings with the launch of our Ryzen Pro 8000 processors earlier this month. Ryzen Pro 8040 mobile CPUs deliver industry-leading performance and battery life for commercial notebooks. And our Ryzen Pro 8000 series desktop CPUs are the first processors to offer dedicated, on-chip AI accelerators in commercial desktop PCs. We see clear opportunities to gain additional commercial PC share based on the performance and efficiency advantages of our Ryzen Pro portfolio and an expanded set of AMD-powered commercial PCs from our OEM partners. Looking forward, we believe the market is on track to return to annual growth in 2024, driven by the start of an enterprise refresh cycle and AI PC adoption. We see AI as the biggest inflection point in PCs since the Internet, with the ability to deliver unprecedented productivity and usability gains. We're working very closely with Microsoft and a broad ecosystem of partners to enable the next generation of AI experiences, powered by Ryzen processors, with more than 150 ISVs on track to be developing for AMD AI PCs by the end of the year. We will also take the next major step in our AI PC roadmap later this year with the launch of our next-generation Ryzen mobile processors codenamed Strix. Customer interest in Strix is very high based on the significant performance and energy efficiency uplifts we are delivering. Now turning to our Gaming segment. Revenue declined 48% year-over-year and 33% sequentially to $922 million. First quarter semi-custom SoC sales declined in line with our projections as we are now in the fifth year of the console cycle. In Gaming Graphics, revenue declined year-over-year and sequentially. We expanded our Radeon 7000 Series family with the global launch of our Radeon RX 7900 GRE and also introduced our driver-based AMD Fluid Motion Frames technology that can provide large performance increases in thousands of games. Turning to our Embedded segment. 
Revenue decreased 46% year-over-year and 20% sequentially to $846 million as customers remain focused on normalizing their inventory levels. We launched our Spartan UltraScale+ FPGA family with high I/O counts, power efficiency and state-of-the-art security features, and we're seeing a strong pipeline of growth for our cost-optimized embedded portfolio across multiple markets. Given the current embedded market conditions, we're now expecting second quarter Embedded segment revenue to be flat sequentially, with a gradual recovery in the second half of the year. Longer term, we see AI at the edge as a large growth opportunity that will drive increased demand for compute across a wide range of devices. To address this demand, we announced our second generation of Versal adaptive SoCs that deliver a 3x increase in AI TOPS per watt and a 10x greater scalar compute performance compared to our prior generation of industry-leading adaptive SoCs. Versal Gen 2 adaptive SoCs are the only solution that combines multiple compute engines to handle AI preprocessing, inferencing and post-processing on a single chip, enabling customers to rapidly add highly performant and efficient AI capabilities to a broad range of products. We were pleased to be joined at our launch by Subaru, who announced they adopted Versal AI Edge Series Gen 2 devices to power the next generation of their EyeSight ADAS system. Embedded design win momentum remains very strong as customers adopt our full portfolio of FPGAs, CPUs, GPUs and adaptive SoCs to address a larger portion of their compute needs. In summary, we executed well in the first quarter, setting us up to deliver strong annual revenue growth and expanded gross margin, driven by growing adoption of our Instinct, EPYC and Ryzen product portfolios. 
Our priorities for 2024 are very clear: accelerate our Data Center growth by ramping Instinct GPU production and gaining share with our EPYC processors; launch our next-generation Zen 5 PC and server processors that extend our leadership performance; and expand our adaptive computing portfolio with differentiated solutions. Looking further ahead, AI represents an unprecedented opportunity for AMD. While there has been significant growth in AI infrastructure build-outs, we are still in the very early stages of what we believe is going to be a period of sustained growth, driven by an insatiable demand for both high-performance AI and general-purpose compute. We have expanded our investments across the company to capture this large growth opportunity, from rapidly expanding our AI software stack to accelerating our AI hardware road maps, increasing our go-to-market activities and partnering closely with the largest AI companies to co-optimize solutions for their most important workloads. We are very excited about the trajectory of the business and the significant growth opportunities ahead. Now I'd like to turn the call over to Jean to provide some additional color on our first quarter results. Jean?

Jean Hu, CFO

Thank you, Lisa, and good afternoon, everyone. I'll start with a review of our financial results and then provide our current outlook for the second quarter of fiscal 2024. We delivered strong year-over-year revenue growth in our Data Center and Client segments in the first quarter and delivered 230 basis points of gross margin expansion. For the first quarter of 2024, revenue was $5.5 billion, up 2% year-over-year as revenue growth in the Data Center and the Client segments was partially offset by lower revenue in our Gaming and Embedded segments. Revenue declined 11% sequentially as higher Data Center revenue resulting from the ramp of our AMD Instinct GPUs was offset by lower Gaming and Embedded segment revenues. Gross margin was 52%, up 230 basis points year-over-year, driven by higher revenue contribution from the Data Center and Client segments, partially offset by lower Embedded and Gaming segment revenue contribution. Operating expenses were $1.7 billion, an increase of 10% year-over-year, as we continued investing aggressively in R&D and marketing activities to address the significant AI growth opportunities ahead of us. Operating income was $1.1 billion, representing a 21% operating margin. Taxes, interest expense and other was $120 million. For the first quarter of 2024, diluted earnings per share was $0.62, an increase of 3% year-over-year. Now turning to our reportable segments, starting with the Data Center. Data Center delivered record quarterly segment revenue of $2.3 billion, up 80%, a $1 billion increase year-over-year. Data Center accounted for more than 40% of total revenue, primarily led by the ramp of AMD Instinct GPUs from both cloud and enterprise customers and strong double-digit percentage growth in our server processor revenue as a result of growth across our EPYC portfolio. On a sequential basis, revenue increased 2%, driven by the ramp of our AMD Instinct GPUs, partially offset by a seasonal decline in server CPU sales. 
Data Center segment operating income was $541 million or 23% of revenue compared to $148 million or 11% a year ago. Operating income was up 266% year-over-year due to operating leverage even as we significantly increased our investment in R&D. Client segment revenue was $1.4 billion, up 85% year-over-year, driven primarily by Ryzen 8000 series processors. On a sequential basis, Client revenue declined 6%. Client segment operating income was $86 million or 6% of revenue compared to an operating loss of $172 million a year ago, driven by higher revenue. Gaming segment revenue was $922 million, down 48% year-over-year and down 33% sequentially due to a decrease in semi-custom and Radeon GPU sales. Gaming segment operating income was $151 million or 16% of revenue compared to $314 million or 18% a year ago. Embedded segment revenue was $846 million, down 46% year-over-year and 20% sequentially as customers continue to manage their inventory levels. Embedded segment operating income was $342 million or 41% of revenue compared to $798 million or 51% a year ago. Turning to the balance sheet and cash flow. During the quarter, we generated $521 million in cash from operations, and free cash flow was $379 million. Inventory increased sequentially by $301 million to $4.7 billion, primarily to support the continued ramp of data center and client products in advanced process nodes. At the end of the quarter, cash, cash equivalents and short-term investments were $6 billion. As a reminder, we have $750 million of debt maturing this June. Given our ample liquidity, we plan to retire that debt using existing cash. Now turning to our second quarter 2024 outlook. We expect revenue to be approximately $5.7 billion, plus or minus $300 million. 
Sequentially, we expect Data Center segment revenue to increase by a double-digit percentage, primarily driven by the Data Center GPU ramp; Client segment revenue to increase; Embedded segment revenue to be flat; and in the Gaming segment, based on current demand signals, revenue to decline by a significant double-digit percentage. Year-over-year, we expect our Data Center and Client segment revenue to be up significantly, driven by the strength of our product portfolio; and Embedded and Gaming segment revenue to decline by a significant double-digit percentage. In addition, we expect second quarter non-GAAP gross margin to be approximately 53%; non-GAAP operating expenses to be approximately $1.8 billion; non-GAAP effective tax rate to be 13%; and the diluted share count is expected to be approximately 1.64 billion shares. In closing, we started the year strong. We made significant progress on our strategic priorities, delivering year-over-year revenue growth in our Data Center and Client segments and expanding gross margin. Looking ahead, we believe the investments we are making will position us very well to address the large AI opportunities ahead.

Mitch Haws, Vice President, Investor Relations

Thank you, Jean. Paul, we're happy to poll the audience for questions.

Toshiya Hari, Analyst

Lisa, my first question is on the MI300. You're taking up the full year outlook from $3.5 billion to $4 billion. I'm curious what's driving that incremental $500 million in revenue? Is it new customers? Is it additional bookings from existing customers? Is it more cloud? Is it more enterprise? If you could provide color there, that would be helpful. And then on the supply side, there's been headlines or chatter that CoWoS and/or HBM could be a pretty severe constraining factor for you guys. If you can speak to how you're handling the supply side of the equation, that would be helpful, too. And then I have a quick follow-up.

Lisa Su, CEO

Thank you, Toshiya, for the question. The MI300 ramp is progressing very well. Over the past 90 days, we've closely collaborated with our customers to qualify MI300 in their production data centers, focusing on both hardware and software aspects. Results have been positive so far, and we currently see increased interest from both existing and new customers committing to MI300. This encourages us to raise our forecast from $3.5 billion to $4 billion. The market is dynamic, and we are engaging with over 100 customers in both development and deployment. Overall, the ramp is strong. Regarding the supply chain, I am pleased with how supply has increased. This is the fastest product ramp we have undertaken, despite the complexity involved with chiplets, CoWoS, 3D integration, and HBM. We have received great support from our partners, and we outperformed our initial expectations for the last quarter. I believe Q2 will also see significant ramping. We plan to increase supply every quarter this year. While we are currently tight on supply, there is strong demand for the product. We will continue to address these aspects throughout the year. Overall, I am very satisfied with how both demand and supply are progressing.

Toshiya Hari, Analyst

I would like to hear about your Data Center GPU roadmap beyond the MI300. We notice that your nearest competitor has been quite open about their roadmap extending into 2025 and often into 2026. Maybe this isn’t the best time to share too much, but how should we view your roadmap and your competitiveness in the Data Center beyond the MI300?

Lisa Su, CEO

Yes, sure. So look, Toshiya, when we start with the roadmap, I mean, we always think about it as a multi-year, multigenerational roadmap. So we have the follow-ons to MI300 as well as the next, next generations well in development. I think what is true is we're getting much closer to our top AI customers. They're actually giving us significant feedback on the roadmap and what we need to meet their needs. Our chiplet architecture is actually very flexible. And so that allows us to actually make changes to the roadmap as necessary. So we're very confident in our ability to continue to be very competitive. Frankly, I think we're going to get more competitive. Right now, I think MI300X is in a sweet spot for inference, very, very strong inference performance. I see as we bring in additional products later this year into 2025, that, that will continue to be a strong spot for us. And then we're also enhancing our training performance and our software roadmap to go along with it. So more details to come in the coming months, but we have a strong roadmap that goes through the next couple of years, and it is informed by just a lot of learning in working with our top customers.

Ross Seymore, Analyst

The non-AI side of the Data Center business, it sounds like the enterprise side has some good traction even though the sequential drop happened seasonally, Lisa. But I was just wondering what's implied in your second quarter guidance for the Data Center CPU side of things? And generally speaking, how are you seeing that whole kind of GPU versus CPU crowding out dynamic playing out for the rest of 2024?

Lisa Su, CEO

Yes, absolutely, Ross, I appreciate the question. Our EPYC business has actually been performing quite well. The market is somewhat mixed, with some cloud customers still working on their optimizations. It varies by customer. In the first quarter, we observed some very promising early indicators in the enterprise segment, with large customers beginning refresh programs. The value proposition of Genoa is exceptionally strong, and we are noticing that it is being embraced across the enterprise. In the second quarter, we anticipate that overall Data Center growth will be strong, with double-digit increases. Additionally, we expect server CPU revenue to be up as well. As we approach the second half of the year, we foresee a couple of factors driving growth. We expect improved overall market conditions for the server business, and our Turin launch in the latter half of the year is also expected to strengthen our leadership in the server market. Overall, I believe the business is performing well, and we are confident that we will continue to be well-positioned to capture market share throughout the year.

Matthew Ramsay, Analyst

Lisa, I have a longer-term question followed by a shorter-term follow-up. One question I've been hearing frequently is about your primary competitor announcing a multi-year roadmap. We also keep hearing from others about internal ASIC programs at some of your main customers, whether for inference, training, or both. It would be really helpful if you could discuss how your conversations with those customers go, their level of commitment to your long-term multigeneration roadmap, how they weigh investing in their internal silicon versus using a supplier like you, and what advantages your experience across a wide customer base might provide that those focusing on internal ASICs may not have.

Lisa Su, CEO

Yes, Matt, thank you for the question. One thing we have observed is that the total addressable market for AI compute is expanding rapidly, and we see this trend continuing in all our discussions. We've previously mentioned a TAM of around $400 billion by 2027, which some considered ambitious at that time. However, the demand for AI compute from our customers remains very robust, as evidenced by recent announcements from large cloud companies. We maintain strong relationships with major AI companies, aiming to innovate collectively. When considering large language models and the requirements for training and inference, there will be a variety of solutions; no single solution will fit all needs. The GPU remains the favored architecture, particularly as algorithms and models advance, which benefits our architecture and our capability to optimize CPU with GPU. I believe we are in a strong position with our partnerships and see a significant opportunity for collaborative innovation. There is a strong commitment to work together over several years, which is a reflection of the successes we have achieved previously, including our work on the EPYC roadmap.

Matt Ramsay, Analyst

Lisa, as a follow-up, there's been consistent noise around the stock price, whether it was $2 or $200, but the last month and a half has been particularly intense. I've received various reports about changes in demand from some of your MI300 customers or their planned consumption of your product. I know you addressed the supply situation and your collaboration with partners earlier. However, has there been any update from the customers you are currently ramping up with or those you will soon be working with regarding their demand intentions? Alternatively, has their demand perhaps even increased in recent times since I keep getting inquiries about it?

Lisa Su, CEO

Sure, Matt. Look, I think I might have said it earlier, but maybe I'll repeat it again. I think the demand side is actually really strong. And what we see with our customers and what we are tracking very closely is customers moving from, let's call it, initial POCs to pilots to full-scale production to deployment across multiple workloads. And we're moving through that sequence very well. I feel very good about the deployments and ramps that we have ongoing right now. And I also feel very good about new customers who are sort of earlier on in that process. So from a demand standpoint, we continue to build backlog as well as build engagements going forward. And similarly, on the supply standpoint, we're continuing to build supply momentum. But from a speed of ramp standpoint, I'm actually really pleased with the progress.

Aaron Rakers, Analyst

I apologize if I missed this earlier, but I know last quarter, you mentioned securing enough capacity to support significant growth in the ramp of the MI300. I understand you've raised your guidance to $4 billion. I'm curious how you would describe the supply in relation to the context provided last quarter as we consider this new target. Would you say there is still potential for supply capacity growth?

Lisa Su, CEO

Yes, Aaron. So we've said before that our goal is to ensure that we have supply that exceeds the current guidance, and that is true. So as we've upped our guidance from $3.5 billion to $4 billion, we have supply visibility significantly beyond that.

Aaron Rakers, Analyst

Yes. Okay. And then as a quick follow-up, going back to an earlier question on server demand, more traditional server. As you see the ramp of maybe share opportunities in more traditional enterprise, I'm curious how you would characterize the growth that you expect to see in the more traditional server CPU market as we move through '24, or even longer term, how you'd characterize that growth trend?

Lisa Su, CEO

Yes, I think there is definitely a need to refresh older equipment, and we anticipate a refresh cycle ahead. Additionally, we see growth opportunities in AI head nodes within the more traditional server market. Our focus on delivering high performance, high core counts and energy efficiency is progressing well. Historically, we've been strong in cloud first-party workloads, and now this is expanding to cloud third-party workloads as enterprises in hybrid environments adopt AMD solutions both in the cloud and on-premises. Overall, we view this as a positive trajectory for our server business as we head into 2024 and beyond.

Vivek Arya, Analyst

Lisa, I just wanted to go back to the supply question and the $4 billion outlook for this year. I think at some point, there was a suggestion that the $4 billion number, right, that there are still supply constraints. But I think at a different point, you said that you have supply visibility significantly beyond that. Given that we are almost at the middle of the year, I would have thought that you would have much better visibility about the back half. So is the $4 billion number a supply-constrained number, or is it a demand-constrained number? Or alternatively, if you could give us some sense of what the exit rate of your GPU sales could be. I think on the last call, $1.5 billion was suggested. Could it be a lot more than that in terms of your exit rate of MI for this year?

Lisa Su, CEO

Yes, let me clarify this question. Our $4 billion target for the year is not limited by supply. We have the capability to supply more than that, but the availability is more weighted toward the second half of the year. In the near term, particularly in the second quarter, demand is currently outpacing supply, and we are actively working to increase our supply. This is an industry-wide issue, not specific to our company. AI demand for 2024 has surpassed expectations, as indicated by various stakeholders in the industry. Everyone is increasing capacity as we progress through the year. Regarding visibility, we have good insight into the current situation, and we are engaged with our customers. My aim is to ensure we meet all the milestones as we ramp up our products. As we achieve these milestones, we will incorporate that information into our full-year guidance for AI. Customer engagement is progressing well, and we continue to onboard new customers and expand existing workloads. I hope that clarifies your question.

Jean Hu, CFO

Vivek, thank you for the question. I think the Embedded business declined a little bit more than expected, really due to weaker demand in some of the markets; very specifically, communications has been weak, as have some pockets of industrial and automotive. As you mentioned, it's actually quite consistent with the peers. We do think the first half is the bottom for the Embedded business, and we will start to see a gradual recovery in the second half. And going back to your gross margin question, when you look at our gross margin expansion in both Q1 and the Q2 guide, the primary driver is the strong performance on the Data Center side. The Data Center will continue to ramp in the second half, and I think that will continue to be the major driver of gross margin expansion in the second half. Of course, if Embedded is doing better, we'll have more of a tailwind in the second half.

Timothy Arcuri, Analyst

I also wanted to ask about your data center GPU roadmap. The customers that we talk to say that they're engaged, not just because of MI300, but really because of what's coming. And it seems like there's a big demand shift to rack scale systems that try to optimize performance per square foot given some of the data center and power constraints. So can you just talk about how important systems are going to be in your roadmap? And do you have all the pieces you need as the market shifts to rack scale systems?

Lisa Su, CEO

Yes, sure, Timothy. Thanks for the question. For sure, look, our customers are engaged in the multigenerational conversation. So we're definitely going out over the next couple of years. And as it relates to the overall system integration, it is quite important. It is something that we're working very closely with our customers and partners on. That includes a significant investment in networking, working with a number of networking partners as well to make sure that the scale-out capability is there. And to your question of do we have the pieces? We absolutely have the pieces: the work that we've done with our Infinity Fabric, as well as our Pensando acquisition, has brought in a lot of networking expertise. And then we're working across the networking ecosystem with key partners like Broadcom and Cisco and Arista, who were with us at our AI data center event in December. So our work right now on future generations is not just specifying a GPU, it is specifying, let's call it, full system reference designs. And that's something that will be quite important going forward.

Timothy Arcuri, Analyst

And then just as a quick follow-up. I know this year it looks like it's going to be pretty back-half loaded in your server CPU business, just like it was last year. I know you kind of held our hands at about this time last year sort of on what the full year could look like and how back-end loaded it could be. So I kind of wonder, could you give us some milestones in terms of how much server CPU could grow this year, how back-end loaded it could be? Is it like up 30% this year for your server CPU business year-over-year? Is that a reasonable bogey? I just wonder if you can kind of give us any guidance on that piece of the business?

Lisa Su, CEO

Yes. I mean, I think, Tim, I think the best way to say it is our Data Center segment is on a very, very strong ramp as we go through the back half of the year. Server CPUs, certainly, Data Center GPUs, for sure. So I don't know that we're going to get into specifics, but I could say, in general, you should expect overall at the segment level to be very strong double digits.

Joseph Moore, Analyst

I wonder if you could address the profitability of MI300. I know you said a couple of quarters ago that it would eventually be above corporate average, but it would take you a few quarters to get there. Can you talk about where you are in that?

Jean Hu, CFO

Yes. Thank you, Joe. Our team has done an incredible job ramping MI300. As you probably know, it's a very complex product, and we are still in the first year of the ramp. Improvements in yield, test time and process are still ongoing. We do think that over time, the gross margin should be accretive to the corporate average.

Lisa Su, CEO

Yes. Joe, I think from what we see, look, I think Turin is the same platform, so that does make it an easier ramp. I do think Genoa and Turin will coexist for some amount of time because customers are deciding when they're going to bring out their new platforms. We expect Turin to give us access to a broader set of workloads, so our SAM actually expands with Turin, both in enterprise and cloud. And from our experience, I think you'll see a faster transition than, for example, when we went from Milan to Genoa.

Stacy Rasgon, Analyst

For my first one, I wanted to address the MI300 ramp into Q2. So you said you've done $1 billion, give or take, in cumulative sales, which puts it at maybe, I don't know, maybe $600 million in Q1. You're guiding total revenues up about $225 million into Q2, but you've got Client up, you've got traditional Data Center up, you've got Embedded flat. Gaming is going to be down, but I'd hazard a guess that Client and traditional Data Center offset it, if not more. So is the MI300 ramp into Q2 more or less than the total corporate ramp that you've got built into guidance right now?

Jean Hu, CFO

Stacy, thanks for the question. You always ask a math question. So I think, in general, it is more. The Data Center GPU ramp will be more than the overall company's $200-some million ramp.

Stacy Rasgon, Analyst

Okay. So that means Gaming must be down a lot, right, even with Client performing well.

Jean Hu, CFO

Yes, you're correct. The Gaming sector is experiencing a decline similar to the first quarter. To provide some insight into the Gaming business, the demand has been fairly weak, and inventory levels are also affected. Based on what we see, both the first and second quarters are projected to be down sequentially by over 30%. We anticipate that the second half will be lower than the first half for the Gaming business. Furthermore, Gaming's gross margin is not meeting our company's average, which will impact the overall gross margin mix. So, you are right, Gaming in Q2 is significantly down.

Stacy Rasgon, Analyst

Got it. That's helpful. For my second question, I wanted to look at the near-term Data Center profitability. So operating profit was down 19% sequentially on 2% revenue growth. Is that just the margins of the GPUs filtering in relative to the CPUs? And I know you said GPUs would eventually be above corporate average. Are they below the CPU average? I mean they clearly are, I guess, in the near term, but are they going to stay that way?

Jean Hu, CFO

Yes, you're correct. The GPU gross margin is currently lower than the Data Center gross margin. There are two main reasons for this. The primary reason is that we have significantly increased our investment to expand and accelerate our AI roadmap, which is one of the key factors contributing to a slight decline in our operating income. Regarding your question about gross margin, we have previously stated, and still believe, that over time, the gross margin for Data Center GPUs will exceed the corporate average. However, it will take some time to reach the gross margin levels seen in Servers.

Harlan Sur, Analyst

On your Data Center GPU segment and the faster time to production shipments, given you just upped your full year GPU outlook, how much of it is faster bring-up of your customers' frameworks driven by your latest ROCm software platform and maybe stronger collaboration with your customers' engineers just to get them to qual faster? And how much of it is just a more aggressive build-out plan by customers versus their prior expectations, given what appears to be a pretty strong urgency for them to move forward with their important AI initiatives?

Lisa Su, CEO

Yes. Harlan, thank you for the question. What it really is, is both us and our customers feeling confident in broadening the ramp. Because if you think about it, first of all, the ROCm stack has done really well, and the work that we're doing is hand in hand with our customers to optimize their key models. It was important to get verification and validation that everything would run well, and we've now passed some important milestones in that area. And then I think the other thing is, as you said, there is huge demand for more AI compute, so our ability to participate in that and help customers get up and running is great. So I think, overall, as we look at it, this ramp has been very, very aggressive relative to where we were just a quarter ago. Each of these is a pretty complex bring-up, and I'm very happy with how they've gone. And by the way, we're only sitting here in April, so there's still a lot of 2024 to go, and there's great customer momentum in the process.

Harlan Sur, Analyst

Yes, absolutely. Just rewinding back to the March quarter. Similar to the PC Client business, which declined at the low end of the seasonal range, if I make certain assumptions around your Data Center GPU business and back that out of Data Center, it looks like your Server CPU business was also down at the lower end of the seasonal range. By my math, it was down like 5%, 6% sequentially. Is that right? And that's less than half the decline of your competitor. If so, what drove the less-than-seasonal declines? I assume some of it was share gains. It sounds like Enterprise was also better, and it looks like you did drive a little bit more cloud instance adoption, but was there anything else that drove the slightly better seasonal pattern in March for Data Center server?

Jean Hu, CFO

Yes. Harlan, this is Jean. I think the Server business has been performing really well. Year-over-year, it actually increased a very strong double digit. I think, sequentially, it is more seasonal, but we feel pretty good about continuing to gain share there.

Lisa Su, CEO

Yes. To add to your question, we did observe strength in enterprise during the first quarter, which helped counterbalance some of the typical seasonal trends.

Thomas O'Malley, Analyst

I just wanted to ask about the competitive environment. Obviously, on the CPU side, you had a competitor talk about launching a high core count product in the coming quarter, kind of ramping now and more so into Q3. You've seen really good pricing tailwinds as a function of the higher core counts. Can you talk about what you're seeing in that market? Do you think that there's any risk of more aggressive pricing, which would impact your ASP ramp for the rest of the year?

Lisa Su, CEO

Yes. When we examine our server CPU average selling prices, they are quite stable. We are particularly focused on higher core counts. Overall, I would say the pricing environment remains stable. This relates to total cost of ownership for our customers, as well as our performance and energy efficiency, which typically results in a cost advantage for our customers. I think it's very important to say we are very supportive of the open ecosystem, and we're very supportive of the Ultra Ethernet Consortium. But I don't believe that is a limiter to our ability to build large-scale systems. I think Ethernet is something that many in the industry feel will be the long-term answer for networking in these systems, and we have a lot of work that we're doing internally as well as with our customers and partners to enable that.

Harsh Kumar, Analyst

Lisa, I had two. One is for you and one perhaps for Jean. So we recently hosted a very large custom GPU company for a call, and they talked about mega data centers coming up in the near to midterm, with nodes potentially in the 100,000-plus range and maybe up to 1 million. So as we look out at these kinds of data centers, from an architectural standpoint, is it a situation where winner takes all, where if somebody gets in, they kind of get all the sockets? Or will there be a reality where your chip or your board can be placed right next to somebody else's board, maybe on a separate line? Just help us understand how something like that would play out, and whether there's a chance for more than one competitor to play in such a large data center.

Lisa Su, CEO

Yes. So I'll talk maybe a little bit more at the strategic level. I think as we look at how AI shapes up over the next few years, there are customers who will be looking at very large training environments, and perhaps that's what you're talking about. Our view of that is, number one, we see that as a very attractive area for AMD. It's an area where we believe we have the technology to be very competitive. And I think the desire would be to have optionality in terms of how you build those out. Obviously, a lot has to happen between here and there. But to your overarching question of, is it winner takes all? I don't think so. That being the case, we believe that AMD is very well positioned to play in those, let's call it, very large-scale systems.

Harsh Kumar, Analyst

That's great. I have a quick question for Jean. Based on the model you discussed for June, I estimate an increase of approximately $400 million in the June quarter compared to March. You mentioned that both MI300 and EPYC will experience growth. Could you provide insights into how those two segments compare in terms of size within that growth? I'm estimating around $900 million for MI300 for June. Am I close to the mark, or is my estimate off?

Jean Hu, CFO

Harsh, we're not going to guide specific products below the segment revenue level. I think the most important thing is that we did say Data Center is going to grow double digits sequentially. I will leave it there.

Mitch Haws, Vice President, Investor Relations

There are no further questions at this time. I'd like to hand the floor back over to management for any closing comments.

Lisa Su, CEO

Thanks.

Operator

This concludes today's conference. You may disconnect your lines at this time. Thank you for your participation.