Earnings Call Transcript - CDNS Q4 2025

CADENCE DESIGN SYSTEMS INC (CDNS)

For the period ended 2025-12-31

Operator

Ladies and gentlemen, good afternoon. My name is Abby, and I'll be your conference operator today. At this time, I would like to welcome everyone to the Cadence Fourth Quarter and Fiscal Year 2025 Earnings Conference Call. Thank you. And I will now turn the call over to Richard Gu, Vice President of Investor Relations for Cadence. Please go ahead.

Richard Gu, Vice President of Investor Relations

Thank you, operator. I would like to welcome everyone to our fourth quarter of 2025 earnings conference call. I'm joined today by Anirudh Devgan, President and Chief Executive Officer; and John Wall, Senior Vice President and Chief Financial Officer. The webcast of this call and a copy of today's prepared remarks will be available on our website, cadence.com. Today's discussion will contain forward-looking statements, including our outlook on future business and operating results. Due to risks and uncertainties, actual results may differ materially from those projected or implied in today's discussion. For information on factors that could cause actual results to differ, please refer to our SEC filings, including our most recent Forms 10-K and 10-Q, CFO commentary, and today's earnings release. All forward-looking statements during this call are based on estimates and information available to us as of today, and we disclaim any obligation to update them. In addition, all financial measures discussed on this call are non-GAAP unless otherwise specified. The non-GAAP measures should not be considered in isolation from or as a substitute for GAAP results. Reconciliations of GAAP to non-GAAP measures are included in today's earnings release. Now I'll turn the call over to Anirudh.

Anirudh Devgan, President and Chief Executive Officer

Thank you, Richard. Good afternoon, everyone, and thank you for joining us today. I'm pleased to report that Cadence delivered excellent results for the fourth quarter, closing an outstanding 2025 with 14% revenue growth and a 45% operating margin for the year. We finished 2025 with a record backlog of $7.8 billion, well ahead of plan, reflecting broad-based portfolio strength and increasing contributions from our AI solutions. I would like to emphasize the essential nature of Cadence's engineering software. As I have stated previously, our platform is best viewed as a 3-layer cake framework, accelerated compute being the base layer, principled simulation and optimization as the critical middle layer, and AI as the top layer to drive intelligent exploration and generation. This holistic approach ensures that our AI solutions are not just fast, but physically accurate and grounded in scientific truth. Building on this foundation, we are deploying Agentic AI workflows powered by intelligent agents that autonomously call our underlying tools. AI flows act as a force multiplier, enabling our customers to significantly expand design exploration and accelerate time to market, while driving increased product usage and deeper engagement across our entire platform. We see growing momentum on both AI for design and design for AI fronts. On AI for design, our Cadence AI portfolio continues to gain traction with market-shaping customers. Last week, we launched ChipStack AI Super Agent, the world's first Agentic AI solution for automating chip design and verification. It is built upon our proven physically accurate products and provides up to 10x productivity improvement for various tasks, including design coding, generating test benches, and debugging. ChipStack has received compelling endorsements from Qualcomm, NVIDIA, Altera, and Tenstorrent, among others. Our other AI products such as Cadence Cerebrus, Verisium, and Allegro X AI are proliferating at scale.
And our LLM-based design agents powered by the JedAI data platform are delivering impressive results. On design for AI, the infrastructure AI phase is in full swing with AI architectures growing in scale and complexity. Customers are increasingly standardizing on Cadence's full flows to address their performance, power, and time-to-market challenges. We continue to closely collaborate with market leaders on their next-generation AI designs spanning training, inference, and scaling. We deepened our long-standing partnership with Broadcom through a strategic collaboration to develop pioneering Agentic AI workflows to help design Broadcom's next-generation products. We also expanded our footprint at multiple marquee hyperscalers across our EDA, hardware, IP, and system software solutions. And we are particularly excited by the emerging physical AI opportunity, and our broad-based portfolio uniquely positions us to enable autonomous driving and robotic companies to address multimodal silicon and system challenges. In addition, we are increasingly applying AI internally to improve efficiency across engineering, go-to-market, and operations. In 2025, we also furthered our partnerships with leading foundries. We expanded our collaboration with TSMC to power next-gen AI flows on TSMC’s N2 and A16 technologies. We strengthened our engagement with Intel Foundry by officially joining the Intel Foundry Accelerator Design Services Alliance. Rapidus made a wide-ranging commitment to our core EDA software portfolio across digital, custom analog, and verification solutions. And Samsung Foundry expanded its collaboration with Cadence, leveraging our AI-driven design solutions and IP solutions. Now turning to product highlights for Q4 and 2025. Accelerating compute demand driven by the AI infrastructure build-out and demanding next-generation data center requirements continue to create significant opportunities for our core EDA portfolio. 
Our core EDA business delivered strong performance with revenue growing 13% in 2025. Our recurring software business reaccelerated to double-digit growth in Q4, a testament to the strength and durability of our model. Our hardware business delivered another record year with over 30 new customers and substantially higher repeat demand from AI and hyperscalers. Seven out of the top ten customers in 2025 were Dynamic Duo customers, underscoring the differentiated value provided by our hardware systems. With a strong backlog entering 2026, we expect this year to be yet another record year for hardware. Our digital portfolio delivered a strong year, driven by continued proliferation of our full flow solutions as we added 25 new digital full flow logos in 2025. We expanded our footprint at a top hyperscaler, growing our AI-driven synthesis and implementation solutions, including our 3D-IC platforms. A marquee hyperscaler embraced the Cadence digital full flow for its first full customer-owned tooling AI chip tape-out. Broad proliferation of Cadence Cerebrus continues and adoption of our Cadence Cerebrus AI Studio is accelerating. Recently, Samsung U.S. used it to tape out an SF2 design, achieving a 4x productivity improvement. In custom and analog, our Spectre circuit simulator saw significant growth at leading AI and memory companies. Our flagship Virtuoso Studio, the industry standard for custom and mixed-signal design, saw continued traction in AI-driven design migration across its vast installed base. A top multinational electronics and EV customer reported a 30% layout efficiency gain using our AI-driven design migration. Our IP business saw strong momentum with revenue growing nearly 25% in 2025, reflecting both the strength of our expanding IP portfolio and the critical role our STAR IP solutions play in the AI, HPC, and automotive verticals.
We achieved both significant expansions and meaningful competitive wins at marquee customers, demonstrating the superior performance and capabilities of our IP solutions across HBM, UCIe, PCIe, DDR, and SerDes titles. We are seeing particularly strong adoption of our industry-leading memory IP solutions, including our groundbreaking LPDDR6 memory IP, which is enabling customers to achieve the memory performance and efficiency required for next-generation AI workloads. In Q4, we launched our Tensilica HiFi IQ DSP, offering up to 8x higher AI performance and more than 25% energy savings for automotive infotainment, smartphone, and home entertainment markets. Our System Design and Analysis business delivered 13% revenue growth in 2025. Earlier in the year, we introduced the new Millennium M2000 AI supercomputer featuring NVIDIA Blackwell, which is ramping nicely and with growing customer interest across multiple end markets. Our 3D-IC platform has become a key enabler for the industry's transition to multichip architectures, which are increasingly critical for next-generation AI infrastructure, HPC, and advanced mobile applications. Adoption of our AI-driven Allegro X platform is accelerating. Earlier in Q3, Infineon standardized on Allegro X and in Q4, STMicroelectronics decided to adopt our Allegro X solution to design printed circuit boards. Our reality data center digital twin solution continued its strong momentum and was deployed at several leading hyperscalers and marquee AI companies. BETA CAE continues to unlock tremendous opportunities, particularly in the automotive segment. With our previously announced acquisition of Hexagon's D&E business, we'll be poised to accelerate our strategy around physical AI, including in autonomous vehicles and robotics. In closing, I'm pleased with our strong performance in 2025, and I'm excited about the strong momentum across our business. 
As the AI era continues to accelerate, our AI-driven EDA, SDA, and IP portfolio, powered by new AI agents and accelerated computing positions Cadence extremely well to capture these massive opportunities. Now I will turn it over to John to provide more details on the Q4 results and our 2026 outlook.

John Wall, Senior Vice President and Chief Financial Officer

Thanks, Anirudh, and good afternoon, everyone. I'm pleased to report that Cadence delivered an excellent finish to 2025 with broad-based momentum across all our businesses. Robust design activity and strong customer demand drove 14% revenue growth and 20% EPS growth for the year. Productivity improvement across the company helped us achieve an operating margin of 44.6% for the year. Fourth quarter bookings were exceptionally strong, and we began 2026 with a record backlog of $7.8 billion. Here are some of the financial highlights from the fourth quarter and the year, starting with the P&L. Total revenue was $1.440 billion for the quarter and $5.297 billion for the year. GAAP operating margin was 32.2% for the quarter and 28.2% for the year. Non-GAAP operating margin was 45.8% for the quarter and 44.6% for the year. GAAP EPS was $1.42 for the quarter and $4.06 for the year. Non-GAAP EPS was $1.99 for the quarter and $7.14 for the year. Next, turning to the balance sheet and cash flow. Our cash balance was $3.01 billion at year-end, while the principal value of debt outstanding was $2.5 billion. Operating cash flow was $553 million in the fourth quarter and $1.729 billion for the full year. DSOs were 64 days, and we used $925 million to repurchase Cadence shares during the year. Before I provide our outlook for 2026, I'd like to share that it contains our usual assumption that export control regulations that exist today remain substantially similar for the remainder of the year. And our current 2026 outlook does not include our pending acquisition of Hexagon's design and engineering business. 
For our outlook for 2026, we expect revenue in the range of $5.9 billion to $6 billion, GAAP operating margin in the range of 31.75% to 32.75%, non-GAAP operating margin in the range of 44.75% to 45.75%, GAAP EPS in the range of $4.95 to $5.05, non-GAAP EPS in the range of $8.05 to $8.15, operating cash flow of approximately $2 billion, and we expect to use approximately 50% of our free cash flow to repurchase Cadence shares in 2026. For Q1, we expect revenue in the range of $1.420 billion to $1.460 billion, GAAP operating margin in the range of 30% to 31%, non-GAAP operating margin in the range of 44% to 45%, GAAP EPS in the range of $1.16 to $1.22, and non-GAAP EPS in the range of $1.89 to $1.95. And as usual, we published a CFO commentary document on our Investor Relations website, which includes our outlook for additional items as well as further analysis and GAAP to non-GAAP reconciliations. In conclusion, I am pleased that we delivered strong top line and earnings growth for 2025, and we finished the year with a record backlog and ongoing business momentum, setting ourselves up for a great 2026. As always, I'd like to thank our customers, partners, and our employees for their continued support. And with that, operator, we will now take questions.

Operator

And our first question comes from Vivek Arya with Bank of America Securities.

Vivek Arya, Analyst

Anirudh, I'm curious, have you seen any disruption or change of thinking whatsoever at your customers in terms of them using AI to reduce or eliminate demand for EDA or IP or any other computer-aided engineering tools? Is there a scenario at all that you have discussed, or that your customers might contemplate, where they can use more of their internal tools or AI to displace what you're doing right now?

Anirudh Devgan, President and Chief Executive Officer

Yes, thank you for the question, Vivek. I understand this is a concern for investors. As I mentioned previously, we view our situation as a three-layer structure. There's a lot of talk about whether AI will replace certain types of software, but it's important to remember that there are various types of software. Our focus is on engineering software that performs highly complex physics-based mathematical operations. The AI tools we are developing, or that our customers are utilizing, ultimately rely on our software to effectively accomplish their tasks. Instead of seeing a reduction in demand, we're observing that as we shift towards these Agentic workflows, our software is being used more than before. For example, our own super agent, ChipStack, automates portions of the workflow that were previously manual. In traditional AI applications, there's significant automation in coding, which, in chip design, corresponds to the writing of RTL code that describes the chip or system. This coding process has largely remained manual, but once that's done, our tools come into play to optimize, simulate, and verify that RTL. With our AI workflows, we are introducing additional tools to automate RTL writing, while still relying on numerous middle and base layer tools for implementation and verification. Customers have expressed a desire to leverage more AI, indicating they will likely increase R&D investments and hire more engineers. However, as a portion of their overall spending, they will direct more resources towards automation and computing. The unique nature of our end market means that workloads are growing exponentially; for instance, if a chip company's market cap skyrockets from $100 million to $1 trillion over the next few years, they will require extensive additional work, some of which will be carried out by AI agents using our foundational tools. In conclusion, we've had no discussions with customers about reducing their usage of our software.
On the contrary, the implementation of AI tools is driving increased use of our products, especially as customers design more chips.

Operator

And our next question comes from the line of Joe Vruwink with Baird.

Joe Vruwink, Analyst

I maybe wanted to ask about how you're approaching the outlook for 2026. It looks like recurring revenue is set to accelerate, and that's normally well supported by backlog. Maybe can you talk about the key contributors to the recurring improvement? And then just on the 20% or so of revenues that come from upfront sources, you obviously had an incredible 2025 with your hardware platforms, and it sounds like you're expecting growth there again. I think we're in year 2 of that platform now. Can you kind of see a repeat of what you observed back in 2023? That was a very strong year 2 for the second-gen product. How maybe are you thinking about that product and just where it is in its life cycle?

John Wall, Senior Vice President and Chief Financial Officer

Yes, thank you for the question, Joe. This is John. At this time of the year, our guidance reflects what we believe to be a sensible and well-considered outlook for the year. We ended the year with significant momentum in our backlog, and that strength was evident across all areas of our business. As Anirudh mentioned, we view the AI era as one that accelerates workloads beyond the growth of headcount, and Cadence capitalizes on these workloads through a wide array of offerings in EDA, IP, hardware, and SDA. We are seeing this positive trend reflected in all parts of our business. Typically, our hardware business is pipeline-driven at this time of the year. We anticipate a robust first half for hardware, but since we usually only see two quarters in our pipeline, we are cautious regarding the second half of the year, which aligns with our usual practice. We often take steps to mitigate risks in our guidance for hardware and the China market at this time. In terms of China, its contribution was approximately 12% of our revenue in 2024 and 13% in 2025, and we expect it to remain in that range of 12% to 13% of our revenue this year as well. However, we are seeing significant strength overall and are pleased with our guidance. A key point to highlight from the CFO's commentary is that around 67% of our revenue for 2026 is coming from the beginning backlog, providing us with substantial visibility into recurring revenue over the years. We are very pleased to see our recurring base returning to double-digit growth in the low teen range.

Operator

And our next question comes from the line of Joe Quatrochi with Wells Fargo.

Joe Quatrochi, Analyst

Just kind of curious, maybe following up on that. On the verification and emulation hardware cycle, any sort of help on just kind of where you think you are at in that cycle? And then is there anything we should think about just in terms of memory availability from that perspective or just anything about margins given pretty significant price increases that we've seen across the DRAM spectrum?

Anirudh Devgan, President and Chief Executive Officer

Yes, that's a great question. Hardware continues to set records each year, and I anticipate this trend will persist. These hardware systems are essential for the design of complex chips and systems. No advanced AI chip or any mobile or automotive chip is created without these systems. We offer the best hardware systems available because we develop our own chips manufactured by TSMC, and we provide full racks equipped with trillions of transistors to emulate other chips. Although these systems are purchased upfront, customers typically use them for many years, with major clients buying new systems almost every year. I don’t expect this trend to change. Even with the launch of Z3, Z2 remains a robust system. It is now in its second year and still has the capacity to design systems with 1 trillion transistors, which should be viable for several years. We plan to introduce our next system in a few years, keeping us ahead of market needs. Regarding demand, there is no noticeable difference from last year – demand is actually stronger, as reflected in our backlog. As John mentioned earlier this year, we are being cautious with hardware, but we will provide updates midway through the year based on performance. Our hardware systems are operating well, and we are gaining market share across all major product segments. We're seeing growth in hardware and a strong upward trend in IP, which is encouraging after three years of consistent growth. As you may know, we typically do not consider one year's performance as a trend. However, after three years, I feel optimistic about our IP business. Our core EDA business is thriving, we are gaining share in 3D-IC, and we are first to market with Agentic AI, which has already attracted numerous customers. Overall, I am very confident about our hardware business and our entire portfolio's performance.

Operator

And our next question comes from the line of Jim Schneider with Goldman Sachs.

James Schneider, Analyst

I was wondering if you could talk about a little bit more about your AI workflows. And if it's possible to quantify any of the benefits that your customers are getting from those workflows today, whether that be time to market, enhanced productivity per seat, or so on? And maybe separately, kind of address how you're able to monetize that and how broad that is across your portfolio today?

Anirudh Devgan, President and Chief Executive Officer

Yes, Jim. First, the results we're seeing with AI are truly impressive. A few years ago, there were doubts about AI's benefits, but now we're seeing fantastic and tangible results in chip design. Unlike other industries, chip design has formal languages, specifically RTL, to ensure correctness. Over the last few decades, we have developed tools to verify, simulate, and optimize this RTL, making AI a powerful enabler in chip design. For example, Samsung reported a 4x increase in productivity, while Altera mentioned improvements of 7 to 10 times. We're also seeing significant gains in productivity for tasks like RTL writing, which has traditionally been manual. In physical design, we could see PPA improvements of 7% to 12%. The gains from using AI can be as impactful as transitioning to a smaller technology node, achieving 10% to 20% improvements. Our customers are eager to rapidly adopt AI in their R&D processes, primarily through our Cadence tools. There's a growing demand for engagement, and our monetization efforts are beginning to reflect in our results and record backlog. With Agentic AI, which automates tasks like RTL or test bench writing that were previously manual, we plan to introduce a new pricing model akin to virtual engineering, which will create additional revenue opportunities. Customers are willing to invest in these improvements as they enhance productivity. Furthermore, the use of Agentic AI allows for much greater experimentation versatility compared to traditional methods, leading to increased usage of our base tools. Our strategy for monetization is working well, and we will charge accordingly for both the Agentic AI and the base tools, anticipating strong customer interest in these innovations.

Operator

And our next question comes from the line of Gary Mobley with Loop Capital.

Gary Mobley, Analyst

Let me congratulate you on a strong finish to the year. John, I understand there's been an initiative to shift your SD&A customers to 1-year license terms. If I'm correct, this has been a barrier to growth. So, is this the reason why SD&A revenue increased only 13% in 2025? What are your considerations for 2026? Additionally, what are the plans for Hexagon when you integrate that business? I believe their revenue run rate was at $240 million. Is that figure constrained due to the transition to 1-year license terms?

John Wall, Senior Vice President and Chief Financial Officer

Thank you for the question, Gary. You're correct regarding SD&A; we faced some challenging comparisons in SD&A during Q4 2025, partly due to our multiyear business. In Q4 2024, we had some multiyear business through our BETA subsidiary, and we intentionally shifted to more annual subscription arrangements for BETA in 2025, which affects the year-over-year numbers. Despite this, we are very pleased with SD&A's strategic direction and its contribution to our chip-to-systems approach. In terms of revenue mix, SD&A accounted for about 16% of revenue in 2025, which is consistent with 2024. We expect it to grow, along with all product groups, although we are not providing guidance by segment. Regarding Hexagon, we have previously indicated that the annualized revenue for Hexagon is around $200 million. This means, similar to BETA, if a deal is completed by the end of Q1, it could yield approximately $150 million in revenue for the year. However, we are not providing guidance on Hexagon at this time.

Operator

And our next question comes from the line of Charles Shi with Needham.

Yu Shi, Analyst

Anirudh, I believe the standout achievement of the quarter was the announcement regarding a major hyperscaler customer adopting Cadence's digital full flow. You mentioned that this is for the first COT chip they plan to tape out, which suggests we can anticipate this hyperscaler releasing a COT chip in two to three years. I would like to know how many hyperscaler customers are currently working with COT. For that specific customer with the first COT chip, what do you expect the ramp-up to be? How will they expand the use of COT for the other chips they are developing? As I understand it, every hyperscaler typically has multiple chips. I'm curious about your perspective on the overall proliferation of COT. I think this is a significant narrative for Cadence and for EDA in general, and I would like to hear your thoughts on it.

Anirudh Devgan, President and Chief Executive Officer

Thank you for your question, Charles. While I can't discuss specifics about any one customer, I want to highlight that we've been working closely with our clients in a confidential manner. We share our roadmaps and collaborate with leading companies worldwide. I've pointed out before that large hyperscalers are increasingly developing their own chips, and this trend has become even more pronounced over the past couple of years. We see major hyperscalers achieving success with their in-house chips, particularly in the last six months. Initially, there was skepticism about whether companies would design their own chips, but it's clear now that they are. This does not suggest that the merchant semiconductor market won't thrive; rather, it indicates that major customers are stepping up their chip design efforts. Over time, we expect these customers to handle more aspects of production internally, starting with Application-Specific Integrated Circuits (ASICs) and moving to hybrid customer-owned tooling (COT) and then full COT. Nowadays, chips incorporate multiple chiplets, offering customers the flexibility to design some chiplets in-house while outsourcing others, or even managing the entire process themselves. We believe this trend will continue, albeit at varying paces among different customers. Ultimately, many customers will have their own chips alongside significant general-purpose semiconductor options. As more companies shift towards COT and innovate with multiple chips for each hyperscaler, this is a positive development for us. It will lead to increased Electronic Design Automation (EDA) usage among system companies, more internal intellectual property, and a growing demand for hardware and system tools. We are committed to positioning ourselves well within this evolving landscape, and this trend is set to expand into other sectors, including automotive and robotics.

Operator

And our next question comes from the line of Siti Panigrahi with Mizuho.

Sitikantha Panigrahi, Analyst

You talked about robust design activity. Can you give us some color in any kind of improvement on your traditional semi segment versus AI or automobile? If you could give some color, that would be helpful. And Anirudh, on the physical AI side, that was a big focus at CES recently. Have you started seeing any traction in that space? When do you think that will be a significant contributor?

Anirudh Devgan, President and Chief Executive Officer

Yes. Thanks for the question, Siti. On both, I mean, design activity is accelerating, like I was saying, and that's true for system companies and semi companies. And actually, a lot of the projections are that the semi industry might hit $1 trillion this year, a milestone that used to be projected for 2030, so we are about 4 years ahead of that. So this is very good news for the industry. And of course, we have deep partnerships with all the major semi players and definitely the AI leaders like NVIDIA and Broadcom. Actually, in the prepared remarks, we also highlighted our new collaboration with Broadcom, which is, of course, doing phenomenally well, and so is NVIDIA. And then, of course, all the memory companies are doing phenomenally well. So overall, I think the semi companies, along with system companies, are doing great. And I see that especially in AI and memory, but we do see that the general market, I'm sure you follow that, the mixed-signal companies, the regular, let's call it, the regular semi companies, I think, also have a better outlook for '26 than '25. So it's good to see broad-based strength in the semi business, which is about 55% of our business. And that just creates a better environment for us to deploy our new solutions. And they all want to deploy AI like we discussed earlier. And that's true for both semi and system companies. So overall, I feel that the environment is much healthier starting '26 than it was a year ago.

Operator

And our next question comes from the line of Lee Simpson with Morgan Stanley.

Lee Simpson, Analyst

I just wanted to go back to ChipStack, if I could. I mean, it seems relatively clear that you see the super agent as something that can transform from Verilog to RTL or the coding thereof at least. And then it would pull in basic layer tools for debug and optimization. So you get a more deterministic outcome for customers. But you teased us a little bit with the idea about where the further monetization would come. It didn't sound like it would be on a subscription basis; it would be on a sort of value to customer basis. So I wonder if you could maybe just expand a little on that and how that would be monetized? And maybe in particular, whether or not this would be margin accretive. You're at 45% now already. So could this help kick that on?

John Wall, Senior Vice President and Chief Financial Officer

That's an excellent question, Lee. I want to address the monetization aspect by stating that we don't anticipate a major shift from subscriptions to consumption due to AI. Our customers still prefer reliable access to validated tools and certified workflows, which is why multiyear subscriptions are central to our business model. AI influences the frequency with which customers utilize our tools and the areas where value is generated, leading to increased automation, iterations, and computing power. As a result, we will implement more usage-based pricing to accommodate additional capacity and AI enhancements. We offer various models to manage these complexities. Additionally, on the services front, we can provide outcome-focused packages designed around measurable improvements such as cycle times and productivity, all under clearly defined scopes and governance. This approach has proven effective for us and is evident in the positive shift in our recurring revenue. While we remain cautious in our forecasts and do not expect immediate growth, there is considerable potential for Cadence in the AI space. However, as Anirudh mentioned earlier, two key factors set Cadence apart. First, our engineering software is rooted in physics and rigorous mathematical optimization, which is essential for our customers as complexity increases. Second, AI is enhancing demand for our products rather than replacing them, which is reflected in our performance projections for 2025 and our guidance for 2026.

Operator

And our next question comes from the line of Jason Celino with KeyBanc Capital Markets.

Jason Celino, Analyst

Looks like IP had a phenomenal year. I know you have a slate of new exciting titles coming out, but I just wanted to ask how that translates to pipeline? Like does it take time to sell these new IP titles? And then with the guide overall, it looks mostly first half weighted. Does your visibility into the IP today look more first half or second half?

Anirudh Devgan, President and Chief Executive Officer

IP is performing exceptionally well. As I mentioned earlier, we want to observe performance over several years before making definitive statements. However, since last year, I've been confident in our outlook because we've seen consistent performance and a positive forecast extending into 2026, which I believe will materialize. Our starting backlog in IP is robust, and we are also exploring opportunities beyond our traditional partnership with TSMC, which is thriving, to engage with newer foundries. Overall, I anticipate a strong year for IP this year, and we will keep you informed on its progress. I expect 2026 will be particularly strong for IP.

Operator, Operator

And our next question comes from the line of Jay Vleeschhouwer with Griffin Securities.

Jay Vleeschhouwer, Analyst

Anirudh, if we think about what's currently occurring with the AI phenomenon in large EDA historical terms, the last time I would argue that there was a major, let's call it, generational technical and procedural change in the industry was in the early 2000s. And I'd like to ask how this time might be different from that phenomenon, in the sense that the last time it was fairly narrowly based in terms of the number of products that grew or were newly adopted. We saw the very interesting phenomenon where average contract durations actually shrank, I think, as customers were looking to perhaps mitigate technical risk and wanted to retain some vendor flexibility or optionality, hence the shorter durations at that time. Would you say that this time around, the adoption phenomenon might last longer than the few years of the earlier generation I mentioned, and that there wouldn't necessarily be an adverse effect on contract durations, perhaps even a lengthening with longer commitments from customers? And maybe talk about how, in those big respects, this phenomenon might be broader and more long-lasting than what occurred, again, many years ago, while having some similarities.

Anirudh Devgan, President and Chief Executive Officer

Yes, that's a great point, Jay. We need to observe how it develops because while each situation has similarities, they also differ. Fortunately, we aren't seeing any changes in duration, which is positive. There are always opportunities for more add-ons like we've discussed previously, which will impact all aspects of the flow. The top two layers will merge—AI and our core engines. I see potential for introducing new product categories, particularly at the front end with a super agent that can write RTL, along with test benches and verification flows, since chip verification is just as critical as chip design. Our customers expect first-time accuracy. Therefore, the opportunities with AI in verification are significant due to the NP-complete exponential challenge it presents. I'm particularly excited about the new Agentic AI tools that enhance verification accuracy. Overall, I feel optimistic about the strength of all three layers of our approach. We've been innovating and leading the market in adapting our software to new hardware platforms, whether they are parallel CPUs, GPUs, or custom chips. Our base tools are performing exceptionally well, and we are gaining market share across almost all segments. Additionally, we're the first to market with Agentic AI. I’m pleased with our portfolio and the engagement we've established. While it's hard to predict exactly how things will unfold, I think it should last longer than previous cycles. We will keep you updated, but everything seems positive so far.

John Wall, Senior Vice President and Chief Financial Officer

Yes. This is John. We've been focusing on Moore's Law for a long time and have developed sales models that align price with value while maintaining our recurring revenue model. You can rely on us to ensure customer predictability, with subscriptions remaining central to our relationships with customers. We also won’t take excessive outcome risks. We will scope and measure outcomes, and price based on value metrics. Customers will have control over aspects like jobs, runs, compute, and throughput. Our growth will be deliberate and thoughtful, as it always has been.

Operator, Operator

And our next question comes from the line of Gianmarco Conti with Deutsche Bank.

Gianmarco Conti, Analyst

Congratulations on a strong quarter. I have a lengthy question. Apologies for revisiting ChipStack, but could you provide details on how we can connect ChipStack, which we know focuses on RTL automation, with its evolution relative to Cerebrus, which handles netlist implementation? My question is whether there might be potential cannibalization in the future. Additionally, regarding the AI theme, could you share insights on model development in AI and whether you're observing increased competition, especially from startups? I understand that this could be influencing client engagement. Lastly, are there any significant constraints when running multiple agents, considering the need for increased computing power, particularly at larger design scales?

Anirudh Devgan, President and Chief Executive Officer

Yes, I apologize for any background noise. I believe I understood your question, although I may not have caught every detail. It seems you're asking about the differences between our front-end agent and Cerebrus, as well as our approach to start-ups. First, I want to emphasize the critical importance of Cerebrus. There will be various types of agentic AI flows that we will need to develop. We mentioned ChipStack because it's a new category in RTL design and verification, but we are actively working on several other agents as well. We are extending this to a full flow, pairing a front-end design agent like ChipStack with a back-end agent like Cerebrus for physical implementation, which is currently time-consuming and in high demand for efficiency improvements. Similar principles are at play in Cerebrus AI Studio, where we conduct more explorations to ensure customers achieve better results. Moving forward, we will be highlighting considerable activity in back-end physical design, alongside areas like digital design, verification, and analog processes, where we are exploring automation in migration flows. While we are very enthusiastic about ChipStack, it does not detract from our development of several significant agentic flows. Regarding start-ups, we continuously monitor them and have a history of acquiring promising candidates, typically targeting earlier-stage companies, as we did with ChipStack, which we believe is the top AI start-up available. We are very confident in our own research and development capabilities, with a team of around 10,000 people, including many with advanced degrees. We also have 3,000 customer support engineers and engage regularly with our major customers, holding multiple R&D meetings each week to gather their insights. Our substantial investment in R&D positions us well. Generally, start-ups may find success in areas outside our focus or as we explore new territories.
However, our dedication to AI is comprehensive, and we leverage start-ups as accelerators when needed, while maintaining a significant investment across all major domains that our customers prioritize.

Operator, Operator

And our next question comes from the line of Ruben Roy with Stifel.

Ruben Roy, Analyst

Anirudh, you answered bits and pieces of what I'm about to ask, but I was hoping to put together a question on SD&A and just to understand sort of the longer-term strategy. It seems like some companies, enterprises, industrials otherwise are maybe thinking about pulling some simulation workloads in-house or partnering with the AI infrastructure ecosystem. We've seen Synopsys and NVIDIA talk about targeting Omniverse digital twins for that type of thing. How should investors think about your strategy? Is it sort of a neutral strategy and you'll work with accelerated compute providers, et cetera, and their tools? Or are you trying to build sort of an ecosystem that's Cadence specific? I'm just trying to understand kind of longer-term strategy and thinking around SD&A.

Anirudh Devgan, President and Chief Executive Officer

Thank you for the question. In SD&A, we focus on two critical areas: 3D-IC and the innovations occurring there, particularly in package-level analysis, as well as physical AI for simulations involving planes, cars, robots, and drones. This focus is a significant reason for our acquisitions of BETA and Hexagon. We are dedicated to developing core engines that integrate seamlessly with accelerated computing. We have collaborated with NVIDIA on GPU projects for years and were the first to adapt all our software solvers for accelerated computing platforms, as physical simulation often involves matrix multiplication, a process in which NVIDIA excels, especially since AI fundamentally relies on matrix multiplication. We also utilize Omniverse, not as a primary platform for our tools, but rather as an additional route to market alongside our direct engagement with customers. While Omniverse serves as a great platform to deploy our products, we remain neutral in this regard. Our primary aim is to create fundamental solvers capable of tackling the toughest challenges, integrating them with AI and computing resources, and making them available across all platforms. I feel confident about our position here.

Operator, Operator

And our next question comes from the line of Andrew DeGasperi with BNP Paribas.

Andrew DeGasperi, Analyst

I just had a question. You mentioned several times in the prepared remarks about taking share across the board. And I was just curious, is this kind of a change relative to previous quarters? And is it focused in any particular area? And are you surprised by this relative to what you've seen in the past?

Anirudh Devgan, President and Chief Executive Officer

Yes. I believe our competitive position has improved, and we are aware of this. This is particularly evident in hardware, due to the uniqueness of our platforms and intellectual property. The results reflect this, as our growth exceeds that of the market. Our intellectual property is performing well, hardware is strong, and we continue to excel in EDA and 3D-IC, while maintaining a solid position in analog and making gains in digital and verification. I am optimistic about our status as a technology-focused, R&D-driven company, and these investments are clearly benefiting us as customers adopt more of our solutions.

Operator, Operator

And our next question comes from the line of Kelsey Chia with Citi.

Wei Chia, Analyst

Congrats on the great results. I'd like to dive in a little on China. So John, you mentioned that you contemplated more prudent guidance for China. China revenue grew 18% last year, outpacing the corporate average and coming in well above your initial guidance heading into 2025. How should we think about the sustainability of this strength? And what assumptions have you embedded in that guidance?

John Wall, Senior Vice President and Chief Financial Officer

As we mentioned earlier, China contributed 12% of revenue in 2024 and 13% in 2025, and our guidance anticipates a similar range of 12% to 13% for 2026. We are observing very robust design activity and strong bookings growth in China. However, visibility in the pipeline is more immediate for the first half of the year. Therefore, we are exercising caution in our guidance for the second half, as we have greater clarity on the first. Anirudh, do you have any additional comments on design activity in China?

Anirudh Devgan, President and Chief Executive Officer

Design activity is good in China, and I think it has stabilized. We mentioned this last year as well; the second half had stabilized, and I think it continues to be strong. I mean, all the trends that are in the U.S. are also in China: a lot of AI chips, and physical AI is even stronger with cars, autonomous driving, and EVs. So it's good to see China doing well.

Operator, Operator

And our next question comes from the line of Joshua Tilton with Wolfe Research.

Joshua Tilton, Analyst

I will echo my congratulations on a strong quarter. I kind of have a high-level one. I know a lot of times we focus on what the 3-year CAGR has been, and I think on this call, Anirudh mentioned that semi companies now represent, or still represent, from my understanding, about 55% of the business. So my question is, how do we think about growth over the next 3 years as the mix of semis and systems levels out, and as what feels like the mix of upfront and recurring levels out at what I'm assuming are more sustainable levels than the shifts you've seen over the last few years?

Anirudh Devgan, President and Chief Executive Officer

Yes. I think we are super excited about the system companies doing more silicon. And there have been some questions in the past, and like I had said before, I think this is an irreversible and accelerating trend, okay? And of course, we gave several examples this time. And especially because of AI, the system companies will do a lot, and then with physical AI, they will do even more. Now that 55-45 number, first of all, moves very, very slowly because the semi companies are doing well, too. I mean, we are growing at a record pace, but both of them are growing. So for semi companies, what NVIDIA has done, of course, is phenomenal. What is happening with Broadcom is phenomenal. And then Qualcomm, MediaTek, so many semi companies are doing phenomenally well. So on the ratio, I think more and more system companies will contribute more, but it doesn't move as fast as you would think, which is a good thing because the semi companies are also growing rapidly. And of course, semi companies will have an essential role in the build-out of AI, which is driving all this growth. So that's what I would like to say.

John Wall, Senior Vice President and Chief Financial Officer

And Josh, I previously mentioned that we anticipate the recurring revenue mix to stay around 80% in fiscal '26, consistent with 2025. When we refer to our cautious guidance for 2026, we believe there is significant potential for growth in our recurring revenue as well as our upfront revenue. We appreciate this balance strategically; recurring revenue offers stability while upfront revenue highlights areas of accelerating customer demand and our unique assets. We're experiencing strong performance across the board, which is why Anirudh is discussing market share gains.

Operator, Operator

And our final question comes from the line of Nay Soe Naing with Berenberg.

Nay Soe Naing, Analyst

Maybe one for John. You mentioned leveraging AI internally, and I was wondering how we should think about that in our models. How should we think about your incremental margins going forward? I think your '26 guide implies incremental margins of about 51%, which is slightly below the rate you've been trending at in recent years. So I just wanted to triangulate the internal AI leverage with how you're guiding for margin in '26, and how we should think about margin a bit longer term in the age of AI?

John Wall, Senior Vice President and Chief Financial Officer

Yes, thank you for the question. Looking at our achievements in 2025, we reached an incremental margin of 59%. This indicates that there is significant potential for operating leverage within the company, which has historically performed around a 45% operating margin. Thus, there is considerable room for improvement beyond that 59% incremental margin we attained in 2025. Typically, we maintain a cautious approach when providing guidance at the beginning of the year and prefer to build from there. In that light, the 51% in our current guidance likely aligns with what we would propose for incremental margin at the start of each year. I believe this is one of the strongest guidance frameworks we've ever established. Regarding your observations on AI and how we utilize it internally, you're correct. Anirudh has mentioned this for years; our design processes are geared towards AI, and we have gained valuable insights from our internal team about its applications. Furthermore, we have developed a robust business focused on emulating hardware, with much of our AI implementation aimed at simulating engineering workflows, which enhances the value derived from our R&D investments. We anticipate that as we leverage more engineering capabilities and AI, we will likely increase our R&D efforts, involving more personnel and AI resources, rather than reducing staff.

Anirudh Devgan, President and Chief Executive Officer

Thank you all for joining us this afternoon. It's an exciting time for Cadence as we begin 2026 with product leadership and strong business momentum. Our continued execution of our Intelligent System Design strategy, customer-first mindset, and high-performance culture are driving accelerated growth. Great Place to Work and Fortune Magazine recognized Cadence as one of Fortune's 100 Best Companies to Work For in 2025, ranking us #11. On behalf of our employees and our Board of Directors, we thank our customers, partners, and investors for their continued trust and confidence in Cadence.

Operator, Operator

And ladies and gentlemen, thank you for participating in today's Cadence Fourth Quarter and Fiscal Year 2025 Earnings Conference Call. This concludes today's call, and you may now disconnect.