Earnings Call Transcript

Arista Networks, Inc. (ANET)

For the fiscal quarter ended September 30, 2025

Earnings Call Transcript - ANET Q3 2025

Rudolph Araujo, Head of Investor Advocacy

Thank you, Christa. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer; and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for the fiscal third quarter ended September 30, 2025. If you want a copy of the release, you can access it online on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the fourth quarter of the 2025 fiscal year, longer-term business model and financial outlook for 2026 and beyond, our total addressable market and strategy for addressing these market opportunities including AI, customer demand trends, tariffs and trade restrictions, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. This analysis of our Q3 results and our guidance for Q4 2025 is based on non-GAAP results and excludes all noncash stock-based compensation impacts, certain acquisition-related charges and other nonrecurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.

Jayshree Ullal, CEO

Thank you, everyone, for joining us this afternoon on our third quarter 2025 earnings call. Arista continues to drive its 19th consecutive record quarter of growth in this AI era. We achieved almost $2.31 billion in revenue this quarter, with software and services contributing approximately 18.7% of revenue. Our non-GAAP gross margin of 65.2% was influenced by favorable mix and inventory benefits. Americas was strong at almost 80% and international at approximately 20%. On September 11 at our Analyst Day, we showcased both networking for AI and AI for networking with our continued momentum across our data-driven network platforms. Unlike many others, our Etherlink portfolio highlights our accelerated networking approach, bringing a single point of network control for zero-touch automation, trusted security, traffic engineering and telemetry to dramatically improve compute and GPU utilization. Superior AI networks from Arista improve the performance of AI accelerators. Of course, we interoperate with NVIDIA, the worldwide market leader in GPUs, but we also recognize our responsibility to create a broad and open ecosystem, including AMD, Anthropic, Arm, Broadcom, OpenAI, Pure Storage and VAST Data, to name a few, and build that modern AI stack of the 21st century. This stack includes the trio of compute, memory storage and a solid network foundation to run training and inference models. Our stated goal of $1.5 billion in aggregate AI revenue for 2025, comprising both back end and front end, is well underway. We are now committed to $2.75 billion in AI revenue out of our new 2026 target of $10.65 billion in total revenue, representing 20% revenue growth. We are experiencing momentum across cloud and AI titans, neo cloud providers and the campus enterprise. The demand and scale of AI build-outs are clearly unprecedented as we look to move data faster across multiplanar networks. People and leadership are key to our success.
And to that end, we announced Todd Nightingale as our President and Chief Operating Officer last quarter. This time, we want to celebrate the promotion of Ken Duda, our President and Chief Technology Officer, now responsible not only for engineering but for our top AI and cloud segment of customers as well. Ken, as many of you know, has been a champion of architecture, innovation and culture since founding Arista over 20 years ago. Ken, would you like to say a few words?

Kenneth Duda, President and Chief Technology Officer

Thanks, Jayshree. One of the best things about working at Arista is getting to build some of the most ambitious networks ever built: ultra-low latency trading networks, global-scale cloud networks and, most recently, multi-petabit AI networks. Our success in AI has many sources: the sheer power and performance of our hardware platforms, our innovations in fabric architecture, our AI-focused telemetry and provisioning automation, our reputation for the highest quality software, our leadership in the Ultra Ethernet Consortium, the UEC, and our work in Ethernet Scale-Up Networking, or ESUN. And most importantly, the way we partner with the world's largest AI companies. Partnership has been key to our success over and over at Arista, and the AI revolution is no exception. In addition to being a lot of fun, these partnerships benefit our company, both through the sheer revenue opportunity and in providing us with the opportunity to learn and innovate at the edge of what's possible. We can then apply what we've learned to bring solutions to the broader networking market, helping a much larger and more diverse customer base build the most advanced and reliable infrastructure in the industry. For example, our Etherlink distributed switch fabric powers some of the largest AI fabrics in the world. It's also an excellent underlay for data centers of all sorts, providing a full line-rate fabric with no hotspots at petabit scale for all workloads, including AI. Etherlink speeds are going from 800 gigabits today to 1.6 terabits in the near future while leveraging our EOS operating system and our NetDI diagnostics infrastructure for top hardware and software reliability. Arista AVA, or Autonomous Virtual Assist, uses AI to help our customers design, build and operate their networks.
AVA draws on both our internal knowledge base and the customers' data stored in NetDL, Arista's network data lake. Plus, AVA has agentic capabilities to help troubleshoot proactively. Our other recent innovations include SWAG, our switch aggregation technology, which provides the features of campus stacking along with fault containment and in-service software upgrades for maximum uptime. By running a common EOS and a common NetDI platform across so many use cases, we are able to maintain alignment between our different market segments, leveraging central engineering investments efficiently as we pursue cloud, enterprise and AI markets simultaneously. I am so grateful for the opportunity to lead the Arista engineering and cloud teams in an era with so many exciting opportunities.

Jayshree Ullal, CEO

Thank you, Ken, and congratulations on a fantastic 21-year career and a very well-deserved promotion at Arista. You have always built the always-on resilient leaf-spine architecture, now both for networking for AI workloads and, with AVA, for bringing AI to networking. At Oracle AI World, Ken was invited to formally announce our collaboration with Oracle Acceleron. This builds upon a decade of partnership with Oracle, starting with the Exadata migration from InfiniBand to Ethernet, to RoCE, RDMA over Converged Ethernet, and now multiplanar networking across cloud AI for on-time job completion in gigawatt-scale AI data centers. As part of our Leadership 2.0, we have built and focused a cloud and AI mission and organization, now led by industry veteran Tyson Lamoreaux, reporting to Ken and Hugh. I am so delighted to formally welcome Tyson to Arista. Tyson, as many of you know, built the first cloud network for Amazon AWS in the 2000s and pioneered the first AI network for a stealth sovereign AI company over the last couple of years. Tyson, you've had a busy few weeks here. Tell us more.

Tyson Lamoreaux, Cloud and AI Leader

Thank you, Jayshree, and thanks for the question. It's really incredible to have joined the team at a time where Arista is building so much momentum. Spending time with customers has been a top priority for me since coming on board. And I've been so impressed with how strong these partnerships are, both with our long-standing titans and with our emerging customers. We're deeply engaged with them on next-gen architectures for their cloud networks, front end, back end, scale up, scale out and scale across. I mean, really everywhere. It's translating to a ton of wins, and I've got to say it's a lot more than I anticipated before I got here. I really love our continuing commitment to open standards and innovation like ESUN and UEC and, of course, the practical here, now and always problems that we're addressing by building the hardware systems, software, everything that delivers exceptional power efficiency, reliability, density, visibility and manageability for our customers. I think my background as a builder and operator is really well suited to helping the team anticipate customer needs and delivering the right products for them. I guess the last thing I'd highlight is the culture. I mean it's just tremendous. The customer focus, commitment to quality, innovation and operational excellence are top notch here and have made me feel right at home. Thanks, Jayshree. Back to you.

Jayshree Ullal, CEO

Thank you, Tyson, and welcome home. With Tyson's credentials and track record, Arista is really poised to address multiple facets of cloud and AI innovation at a system-wide level, converging silicon, hardware, software, cables, optics and racks as an overall platform. At the Open Compute Project Summit, OCP, Arista unveiled its first Ethernet for Scale-Up Networks, or ESUN, specification, alongside 12 important industry experts. While we began with 4 co-founders, we are now adding more and more participants so that we can build the right interoperable scale-up standard. While there's always white noise, Arista also continues to clarify our role in white box and how we will continue to coexist, like we have for the past decade or more. The concept is clear. It's all about good, better and best, where in some simple use cases, a commodity white box is good enough. Yet in other cases, customers seek the value of better Arista blue boxes: state-of-the-art hardware with built-in NetDI for signal integrity, physical, passive and active component management and troubleshooting. The best is, of course, the Arista branded EOS platform for the ultimate superiority. We find ourselves amid an undeniable and explosive AI megatrend. As AI models and tokens grow in size and complexity, Arista is driving network scale for AI XPUs, handling the power and performance. Basically, the tokens must translate to terawatts, teraflops and terabits. We are experiencing a golden era in networking with an increasing TAM, now over $100 billion in forthcoming years. Our centers of data strategy, ranging from client to branch to campus to data and now cloud and AI centers, is a very consistent mission for the company. We will continue to invest in our customers, our leaders, our partners and, certainly most of all, our innovative technology. And with that, Chantelle, I'd like to hand it to you, as our CFO, for financial specifics.

Chantelle Breithaupt, CFO

Thank you, Jayshree. It is great to see the broadening of the AI ecosystem, and I am excited for Arista to be an innovator within it. Turning now to Q3 performance. Total revenues were $2.31 billion, up 27.5% year-over-year, above our guidance of $2.25 billion. This was supported by strong growth across all of our product sectors. International revenues for the quarter came in at $468.3 million, or 20.2% of total revenue, down from 21.8% in the prior quarter. The overall gross margin in Q3 was 65.2%, above our guidance of 64%, down from 65.6% last quarter and up from 64.6% in the prior year quarter. The year-over-year gross margin improvement was primarily driven by strength in the enterprise segment. Operating expenses for the quarter were $383.3 million, or 16.6% of revenue, up from last quarter at $370.6 million. R&D spending came in at $251.4 million, or 10.9% of revenue, up from $243.3 million in the last quarter. Sales and marketing expense was $109.5 million, or 4.7% of revenue, compared to $105.3 million last quarter. Both quarter-over-quarter dollar increases were driven by additional headcount, inclusive of the VeloCloud acquisition. Our G&A costs came in at $22.4 million, or 1% of revenue, up from last quarter at $22 million. Our operating income for the quarter was $1.12 billion, landing at 48.6% of revenue. Other income and expense for the quarter was a favorable $98.9 million, and our effective tax rate was 21.2%. This resulted in net income for the quarter of $962.3 million, or 41.7% of revenue. Our diluted share number was 1.277 billion shares, resulting in a diluted earnings per share number for the quarter of $0.75, up 25% from the prior year. Now on to the balance sheet. Cash, cash equivalents and investments ended the quarter at $10.1 billion. Of the $1.5 billion repurchase program approved in May 2025, $1.4 billion remains available for repurchase in future quarters.
The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price and other factors. Now let's move next to operating cash performance for the third quarter. We generated approximately $1.3 billion of cash from operations in the period, reflecting a strong business model performance. DSOs came in at 59 days, down from 67 days in Q2, driven by billing linearity. Inventory turns were 1.4x, flat to last quarter. Inventory increased to $2.2 billion in the quarter, up from $2.1 billion in the prior period. Most of this increase is due to higher evaluation inventory, indicating uptake of our new products and new use cases. Our purchase commitments and inventory at the end of the quarter totaled $7 billion, up from $5.7 billion at the end of Q2. We will continue to have some variability in future quarters as a reflection of the combination of demand for our new products and the lead times from our key suppliers. Our total deferred revenue balance was $4.7 billion, up from $4.1 billion in Q2. As of Q3, the majority of the deferred revenue balance is product related. Our product deferred revenue increased approximately $625 million versus last quarter. We remain in a period of ramping our new products, winning new customers and expanding new use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days was 55 days, down from 65 days in Q2, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $30.1 million. In October 2024, we began our initial construction work to build expanded facilities in Santa Clara, and we expect to incur approximately $100 million in CapEx during fiscal year 2025 for this project. 
Q3 delivered a strong performance, underscoring our strategic progress. This continues to give us confidence for the remainder of FY '25 and through FY '26. But let's first start with our outlook for Q4. Revenue of $2.3 billion to $2.4 billion with continued growth expected across our cloud, AI, enterprise and providers markets. Gross margin in the range of 62% to 63%, inclusive of possible known tariff scenarios, operating margin of approximately 47% to 48%. Our effective tax rate is expected to be approximately 21.5% with approximately 1.281 billion diluted shares. Incorporating this Q4 outlook, our guidance for FY '25 is as follows: full year revenue growth of approximately 26% to 27% or $8.87 billion at the midpoint. We are on track to deliver between $750 million and $800 million for our campus segment and our AI center target of at least $1.5 billion. For gross margin, the outlook is approximately 64%, inclusive of possible known tariff scenarios. We anticipate operating margin of roughly 48%, demonstrating Arista's strong operational execution and scalable business model. Our outlook for FY '26 presented at our September Analyst Day remains relatively unchanged. Full year revenue growth of approximately 20%, now at a higher dollar amount of $10.65 billion, inclusive of both a campus target of $1.25 billion and an AI center target of $2.75 billion. For gross margin, a range is expected of approximately 62% to 64%, driven by customer mix. And for operating margin, an outlook of approximately 43% to 45%, allowing for investments in relation to achieving the strategic goals of Arista. In closing, the momentum continues. The breadth and depth of our customer interactions have never been stronger nor more exciting. In true Arista style, we remain pragmatic, yet are aware of the potential over the next few years. I wish to extend a warm welcome to Tyson. We are thrilled that you have joined our team, and congratulations to Ken on the well-deserved promotion. 
I will now turn the call back to Rudy for Q&A.

Rudolph Araujo, Head of Investor Advocacy

Thank you, Chantelle. We will now move into the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Christa, please take it away.

Operator

Your first question comes from the line of Tal Liani with Bank of America.

Tal Liani, Analyst

I want to ask about the sequential guidance, and it's a fundamental question, but I'll support it with numbers. Last year, the growth was quite stable in the last three quarters, with sequential growth rates between 6.5% and 7.5%. This year started with 10% growth in the second quarter, then dropped to 5%, and now it's only 1.6%. So there is a slowdown. The question is, what are the underlying drivers for this deceleration? What should we take from it for next year? What does it imply? Should we be worried about future growth?

Jayshree Ullal, CEO

Thanks, Tal. To address your last point, we have no concerns about our demand. Our shipments and revenue are aligned with our supply. When we can execute on shipments, revenue exceeds our guidance, as demonstrated in Q2. However, there are instances when we can't fulfill all orders despite the demand, which is why you're seeing those fluctuations. I wouldn't place too much significance on the quarterly changes. We have never felt more confident about the demand, which is evident in our ongoing commitment to 20% growth, even as the figure continues to rise from $8.75 billion to $8.87 billion. So, demand remains steady, with some variability in shipments.
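For readers following along, the quarter-over-quarter arithmetic behind this exchange can be sketched in a few lines. The figures below are approximations taken from the call (Q3 2025 revenue of roughly $2.31 billion and the $2.3 billion to $2.4 billion Q4 guide); the helper function and variable names are illustrative, not from Arista.

```python
# Sequential (quarter-over-quarter) growth, the metric the analyst is
# tracking. Inputs are approximations from the call, not exact figures.

def seq_growth(prev_quarter, curr_quarter):
    """Sequential growth rate between two consecutive quarters."""
    return curr_quarter / prev_quarter - 1.0

q3_revenue = 2.31e9         # reported Q3 2025 revenue (approx.)
q4_guide_midpoint = 2.35e9  # midpoint of the $2.3B-$2.4B Q4 guide

growth = seq_growth(q3_revenue, q4_guide_midpoint)
print(f"Implied sequential growth into Q4: {growth:.1%}")
```

With these rounded inputs the implied sequential growth lands just under 2%, in the neighborhood of the 1.6% the analyst cites; small differences come from rounding the revenue base.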

Operator

Your next question comes from the line of Aaron Rakers with Wells Fargo.

Aaron Rakers, Analyst

I'll stick to kind of the model as well. I'm curious, when I look at the gross margin guidance for this quarter, I think it was 62% to 63%. I guess if we were to assume that your services gross margin stays consistent at 81%, 82%, it would seem to imply that product gross margin falls below 60%. So I guess the question is, can you unpack the gross margin drivers in this quarter in terms of the guidance? How much is tariff related? Or are there other dynamics to consider? And does that change kind of the expectation as we look forward?

Jayshree Ullal, CEO

Okay. Sure, Aaron. First of all, I think you're overestimating our services and software margins, but be that as it may. We do have a mix where product margin is significantly below 60%, with our cloud and AI titans driving the volume, and higher, obviously, for the enterprise customers. The average of those, together with services, is yielding that number. So when the mix tilts heavily towards the cloud and AI, you can expect some pressure on our gross margins. But overall, I think we manage it very well; the manufacturing team, now led by Todd, does a fantastic job here. So again, the discipline and mix play well together, but I don't think it's any change from prior years, where when we have a heavy mix of AI and cloud, we feel it in our gross margins.

Chantelle Breithaupt, CFO

Yes. The only thing I would add is that I wouldn't read into the last part of your question that this implies a new model going into next year. This is a normal part of our mix conversation and well within the guide; you've seen us perform at these levels before.
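The mix arithmetic Aaron is backing out can be made concrete with a short sketch. All inputs below are hypothetical round numbers chosen to mirror the discussion (a 62.5% blended midpoint, a services mix in the high teens, a low-80s services margin that Jayshree suggests is on the high side); none are company-reported figures.

```python
# Back out the product gross margin implied by a blended gross margin
# guide, given a services revenue mix and an assumed services margin.
# All inputs are hypothetical illustrations, not reported figures.

def implied_product_margin(total_rev, blended_gm, services_rev, services_gm):
    """Product margin implied by a revenue-weighted blended margin."""
    product_rev = total_rev - services_rev
    product_profit = total_rev * blended_gm - services_rev * services_gm
    return product_profit / product_rev

total = 2.35e9           # assumed total quarterly revenue
services = 0.17 * total  # assumed ~17% services/software mix
blended = 0.625          # midpoint of the 62%-63% guide
services_gm = 0.80       # assumed services margin (likely an upper bound)

product_gm = implied_product_margin(total, blended, services, services_gm)
print(f"Implied product gross margin: {product_gm:.1%}")
```

Under these assumptions the implied product margin lands just below 60%, which is the effect Aaron describes; a lower services margin or a richer enterprise mix pushes it back up.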

Operator

Your next question comes from the line of Michael Ng with Goldman Sachs.

Michael Ng, Analyst

Thank you for the question. I was wondering if you could just talk about Arista's positioning as we move into more full rack solutions. Is this going to be more of a partnership model? How do you think about addressing this growing convergence between compute and networking?

Jayshree Ullal, CEO

Michael, that's an excellent question. As mentioned during Analyst Day, Andy Bechtolsheim is actively leading a significant number of these rack initiatives alongside the hardware team. We typically have about 5 to 7 projects with various accelerator options at any moment. While NVIDIA currently sets the standard, we anticipate that four or five additional accelerators will emerge over the next few years. Arista aims to integrate all elements, including cabling, co-packaging, power, cooling, and connections to various XPU cartridges, establishing itself as the preferred network platform in many scenarios. We are engaged in numerous early design efforts, which will likely materialize as Ethernet standards continue to strengthen. We now have a UEC specification and have discussed the Scale-Up Ethernet specification for ESUN, which allows us to incorporate different workflows under the same Ethernet transport headers and data link layers. We expect significant progress on this front by 2026, with full realization in 2027 as Scale-Up Ethernet gains importance. In terms of revenue recognition, our approach may differ from the traditional OEM model, potentially leaning more towards the blue box JDM model, where we collaborate on intellectual property, provide reference designs, and offer capabilities extending beyond just networking. However, many of these opportunities will also involve selling the existing network configurations within these racks.

Operator

Your next question comes from the line of Atif Malik with Citi.

Atif Malik, Analyst

Jayshree, in your prepared remarks, you mentioned large language model providers like OpenAI, Anthropic, and they have announced partnerships with your cloud titans. Can you share with us who is driving the decision-making on networking hardware on these announcements? And just your commentary on your share being stable within the circle of your cloud titan?

Jayshree Ullal, CEO

Yes. So to answer your last question first, I think our share is strong. We always, as you know, coexist with 2 other types of competitors. One is the bundling strategy with NVIDIA and the other is the white box. So we have not seen any significant changes in share up or down at the moment; it's stable. Having said that, it's also a massive market. And we think a rising tide lifts all boats, and this boat is feeling pretty good. Now specific to who makes the decision, it's really a combination. We intimately work with the software and LLM players because they certainly guide the design, but we also work with the cloud titans, and it's a shared responsibility between both of them. The responsibility for procuring the large data centers, the power, the location and the cooling clearly sits with our cloud titans, while the specifications for exactly what's required on the scale-up and scale-out network come from partners like OpenAI and Anthropic. So it's really a joint decision.

Operator

Your next question comes from the line of Samik Chatterjee with JPMorgan.

Samik Chatterjee, Analyst

Jayshree, reflecting on your earlier answer to another question, you mentioned some variability in shipments at the customer level which may be causing some fluctuations from quarter to quarter. I'm curious about the Tier 1 customers you're working with and their progress towards achieving their cluster sizes of 100,000 and more. Has there been any change to those plans that might be contributing to the slowdown in your fourth quarter guidance? What is causing the variability in shipments? Is it related to supply issues at all? Any insights on that would be appreciated.

Jayshree Ullal, CEO

Yes. Yes. Samik, I would say it's largely supply driven. As you know, all 4 are doing well on the 100,000 mark. 3 have already crossed it. The fourth one, I don't know if they'll cross it by end of the year or next year, but they're getting there. So we're feeling pretty good about our large GPU deployments. At the same time, the variability I was stating is that demand is greater than our ability to ship. Lead times on many of our components, including standard memory, chips and merchant silicon, are nothing like 2022, but they are still very long, ranging from 38 to 52 weeks. So we are coping with that. And you can see Chantelle is leaning in and making greater and greater purchase commitments; we wouldn't do that without demand.

Operator

Your next question comes from the line of Amit Daryanani with Evercore ISI.

Amit Daryanani, Analyst

I think people are trying to understand the growth rate you've experienced over the past three to four quarters, which has been in the high 20% range. You seem to be suggesting that this will slow down, not only in December but also into 2026, with growth dropping from the high 20% to the low 20%. Could you explain what is causing this slowdown? It seems that factors like your purchase commitments and deferred product growth might suggest that growth could actually accelerate rather than decelerate in the coming quarters. I'd appreciate any insights on what's contributing to this projected deceleration.

Jayshree Ullal, CEO

Okay, Amit, but I don't like the word deceleration. We're talking about big, big numbers here, guys. And I'm committing to 20% and above growth; don't call it deceleration, call it variability across quarters, and demand is great. I just don't know whether it will land in '26 or '27.

Chantelle Breithaupt, CFO

Yes. The only other thing I'd add generally on this topic is that when you think about the large AI use cases and their acceptance clauses, it really comes down to those coming together and the timing of that. That doesn't follow a seasonality model. That's also for...

Jayshree Ullal, CEO

Good point. It lands when it lands. That is a very good point that Chantelle is making: in the cloud, we started having predictability of how they landed and how they got constructed. In AI, it's taking longer.

Operator

Your next question comes from the line of David Vogt with UBS.

David Vogt, Analyst

I'm going to ask this question knowing it might provoke a response from Jayshree. Considering the outlook for 2026 that you recently raised, which we didn't anticipate would happen this early, I want to discuss your efforts related to the AI-centric opportunities, campus, and Velo. It seems there may be limited growth potential in your core business beyond AI and campus. Could you elaborate on what trends you're observing in that specific market and how we should view its development leading into 2026?

Chantelle Breithaupt, CFO

Okay. I'll share my thoughts while Jayshree figures out the right tone for her response. I believe it's important to consider how we began this journey in early 2025, or even late 2024. Our approach involves not assuming that everything will go perfectly to achieve our targets; we prefer to keep our options open. We have set specific goals for ourselves in relation to AI and our campus initiatives. This doesn't mean we aren't focused on other areas, but I don't think assuming full success across the board is wise, as it could leave us vulnerable. We will continue to provide updates as we assess the situation, right, Jayshree?

Jayshree Ullal, CEO

Yes, absolutely. And yelling isn't the tone I'd like to attribute to it; excitement, maybe, or enthusiasm is the one I'd like you to think about. Clearly, AI and campus are going to grow and go great guns for us, as they should, because they are 2 very large TAMs, whether it's Ken and Tyson driving the AI and cloud TAM or Todd Nightingale driving the campus, and these 2 are going to grow substantially in double digits, right? So to your point, it doesn't leave the core business with a lot of opportunity. But that's not to say it won't grow; it may be flattish, it may grow. It's to say that our customers are putting more attention there and that the existing business, which is already at very large numbers, will have lesser growth. We don't yet know if it's flattish or single digit or whether more will go to AI. We frankly can't predict the mix this early in the game for 2026, but we think we're in for a great ride in 2026.

Operator

Your next question comes from the line of Ben Reitzes with Melius Research.

Benjamin Reitzes, Analyst

I appreciate you clarifying some of those earlier questions, more on the long-term side, Jayshree. I think there was an earlier question around OpenAI and Anthropic and just some of these larger builds with the private companies that obviously are becoming hyperscalers. Maybe without naming names or whatnot, I just wanted to hear about your confidence on being able to participate in some of these builds that are affiliated with some of your cloud titans. And do you think you'll get a lot of this growth? Is there anything that's changing or evolving that gives you more or less confidence as we end the year here in '25?

Jayshree Ullal, CEO

Yes, that's a really good question, Ben, and thank you for that thoughtful question. Until now, the majority of how we've measured our AI success through our cloud and AI titans has been the number of GPUs, how much they are installing and whether we can verify that the Ethernet network works. The majority of it to date has been scale-out. First, I want to reflect that there are 3 big use cases sitting in front of us: scale-up, scale-out and scale-across. Arista's participation to date has largely been in scale-out. So we've got 2 major use cases in addition, augmenting this, and that's what makes the Etherlink portfolio that Ken described so eloquently so beautiful. Now how are these being built? Clearly, they're being driven by large language models, tokens, transformers, inference use cases, you name it. So the influence is clearly coming from these players you named. But the way they are driving the infrastructure, I can't keep track of the gigawatts myself; it's 10 gigawatts here, 10 there, 30 there. It's adding up to a lot. But I can just tell you, no matter what it is, Arista has been looked at as a very important and relevant participant, especially right now in scale-out and scale-across. We will participate in scale-up. It will take a little longer. Today, it is largely a set of proprietary technologies like NVLink or PCIe, and I think that will happen more in '27. So that is to say, as we now get confident about exceeding our $10 billion goal next year, we're looking at our next goal of $15 billion in the next few years. And I think AI will be a very large part of it, and so will the companies you mentioned.

Operator, Operator

Your next question comes from the line of Tim Long with Barclays.

Timothy Long, Analyst

Appreciate the question here. Jayshree, maybe if we could just dig in a little bit. You mentioned blue box a few times here, kind of in that middle portion of the good, better, best. Two-parter. One, could you talk a little bit about the economic model, margins or anything like that, that we should expect as blue box maybe becomes a bigger part of the mix over time? And second, can you talk a little bit about where we would expect to see these types of deployments? I'm assuming something like scale-up might be, as you described, a little bit simpler and not need the full EOS. But from either a customer or a use case standpoint, where would you expect Arista to be most successful with blue box deployments?

Jayshree Ullal, CEO

Yes. Thank you, Tim. Those are great questions. As I mentioned at the Analyst Day, we are already seeing rapid success. For example, there was a situation where a customer was struggling to get their white box to function effectively on critical AI workloads. We are witnessing a neo-cloud implementation using non-NVIDIA GPUs, where they aim to deploy Arista's outstanding hardware. Initially, they intended to use an open NOS, but they are now moving toward a hybrid approach that incorporates both an open NOS and Ken's EOS, which is proving very effective for this specific use case. They are starting with a blue box, but it is quickly evolving into a hybrid model that includes both blue and branded EOS boxes. Economically, this situation is largely comparable to what we see with our cloud and AI leaders, although there will be some situations, as you correctly pointed out, that haven't emerged yet. However, as we scale up significantly, the volume of these deployments will increase, which will place more pressure on margins. We will strategically manage a combination of scale-up, scale-out, and scale-across to maintain overall margins while ensuring we capture our fair share. I hope this addresses both of your questions.

Operator, Operator

Our next question comes from the line of Meta Marshall with Morgan Stanley.

Meta Marshall, Analyst

Maybe a question for you, Jayshree. I know you aren't breaking out the front end and back end anymore, but as more inference use cases are being developed, what are you observing regarding the upgrades to the front-end network compared to your expectations from a year ago?

Jayshree Ullal, CEO

Thank you. I think about a year or maybe even two years ago, we were on the outside looking in at the back-end networks, which were mainly built with InfiniBand. This year, we've noticed a significant shift, as we are increasingly being invited to build their 800-gig networks, whereas last year it was mostly 400 gig. I anticipate that next year will feature a mix of 800 gig and 1.6T on the back end. The back end is applying pressure on the front end; the back-end specifications that connect to GPUs are more demanding to define than the front end. We are aware of specific instances among our cloud partners where this pressure is not only affecting AI developments but also pushing them to upgrade their cloud infrastructure. While this is occurring on a smaller scale, the larger trend is the back end and front end merging and converging, making it increasingly difficult to differentiate between them; it's about equally balanced.

Kenneth Duda, President and Chief Technology Officer

I'd just like to point out that we're seeing that Arista, I think, is the only successful vendor outside of China selling both front end and back end. And this is where our engineering alignment is so important because we can offer the customer a consistent solution across their entire infrastructure. I think this is a unique differentiator that will really help us succeed as these networks become more and more mainstream.

Operator, Operator

Your next question comes from the line of Karl Ackerman with BNP Paribas.

Karl Ackerman, Analyst

How should we think about your market opportunity between disaggregated scheduled fabrics versus nonscheduled fabrics, which appear to be used in the largest AI accelerator clusters at one of your largest customers? I mean you, in fact, happen to be the only networking switch vendor who offers both networking topologies. And I'm curious if other data center operators seek to adopt your DSF architecture given the congestion-free advantages it offers.

Jayshree Ullal, CEO

Well, I think you hit on it, and Ken hit on it, too, so I'd like him to answer part of the question. But look, we're not religious. We jointly developed the DSF architecture with one of our leading cloud titans, Meta. And we've been selling the nonscheduled fabric for a very long time. So we've never been religious about this. And both are doing very, very well at our cloud titans and specifically the one we co-developed with.

Kenneth Duda, President and Chief Technology Officer

That's exactly right. We've had both architectures in massive production scale for, I think, 15 years now. And we'll continue to offer this range of choice to our customers, offering them their choice between the highest value fabric with deep buffers, no hotspots, congestion-free, loss-free, or an unscheduled fabric, which is maybe lower cost, but also can be more difficult to operate. And they both run the same software. So it gives the customer a range of options and a consistent operating model.

Operator, Operator

Your next question comes from the line of Simon Leopold with Raymond James.

Simon Leopold, Analyst

I wanted to come back to the topic around the blue box, which you've talked about quite a bit at the analyst meeting. So I appreciate it's not new. But what I don't quite think I understand is how it may be evolving or changing in that it sounds like there's a broader base of customers that may be employing it and that this is a factor that's in your 2026 margin guidance. Could you elaborate on what you're assuming blue box trends are in 2026?

Jayshree Ullal, CEO

I believe the blue box trends in 2026 will mainly persist with a small group of customers who possess the operational expertise to handle it. This primarily includes our specialty cloud providers or large players. It's unlikely to become widespread, so we are talking about a small number of customers, perhaps 10 or 20, but certainly not hundreds. These customers must demonstrate operational excellence to utilize our NetDI and hardware effectively, building upon it with their open NOS or similar solutions. In this context, it's worth noting that the absence of the EOS layer may lead us to accept lower margins, which we've accounted for in our 2026 guidance. We are optimistic that the combination of the blue box and the EOS branded box will allow us to thrive as a profitable and fast-growing business.

Operator, Operator

Your next question comes from the line of James Fish with Piper Sandler.

James Fish, Analyst

Just on that topic of blue box, shockingly. Maybe not just 2026, but what do you see in terms of the mix and the adoption curve, as to what percentage of the business it could actually represent over not just next year, but 3, 5 years from now? And you guys mentioned the convergence of front end and back end. Does that take away from your advantage of where you sit today, though, if that line starts to blur a little bit more and allows competition to enter?

Jayshree Ullal, CEO

Yes, please go ahead.

Kenneth Duda, President and Chief Technology Officer

In terms of the front end and back-end converging, this is purely advantageous to us because the front end requires a massive number of features. It's incredibly mission-critical and supports a whole variety of applications, not just the straightforward if demanding communication patterns of the AI back end. So we see that our ability to tackle both of them effectively is a significant source of strength and a real differentiator and something that's not easy for competitors to replicate. If you look at NVIDIA, for example, the sales volume is small in the front end and Cisco is small in the back end. And so I think we'll see that kind of convergence being beneficial to us.

Jayshree Ullal, CEO

Yes. Thank you, Ken. And on the blue box, I'm not sure we model 3 to 5 years out. But if I had to venture what I think the evolution of the blue box will be, I think it will be more significant in the scale-up use cases, where there's a higher dependency on the strength of our hardware and our NetDI capability and a lower requirement for software. So I don't know yet what that will be. I think it will be high in units, low in dollars, that kind of thing. So the mix may still be small, but it will actually be incremental since that's not a use case we serve today.

Operator, Operator

Your next question comes from the line of Antoine Chkaiban with New Street Research.

Antoine Chkaiban, Analyst

I'd like to ask about the UEC. So can you maybe tell us about the progress that the consortium is making, whether the different voices are aligned and what milestones investors should be looking out for going forward?

Rudolph Araujo, Head of Investor Advocacy

Antoine, can you repeat your question? You weren't coming in clear.

Jayshree Ullal, CEO

Yes. Yes. Yes, Antoine, yes. So after 2 years of lots of hard work led by Hugh Holbrook and now Tom Emmons, UEC did publish its first specification, I believe it was 1.0, in June of 2025. Arista's Ethernet portfolio is entirely UEC capable and compatible, and we will continue to add more and more compliance: packet trimming, packet spraying, dynamic load balancing. These are all important features that our switches support. And we will augment that with the ESUN specification. As I described, we've been an early pioneer; 4 vendors started this together, including Broadcom, Arista and a couple of our titan customers. I'm pretty sure it will be 20, 25, 30 over time. And having a standards-based OCP ESUN agreement will allow us to expand UEC into the scale-up configuration as well, leveraging UEC and IEEE specs. So this modular framework for Ethernet for scale-up and scale-out is a thing of beauty, and Arista is in the middle of it.

Operator, Operator

Your next question comes from the line of George Notter with Wolfe Research.

George Notter, Analyst

I think in the monologue, you mentioned neo clouds as an area where you're getting more momentum. I think you guys actually said that at the Analyst Day as well. I'm just curious what you're seeing with that customer set. I guess, from my perspective, I've historically thought of that customer as being more focused on the bundle, which isn't necessarily your game, but it sounds like you're maybe talking a bit more positively. I'm just wondering what you're seeing in that space.

Jayshree Ullal, CEO

Yes, I think you're correct, George. Initially, it was focused on the bundle. I can recall two instances where we weren't invited to the party because the pitch was, in order to use my GPU, you need to get the network from me, so we weren't there. Leaving those two aside, I believe they may become more open-minded over time. Many more neo clouds are emerging globally that are seeking Arista's assistance, not just with the product, but also with network design and software capabilities, as they lack the staff and expertise to manage everything on their own. They prefer to rely on us to meet their network demands. We're also reaching out to numerous neo cloud companies and smaller enterprises, although they tend to have smaller GPU clusters. However, starting with clusters of a few hundred to a few thousand GPUs gives us hope for growth, because they all seem to possess valuable colocation space and power, which is a significant asset going forward.

Operator, Operator

Your next question comes from the line of Sebastien Naji with William Blair.

Sebastien Cyrus Naji, Analyst

I'd like to understand a little bit more about the investments you're making in the enterprise go-to-market. It looks like sales and marketing expenses stepped up in the quarter. Where do you think you make the most progress as we go into 2026? Is it geographic expansion? Is it investing more into the channel? Is it just trying to cross-sell more into the existing enterprise customer base? I'd love to get your thoughts there.

Jayshree Ullal, CEO

Yes. You're hitting on a really important spot because we really have 2 sides to our coin. On one side, the AI and cloud makes us dizzy. But we're just as excited and dizzy about the huge $30 billion TAM, and that's why we're so happy to have Todd here. We were just at an international Innovate event in London that Ken, Todd and I all got a chance to attend. And the excitement and enthusiasm for a relevant, high-quality network vendor has never been higher. So indeed, we want to invest there. Todd, do you want to say a few words?

Todd Nightingale, President and Chief Operating Officer

Yes. We're excited about the growth here across the board in the enterprise space, but there are 3 real dimensions we're staying really focused on. One is expansion into the campus. The VeloCloud acquisition completes our portfolio there; we're getting great traction and pushing extremely hard around the world. There's a ton of white space accounts for us that we haven't gotten yet. The second, which you mentioned, is geographic expansion. We saw good numbers in Asia this quarter. We've got a lot of opportunity, I think, to accelerate there, and we like the progress. But the last is just reaching new logos, and we're investing in our channel to really deliver that and bring us more opportunities, more at bats to find folks and introduce them to Arista for the first time.

Jayshree Ullal, CEO

That is a cricket analogy. Yes, there you go. So Sebastien, we're feeling really good, and it clearly is the other half of our numbers.

Operator, Operator

Your next question comes from the line of Ryan Koontz with Needham.

John Jeffrey Hopson, Analyst

This is Jeff Hopson on for Ryan Koontz. We've seen a lot of deals with the hyperscalers and the AI model companies for new data center build-outs, probably at a level we haven't seen since the cloud build-out. So I'm just curious, is there a way to think about Arista's opportunity with new network builds versus refreshing or upgrading existing networks?

Jayshree Ullal, CEO

Yes, that's exactly the way to think about it, because in the past, with the cloud, we rarely got to talk about gigawatts and beyond. Most of them were multi-megawatt. So these are newly constructed AI build-outs as opposed to the traditional CPU- or storage-driven cloud build-outs. Of course, they will have refreshes too. But frankly, they're not getting the attention. All the attention is going to the new build-outs for AI. So that's the right way to look at it.

Rudolph Araujo, Head of Investor Advocacy

So we have time for one last question.

Operator, Operator

That comes from the line of Ben Bollin with Cleveland Research.

Benjamin Bollin, Analyst

Jayshree, you talked a little bit about some of the tightening lead time conditions out there. Curious what you're seeing from these cloud customers around engineering and delivery lead times, and how that has evolved. And in particular, just changes you're seeing in your confidence in delivering on their needs in the next 12, 18 months. That's it.

Jayshree Ullal, CEO

Ben, as you know, forecasting is a very delicate science. I hardly get it right. So I do rely, as do Tyson and Ken and the whole team, on early previews and early forecasts from our large customers, without which we couldn't do proper planning. Even before they put in their purchase orders, we've got to have a good idea of what they want. And you're seeing that reflected in Chantelle's purchase commitments. So when it comes to our large and intimate customer engagements, they understand; they got burned by the 2022 supply crisis and are absolutely planning with us. Some of that is true in Todd's areas too, with the large enterprises, because for a large data center, you have to plan ahead. It's not like they miraculously show up. They need power, they need space, and those are 1- or 2-year lead times. Where we have to be more vigilant, and this is something Todd, my campus team, and the entire manufacturing team are working on, is the campus. As a campus business, we had one of our best quarters; congratulations, Todd and Kumar, on the campus this quarter. That planning cycle is a lot shorter. It tends to be days and weeks, not months or half a year or longer, right? So we're working again on this dichotomy in our business, planning as much as we can for the AI, but also planning ahead as much as we can for the enterprise and campus. Would you like to add something, Todd?

Todd Nightingale, President and Chief Operating Officer

Yes. I'll just add, we are getting aggressive, as Jayshree said, about improving our campus lead times to really accelerate that business and help drive that enterprise growth we feel pretty passionately about. And the only other thing is the investment here: the amount of dollars being put into purchasing and making those purchase commitments is key, and we're looking for improvement there.

Rudolph Araujo, Head of Investor Advocacy

Thanks, Jayshree and Todd. That concludes Arista Networks Third Quarter 2025 Earnings Call. We have posted a presentation that provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today and for your interest in Arista.

Operator, Operator

Thank you for joining. Ladies and gentlemen, this concludes today's call. You may now disconnect.