Earnings Call Transcript
Arista Networks, Inc. (ANET) — Q2 2024
Operator
Welcome to the Second Quarter 2024 Arista Networks Financial Results Earnings Conference Call. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section at the Arista website following this call. Ms. Liz Stine, Arista's Director of Investor Relations, you may begin.
Liz Stine, Director of Investor Relations
Thank you, operator. Good afternoon, everyone and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks Chairperson and Chief Executive Officer; and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal second quarter ending June 30, 2024. If you would like a copy of this release, you can access it online at our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the third quarter of the 2024 fiscal year, longer-term financial outlooks for 2024 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization and the benefits of acquisitions which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K and which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on the call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release. With that, I will turn the call over to Jayshree.
Jayshree Ullal, CEO
Thank you, Liz, and thank you, everyone, for joining us this afternoon for our second quarter 2024 earnings call. As a pure-play networking innovator with a greater than $70 billion total addressable market ahead of us, we are pleased with our superior execution this quarter. We delivered revenues of $1.69 billion for the quarter, with non-GAAP earnings per share of $2.10. Services and software support renewals contributed strongly at approximately 17.6% of revenue. Our non-GAAP gross margin of 65.4% was influenced by outstanding manufacturing discipline realizing cost reductions. International contribution for the quarter registered at 19%, with the Americas strong at 81%. As we celebrated our tenth anniversary at the New York Stock Exchange with our near and dear investors and customers, we are now supporting over 10,000 customers with a cumulative 100 million ports deployed worldwide. In June 2024, we launched Arista's Etherlink AI platforms that are Ultra Ethernet Consortium compatible, validating the migration from InfiniBand to Ethernet. This is a rich portfolio of 800-gig products, not just a point product but, in fact, a complete portfolio that is both NIC and GPU agnostic. The AI portfolio consists of the 7060X6 AI switch that supports 64 800-gig or 128 400-gig Ethernet ports with a capacity of 51.2 terabits per second. The 7800R4 AI Spine is our fourth generation of Arista's flagship 7800, offering 100% non-blocking throughput with a proven virtual output queuing architecture. The 7800R4 supports up to 460 terabits in a single chassis, corresponding to 576 800-gigabit Ethernet ports or 1,152 400-gig ports. The 7700R4 AI Distributed Etherlink Switch is a unique product offering with a massively parallel distributed scheduling and congestion-free traffic spraying fabric. The 7700 represents the first in a new series of ultra-scalable intelligent distributed systems that can deliver the highest consistent throughput for very large AI clusters.
Let's just say once again, Arista is making Ethernet great. First, we began this journey with low latency in 2009. Then there was cloud and routing in the 2015 era, followed by WAN and campus in the 2020 era, and now AI in our fifth generation in 2025. Our Etherlink portfolio is in the midst of trials and can support up to 100,000 XPUs in a 2-tier design built on our proven and differentiated extensible OS. We are quite pleased with our progress across Cloud, AI, Campus and Enterprise customers. I would like to invite Ashwin Kohli, our newly appointed Chief Customer Officer, to describe our diverse set of customer wins in 2024. Ashwin, over to you.
Ashwin Kohli, Chief Customer Officer
Many thanks, Jayshree, and thank you for inviting me to my first earnings call. Let me walk everybody through 4 global customer wins. The first example is an AI enterprise win with a large Tier 2 cloud provider which has been heavily investing in GPUs to increase their revenue and penetrate new markets. Their senior leadership wanted to be less reliant on traditional core services and work with Arista on new, reliable and scalable Ethernet fabrics. Their environment consisted of new NVIDIA H100s. However, it was being connected through their legacy networking vendor, which resulted in significant performance and scale issues with their AI applications. The goal of our customer engagement was to refresh the front-end network to alleviate these issues. Our technical partnership resulted in deploying a 2-step migration path using 400-gig 7280s, eventually migrating them to an 800-gig AI Etherlink in the future. The second win highlights our adjacencies in both campus and routing. This customer is a large data center customer which has deployed us for almost a decade. The team was able to leverage that success to demonstrate our value for their global campus network, which spans hundreds of thousands of square feet globally. The customer had considerable dissatisfaction with their current vendor, which led to a last-minute request to create a design for their new corporate headquarters. Given only a 3-month window, Arista leveraged the existing data center design and adapted it to the campus topology with a digital twin of the design in minimal time. CloudVision was used for visibility and lifecycle management. The same customer was also struggling with extreme complexity in their routing environment, with multiple parallel backbones and numerous technical complexities.
Arista simplified their routing network by removing legacy routers, increasing bandwidth and moving to a simple fixed form factor platform router. The core spine leverages the same EOS software, streamlining their certification procedures and instilling confidence in the stability of the products. Once again, CloudVision came to the rescue. The third example is an international win with a large automotive manufacturer that, due to its size and scale, previously had more than 3 different vendors in the data center, which created a very high level of complexity from both a technical and an operational perspective. The customer's key priority was to achieve a higher level of consistency across their infrastructure, which is now being delivered via a single EOS binary image and CloudVision solution from Arista. Their next top priority was automation, with consistent end-to-end provisioning and visibility, which can be delivered by our CloudVision platform. This simplification has led the customer to adopt Arista beyond the data center and extend the Arista solution into the routing component of the infrastructure, which included our 7500R3 spine platforms. This once again shows a very clear example of the same Arista EOS and one CloudVision solution delivering multiple use cases. And Jayshree, this last win demonstrates our strength in the service provider routing space. We have been at the forefront of providing innovative solutions for service provider customers for many years. As we all know, we are in the midst of an optical and packet integration. As a result, our routers support industry-leading dense 400-gig ZR+ coherent pluggable optics. In this service provider customer example, we provided a full turnkey solution, including our popular 7280R3 routers and our newly announced AWE 7250 WAN router as a BGP route reflector, along with CloudVision and professional services.
We showcased our strength in supporting a wide variety of these pluggable coherent optics, along with our SR and EVPN solutions, which allowed this middle-mile service provider customer to build out a 400-gig statewide backbone at cloud-scale economics. Thanks, Jayshree, and back over to you.
Jayshree Ullal, CEO
Well, thank you, Ashwin, and congratulations. Hot off the press is our new and highest Net Promoter Score of 87, which translates to 95%. Hats off to your team for achieving that. It's so exciting to see the momentum of our enterprise sector. As a matter of fact, as we speak, we are powering the broadcasters of the Olympics, symbolic of our commitment to the media and entertainment vertical. And so it's fair to say that so far in 2024, it's proving to be better than we expected because of our position in the marketplace and because of our best-of-breed platform for mission-critical networking. I am reminded of the 1980s when Sun was famous for declaring the network is the computer. Well, 40 years later, we're seeing the same cycle come true again with the collective nature of AI training models mandating a lossless, highly available network to seamlessly connect every AI accelerator in the cluster to one another for peak job completion times. Our AI networks also connect trained models to end users and other multi-tenant systems in the front-end data center, such as storage, enabling the AI system to become more than the sum of its parts. We believe data centers are evolving to holistic AI centers, where the network is the epicenter of AI management for acceleration of applications, compute, storage and the wide area network. AI centers need a foundational data architecture to deal with the multimodal AI data sets that run on our differentiated EOS network data systems. Arista showcased a technology demonstration of our EOS-based AI agent that can run directly on the NIC itself or, alternatively, inside the host. By connecting into adjacent Arista switches to continuously keep up with the current state, send telemetry or receive configuration updates, we have demonstrated the network working holistically with network interface cards such as NVIDIA BlueField, and we expect to add more NICs in the future.
Well, I think the Arista purpose and vision is clearly driving our customer traction. Our networking platforms are becoming the epicenter of all digital transactions, be they campus centers, data centers, WAN centers or AI centers. And with that, I'd like to turn it over to Chantelle, our Chief Financial Officer, to review the financial specifics and tell us more. Over to you, Chantelle.
Chantelle Breithaupt, CFO
Thanks, Jayshree. It really was great to see everyone at the New York Stock Exchange IPO celebration event. Now turning to the numbers. This analysis of our Q2 results and our guidance for Q3 is based on non-GAAP and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges and other nonrecurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. Total revenues in Q2 were $1.69 billion, up 15.9% year-over-year, significantly above the upper end of our guidance of $1.62 billion to $1.65 billion. Growth was delivered across all 3 sectors of Cloud, Enterprise and Providers. Services and Subscription Software contributed approximately 17.6% of revenue in the quarter, up from 16.9% in Q1. International revenues for the quarter came in at $316 million or 18.7% of total revenue, down from 20.1% in the prior quarter. This quarter-over-quarter decrease was driven by a relatively weaker performance in our APJ region. The overall gross margin in Q2 was 65.4%, above our guidance of 64%, up from 64.2% last quarter and up from 61.3% in the prior year quarter. The year-over-year gross margin improvement was primarily driven by a reduction in inventory-related reserves. Operating expenses for the quarter were $319.8 million or 18.9% of revenue, up from last quarter at $265 million. R&D spending came in at $216.7 million or 12.8% of revenue, up from $164.6 million in the last quarter. This primarily reflected increased headcount and higher new product introduction costs in the period. Sales and marketing expense was $85.1 million or 5% of revenue compared to $83.7 million last quarter, with a double-digit percentage increase of headcount in the quarter versus the prior year. Our G&A costs came in at $18 million or 1.1% of revenue, up from last quarter at $16.7 million. Our operating income for the quarter was $785.6 million or 46.5% of revenue. 
Other income and expense for the quarter was a favorable $70.9 million and our effective tax rate was 21.5%. This resulted in net income for the quarter of $672.6 million or 39.8% of revenue. Our diluted share number was 319.9 million shares, resulting in a diluted earnings per share number for the quarter of $2.10, up 32.9% from the prior year. Turning to the balance sheet. Cash, cash equivalents and investments ended the quarter at $6.3 billion. In the quarter, we repurchased $172 million of our common stock at an average price of $282.20 per share. Of the $172 million, $82 million was repurchased under our prior $1 billion authorization which is now complete and the remaining $90 million was purchased under the new program of $1.2 billion approved in May 2024. The actual timing and amount of future repurchases will depend upon market and business conditions, stock price and other factors. Now turning to operating cash performance for the second quarter. We generated $989 million of cash from operations in the period, reflecting strong earnings performance with a favorable contribution from working capital. Days Sales Outstanding came in at 66 days, up from 62 days in Q1, impacted by large service renewals at the end of the quarter. Inventory turns were 1.1 times, up from 1 turn last quarter. Inventory decreased to $1.9 billion in the quarter, down from $2 billion in the prior period, reflecting a reduction in our raw materials inventory. Our purchase commitments and inventory at the end of the quarter totaled $4 billion, up from $3.5 billion at the end of Q1. We expect this number to stabilize as supplier lead times improve but we'll continue to have some variability in future quarters as a reflection of demand for our new product introductions. Our total deferred revenue balance was $2.1 billion, up from $1.7 billion in Q1. 
The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis. Our product deferred revenue increased approximately $253 million versus last quarter. As a reminder, we expect 2024 to be a year of new product introductions, new customers and expanded use cases. These trends may result in increased customer trials and contracts with customer-specific acceptance clauses and increase the variability and magnitude of our product deferred revenue balances. Accounts payable days were 46 days, up from 36 days in Q1, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $3.2 million. As we enter the second half of fiscal year 2024, we are encouraged by the momentum that we see in the market. Our existing innovative product portfolio, along with our new product introductions, is well suited for our Cloud, AI, Enterprise and Providers customers. We will continue to invest in our R&D and go-to-market through both people and processes. With all of this as a backdrop for fiscal year '24, our revenue growth guidance is now at least 14%. Gross margin outlook remains at 62% to 64% and operating margin is now raised to approximately 44%. Our guidance for the third quarter, based on non-GAAP results and excluding any non-cash stock-based compensation impacts and other nonrecurring items, is as follows: revenues of approximately $1.72 billion to $1.75 billion, gross margin of approximately 63% to 64% and operating margin at approximately 44%. Our effective tax rate is expected to be approximately 21.5%, with diluted shares of approximately 321 million shares.
Liz Stine, Director of Investor Relations
Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Operator, take it away.
Operator
Our first question comes from Michael Ng with Goldman Sachs.
Michael Ng, Analyst
As we head into next-generation GPUs with Blackwell and the NVL36/72, there's been some discussion about whether these systems may be less modular in some of their components, particularly for back-end networking. I was just wondering if you could share your views and provide some clarity on the vendor modularity of Blackwell, particularly as it relates to networking, and how that might affect Arista's positioning over the next couple of years, if at all?
Jayshree Ullal, CEO
Sure. Michael, I think as the GPUs get faster and faster, obviously, the dependency on the network for higher throughput is clearly related. And therefore, our timely introduction of these 800-gig products will be required, especially for Blackwell. In terms of its connection and modularity with NVLink and the 72-port configuration, there's always been a market for what I call scale-up, where you're connecting the GPUs internally in a server, and the density of those GPUs connecting in the past has been more PCIe, and now NVLink, and there's a new consortium called UALink that's going to specify that. I believe, by the way, that eventually even there, Ethernet will win. And so that density depends more on the AI accelerator and how they choose to connect. As I've often said, it's more a bus technology. So where Arista plays strongly, both on the front end and back end, is on the scale-out, not on the scale-up. So independent of the modularity, whether it's a rack-based design, a chassis or multiple RU, the ports have to come out Ethernet, and those Ethernet ports will connect into scale-out switches from Arista.
Operator
Our next question comes from the line of Aaron Rakers with Wells Fargo.
Aaron Rakers, Analyst
I guess the metric that stands out to me the most is the deferred revenue balance, up, it looks like, 95% in total year-on-year. And it looks like you're now at about $520 million of product deferred. On the product deferred line, can you help us appreciate how we should think about that number? Is that related to these AI opportunities? And just the cadence of how we should expect the revenue recognition from that, again, being almost 60% above what was previously the peak level of product deferred.
Jayshree Ullal, CEO
Yes. No, good question, Aaron. Let me start generically and of course, I'll hand to my CFO, Chantelle. Product deferred sort of ebbs and flows. It goes in, it comes out. And it's particularly high when we have a lot of new product and new use cases. But it's not extraordinary to see us running a higher product deferred in 1 quarter or in 1 year and then dipping down. So the long term is consistent, the short term can ebb and flow. Do you want to say a few words on that?
Chantelle Breithaupt, CFO
Yes, thank you, Jayshree. The only thing I would add to that is that the deferred balance is always a mix of customers and use cases. So I wouldn't rotate on any 1 particular intersection of those. It really is a mix of those combined.
Operator
Our next question comes from the line of Meta Marshall with Morgan Stanley.
Meta Marshall, Analyst
Great, Jayshree, last quarter you mentioned four major AI trials that your team was involved in. Ashwin went over the four wins you had during the quarter. I'm trying to understand if the Tier 2 win was part of those AI trials, or if there's any update on the status of those four trials or the current number of AI trials.
Jayshree Ullal, CEO
Yes. No, I'm going to speak to it and then I want to turn it over to Ashwin since he's here with us. First of all, all 4 trials are largely in what I call Cloud and AI Titans. A couple of them could be classified as specialty providers as well, depending on how they end up. But those 4 are going very well. They started out as largely trials. They're now moving into pilots this year, most of them. And with any luck, next year maybe we won't be saying 4 out of 5 and we could say 5 out of 5. That's my hope, anyway. But in addition to that, we have tens of smaller customers who are starting to do AI pilots. And Ashwin, you've been smack in the middle of a lot of those, maybe you want to speak to that a little bit?
Ashwin Kohli, Chief Customer Officer
Yes. Absolutely, Jayshree. Meta, so I just wanted to clarify the example that I shared with you was more around a Tier 2 cloud provider. And if I take a step back, the types of conversations my team is having with customers is either around general-purpose enterprise customers or it's around Tier 2 cloud providers, which are different from the ones Jayshree's referring to.
Jayshree Ullal, CEO
And they tend to be early adopters.
Ashwin Kohli, Chief Customer Officer
Absolutely.
Jayshree Ullal, CEO
They're about to build an AI cluster. It's a reasonably small size, not in the thousands or tens of thousands of GPUs. But you've got to start somewhere. So they started with about a few hundred GPUs, would you say?
Ashwin Kohli, Chief Customer Officer
Absolutely, yes.
Operator
Our next question comes from the line of Atif Malik with Citi.
Atif Malik, Analyst
Jayshree, you mentioned a $70 billion TAM number in your prepared remarks. Can you help us understand what is in that TAM? And how does that relate to the $750 million AI networking revenue number you have provided for next year?
Jayshree Ullal, CEO
Yes, I frequently receive that question. Firstly, the total addressable market is significantly larger than the $750 million we've secured so far, especially considering we're in the early stages. This includes our data center market and our AI market, which we assess with a narrower focus on how much of InfiniBand will transition to Ethernet on the back end. We're not including the AI market that's already part of our data center. Additionally, there's a large campus market valued at over $10 billion, along with the wide area and routing market. These four areas represent what I refer to as the campus center, data center, AI center and WAN center. Built on top of these is some valuable software. We experienced a notable increase in software and service renewals this quarter, mainly around CloudVision, observability and security. So I would define these four core elements along with three software components on top of them, while also remembering the services and support associated with these markets.
Operator
Our next question comes from the line of Antoine Chkaiban with New Street Research.
Antoine Chkaiban, Analyst
I'd like actually to ask about the non-AI component of your Cloud and AI segment. What can you tell us about how investments in traditional infrastructure are trending? Because we heard from other vendors that inventory digestion is now easing. So are you seeing that too?
Jayshree Ullal, CEO
We saw that last year. We saw that there was a lot of pivot going on from the classic cloud, as I like to call it, to AI in terms of spend. And we continue to see a favorable preference for AI spend in many of our large cloud customers. Having said that, at the same time, we are going through a refresh cycle where many of these customers are moving from 100 to 200 or 200 to 400 gig. So while we think AI will grow faster than cloud, we're betting on classic cloud continuing to be an important aspect of our contributions.
Operator
Our next question will come from the line of Amit Daryanani with Evercore.
Amit Daryanani, Analyst
I guess just a question related to the updated '24 guide and I realize it's at least 14% growth for the year. But your compares actually get much easier in the back half of the year versus what you've had in the first half. So just from an H2 versus H1 basis, is it reasonable to think that growth can actually accelerate for you in the back half of the year? And if it doesn't, why do you think it does not accelerate in the back half?
Chantelle Breithaupt, CFO
Yes. I think that Jayshree and I came to this guide of at least 14% because we do see multiple scenarios as we go through the second half of the year. We do expect to continue to see some acceleration in growth but I would say that from the perspective of the forward scenarios, we are comfortable with at least 14% and we'll come back in Q3 and see where we were able to guide for the rest of the year.
Jayshree Ullal, CEO
Look at us. We're known to be traditionally conservative. We went from 10 to 12, 12 to 14. And now my CFO says at least 14. So let's see how the second half goes. But I think at this point, you should think we are confident about second half and we're getting increasingly confident about 2025.
Operator
Our next question comes from the line of Tal Liani with Bank of America.
Tal Liani, Analyst
By the way?
Jayshree Ullal, CEO
Tal, you there?
Tal Liani, Analyst
Yes. Can you hear me?
Jayshree Ullal, CEO
Yes, yes.
Tal Liani, Analyst
My question is related to the previous one. I calculated the expected growth for the fourth quarter, and it's significantly lower than what we've observed this quarter or will see next quarter. I'm curious if this is simply due to conservatism as you mentioned earlier, or if there’s something specific about the fourth quarter that is causing the expected growth to be only 9% year-over-year. This trend is evident across all metrics; the expected revenue growth is lower, and the gross margin is also reduced. While some of this may be due to conservatism, is there anything specific such as timing of revenue recognition, seasonality, or other factors influencing the lower guidance for the fourth quarter?
Jayshree Ullal, CEO
So Tal, if you go back to our November Analyst Day, to call out gross margin as lower, I would disagree, because I think we're just blowing away our guide. Our guide was 63% to 64%, and we have now shown 2 quarters of amazing gross margin. Hats off to Mike Capas and John McCool, Alex and the entire team for really working on disciplined cost reductions. But if you look at mix and costs in general, I would say you should plan on our gross margins being as we projected. They're not lower; I think we just did exceptionally well the last 2 quarters, so it's relatively lower. That's the first thing. Second, in terms of growth, I would say we always aim for double-digit growth. We came in with 10% to 12%, and again, Q2 was just an outstanding quarter. I don't want to use it as a benchmark for how Q3 and Q4 will be. But of course, we're operating off large numbers. We'll aim to do better, but we'll have more visibility as we go into Q3 and we'll be able to give you a good sense of the year.
Operator
Our next question comes from the line of George Notter with Jefferies.
George Notter, Analyst
I guess I was just curious about what your expectations were coming into the quarter for product deferred revenue. I guess I'm curious how much you thought would be added to that product deferred category in Q2. And then also, do you have a view on product deferred revenue for Q3?
Chantelle Breithaupt, CFO
Yes, our approach regarding product deferred revenue remains unchanged; we do not provide guidance on it. There’s nothing new to report in that regard. As we progress through this quarter, we have a general idea of where we might end up, and I can say it aligned with our expectations based on our planning forecast.
George Notter, Analyst
Got it. And I assume these are new products you've shipped to customers, you're waiting for customer acceptance. Any sense for when those customer acceptances might start to flow through? Is that a 2024 event? Is that a 2025 event? How do you think about it?
Chantelle Breithaupt, CFO
Yes. They all have different timings because they're unique to the customer, the use case, AI, classic cloud, etc. So they're all unique and bespoke that way. So there's no set trending on that. And so as we roll through the quarters, they'll come off as they get deployed and then that's where we'll land from a forecasting perspective.
Jayshree Ullal, CEO
And I think it's fair to say if it's AI, it takes longer; if it's classic cloud, it's shorter.
Operator
Our next question comes from the line of Samik Chatterjee with JPMorgan.
Samik Chatterjee, Analyst
And strong results here. But if I can just ask a question on the commentary that Ashwin had in the prepared remarks. Ashwin, you mentioned the Tier 2 customer, where you're refreshing the front end as I sort of interpreted it to alleviate some of the bandwidth sort of concerns from the back end. How do you think about that opportunity across your customer base? Particularly, how should we think about sort of that as being attached to the $750 million target for back-end revenues that you have for next year? Just help us think about the opportunity that you're seeing with your customers on that side.
Ashwin Kohli, Chief Customer Officer
Yes, Samik, it's hard to say, right? I mean I don't want to attach the $750 million back to this one customer, right? The goal around this one customer was to demonstrate our wins in the enterprise and in the non-cloud space. But outside that, it would be very hard to go translate that to what's happening within the $750 million, right? I don't know, Jayshree, if you've got any comments around that at all?
Jayshree Ullal, CEO
Yes, I want to add that there are four trends Ashwin and the team are observing in the enterprise and provider sector. The transition to 100-gig data centers is progressing well. Anyone still using 10 or 40-gig technology is definitely not an early adopter. Additionally, some are even transitioning to 400-gig.
Ashwin Kohli, Chief Customer Officer
Absolutely, Jayshree.
Jayshree Ullal, CEO
So that's on the data center. Campus, I know, in general, is a slow market. But for Arista, we are still seeing a lot of desire, and you heard Ashwin talk about a campus win where the customer was really frustrated and struggling with their existing campus deployment. So we feel really good about our $750 million target for next year. On the routed WAN, again, we're seeing a lot of activity both in Tier 2 and service providers and even in enterprise. And finally, the AI trials you talked about tend to be smaller, but they're a representation of the confidence the customer has. They may be using other GPUs, servers, etc. But when it comes to mission-critical networks, they've recognized the importance of best-of-breed reliability, availability, performance and losslessness, and the familiarity with us in the data center is naturally leading to pilots and trials on the AI side with us.
Operator, Operator
Our next question comes from the line of Karl Ackerman with BNP Paribas.
Karl Ackerman, Analyst
So there are several data points across the supply chain that indicate enterprise networking and traditional server units are beginning to recover. I was hoping you might discuss what you are hearing from your enterprise and service provider customers on their commitment to upgrade their services and networking gear over the next couple of quarters. And as you address that, perhaps you could discuss the number of new customers being added in these verticals over the last couple of quarters out of the 10,000 or so that you have today?
Jayshree Ullal, CEO
Yes, let me address the second question. We are making progress systematically. We recently celebrated reaching 10,000 customers. In the past, we would add large numbers of customers at once; now we are steadily adding hundreds of smaller customers each quarter, and we are very pleased with that consistency. What was your other question?
Karl Ackerman, Analyst
How to think about the adoption or the growth of server and networking gear for campus environments and what you're seeing there.
Jayshree Ullal, CEO
Okay. So perhaps it may come as a surprise to you, but servers aren't always related to campus. Devices and users are much more related to campus; servers tend to be involved in data center upgrades. So in the campus, we're tending to see two things right now: greenfield buildings that are planning for '25, '26 and are smack in the middle of those RFPs; or customers trying to create a little oasis in the desert and prove that a post-pandemic campus is much better with a leaf/spine topology, wired and wireless connecting at the leaf, and then enabling things like zero-touch automation, segmentation capabilities, analytics, et cetera. So even in a somewhat sluggish overall market, we are finding that our customers are very interested in modernizing their campus. And again, it has a lot to do with their familiarity with us in the data center, and that's translating to more success in the campus.
Operator, Operator
Our next question comes from the line of Ben Bollin with Cleveland Research.
Ben Bollin, Analyst
Jayshree, I mean just the bigger picture as you think about back-end network architectures gradually capturing more of the traditional front-end. What do you think that looks like over the next several years? How quickly could that become a more realistic opportunity to capture more of that true fabric of overall compute resources?
Jayshree Ullal, CEO
I believe there are many market studies indicating that InfiniBand still dominates today. Just a year ago, we were on the outside looking in. So, it's rewarding to see that even InfiniBand players recognize our efforts to enhance Ethernet. I anticipate that more of the back end will transition to Ethernet. Although we are committed to achieving at least $750 million next year, I foresee it becoming challenging to differentiate between the back end and front end as they converge on Ethernet. Our AI center, as we refer to it, will integrate both aspects. Looking ahead three to four years, I envision the AI center functioning as a hub for both front and back ends. We will be able to track that distinction as long as GPUs are used specifically for training. However, as time progresses, I expect to see a rise in edge use cases, inference requests, and small-scale training applications, which will blur those distinctions.
Operator, Operator
Our next question will come from the line of Alex Henderson with Needham.
Alex Henderson, Analyst
I wanted to discuss the spending priorities among the major cloud providers. There is significant investment in AI technologies, including both front-end and back-end networking, as well as GPUs. However, we are seeing a resurgence in the growth rate of applications that traditionally have relied on CPUs in data centers. I'm curious if there's a risk of them under-investing in that area and whether there might be an eventual need to increase spending in that segment due to the current emphasis. Alternatively, is their investment in this area continuing at a pace that aligns with the moderate growth of application demand?
Jayshree Ullal, CEO
Alex, that's a very insightful question. There seems to be a significant focus among the Cloud Titans on training and advanced training, with an emphasis on bigger GPUs and sophisticated models like OpenAI's ChatGPT and Meta's Llama. You're correct that, to some extent, the traditional cloud, which I still call the classic cloud, has been somewhat overlooked last year and this year. However, I believe that once the training models are set, we will see a resurgence in this area, creating a cycle that supports both sectors. Currently, we're observing more activity in AI and more moderate activity in the cloud.
Operator, Operator
Our next question comes from the line of Ben Reitzes with Melius Research.
Unidentified Analyst, Analyst
This is Jackie Day on for Ben. Wondering if you could comment on the competitive environment, and whether you're seeing Spectrum-X from NVIDIA? And if so, how you're doing against it?
Jayshree Ullal, CEO
Yes. Well, first, I just want to say, when you say competitive environment, it's complicated with NVIDIA because we really consider them a friend on the GPUs as well as the NICs, so not quite a competitor. But absolutely, we will compete with them on the Spectrum switch. We have not seen Spectrum-X except in one customer where it was bundled. But otherwise, we feel pretty good about our win rate and our success, for a number of reasons: great software, a portfolio of products and architecture, proven performance, visibility features, management capabilities, high availability. And so I think it's fair to say that if a customer were bundling with their GPUs, then we wouldn't see it. If a customer were looking for best of breed, we absolutely see it and win it.
Operator, Operator
Our next question comes from the line of James Fish with Piper Sandler.
James Fish, Analyst
I wanted to revisit the enterprise segment. Could you provide insight into the number of replacements you're observing compared to previous periods? I'm trying to determine if we're beginning to witness an increase in the core enterprise data center network refresh in comparison to the share gains you've historically experienced. Additionally, is there a noticeable shift in enterprise customer behavior regarding the data center or, as Jayshree mentioned earlier, the campus?
Jayshree Ullal, CEO
Yes, James, let me speak about this and Ashwin, I’m sure you have more insights since you are closer to the situation. I believe we have three types of enterprise customers. The first group is the early adopters, and in that group, Ashwin's team is witnessing a significant number of refreshes. They already have 100-gig setups and are potentially planning for 400-gig upgrades. The second group includes the fast followers, who are still considering migrating to 100-gig systems. Then there are the risk-averse customers, and we are still getting to know them because this represents an untapped opportunity for us. Additionally, there's likely a fourth category of customers who are disillusioned with the public cloud and are looking to bring some of their workloads back to their data centers. I would say that there is activity in all four groups, indicating that we are not experiencing saturation and there remains ample opportunity for us, particularly in speed upgrades and across different customer classes and their respective stages. Ashwin, do you want to add anything?
Ashwin Kohli, Chief Customer Officer
Certainly. To address your question, what I’m hearing from customers is that they are increasingly frustrated with being tied to proprietary systems in the data center. They seek flexibility to integrate various use cases like data center, campus, and routing. Customers want solutions that are straightforward and dependable, ensuring their networks are operational when they start their day. Arista has established its brand and value over the past decade, and this message is resonating well with our current customers. They are adopting Arista not just for data center needs but are also broadening their usage across data center, campus, routing, and WAN. Additionally, our team is actively engaging with new Global 2000 and Fortune 500 customers to promote this message.
Operator, Operator
Our next question comes from the line of Ittai Kidron with Oppenheimer.
Ittai Kidron, Analyst
Nice numbers, ladies. I wanted to go back to gross margin. Jayshree, Chantelle, if you don't mind, in your prepared remarks, you talked about manufacturing efficiencies, cost reduction. I guess I'm wondering why is that not something that carries forward to the next quarter as well? I understand you're trying to be conservative but I'm sure these are not changes that have a very short lifespan that they can carry forward. So why not be a bit more optimistic on what the gross margin outlook should be?
Chantelle Breithaupt, CFO
Yes, it's a great question. Our goal is always to exceed our guidance. However, the main factor influencing our expectations for the second half is the anticipated mix of customers. As you can understand, we have different customer demographics that affect our mix. This is a significant reason for maintaining our guidance as is for the second half. We will continue to pursue variable cost productivity and manage costs, hoping to deliver more. At this moment, the primary consideration is the mix assumption for the second half.
Jayshree Ullal, CEO
Keep in mind that this isn't just limited to this quarter; John and the team have done an excellent job over the past year. I'm planning to approach them again and request additional efficiencies, but they might feel they've already maximized their efforts. We'll see if they're open to further cost reductions. I completely agree with Chantelle that the results are mainly driven by customer mix, and I believe we have already implemented many cost reductions over the last year.
Operator, Operator
Our next question comes from the line of Simon Leopold with Raymond James.
Simon Leopold, Analyst
I know that you typically hold off on the 10% customer disclosures until the end of the fiscal year. But what I'm hoping to gather is how customer concentration may be evolving from comparison to last year. Do you expect with your past customers growing their spending so much that it stays similar or with the diversity of the new opportunities, the concentration you've had historically decline? Any kind of indication you could offer? I'd appreciate it.
Jayshree Ullal, CEO
I mean, I'll try, but you're right to say that we don't know; it's only half the year. I expect both Microsoft and Meta to be greater than 10% customers for us. I don't expect any other 10% concentration. Now with Microsoft and Meta, how they will pivot to AI and how they will adjust their spend, that movie will play out over the next six months, so we'll know better then. But at this point, I think you can assume they won't be exactly the same; some may go up, some may go down. But these are two extremely vital, strategic customers. We co-develop with them, we partner with them very, very well, and we expect to do well with them, both in cloud and AI, depending on their priorities, of course.
Simon Leopold, Analyst
And does the pipeline suggest you can have new 10% customers next year? Or do you expect sort of a similar concentration next year?
Jayshree Ullal, CEO
We're not aware of any customer that will approach 10% this year or next year.
Operator, Operator
Our next question comes from the line of Tim Long with Barclays.
Tim Long, Analyst
Maybe we'll go over to software and services; the AI stuff has been beaten up a little bit here, in a good way. You talked a lot about some of the software capabilities, and it seems like Arista might be leaning in a little bit more to this revenue line. It's been growing faster than the hardware product line for the last several quarters. So could you talk about the sustainability of that strength and your focus on this services and software area, and what that would mean for the growth rate going forward?
Jayshree Ullal, CEO
Thank you for the question. I appreciate the change in focus. I believe we will remain in the teens for a while, as there are three key areas to consider. First, there's the services component, which tends to come down as a percentage when product sales increase. Historically, this has been in the teens, but it could drop. The second area is our perpetual software, which is closely tied to specific use cases, such as routing, where we perform very well; the better we do there, the stronger our performance will be. Additionally, we have CloudVision, a subscription service that offers either Network-as-a-Service or on-premise solutions, which is another positive aspect for us. The third area I want to highlight is security. In May, we introduced both micro and macro segmentation and announced UNO, our Unified Network Observability. Although this is a new development, Ashwin and I have ambitious plans for it, and I believe it could be a pivotal factor for us. As the services segment might decline over time, these new products could lead to growth. Maintaining 17 points as our numbers continue to grow would be impressive, and I'd be very proud of that.
Operator, Operator
Our next question comes from the line of Sebastien Naji with William Blair.
Sebastien Naji, Analyst
Sorry to bring you back to the AI discussion, but this one is a bit more high level. We keep hearing about larger and larger AI clusters being developed. As Arista connects these clusters and they scale, I'm curious whether this affects your ability to generate more revenue. You've mentioned 15% of CapEx. Does that figure change as the clusters you need to connect become bigger?
Jayshree Ullal, CEO
Yes. So if you look at an AI network design, you can look at it through two lenses. Through the compute lens alone, you look at scale-up and strictly at how many processors there are. But when we look at an AI network design, it's the number of GPUs or XPUs per workload. The distribution and location of these GPUs are important, as is whether the cluster has multiple tenants and how it's divvied up between the host, the memory, the storage and the wide area. The optimizations made on the applications and the collective communication libraries for specific workloads play a role, along with levels of resilience, how much redundancy you want to put in, active link-based load balancing, and types of visibility. So the metrics just keep multiplying; there are many more communication combinations. But it all starts with the number of GPUs, performance and billions of parameters, because the training models are definitely centered around job completion time. Then there are multiple concentric circles of additional things we have to add to that network design. All this to say, a network design-centric approach has to be taken for these GPU clusters; otherwise, you end up being very siloed, and that's really what we're working on. So it goes beyond scale and performance to some of these other metrics I mentioned.
Liz Stine, Director of Investor Relations
Thanks, Sebastien. Operator, we have time for one last question, please.
Operator, Operator
Our final question comes from the line of David Vogt with UBS.
David Vogt, Analyst
Maybe just to bring it back together and maybe both for Jayshree and Chantelle. I guess what I'm trying to think through is this is your product introduction. You have a pretty strong ramp of AI likely next year. But does the guide imply that we're going to start to see a much bigger contribution in Q4 driven by the comments around mix earlier and the gross margin discussion? Because I would imagine early in the stage of their life cycle plus the fact that they're hyperscalers, they're going to be a relatively more modest gross margin profile at the beginning of the glide path versus the end of the glide path. So just any color there in terms of what Q4 might look like from an AI perspective relative to your expectations from a glide path perspective?
Jayshree Ullal, CEO
Yes. Let me just remind you of how we are approaching 2024, including Q4, right? Last year was trials, so small that it was not material. This year, we're definitely going into pilots. Some of the GPU clusters, and you've seen this in public blogs published by some of our customers, have already gone to 24,000 GPUs and are heading towards 50,000. Next year, I think many of them will be in the tens of thousands, aiming for 100,000 GPUs. So I see next year as more promising. Some of that might happen this year, but I think we're very much going from trials to pilots, trials being hundreds, and this year we're in the thousands. But I wouldn't focus on Q4; I'd focus on the entire year and say, yes, we've gone into the thousands. And I'll let Chantelle weigh in on this glide path. So we expect AI to be a small, single-digit percentage of our total revenue this year, but we are really, really expecting next year to be $750 million or more.
Chantelle Breithaupt, CFO
Yes, I completely agree, Jayshree. The only thing I would add is that you have to think of the matrix we're working within. We have cloud and enterprise customers, and they are at very different stages of readiness. So Q3, Q4, Q1, Q2 next year are all eligible windows depending on the customer type. We just want to make sure the variability I spoke to in my prepared remarks is understood in that context.
Jayshree Ullal, CEO
Yes. I think and simple things like power and cooling are affecting the ability to deploy in massive scale. So there's nothing magic about Q4. There's plenty of magic about the year in its entirety and next year.
Liz Stine, Director of Investor Relations
This concludes the Arista Networks second quarter 2024 earnings call. We have posted a presentation which provides additional information on our results which you can access on the Investors section of our website. Thank you for joining us today and thank you for your interest in Arista.
Operator, Operator
Thank you for joining, ladies and gentlemen. This concludes today's call. You may now disconnect.