Q4 2025 Arista Networks Inc Earnings Call
Operator: Welcome to the fourth quarter 2025 Arista Networks financial results earnings conference call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question-and-answer session. Instructions will be provided at that time. If you need to reach an operator at any time during the conference, please press the star key followed by zero. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section of the Arista website following this call. Mr. Rudolph Araujo, Arista's VP of Investor Advocacy, you may begin.
Rudolph Araujo: Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks Chairperson and Chief Executive Officer, and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal fourth quarter, ended December 31, 2025. If you want a copy of the release, you can access it online on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the first quarter of the 2026 fiscal year; our longer-term business model and financial outlooks for 2026 and beyond; our total addressable market and strategy for addressing these market opportunities, including AI; customer demand trends; tariffs and trade restrictions; supply chain constraints, component costs, manufacturing output, inventory management, and inflationary pressures on our business; lead times; product innovation; working capital optimization; and the benefits of acquisitions. These statements are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call.
Rudolph Araujo: This analysis of our Q4 results and our guidance for Q1 2026 is based on non-GAAP results and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.
Jayshree Ullal: Thank you, Rudy, and thank you, everyone, for joining us this afternoon for our Q4 and full-year 2025 earnings call. Well, 2025 has been another defining year for Arista. With the momentum of generative AI, cloud, and enterprise, we have achieved well beyond our goal at 28.6% growth, driving a record revenue of $9 billion, coupled with a non-GAAP gross margin of 64.6% for the year and a non-GAAP operating margin of 48.2%. The Arista 2.0 momentum is clear, as we surpassed 150 million cumulative ports shipped in Q4 2025. International growth was a good milestone, with both Asia and Europe growing north of 40% annually.
Jayshree Ullal: As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion, as well as $1.5 billion in AI center networking. Shifting to annual customer sector revenue for 2025: cloud and AI titans contributed significantly at 48%; enterprise and financials recorded 32%; while AI and specialty providers, which now include Apple, Oracle, and their initiatives, as well as emerging neoclouds, performed strongly at 20%. We had two greater-than-10% customer concentrations in 2025: Customers A and B drove 16% and 26% of our overall business, respectively. We cherish our privileged partnerships that have spanned 10 to 15 years of collaborative engineering. With our ever-increasing AI momentum, we anticipate a diversified customer base in 2026, including one, maybe even two, additional 10% customers.
Jayshree Ullal: In terms of annual 2025 product lines, our core cloud, AI, and data center products, built upon our highly differentiated Arista EOS stack, are successfully deployed across 10 gig to 800 gigabit Ethernet speeds, with 1.6 terabit migration imminent. This includes our portfolio of EtherLink AI and our 7000 series platforms for best-in-class performance, power efficiency, high availability, automation, and agility for both the front-end and back-end compute, storage, and all of the interconnect zones. Of course, we interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but we also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, Arm, Broadcom, OpenAI, Pure Storage, and VAST Data, to name a few, that create the modern AI stack of the 21st century.
Jayshree Ullal: Arista is clearly emerging as the gold standard terabit network to run these intense training and inference models, processing tokens at teraflops. Arista's core sector drove 65% of revenue. We are confident of our number one position in market share in high-performance switching, according to most major industry analysts. We launched our Blue Box initiative, offering enriched diagnostics of our hardware platforms, dubbed NetDL, that can run across both our flagship EOS and our open NOS platforms. We saw an excellent uptick in 800 gig adoption in 2025, gaining greater than 100 customers cumulatively for our EtherLink products, and we are co-designing several AI rack systems, with 1.6 terabit switching emerging this year. With our increased visibility, we are now doubling our AI networking revenue from 2025 to 2026, to $3.25 billion.
Jayshree Ullal: Our network adjacencies market comprises routing, replacing routers, and our cognitive AI-driven AVA campus. Our investments in cognitive wired and wireless, zero-touch operation, networked identity, scale, and segmentation have earned several accolades in the industry. Our open modern stacking with SWAG, Switched Aggregation Group, and our recent Vespa for Layer 2 and Layer 3 wired and wireless scale are compelling campus differentiators. Together with our recent VeloCloud acquisition in July 2025, we are driving that homogeneous, secure, client-to-branch-to-campus solution with unified management domains. Looking ahead, we are committed to our aggressive goal of $1.25 billion for 2026 for the cognitive campus and branch. We have also successfully deployed in many routing edge, core spine, and peering use cases.
Jayshree Ullal: In Q4 2025, Arista launched our flagship 7800R4 spine for many routing use cases, including DCI and AI spines, with that massive 460 terabits of capacity to meet the demanding needs of multi-service routing, AI workloads, and switching use cases. The combined campus and routing adjacencies together contribute approximately 18% of revenue. Our third and final category is network software and services based on subscription models, such as ACARE, CloudVision, Observability, Advanced Security, and even some branch edge services. We added another 350 CloudVision customers, almost one new customer a day, and have deployed an aggregate of 3,000 customers with CloudVision over the past decade. Arista's subscription-based network services and software revenue contributed approximately 17%; please note that this does not include perpetual software licenses, which are otherwise included in core or adjacent markets.
Jayshree Ullal: Arista 2.0 momentum is clear. We find ourselves at the epicenter of mission-critical network transactions. We are becoming the preferred network innovator of choice for client-to-cloud and AI networking, with a highly differentiated software stack and a uniform CloudVision software foundation. We are proud to power Warner Bros. Discovery's network streaming for 47 markets in 21 languages in the pan-European Winter Olympics that is happening as I speak. We are now north of 10,000 cumulative customers, and I'm particularly impressed with our traction in the $5 million to $10 million customer category, as well as the $1 million customer category, in 2025. Arista's 2.0 vision resonates with our customers, who value us for leading that transformation from incongruent silos to reliable centers of data. The data can reside in campus centers, data centers, WAN centers, or AI centers, regardless of location.
Jayshree Ullal: Networking for AI has achieved production scale with an all-Ethernet-based Arista AI center. In 2025, we became a founding member of the Ethernet-based scale-up standard, ESUN, and helped complete the Ultra Ethernet Consortium 1.0 specification for scale-out AI networking. These AI centers seamlessly connect the back-end AI accelerators to the front end of compute, storage, WAN, and classic cloud networking. Our AI-accelerated networking portfolio, consisting of three families of EtherLink spine-leaf fabrics, is successfully deployed in scale-up, scale-out, and scale-across networks. Network architectures must handle both training and inference frontier models to mitigate congestion. For training, the key metric is obviously job completion time: the amount of time between admitting a training job to an AI accelerator cluster and the end of the training run. For inference, the key metric is slightly different.
Jayshree Ullal: It's the time taken to the first token, basically the amount of latency it takes for a user submitting a query to receive their first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, size of traffic flows, and all the patterns associated with them. Our AI for networking strategy, based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our publish-subscribe state foundation in EOS and NetDL, or Network Data Lake, we instrument our customers' networks to deliver proactive, predictive, and prescriptive features for enhanced security, observability, and agentic AI operations. Coupled with the Arista validated designs for network simulation, digital twin, and validation functionality, Arista platforms are perfectly optimized and suited for network as a service. Our global relevance with customers and channels is increasing.
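To make the two metrics concrete, here is a minimal sketch of how each could be measured, assuming a generic streaming inference endpoint; the token iterator and function names are illustrative placeholders, not Arista or customer APIs.

```python
# Illustrative sketch (not Arista code) of the two AI metrics described above.
# `stream` stands in for any iterator that yields tokens from a streaming
# inference endpoint; it is a placeholder, not a real API.
import time
from typing import Iterable, Tuple

def time_to_first_token(stream: Iterable[str]) -> Tuple[float, str]:
    """Seconds from query submission until the first token arrives."""
    start = time.monotonic()
    first = next(iter(stream))  # blocks until the model emits its first token
    return time.monotonic() - start, first

def job_completion_time(admitted_at: float, finished_at: float) -> float:
    """Training analog: wall-clock span from job admission to end of run."""
    return finished_at - admitted_at
```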
Jayshree Ullal: In 2025 alone, we conducted three large customer events across three continents, in Asia, Europe, and the United States, and many other smaller ones, of course. We touched 4,000 to 5,000 strategic customers and partners in the enterprise. While many customers are struggling with their legacy incumbents, Arista is deeply appreciated for redefining the future of networking. Customers have long appreciated our network innovation and quality, demonstrated by our highest Net Promoter Score of 93 and the lowest security vulnerabilities in the industry. We now see the pace of acceptance and adoption accelerating in the enterprise customer base. Our leadership team, including our newly appointed co-presidents, Ken Duda and Todd Nightingale, has driven strategic and cohesive execution. Tyson Lamoreaux, our newest Senior Vice President, who joined us with deep cloud operations experience, has ignited our hypergrowth across our AI and cloud titan customers.
Jayshree Ullal: Exiting 2025, we are now at approximately 5,200 employees, which also includes the recent VeloCloud acquisition. I am incredibly proud of the entire Arista A-Team, and thank you, all employees, for your dedication and hard work. Of course, our top-notch engineering and leadership team has always steadfastly prioritized our core Arista Way principles of innovation, culture, and customer intimacy. Well, I think you would agree that 2025 has indeed been a memorable year, and we expect 2026 to be a fantastic one as well. We are amid unprecedented networking demand, with a massive and growing TAM of $100+ billion. And so, despite all the news on mounting supply chain allocation and the rising costs of memory and silicon fabrication, we have increased our 2026 guidance to 25% annual growth, accelerating now to $11.25 billion.
Jayshree Ullal: With that happy news, I turn it over to Chantelle, our CFO.
Chantelle Breithaupt: Thank you, Jayshree, and congratulations to you and our employees on a terrific 2025. As you outlined, this was an outstanding year for the company, and that strength is clearly reflected in our financial results. Let me walk through the details. To start off, total revenues in Q4 were $2.49 billion, up 28.9% year over year and above the upper end of our guidance of $2.3 to $2.4 billion. It was great to see that all geographies achieved strong growth within the quarter. Services and subscription software contributed approximately 17.1% of revenue in the fourth quarter, down from 18.7% in Q3, which reflects normalization following some non-recurring VeloCloud service renewals in the prior quarter.
Chantelle Breithaupt: International revenues for the quarter came in at $528.3 million, or 21.2% of total revenue, up from 20.2% last quarter. This quarter-over-quarter increase was driven by a stronger contribution from our large global customers across our international markets. The overall gross margin in Q4 was 63.4%, slightly above the guidance of 62% to 63% and down from 64.2% in the prior year. This year-over-year decrease is due to the higher mix of sales to our cloud and AI titan customers in the quarter. Operating expenses for the quarter were $397.1 million, or 16% of revenue, up from $383.3 million last quarter.
Chantelle Breithaupt: R&D spending came in at $272.6 million, or 11% of revenue, up from 10.9% last quarter. Arista continued to demonstrate its commitment and focus on networking innovation, with fiscal year 2025 R&D spend at approximately 11% of revenue. Sales and marketing expense was $98.3 million, or 4% of revenue, down from $109.5 million last quarter. We closed fiscal year 2025 with sales and marketing at 4.5% of revenue, representative of the highly efficient Arista go-to-market model. Our G&A costs came in at $26.3 million, or 1.1% of revenue, up from $22.4 million last quarter, reflecting continued investment in systems and processes to scale Arista 2.0. For fiscal year 2025, G&A expense held at 1% of revenue.
Chantelle Breithaupt: Our operating income for the quarter was $1.2 billion, or 47.5% of revenue. This strong Q4 finish contributed to an operating income result for fiscal year 2025 of $4.3 billion, or 48.2% of revenue. Other income and expense for the quarter was a favorable $102 million, and our effective tax rate was 18.4%. This lower-than-normal quarterly tax rate reflected the release of statutory tax reserves due to the expiration of the statute of limitations. Overall, this resulted in net income for the quarter of $1.05 billion, or 42% of revenue. It is exciting to see Arista delivering over $1 billion in net income for the first time. Congratulations to the Arista team on this impressive achievement.
Chantelle Breithaupt: Our diluted share number was 1.276 billion shares, resulting in a diluted earnings per share for the quarter of $0.82, up 24.2% from the prior year. For fiscal year 2025, we are pleased to have delivered a diluted earnings per share of $2.98, a 28.4% increase year over year. Now turning to the balance sheet. Cash, cash equivalents, and marketable securities ended the quarter at approximately $10.74 billion. In the quarter, we repurchased $620.1 million of our common stock at an average price of $127.84 per share. Within fiscal 2025, we repurchased $1.6 billion of our common stock at an average price of $100.63 per share.
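The quarter's income statement arithmetic ties out with a rough back-of-the-envelope check; the inputs below are rounded figures from the call, and the calculation is our illustration, not the company's own model.

```python
# Back-of-the-envelope check of the Q4 non-GAAP figures quoted above.
# Inputs are rounded values from the call; small residuals are rounding error.
operating_income = 1.20e9   # ~47.5% of $2.49B revenue
other_income     = 102e6    # favorable other income and expense
tax_rate         = 0.184    # the lower-than-normal quarterly rate

pretax     = operating_income + other_income
net_income = pretax * (1 - tax_rate)
print(f"implied net income: ${net_income / 1e9:.2f}B")   # ~$1.06B vs $1.05B reported

diluted_shares = 1.276e9
print(f"diluted EPS: ${1.05e9 / diluted_shares:.2f}")    # $0.82, as reported
```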
Chantelle Breithaupt: Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remains available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price, and other factors. Now turning to operating cash performance for the fourth quarter, we generated approximately $1.26 billion of cash from operations in the period. This result was an outcome of strong earnings performance, with an increase in deferred revenue, offset by an increase in accounts receivable, driven by higher shipments and end-of-quarter service renewals. DSOs came in at 70 days, up from 59 days in Q3, driven by renewals and the timing of shipments in the quarter. Inventory turns were 1.5 times, up from 1.4 last quarter.
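For reference, these two metrics follow the standard working-capital definitions sketched below; the receivables figure used in the example is an estimate implied by the stated DSO, not a number disclosed on the call.

```python
# Standard working-capital definitions behind the metrics quoted above
# (our illustration; the receivables input is an estimate, not disclosed).
def days_sales_outstanding(receivables: float, quarterly_revenue: float,
                           days_in_quarter: int = 91) -> float:
    """How many days of revenue are sitting in accounts receivable."""
    return receivables / quarterly_revenue * days_in_quarter

def inventory_turns(annualized_cogs: float, average_inventory: float) -> float:
    """How many times per year inventory is sold through."""
    return annualized_cogs / average_inventory

# A 70-day DSO on ~$2.49B of Q4 revenue implies roughly
# 70 / 91 * 2.49 = ~$1.9B of receivables at quarter end.
print(days_sales_outstanding(1.92e9, 2.49e9))  # ~70 days
```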
Chantelle Breithaupt: Inventory increased marginally to $2.25 billion, reflecting diligent inventory management across raw and finished goods. Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing, such as the supply constraint on DDR4 memory, and the lead times from our key suppliers. Our total deferred revenue balance was $5.4 billion, up from $4.7 billion in the prior quarter. In Q4, the majority of the deferred revenue balance is product-related.
Chantelle Breithaupt: Our product deferred revenue increased approximately $469 million versus last quarter. We remain in a period of ramping our new products, winning new customers, and expanding new use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days were 66 days, up from 55 days in Q3, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $37 million. In October 2024, we began our initial construction work to build expanded facilities in Santa Clara and incurred approximately $100 million in CapEx during fiscal year 2025 for this project.
Chantelle Breithaupt: As we have moved through 2025, we have gained visibility and confidence for fiscal year 2026. As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 25% revenue growth, delivering approximately $11.25 billion. We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI centers goal from $2.75 billion to $3.25 billion. For gross margin, we reiterate the range for the fiscal year of 62% to 64%, inclusive of mix and anticipated supply chain cost increases from memory and silicon. In terms of spending, we expect to continue to invest in innovation, sales, and scaling the business to ensure our status as a leading pure-play networking company.
Chantelle Breithaupt: With our increased revenue guidance, we are now confident in raising the operating margin outlook to approximately 46% for 2026. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments. Our structural tax rate is expected to be 21.5%, back to the usual historical rate, up from the seasonally lower rate of 18.4% experienced last quarter, Q4 2025. With all of this as a backdrop, our guidance for Q1 is as follows: revenues of approximately $2.6 billion, gross margin between 62% and 63%, and operating margin at approximately 46%. Our effective tax rate is expected to be approximately 21.5%, with approximately 1.275 billion diluted shares.
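The guidance arithmetic is straightforward to verify; a quick illustrative check using the figures above:

```python
# Quick check of the guidance arithmetic quoted above (illustrative only).
fy25_revenue = 9.0e9
print(f"FY26 revenue at 25% growth: ${fy25_revenue * 1.25 / 1e9:.2f}B")  # $11.25B

q1_revenue, q1_op_margin = 2.6e9, 0.46
print(f"implied Q1 operating income: ${q1_revenue * q1_op_margin / 1e9:.2f}B")  # ~$1.20B
```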
Speaker #2: With our increased revenue guidance, we are now confident to raise the operating margin outlook to approximately 46% in 2026. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments.
Chantelle Breithaupt: With our increased revenue guidance, we are now confident to raise the operating margin outlook to approximately 46% in 2026. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments. Our structural tax rate is expected at 21.5%, back to the usual historical rate, up from the seasonally lower rate of 18.4% experienced last quarter, Q4 2025. With all of this as a backdrop, our guidance for Q1 is as follows: revenues of approximately $2.6 billion, gross margin between 62% and 63%, and operating margin at approximately 46%. Our effective tax rate is expected to be approximately 21.5%, with approximately 1.275 billion diluted shares.
Speaker #2: Our structural tax rate is expected at 21.5%, back to the usual historical rate, up from the seasonally lower rate of 18.4% experienced last quarter, Q4 2025.
Speaker #2: With all of this as a backdrop, our guidance for the first quarter is as follows: revenues of approximately $2.6 billion, gross margin between $62 and $63%, and operating margin at approximately 46%.
Speaker #2: Our effective tax rate is expected to be approximately 21.5%, with approximately $1.275 billion diluted shares. In closing, at our September analyst date, we had a theme of building momentum, and we are doing just that.
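Taken at face value, and using the gross margin midpoint while ignoring non-operating income (a simplification, so treat this as a rough illustration rather than company guidance), that Q1 outlook implies:

\[ \$2.6\text{B} \times 0.625 \approx \$1.63\text{B} \text{ gross profit} \]
\[ \$2.6\text{B} \times 0.46 \approx \$1.20\text{B} \text{ operating income} \]
\[ \$1.20\text{B} \times (1 - 0.215) \div 1.275\text{B shares} \approx \$0.74 \text{ per diluted share} \]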
Chantelle Breithaupt: In closing, at our September Analyst Day, we had a theme of building momentum, and we are doing just that. In campus, WAN, data, and AI centers, we are uniquely positioned to deliver what customers need. We will continue to deliver both our world-class customer experience and innovation. I am enthusiastic about our fiscal year ahead. Now back to you, Rudy, for Q&A.
Rudolph Araujo: Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Regina, please take it away.
Operator: We will now begin the Q&A portion of the Arista earnings call. To ask a question during this time, simply press star and then the number 1 on your telephone keypad. If you would like to withdraw your question, press star and 1 again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of Meta Marshall with Morgan Stanley. Please go ahead.
Meta Marshall: Great, and congratulations on the quarter. In terms of the commentary you had, Jayshree, on the one or two additional 10% customers, digging more into that, what are the puts and takes? Is it bottlenecks in terms of their building? What would make or break whether those become two new additional 10% customers? Thank you.
Jayshree Ullal: Thank you, Meta, for the good wishes. So obviously, if I didn't have confidence, I wouldn't dare to say that, would I? But there's always variables. Some of it may be sitting in deferred, so there's an acceptance criteria that we have to meet, and there's also timing associated with meeting the acceptance criteria. Some of it is demand that is still underway, and in this age of all the supply chain allocation and inflation, we've got to be sure we can ship. So we don't know if it's exactly a 10%, or high single digits, or low double digits; a lot of variables will decide that final number. But certainly, the demand is there.
Meta Marshall: Great. Thank you.
Jayshree Ullal: Thank you.
Operator: Our next question will come from the line of Samik Chatterjee with JPMorgan. Please go ahead.
Samik Chatterjee: Hi, thanks for taking my question. And Jayshree, congrats on the quarter and the outlook. I don't want to say that the 25% growth is not impressive, but since 30% is what the guidance is for Q1, maybe I could understand what's leading to somewhat cautious visibility for the rest of the year. Is it these one to two new customers and their ramps that you're more cautious about? Or is it availability of supply in relation to some of the components or memory that's giving you a bit more cautiousness about the visibility for the remainder of the year? If you could help us understand the drivers there.
Jayshree Ullal: Yeah.
Samik Chatterjee: Thank you.
Jayshree Ullal: No, thank you. Thank you, Samik. First, I don't think I'm being cautious. I think I went all out to give you a high dose of reality. But I understand your views on caution, given all the capex numbers you see from customers. That's an important thing to understand: we don't track the capex. The first thing that happens in the capex is they've got to build the data centers, get the power, and get all of the GPUs and accelerators, and then the network comes; it lags a little. So demand is going to be very good, but whether the shipments exactly fall into '26 or '27, Todd, you can clarify when they really fall in, but there's a lot of variables there. That's one issue. The second, as I said, is a large amount of this is new products and new use cases, highly tied to AI, where customers are still in their first innings. So again, I'm giving you the greatest visibility I can, fairly early in the year, on the reality of what we can ship, not what the demand might be. It might be a multi-year demand that ships over multiple years. So let's hope it continues. But of course, you must understand that we're also facing a law of large numbers. So 25% on a base of now $9 billion, when we started last year at $8.25 billion, is a really, really early and good start.
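For scale, the numbers Jayshree references work out as follows, a quick check assuming, as her remark suggests, that $8.25 billion was the initial fiscal 2025 outlook and roughly $9 billion the finish:

\[ \$9.0\text{B} \div \$8.25\text{B} \approx 1.09, \text{ i.e., about 9\% above the initial outlook} \]
\[ \$9.0\text{B} \times 0.25 = \$2.25\text{B} \text{ of incremental revenue implied by 25\% growth in fiscal 2026} \]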
Samik Chatterjee: Thank you.
Operator: Our next question will come from the line of David Vogt with UBS. Please go ahead.
David Vogt: Great. Thanks, guys, for taking my question. Maybe, Chantelle and Jayshree, can you help quantify both the revenue impact and the potential gross margin impact embedded in your guide from the memory dynamics and the constraints? I know last quarter, and you even mentioned this quarter, the supply chain does have some constraints. When you think about what I think you just called the real outlook, Jayshree, maybe you can help parameterize what you think could hold you back, if that's the way to phrase it, and just give us a sense for what the upside could be in a perfect world, if you could share that.
Jayshree Ullal: I'm going to give some general commentary, and Chantelle, if you don't mind, add to it. Our peers in the industry have been facing this probably longer than we have, because I think the server industry probably saw it first; they're more memory intensive. Add to that that we're expecting increases from the silicon fabrication, since all the chips are made, as you know, essentially with one company, Taiwan Semiconductor. So Arista has taken a very thoughtful approach, being aware of this since 2025, and frankly absorbed a lot of the costs we were incurring in 2025. However, in 2026, the situation has worsened significantly. We're having to smile and take it at just about any price we can get, and the prices are horrendous; they're an order of magnitude higher. So clearly, with the situation worsening, and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see reflected in our purchase commitments, we are planning for this. I know that memory is now the new gold for the AI and automotive sectors. Clearly it's not going to be easy, but it's going to favor those who planned and those who can spend the money for it. Chantelle?
Chantelle Breithaupt: Yeah, the only thing I'd add to your question, David, and thank you for that, is that we're comfortable in the guide; that's why we have the guide and why we raised the numbers that we did. So we're comfortable we have a path to get there within the numbers we provided. The range of 62% to 64% I think we are pleased to hold despite this kind of pressure coming into it. This has been our guide since September at our Analyst Day, so we're pleased to hold that guide and find ways to mitigate this journey. Now, whether it ends up being 62.5% versus 63.5% within that range, that's where we'll continue to update you. But the range we're comfortable with.
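To put that range in dollar terms, assuming the approximately $11.25 billion fiscal 2026 revenue outlook discussed earlier:

\[ \$11.25\text{B} \times 0.01 \approx \$112.5\text{M} \text{ of gross profit per point of gross margin} \]

So the 62% to 64% range spans roughly $225 million, and the 62.5% versus 63.5% scenarios Chantelle mentions differ by about $112.5 million.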
David Vogt: Understood. Thanks, guys.
Jayshree Ullal: Thank you, David.
Operator: Our next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.
Aaron Rakers: Yeah, thanks for taking the question, and congrats as well on the quarter and the guide. When we think about the $3.25 billion guide for the AI contribution this year, I'm curious, Jayshree, how much you're factoring in, if any, from the scale-up networking opportunity. Is that still more of a '27? And also, can you unpack, ex the AI and ex the campus contribution, it appears you're guiding still pretty muted, low single-digit growth on non-AI. Just curious how you see the non-AI, non-campus growth.
Jayshree Ullal: Yeah, okay. Well, a rising tide rises all boats, but some go higher and some go lower. But to answer your specific question, what was it, Aaron?
Aaron Rakers: How much scale-up?
Jayshree Ullal: Oh, how much scale-up. We have consistently described that today's configurations are mostly a combination of scale-out and scale-up, largely based on 800 gig and smaller radix. Now the ESUN specification is well underway, and with Ken Duda involved, I think the spec will be done within a year, this year for sure. Ken and Hugh Holbrook are actively involved in that. We need a good, solid spec; otherwise we'll be shipping proprietary products like some people in the world do today. So we will tie our scale-up commitments greatly to the availability of new products and a new ESUN spec, which we expect at the earliest to be Q4 this year. And therefore, the majority of the activity will be in some trials, where Andy Bechtolsheim and the team are working on a lot of active AI racks with scale-up in mind. But the real production level will be in 2027, primarily centered around not just 800 gig, but 1.6T.
Chantelle Breithaupt: And I think that regarding-
Aaron Rakers: Thank you.
Jayshree Ullal: Oh, okay. Thank you, Aaron.
Operator: Our next question will come from the line of Amit Daryanani with Evercore ISI. Please go ahead.
Amit Daryanani: Yep, thanks a lot, and congrats from my end as well on some really good numbers here. Jayshree, if I think of some of these model builders, like Anthropic, that I think you folks have talked about, they're starting to build these multi-billion-dollar clusters on their own now. Can you just talk about your ability to participate in some of these build-outs as they happen, be that on the DCI side or maybe even beyond that? And by extension, does this give you an opportunity to ramp up with some of the larger cloud companies that these model builders are partnering with over time as well, as they build out TP or training clusters? I'd love to understand how that kind of business scales up for you folks. Thank you.
Jayshree Ullal: Yeah, no, Amit, that's a very thoughtful question, and I think you're absolutely right. The network infrastructure is playing a critical role with these model builders in a number of ways. If you look at us, initially we were largely working with one or two model builders and one or two accelerators, NVIDIA and AMD, and OpenAI was the primarily dominant one. But today, we see that there's really multiple layers in the cake, where you've got the GPU accelerators and, of course, power as the most difficult thing to get. But Arista needs to deal with multiple domains and model builders appropriately, whether it is Gemini or xAI or Anthropic's Claude or OpenAI, and many more coming. These models, and the multi-protocol, algorithmic nature of these models, are something we have to make sure we build the network correctly for. So that's one. And then to your second point, you're absolutely right. I think the biggest issue is not only the model builders, but that they're no longer in silos in one data center; you're going to see them across multiple colos, multiple locations, and multiple partnerships with the cloud titan customers we've historically worked with. So I think you'll see more co-pilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans in bringing the cloud and AI together.
Amit Daryanani: Thank you.
Jayshree Ullal: Thank you, Amit.
Operator: Our next question comes from the line of George Notter with Wolfe Research. Please go ahead.
George Notter: Hi, guys. Thanks very much. I was just curious about the product deferred revenue and how you see that coming off the balance sheet ultimately. Obviously, it's just been stacking up here quarter after quarter after quarter. So a few questions here: Does that come off in big chunks that we'll see in different quarters in the future? Does it come off more gradually? Does it continue to build? What does the profile look like for that product deferred coming off the balance sheet and flowing through the P&L? And then also, I'm curious how much product deferred you have in the full-year revenue guidance, the 25%. Thanks a lot.
Chantelle Breithaupt: Yeah. Hey, George. Thanks for the questions. Not much has changed in the sense of how we have this conversation. What goes into deferred is new products, new customers, new use cases. The great new use case is AI. The acceptance criteria for that, for the larger deployments, is 12 to 18 months; some can be as short as 6 months, so there's a wide variety that goes in. Deferred has balances coming in and out every quarter. We don't guide deferred, and we don't get product-specific. What I can tell you on your questions is that there will be times where there are larger deployments that will feel a little lumpier as we go through. But again, it's a net release of a balance, so it depends what comes in at that same quarter's timing.
George Notter: Got it. Okay. Any sense for what's in the full-year guide, then? I assume not much. Is that fair to say?
Jayshree Ullal: It's super hard, George. It's when the acceptance criteria happens. If it happens December 31st, it's a different situation than if it all happens in Q2, Q3, Q4. So that's something we really have to work through with the customer. Sorry that we're not able to be clairvoyant on that.
George Notter: Makes sense. Thank you.
Jayshree Ullal: Thank you.
Chantelle Breithaupt: Thank you.
Operator: Our next question comes from the line of Ben Reitzes with Melius Research. Please go ahead.
Ben Reitzes: Hey, thanks a lot, and I guess my congrats to you guys. This execution and guide is really something. So I wanted to-
Jayshree Ullal: Thank you, Ben.
Ben Reitzes: You're welcome. I wanted to ask about two things. I was wondering if you could talk a little bit more about your neocloud momentum and what that is looking like in terms of materiality. And then also, if you don't mind touching on AMD: with the launch, we're kind of hearing about you getting a lot of networking attached to the 450-type product, or their new chips. I'm wondering if that is a catalyst or not as you go throughout the year. Thanks so much.
Jayshree Ullal: Yeah. So, Ben, as you can imagine, the specialty cloud providers have historically been a cacophony of many types of providers. We are definitely seeing AI as one of the clear impacts. It used to be content providers and tier two cloud providers, but AI is clearly driving that section. And it's a suite of customers, some of whom have real financial strength and are looking now to invest, increase, and pivot to AI. So the rate at which they pivot to AI will greatly define how well we do there. They're not yet titans, but they want to be or could be titans, is the way to look at it. So we're going to invest with them, and these are healthy customers; it's nothing like the dotcom era, so we feel good about that. Then there are a set of neoclouds that we watch more carefully, because some of them are oil money converted into AI or crypto money converted into AI. There, we are going to be much more careful, because some of those neoclouds are looking at Arista as the preferred partner, but we would also be looking at the health of the customer, or they may just be a one-time buyer. We don't know the exact nature of their business, and those will be smaller businesses that don't contribute in large dollars, but they are becoming increasingly plentiful in quantity, even if not yet in dollar terms. So I think you're seeing this dichotomy of two types in that category, or three types: the classic CDN and security specialty providers and tier two clouds; the AI specialty providers, who are going to lean in and invest; and then the neoclouds in different geographies.
Ben Reitzes: Yeah. And the AMD?
Jayshree Ullal: Ah, yes, the AMD question. A year ago, I think I said this to you, but I'll repeat it: a year ago, it was pretty much 99% NVIDIA, right? Today, when we look at our deployments, we see about 20%, maybe a little more, 20% to 25%, where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred, because they're building best-of-breed building blocks for the NIC, for the network, for the I/O, and they want open standards as opposed to a full-on vertical stack from one vendor. So you're right to point out AMD in particular. It's a joy to work with Lisa and Forrest and the whole team, and we do very well in that multi-vendor open configuration.
Ben Reitzes: Thank-
Operator: Our next question will come from the line of Tim Long with Barclays. Please go ahead.
Tim Long: Thank you. Yeah, appreciate all the color, Jayshree. Maybe we could touch a little bit on scale-across. It's obviously gotten a lot of attention, particularly on the optics layer, from some others in the industry. Obviously, you guys have been in DCI, which is kind of a similar type of technology. But I'm curious what you think as far as Arista's participation in more of these next-gen scale-across networks. And is this something that would be good for, like, a Blue Box type of product, or would that be more in the scale-up? If you could give a little color there, that would be great.
Jayshree Ullal: Right, okay. Most of our participation today, we thought, would be Scale-Out. But what we're finding, due to the distributed nature of where and how they can get the power, and the growth in bisection bandwidth, is that Scale-Out or Scale-Across is essentially all about how much data you can move. As the workloads become more and more complex, you have to make them more and more distributed, because you just can't fit them in one data center, from a power, bandwidth, and throughput-capacity standpoint. These GPUs are also trying to minimize collective degradation, so as you scale up or out, the communication patterns become very much of a bottleneck. One way to solve it is to extend across data centers, both through fiber and, as you rightly pointed out, very high injection-bandwidth DCI routing. And then there's the sustained real-world utilization you need across all of these. For all these reasons, we are pleasantly surprised by the role of coherent long-haul optics, which we don't build, but we have worked very closely with companies that do, and they're seeing the lift. The 7800 spine chassis is the flagship platform and preferred choice, designed by our engineering team over several years for this robust configuration. So less Blue Box there, and much more of a full-on Arista flagship box with EOS and all of the virtual output queuing and buffering to interconnect regional data centers, with extremely high levels of routing and high availability too. This really plays into everything Arista stands for, coming together in a universal AI spine.
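To make the virtual output queuing Jayshree references concrete, here is a minimal toy sketch in Python. The class and names are illustrative only, not Arista's 7800 implementation; the point is simply that keeping one queue per egress port at each ingress prevents a congested output from head-of-line blocking traffic bound elsewhere.

```python
# Toy illustration of virtual output queuing (VOQ). Purely
# illustrative -- not Arista's implementation.
from collections import deque

class VoqIngress:
    """Each ingress port keeps one queue per egress port, so a
    congested egress cannot head-of-line block the others."""
    def __init__(self, num_egress_ports):
        self.voqs = [deque() for _ in range(num_egress_ports)]

    def enqueue(self, packet, egress_port):
        self.voqs[egress_port].append(packet)

    def dequeue_for(self, egress_port):
        """The scheduler grants service per egress; every other
        VOQ keeps draining independently."""
        q = self.voqs[egress_port]
        return q.popleft() if q else None

ingress = VoqIngress(num_egress_ports=4)
ingress.enqueue("pkt-A", egress_port=0)  # destined to a congested port
ingress.enqueue("pkt-B", egress_port=3)  # unaffected: separate queue
print(ingress.dequeue_for(3))            # pkt-B departs despite port 0 backlog
```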
Tim Long: Okay, excellent. Thank you, Jayshree.
Jayshree Ullal: Thank you.
Operator: Our next question will come from the line of Karl Ackerman with BNP Paribas. Please go ahead.
Karl Ackerman: Yes, thank you. Agentic AI should support an uptake in conventional server CPUs, where your switches have high share within data centers. So, given your upwardly revised outlook of 25% growth for this year, could you speak to the demand prospects you're seeing for front-end high-speed switching products that address agentic AI? Thank you.
Jayshree Ullal: Yeah, exactly, Karl. Well, let's just go back in history; it's not that long ago. Three years ago, we had no AI. We were staring at InfiniBand being deployed everywhere in the back end, and we pretty much characterized our AI as back end only, just to be pure about it, right? Three years later, I'm actually telling you we might do north of $3 billion this year and growing. That number definitely includes the front end, as it's tied to the back-end GPU clusters, and it's an all-Ethernet, all-AI system for agentic AI applications. Now, a lot of the agentic AI applications are mostly running with some of our largest cloud, AI, and specialty providers, but I don't rule out the possibility, and you can see this in our numbers, with north of 8,800 gig customers, that much of that is going to feed into the enterprise as well, as agentic AI applications come for genomic sequencing, science, and automation of software. I don't think, Ken, that any of us believe AI is eating software, but AI is definitely enabling better software, and we're certainly seeing that in Ken's team in our own adoption of it. So the rise of agentic AI will only increase not just the GPU but all gradations of XPU that can be used in the back end and front end.
Karl Ackerman: Thank you.
Jayshree Ullal: Thank you, Karl.
Operator: Our next question comes from the line of Simon Leopold with Raymond James. Please go ahead.
Simon Leopold: Thank you very much for taking the question. I wanted to come back to what's going on in the memory market. There are two aspects to this. One, how much of a toll have price hikes been, that is, are you raising your prices to customers? And two, within the substantial purchase commitments you have, is there a significant memory component, such that you've effectively pre-purchased memory at much lower prices than today's spot market? Thank you.
Jayshree Ullal: Thank you. Okay, I wish I could tell you we purchased all the memory we needed. No, we didn't. But while our peers in the industry have already done multiple price hikes, especially those in the server market or with memory-intensive switches, we have clearly been absorbing it. And memory is in our purchase commitments, but so is everything else; the entire silicon portfolio is in our purchase commitments. Due to some of the supply chain reactions, Todd and I have been reviewing this, and we do believe there will be a one-time increase on selected, especially memory-intensive, SKUs to deal with it. We cannot keep absorbing it if prices keep going up the way they have in January and February. And I would tell you that all the purchase commitments in my current, in Chantelle's current, commitments are not enough. We need more memory.
Simon Leopold: Thank you.
Operator: Our next question will come from the line of James Fish with Piper Sandler. Please go ahead.
James Fish: Hey, ladies, great quarter and a great end to the year. Jayshree, are hyperscalers getting nervous at all now about ordering ahead? What's your sense of potential pull-in of demand here, including for your own Blue Box initiative? And Chantelle, for you, just going back to George's question: I know it's difficult to answer, but are you anticipating that product deferred revenue will continue to grow through the year? Or is it just too difficult to predict, because customers could simply say, "Great, we accept, ship them all now," and we end up with a big quarter but product deferred down?
Jayshree Ullal: I'm going to let Chantelle answer this difficult question over and over again. Go ahead, Chantelle.
Chantelle Breithaupt: Sure, happy to. Thank you, James; I appreciate it. On deferred generally: we don't guide deferred, but to give you more insight, back to George's question, there will be certain deployments that get accepted and released. The difficult part, James, is what comes into the balance. So I can't guide it; that would be a wild guess about what's going to go in, which I don't think would be prudent. We'll continue to disclose what's in it, we'll continue to show you the balances, and we'll talk about the movement in the script, but that's probably as much as I can responsibly tell you looking forward.
Jayshree Ullal: James, this is one of those times when, no matter how many times you ask us this question in several different ways, the answer doesn't change. Okay?
James Fish: I mean, insanity is doing the same thing over and over again and expecting a different outcome next time.
Chantelle Breithaupt: Yes, I know. I know.
Jayshree Ullal: So, on the hyperscalers: are they getting nervous? I don't think so. You've seen what a strong business they have, how much cash they put out, and how successful they are. But I do think they're working more closely with us. Typically we had three to six months of visibility; we're now getting greater visibility.
Operator: Our next question will come from the line of Tal Liani with Bank of America. Please go ahead.
Tal Liani: Hi, guys. I almost have the same question for you that I asked last quarter, because you increased—
Jayshree Ullal: We did it again.
Tal Liani: You increased the guidance. Yeah, I'll explain. You increased the guidance, but the entire increase in the guidance is basically the cloud. And it's very simple to dissect your numbers: if I remove campus and I remove cloud, and you provide these two numbers for both '25 and '26, the rest of the business, which is 60% of the business, you're guiding to grow 0%. In previous years, by my estimates, it grew anywhere from 10% to 30%. So the question is, why are you guiding this way, with 60% of the business not growing? Is it because of—
Jayshree Ullal: Okay, can I—
Tal Liani: Is it just conservatism?
Jayshree Ullal: Tal, can I pause you there? Because I know you like to dissect our math several different ways and come up with conclusions. We're not guiding that our business is going to be flat, or that we're not going to grow here or there. Generally, when something is very fast-paced and growing, other things grow less. And exactly whether it will be flat or grow single or double digits, Tal, it's February; I don't know what the rest of the year will look like, okay? So I—
Tal Liani: No, but that's the question. The question is, is there allocation here? Meaning, let's say you have only a set number of memory slots, so you allocate them to cloud and the rest of the business doesn't get any. Or is it just conservatism and a lack of visibility?
Jayshree Ullal: It's neither of the above. We don't allocate to our customers; it's first in, first served. In fact, enterprise customers get a very high sense of priority, as do our cloud customers. Customers come first. But the allocation of memory may put us in a situation where demand is greater than our ability to supply. We don't know; it's too early in the year. We're confident that we could guide, six months after our Analyst Day, to a higher number, but we don't know what the next four quarters will look like to the precision you're asking for.
Tal Liani: Got it. Thank you.
Jayshree Ullal: Thank you.
Operator: Our next question comes from the line of Atif Malik with Citi. Please go ahead.
Adrienne Colby: Hi, it's Adrienne Colby for Atif. Thank you for taking my question. I was hoping to ask for an update on Arista's four large AI customers. I know the fourth customer you talked about was a bit slower to ramp to 100,000 GPUs; can you update us on their progress, and perhaps on what's next for the other three customers that have already crossed that threshold? And lastly, is there any indication that the fifth customer, the one that ran into funding challenges, might come back to you?
Jayshree Ullal: Okay, Adrienne, I'll give you some update, though I'm not sure I have precise updates. We are in all four customers deploying AI with Ethernet, so that's the good news. Three of them have already deployed a cumulative 100,000 GPUs and are now growing from there, clearly migrating beyond pilots and production to other centers, with power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it's still below 100,000 GPUs at this time, but I fully expect them to get there this year, and then we shall see how they grow beyond that.
Operator: Our next question will come from the line of Michael Ng with Goldman Sachs. Please go ahead.
Michael Ng: Hey, good afternoon. Thank you for the question. I have one question and one follow-up. First, could you talk a little bit about the new customer segmentations you unveiled, Cloud and AI, and AI and Specialty? What's the philosophy behind them, and does the change signal more opportunity in places like Oracle and the neoclouds? Second, with Cloud and AI at 48% of revenue and customers A and B at a combined 36%, you have 12% left over. Is that a hyperscale customer? Does it imply you have a new hyperscaler approaching 10%? Obviously, we thought the next biggest one would have been Oracle, but that's moved out of cloud now. Any thoughts there would be great. Thank you.
Jayshree Ullal: Yeah, sure, Michael. Well, first of all, my math is 26 plus 16, so it's 42; you don't have 12% left over unless you used 58. It's really only 6%. On the Cloud and AI Titans, the way we classified that is significantly large-scale customers, greater than 1 million servers, greater than 100,000 GPUs, with an R&D focus on models and sometimes even their own XPUs. This can, of course, change, and others may come into it, but it's a very select set of customers, about five or fewer; that's the way to think of it. On the change to the specialty cloud, as I said, we're noticing that some customers are really focused solely on AI with some cloud, as opposed to cloud with some AI. So when it's heavily AI-centric, especially with Oracle's AI Acceleron and the multi-tenant partnerships they've created, they naturally have a dual personality: some of it is OCI, the Oracle Cloud, but some of it is fully AI-based. The shift in their strategy made us shift the category and bifurcate the two.
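For readers reconciling the math in this exchange, here is a quick arithmetic check using only the percentages stated on the call (Cloud and AI at 48% of revenue; customers A and B at 26% and 16%):

```python
# Revenue shares as stated on the call (percent of total revenue).
cloud_and_ai = 48.0   # Cloud and AI segment
customer_a = 26.0     # customer A
customer_b = 16.0     # customer B

combined_ab = customer_a + customer_b    # 42.0 -- not the 36 in the question
remainder = cloud_and_ai - combined_ab   # 6.0  -- Jayshree's "really only 6%"
misread = cloud_and_ai - 36.0            # 12.0 -- the figure in the question
print(f"A + B = {combined_ab}% -> remainder = {remainder}% (not {misread}%)")
```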
Michael Ng: Thank you, Jayshree.
Jayshree Ullal: Thank you.
Rudolph Araujo: Regina, we have time for one last question.
Operator: Our final question will come from the line of Ryan Koontz with Needham & Company. Please go ahead.
Ryan Koontz: Great, thanks for squeezing me in. Jayshree, in your prepared remarks you talked about your telemetry capabilities. I wonder if you could expand on that: where are you seeing key differentiation, and in what sorts of use cases are you really able to seize the upper hand competitively with telemetry? Thank you.
Jayshree Ullal: Yeah, I'll say some, and I think Ken, who's been designing and working on this, will say even more. Ken Duda, our President and CTO. Telemetry is at the heart of both our EOS software stack and our CloudVision platform for enterprise customers. We have real-time streaming telemetry that has been with us since the beginning of time, constantly keeping track of all our switches; it isn't just a pretty management tool. At the same time, our cloud and AI customers are seeking some of that visibility too, so we have developed some deeper AI capabilities for telemetry as well. Over to you, Ken, for more detail.
Kenneth Duda: Yeah, thanks for that question. Look, the EOS architecture is based on state orientation: the idea that we capture the state of the network and then stream that state out from the system database on the switches into CloudVision, or whatever system can receive it. We're extending that capability for AI with a combination of in-network data sources, flow control, RDMA counters, buffering and congestion counters, and host-level information, including what's going on in the RDMA stack on the host, what's going on with collectives and latencies, and any flow control or buffering problems in the host NIC. We then pull all of that information together in CloudVision and give the operator a unified view of what's happening in the network and in the host. This greatly aids our customers in building an overall working solution, because the interactions between the network and the host can be complicated and difficult to debug when different systems are collecting them.
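To make the state-streaming idea concrete, here is a minimal illustrative sketch, not Arista's actual EOS or CloudVision API: every name here (Sample, correlate, the counter keys) is hypothetical. It shows a collector merging streamed switch counters and host RDMA counters onto one timeline so both sides of a congestion event appear together, which is the debugging benefit Ken describes.

```python
# Hypothetical sketch of correlating streamed network and host telemetry.
# None of these names are real Arista APIs; they only illustrate the
# "capture state, stream it, join network + host views" idea.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Sample:
    ts: float       # timestamp of the state snapshot
    source: str     # e.g. "switch:leaf1" or "host:gpu-node-7"
    counters: dict  # e.g. {"pfc_pause_tx": 40, "rdma_retries": 3}

def correlate(samples, window=1.0):
    """Bucket switch and host samples into shared time windows so an
    operator sees both sides of a congestion event together."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[int(s.ts // window)].append(s)
    return dict(buckets)

stream = [
    Sample(100.2, "switch:leaf1", {"pfc_pause_tx": 40, "ecn_marks": 900}),
    Sample(100.7, "host:gpu-node-7", {"rdma_retries": 3, "nic_buf_drops": 1}),
    Sample(101.3, "switch:leaf1", {"pfc_pause_tx": 2, "ecn_marks": 15}),
]

for window, events in sorted(correlate(stream).items()):
    print(f"t={window}s:", [(e.source, e.counters) for e in events])
```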
Jayshree Ullal: Great job, Ken. I can't wait for that product.
Kenneth Duda: That's true.
Ryan Koontz: Really helpful. Thank you.
Rudolph Araujo: This concludes Arista Networks' fourth quarter 2025 earnings call. We have posted a presentation that provides additional information on our results, which you can access on the Investor Relations section of our website. Thank you for joining us today and for your interest in Arista.
Operator: Thank you for joining, ladies and gentlemen. This concludes today's call. You may now disconnect.