Elastic (NYSE:ESTC), the search analytics platform, reported its second-quarter fiscal 2024 financial results, surpassing expectations in both revenue and non-GAAP operating margin. The company witnessed a 17% YoY increase in revenue, with Elastic Cloud growing 31% YoY. Significant advancements were seen in the use of Elasticsearch Relevance Engine (ESRE) for generative AI applications. The company also announced strategic collaborations with Amazon (NASDAQ:AMZN) Web Services and Google (NASDAQ:GOOGL) Cloud, and multiyear marketplace deals with DocuSign (NASDAQ:DOCU) and a leading video sharing platform.
Key takeaways from the earnings call include:
- Elastic reported a 17% YoY increase in revenue, with Elastic Cloud driving growth at 31% YoY.
- The company signed multiyear marketplace deals leveraging ESRE's semantic and vector search capabilities.
- Elastic is focusing on consolidating customers onto its platform, displacing incumbent solutions for observability and security.
- The company announced a global strategic collaboration agreement with Amazon Web Services and extended its collaboration with Google Cloud.
- Elastic delivered a non-GAAP operating margin of 13% for the quarter and remains committed to achieving its non-GAAP operating margin target for the full fiscal year.
Elastic's growth is largely attributed to its focus on generative AI applications and strategic collaborations with major cloud hyperscalers. The company recently announced a two-year collaboration agreement with Amazon Web Services to integrate Amazon Bedrock into the Elastic AI Assistant and is accelerating its joint activities and technology integrations with Google Cloud. It also highlighted the positive reception of its AI capabilities in search, observability, and security, which are expected to drive larger deals.
The company reported strong performance in the quarter, with almost $40 million in ARR Cloud revenue, driven by healthy consumption from customers across various geographies and industries. It also noted initial contributions to consumption from newer GenAI workloads and discussed its GenAI monetization strategy.
Ashutosh Kulkarni, Elastic's Chief Executive Officer, discussed the company's growth potential and competitive advantages, highlighting the expansion of the total addressable market (TAM) in search, the compelling capabilities of its AI assistants in observability and security, and the ease of use that its platform brings.
Elastic ended the quarter with over 1,220 customers with annual contract values above $100,000. For the full fiscal year, it expects total revenue in the range of $1.247 billion to $1.253 billion, representing 17% year-over-year growth at the midpoint. Despite potential macro concerns or consumption fluctuations in the second half, the company remains confident in its long-term growth prospects.
InvestingPro Insights
Elastic's recent financial results have not only beaten expectations but also showcased a robust strategic direction, particularly in the realm of generative AI and cloud partnerships. InvestingPro data underscores the company's financial dynamics with a market capitalization of approximately $7.93 billion and a noteworthy revenue growth of 21.03% over the last twelve months as of Q1 2024. These figures align with the company's reported 17% year-over-year revenue increase, reinforcing the strength of its market position.
Two key InvestingPro Tips offer additional insight into Elastic's financial health and future outlook. Firstly, Elastic holds more cash than debt on its balance sheet, providing a cushion against market volatility and enabling strategic investments. Secondly, while Elastic is not currently profitable, analysts predict the company will turn a profit this year, which may signal a pivotal point for potential investors.
For those looking to dive deeper into Elastic's financials and strategic position, InvestingPro provides a wealth of additional tips. Currently, there are 9 more InvestingPro Tips available, which could be particularly valuable for investors considering Elastic's potential in the rapidly evolving tech landscape.
As we approach Cyber Monday, it's an opportune time to consider an InvestingPro subscription, now available at a discount of up to 60%. To sweeten the deal, use the coupon code sfy23 to get an additional 10% off a 2-year InvestingPro+ subscription. With this subscription, investors can access real-time data, expert analysis, and exclusive insights that could be crucial in making informed decisions about companies like Elastic.
Full transcript - Elastic (ESTC) Q2 2024:
Operator: Good afternoon, and welcome to the Elastic Second Quarter Fiscal 2024 Earnings Results Conference Call. [Operator Instructions] Please note, this event is being recorded. I would now like to turn the conference over to Anthony Luscri, Vice President, Investor Relations. Please go ahead.
Anthony Luscri: Thank you. Good afternoon, and thank you for joining us on today's conference call to discuss Elastic's second quarter fiscal 2024 financial results. On the call, we have Ash Kulkarni, Chief Executive Officer; and Janesh Moorjani, Chief Financial Officer and Chief Operating Officer. Following their prepared remarks, we will take questions. Our press release was issued today after the close of market and is posted on our website. Slides, which are supplemental to the call, can also be found on the Elastic Investor Relations website at ir.elastic.co. Our discussion will include forward-looking statements, which may include predictions, estimates or expectations regarding the demand for our products and solutions and our future revenue and other information. These forward-looking statements are based on factors currently known to us, speak only as of the date of this call and are subject to risks and uncertainties that could cause actual results to differ materially. We disclaim any obligation to update or revise these forward-looking statements unless required by law. Please refer to the risks and uncertainties included in the press release that we issued earlier today, included in the slides posted on the Investor Relations website, and those more fully described in our filings with the Securities and Exchange Commission. We will also discuss certain non-GAAP financial measures. Disclosures regarding non-GAAP measures, including reconciliations with the most comparable GAAP measures, can be found in the press release and slides. The webcast replay of this call will be available on our company website under the Investor Relations link. Our third quarter fiscal 2024 quiet period begins at the close of business on Wednesday, January 17, 2024. On December 7, 2023, we will be participating in the Barclays (LON:BARC) Global Technology Conference. With that, I'll turn it over to Ash.
Ashutosh Kulkarni: Thank you, Anthony, and welcome back to Elastic, and thank you all for joining us today. I'm pleased with how we performed in the second quarter, in what is still a challenging external environment. We exceeded our expectations across both revenue and non-GAAP operating margin. In Q2, revenue grew 17% year-over-year with Elastic Cloud growing 31% year-over-year, fueled by continued improvement in cloud consumption as well as our success in generative AI. And we exceeded our profitability goal, delivering non-GAAP operating margin of 13%. Elastic's mission is to enable everyone to find the answers that matter from all data in real time at scale. Search is at the core of all our solutions and everything we do, whether you are searching a website or searching for security threats in your organization. Search is also a critical part of the infrastructure for AI. We are proud that Elastic is the leading search analytics platform used by tens of thousands of customers and supported by a large community of users. Our thoughtful investment and innovation in AI have continued to drive customer excitement and engagement with Elastic, and this was visible in our business in Q2. During the quarter, we continued to see recurring trends drive momentum in our business. The first of these was generative AI. I met with dozens of customers across all geographies and a recurring theme was a strong desire to use Elasticsearch Relevance Engine, or ESRE, to build generative AI applications. Generative AI is driving a resurgence of interest in search as customers use semantic search, vector search and hybrid search to ground large language models with their private business context, and ESRE provides the most comprehensive and enterprise-ready platform for these use cases. While it will take some time for generative AI spend to become a significant driver of our revenue, we are very excited about the long-term opportunity. As an example, we signed a multiyear marketplace deal with DocuSign, the world leader in eSignature and contract life cycle management solutions. More than 1.4 million customers and more than 1 billion users in over 180 countries use DocuSign solutions to make doing business smarter, easier and more secure. Search is an essential component of DocuSign's product, and our advanced capabilities with semantic and vector search will enable DocuSign to extend its capabilities. We also signed a contract with a leading video sharing platform for Elastic Cloud via the Google Marketplace to provide hybrid search, blending AI, vector search, semantic search and Reciprocal Rank Fusion, or RRF. Offering a platform that enables millions of users to create, edit and share videos, the company is using the Elasticsearch Relevance Engine, which will ultimately serve as the core vector database for its millions of videos and associated metadata. The company has created and stored vector embeddings in Elastic in order to provide watch list recommendations, leading to a better search experience. In Q2, we saw a significant increase in the use of ESRE. ESRE includes a built-in vector database, the ability to bring in your own machine learning models and also ELSER, which is our own proprietary machine learning model for semantic search. This quarter, we saw rapid adoption of ELSER, which we first released with the ESRE launch. With ELSER, customers are able to quickly implement semantic search without any model training to power generative AI use cases. 
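For readers unfamiliar with the hybrid search pattern Kulkarni describes, the sketch below shows roughly what a combined lexical-plus-vector query with Reciprocal Rank Fusion can look like through the official Elasticsearch Python client. It is a minimal illustration, assuming Elasticsearch 8.9 or later (RRF was in technical preview in this period); the index name, fields and placeholder embedding are hypothetical and are not drawn from any customer deployment mentioned on the call.

```python
from elasticsearch import Elasticsearch

# Endpoint and credentials are placeholders.
es = Elasticsearch("https://localhost:9200", api_key="<API_KEY>")

query_text = "how do I trim a video clip"

# In practice this vector would come from an embedding model (hosted in
# Elastic or called externally) with the same dimensionality as the
# dense_vector field in the index mapping.
query_vector = [0.12, -0.03, 0.57]  # truncated placeholder embedding

# Hybrid search: a lexical (BM25) leg plus a kNN vector leg, with the two
# ranked result sets blended by Reciprocal Rank Fusion (RRF).
resp = es.search(
    index="videos",  # hypothetical index of video metadata
    body={
        "query": {"match": {"title": query_text}},
        "knn": {
            "field": "title_embedding",   # hypothetical dense_vector field
            "query_vector": query_vector,
            "k": 10,
            "num_candidates": 100,
        },
        "rank": {"rrf": {}},  # fuse the lexical and vector rankings
        "size": 10,
    },
)

for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("title"))
```

In this pattern the lexical leg catches exact keyword matches while the vector leg catches semantically similar text, and RRF merges the two rankings without hand-tuned score weights.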
With the release of the even more efficient ELSER Model 2 earlier this month, we expect to see this momentum continue. We also saw hundreds of additional customers starting to use ESRE for vector search use cases in Q2, building on our momentum from the first quarter. We are very pleased with this growth and are also excited by the progress we have been making on the innovation front. In Elastic 8.11, we delivered support for dense vectors with up to 4,096 dimensions, which is already greater than what embedding models require. We also delivered the first version of our machine learning inference API to improve the overall developer experience when building generative AI applications with Elastic. When search powers AI, customers are able to quickly build generative AI applications while reducing hallucinations at the lowest possible cost. Elastic's prospects as a key component of the modern IT stack for generative AI remain extremely strong. The second trend we saw in Q2 was customers continuing to consolidate onto the Elastic platform for multiple use cases. We had many key wins where we displaced incumbent solutions for observability and security and helped customers save on their overall IT spend while gaining even greater value through our many innovations. For example, we closed a multiyear 8-figure deal with a leading global wealth management company. Having previously used a legacy vendor, the company moved to Elastic Security for SIEM for deeper threat hunting capabilities in order to keep up with data volume growth and threat sophistication. They are confident in our cloud native technology, the speed, scalability and flexibility of Elastic, as well as our generative AI capabilities. Additionally, a leading risk transfer company in Europe signed a multiyear subscription with us and replaced their legacy security provider. Elastic stood out as their preferred solution to fortify their organization against security threats and strengthen their security posture. By leveraging Elastic SIEM and tapping into our advanced capabilities such as cross-cluster search, the company can now effectively monitor and protect its large, complex environment from a single pane of glass on one unified platform. As customers continue to consolidate onto our platform, we have been investing in capabilities that make it possible for customers to migrate easily from incumbent solutions to Elastic. In 8.11, we launched a powerful new piped query language, Elasticsearch Query Language, or ES|QL. ES|QL is designed to transform, enrich and simplify data investigation with concurrent processing. ES|QL enables data aggregation and analysis across a variety of data sources from a single query, making it an incredibly powerful tool for data analysts, site reliability engineers and security operations center analysts alike. The excitement from our customers on this capability in technical preview has been tremendous. We have also been investing in our AI Assistants for observability and security, which make it possible for customers to leverage the power of AI to aid the humans involved in the detection, diagnosis and remediation workflows in observability and security. These AI Assistants, which are in our enterprise subscription tier, are allowing us to leverage our leadership in AI even in the areas of observability and security. And this is something that we believe will continue to be a tailwind for us. 
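To give a feel for the piped query style Kulkarni describes, here is a hedged sketch of an ES|QL query issued against the _query endpoint that shipped in technical preview with 8.11, again via the Python client. The index pattern and field names are illustrative only, and the raw perform_request call is used because dedicated ES|QL helpers may not exist in older client versions.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<API_KEY>")

# Each pipe stage filters, transforms or aggregates the output of the
# previous stage, all within a single query.
esql = """
FROM logs-*
| WHERE status_code >= 500
| STATS errors = COUNT(*) BY service.name
| SORT errors DESC
| LIMIT 10
"""

# ES|QL was exposed via the _query endpoint in 8.11 (technical preview).
resp = es.perform_request(
    "POST",
    "/_query",
    headers={"accept": "application/json", "content-type": "application/json"},
    body={"query": esql},
)

# The response is columnar: column descriptors plus rows of values.
print([col["name"] for col in resp["columns"]])
for row in resp["values"]:
    print(row)
```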
The final trend in the quarter was around cloud consumption, where continued improvements helped drive cloud revenue growth. Customers remain focused on costs, but they have generally optimized their Elastic deployments and are now focused on driving new workloads to Elastic. This is an area where we continue to lean in to help our customers get the most out of Elastic. This customer-centric approach drives improved customer satisfaction and engagement and increases consumption over time. This drove our interest in acquiring Opster. Opster develops products for monitoring, managing and troubleshooting Elasticsearch and OpenSearch. They are the creators of AutoOps, a powerful platform that provides deep insight to detect and resolve issues with cluster health, improve search performance and reduce hardware costs. By joining forces with Opster, we will be able to help our customers get even more out of their Elasticsearch deployments and drive greater customer satisfaction and consumption. As we progress on our journey towards our serverless offerings, the kinds of management and monitoring capabilities that Opster has built will make our platform even more resilient and easier to use, and I'm very excited about this. Now on to our many product innovations in Q2. In addition to the ES|QL and generative AI innovations that I've already mentioned, the team delivered amazing capabilities across the Elastic platform. We added chat capabilities with the ES|QL integration to our AI Assistants. This allows customers to use natural language to explain a query and have the AI Assistant provide the ES|QL query syntax, explain what the query does and provide a prompt to run the requested query. In observability, Universal Profiling became generally available. And we also integrated it with application performance monitoring, or APM. With this new capability, users will be able to quickly correlate application performance issues with underlying system functions without needing to switch context from APM to Universal Profiling. When search powers observability, site reliability engineers have greater visibility across all signal types, reducing the time to resolve system issues. In security, we delivered Cloud Security Posture Management for Google Cloud. And now our customers can use Elastic to secure their workloads on Google Cloud in addition to their workloads on AWS. We also delivered out-of-the-box integrations with Wiz and Palo Alto (NASDAQ:PANW) Prisma Cloud to make it easier to get a view of the entire threat landscape in the Elastic platform. When search powers security, SOC analysts have greater visibility into difficult-to-detect threats, reducing the time to hunt and remediate threats. Finally, in search and generative AI, we delivered integrations with LangChain, LlamaIndex and Amazon Bedrock, further simplifying the developer experience and providing our customers with greater flexibility as they build generative AI applications. Now on to our go-to-market focus and investments. We see a tremendous opportunity ahead of us as our search analytics platform becomes a key part of the modern IT stack for building GenAI applications. We firmly believe that our relationships with the major cloud hyperscalers will be a key factor in our success in the future. And towards that end, we are continuing to invest in these relationships. We just announced a new two-year global strategic collaboration agreement with Amazon Web Services. 
This will accelerate the integration of Amazon Bedrock into the Elastic AI Assistant, enabling customers to get richer and more contextualized and relevant results by using their preferred large language model, coupled with the organization's unique IT environment and proprietary data sets. Also, building on our recent joint success with Google, we are accelerating and extending our joint go-to-market activities and technology integrations with Google Cloud. Our collaboration includes the powerful combination of the Elasticsearch Relevance Engine and Google Cloud's Vertex AI platform, which empowers developers with a scalable tool set to build privacy-first generative AI applications. Beyond our investments with cloud hyperscalers, we are also investing in broadening our global reach with 12 Elastic user conferences across major cities in the Americas, EMEA and APJ. Based on the amazing customer reception we have seen to date, we expect over 5,000 in-person attendees and more than 100 customer and partner speakers to participate in ElasticON across these 12 events. Finally, I would like to reiterate our commitment to managing the business with discipline. We delivered a record non-GAAP operating margin of 13% for the quarter, which was better than our expectations, and we remain on track to deliver on our non-GAAP operating margin target for the full fiscal year. Janesh will talk further about this in a moment. To recap, we had an excellent quarter. I am pleased with how we managed the business with discipline and executed on our strategy, and I'm very excited about the second half of the year. At a time when companies are looking for ways to reduce costs and gain efficiencies without sacrificing innovation, especially around generative AI, Elastic's search analytics platform is becoming the natural choice for these businesses. We view generative AI as a massive tailwind that will continue to benefit our business in the years to come. In closing, I want to thank our team for their focused execution. And I also want to thank our customers, partners and investors for their continued support and confidence. Now I'll turn it over to Janesh to go through our financial results in more detail.
Janesh Moorjani: Thanks, Ash. We once again delivered a strong quarter driven by consistent execution despite a complex external environment. We were pleased that we came in above the high end of our guidance for the quarter, both on the top line and the bottom line. We delivered 17% year-over-year growth in total revenue in the second quarter, with Elastic Cloud driving our strong results, accelerating to 31% year-over-year growth. We continued our focus on profitability, delivering another record quarter with non-GAAP operating margin of 13%, reflecting improved consumption trends versus Q1 and our strong investment discipline and demonstrating the leverage inherent in our business model as we continue to scale the business. As Ash mentioned, we continue to see strong customer engagement around generative AI use cases. We are increasingly seeing customers make technical decisions to select Elastic based on our product leadership position. The multiyear investments we have been making in generative AI are beginning to positively impact our go-to-market, especially around search-specific use cases, and we are positioned to be a long-term leader in this space. While we expect generative AI will present a meaningful revenue opportunity for us in the coming years, we believe it will take some time for the revenue from GenAI to become significant. During the second quarter, we saw continued improvements in cloud consumption patterns as customers increased their consumption against commitments that they had previously made. With that said, we continue to monitor consumption patterns against the backdrop of an evolving macro and geopolitical environment. While many customers have already gone through optimization of their consumption use cases, we do continue to see cost consciousness and spend management as themes in the market. Let's get deeper into the results for Q2 and our outlook. Total revenue in the second quarter was $311 million, up 17% year-over-year or 16% year-over-year on a constant currency basis. Subscription revenue in the second quarter totaled $288 million, up 19% year-over-year or 18% year-over-year in constant currency and comprised 93% of total revenue. Within subscriptions, revenue from Elastic Cloud was $135 million, growing 31% year-over-year on an as-reported basis or 30% year-over-year on a constant currency basis, reflecting the stronger consumption trends I just mentioned. Elastic Cloud represented 43% of total revenue in the quarter, up from 39% a year ago. Elastic Cloud revenue derived from month-to-month arrangements contributed 15% of total revenue, the same as in the prior quarter. Professional services revenue in the second quarter was $23 million, down 1% year-over-year on an as-reported basis and down 3% year-over-year on a constant currency basis. As we've said before, professional services revenue may fluctuate across quarters based on the timing of services delivery, and we do not expect it to vary significantly in mix over time. To add more context around overall deal flow, EMEA grew fastest during the quarter, followed by APJ and the Americas. We continue to see a healthy balance across the business based on geography, solutions and verticals, and this diversification reflects the breadth and popularity of our platform. Moving on to customer metrics. We ended the quarter with over 1,220 customers with annual contract values of more than $100,000. 
Looking at customer additions more broadly, we ended the quarter with over 4,230 customers above $10,000 in ACV and approximately 20,700 total subscription customers. Our net expansion rate, which, as you know, is a trailing 12-month lagging indicator, was approximately 110%, in line with our expectation for the quarter. Now turning to profitability, for which I'll discuss non-GAAP measures. Gross margin in the quarter was 76.8% versus 76.5% in the prior quarter, reflecting a slightly higher subscription mix. Our operating margin in the quarter was 13%, which was better than expected. The strong operating margin performance was driven by our revenue outperformance and our continued focus on managing our expenses as we invest thoughtfully to drive future growth. Diluted earnings per share in the second quarter was $0.37. Our free cash flow on an adjusted basis was negative $3 million in the quarter or negative 1% adjusted free cash flow margin, in line with the expectations we had previously shared. As we stated on our previous call, there were some cash collection and payment timing movements between the first and second quarter as well as one-time payments of $13 million in Q2 that related to previously completed acquisitions. For the full fiscal year, there is no change in our prior outlook, and we continue to expect free cash flow margin on an adjusted basis for fiscal '24 to be slightly above the non-GAAP operating margin for fiscal '24. We continue to maintain a strong balance sheet. We ended the second quarter with cash, cash equivalents and marketable securities of $966 million. Turning to guidance. While we were very pleased with our outperformance in the first half of fiscal '24, we continue to be prudent as we plan for the rest of the year. Despite the many moving parts in the broader macro climate, we anticipate that business conditions will remain largely unchanged. We do expect to see growth in both self-managed and cloud subscription revenue. Additionally, though we are seeing customers ramp their consumption, and we've been very pleased with that trend, we believe it is appropriate to anticipate that consumption patterns may continue to fluctuate in the near term. In terms of operating expenses, as we execute in the second half of this year, it will be important for us to exit the year with an appropriate level of investment to secure our success for next year. Therefore, we continue to invest with discipline in the business as we drive increasingly profitable growth on an annual basis. In addition to incremental organic investments in the second half, our model assumes approximately $12 million of seasonally higher expenses in the fourth quarter related to the timing of employee benefit costs and our engineering all-hands event. We have experienced similar seasonality in prior years, and these expenses were anticipated in the guidance that we had initially laid out for the year. With that background, for the third quarter of fiscal '24, we expect total revenue in the range of $319 million to $321 million, representing 17% year-over-year growth at the midpoint or 16% on a constant currency basis. We expect non-GAAP operating margin for the third quarter of fiscal '24 in the range of 11.5% to 12% and non-GAAP earnings per share in the range of $0.30 to $0.32 using between 103 million and 104 million diluted weighted average ordinary shares outstanding. 
For full fiscal '24, we are raising our outlook and now expect total revenue in the range of $1.247 billion to $1.253 billion, representing 17% year-over-year growth at the midpoint or 16% on a constant currency basis. We expect non-GAAP operating margin for full fiscal '24 in the range of 10.25% to 10.75% and non-GAAP earnings per share in the range of $1.06 to $1.15 using between 102 million and 104 million diluted weighted average ordinary shares outstanding. Looking beyond this fiscal year, we continue to expect to grow revenue faster than overall expenses in fiscal '25, further expanding our non-GAAP operating margin. In summary, we are pleased with our strong performance in the first half and are confident in our outlook for the rest of the year. And with that, let's go ahead and take questions. Operator?
Operator: [Operator Instructions] The first question is from Brent Thill with Jefferies. Please go ahead.
Brent Thill: Thanks. Ash, I'm curious if you could just talk about the timing of revenue impact with AI and how you expect that to unfold. And quickly for Janesh, there were a few client questions around NRR being lower and new logo growth also being slower, but now seeing, you know, good strength in cloud; can you just break that apart and give us a little better view of what you're seeing? Thanks.
Ashutosh Kulkarni: Thanks for the question, Brent. So, you know, just first of all, very pleased with the overall performance in Q2, and particularly, like I talked about in the prepared remarks, the adoption of ESRE. And as you know, you know, ESRE includes not just our native vector search capability, but so much more beyond that. And we are seeing customers really adopting that incredibly well. And, you know, we also saw in the quarter that customers are making their purchase decisions for all kinds of use cases, looking at the kind of leadership position that we have in generative AI and seeing that the innovations that we are driving there is going to help them in all kinds of use cases. In terms of the monetization, what I'd say is, it's still early days. It's going to take some time for customers to ramp the usage of these generative AI workloads. So, it isn't a significant contribution to revenue at this time, you know, but we are pleased with the contribution to consumption that we're already seeing in these early phases from these GenAI use cases. Let me turn it to Janesh for the second question.
Janesh Moorjani: Hi, Brent. So, on the net expansion rates and cloud consumption trends, you know, first off, when I just step back and look at Q2 overall, we are very happy with our overall performance. When you look at the numbers on revenue and on cloud growth, there was a lot for us to be pleased about. The net expansion rate, as you know, is a lagging indicator and it's a trailing 12-month measure. So even as cloud consumption ramps up like we saw here in Q2, it will take some time for that consumption to be fully reflected in the net expansion rate. And that's why when I think about the business, I usually look at revenue as the best indicator of current performance. And so, I think the net expansion rate can continue to move a few points in either direction in the near term. But generally, we expect that the trends in consumption over time should alleviate some of the downward pressure that we had experienced in the net expansion rate over the past few quarters. The other piece that you mentioned was on consumption overall. If I just step back and think about some of the consumption trends we saw, pretty healthy patterns across industries and across geographies. It was broad-based. Our sense is that optimization has stabilized. Customers are still cost-conscious, as we mentioned, but generally, they are where I think they want to be on the optimization. So, we see customers ramping their consumption towards their commitment levels, and we are very pleased that we're able to help them scale their usage and realize the value of our solutions. So, those are some of the puts and takes, and we're quite happy with Q2 and looking forward to the rest of the year.
Brent Thill: Thanks. Welcome back, Anthony.
Operator: The next question is from Raimo Lenschow with Barclays. Please go ahead.
Raimo Lenschow: Hi, thanks. And congrats from me as well. Two quick questions, one for Ash and one for Janesh. Ash, if you think about it, you're obviously the strong player in search, and now with the AI capabilities and ESRE, you kind of, you know, are able to revisit that installed base. What do you see in terms of, like, the overall reviving of that search kind of client base and kind of reengaging with them a lot more here? And what are you seeing there in pipeline, customer conversations, et cetera? Because that seems to me like a nice big opportunity to just kind of revisit and kind of reengage with clients there. And the second question for Janesh, is like, you talked about the consumption trends. Just on that note, if I look at the implied guidance for Q4, that does look like a, you know, very kind of small additional number coming in there. Is there anything specific on Q4? Is that just conservatism? Thank you.
Ashutosh Kulkarni: Yes, Raimo, thanks for the question. So, you know, I think, like I mentioned even in my prepared remarks, the generative AI is really driving a resurgence of interest in search. And, you know, I've been on the road quite a bit meeting with our customers. We recently, in just this past few months, had multiple ElasticON events, first here in San Francisco, then in Frankfurt, then in Amsterdam, literally hundreds of customers, many customer speakers. So, I had the opportunity to meet with many of our clients there. Also at AWS re:Invent earlier this week, I was there in person meeting with our customers and some of our partners. And across the board, what we are hearing, what I'm seeing is a significant interest in generative AI. And a lot of it is around use cases that, you know, we would traditionally bucket into the category of search. And the thing that we are seeing is, a lot of the interest is around trying to completely change customer experiences, trying to completely change support experiences. And there's use cases across the board in every vertical. So, that's something that we feel really, really good about in terms of the long-term position and the long-term view for us, and over the long haul, what it can do in terms of TAM expansion in the overall area of search. You know, the other thing that I'll say is, right now what I'm seeing is, a lot of the use cases that are being put into production are internal facing. So, you know, customers are building chat experiences or customers are building applications that are viewed by either their internal support engineering teams, or their internal SRE teams, or their internal, you know, employee portals, and so on. And that's largely because they are getting a level of comfort with these large language models. And that's also where they really find a lot of value in Elastic, because, you know, the ability that Elastic provides to ground these large language models in the context of their businesses is something that they see a lot of value from in terms of reducing hallucinations and so on. And over time, we believe that that's going to then make it possible for them to expand to external end-user-facing use cases. And that then again is going to be another expansion of the overall opportunity. So, very excited, and absolutely this is something that we are leaning-in on. And let me turn it to Janesh on your second question.
Janesh Moorjani: Hi, Raimo. So, as I think about the guidance and trying to unpack that, maybe just commenting on the second half first, and then I'll touch on Q4 as well. You know, overall for the second half, the way we approached it was just recognizing that we've seen good, strong consumption patterns here as customers are scaling their usage up to their committed levels. And if I think about the external environment, the macro is generally stable. But as we said, cost consciousness continues to be a theme in the market and is important for customers. So, we've benefited from that to a degree as customers have made greater commitments to us. But we've also seen, you know, in the past that that can cause consumption to fluctuate if customers drive operational changes. So, we've simply considered that possibility of potential consumption fluctuation in the future as we built our guidance. And just for clarity, we've not actually seen any big shift in the external environment, but we just think it's best to plan prudently. We're executing really well, we're excited, we're confident about the rest of the year. And specific to Q4, I'll just point out that the only unique thing about Q4 is it's a slightly shorter quarter for us. Given that 2024 will be a leap year, Q4 will have 90 days instead of 92 days. So, that's just something that creates a bit of a headwind in Q4.
Raimo Lenschow: Okay. Makes sense. Thank you. Congrats again.
Operator: The next question is from Ittai Kidron with Oppenheimer. Please go ahead.
Harshil Thakkar: Hi guys. This is Harshil on for Ittai. Can you hear me?
Ashutosh Kulkarni: Yes.
Harshil Thakkar: Got it. So, earlier this year, you guys gave us an update on the $2 billion revenue target and how that timeline had been extended a bit with the challenging macro. But now, you know, with the environment seemingly a bit more stable, consumption starting to improve, and the momentum you're seeing with generative AI, I'm just curious, you know, is there anything you can share on that timeline? And, you know, versus eight months ago, has that maybe moved a bit forward?
Janesh Moorjani: Hi, Harshil, this is Janesh. So, you know, we withdrew that $2 billion goal sometime back, but the way we think about this fundamentally is that we've got a significant opportunity ahead of us, and we are working hard to prosecute that opportunity. You've seen tremendous momentum here from the standpoint of the overall business, and particularly in terms of cloud growth as we address that opportunity. And all of that is additionally fueled by the momentum that we are seeing in generative AI. So, we don't want to get too far ahead and start to predict future revenue growth beyond this year at this stage. But there's no question in our minds that we are working hard to build a multi-billion dollar company at scale in the future. And we'll provide you with appropriate updates as we go on that. But for now, we are focused on executing in this year and feel very good about the back half of the year.
Harshil Thakkar: Got it. That's helpful. And then just on NRR, is this a 110% level? Is this an area where we should expect it to kind of bottom out? And as we look to fiscal '25, what levers do you see that could get NRR back up to the historical level?
Janesh Moorjani: Yes, as I mentioned just a couple of minutes ago, because the net expansion rate is a lagging indicator, even as cloud consumption ramps, it just takes time for that to be reflected in the net expansion rate. So, I think it can move a couple of points in either direction in the near term. But over time, what will help drive the net expansion rate is increasing consumption. And, you know, as we move forward, as consumption ramps, that will alleviate some of the downward pressure that we had experienced previously and growth in cloud and our rates of consumption will help overall as we progress into the future.
Harshil Thakkar: Got it. Very helpful. Thanks, guys.
Operator: The next question is from Pinjalim Bora with JPMorgan (NYSE:JPM). Please go ahead.
Pinjalim Bora: Oh, great. Thanks, guys. Congrats on the quarter. Ash, seems like ESRE is opening up a lot of customer conversations. There's a lot of positivity. How often are these conversations expanding into something more than just AI, say, in security, observability? Do you think the AI as the entry point to drive larger deals across the board is a motion that could accelerate your growth? And then one for Janesh. Any way to understand the cloud consumption trends so far in November in Q3?
Ashutosh Kulkarni: Hi, Pinjalim, thanks for the question. So, you know, the way we see it is that, in the areas of search, there is a clear expansion of the TAM that is likely going to happen just given the momentum that we're seeing, the resurgence of interest in search and the kinds of use cases that people are both imagining and starting to build, you know, that's going to be something that we feel in the long term is going to be very material. In the areas of observability and security, the AI assistants that we have launched and the kinds of really compelling capabilities that we've delivered, you know, the ability through natural language to auto-generate ES|QL commands and then to understand what the queries mean and then to, you know, have the system automatically execute them through the prompt, just makes the life of a site reliability engineer for observability or a SOC analyst for security so much easier. You know, they are able to do their work with, you know, the system guiding them through the whole journey, everything from detection to diagnosis to remediation. And that, we feel, is very powerful. And that's actually, you know, something that our sales teams will often lead with when they are having a conversation around observability and security. You know, that demonstration, the ease of use that it brings to our overall platform is incredibly compelling, and we believe that that's going to help us really change and improve our competitive positioning in both observability and security. And that's something that we are seeing. You know, we are - I talked about the fact that we had, you know, many customers consolidate onto our platform, competitive wins where we displaced incumbents. You know, a lot of those discussions we lead with the AI Assistant and how we showcase our platform. So that's definitely something that I'm very excited about.
Janesh Moorjani: And Pinjalim, in terms of the observations on November, I think it's a little too early to tell. November isn't even over yet. And, you know, as we've said before, there can be fluctuations within a single month as we look at the pool of customers. So we tend not to rely too much on a single month of data. So I can't share a specific view on November just yet, but I can tell you that the trends that we experienced in the quarter that I described earlier were broad-based and we felt very good about that in Q2.
Pinjalim Bora: Got it. Thank you.
Operator: The next question is from Brad Reback with Stifel. Please go ahead.
Brad Reback: Great. Thanks very much. Janesh, last year you had a really strong booking quarter as customers began to increase their commits. How should we think about that comp for this quarter?
Janesh Moorjani: Hi, Brad, your voice was a little bit muffled, but I think you were asking about year-over-year comps on - from a commitment standpoint.
Brad Reback: Yes.
Janesh Moorjani: And the way I think about - great. The way I think about that is, you know, customers are continuing to make commitments to Elastic. We've seen that strength in commitment as we think about just ongoing execution that we have, as we think about the engagement that our field teams have with them. Ash talked about all of the trends in generative AI that are continuing to provide a good tailwind to that. So, we feel very good about our overall position in front of customers. You know, in terms of how that translates into specific commitments for Q3, it's too early to tell for that. And again, we aren't going to sort of provide forward views on commitment or bookings-oriented measures. We feel very good about the revenue outlook we've provided and I think that continues to be the primary measure for us in the business.
Brad Reback: That's great. And just a quick follow-up on something you said earlier on the SaaS side: as customers are scaling to their committed levels, are those customers at their committed levels, still below that with opportunity to move higher, or are they consuming in excess of that? Thanks.
Janesh Moorjani: Yes, it's obviously a broad pool of customers, so different customers will be at different stages. The point I was making earlier is that, as customers have been ramping, they're approaching as a general matter, the levels of commitment that they had. We - there are obviously some customers within that that are consuming above those levels, and then it becomes a good opportunity for us to go and drive expansion conversations with them. And there are some customers that are still below those levels, but on balance, we've seen customers continuing to ramp as the quarter has progressed.
Brad Reback: Excellent. Thanks very much.
Operator: The next question is from Matt Hedberg with RBC Capital Markets. Please go ahead.
Matthew Hedberg: Great. Thanks for taking my questions and congrats from me as well. Ash, I was wondering, you know, on the initial GenAI interest, you know, obviously, there's a lot of other competitors out there that have solutions. I'm sort of curious, you know, when customers are faced with a choice, you know, what is sort of that tipping factor thus far? Is it, you know, some of the role-level security? Is it, you know, just - I'm just sort of curious, are there some commonalities of why Elastic versus others at this point?
Ashutosh Kulkarni: Yes, absolutely. You know, I'd say I'd break it down to, you know, roughly four categories of clear differentiation that our customers keep telling us they see in our product and platform, you know, as they evaluate us against all the options out there and the reason why they choose us. The first and foremost is, and I hear this very consistently, that they find that our vector database functionality is absolutely stellar in terms of how it scales, in terms of the fact that it's built deep into the platform, and it's built in such a way that, you know, you get the benefit of all the other functionality that we've built over the years. So that's first and foremost something that we hear over and over again. The second thing that, you know, we are very proud of is, you know, generative AI is more than just having a vector database, right. Because there's so much more that you need to do in terms of finding absolutely the most relevant information to pass as context to the large language model. And that requires more than just a vector database; it requires you to have semantic search and hybrid search functionality, because often that will result in the most optimized and most correct, you know, most relevant information, and more advanced features like Reciprocal Rank Fusion. And this - you know, when you talk about context, there are even other things like personalization, geolocation, filtering, all of these aspects that you just get as part of it, along with the privacy and the role-level security and so on, which you mentioned. So, that's the second big reason, because there is that set of capabilities that is so compelling. The third, I'd say, is around just our openness, right? So, we are very - we've always had this mindset of being very open as a company, providing our customers a lot of choice. So we are LLM-agnostic. We have excellent partnerships with all the major cloud vendors, with Azure OpenAI, with Google Vertex AI, with AWS Bedrock. You saw the recent announcement of the Strategic Collaboration Agreement with AWS around Bedrock. And we also support, you know, a lot of the open source, the community LLMs, like Llama 2 and so on from Meta (NASDAQ:META). And that just means that, you know, when somebody uses us, they get a platform that gives them that choice of LLMs, because we don't believe there's going to be one LLM to rule them all in the future. And that choice matters to customers. And then the last thing I'd say is just the incumbency. There's so much, when you talk about all this unstructured, complex, messy data that is so critical, that's what the customers are trying to tap into to build these generative AI applications, and a lot of that data is already sitting in Elastic clusters at, you know, literally tens of thousands of customers, right. So, you know, for them it becomes the easy button. They're able to just use our platform, use all that data that they've already ingested, and now build these GenAI applications on top of it. And that, you know, that sort of makes it a very compelling proposition.
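As a rough illustration of the grounding pattern Kulkarni outlines, the sketch below retrieves the most relevant private documents from Elasticsearch and hands them to whichever large language model the customer prefers. The index, fields, security filter and call_llm function are hypothetical placeholders, not Elastic's actual AI Assistant implementation; the point is simply that the retrieval layer stays LLM-agnostic.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<API_KEY>")


def call_llm(prompt: str) -> str:
    """Placeholder for whichever model the customer chooses, e.g. via
    Amazon Bedrock, Azure OpenAI, Google Vertex AI or an open model."""
    raise NotImplementedError("swap in your preferred LLM client here")


def answer(question: str, user_id: str) -> str:
    # Retrieve the most relevant internal documents, applying a
    # document-level access filter alongside the relevance query.
    hits = es.search(
        index="internal-docs",  # hypothetical index
        query={
            "bool": {
                "must": {"match": {"content": question}},
                "filter": {"term": {"allowed_users": user_id}},
            }
        },
        size=3,
    )["hits"]["hits"]

    # Ground the model in retrieved business context to reduce hallucinations.
    context = "\n\n".join(h["_source"]["content"] for h in hits)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```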
Matthew Hedberg: Super comprehensive. Thanks for that. And then I guess, you know, obviously it's still early and the acquisition of Splunk (NASDAQ:SPLK) hasn't gone through yet. But I'm curious, you know, has there been any initial feedback from customers on what that might mean for, you know, existing Splunk customers? And I'm just sort of curious if that's starting to show up at all in any customer conversations that you're seeing.
Ashutosh Kulkarni: Look, I think I've said this many times that when it comes to the core markets in observability and security that we play in, whether it's log analytics for observability or, you know, security analytics or SIEM for security, you know, in those markets, we have very few competitors that operate at our scale. And I'm not going to talk about, you know, any one particular competitor. But what I will say is that, given that, you know, we are one of very few, you know, our ability to take share from others by having customers move to our platform and consolidate onto our platform because we have a more scalable offering, you know, we are really differentiating our offering with generative AI, with the AI assistants that we have built. And probably one of the most exciting things is the Elasticsearch Query Language, right, ES|QL, the uptake and the interest in that has been just absolutely phenomenal, because not only is it easy to use, it's this piped query language that gives them the ability to iterate over their work and it's making it super easy for customers to migrate off of existing incumbent solutions onto Elastic. So, everything that's happening in the market right now, we feel is really supporting our ability to continue to have a very strong future.
Matthew Hedberg: Great to hear. And also, welcome back, Anthony.
Operator: The next question is from Kash Rangan with Goldman Sachs (NYSE:GS). Please go ahead. Excuse me. The next question is from Tyler Radke with Citi. Please go ahead.
Yitchuin Wong: Hi, good afternoon. This is Yitchuin Wong on for Tyler here. Thanks for taking the question. Congrats on a great quarter. I guess I want to drill in a little bit on the top numbers. That looks really strong. Almost $40 million is like a record high. Is there a one-time factor that would cause this not to be sustainable going forward? And like kind of how we look at the contribution here? Is it more the customer consumption easing or kind of more GenAI use cases that drove that $40 million ARR Cloud revenue here?
Janesh Moorjani: Yes. Hi, Yitchuin, great to talk to you. And as I mentioned earlier, we saw healthy consumption from customers across geographies and across different industry segments. So, it was relatively broad-based. There was, you know, nothing that stood out in terms of one or two customers that caused any kind of distortions. Overall, what I'd say is, optimization trends seem to have stabilized and while customers are still focused on making sure that they, you know, get value in terms of their investments, I think they are generally where they wanted to be in terms of those optimizations. And we - so we saw customers ramping their consumption, but in terms of GenAI, we think it's still early days overall for GenAI workloads and it'll take time for customers to ramp their usage for GenAI. But we are quite pleased with the initial contribution to consumption from some of these newer GenAI workloads and we do expect that those will continue to grow over time. So, you know, as I think about the outlook on consumption, as I've said before, there can be some fluctuations. And as I think about the guide, what we've simply tried to do is balance the strength that we've seen in execution in the first half against potential broader macro concerns or potential consumption fluctuations that might be out there in the second half. So, we feel really good about the back half and that's the way we've approached it.
Yitchuin Wong: Got it. That makes sense. I guess, with kind of the GenAI still a small part of the total, how do you view kind of the new bookings and the consumption in the quarter on that net new front for GenAI?
Janesh Moorjani: Yes, as I said overall, we saw pretty good strength in terms of commitments that customers have made to us that's broad-based, it's for GenAI-specific workloads and Ash provided a couple of examples of those in his prepared remarks. And for the - more broadly for our overall business as well. So, we felt pretty good about that.
Yitchuin Wong: Makes sense. Alright. Thank you.
Janesh Moorjani: Thank you.
Operator: The next question is from Kash Rangan with Goldman Sachs. Please go ahead.
Kash Rangan: Hi, thank you so much. Happy holidays and congratulations on the quarter. On this GenAI thing, curious to get your updated thoughts, Ash, on how - what's the monetization strategy for GenAI? At one level, looking at the compelling explanation that you have, it makes Elastic more accessible, so it's easy to start to use the system using natural language search. So, ES|QL becomes a lot more accessible. Or is the market with GenAI opening up brand new use cases? Is it, one, where the accessibility of the platform just gets better so the monetization of the TAM becomes easier? Or is it that and new use cases that you could not otherwise target with the existing Elastic architecture that opens up more avenues? Which is the right way to think about the incremental opportunity, and how do you put a price tag on your generative AI efforts? Thank you so much. Two questions there.
Ashutosh Kulkarni: Yes, Kash. Thanks for the question. And, you know, let me first address the use cases, and then I'll get to the monetization. So, if you think about the use cases, I'd break it into two categories. So, one is for search, and in the area of search, what we are seeing is generative AI is really broadening the TAM. There are lots of things that were not possible or were not easy in the past that now suddenly become both possible and, you know, relatively, you know, capable of implementation. So, one example would be video search or image search at scale. You know, that's the kind of stuff that works incredibly well with vector search and hybrid search, but not so much with, you know, traditional just lexical search. Another example is the fact that, you know, when you think about the kinds of customer service examples or the kinds of customer service use cases that people are trying to build with GenAI, these are experiences that just would not be possible with just search in the past. And now with the combination of semantic search and hybrid search, now you can build these conversational kinds of applications to improve the overall search experience. And that's driving interest and, in the future and even now, implementations that we believe are really, really exciting. So, it's going to open up the TAM in the long term. How much? You know, it's, I think, TBD. We'll get a better sense as we progress through the quarters and the years on just how much of an expansion of the search TAM gets created by this wave of generative AI. Then when you think about security and observability, in that area, I don't think it's as much the TAM increasing, but rather generative AI making it easier both for newer types of users to use the Elastic platform to solve observability and security use cases. And it also allows us to differentiate our platform a lot better, because now we can make it easier for customers to use the Elastic platform. And, you know, we've talked about this, we have a very strong focus on just improving the overall usability of our platform and generative AI is just making that much better. And then in terms of monetization, what I'll remind you is, you know, anytime you are using ESRE, you are fundamentally using the machine-learning capabilities on our platform, which are in our paid Platinum tier. And then our AI assistants for observability and security are only available at our enterprise tier. And also, these machine-learning jobs for generative AI do tend to be a lot more compute-intensive, so there are various different vectors for us to monetize the work that we're doing.
Kash Rangan: Amazing. Thank you so much once again. Congrats for the quarter.
Operator: The next question is from Koji Ikeda with Bank of America (NYSE:BAC) Securities. Please go ahead.
Koji Ikeda: Hi, guys, thanks for taking the question. I wanted to ask a question on the Opster acquisition. You know, two parts here. Could you give us a sense of maybe the revenue scale and cash used for the acquisition? And then, a little bit more strategically, when looking at the Opster website, you know, I saw things such as cluster health visibility and improving search performance. But I also saw that Opster helps reduce hardware cost, which sounds like it could be a little bit cannibalistic in a sense. So how should we be thinking about the long-term strategy for Opster from a monetization standpoint?
Janesh Moorjani: Hi, Koji, maybe we can take those in reverse order. Ash, maybe you can touch on the strategy first and then I'll touch on the financials.
Ashutosh Kulkarni: Yes. So, when you - Koji, by the way, great question. So, you know, I'm super excited about the Opster acquisition. Like you said, what AutoOps really lets you do is, it lets you both monitor and manage the overall system, the cluster, and optimize it, right? Make sure that it's running in the best way possible. It's able to reduce and detect issues before they happen. So you're able to make sure that the system stays healthy at massive scale, which is very, very important for our customers. And that's something that drives customer satisfaction. It's something that, you know, encourages customers to then do more with our platform and drive more consumption. So it's really a play around consumption to ensure that customers are able to do more with our platform and keep consuming in a way that is good for them and is good for us. Now, to your point about optimizing the hardware, look, that is something that we believe is also a key part of our strategy in general, because the way we monetize is, we monetize by having a lot of these capabilities in our paid tiers and in cloud. And through that, it drives customers to adopt both higher tiers and also to adopt Elastic Cloud. And we want our customers to constantly be spending less on infrastructure, but we monetize all of that by having them move to higher tiers because of which their spend on Elastic increases. So we believe that it's a win-win for customers, it's a win-win for us. And that's what makes this such a compelling proposition.
Janesh Moorjani: And Koji, in terms of size, Opster is a small, but mighty team. And in terms of revenue, we're not expecting any meaningful revenue contribution from Opster. As Ash described, the intent is really to fold the Opster technology into the broader Elastic Stack. And even in terms of expenses, we've just built that into the model that we've already provided.
Koji Ikeda: Got it. Super helpful. Thanks guys. Thanks for taking the question.
Janesh Moorjani: Thank you.
Operator: The next question is from Rob Owens with Piper Sandler. Please go ahead.
Rob Owens: Great. Thanks for taking my question. I was hoping you guys could drill down a bit on the security market, and specifically SIEM applications, as we're starting to hear from others about a SIEM replacement cycle? Thanks.
Ashutosh Kulkarni: Yes. Hi, thanks for the question. So, look, our position in security analytics, or SIEM, as you called it, remains very strong. I talked in the prepared remarks about the fact that we had a lot of success in the quarter, and we've seen this over the past several quarters, getting customers to consolidate onto our platform and displacing incumbents. We are seeing the benefit of the capabilities and the scale of the functionality in our platform, including some of the newer GenAI functionality through the assistants that we've delivered. It's really helping us compete very, very effectively, and the market dynamics seem to be playing in our favor. So we are very excited about it, and I feel very good about the future.
Rob Owens: All right. Thank you.
Operator: The next question is from Andrew Nowinski with Wells Fargo (NYSE:WFC). Please go ahead.
Andrew Nowinski: Okay, thank you. Nice quarter. I just want to ask more specifically about ESRE. I know it's only available in the Platinum and Enterprise tiers. So I'm wondering, are you seeing customers already upgrading to those tiers, or any sort of uptick in new logos that are landing with those two tiers?
Ashutosh Kulkarni: Hi, Andrew. So we've generally seen a steady increase in the adoption of our higher tiers over time, particularly the Enterprise tier, and we continue to expect that this will help drive growth for Elastic Cloud looking ahead. As I think about all of the additions to the tiers and what drives customers to the higher tiers, clearly a lot of the generative AI functionality is helpful in that regard. The security and observability AI Assistants will only be in the Enterprise tier. But as a more general matter, we've added features to our higher subscription tiers over time. For example, ML is only in Platinum, and beyond the Platinum level, features like searchable snapshots are only in Enterprise. So that's consistent with our overall strategy to continue to provide more value and then monetize that value in the higher tiers, and that's been working quite nicely. As I mentioned, we've continued to see a steady increase over time in the adoption of our higher tiers.
Andrew Nowinski: Okay, thank you. And then I just had a quick question as it relates to your vector search. I know you've had it for a number of years, and I felt like, where customers are actually storing their data in an Elastic data lake, it maybe makes your vector search more appealing to them, specifically in a security use case. So I'm wondering, are you competing with MongoDB (NASDAQ:MDB) in vector search use cases, where they're looking specifically for vector search?
Ashutosh Kulkarni: Yes. So look, when you think about what it takes to build generative AI applications, vector search is a component of the overall solution, not the whole solution. What we see is that when you're building generative AI applications, you need effective connectors to bring all of this unstructured data into your environment. You need to be able to provide capabilities like semantic search, hybrid search, and re-ranking. There is a need for things like filtering, geo-location, and often other kinds of personalization capabilities. All of these things are about relevance and context, and they go way beyond just vector search. What we have seen is that customers are increasingly looking to our platform because we have that complete breadth of capabilities, which makes it possible for them to build generative AI applications on one platform. Now, what I will say is that, fundamentally, in that space we don't see the database vendors showing up as competition. If all of your data is in a particular database - if all of your transactional data, as an example, is in a particular database - and you have some very simple use case where you just want to search across some of that data using the native basic vector capabilities in that data store, that's a much simpler use case, and frankly we don't really play in that space, right? That's very native to that particular platform. For us, we are going after the much broader market of customers that are trying to build generative AI applications, and that requires way more than just a vector data store. That's what we provide, soup to nuts.
Andrew Nowinski: Thank you.
Operator: The next question is from Shrenik Kothari with Robert Baird. Please go ahead.
Shrenik Kothari: Hi, thanks for taking my question, and congrats on the great execution, Ash and Janesh. Welcome back, Anthony. So Ash, you touched before upon the evolution in Elastic's go-to-market strategy, the shift towards a more efficient and unified vision for customer engagement and partnerships in light of the recent realignments, including the consolidation under the CRO. And then today you highlighted generative AI beginning to positively impact the go-to-market around search. Of course, you spoke about relationships with major hyperscalers and, beyond that, investments into broadening the global reach, the user conference. Can you talk about which of these factors - the hyperscaler partnerships, the user conference, directly spreading awareness - are bigger drivers currently? And in terms of the strategic realignment of go-to-market, how are you positioning effectively in the AI landscape now? Thanks a lot.
Ashutosh Kulkarni: Shrenik, thanks for the question. What I will say is that, look, all of the things that we are doing matter, right? Because at the end of the day, our ability to best satisfy our customers comes from the fact that we are able to provide them unique, differentiated, functionality with the integrations with all the ecosystem partners. And we have relationships with these ecosystem partners like the cloud hyperscalers that allow us to reach and service those customers very, very effectively. And our focus not just on commitments, but also on consumption, you know, all of that is paying off, right? So it's about having a very cohesive strategy. And I'm very excited about the fact that the team has been executing very well. I'm very proud of the work that they've been doing, and we feel really good about both the second half of the year and the future.
Shrenik Kothari: Got it. Thanks a lot, Ash.
Operator: The next question is from Kingsley Crane with Canaccord. Please go ahead.
Kingsley Crane: Great. Thanks for taking my question. Congrats on the quarter. I just want to ask a quick one on cloud. Great performance this quarter. As we look at Q4, should we expect similar levels of seasonality versus last year? I think last year was when the cloud cost optimization started in April?
Janesh Moorjani: Hi, Kingsley, this is Janesh. Maybe I'll take that. Overall, when we look at the approach that we've taken - we had really strong performance in the first half of this year. We've seen customers start to ramp their consumption towards the levels that they had committed, and we've continued to see them make strong commitments. As I mentioned earlier when I talked about guidance, the way we've approached it is simply to consider the fact that the economic environment out there is stable, but similar to what it was before, and there's room for potential fluctuations in consumption patterns. So we're just trying to make sure that we balance the strength of our execution in the first half of the year against any potential risks that might be out there. And again, just for abundant clarity, we've not actually experienced anything different yet, but we're just being thoughtful as we build our plan for the year.
Kingsley Crane: Makes sense. Thanks for the color, Janesh.
Janesh Moorjani: Thank you.
Operator: The next question is from Mike Cikos with Needham and Company. Please go ahead.
Michael Cikos: Hi, thank you. And Janesh, if I could just build off Kingsley's question there. I know that a number of people have been asking about the guidance construction and what it means for Elastic Cloud. I guess my question, said plainly, is: does your guidance currently anticipate the gap between commitments and consumption widening as we get into the second half of this year, versus what was delivered in the second quarter?
Janesh Moorjani: I think the answer to that question mathematically would depend on what level of commitments we are assuming for Q3 and Q4. And I'm not going to unpack that level of detail. I think what we've seen is that consumption is ramping quite nicely. We're doing everything that we can to help customers derive value. It's worked nicely for us for the last few quarters, and we'll continue to work hard towards that end for the next couple of quarters and beyond.
Michael Cikos: Got it. And if I could, just a quick follow-up for Ash here on generative AI adoption. I'd be curious to hear what kind of customers you are seeing adopt it. Are these more the digital natives or mid-sized organizations that might be nimbler? The reason I ask is, we picked up in our checks that maybe some of the larger enterprises are still running evaluations to understand GenAI and how to orchestrate these workflows, just given the compute intensity behind them. So anything incremental on the flavor of customers that are starting to adopt these technologies would be really helpful.
Ashutosh Kulkarni: Yes. Thanks for the question. We are seeing success across the board - large customers as well as digital natives, across verticals. I gave at least two examples in my prepared remarks; those were both large customers. One we noted was DocuSign, and the other was the video sharing platform. What I would say is that, generally, one of the reasons why we are having success even in the larger organizations is because we are proven in those environments. There's the value of incumbency, the fact that they trust our platform, and that they've been doing all kinds of large-scale search applications on our platform in the past. Now, as they see the functionality and the innovations that we are driving, it's a much faster process for them to get productive and build GenAI applications on our platform than elsewhere. We've seen that over and over again, where they might start playing around with some pure vector database, and they quickly realize that they need more than that, so they look to our platform and they choose our platform. So I feel really good about it. And it's not just about the digital natives; it's actually much broader than that.
Michael Cikos: Thank you for clarifying. And I will turn it over to my colleague. Thank you.
Operator: This concludes our question-and-answer session. I would like to turn the conference back over to Ash Kulkarni for any closing remarks.
Ashutosh Kulkarni: Thank you all very much for joining our call today. I could not be more excited about the opportunity and our unique position in generative AI, as a leading AI-powered search analytics platform. We are pleased with our strong performance in the first half and confident about the second half of the year. We look forward to updating you on our progress as we go. Have a great rest of the evening and thank you.
Operator: The conference has now concluded. Thank you for attending today's presentation. You may now disconnect.