ElastiCon Paris: When the search engine becomes the ‘nervous system’ of AI
If you thought Elasticsearch was only used to power the search bar on your e-commerce site, the Paris edition of ElastiCon has just changed the game. Under the slogan ‘Forge the Future’, the message was clear: search is no longer an end in itself; it is the fuel for action.
We are witnessing the emergence of ‘Context Engineering’. The challenge is no longer simply to find a document, but to provide the right information, at the right time, to an AI agent capable of taking action.
Here’s how Elastic is redefining its platform for this new era, from physical storage to cognitive orchestration.
1. DiskBBQ: Store vectors for the price of a barbecue
AI is expensive, especially in terms of RAM. Until now, the leading algorithm for vector search, HNSW (Hierarchical Navigable Small World), required all data to be loaded into RAM in order to perform well. The result? As soon as the volume of data increased, performance plummeted or the bill skyrocketed.
Elastic has unveiled a pragmatic and bold technical response: DiskBBQ (Better Binary Quantisation).
Far from being a simple patch, it is a rearchitecture based on IVF (Inverted File Index) indexing optimised specifically for SSDs and binary quantisation.
- The promise: to drastically reduce the memory footprint (by an order of magnitude) by compressing vectors, while maintaining acceptable search performance.
- The verdict: for a minimal loss of accuracy (91% recall rate, compared to 92% for HNSW), we gain linear scalability.
Where HNSW becomes prohibitive or unstable due to lack of RAM, DiskBBQ handles millions of vectors on simple disks. It is the missing piece that enables businesses to index all their vector data without skyrocketing their cloud bills.
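The intuition behind binary quantisation can be shown in a few lines. This is a deliberately simplified sketch, not Elastic’s actual DiskBBQ implementation: each float32 dimension is collapsed to a single sign bit (a ~32x size reduction), and candidates are compared with a cheap Hamming distance before any exact rescoring.

```python
def quantise(vector):
    """Collapse each float dimension to one bit (1 if positive, else 0)."""
    return [1 if x > 0 else 0 for x in vector]

def hamming(a, b):
    """Number of differing bits -- the cheap distance used on quantised vectors."""
    return sum(x != y for x, y in zip(a, b))

# Toy data: a query and two stored document vectors (hypothetical values).
query = [0.12, -0.40, 0.33, -0.05]
docs = {
    "doc1": [0.10, -0.35, 0.30, -0.10],   # sign pattern matches the query
    "doc2": [-0.20, 0.45, -0.33, 0.08],   # roughly opposite direction
}

q_bits = quantise(query)
ranked = sorted(docs, key=lambda d: hamming(q_bits, quantise(docs[d])))
print(ranked)  # doc1 ranks first: its sign pattern matches the query's
```

The small recall loss quoted above comes from this lossy compression; the real system compensates with smarter quantisation and rescoring on the raw vectors for the top candidates.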
2. Agentic Architecture: Standardising with MCP
Classic RAG (Retrieval-Augmented Generation) is evolving. It is no longer just about text generation, but about autonomous agents capable of performing a series of actions. Elastic has introduced its Agent Builder, a low-code interface for orchestrating these new intelligences.
But the real strategic finesse lies in the adoption of MCP (Model Context Protocol).
Instead of creating a closed ecosystem, Elastic is betting on this open standard. MCP acts as a ‘universal plug’ that allows developers to connect their Elasticsearch agents to any compatible tool (Slack, GitHub, Google Drive) via a standardised API. Your database is no longer just a place to store information; it becomes an active hub capable of calling on external tools to perform complex tasks, such as generating a financial report and sending it by email.
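To make the ‘universal plug’ concrete, here is the general shape of an MCP tool invocation: a JSON-RPC 2.0 request with the `tools/call` method. The tool name and arguments below are hypothetical illustrations, not an actual Elastic agent’s catalogue.

```python
import json

# Sketch of an MCP (Model Context Protocol) "tools/call" request.
# "send_report" and its arguments are invented for illustration; a real
# agent discovers the available tools from the server it connects to.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_report",             # hypothetical tool name
        "arguments": {
            "channel": "#finance",          # hypothetical arguments
            "report": "Q3 revenue summary",
        },
    },
}
print(json.dumps(request, indent=2))
```

Because every compatible tool speaks this same request shape, the agent needs no bespoke integration code per destination.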
3. From the ‘Black Box’ to Precise Control: Linear Retriever versus RRF
In the hybrid search war (keywords + vectors), RRF (Reciprocal Rank Fusion) is often the default solution. It is effective, but it is a ‘black box’ that is difficult to adjust.
For engineers who need surgical precision (particularly in e-commerce or legal fields), Elastic has introduced the Linear Retriever. This feature allows weights to be defined explicitly:
“I want the keyword to weigh 30% and the semantic concept 70%.”
This is a return to control for developers, allowing them to refine rankings much more finely than with automatic fusion algorithms.
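The difference between the two fusion strategies is easy to demonstrate. The sketch below is a client-side simplification (not the server-side retriever DSL): RRF scores each document by summed reciprocal ranks, while the linear approach applies the explicit 30/70 weights quoted above.

```python
# Two result lists for the same query: one from keyword (BM25) search,
# one from vector search. Document names are invented for illustration.
keyword_ranking = ["doc_a", "doc_b", "doc_c"]
vector_ranking  = ["doc_c", "doc_a", "doc_d"]

def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def linear(rankings_with_weights):
    """Weighted fusion; inverse rank stands in for a normalised score."""
    scores = {}
    for ranking, weight in rankings_with_weights:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + weight * (1.0 / rank)
    return sorted(scores, key=scores.get, reverse=True)

print(rrf([keyword_ranking, vector_ranking]))
# Keyword weighs 30%, semantic 70% -- the split quoted in the talk:
print(linear([(keyword_ranking, 0.3), (vector_ranking, 0.7)]))
```

With these toy lists, RRF puts `doc_a` first, while the 70% semantic weight pushes `doc_c` to the top of the linear fusion: the weights visibly change the outcome, which is exactly the control the Linear Retriever restores.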
4. Active Security: When Observability Fights Back
Finally, the most impressive demonstration of ‘agentic’ AI came in the field of security, and it went well beyond flashing red dashboards.
The Attack Discovery system, powered by LLMs, analyses complex attack chains and suggests corrective measures. During the demo, the assistant not only identified the threat actor (APT28), but also offered to create the incident channel in Slack itself and invite the on-call team.
This marks the transition from passive observability (‘Look, it’s burning’) to active remediation (‘I saw smoke, called the fire brigade and turned off the gas’).
5. ‘Connected to the cloud’: AI and AutoOps without sacrificing sovereignty
This is the direct answer to the ‘French dilemma’: how can you benefit from the power of cloud services while keeping your sensitive data on your own servers (on-premises)?
Elastic has come up with a trump card for ‘self-managed’ users with its Cloud Connected architecture. The principle is to open a secure channel to move only the ‘brain’ to the cloud, while keeping the memory (the data) on your premises. This unlocks two critical capabilities:
- On-demand AI (Elastic Inference Service): You can use powerful models (such as ELSER or Jina AI) to vectorise your data without having to manage complex GPU infrastructure in-house.
- Autonomous maintenance (AutoOps): This is the big new feature. The cloud service analyses your cluster’s technical metrics (without reading your customer data) to detect performance issues and propose automatic resolutions.
It’s the best of both worlds: the sovereignty of local storage and the intelligence (AI and operational) of the cloud.
Conclusion
ElastiCon Paris confirmed that the platform has completed its transformation. It no longer just searches; it acts. By providing the fundamental building blocks — optimised SSD storage (DiskBBQ), agent orchestration (MCP), controllable hybrid search and active security — Elastic is positioning itself as the essential infrastructure for building the intelligent applications of tomorrow.