Tags: Observability, ELK, OpenTelemetry, OpenSearch

# Engineers Are Quietly Abandoning ELK - And Building Their Own Observability Stacks

March 10, 2026 · 5 min read
## The Stack That Dominated Logs for a Decade

For years, the ELK stack was the default answer to one question: how do you manage logs? Elasticsearch handled indexing. Logstash processed incoming data. Kibana visualized everything in dashboards. Together they formed one of the most widely adopted observability platforms in modern infrastructure.

But something has been changing. Across engineering teams, a quiet shift is happening. Developers are experimenting with alternative stacks built around **OpenTelemetry, OpenSearch, and specialized tracing systems**. The goal isn’t just replacing ELK—it’s building observability pipelines that scale better and cost less.

One engineer described running such a system in production while ingesting **close to a billion logs and spans per day**, claiming the overall infrastructure cost remained surprisingly small. That claim alone explains why the conversation is gaining attention. Because observability costs are becoming a serious problem.

## Why ELK Became a Problem for Many Teams

The original ELK stack was powerful, but it wasn’t designed for the scale of modern distributed systems. Microservices changed everything. Instead of a handful of applications generating logs, companies now run hundreds of services producing logs, traces, metrics, and events simultaneously. Observability pipelines suddenly have to ingest massive volumes of telemetry.

In those environments, ELK deployments often become expensive and operationally heavy. Elasticsearch clusters grow large. Logstash pipelines become complicated. Storage costs increase as telemetry volumes grow.

Many engineers eventually reach the same conclusion: ELK works, but maintaining it can become a full-time infrastructure project. That’s why teams started experimenting with different architectures.
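The scale problem is easy to quantify with a back-of-envelope sketch. The per-record size below is an illustrative assumption, not a figure from the article; real log and span sizes vary widely by format and verbosity:

```python
# Back-of-envelope telemetry volume estimate.
# avg_record_bytes is an illustrative assumption; real sizes vary.
records_per_day = 1_000_000_000   # ~a billion logs and spans per day
avg_record_bytes = 200

raw_gb_per_day = records_per_day * avg_record_bytes / 1e9
raw_tb_per_month = raw_gb_per_day * 30 / 1_000

print(f"~{raw_gb_per_day:.0f} GB/day raw")
print(f"~{raw_tb_per_month:.1f} TB/month before replication and index overhead")
```

Even at a modest 200 bytes per record, a billion records a day is hundreds of gigabytes of raw data daily, before replication and indexing multiply it. That is the economics driving the experimentation.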
## The Alternative Stack That Keeps Appearing

One increasingly popular replacement architecture combines three core components: OpenTelemetry for telemetry collection, OpenSearch for storage and indexing, and Jaeger for distributed tracing.

OpenTelemetry acts as the central telemetry pipeline. Applications send metrics, logs, and traces through the OpenTelemetry Collector. From there, the collector routes data to storage systems like OpenSearch. Tracing data flows into Jaeger, which specializes in visualizing request paths across distributed systems.

This approach changes how observability pipelines work. Instead of relying on Logstash as the processing layer, OpenTelemetry collectors handle ingestion and routing directly. And that simplifies the architecture significantly.

## Why OpenTelemetry Changed the Game

OpenTelemetry is one of the biggest reasons new observability stacks look different from older ones. Instead of separate logging agents, metrics collectors, and tracing libraries, OpenTelemetry provides a unified instrumentation framework. Applications emit telemetry in a consistent format. Collectors process that telemetry and route it to different backends.

In practice, this means one system can handle multiple observability signals: logs, metrics, and traces. One engineer pointed out that OpenTelemetry collectors are lightweight enough to run in **agent mode**, forwarding logs and telemetry wherever they need to go. That flexibility makes it easier to design custom observability pipelines without relying on heavy ingestion tools.

## The OpenSearch Debate

OpenSearch often appears in these alternative stacks because it provides an open-source search engine similar to Elasticsearch. But the relationship between the two technologies has become complicated. Some engineers argue that OpenSearch performs extremely well in observability workloads, even when handling massive telemetry volumes.
Others insist Elasticsearch remains faster and more capable, based on independent benchmarking. Critics frequently point out that simply processing billions of spans doesn’t automatically prove OpenSearch is superior. Performance depends heavily on cluster configuration, query patterns, and indexing strategies.

In other words, the debate isn’t settled. And that’s exactly why engineers continue experimenting with different stacks.

## The Hidden Cost Problem

One reason teams explore alternatives to ELK has less to do with performance and more to do with economics. Observability data grows fast. Every request, log entry, and trace generates telemetry. As systems scale, storage costs can explode.

Some engineers say they’ve managed to ingest massive telemetry volumes using OpenSearch at relatively low cost—particularly when running self-managed clusters instead of managed cloud offerings.

But there’s an important caveat. The same engineer who praised OpenSearch also warned against using **AWS’s managed OpenSearch service**, describing it as extremely expensive. That warning highlights a broader trend: the difference between self-managed observability stacks and managed services can dramatically affect costs.

## The Other Tools Appearing in These Stacks

Observability stacks rarely stay simple. As teams experiment with alternatives, new tools often enter the pipeline. One engineer mentioned using **Vector**, a lightweight log processing system written in Rust, as a replacement for tools like Filebeat or Logstash. In their setup, Vector handled log normalization across roughly **250 applications** while remaining fast and resource-efficient. Others mentioned tracing systems built around VictoriaMetrics or different telemetry storage backends.

This experimentation shows how quickly the observability ecosystem is evolving. Teams are no longer locked into monolithic stacks. They’re assembling pipelines from specialized components.
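Vector itself is configured declaratively (with its own remap language, not shown here), but the job it does in such a setup can be sketched conceptually in Python: mapping heterogeneous per-application log records onto one common schema. The schema and field names below are illustrative assumptions, not Vector’s actual output format:

```python
import json

# Conceptual sketch of log normalization: the role tools like
# Vector (or Logstash/Filebeat) play is coercing records from
# many applications into one common shape before storage.
# The target schema here is illustrative.

def normalize(raw: dict, service: str) -> dict:
    """Coerce an application-specific log record into a common shape."""
    return {
        "timestamp": raw.get("ts") or raw.get("time") or raw.get("@timestamp"),
        "level": str(raw.get("severity") or raw.get("level") or "info").lower(),
        "service": service,
        "message": raw.get("msg") or raw.get("message") or json.dumps(raw),
    }

# Two apps with different field conventions land in one schema.
a = normalize({"ts": "2026-03-10T12:00:00Z", "severity": "ERROR", "msg": "db timeout"}, "billing")
b = normalize({"time": "2026-03-10T12:00:01Z", "level": "info", "message": "ok"}, "checkout")

assert a["level"] == "error" and b["level"] == "info"
assert set(a) == set(b)
```

The point of centralizing this step, whatever tool performs it, is that downstream storage and queries only ever see one schema regardless of how 250 different applications format their logs.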
## The Community Pushback

Whenever someone proposes replacing established observability tools, criticism appears quickly. In the case of the OpenSearch alternative stack, some engineers challenged the article’s claims.

Critics argued that Elastic remains one of the top contributors to the OpenTelemetry ecosystem and offers strong native integration with OTLP telemetry pipelines. Others pointed out that independent benchmarks often show Elasticsearch outperforming OpenSearch in certain workloads.

These disagreements highlight an important reality. Observability isn’t a solved problem. There are multiple ways to build telemetry pipelines, and each approach has tradeoffs.

## The Real Shift Happening

Despite the debates about specific tools, something bigger is happening inside engineering teams. Observability is becoming modular.

Instead of deploying a single monolithic platform, teams are building pipelines from specialized components. OpenTelemetry handles instrumentation. Collectors route telemetry signals. Different storage engines handle logs, metrics, and traces.

This architecture gives teams more control over cost and scalability. But it also requires deeper expertise. Because once you abandon monolithic stacks, you become responsible for designing the pipeline yourself.

## The Future of Observability Stacks

The rise of OpenTelemetry suggests that future observability platforms will look very different from the stacks of the past. Instrumentation standards are separating telemetry collection from storage systems. Logs, metrics, and traces are becoming portable signals rather than vendor-specific data formats. And infrastructure teams are experimenting with combinations of tools rather than committing to a single ecosystem.

That shift may eventually reshape the entire observability market. Because once telemetry pipelines become modular…

The question stops being “Which observability platform should we use?”

And becomes something much more interesting.
“How do we design the pipeline ourselves?”