Open Source for Better Observability

Dotan Horovits (@horovits)
6 min read · Oct 19, 2021

Monitoring cloud-native systems is hard. You’ve got highly distributed apps spanning tens or even hundreds of nodes, services and instances. You’ve got additional layers and dimensions: not just bare metal and OS, but also node, pod, namespace, deployment version, the Kubernetes control plane and more.

To make things more interesting, any typical system these days uses many third-party frameworks, whether open source or cloud services. We didn’t write them, but we need to monitor them nonetheless.

Observability as a data analytics problem

The way to address the monitoring challenge is with observability. But what is observability in IT systems anyway? Simply put (and formal definitions aside), observability is the capability to ask and answer questions based on telemetry data. The reason I like this definition is that it makes it clear that observability is essentially a data analytics problem. We bring together telemetry signals of different types and from different sources into one conceptual data lake, and then ask and answer questions to understand our system.

Observability is typically built on three pillars — Metrics, Logs, and Traces. Let’s see how they tell us the “what”, “why” and “where”, and enable us to answer questions about our system:

Three pillars of Observability

Simply put, Metrics help us detect issues and tell us what happened: Is the service down? Was the endpoint slow to respond? Metrics are essentially numerical data, which is efficient to collect, process, store, aggregate and manipulate. On the other hand, this numerical data carries little context with it. Once the system emits metrics, the backend collects and aggregates them, stores them in a time series database, and exposes a designated query language for time series data.
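
To make this concrete, here’s a minimal sketch (assuming the Python prometheus_client package is installed) that exposes a request counter and a latency histogram for a metrics backend such as Prometheus to scrape; the metric names and the /checkout endpoint are illustrative:

```python
from prometheus_client import Counter, Histogram, start_http_server
import random
import time

# A request counter and a latency histogram, both labeled by endpoint.
# The counter is exposed to Prometheus as http_requests_total.
REQUESTS = Counter("http_requests", "Total HTTP requests", ["endpoint"])
LATENCY = Histogram("http_request_duration_seconds", "Request latency in seconds", ["endpoint"])

def handle_request(endpoint: str) -> None:
    with LATENCY.labels(endpoint=endpoint).time():  # record how long the work took
        time.sleep(random.uniform(0.01, 0.2))       # stand-in for real request handling
    REQUESTS.labels(endpoint=endpoint).inc()        # count the request

if __name__ == "__main__":
    start_http_server(8000)  # metrics are now scrapable at http://localhost:8000/metrics
    while True:
        handle_request("/checkout")
```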

Next, Logs help us diagnose issues and tell us why they happened. Logs are perfect for that purpose, as the developer who writes the application code outputs all the relevant context for that code into the logs. Being textual and verbose, however, logs take up more storage space, and require parsing and full-text indexing to support ad-hoc queries by any field in the logs.
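
As a rough illustration, here’s a small Python sketch (standard library only) that emits structured JSON logs, so a logging backend can index individual fields instead of parsing free text; the service name and log fields are made up for the example:

```python
import json
import logging
import sys

# Emit each log record as a single JSON line, so a log pipeline can parse and
# index individual fields (level, message, order_id, ...) rather than free text.
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            **getattr(record, "ctx", {}),  # extra context attached by the caller
        }
        return json.dumps(payload)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# The developer adds the context that later helps answer the "why":
logger.info("payment declined", extra={"ctx": {"order_id": "A-1042", "provider": "acme-pay"}})
```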

Finally, Traces help us isolate issues and tell us where they happened. As a request comes into the system, it flows through a chain of interacting microservices, which we can follow with distributed tracing. Each call in the chain creates and emits a span for that service and operation (think of it as a structured log), which includes context such as start time, duration, and parent span. This context is propagated through the call chain. A tracing backend then collects the emitted spans and reconstructs the trace according to causality. It then visualizes the trace, typically with the famous timeline (Gantt chart) view, for further trace analysis.
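
Here’s a minimal sketch using the OpenTelemetry Python SDK (assuming the opentelemetry-sdk package is installed) that shows how nested spans share a trace context; the service name, span names and attributes are illustrative, and the spans are simply printed to the console rather than sent to a real backend:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Set up a tracer provider that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("handle_checkout") as parent:  # root span of the trace
    parent.set_attribute("order.id", "A-1042")
    with tracer.start_as_current_span("charge_credit_card"):     # child span, same trace ID
        pass  # the call to the payment service would go here
```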

Role of Open Source: Success and Challenges

Now let’s move to the role of open source in observability, and look at which open source projects lead the domain.

Open source is the new norm

Open source is the new norm, with 60 percent of organizations using open source monitoring tools, according to 451 Research. The most commonly adopted observability tools are open source, as shown in the Cloud Native Computing Foundation (CNCF) End User Technology Radar. And Gartner predicts that by 2025, 70% of new cloud-native application monitoring will use open-source instrumentation rather than vendor-specific agents, for improved interoperability.

Tool sprawl is a serious challenge

But the wealth of available observability tools creates a consolidation issue. Half of companies use five or more tools, while a third use ten or more, according to the CNCF. Tool sprawl is a challenge not just for operating and managing the tools, but also for observability itself: observability is, at its core, a data analytics problem, and each additional tool creates another data silo.

Relicensing is changing the OSS landscape

Another new challenge we’re seeing is OSS project relicensing. In the past year alone, we’ve witnessed several leading OSS projects relicensed to a more restrictive license, whether a copyleft license (such as GNU AGPL) or even a non-open-source license (not OSI-compliant, such as SSPL). Typically this is done by a vendor that controls the project, not by a foundation. It can mean that the source code is still available, but your use or modification of it is restricted, or that you may even need to open-source your own code in some cases.

This pushes some users to look for alternatives. Among them you can find other OSS projects that can’t consume these licenses, as well as commercial companies such as Google that ban the use of AGPL and similar licenses. Google Open Source says of the AGPL that “the risks heavily outweigh the benefits”.

The leading open source tools for logs, metrics and traces

The open source landscape for observability is quite dynamic. Many of the OSS projects emerged only in the past couple of years. Funnily enough, many are called OpenSomething, which adds quite a bit of confusion to the mix. Let’s go through the open source projects by signal type:

Open Source Software For Metrics

  • Prometheus, a CNCF graduated project (the second to graduate, after Kubernetes), is a monitoring system with a dimensional data model, the flexible PromQL query language, an efficient time series database, and a modern alerting approach with AlertManager (see the query sketch after this list);
  • OpenMetrics, another CNCF project, offers a format for exposing metrics, which has become a de-facto standard across the industry; and
  • Grafana, a project by Grafana Labs, offers a powerful analytics and visualization tool that’s exceptionally popular in combination with Prometheus.
    Relicensing update: In April 2021, the Grafana project was relicensed from Apache 2.0 to AGPLv3 by Grafana Labs.

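As a quick illustration of PromQL, here’s a small Python sketch (using the requests package) that queries a locally running Prometheus server over its HTTP API for the per-second request rate per endpoint; the metric name and label are illustrative:

```python
import requests

# PromQL: per-second rate of http_requests_total over the last 5 minutes, grouped by endpoint.
query = 'sum by (endpoint) (rate(http_requests_total[5m]))'

resp = requests.get(
    "http://localhost:9090/api/v1/query",  # Prometheus instant-query endpoint
    params={"query": query},
    timeout=5,
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])  # labels and the [timestamp, value] pair
```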

Open Source Software For Logs

  • ELK Stack, led by Elastic B.V., has been the leading open source choice for a good few years. It comprises the Elasticsearch distributed text data store, the Logstash data collection and processing engine, and the Kibana visualization tool;
    Relicensing update: In February 2021, the Elasticsearch and Kibana projects were relicensed from Apache 2.0 to a non-OSS dual license (SSPL and the Elastic License) by Elastic B.V.
  • OpenSearch is a fork of the Elasticsearch and Kibana OSS projects, aimed at keeping these popular projects open source. The project is led by AWS, which also contributed Open Distro for Elasticsearch, a set of open source plugins for Elasticsearch; and
  • Loki, led by Grafana Labs, is a log aggregation system specialized for interoperability with Prometheus. Loki doesn’t perform full-text indexing, but rather indexes only the labels, as used in Prometheus (see the query sketch after this list).
    Relicensing update: In April 2021, the Loki project was relicensed from Apache 2.0 to AGPLv3 by Grafana Labs.
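
As a quick illustration of Loki’s label-based querying, here’s a small Python sketch (using the requests package) that runs a LogQL query against a locally running Loki instance over its HTTP API; the labels and the filtered message are illustrative:

```python
import time
import requests

# LogQL: select streams by Prometheus-style labels, then filter lines by content.
logql = '{app="checkout", namespace="prod"} |= "payment declined"'

resp = requests.get(
    "http://localhost:3100/loki/api/v1/query_range",  # Loki range-query endpoint
    params={
        "query": logql,
        "start": int((time.time() - 3600) * 1e9),  # last hour, in nanoseconds
        "limit": 20,
    },
    timeout=5,
)
resp.raise_for_status()

for stream in resp.json()["data"]["result"]:
    for _ts, line in stream["values"]:
        print(stream["stream"], line)  # stream labels and the raw log line
```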

Open Source Software For Traces

  • Jaeger is a distributed tracing system released as open source by Uber Technologies, and is now a CNCF graduated project;
  • Zipkin is a more veteran, Java-based distributed tracing system for collecting and looking up data from distributed systems; and
  • SkyWalking is an open source APM system with monitoring, tracing and diagnosing capabilities for distributed systems in cloud-native architectures.

Unified telemetry collection with OpenTelemetry

Having a variety of tools to choose from also brings up a challenge in telemetry data collection. Organizations find themselves juggling multiple libraries for logging, metrics and traces, with each vendor having its own APIs, SDKs, agents and collectors.

OpenTelemetry is a novel project under the CNCF that offers a unified set of vendor-agnostic APIs, SDKs and tools for generating and collecting telemetry data, and then exporting it to a variety of analysis tools. The beauty of OpenTelemetry is that it offers an observability framework that works across metrics, traces and logs. You get one API and SDK per programming language for extracting all of your application’s observability data, together with a standard collector, a transmission protocol (OTLP) and more.
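
For a flavor of what this looks like in practice, here’s a minimal sketch (assuming the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed) that configures the OpenTelemetry Python SDK to export spans over OTLP to a local OpenTelemetry Collector, which can then forward them to whichever backend you choose; the service name and endpoint are illustrative:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify the service and batch-export its spans over OTLP/gRPC to a local collector.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout-service"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")
with tracer.start_as_current_span("handle_checkout"):
    pass  # instrumented application code goes here
```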

OpenTelemetry.io

OpenTelemetry (or OTel, as it’s commonly nicknamed) was created under the CNCF out of the merger of the OpenTracing and OpenCensus projects, and was officially accepted into CNCF incubation in August 2021. More importantly, the project is widely adopted by all the major vendors, monitoring tools, cloud providers and many others. As such, it’s well positioned to become the go-to platform for generating and collecting observability data.

Open source standards such as OpenTelemetry and OpenMetrics are helping the industry converge, preventing vendor lock-in and bringing us a step closer to unified observability. I expect we’ll see these projects become de-facto standards, along with additional efforts for unified observability that address data storage, querying, correlation and other aspects.

This article originally ran on Container Journal on September 28, 2021.
