Mirko Bez, Alessandro Brofferio

Find answers quickly, correlate OpenTelemetry traces with existing ECS logs in Elastic Observability

In this blog we will discuss how EDOT enables you to collect existing ECS logs while ensuring a seamless and transparent move to OTel semantic conventions. The key benefit is that applications can continue sending logs as they do today, which minimizes the effort and impact on application developers.

OpenTelemetry (OTel) is the undisputed standard for vendor-neutral instrumentation. However, most established organizations don't start from a blank slate. You likely have a mature ecosystem of applications already logging in Elastic Common Schema (ECS), supported by years of refined dashboards and alerting rules.

The challenge is clear: How do you adopt OTel’s unified observability without abandoning your proven ECS-based logging?

In this guide, we’ll demonstrate how to bridge this gap using the Elastic Distribution of OpenTelemetry (EDOT). We will first show you how to leverage the EDOT Collector to ingest your logs into Elasticsearch, ensuring a seamless transition that unlocks the full power of OTel’s distributed tracing without breaking your current workflows.

Once the data is flowing, we will explore how Elasticsearch's underlying mapping architecture ensures that your existing filters and visualizations remain fully functional through two key features:

  • Field Aliases: We’ll explain how Elastic uses aliases to ensure that legacy dashboards looking for log.level (ECS) still work perfectly, even as your new telemetry arrives as severity_text (OTel).

  • Passthrough Fields: We’ll show how Elastic’s native OTel mapping structures use passthrough fields to handle OTel attributes. This ensures your data remains searchable and performant without the need for complex, manual schema migrations.

By combining EDOT for ingestion with these intelligent mapping structures, you can maintain your existing Java ECS logging while evolving toward a unified, OTel-native future.

The ECS Foundation

We begin with a Java application using Log4j2 and the ECS logging Java plugin (log4j2-ecs-layout). This setup generates structured JSON logs in the Elastic Common Schema (ECS), which Elastic handles natively. The ECS logging plugins integrate easily with common logging libraries across various programming languages.

The following Log4j2 configuration extract assumes that the required ECS plugin libraries are already configured as Log4j2 dependencies (a dependency sketch follows the note below):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="DEBUG">
    <Appenders>
        <Console name="LogToConsole" target="SYSTEM_OUT">
            <EcsLayout serviceName="logger-app" serviceVersion="v1.0.0"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="LogToConsole"/>
        </Root>
    </Loggers>
</Configuration>

Note: we will come back to the <EcsLayout serviceName="logger-app" serviceVersion="v1.0.0"/> setting later in this article. With Kubernetes deployments, these values can be automatically populated by the EDOT Collector, so the setting can be simplified to <EcsLayout/>.
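
For reference, here is a minimal sketch of the Maven dependency that provides the EcsLayout (the version shown is an assumption; check the ecs-logging-java project for the latest release):

<dependency>
    <groupId>co.elastic.logging</groupId>
    <artifactId>log4j2-ecs-layout</artifactId>
    <!-- Version is an assumption; use the latest ecs-logging-java release -->
    <version>1.6.0</version>
</dependency>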

Introducing the Elastic Distribution of OpenTelemetry (EDOT)

The Elastic Distribution of OpenTelemetry (EDOT) is more than just a repackaging; it is a curated set of OTel components (Collector and SDKs) optimized for Elastic Observability. Released in v8.15, it allows you to collect traces, metrics, and logs using standard OTel receivers while benefiting from Elastic-contributed enhancements like powerful log parsing and Kubernetes metadata enrichment.

EDOT's Primary Benefits:

  • Deliver Enhanced Features Earlier: Provides features not yet available in "vanilla" OTel components, which Elastic continuously contributes upstream.

  • Enhanced OTel Support: Offers enterprise-grade support and maintenance for fixes outside of standard OTel release cycles.

The question then becomes: How can users transition their ingestion architecture to an OTel-native approach while maintaining the ability to collect logs in ECS format?

This involves replacing classic collection and instrumentation components (like Elastic Agent and the Elastic APM Java Agent) with the full suite of components provided by EDOT. Let us show you how this can be done step by step. A comprehensive view of the EDOT architecture components in Kubernetes is shown below.

In a Kubernetes environment, EDOT components are typically installed via the OpenTelemetry Operator and a Helm chart. The main components are:

  • EDOT Collector Cluster: deployment used to collect cluster-wide metrics.
  • EDOT Collector Daemon: daemonset used to collect node metrics, logs, and application telemetry data.
  • EDOT Collector Gateway: performs pre-processing, aggregation, and ingestion of data into Elastic.

Elastic provides a curated configuration file for all the EDOT components, available as part of the OpenTelemetry Operator via the opentelemetry-kube-stack Helm chart, downloadable from here.
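
As an illustrative sketch, the installation typically looks like the following (the release name, namespace, and values file name are assumptions):

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
# Install the kube-stack chart with Elastic's curated values file
helm upgrade --install opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack \
  --namespace opentelemetry-operator-system --create-namespace \
  --values values.yaml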

Achieving Correlation: SDK + Logging Context

To link a log line to a specific trace, the EDOT Java SDK performs a "handshake" with your logging library. When a trace is active, the SDK extracts the trace_id and span_id and injects them into the Mapped Diagnostic Context (MDC) of Log4j2. Even though your logs are in ECS format, they now carry the OTel DNA required for correlation. While the EDOT SDK can collect logs directly, a generally more resilient approach is to stick to file collection: if the OTel Collector is down, logs written to a file are buffered locally on disk, preventing the data loss that can occur if the SDK's in-memory queue reaches its limit and starts discarding new logs. For an in-depth discussion of this topic, we refer to the OpenTelemetry documentation.
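
To make this concrete, here is an illustrative ECS JSON log line (all values are made up) showing the injected correlation IDs alongside the usual ECS fields:

{
  "@timestamp": "2025-01-15T10:42:01.337Z",
  "log.level": "INFO",
  "message": "Order 42 processed successfully",
  "ecs.version": "1.2.0",
  "service.name": "logger-app",
  "service.version": "v1.0.0",
  "log.logger": "com.example.OrderService",
  "trace_id": "7f4f0c1b2a3d4e5f6a7b8c9d0e1f2a3b",
  "span_id": "1a2b3c4d5e6f7a8b"
}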

Zero-Code Instrumentation

The EDOT Java SDK is a customized version of the OpenTelemetry Java Agent. In Kubernetes, zero-code Java autoinstrumentation is supported by adding an annotation in the pod template configuration in the deployment manifest:

apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    metadata:
      # Auto-Instrumentation
      annotations:
        instrumentation.opentelemetry.io/inject-java: "opentelemetry-operator-system/elastic-instrumentation"
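
The annotation value is a namespace/name reference to an Instrumentation custom resource managed by the OpenTelemetry Operator. A minimal sketch of such a resource (the image tag is an assumption; the opentelemetry-kube-stack Helm chart can create this resource for you):

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: elastic-instrumentation
  namespace: opentelemetry-operator-system
spec:
  java:
    # EDOT Java agent image; the tag is an assumption
    image: docker.elastic.co/observability/elastic-otel-javaagent:1.0.0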

Collecting and Processing Logs with the EDOT Collector

This is the most critical step. Our logs are now structured JSON, written to the console output, and carrying trace IDs. Next, we need the EDOT Collector to pick them up and map them to the OpenTelemetry Log Data Model.

EDOT Collector Configuration: Dynamic Workload Discovery and filelog receiver

Applications running on containers become moving targets for monitoring systems. To handle this, we rely on dynamic workload discovery on Kubernetes. This allows the EDOT Collector to track pod lifecycles and dynamically attach log collection configurations based on specific annotations, relying on the k8s_observer extension and the receiver_creator component.

In our example, we have a Deployment with a Pod consisting of one container. We use Kubernetes annotations to:

  1. Enable auto-instrumentation (Java).

  2. Enable log collection for this pod.

  3. Instruct the collector to parse the output as JSON immediately (json-parser configuration).

  4. Add custom attributes (e.g., to identify the application's source language).

Deployment Manifest Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logger-app-deployment
  labels:
    app: logger-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logger-app
  template:
    metadata:
      annotations:
        # 1. Turn on Auto-Instrumentation
        instrumentation.opentelemetry.io/inject-java: "opentelemetry-operator-system/elastic-instrumentation"
        # 2. Enable Log Collection for this pod
        io.opentelemetry.discovery.logs/enabled: "true"
        # 3. Provide the parsing "hint" (Treat logs as JSON)
        io.opentelemetry.discovery.logs.ecs-log-producer/config: |
            operators:
            - type: container
              id: container-parser
            - type: json_parser
              id: json-parser
        # 4. Identify this application as Java (to allow for user interface rendering in Kibana)
        resource.opentelemetry.io/telemetry.sdk.language: "java"
      ...

This setup provides a bare-minimum configuration for ingesting ECS library logs. Crucially, it decouples log collection from application logic. Developers simply need to provide a hint via annotations that their logs are in JSON format (structurally guaranteed by the ECS libraries). We then define the standardized enrichment and processing rules centrally at the processor level in the (Daemon) EDOT Collector.

This centralization ensures consistency across the platform: if we need to update our standard formatting or enrichment strategies later, we apply the change once in the collector, and it automatically propagates to all services without developers needing to touch their manifests.

(Daemon) EDOT Collector Configuration

To enable this, we configure a Receiver Creator in the Daemon Collector. This component uses the k8s_observer extension to monitor the Kubernetes environment and automatically discover the target pods based on the annotations above.

daemon:
  ...
  config:
    ...
    extensions:
        k8s_observer:
          auth_type: serviceAccount
          node: ${env:K8S_NODE_NAME}
          observe_nodes: true
          observe_pods: true
          observe_services: true
          ...
    receivers:
        receiver_creator/logs:
          watch_observers: [k8s_observer]
          discovery:
            enabled: true
    ...
...

Finally, we reference the receiver_creator in the pipeline instead of a static filelog receiver, and we make sure to include the k8s_observer extension:

daemon:
  ...
  config:
    ...
    service:
      extensions:
      - k8s_observer
      pipelines:
        # Pipeline for node-level logs
        logs/node:
          receivers:
            # - filelog             # We disable direct filelog receiver
            - receiver_creator/logs # Using the configured receiver_creator instead of filelog
          processors:
            - batch
            - k8sattributes
            - resourcedetection/system
          exporters:
            - otlp/gateway # Forward to the Gateway Collector for ingestion

The Transformation Layer

While the logs are structured, OTel sees them as generic attributes. To "promote" ECS fields to top-level OTel fields, we use the transform processor, which allows us to modify and restructure telemetry signals using the OpenTelemetry Transformation Language (OTTL).

We use the processor to promote specific ECS fields into top-level OpenTelemetry fields and to rename attributes according to the OpenTelemetry Semantic Conventions:

  • Promote the message attribute to the top-level Body field.
  • Promote the log.level attribute to the OTel SeverityText field.
  • Move the @timestamp attribute to the OTel Time field.
  • Map trace_id and span_id to the right log context.

The following provides a sample transform configuration:

processors:
  transform/ecs_handler:
    log_statements:
    - context: log
      conditions:
        - log.attributes["ecs.version"] != nil
      statements:
        # Map ECS fields to the OTel Log Model
        - set(log.body, log.attributes["message"])
        # ECS timestamps carry millisecond precision, hence the %L directive
        - set(log.time, Time(log.attributes["@timestamp"], "%Y-%m-%dT%H:%M:%S.%LZ"))
        - set(log.trace_id.string, log.attributes["trace_id"])
        - set(log.span_id.string, log.attributes["span_id"])
        - set(log.severity_text, log.attributes["log.level"])
        # Cleanup original keys to save space
        - delete_key(log.attributes, "message")
        - delete_key(log.attributes, "trace_id")
        - delete_key(log.attributes, "span_id")

        # Add here additional transformations as needed...

Note: When working with the EDOT Collector and the OpenTelemetry kube-stack Helm chart, resource attributes such as service.name and service.version are automatically populated based on a set of well-defined rules by the k8sattributes processor. Thus, on Kubernetes we do not need to extract those fields from the log content itself.
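
For illustration, here is a minimal extract of a k8sattributes configuration (a sketch; the curated Helm values ship a more complete version) showing the kind of metadata the processor attaches:

processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.deployment.name
        - k8s.pod.name
        - k8s.pod.uid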

Make sure to use the newly created processor in the logs pipeline for the Daemon Collector:

service:
  pipelines:
    logs/node:
      receivers:
        - receiver_creator/logs
      processors:
        - batch
        - k8sattributes
        - resourcedetection/system
        - transform/ecs_handler          # Newly created transform processor
      exporters:
        - otlp/gateway

The Compatibility Layer: Bridging ECS and OTel

To bridge the gap between the Elastic Common Schema (ECS) and OpenTelemetry (OTel), Elastic provides a "compatibility layer" built directly into its Observability solution, relying on existing index templates and mappings. This architecture allows you to send OTel-native data while still using your legacy ECS-based dashboards, saved searches, and other associated objects.

This "bridge" relies on two key features:

  • Bridging ECS and OTel with Passthrough: OpenTelemetry (OTel) data often uses deeply nested structures (e.g., resource.attributes.*). Elasticsearch uses the passthrough object type to "promote" these nested attributes to the top level when performing a search query. Any new metadata added by the OTel Collector is automatically searchable without the user needing to know the full JSON path. This creates a "virtual flattening" layer and ensures that all fields that match in name are automatically compatible, even though they're stored in different namespaces (attributes/resource.attributes for OTel vs. top-level for ECS); a simplified passthrough mapping extract follows the alias example below. To learn more about field and attribute alignment between ECS and the OTel Semantic Conventions, refer to this page.

  • Bridging with Field Aliases: Elastic relies on OTel mapping templates that include field aliases. These aliases link OTel semantic names back to their equivalent ECS fields at query time, handling fields whose names do not align with the OTel naming conventions.

The Benefit: If you have an existing dashboard looking for message (ECS), but your data is now indexed as body.text (OTel), an alias allows the dashboard to aggregate and visualize data from both sources simultaneously. This ensures that your existing filters and KQL queries also work flawlessly whether the data originated from a Filebeat agent or a modern OTel SDK agent.
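
For example, a KQL filter written against the ECS field names (the values are made up) keeps matching OTel-native documents through the aliases:

log.level : "ERROR" and message : "timeout"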

Some more details about field aliases and pass-through objects can be found here.

Here is an example of the provided mapping template:

{
  "mappings": {
    ...
    "properties": {
      "log": {
        "properties": {
          "level": {
            "type": "alias",
            "path": "severity_text"
          }
        }
      },
      "message": {
        "type": "alias",
        "path": "body.text"
      },
      ...
    }
  }
}
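
Similarly, the OTel index templates map the attribute containers as passthrough objects. A simplified extract illustrating the idea (a sketch, not the exact shipped template):

{
  "mappings": {
    "properties": {
      "resource": {
        "properties": {
          "attributes": {
            "type": "passthrough",
            "priority": 10,
            "dynamic": true
          }
        }
      },
      "attributes": {
        "type": "passthrough",
        "priority": 20,
        "dynamic": true
      }
    }
  }
}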

This architectural approach provides three major advantages for teams in transition:

  • Zero Reindexing: You don't have to rewrite or migrate old data. Aliases resolve at query time, meaning your old indices and new indices can coexist in the same visualization.

  • Future-Proofing: As OTel becomes the primary standard (following the donation of ECS to the OTel project), Elastic is shifting its native UI to look for OTel fields first. These mappings ensure that your legacy ECS-native data still appears in OTel-native views.

  • Unified Observability: It enables "Correlation by Default." Because the aliases link trace_id (OTel) and trace.id (ECS), you can jump from a legacy log to a modern OTel trace without losing context or breaking the drill-down path.

Sending data to Elasticsearch

If you are running Elastic Serverless or the latest Elastic Cloud Hosted (ECH) v9.2+, you now have access to a managed OTLP endpoint. This native functionality allows you to route telemetry directly from your Collector Gateway to Elasticsearch using the OTLP protocol.
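
A sketch of the Gateway Collector's exporter section pointing at the managed OTLP endpoint (the endpoint URL and API key are placeholders):

exporters:
  otlp:
    # Managed OTLP endpoint of your deployment (placeholder URL)
    endpoint: https://<your-deployment>.ingest.<region>.elastic.cloud:443
    headers:
      Authorization: "ApiKey <your-api-key>"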

Because we mapped our ECS fields to the OTel model in the collector, Elasticsearch recognizes the correlation immediately. You get the best of both worlds:

  • Legacy Compatibility: Your old ECS-based dashboards still work (with minor tweaks).

  • Modern Power: You can now click "View Trace" directly from a log entry in Kibana's Observability UI.

Conclusion

Transitioning to OpenTelemetry doesn't have to be a "big bang" migration. By using the EDOT SDK and Collector, you can:

  • Protect your investment in ECS-based logging libraries.

  • Centralize complexity by handling schema translation in the collector rather than the application.

  • Enable full correlation between traces and logs with zero code changes.
