
Grafana Agent says ok but does not forward to Tempo

devnewton opened this issue 2 years ago · 2 comments

I tried to send a trace with Faro to a Grafana Agent, to be forwarded to Grafana Tempo:

  • the Grafana Agent responds 'ok' to the Faro trace request;
  • the trace is not logged by the Grafana Agent, but there is no error;
  • the trace is not logged by Grafana Tempo, but there is no error.

The Faro to Grafana Agent communication seems fine: logs from Faro are correctly forwarded by this Grafana Agent instance to a Grafana Loki instance.

The Grafana Agent to Grafana Tempo communication also seems fine: traces from a Java backend work with this Grafana Tempo instance.

JavaScript code using the Faro Web SDK:

import { TracingInstrumentation } from "./_snowpack/pkg/@grafana/faro-web-tracing.js";
import { initializeFaro, getWebInstrumentations } from "./_snowpack/pkg/@grafana/faro-web-sdk.js";

// Initialize Faro with the default web instrumentations plus tracing,
// pointing at the Grafana Agent's app_agent_receiver collect endpoint.
const faro = initializeFaro({
  url: "http://localhost:12347/collect",
  instrumentations: [...getWebInstrumentations(), new TracingInstrumentation()],
  app: {
    name: "myapp",
    version: "1.0.0",
  },
});

// Create and immediately end a test span through the OTEL API exposed by Faro.
const { trace, context } = faro.api.getOTEL();

const tracer = trace.getTracer('default');
const span = tracer.startSpan('prout');
context.with(trace.setSpan(context.active(), span), () => {
  span.end();
});
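
As a side note, it may be worth confirming that the tracing instrumentation actually initialized before creating spans. A minimal sketch, assuming getOTEL() returns undefined when the TracingInstrumentation is not active:

// Sketch: bail out early if the OTEL API is not available, since spans
// created against a missing API would be dropped silently.
const otel = faro.api.getOTEL();
if (!otel) {
  console.error("Faro OTEL API unavailable - tracing instrumentation did not initialize");
} else {
  const tracer = otel.trace.getTracer('default');
  const span = tracer.startSpan('test-span');
  otel.context.with(otel.trace.setSpan(otel.context.active(), span), () => {
    span.end();
  });
}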

Grafana Agent says ok:

[screenshot: the Agent's collect endpoint responds with "ok"]

Grafana Agent configuration:

logs:
  positions_directory: /tmp/loki-pos
  configs:
    - name: default
      scrape_configs: []
      clients:
        - url: http://grafana-loki:3100/loki/api/v1/push
traces:
  configs:
    - name: default
      automatic_logging:
        backend: stdout
        roots: true
      remote_write:
        - endpoint: http://grafana-tempo:4317
          insecure: true
      receivers:
        otlp:
          protocols:
            http:
            grpc:
integrations:
  app_agent_receiver_configs:
    - autoscrape:
        enable: false
      logs_instance: 'default'
      traces_instance: 'default'
      server:
        host: 0.0.0.0
        port: 12347
        cors_allowed_origins:
          - 'http://localhost:8081'
          - 'http://localhost:8082'
      logs_labels: # labels to add to the Loki log record
        app: frontend # static value
        kind: # value will be taken from log items: exception, log, measurement, etc.
      logs_send_timeout: 5000
      sourcemaps:
        download: true # will download source file, extract source map location,
        # download source map and use it to transform stack trace locations
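
To narrow down where the trace is dropped, it may also help to raise the Agent's log verbosity and watch for export errors on the traces pipeline. A small sketch of the extra block, assuming the Agent's static-mode top-level server section is available alongside the config above:

server:
  log_level: debug  # debug logging should surface any otlp/remote_write export errors, if present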

Grafana Tempo config:

search_enabled: true
metrics_generator_enabled: false

server:
  http_listen_port: 3200

distributor:
  log_received_traces: true
  log_received_spans:
    enabled: true
  receivers:
    otlp:
      protocols:
        http:
        grpc:

ingester:
  trace_idle_period: 10s               # the length of time after a trace has not received spans to consider it complete and flush it
  max_block_bytes: 1_000_000           # cut the head block when it hits this size or ...
  max_block_duration: 5m               #   this much time passes

compactor:
  compaction:
    compaction_window: 1h              # blocks in this time window will be compacted together
    max_block_bytes: 100_000_000       # maximum size of compacted blocks
    block_retention: 1h
    compacted_block_retention: 10m

metrics_generator:
  registry:
    external_labels:
      source: tempo
      cluster: docker-compose
  storage:
    path: /tmp/tempo/generator/wal
    remote_write:
      - url: http://prometheus:9090/api/v1/write
        send_exemplars: true

storage:
  trace:
    backend: local                     # backend configuration to use
    block:
      bloom_filter_false_positive: .05 # bloom filter false positive rate.  lower values create larger filters but fewer false positives
      index_downsample_bytes: 1000     # number of bytes per index record
      encoding: zstd                   # block encoding/compression.  options: none, gzip, lz4-64k, lz4-256k, lz4-1M, lz4, snappy, zstd, s2
    wal:
      path: /tmp/tempo/wal             # where to store the WAL locally
      encoding: snappy                 # wal encoding/compression.  options: none, gzip, lz4-64k, lz4-256k, lz4-1M, lz4, snappy, zstd, s2
    local:
      path: /tmp/tempo/blocks
    pool:
      max_workers: 100                 # worker pool determines the number of parallel requests to the object store backend
      queue_depth: 10000

overrides:
  metrics_generator_processors: [service-graphs, span-metrics]
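
Since search_enabled is true and the HTTP port is 3200, one way to check whether anything reaches Tempo at all is to query its search API directly. A sketch for the browser console, assuming the container's port 3200 is published as localhost:3200 and the search endpoint is /api/search:

// Sketch: list recently ingested traces; an empty result would suggest
// nothing is arriving from the Agent's remote_write.
fetch('http://localhost:3200/api/search?limit=20')
  .then((res) => res.json())
  .then((body) => console.log(body.traces ?? body))
  .catch((err) => console.error('Tempo search request failed', err));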

devnewton · Nov 17 '22

Hi! There was an OTLP traces protocol mismatch when grafana-agent updated to the latest otel-collector. Please try the latest version of faro-web-sdk (1.0.0-beta3), where this is resolved.
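
For anyone checking which version their page actually loads (a snowpack-vendored copy can lag behind what is installed), a quick sketch for the browser console; the faro.metas.value.sdk path is an assumption and may differ between releases:

// Sketch (assumed API): print the SDK version reported by the running Faro instance.
console.log(faro.metas.value.sdk?.version);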

domasx2 · Dec 01 '22

Hi @devnewton - did Domas's suggestion resolve your issue? Wondering if we have more work to do on this issue. Let me know, thanks.

eskirk · Aug 14 '23