
Langfuse traces are not captured within the self-hosted instances

Open · dprokhorov17 opened this issue 8 months ago

Bug Description

I created the following docker compose file:

# docker-compose.yml
# stack: Langfuse + Langflow sharing one Postgres instance
# Run with:  docker compose up -d

services:
# ──────────────────────────────
  # Langflow UI / API
  langflow:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      postgres:
        condition: service_healthy
    ports:
      - "7777:7860"
    environment:
      # talk to shared postgres, own DB & role
      LANGFLOW_DATABASE_URL: postgresql://langflow:langflow@postgres:5432/langflow
      LANGFLOW_CONFIG_DIR: app/langflow
      LANGFUSE_SECRET_KEY: ****
      LANGFUSE_PUBLIC_KEY: pk-lf-2c52a0cf-4411-4ee1-af66-a115e53e02fb
      LANGFUSE_HOST: http://192.168.xxx.xxx:4444
    volumes:
      - langflow-data:/app/langflow
    command: langflow run --host 0.0.0.0

# ──────────────────────────────
  # Langfuse worker
  langfuse-worker:
    image: langfuse/langfuse-worker:3
    restart: always
    depends_on: &langfuse-deps
      postgres:
        condition: service_healthy
      clickhouse:
        condition: service_healthy
      minio:
        condition: service_healthy
      redis:
        condition: service_healthy
    ports:
      - 127.0.0.1:3030:3030
    environment: &langfuse-env
      # shared postgres
      DATABASE_URL: postgresql://postgres:postgres@postgres:5432/postgres
      SALT: mysalt                            # CHANGEME
      ENCRYPTION_KEY: "0000000000000000000000000000000000000000000000000000000000000000"  # CHANGEME
      TELEMETRY_ENABLED: "true"
      LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES: "true"
      CLICKHOUSE_URL: http://clickhouse:8123
      CLICKHOUSE_MIGRATION_URL: clickhouse://clickhouse:9000
      CLICKHOUSE_USER: clickhouse
      CLICKHOUSE_PASSWORD: clickhouse        # CHANGEME
      REDIS_HOST: redis
      REDIS_PORT: "6379"
      REDIS_AUTH: myredissecret              # CHANGEME
      # S3 / MinIO
      LANGFUSE_S3_EVENT_UPLOAD_BUCKET: langfuse
      LANGFUSE_S3_MEDIA_UPLOAD_BUCKET: langfuse
      LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT: http://minio:9000
      LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT: http://minio:9000
      LANGFUSE_S3_BATCH_EXPORT_ENDPOINT: http://minio:9000
      LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID: minio
      LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY: miniosecret            # CHANGEME
      LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID: minio
      LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY: miniosecret            # CHANGEME
      LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID: minio
      LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY: miniosecret            # CHANGEME

# ──────────────────────────────
  # Langfuse web app
  langfuse-web:
    image: langfuse/langfuse:3
    restart: always
    depends_on: *langfuse-deps
    ports:
      - "4444:3000"
    environment:
      <<: *langfuse-env
      NEXTAUTH_URL: http://localhost:4444
      NEXTAUTH_SECRET: mysecret               # CHANGEME
      LANGFUSE_INIT_ORG_ID:
      LANGFUSE_INIT_ORG_NAME:
      LANGFUSE_INIT_PROJECT_ID:
      LANGFUSE_INIT_PROJECT_NAME:
      LANGFUSE_INIT_PROJECT_PUBLIC_KEY:
      LANGFUSE_INIT_PROJECT_SECRET_KEY:
      LANGFUSE_INIT_USER_EMAIL:
      LANGFUSE_INIT_USER_NAME:
      LANGFUSE_INIT_USER_PASSWORD:

# ──────────────────────────────
  # ClickHouse for analytics
  clickhouse:
    image: clickhouse/clickhouse-server
    restart: always
    user: "101:101"
    environment:
      CLICKHOUSE_DB: default
      CLICKHOUSE_USER: clickhouse
      CLICKHOUSE_PASSWORD: clickhouse         # CHANGEME
    volumes:
      - langfuse_clickhouse_data:/var/lib/clickhouse
      - langfuse_clickhouse_logs:/var/log/clickhouse-server
    ports:
      - 127.0.0.1:8123:8123    # HTTP
      - 127.0.0.1:9777:9000    # Native
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 1s

# ──────────────────────────────
  # MinIO (S3‑compatible storage)
  minio:
    image: minio/minio
    restart: always
    entrypoint: sh
    command: -c 'mkdir -p /data/langfuse && minio server --address ":9000" --console-address ":9001" /data'
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: miniosecret        # CHANGEME
    ports:
      - "9444:9000"       # S3 API
      - 127.0.0.1:9888:9001   # MinIO console
    volumes:
      - langfuse_minio_data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 1s
      timeout: 5s
      retries: 5
      start_period: 1s

# ──────────────────────────────
  # Redis for Langfuse queues
  redis:
    image: redis:7
    restart: always
    command: --requirepass myredissecret      # CHANGEME
    ports:
      - 127.0.0.1:6379:6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping", "-a", "myredissecret"]
      interval: 3s
      timeout: 10s
      retries: 10

# ──────────────────────────────
  # Shared Postgres (5432)
  postgres:
    image: postgres:latest
    restart: always
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 3s
      timeout: 3s
      retries: 10
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres             # CHANGEME
      POSTGRES_DB: postgres                   # default DB for Langfuse
      # create extra role & DB for Langflow
      LANGFLOW_DB: langflow
      LANGFLOW_USER: langflow
      LANGFLOW_PASS: langflow
    ports:
      - 127.0.0.1:6666:5432
    volumes:
      - langfuse_postgres_data:/var/lib/postgresql/data
      # init script to create role/db for Langflow
      - ./postgres-init/99_create_langflow.sql:/docker-entrypoint-initdb.d/99_create_langflow.sql:ro

# ──────────────────────────────
volumes:
  langflow-data:
  langfuse_postgres_data:
  langfuse_clickhouse_data:
  langfuse_clickhouse_logs:
  langfuse_minio_data:


Here is my flow:

[Image: screenshot of the flow]

While all instances are running fine, I don't see any traces captured by Langfuse. My LM Studio component is actually a vLLM server, also running in a Docker container. What is the problem here?

This command:

docker compose exec langflow python -c "import requests, os; addr = os.environ.get('LANGFUSE_HOST'); print(addr); res = requests.get(addr, timeout=5); print(res.status_code)"

also returns a 200 response.
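
A 200 only proves the host is reachable, though. A minimal sketch that also exercises the keys themselves, assuming the langfuse Python SDK is available inside the Langflow container, would be:

# Sketch: check that the Langfuse server accepts the configured API keys.
# Langfuse() picks up LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment.
from langfuse import Langfuse

client = Langfuse()
print(client.auth_check())  # True means the public/secret key pair is accepted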

Reproduction

I followed this example: https://docs.langflow.org/integrations-langfuse

Expected behavior

Langfuse captures all traces.

Who can help?

@italojohnny

Operating System

Docker

Langflow Version

latest

Python Version

None

Screenshot

No response

Flow File

No response

dprokhorov17 · Apr 24 '25 16:04

I found a similar issue that was previously reported and resolved. The issue titled Traces not being captured anymore was closed after a pull request was merged to ensure that end_all_traces is called at the end of all flow build events [1].

To address your current issue, here are some steps you can take:

  1. Verify Environment Variables: Double-check that the LANGFUSE_SECRET_KEY, LANGFUSE_PUBLIC_KEY, and LANGFUSE_HOST are correctly set in your docker-compose.yml file. Ensure there are no typos or incorrect values.

  2. Check Tracing Activation: Ensure that tracing is not deactivated in your settings. The deactivate_tracing setting should not be set to True.

  3. Tracer Initialization: Verify that the tracers are initialized correctly. You can check if the trace_context is set and contains the expected tracers.

  4. Review Logs for Errors: Check the logs for any error messages related to tracer initialization or trace processing. Look for messages like "Error initializing tracers" or "Error processing trace_func".

  5. Langfuse Integration: Ensure that Langfuse credentials are correctly set as environment variables. Verify that LANGFUSE_SECRET_KEY, LANGFUSE_PUBLIC_KEY, and LANGFUSE_HOST are set in the environment where Langflow is running. A test-trace sketch that exercises these variables directly follows this list.

  6. Service Health: Confirm that all services, including Langflow and Langfuse, are running and healthy. Use the provided health checks in your Docker setup to verify this.

  7. Connection Verification: Since your command returns a 200 response, connectivity seems fine. However, ensure that the Langfuse API keys are correctly added to your configuration.

By following these steps, you should be able to identify and resolve the issue with Langfuse not capturing traces as expected.
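
To isolate whether events can reach your Langfuse deployment at all, independent of Langflow's tracer, you could also emit a single test trace from inside the Langflow container. A minimal sketch with the v2 Python SDK, reading the same LANGFUSE_* variables set in docker-compose.yml (the trace name here is just an example), might look like:

# Sketch: send one manual trace straight to Langfuse, bypassing Langflow's tracing code.
from langfuse import Langfuse

client = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment
trace = client.trace(name="langflow-connectivity-test")  # example trace name
trace.update(output="hello from the langflow container")
client.flush()  # force the buffered events to be sent before the process exits

If this test trace shows up in the Langfuse UI but Langflow runs still do not, the problem is on the Langflow tracer side; if it never shows up, look at the Langfuse ingestion path instead (worker, Redis, ClickHouse, MinIO).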


dosubot[bot] · Apr 24 '25 16:04

I'm encountering the same issue, self-hosting on k8s on my end.

Whether I use the service domain name or the private name (both verified reachable), the traces will not come through.

I configured the environment variables in the same way as suggested by the docs.

Can add my manifests once I get back home.


Edit: I got it to work. In my k8s manifests (ported from the Docker Compose file), I had missed the mkdir -p /data/langfuse argument for MinIO. This creates the bucket; without the bucket, things won't work, and errors should show up in both Langfuse and Langflow.
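
For anyone double-checking this against the docker-compose setup above, a quick sketch with the minio Python package (using the host-mapped S3 port and the root credentials from that file; adjust both if your setup differs) could be:

# Sketch: confirm the "langfuse" bucket actually exists in MinIO.
# Uses the host-mapped S3 port (9444) and the MINIO_ROOT_USER / MINIO_ROOT_PASSWORD values from the compose file above.
from minio import Minio

client = Minio("localhost:9444", access_key="minio", secret_key="miniosecret", secure=False)
print(client.bucket_exists("langfuse"))  # False means trace events have nowhere to be uploaded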

This is probably a separate cause from the OP's, so sorry if it's not much help, but it is working on my end now. If you have any questions about my setup, I can take a look.

Cheers

mattcoulter7 · May 03 '25 04:05