Unable to Find Opik’s Instance URL in Local Deployment
I am trying to use Opik's Local Deployment Strategy and have followed all the steps mentioned in the official guide. My Opik Docker Compose setup is running successfully. However, when I try to start my FastAPI service, it prompts me for an Opik instance URL, which I am unable to find.
I have already set the following environment variable:

```bash
export OPIK_URL_OVERRIDE="http://localhost:5173/api"
```
Despite this, my FastAPI application still asks for the instance URL. As I am new to the platform, I would appreciate guidance on:
- Where exactly I can find the Opik instance URL in a local deployment.
- Whether I am missing any additional configuration steps.
Below is a screenshot for reference:
Hi @rehancc, a few questions:
- Can you access http://localhost:5173 in the browser? We want to verify Opik is up and running.
- Can you share the run command for your FastAPI service?
Hey @Nimrod007,
Yes, I can access http://localhost:5173/ in the browser. My FastAPI service runs in a separate Docker container. I tried running both containers on the same network, but the issue persists. I also attempted creating a single Docker Compose file that includes both my FastAPI service and Opik. However, I still get the same error: "opik.exceptions.ConfigurationError: Opik URL is not specified - Please set your Opik instance URL using the environment variable OPIK_URL_OVERRIDE or provide it as an argument." even though OPIK_URL_OVERRIDE is correctly defined in the .env file.
@rehancc to better debug this:
- In your FastAPI service, try printing the environment variables and check whether `OPIK_URL_OVERRIDE` is present, something like:
```python
from fastapi import FastAPI
import os

app = FastAPI()

@app.on_event("startup")
async def print_env_vars():
    # Print all environment variables
    for key, value in os.environ.items():
        print(f"{key}={value}")
    # Or print the specific one we care about
    print("OPIK_URL_OVERRIDE =", os.getenv("OPIK_URL_OVERRIDE"))
```
Since you can reach Opik in your browser at http://localhost:5173/, your FastAPI service and Opik don't need to be on the same Docker network; they can run by themselves if needed.
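One more thing worth checking: a `.env` file next to docker-compose.yml is used by Compose for variable substitution in the compose file itself, but it does not automatically end up inside the container. The service usually needs `env_file:` or `environment:` explicitly; a minimal sketch (the service name `fastapi` is illustrative):

```yaml
services:
  fastapi:
    env_file: .env                # pass every variable from .env into the container
    # or explicitly:
    environment:
      - OPIK_URL_OVERRIDE=${OPIK_URL_OVERRIDE}
```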
Hi @rehancc,
Can you let us know if the problem persists?
Closing this ticket for now due to no response. If the issue persists or you need further assistance, feel free to reopen it anytime.
I can't fix this. Please help!
I did everything. I changed OPIK_URL_OVERRIDE to the internal Docker address/port so that it could reach the host at http://localhost:5173/api. Opik is up & running at http://localhost:5173/.
@Mukhsin0508 from this screenshot it looks like everything is up and running. Are you running FastAPI from a different Docker Compose setup?
When running two separate Docker Compose projects (or Docker containers in general), localhost won't work across containers because each Compose project creates its own Docker network.
So your FastAPI container cannot reach services in the Opik Docker Compose stack using localhost; that would point back to the FastAPI container itself.
You can fix this by creating a shared Docker network, e.g. with `docker network create shared-net`, and using it in both services.
Another option (easier but less optimal) is to look up your host IP with `ipconfig getifaddr en0` (macOS) or a similar command for your OS, and use that in the FastAPI service.
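If you go the host route, Docker Desktop exposes the host as host.docker.internal, and on Linux you can map it yourself; a minimal compose sketch (service name `fastapi` is illustrative):

```yaml
services:
  fastapi:
    extra_hosts:
      - "host.docker.internal:host-gateway"  # required on Linux; built into Docker Desktop
    environment:
      - OPIK_URL_OVERRIDE=http://host.docker.internal:5173/api
```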
🔧 How We Fixed Opik Integration with Multi-Container Docker Setup
Problem Statement:
When running two separate Docker Compose projects (Opik and our service), containers couldn't communicate because each Compose project creates its own isolated network. Using localhost inside a container refers to the container itself, not the host machine.
✅ Step-by-Step Solution
Step 1: Create a Shared Docker Network
Docker Compose projects use isolated networks by default. We created a shared bridge network for inter-project communication:
```bash
docker network create shared-opik-your_service_name
```
This allows containers from different Docker Compose stacks to communicate with each other.
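A quick way to sanity-check the shared network before touching your own stack (the container name opik-frontend-1 assumes the default Opik Compose naming; adjust if yours differs):

```bash
# Start a throwaway container on the shared network and ping the Opik frontend
docker run --rm --network shared-opik-your_service_name alpine ping -c 1 opik-frontend-1
```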
Step 2: Connect Opik Containers to Shared Network
Connect all Opik containers that need to be accessible from your services:
```bash
docker network connect shared-opik-your_service_name opik-frontend-1
docker network connect shared-opik-your_service_name opik-backend-1
docker network connect shared-opik-your_service_name opik-python-backend-1
```
Verify:
```bash
docker network inspect shared-opik-your_service_name --format '{{range .Containers}}{{.Name}}{{"\n"}}{{end}}'
```
Step 3: Update your_service_name docker-compose.yml
Add the shared network to your_service_name's docker-compose.yml:
```yaml
networks:
  your_service_name_network:
    driver: bridge
  shared-opik-your_service_name:
    external: true  # Use the existing network created with 'docker network create'
```
Connect the services (and any other services that need Opik) to both networks:
```yaml
services:
  your_service__container_name:
    # ... existing config ...
    networks:
      - your_service_name_network      # For internal services (Postgres, Redis, etc.)
      - shared-opik-your_service_name  # For Opik communication
```
Step 4: Update Environment Variables
Update .env to use the container name instead of localhost:
Before:
```env
OPIK_URL_OVERRIDE=http://localhost:5173/api
```
After:
```env
OPIK_URL_OVERRIDE=http://opik-frontend-1:5173/api
```
Why: Inside Docker networks, containers communicate using their container names as hostnames, not localhost or host.docker.internal.
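To confirm DNS and routing from inside your container before debugging the SDK itself, a small Python check (the fallback URL assumes the container name used above):

```python
import os
import socket
from urllib.parse import urlparse

# Resolve and connect to the Opik frontend from inside the service container.
url = urlparse(os.environ.get("OPIK_URL_OVERRIDE", "http://opik-frontend-1:5173/api"))
with socket.create_connection((url.hostname, url.port or 80), timeout=5):
    print(f"Reached {url.hostname}:{url.port or 80} - DNS and routing are OK")
```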
Step 5: Restart Services to Apply Changes
```bash
# Recreate your_service_name containers so they join the new network
docker-compose up -d your_service_name

# Verify both networks were joined
docker inspect your_service_name --format '{{range $key, $value := .NetworkSettings.Networks}}{{$key}} {{end}}'
# Output: shared-opik-your_service_name your_service_name_network
```
Step 6: Fix Opik Infrastructure Issues
During testing, we discovered ClickHouse (Opik's database) had crashed. This caused 500 errors when sending traces.
```bash
# Fix:
cd /path/to/opik
./opik.sh           # Restart all Opik services

# Verify Opik health:
./opik.sh --verify
```
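If you hit similar 500s, it can help to check the ClickHouse container directly before restarting everything (the container name below is an assumption based on default Compose naming; adjust to what `docker ps` shows):

```bash
# Is the ClickHouse container running? What do its recent logs say?
docker ps -a --filter name=clickhouse
docker logs --tail 50 opik-clickhouse-1  # container name is an assumption
```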
Step 7: Restart Opik Frontend
After reconnecting opik-backend-1 to the shared network, the frontend's nginx needed to reload its proxy configuration:
```bash
docker restart opik-frontend-1
```
This fixed the 502 Bad Gateway errors.
🚀 For Production Deployment
Your concern about production is 100% valid. Here's what you should ask on GitHub:
GitHub Issue Template:
Question: Production Deployment Strategy for Multi-Service Opik Integration
Current Setup
- Running Opik locally with `opik.sh` for development
- Multiple microservices (FastAPI, Celery workers, Telegram bot) sending traces to Opik
- Services run in separate Docker Compose stacks, connected via a shared bridge network
Challenge
This works great for local development, but for self-hosted production we need:
- Centralized Opik instance accessible from multiple servers/hosts
- Scalability - Handle traces from 10+ services across different machines
- Reliability - High availability, data persistence
- Security - Authentication, TLS/SSL
- Network accessibility - Services on different cloud instances need to reach Opik
Questions
1. What's the recommended production deployment model?
   - Self-hosted Opik on Kubernetes via the Helm chart?
   - Docker Swarm/Compose with an external network?
2. How to handle multi-host scenarios?
   - Do we need a load balancer in front of Opik?
   - Should each service use `OPIK_URL_OVERRIDE` pointing to a public URL?
   - Best practices for network configuration across VPCs/subnets?
3. Authentication & security:
   - How to configure API keys for multi-tenant scenarios?
   - TLS/SSL certificate management?
   - Workspace isolation for different projects?
4. Scaling considerations:
   - Can Opik handle 100k+ traces per day?
   - ClickHouse/MySQL sizing recommendations?
   - Horizontal scaling options for the Python/Java backends?
Current Development Setup (for reference)
```bash
# Shared network approach
docker network create shared-opik-your_service_name
docker network connect shared-opik-your_service_name opik-frontend-1
# Services connect via: OPIK_URL_OVERRIDE=http://opik-frontend-1:5173/api
```
Environment
- Opik version: 1.9.8
- Deployment: Docker Compose
- Services: Python (FastAPI, CrewAI)
Any guidance or reference architectures would be greatly appreciated!
@Mukhsin0508 we recommend using our Kubernetes Helm chart (https://www.comet.com/docs/opik/self-host/kubernetes) for production use cases; it's already battle-tested, gets frequent updates, and will get future support for all new features.
Thank you bro!