Temporal frontend unable to connect to Elasticsearch visibility store
I have a Temporal setup configured as follows: the defaultStore is Postgres and the advancedVisibilityStore is es-visibility (the database behind es-visibility is an OpenSearch cluster). I ran some workflows and I can see the data going into OpenSearch, but the Temporal UI does not show any data, and I don't see any error logs from the Temporal pods. The configmap for the Temporal services looks like this:

(.venv) temporal prashanth$ kubectl get cm ft1-mm-temporal-frontend-config -o yaml
apiVersion: v1
data:
  config_template.yaml: |-
    log:
      stdout: true
      level: "debug,info"
    persistence:
      defaultStore: default
      advancedVisibilityStore: es-visibility
      numHistoryShards: 512
      datastores:
        default:
          sql:
            pluginName: "postgres12"
            driverName: "postgres12"
            databaseName: "temporal"
            connectAddr: "db-console-pg.ft1.dev.xxx.com:5432"
            connectProtocol: "tcp"
            user: temporal
            password: "{{ .Env.TEMPORAL_STORE_PASSWORD }}"
            maxConnLifetime: 1h
            maxConns: 20
            secretName: ""
        visibility:
          sql:
            pluginName: "postgres12"
            driverName: "postgres12"
            databaseName: "temporal"
            connectAddr: "db-console-pg.ft1.dev.xxx.com:5432"
            connectProtocol: "tcp"
            user: "temporal"
            password: "{{ .Env.TEMPORAL_VISIBILITY_STORE_PASSWORD }}"
            maxConnLifetime: 1h
            maxConns: 20
            secretName: ""
        es-visibility:
          elasticsearch:
            version: "v7"
            url:
              scheme: "https"
              host: "es.mgmt.dev.xxx.com:443"
            username: "temporal_visibility"
            password: "<placeholder>"
            logLevel: "error"
            indices:
              visibility: "temporal-visibility"
    global:
      membership:
        name: temporal
        maxJoinDuration: 30s
        broadcastAddress: {{ default .Env.POD_IP "0.0.0.0" }}
      pprof:
        port: 7936
      metrics:
        tags:
          type: frontend
        prometheus:
          timerType: histogram
          listenAddress: "0.0.0.0:9090"
    services:
      frontend:
        rpc:
          grpcPort: 7233
          membershipPort: 7933
          bindOnIP: "0.0.0.0"
      history:
        rpc:
          grpcPort: 7234
          membershipPort: 7934
          bindOnIP: "0.0.0.0"
      matching:
        rpc:
          grpcPort: 7235
          membershipPort: 7935
          bindOnIP: "0.0.0.0"
      worker:
        rpc:
          grpcPort: 7239
          membershipPort: 7939
          bindOnIP: "0.0.0.0"
    clusterMetadata:
      enableGlobalDomain: false
      failoverVersionIncrement: 10
      masterClusterName: "active"
      currentClusterName: "active"
      clusterInformation:
        active:
          enabled: true
          initialFailoverVersion: 1
          rpcName: "temporal-frontend"
          rpcAddress: "127.0.0.1:7933"
    dcRedirectionPolicy:
      policy: "noop"
      toDC: ""
    archival:
      status: "disabled"
    publicClient:
      hostPort: "ft1-mm-temporal-frontend:7233"
    dynamicConfigClient:
      filepath: "/etc/temporal/dynamic_config/dynamic_config.yaml"
      pollInterval: "10s"
UI: (screenshot of the Temporal UI showing no workflow data)
Can you try to remove all filters? Or try using tctl / the Temporal CLI to see if you can list workflows from the command line.
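For example, something like the following should work, assuming the default namespace and the frontend address from your publicClient config (adjust both for your environment):

# List workflows through the frontend; this query is served by the configured
# visibility store, so an error or empty result here points at the visibility
# setup rather than at the UI.
tctl --address ft1-mm-temporal-frontend:7233 --namespace default workflow list

# Equivalent with the newer Temporal CLI:
temporal workflow list --address ft1-mm-temporal-frontend:7233 --namespace default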
After changing the index name from temporal-visibility to temporal_visibility_v1_dev in the es-visibility visibilityStore configuration, the UI started working as expected.
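For anyone hitting the same problem, the es-visibility datastore section now looks roughly like this (host and credentials unchanged from the config above); the key point is that the configured index name must match an index that actually exists in the OpenSearch cluster:

es-visibility:
  elasticsearch:
    version: "v7"
    url:
      scheme: "https"
      host: "es.mgmt.dev.xxx.com:443"
    username: "temporal_visibility"
    password: "<placeholder>"
    logLevel: "error"
    indices:
      # Must match the visibility index that actually exists in OpenSearch
      # (temporal_visibility_v1_dev here, not temporal-visibility).
      visibility: "temporal_visibility_v1_dev"

You can list the indices in the cluster with a GET request to _cat/indices on the OpenSearch endpoint to confirm the exact name before changing the config.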