
[Bug] chain1/blocks table fails to provision when using Postgres with 2 replicas

Open · insider89 opened this issue 5 months ago • 0 comments

Bug report

I am using a Postgres HA setup with 2 replicas (pgpool on top of it). In roughly 50% of cases when I run graph-node, it fails to create the chain1 schema and blocks table (graph-node points to pgpool).

I am running it on k8s. My setup: 2 PostgreSQL instances, 1 pgpool, 1 index node, 2 query nodes. PostgreSQL uses repmgr for primary/standby switchover.

Running on the latest v0.39.1 (didn't try other versions).
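For context, here is roughly what the store layout in /etc/graph-node/config.toml looks like. This is a minimal sketch reconstructed from the startup logs below, not the exact file: the `besu` network name is a placeholder, and the connection string is the pgpool service shown in the logs. graph-node stores the data of the first configured chain in the internal `chain1` schema, which is the one that fails to appear:

# sketch of /etc/graph-node/config.toml (values taken from the logs; network name is a placeholder)
[store]
[store.primary]
connection = "postgresql://postgres:HIDDEN_PASSWORD@asdf-eb068-postgres-pgpool:5432/asdf-eb068"
pool_size = 15

[chains]
ingestor = "asdf_eb068_index_node"

[chains.besu]
shard = "primary"
provider = [
  { label = "settlemint", url = "https://besu-7b06e.console.k8s.orb.local/sm_aat_feb68ca7f181689b", features = ["archive", "traces"] }
]

[deployment]
[[deployment.rule]]
shard = "primary"
indexers = ["asdf_eb068_index_node"]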

The error that I see in the index node's graph-node logs:

Waiting for IPFS (ipfs.console.settlemint.com:443)
Jun 17 15:14:29.347 INFO Graph Node version: 0.36.0 (2025-06-03)
Jun 17 15:14:29.353 WARN GRAPH_POI_ACCESS_TOKEN not set; might leak POIs to the public via GraphQL
Jun 17 15:14:29.353 INFO Reading configuration file `/etc/graph-node/config.toml`
Jun 17 15:14:29.430 WARN No fork base URL specified, subgraph forking is disabled
Jun 17 15:14:29.430 INFO Starting up
Jun 17 15:14:29.432 INFO Connecting to IPFS server at 'https://ipfs.console.settlemint.com/'
[2025-06-17T15:14:29Z DEBUG reqwest::connect] starting new connection: https://ipfs.console.settlemint.com/
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::dns] resolve; host=ipfs.console.settlemint.com
[2025-06-17T15:14:29Z DEBUG reqwest::connect] starting new connection: https://ipfs.console.settlemint.com/
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::dns] resolve; host=ipfs.console.settlemint.com
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::http] connecting to 104.18.6.145:443
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::http] connecting to 104.18.7.145:443
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::http] connected to 104.18.7.145:443
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::http] connected to 104.18.6.145:443
Jun 17 15:14:29.871 INFO Successfully connected to IPFS RPC API at: 'https://ipfs.console.settlemint.com/'
Jun 17 15:14:29.872 INFO Connecting to IPFS server at 'https://ipfs.network.thegraph.com/'
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::pool] pooling idle connection for ("https", ipfs.console.settlemint.com)
[2025-06-17T15:14:29Z DEBUG reqwest::connect] starting new connection: https://ipfs.network.thegraph.com/
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::dns] resolve; host=ipfs.network.thegraph.com
[2025-06-17T15:14:29Z DEBUG reqwest::connect] starting new connection: https://ipfs.network.thegraph.com/
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::dns] resolve; host=ipfs.network.thegraph.com
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::http] connecting to 104.18.40.31:443
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::http] connecting to 104.18.40.31:443
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::http] connected to 104.18.40.31:443
[2025-06-17T15:14:29Z DEBUG hyper_util::client::legacy::connect::http] connected to 104.18.40.31:443
[2025-06-17T15:14:30Z DEBUG hyper_util::client::legacy::pool] pooling idle connection for ("https", ipfs.network.thegraph.com)
Jun 17 15:14:30.027 INFO Successfully connected to IPFS RPC API at: 'https://ipfs.network.thegraph.com/'
Jun 17 15:14:30.027 INFO Creating a pool of 2 IPFS clients
Jun 17 15:14:30.054 WARN Expensive queries file not set to a valid file: /etc/graph-node/expensive-queries.txt
[2025-06-17T15:14:30Z DEBUG tokio_postgres::prepare] preparing query s0: LISTEN store_events
[2025-06-17T15:14:30Z DEBUG tokio_postgres::query] executing statement s0 with parameters: []
Jun 17 15:14:30.206 INFO Connecting to Postgres, weight: 1, conn_pool_size: 15, url: postgresql://postgres:HIDDEN_PASSWORD@asdf-eb068-postgres-pgpool:5432/asdf-eb068, pool: main, shard: primary
Jun 17 15:14:30.209 INFO Pool successfully connected to Postgres, pool: main, shard: primary, component: Store
Jun 17 15:14:30.697 INFO Dropping cross-shard views, pool: main, shard: primary, component: ConnectionPool
Jun 17 15:14:30.790 INFO Setting up fdw, pool: main, shard: primary, component: ConnectionPool
Jun 17 15:14:30.895 INFO Running migrations, pool: main, shard: primary, component: ConnectionPool
Jun 17 15:14:32.298 INFO Migrations finished, pool: main, shard: primary, component: ConnectionPool
Jun 17 15:14:32.312 INFO Mapping primary, pool: main, shard: primary, component: ConnectionPool
Jun 17 15:14:32.317 INFO Creating cross-shard views, pool: main, shard: primary, component: ConnectionPool
Jun 17 15:14:32.518 INFO Setup finished, shards: 1
[2025-06-17T15:14:32Z DEBUG tokio_postgres::prepare] preparing query s1: LISTEN chain_head_updates
[2025-06-17T15:14:32Z DEBUG tokio_postgres::query] executing statement s1 with parameters: []
Jun 17 15:14:32.604 INFO Starting graphman server at: http://localhost:8050, component: GraphmanServer
Jun 17 15:14:32.623 INFO Creating transport, capabilities: archive, traces, url: https://besu-7b06e.console.k8s.orb.local/sm_aat_feb68ca7f181689b, provider: settlemint
Jun 17 15:14:32.648 INFO All network providers have checks enabled. To be considered valid they will have to pass the following checks: [ExtendedBlocksCheck]
Jun 17 15:14:32.648 INFO All network providers have checks enabled. To be considered valid they will have to pass the following checks: [ExtendedBlocksCheck]
Jun 17 15:14:32.648 INFO All network providers have checks enabled. To be considered valid they will have to pass the following checks: [ExtendedBlocksCheck]
[2025-06-17T15:14:32Z DEBUG web3::transports::http] [id:0] sending request: "{\"jsonrpc\":\"2.0\",\"method\":\"net_version\",\"params\":[],\"id\":0}"
[2025-06-17T15:14:32Z DEBUG reqwest::connect] starting new connection: https://besu-7b06e.console.k8s.orb.local/
[2025-06-17T15:14:32Z DEBUG hyper_util::client::legacy::connect::dns] resolve; host=besu-7b06e.console.k8s.orb.local
[2025-06-17T15:14:32Z DEBUG web3::transports::http] [id:1] sending request: "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBlockByNumber\",\"params\":[\"0x0\",false],\"id\":1}"
[2025-06-17T15:14:32Z DEBUG reqwest::connect] starting new connection: https://besu-7b06e.console.k8s.orb.local/
[2025-06-17T15:14:32Z DEBUG hyper_util::client::legacy::connect::dns] resolve; host=besu-7b06e.console.k8s.orb.local
[2025-06-17T15:14:32Z DEBUG hyper_util::client::legacy::connect::http] connecting to [fd07:b51a:cc66:0:cafe::3]:443
[2025-06-17T15:14:32Z DEBUG hyper_util::client::legacy::connect::http] connecting to [fd07:b51a:cc66:0:cafe::3]:443
[2025-06-17T15:14:32Z DEBUG hyper_util::client::legacy::connect::http] connecting to 198.19.248.3:443
[2025-06-17T15:14:32Z DEBUG hyper_util::client::legacy::connect::http] connecting to 198.19.248.3:443
[2025-06-17T15:14:33Z DEBUG hyper_util::client::legacy::connect::http] connected to 198.19.248.3:443
[2025-06-17T15:14:33Z DEBUG hyper_util::client::legacy::connect::http] connected to 198.19.248.3:443
[2025-06-17T15:14:33Z DEBUG web3::transports::http[] [id:1[] received response: "{\"jsonrpc\":\"2.0\",\"id\":1,\"result\":{\"number\":\"0x0\",\"hash\":\"0xdbbe2dcfb2e83cfedac6c05020038608e00be618dcdfa20d91b5bf5180d8380b\",\"mixHash\":\"0x63746963616c2062797a616e74696e65206661756c7420746f6c6572616e6365\",\"parentHash\":\"0x0000000000000000000000000000000000000000000000000000000000000000\",\"nonce\":\"0x0000000000000000\",\"sha3Uncles\":\"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347\",\"logsBloom\":\"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\",\"transactionsRoot\":\"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421\",\"stateRoot\":\"0x6324aea52dda56bb757cf5f2db42962cdae2d7d6aec866df010d60f873d42077\",\"receiptsRoot\":\"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421\",\"miner\":\"0x0000000000000000000000000000000000000000\",\"difficulty\":\"0x1\",\"totalDifficulty\":\"0x1\",\"extraData\":\"0xf83aa00000000000000000000000000000000000000000000000000000000000000000d5948edea7c55bc254de2cf25844895ff02d65575ff0c080c0\",\"baseFeePerGas\":\"0x3b9aca00\",\"size\":\"0x23e\",\"gasLimit\":\"0x1fffffffffffff\",\"gasUsed\":\"0x0\",\"timestamp\":\"0x0\",\"uncles\":[],\"transactions\":]}}"
[2025-06-17T15:14:33Z DEBUG hyper_util::client::legacy::pool] pooling idle connection for ("https", besu-7b06e.console.k8s.orb.local)
[2025-06-17T15:14:33Z DEBUG hyper_util::client::legacy::pool] pooling idle connection for ("https", besu-7b06e.console.k8s.orb.local)
[2025-06-17T15:14:33Z DEBUG web3::transports::http] [id:0] received response: "{\"jsonrpc\":\"2.0\",\"id\":0,\"result\":\"46826\"}"

thread 'tokio-runtime-worker' panicked at /graph-node/node/src/chain.rs:400:22:
must be able to create store if one is not yet setup for the chain: store error: Record not found

Stack backtrace:
   0: anyhow::error::<impl core::convert::From<E> for anyhow::Error>::from
   1: <graph_store_postgres::block_store::BlockStore as graph::components::store::traits::BlockStore>::create_chain_store
   2: graph_node::main_inner::{{closure}}::{{closure}}::{{closure}}
   3: std::panic::catch_unwind
   4: <futures_util::future::future::catch_unwind::CatchUnwind<Fut> as core::future::future::Future>::poll
   5: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
   6: tokio::runtime::task::core::Core<T,S>::poll
   7: std::panic::catch_unwind
   8: tokio::runtime::task::harness::Harness<T,S>::poll
   9: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  10: tokio::runtime::scheduler::multi_thread::worker::Context::run
  11: tokio::runtime::context::scoped::Scoped<T>::set
  12: tokio::runtime::context::runtime::enter_runtime
  13: tokio::runtime::scheduler::multi_thread::worker::run
  14: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
  15: tokio::runtime::task::core::Core<T,S>::poll
  16: std::panic::catch_unwind
  17: tokio::runtime::task::harness::poll_future
  18: tokio::runtime::task::harness::Harness<T,S>::poll_inner
  19: tokio::runtime::task::harness::Harness<T,S>::poll
  20: tokio::runtime::task::UnownedTask<S>::run
  21: tokio::runtime::blocking::pool::Inner::run
  22: std::sys::backtrace::__rust_begin_short_backtrace
  23: core::ops::function::FnOnce::call_once{{vtable.shim}}
  24: std::sys::pal::unix::thread::Thread::new::thread_start
  25: <unknown>
  26: __clone
stack backtrace:
   0: __rustc::rust_begin_unwind
   1: core::panicking::panic_fmt
   2: core::result::unwrap_failed
   3: graph_node::main_inner::{{closure}}::{{closure}}::{{closure}}
   4: std::panic::catch_unwind
   5: <futures_util::future::future::catch_unwind::CatchUnwind<Fut> as core::future::future::Future>::poll
   6: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
   7: tokio::runtime::task::core::Core<T,S>::poll
   8: std::panic::catch_unwind
   9: tokio::runtime::task::harness::Harness<T,S>::poll
  10: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  11: tokio::runtime::scheduler::multi_thread::worker::Context::run
  12: tokio::runtime::context::scoped::Scoped<T>::set
  13: tokio::runtime::context::runtime::enter_runtime
  14: tokio::runtime::scheduler::multi_thread::worker::run
  15: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
  16: tokio::runtime::task::core::Core<T,S>::poll
  17: std::panic::catch_unwind
  18: tokio::runtime::task::harness::poll_future
  19: tokio::runtime::task::harness::Harness<T,S>::poll_inner
  20: tokio::runtime::task::harness::Harness<T,S>::poll
  21: tokio::runtime::task::UnownedTask<S>::run
  22: tokio::runtime::blocking::pool::Inner::run
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Panic in tokio task, aborting!



When I use 1 replica for PostgreSQL, I never get this error. With 2 replicas, I hit it in about 50% of cases.

After k8s kills the index pod, I get this error:

Jun 17 15:17:59.672 ERRO Trying again after block polling failed: Ingestor error: store error: relation "chain1.blocks" does not exist, provider: settlemint, component: EthereumPollingBlockIngestor
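Since this only happens with 2 replicas behind pgpool, my suspicion is that the read-back after graph-node registers the chain gets load-balanced to a standby that hasn't replicated the write yet. A rough way to check this (a sketch only: the connection string and PG_PASS come from the logs and manifests in this issue, and public.chains is, as far as I can tell, the catalog table graph-node keeps in its primary shard):

# diagnostic sketch, run from any pod that can reach pgpool
PGURL="postgresql://postgres:$PG_PASS@asdf-eb068-postgres-pgpool:5432/asdf-eb068"

# 1. Which backend is the primary, and is the standby serving read traffic?
psql "$PGURL" -c "SHOW POOL_NODES;"

# 2. Is the chain registered in graph-node's catalog?
psql "$PGURL" -c "SELECT name, namespace FROM public.chains;"

# 3. Does the chain1.blocks relation exist when queried through pgpool?
psql "$PGURL" -c "SELECT to_regclass('chain1.blocks');"

# If the standby is taking reads, turning off read load balancing in pgpool.conf
# (load_balance_mode = off) or pointing graph-node straight at the primary should
# rule out stale reads caused by replication lag.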

I am connecting to 2 IPFS servers: 1 private and 1 public.

My index node deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    prometheus.io/path: /metrics
    prometheus.io/port: "8040"
    prometheus.io/scrape: "true"
    pulumi.com/deletionPropagationPolicy: background
    pulumi.com/patchForce: "true"
    pulumi.com/skipAwait: "false"
    reloader.stakater.com/auto: "true"
  creationTimestamp: "2025-06-17T15:14:23Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: 281d6511-59fb-4c2c-8663-6c7accc4e492
    app.kubernetes.io/name: asdf-eb068-index-node
    kots.io/app-slug: settlemint-platform
    settlemint.com/application-slug: test
    settlemint.com/logging: 281d6511-59fb-4c2c-8663-6c7accc4e492
    settlemint.com/service-type: HAGraphPostgresMiddleware
    settlemint.com/workspace-slug: adf
  name: asdf-eb068-index-node
  namespace: deployments
  resourceVersion: "7139"
  uid: 5c73b49c-4da1-4915-8b28-7c8311a215f4
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: 281d6511-59fb-4c2c-8663-6c7accc4e492
      app.kubernetes.io/name: asdf-eb068-index-node
      kots.io/app-slug: settlemint-platform
      settlemint.com/application-slug: test
      settlemint.com/logging: 281d6511-59fb-4c2c-8663-6c7accc4e492
      settlemint.com/service-type: HAGraphPostgresMiddleware
      settlemint.com/workspace-slug: adf
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        backup.velero.io/backup-volumes: ""
        prometheus.io/path: /metrics
        prometheus.io/port: "8040"
        prometheus.io/scrape: "true"
        pulumi.com/deletionPropagationPolicy: background
        pulumi.com/patchForce: "true"
        pulumi.com/skipAwait: "false"
        reloader.stakater.com/auto: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: 281d6511-59fb-4c2c-8663-6c7accc4e492
        app.kubernetes.io/name: asdf-eb068-index-node
        kots.io/app-slug: settlemint-platform
        settlemint.com/application-slug: test
        settlemint.com/logging: 281d6511-59fb-4c2c-8663-6c7accc4e492
        settlemint.com/restart: restart-a3cdd
        settlemint.com/service-type: HAGraphPostgresMiddleware
        settlemint.com/workspace-slug: adf
      name: asdf-eb068-index-node
      namespace: deployments
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: 281d6511-59fb-4c2c-8663-6c7accc4e492
                  app.kubernetes.io/name: asdf-eb068-index-node
                  kots.io/app-slug: settlemint-platform
                  settlemint.com/application-slug: test
                  settlemint.com/logging: 281d6511-59fb-4c2c-8663-6c7accc4e492
                  settlemint.com/service-type: HAGraphPostgresMiddleware
                  settlemint.com/workspace-slug: adf
              topologyKey: kubernetes.io/hostname
            weight: 100
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: 281d6511-59fb-4c2c-8663-6c7accc4e492
                  app.kubernetes.io/name: asdf-eb068-index-node
                  kots.io/app-slug: settlemint-platform
                  settlemint.com/application-slug: test
                  settlemint.com/logging: 281d6511-59fb-4c2c-8663-6c7accc4e492
                  settlemint.com/service-type: HAGraphPostgresMiddleware
                  settlemint.com/workspace-slug: adf
              topologyKey: topology.kubernetes.io/zone
            weight: 50
      containers:
      - command:
        - bash
        - /custom-bin/start.sh
        env:
        - name: BLOCK_INGESTOR
          value: asdf_eb068_index_node
        - name: node_id
          value: asdf_eb068_index_node
        - name: node_role
          value: index-node
        - name: ipfs
          value: https://ipfs.console.settlemint.com
        - name: PG_PASS
          valueFrom:
            secretKeyRef:
              key: postgres_pass
              name: asdf-eb068
        envFrom:
        - secretRef:
            name: asdf-eb068
        - configMapRef:
            name: asdf-eb068-env
        image: docker.io/graphprotocol/graph-node:v0.39.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - "echo -e \"GET /hello HTTP/1.1\r\nHost: localhost\r\n\r\n\" | nc -w
              1 localhost 8040 | grep -q \"ethereum_chain_head_number\" || (echo \"ethereum_chain_head_number
              not found\" && exit 1)"
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        name: index-node
        ports:
        - containerPort: 8000
          name: graphql
          protocol: TCP
        - containerPort: 8001
          name: graphql-ws
          protocol: TCP
        - containerPort: 8020
          name: json-rpc
          protocol: TCP
        - containerPort: 8040
          name: metrics
          protocol: TCP
        - containerPort: 8030
          name: index
          protocol: TCP
        - containerPort: 8050
          name: graphman
          protocol: TCP
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /
            port: metrics
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 50m
            memory: 512Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 2016
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /etc/graph-node
          name: config
          readOnly: true
        - mountPath: /custom-bin
          name: start
      dnsPolicy: ClusterFirst
      enableServiceLinks: false
      imagePullSecrets:
      - name: image-pull-secret-docker
      - name: image-pull-secret-ghcr
      - name: image-pull-secret-harbor
      initContainers:
      - args:
        - until nc -z -w2 asdf-eb068-postgres-pgpool 5432; do echo 'waiting for postgres';
          sleep 2; done;
        command:
        - /bin/sh
        - -c
        image: docker.io/graphprotocol/graph-node:v0.39.1
        imagePullPolicy: IfNotPresent
        name: wait-for-rta
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 2016
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 2016
        runAsNonRoot: true
        runAsUser: 2016
      serviceAccount: asdf-eb068
      serviceAccountName: asdf-eb068
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: kubernetes.io/arch
        operator: Equal
        value: arm64
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/instance: 281d6511-59fb-4c2c-8663-6c7accc4e492
            app.kubernetes.io/name: asdf-eb068-index-node
            kots.io/app-slug: settlemint-platform
            settlemint.com/application-slug: test
            settlemint.com/logging: 281d6511-59fb-4c2c-8663-6c7accc4e492
            settlemint.com/service-type: HAGraphPostgresMiddleware
            settlemint.com/workspace-slug: adf
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
      volumes:
      - name: config
        secret:
          defaultMode: 420
          secretName: asdf-eb068
      - configMap:
          defaultMode: 511
          name: asdf-eb068
        name: start
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2025-06-17T15:14:23Z"
    lastUpdateTime: "2025-06-17T15:15:43Z"
    message: ReplicaSet "asdf-eb068-index-node-5cb6fbf4b7" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2025-06-17T15:19:03Z"
    lastUpdateTime: "2025-06-17T15:19:03Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Configmap with env variables:

apiVersion: v1
data:
  ETHEREUM_POLLING_INTERVAL: "1000"
  EXPERIMENTAL_SUBGRAPH_VERSION_SWITCHING_MODE: synced
  GRAPH_ALLOW_NON_DETERMINISTIC_FULLTEXT_SEARCH: "true"
  GRAPH_ALLOW_NON_DETERMINISTIC_IPFS: "true"
  GRAPH_CHAIN_HEAD_WATCHER_TIMEOUT: "5"
  GRAPH_DISABLE_GRAFTS: "false"
  GRAPH_ENABLE_PROMETHEUS_METRICS: "true"
  GRAPH_ETH_CALL_GAS: "50000000"
  GRAPH_ETHEREUM_BLOCK_BATCH_SIZE: "100"
  GRAPH_ETHEREUM_BLOCK_INGESTOR_MAX_CONCURRENT_JSON_RPC_CALLS: "100"
  GRAPH_ETHEREUM_CLEANUP_BLOCKS: "true"
  GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE: "1000"
  GRAPH_ETHEREUM_REQUEST_RETRIES: "10"
  GRAPH_ETHEREUM_TARGET_TRIGGERS_PER_BLOCK_RANGE: "100"
  GRAPH_GETH_ETH_CALL_ERRORS: out of gas
  GRAPH_IPFS_TIMEOUT: "30"
  GRAPH_KILL_IF_UNRESPONSIVE: "true"
  GRAPH_LOAD_BIN_SIZE: "10"
  GRAPH_LOAD_WINDOW_SIZE: "3600"
  GRAPH_LOG: info
  GRAPH_LOG_LEVEL: debug
  GRAPH_LOG_QUERY_TIMING: gql
  GRAPH_MAX_GAS_PER_HANDLER: "1_000_000_000_000_000"
  GRAPH_MAX_SPEC_VERSION: 1.2.0
  GRAPH_NODE_CONFIG: /etc/graph-node/config.toml
  GRAPH_PARALLEL_BLOCK_CONSTRAINTS: "true"
  GRAPH_POSTPONE_ATTRIBUTE_INDEX_CREATION: "true"
  GRAPH_PROMETHEUS_HOST: 0.0.0.0
  GRAPH_QUERY_CACHE_BLOCKS: "6"
  GRAPH_QUERY_CACHE_MAX_MEM: "3000"
  GRAPH_QUERY_CACHE_STALE_PERIOD: "1000"
  GRAPH_STATIC_FILTERS_THRESHOLD: "10000"
  GRAPH_STORE_WRITE_BATCH_DURATION: "0"
  GRAPH_STORE_WRITE_BATCH_SIZE: "0"
  GRAPHMAN_SERVER_AUTH_TOKEN: 83804301ee9c2f134886
  RUST_BACKTRACE: "1"
  RUST_LOG: debug
  SUBGRAPH: kit:QmZzstuwHww8ppr1jbxGRA2Wb6crGxRHt1LWybNoQWvpgX
kind: ConfigMap
metadata:
  annotations:
    prometheus.io/scrape: "true"
    pulumi.com/deletionPropagationPolicy: background
    pulumi.com/patchForce: "true"
    pulumi.com/skipAwait: "false"
    reloader.stakater.com/auto: "true"
  creationTimestamp: "2025-06-17T15:14:22Z"
  labels:
    app.kubernetes.io/instance: 281d6511-59fb-4c2c-8663-6c7accc4e492
    app.kubernetes.io/name: asdf-eb068-env
    kots.io/app-slug: settlemint-platform
    settlemint.com/application-slug: test
    settlemint.com/logging: 281d6511-59fb-4c2c-8663-6c7accc4e492
    settlemint.com/service-type: HAGraphPostgresMiddleware
    settlemint.com/workspace-slug: adf
  name: asdf-eb068-env
  namespace: deployments
  resourceVersion: "6831"
  uid: 5197f588-d851-4fbb-a68f-aa71e90ba331

Relevant log output


IPFS hash

QmZzstuwHww8ppr1jbxGRA2Wb6crGxRHt1LWybNoQWvpgX

Subgraph name or link to explorer

No response

Some information to help us out

  • [ ] Tick this box if this bug is caused by a regression found in the latest release.
  • [x] Tick this box if this bug is specific to the hosted service.
  • [x] I have searched the issue tracker to make sure this issue is not a duplicate.

OS information

Linux

insider89 · Jun 17 '25 15:06