Hydra upgrade

Open • petar opened this issue 4 years ago • 1 comment

We've identified a number of weaknesses in the hydra design and implementation, which cause ungraceful failures (worker crashes) and downtime when utilization spikes. The problem occurred during the window 7/7/2021-7/21/2021.

Problem analysis (theory)

The backend Postgres database can become overloaded under a high volume of DHT requests to the hydras. Query times to the database then increase, which causes DHT requests to back up in the provider manager loop, which in turn causes the hydra nodes to crash.
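
For illustration, here is a minimal Go sketch of this failure mode. It is not the actual go-libp2p-kad-dht provider manager code; the names (`providerRecord`, `slowStore`) and latencies are made up. The point is that a single consumer goroutine writes each record to the datastore synchronously, so once datastore latency rises its throughput drops below the arrival rate and the queue grows until the buffer fills (or, with unbounded queuing, until memory runs out).

```go
// backlog.go: a simplified illustration of the failure mode described
// above -- NOT the actual go-libp2p-kad-dht provider manager. A single
// loop drains incoming provider records and writes each one to the
// backing datastore synchronously, so any rise in datastore (Postgres)
// latency caps the loop's throughput and lets the queue grow.
package main

import (
	"fmt"
	"time"
)

type providerRecord struct {
	key, peer string
}

// slowStore stands in for a write to the Postgres-backed datastore; its
// latency is the variable that drives the backlog.
func slowStore(rec providerRecord) {
	time.Sleep(50 * time.Millisecond) // pretend the DB is under load
}

func main() {
	incoming := make(chan providerRecord, 1024)

	// Single consumer: at 50 ms per write it handles at most ~20 records/s.
	go func() {
		for rec := range incoming {
			slowStore(rec) // blocks the whole loop on a slow backend
		}
	}()

	// Producer pushes records faster (~100/s) than the loop can drain them.
	go func() {
		for i := 0; ; i++ {
			incoming <- providerRecord{key: fmt.Sprintf("cid-%d", i), peer: "peer"}
			time.Sleep(10 * time.Millisecond)
		}
	}()

	// Watch the backlog grow until the buffer is full and producers stall
	// (or, with an unbounded queue, until memory is exhausted).
	for i := 0; i < 5; i++ {
		time.Sleep(2 * time.Second)
		fmt.Printf("queued records: %d/%d\n", len(incoming), cap(incoming))
	}
}
```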

Corrective steps

  • [x] Ensure the entire fleet of hydra heads (across machines) always uses the same sequence of balanced IDs: https://github.com/libp2p/hydra-booster/issues/128 Resolved by https://github.com/libp2p/hydra-booster/pull/130
  • [x] Ensure ID/address mappings persist across restarts (design goal)
  • [x] Fix aggregate metrics to use fast approximate Postgres queries (as opposed to slow exact queries) https://github.com/libp2p/hydra-booster/issues/133
  • [ ] Upgrades in DHT provider manager:
    • [ ] Use multiple threads in the provider loop (diminishes the effect of individual straggler requests to the datastore; see the worker-pool sketch after this list) https://github.com/libp2p/go-libp2p-kad-dht/issues/729
    • [ ] Gracefully degrade quality of service when under load (see the load-shedding sketch after this list) https://github.com/libp2p/go-libp2p-kad-dht/pull/730
    • [ ] Fully decline service at a configurable peak level of load
  • [ ] Monitor (via metrics) the query latency of the backing Postgres database (at the infra level)
  • [ ] Set up automatic pprof dumps near out-of-memory events, perhaps using https://github.com/ipfs-shipyard/go-dumpotron (at infra level)
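
To make the provider-loop item concrete, below is a minimal sketch of the fan-out idea discussed in https://github.com/libp2p/go-libp2p-kad-dht/issues/729, assuming provider-record writes are independent of one another. The names (`runProviderWorkers`, `store`, `numWorkers`) are illustrative and not the actual DHT API: N workers drain a shared queue, so a single straggler datastore write stalls only one worker rather than the whole loop.

```go
// Sketch of a multi-worker provider loop. Names and types are
// placeholders, not the go-libp2p-kad-dht implementation.
package main

import (
	"context"
	"sync"
)

type providerRecord struct {
	key, peer string
}

// store stands in for a write to the Postgres-backed datastore.
func store(ctx context.Context, rec providerRecord) error { return nil }

// runProviderWorkers fans the incoming queue out to numWorkers goroutines
// and blocks until the queue is closed and fully drained.
func runProviderWorkers(ctx context.Context, incoming <-chan providerRecord, numWorkers int) {
	var wg sync.WaitGroup
	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for rec := range incoming {
				// A slow write here delays only this worker; the other
				// workers keep draining the queue.
				_ = store(ctx, rec)
			}
		}()
	}
	wg.Wait()
}

func main() {
	incoming := make(chan providerRecord, 1024)
	go func() {
		for i := 0; i < 100; i++ {
			incoming <- providerRecord{key: "cid", peer: "peer"}
		}
		close(incoming)
	}()
	runProviderWorkers(context.Background(), incoming, 8)
}
```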
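
And a rough sketch of the load-shedding items, using queue depth as the load signal. This is only an assumed shape, not what https://github.com/libp2p/go-libp2p-kad-dht/pull/730 actually implements: above a soft limit the manager degrades quality (here, by dropping a fraction of provider writes), and above a hard limit it declines new work outright instead of letting the backlog grow.

```go
// Sketch of queue-depth-based load shedding for the provider manager.
// All names and thresholds are hypothetical.
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

var errOverloaded = errors.New("provider manager overloaded, declining request")

type providerRecord struct{ key, peer string }

type shedder struct {
	queue     chan providerRecord
	softLimit int // start degrading above this depth
	hardLimit int // refuse all new work above this depth
}

// enqueue applies the two thresholds before admitting a record.
func (s *shedder) enqueue(rec providerRecord) error {
	depth := len(s.queue)
	switch {
	case depth >= s.hardLimit:
		return errOverloaded // fully decline at peak load
	case depth >= s.softLimit && rand.Intn(2) == 0:
		return nil // degrade: silently drop roughly half of the writes
	default:
		s.queue <- rec
		return nil
	}
}

func main() {
	s := &shedder{
		queue:     make(chan providerRecord, 4096),
		softLimit: 2048,
		hardLimit: 3584, // kept below cap to leave headroom for in-flight sends
	}
	if err := s.enqueue(providerRecord{key: "cid", peer: "peer"}); err != nil {
		fmt.Println(err)
	}
}
```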

Acceptance criteria

  • Verify that a sustained increase in request load at the hydra level does not propagate to the Postgres backing datastore. This should be ensured by the graceful quality-degradation measures (above) in the DHT provider manager.

petar • Jul 29 '21 13:07

@petar: thanks for putting this together. A few comments/questions that come to mind:

  1. I'm not saying we need to backfill now, but in the future I think it would be ideal to include the data that led us to our theory.
  2. Do we know why we're crashing now vs. not previously?
  3. What's the impact to Hydra nodes crashing? Does the whole network see impact? Or is our ability to monitor/inspect the network impaired?
  4. Is there anything else, architecturally or infra-wise, that we could do that would help here? I'm not saying we should, but, for example, would AWS RDS Postgres Aurora help?

You don't need to answer these questions here. They are the things that came to mind while reading this.

BigLep • Aug 03 '21 05:08