graph-node
[Bug] graphman drop sgdXXX results in all deployments with same deployment hash being dropped
Bug report
In an attempt to drop one copy of a deployment that has multiple copies, I used graphman drop. The confirmation prompt made no mention that multiple deployments would be removed, yet both copies were removed when I confirmed. Is this expected behavior? It does not seem like it from the user's perspective.
Relevant log output
$ graphman --config graph_node_config.toml drop sgd1591
Found 1 deployment(s) to remove:
name | indexer-agent/yGw4wgaMhW
deployment | QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW
Continue? [y/N] y
unassigning QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW[1591]
Removing subgraph indexer-agent/yGw4wgaMhW
Recording unused deployments. This might take a while.
id | 603
shard | arbitrum
namespace | sgd603
subgraphs |
entities | 4117337
----------+-------------------------------------------------------------------
id | 1591
shard | arbitrum_vip
namespace | sgd1591
subgraphs |
entities | 4859231
Recorded 2 unused deployments
==================================== 1 ====================================
removing sgd603 from arbitrum
deployment id: QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW
entities: 4117337
done removing sgd603 from arbitrum in 2.2s
==================================== 2 ====================================
removing sgd1591 from arbitrum_vip
deployment id: QmTMKqty5yZvZtB3SwzXUG92aZUH1YQw3VjByGw4wgaMhW
entities: 4859231
done removing sgd1591 from arbitrum_vip in 1.0s
IPFS hash
No response
Subgraph name or link to explorer
No response
Some information to help us out
- [ ] Tick this box if this bug is caused by a regression found in the latest release.
- [ ] Tick this box if this bug is specific to the hosted service.
- [X] I have searched the issue tracker to make sure this issue is not a duplicate.
OS information
Linux
You can remove a single namespace using the following commands:
- graphman unassign sgdxxx
- graphman unused record
- graphman unused remove
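A minimal sketch of that sequence, assuming the graph_node_config.toml and the sgd1591 namespace from the log above (substitute your own config path and namespace):

```sh
# Unassign only the copy you want to remove (sgd1591 here, per the log above)
graphman --config graph_node_config.toml unassign sgd1591

# Record deployments that are no longer in use
graphman --config graph_node_config.toml unused record

# Remove the deployments that were just recorded as unused
graphman --config graph_node_config.toml unused remove
```

Before running the final remove step, it is worth reviewing what the record step reports as unused, since remove acts on everything that was recorded.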
Hau (Pinax) ran into this bug in December, and Tehn (another indexer) ran into it today.
Just adding some signal to this. It would be great if we could get this fixed as soon as possible. It's a major setback when subgraphs that take months to sync are lost to bugs.
Looks like this issue has been open for 6 months with no activity. Is it still relevant? If not, please remember to close it.