mission-control-indexer
Discrepancies on subgraph stakes
Currently I'm effectively out of GRT to stake on my subgraphs, even though I should (theoretically) have enough for the ones I've configured.
My indexing rules are as follows:
- global -> allocationAmount 20k, 2 parallel allocations (default), minSignal 75, minStake 100
- Uniswap -> allocationAmount 100k
- Balancer -> allocationAmount 100k
- UMA -> allocationAmount 200k
According to my calculations (and my Grafana dashboard), I'm indexing 59 subgraphs with those rules, which should use around 3M GRT (20k * 2 * 56, plus 800k for UMA + Balancer + Uniswap). Since I have 3.9M GRT, I wasn't expecting any issues with these rules, but the agent is failing to stake on the Balancer subgraph due to insufficient funds.
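As a quick sanity check, here's that math spelled out (a TypeScript sketch of my own arithmetic, nothing here comes from the agent itself):

```typescript
// Expected stake usage under the rules above.
// 59 indexed subgraphs = 56 on the global rule + 3 with explicit rules.
const globalUsage = 56 * 2 * 20_000;                     // 2 parallel allocations of 20k each
const explicitUsage = 2 * (100_000 + 100_000 + 200_000); // Uniswap + Balancer + UMA, also 2 parallel each
console.log(globalUsage + explicitUsage);                // 3040000, well under my 3.9M stake
```

And yet the agent reports: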
```
[2020-11-16 16:15:28.779 +0000] WARN (IndexerAgent/1 on indexer-agent-587bf8fb6-w6269): Failed to reconcile indexer and network:
    err: {
      "type": "Error",
      "message": "Failed to allocate 100000.0 GRT to '0xc28e9c1c32b51fa39ca4586716dcd0ab70ff5ae41c9d2690806138834a800911': indexer only has 11362.567738266921047099 GRT stake free for allocating",
      "stack":
          Error: Failed to allocate 100000.0 GRT to '0xc28e9c1c32b51fa39ca4586716dcd0ab70ff5ae41c9d2690806138834a800911': indexer only has 11362.567738266921047099 GRT stake free for allocating
              at Network.<anonymous> (/opt/indexer/packages/indexer-agent/dist/network.js:444:23)
              at Generator.next (<anonymous>)
              at fulfilled (/opt/indexer/packages/indexer-agent/dist/network.js:24:58)
              at runMicrotasks (<anonymous>)
              at processTicksAndRejections (internal/process/task_queues.js:97:5)
    }
```
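For reference, the check that's failing seems to be a simple free-stake guard. Roughly like the sketch below, though this is a paraphrase on my part, not the actual network.js code, and the function name is made up:

```typescript
// Illustrative paraphrase of the agent's allocation guard; the real
// indexer-agent works with BigNumber wei amounts, not plain numbers.
function assertFreeStake(totalStake: number, activeAllocated: number, requested: number): void {
  const freeStake = totalStake - activeAllocated;
  if (requested > freeStake) {
    throw new Error(
      `Failed to allocate ${requested} GRT: indexer only has ${freeStake} GRT stake free for allocating`,
    );
  }
}
```

So the agent apparently believes nearly all of my 3.9M is already allocated.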
Checking the explorer, it looks as if I've actually exhausted all of my stake (3.9M).
Checking the tsuki-graph app, it looks like I've only staked on 51 subgraphs (instead of the 59 I should be staking on), and I also have more than 2 parallel allocations on one of the Uniswap subgraphs (QmRuorV4Ck1sVdpfpAAwfYXnf3cfSkbDwZvvzWud9SH8Dg):
https://tsuki-graph.netlify.app/user/0x9bae7565cdebe993ff6198c82f4b71c101d491f2
Here are my indexing rules (merged and non-merged): merged.txt non-merged.txt
Here's a log of the indexer-agent, as suggested by @trader-payne: agent.log
You know what's really weird: based on your rules, only three allocations should be working, if my current understanding of the logic behind `decisionBasis never` is correct.

```
QmbS7vmAWXJqsJEnAbvNSBYQXxCUmZWHFQygHPbyb3vy3N │ 100000.0 │ rules
QmWTrJJ9W8h3JE19FhCzzPYsJ2tgXZCdUqnbyuo64ToTBN │ 100000.0 │ rules
QmVviUCWpxxZuwekyHe87vspJsaugG1KDkfqEwjm3788ku │ 200000.0 │ rules
```

These are the only allocations with `decisionBasis` NOT set to `never`. But I'm not entirely sure whether having `minStake` or `minSignal` overrides setting a subgraph to `decisionBasis never`; in my opinion, it shouldn't. A hypothetical sketch of the precedence I'd expect is below.
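To be concrete, this is the precedence I'd expect, as a purely hypothetical sketch (not the agent's actual implementation, and the types are illustrative):

```typescript
type DecisionBasis = 'rules' | 'never' | 'always';

interface IndexingRule {
  minSignal?: number;
  minStake?: number;
  decisionBasis?: DecisionBasis;
}

// Hypothetical precedence: a deployment-specific rule is merged over the
// global rule, and `never` short-circuits before any threshold is considered.
function shouldIndex(
  global: IndexingRule,
  deployment: IndexingRule | undefined,
  signal: number,
  stake: number,
): boolean {
  const rule: IndexingRule = { ...global, ...deployment };
  if (rule.decisionBasis === 'never') return false; // never wins outright
  if (rule.decisionBasis === 'always') return true;
  // 'rules' (or unset): fall back to the signal/stake thresholds
  return signal >= (rule.minSignal ?? 0) && stake >= (rule.minStake ?? 0);
}
```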
@juanmardefago when did you last change the rules for these allocations? Maybe the agent didn't have time to settle down the old allocations. Did you have the other subgraphs set to `decisionBasis rules` before this?
@trader-payne I've had these indexing rules for about a week now
As far as I understand, since I have a global setting, all subgraphs with more than 75 signal and 100 stake will be indexed unless they appear in the indexing rules table with a `never` decisionBasis. So there are lots of subgraphs that are being indexed but not shown in the table.
Apart from that, I handle those 3 particular ones with a higher allocation amount, and I also handle a lot of other ones with a `never` decisionBasis, since they are either non-mainnet subgraphs (Ropsten, Rinkeby and xDai ones that slipped into this phase by mistake) or subgraphs that are currently broken (basically every subgraph that fails and shows up in the Grafana dashboard).
Oh, and by the way, I've had similar indexing rules ever since they were released (before that I only had a global setting with minStake 1 and allocationAmount 1).
> As far as I understand, since I have a global setting, all subgraphs that have more than 75 signal and 100 stake would be indexed unless they appear on the indexing rules table with a never decisionBasis. So there are lots of subgraphs that would be indexed and not shown on the table
Yeah, you're right, I missed that detail.
Now looking at your allocations based on both of the available explorers, I see the following.
On @alexo382's explorer you have:
protofire - 51 subgraphs | 107 allocations | 3.9m GRT
Maybe Alex can explain the logic behind that 3.9M GRT number, because it differs from my own calculation based on the numbers displayed there: by my math, you should have around 1.36M GRT allocated.
On the official Graph explorer page the numbers differ, but obviously the allocations aren't fully parsed there and they're capped at 100 (a known bug, to be fixed in the next explorer version).
It's really, really weird 😐 I may have an idea of what happened, something that Jannis mentioned a while ago when I was looking into much the same kind of issue: some allocations don't close at the right time, which might have thrown you into a continuous loop with more GRT allocated than you actually have.
Just saw your comment. I just finished updating my dashboard to show both the amount the user has staked and the sum of their allocations. I'm also removing the allocations that come with a `null` `originalName`; I've seen this happen for several users. Strange.
In protofire's case, it looks something like this:
protofire - 79 subgraphs | 165 allocations | 3,912,587.444 GRT staked | 3,820,000 GRT allocated
So here's the breakdown as # of subgraphs * # of allocations * allocatedTokens:
- 75 * 2 * 20000
- 1 * 6 * 20000
- 1 * 2 * 200000
- 1 * 2 * 100000
- 1 * 1 * 100000

This adds up to 3,820,000 GRT. Not sure if this is entirely correct, but we're getting closer to the truth :D
Note: if we also count the `null` allocations, we get 3.86M, which would probably get rounded up to 3.9M.
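Double-checking that breakdown with a quick TypeScript sketch (the per-row labels are my guesses, not confirmed mappings):

```typescript
// subgraphs * parallel allocations * allocationAmount for each group above
const groups: Array<[number, number, number]> = [
  [75, 2, 20_000],  // deployments on the global 20k rule
  [1, 6, 20_000],   // the deployment with 6 parallel allocations
  [1, 2, 200_000],  // presumably UMA
  [1, 2, 100_000],  // presumably Balancer or Uniswap
  [1, 1, 100_000],  // the remaining 100k rule
];
const total = groups.reduce((sum, [n, allocs, amount]) => sum + n * allocs * amount, 0);
console.log(total.toLocaleString()); // "3,820,000"
```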
Thanks for the input, @alexo382!
Taking those stats into consideration, it might mean that the subgraphs with a `never` decisionBasis are being allocated anyway (plus some other subgraphs? I can only account for 70 subgraphs in my infrastructure according to Grafana).
By the way, the explorer shows 12.6k GRT available, so there might be another thing we're not accounting for, since even with the `null` allocations the math still doesn't 100% check out :/
@fordN can you have a look whenever you have time?
@Jannis, hi! This is what I was talking about (in Discord). I have 1 parallelAllocation, and now:
- There are duplicates
- These stakes have taken up the allocations of other subgraphs that are in my rules
Duplicates:
Deployment | Allocated | Started
---|---|---
Qma3LJSzh4q5RGS5inHMLiGCBVTQ16PnGeRJscCDcQsWkH | 198.0k GRT | 4 hours ago
Qma3LJSzh4q5RGS5inHMLiGCBVTQ16PnGeRJscCDcQsWkH | 198.0k GRT | 2 days ago
QmbS7vmAWXJqsJEnAbvNSBYQXxCUmZWHFQygHPbyb3vy3N | 1.0m GRT | 4 hours ago
QmbS7vmAWXJqsJEnAbvNSBYQXxCUmZWHFQygHPbyb3vy3N | 1.0m GRT | 2 days ago
QmbykNCTDoRx5NMgJgoXY7YARoV3xTRNfs4cWifsc3fm7u | 660.0k GRT | 4 hours ago
QmbykNCTDoRx5NMgJgoXY7YARoV3xTRNfs4cWifsc3fm7u | 660.0k GRT | a day ago
QmdoMvZbk2YDyDVQjhS73RfqymffMxz4EW1uS9RhEEfqog | 267.0k GRT | 14 hours ago
QmdoMvZbk2YDyDVQjhS73RfqymffMxz4EW1uS9RhEEfqog | 267.0k GRT | 2 days ago
QmZ92X4yhsjb7hnttAQMHeG1mH9C2SbLmiSGirHpU74iLW | 198.0k GRT | 4 hours ago
QmZ92X4yhsjb7hnttAQMHeG1mH9C2SbLmiSGirHpU74iLW | 198.0k GRT | a day ago
QmZWZWLQqY7FqsjjTsShheSZDSmj1Q3RLogNSwME3yep4Q | 377.0k GRT | 4 hours ago
QmZWZWLQqY7FqsjjTsShheSZDSmj1Q3RLogNSwME3yep4Q | 377.0k GRT | 2 days ago
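For anyone who wants to reproduce this from raw allocation data, here's a small sketch that surfaces exactly these duplicates (the `Allocation` shape is illustrative, not a real API type):

```typescript
// Group active allocations by deployment and keep only the deployments
// that have more than one, i.e. the duplicates shown above.
interface Allocation {
  deployment: string;
  allocatedTokens: number;
  createdAt: string;
}

function findDuplicates(allocations: Allocation[]): Map<string, Allocation[]> {
  const byDeployment = new Map<string, Allocation[]>();
  for (const a of allocations) {
    const group = byDeployment.get(a.deployment) ?? [];
    group.push(a);
    byDeployment.set(a.deployment, group);
  }
  return new Map([...byDeployment].filter(([, group]) => group.length > 1));
}
```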
Agent log:
```
Nov 17 00:17:24 graph-indexer-agent[28340]: {"level":40,"time":1605568644355,"pid":28340,"hostname":"Graph-TN-Legiojuve","name":"IndexerAgent","err":{"type":"Error","message":"Failed to allocate 409000.0 GRT to '0x774ee8d63615c56409970fffddd06e692a219a224e83f706f067a20814747d73': indexer only has 196132.943757249204226918 GRT stake free for allocating","stack":"Error: Failed to allocate 409000.0 GRT to '0x774ee8d63615c56409970fffddd06e692a219a224e83f706f067a20814747d73': indexer only has 196132.943757249204226918 GRT stake free for allocating\n    at Network.<anonymous> (/usr/lib/node_modules/@graphprotocol/indexer-agent/dist/network.js:444:23)\n    at Generator.next (<anonymous>)\n    at fulfilled (/usr/lib/node_modules/@graphprotocol/indexer-agent/dist/network.js:24:58)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:93:5)"},"msg":"Failed to reconcile indexer and network:"}
```
Rules:
deployment | allocationAmount | parallelAllocations | maxAllocationPercentage | minSignal | maxSignal | minStake | minAverageQueryFees | custom | decisionBasis
---|---|---|---|---|---|---|---|---|---
global | 1 | | | | | | | | rules
QmdoMvZbk2YDyDVQjhS73RfqymffMxz4EW1uS9RhEEfqog | 267000.0 | | | | | | | | always
QmbykNCTDoRx5NMgJgoXY7YARoV3xTRNfs4cWifsc3fm7u | 660000.0 | | | | | | | | always
QmbS7vmAWXJqsJEnAbvNSBYQXxCUmZWHFQygHPbyb3vy3N | 1018000.0 | | | | | | | | always
Qma3LJSzh4q5RGS5inHMLiGCBVTQ16PnGeRJscCDcQsWkH | 198000.0 | | | | | | | | always
QmZWZWLQqY7FqsjjTsShheSZDSmj1Q3RLogNSwME3yep4Q | 377000.0 | | | | | | | | always
QmZ92X4yhsjb7hnttAQMHeG1mH9C2SbLmiSGirHpU74iLW | 198000.0 | | | | | | | | always
QmWNP1jqMZm61G8FbsLzi9Mnii5qRXbkadBxXVHZPqzKri | 409000.0 | | | | | | | | always
Others | ........ | | | | | | | | always
Just realized that with the new change on @alexo382's site (https://tsuki-graph.netlify.app/user/0x9bae7565cdebe993ff6198c82f4b71c101d491f2) I have stake on the xDai subgraphs, which I'm quite sure I've set to `never` and which are not being indexed.
Shouldn't their allocations be closed? Or should I also manually set them to `allocationAmount = null`?
`decisionBasis never` should turn everything off, from my understanding. Hopefully Ford can clarify once he sees the tag.
Just to confirm, I'm filtering for `status: "Active"` before displaying the allocations on the frontend, so @juanmardefago's xDai allocations are definitely active. (A sketch of the query is below.)
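For transparency, this is roughly how the frontend fetches them; the endpoint URL and exact field names here are from memory, so treat them as assumptions:

```typescript
// Roughly the query the frontend runs against the network subgraph.
const ENDPOINT = 'https://api.thegraph.com/subgraphs/name/graphprotocol/graph-network-mainnet';

const QUERY = `{
  allocations(
    first: 1000
    where: { indexer: "0x9bae7565cdebe993ff6198c82f4b71c101d491f2", status: Active }
  ) {
    id
    allocatedTokens
    subgraphDeployment { id }
  }
}`;

async function fetchActiveAllocations() {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: QUERY }),
  });
  const { data } = await res.json();
  return data.allocations;
}
```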
With `parallelAllocations = 1`: 7 hours ago, without my doing anything, there were again transactions that duplicated allocations.
My allocations have been stuck for more than 20h, even though I set everything to `allocationAmount 0.01` and `parallelAllocations 0` more than 12h ago.