feat(cmd/rpc): Automatically detect running node for RPC requests
Closes #2859.
TL;DR: Previously, the node.store flag had to be specified manually for each RPC request. This commit introduces automatic detection of the running node.
Assumptions:
- the presence of a lock indicates a running node
- detection follows a fixed order of networks (mainnet, mocha, arabica, private) and node types (bridge, full, light); a sketch of this resolution order follows the list
- a network will only have one running node of a given type; multiple nodes of the same network and type are disallowed (they fail with Error: node: store is in use)
- auth token and other flags retain their previous behavior
- aligns with Unix daemon conventions.
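To make the resolution order concrete, here is a minimal Go sketch of the detection described above. It is illustrative only: the store layout (~/.celestia-<type>[-<network>]), the .lock file name, and the helper names are assumptions rather than the exact code in this PR, and the lock check is simplified to a file-existence test.

```go
// Illustrative sketch only: paths, file names, and function names are
// assumptions for demonstration, not the exact implementation in this PR.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Networks and node types are probed in the fixed order from the assumptions above.
var (
	networks  = []string{"mainnet", "mocha", "arabica", "private"}
	nodeTypes = []string{"bridge", "full", "light"}
)

// defaultStorePath approximates the conventional store location,
// e.g. ~/.celestia-light or ~/.celestia-light-mocha (suffixes vary per network).
func defaultStorePath(nodeType, network string) string {
	home, _ := os.UserHomeDir()
	name := ".celestia-" + nodeType
	if network != "mainnet" {
		name += "-" + network
	}
	return filepath.Join(home, name)
}

// detectRunningStore returns the first store that appears to belong to a
// running node. The check here is simplified to "the lock file exists";
// the real check should be built around the flock itself.
func detectRunningStore() (string, bool) {
	for _, network := range networks {
		for _, nodeType := range nodeTypes {
			store := defaultStorePath(nodeType, network)
			if _, err := os.Stat(filepath.Join(store, ".lock")); err == nil {
				return store, true
			}
		}
	}
	return "", false
}

func main() {
	if store, ok := detectRunningStore(); ok {
		fmt.Println("using node store:", store)
		return
	}
	fmt.Println("no running node detected; pass --node.store explicitly")
}
```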
Sample Test Cases
- Node store set
❯ celestia blob get 1318129 0x42690c204d39600fddd3 0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY= --node.store=$NODE_STORE
{
"result": {
"namespace": "AAAAAAAAAAAAAAAAAAAAAAAAAEJpDCBNOWAP3dM=",
"data": "0x676d",
"share_version": 0,
"commitment": "0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY=",
"index": 23
}
}
- Node store not set, but flag specified
❯ celestia blob get 1318129 0x42690c204d39600fddd3 0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY= --node.store=
Error: cant get the access to the auth token: root directory was not specified
- No node store flag specified, yay
❯ celestia blob get 1318129 0x42690c204d39600fddd3 0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY=
{
"result": {
"namespace": "AAAAAAAAAAAAAAAAAAAAAAAAAEJpDCBNOWAP3dM=",
"data": "0x676d",
"share_version": 0,
"commitment": "0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY=",
"index": 23
}
}
- Multiple networks running, will go to mainnet before mocha
❯ celestia blob get 1318129 0x42690c204d39600fddd3 0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY=
{
"result": "RPC client error: sendRequest failed: http status 401 Unauthorized unmarshaling response: EOF"
}
@Wondertan thanks for comments! Updated
also, note that flock now leaves .lock file in directory - https://github.com/gofrs/flock/blob/master/flock.go#L54
> also, note that flock now leaves .lock file in directory
I am aware and that's ok. The locking functionality should be around the flock syscall, not the existence of the file.
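For readers following along, here is a minimal sketch of a flock-based check, assuming the gofrs/flock package linked above; the .lock file name and the storeInUse helper are illustrative, not the PR's actual code.

```go
// Illustrative only: a node store counts as "in use" when the flock is held,
// not merely because the .lock file exists on disk (flock leaves it behind).
package main

import (
	"fmt"
	"path/filepath"

	"github.com/gofrs/flock"
)

// storeInUse reports whether another process currently holds the store lock.
// The ".lock" file name is an assumption for this sketch.
func storeInUse(storePath string) (bool, error) {
	fl := flock.New(filepath.Join(storePath, ".lock"))
	locked, err := fl.TryLock()
	if err != nil {
		return false, err
	}
	if locked {
		// We acquired the lock ourselves, so no node is running; release it.
		_ = fl.Unlock()
		return false, nil
	}
	return true, nil
}

func main() {
	inUse, err := storeInUse(filepath.Join("/home/user", ".celestia-light"))
	if err != nil {
		fmt.Println("lock check failed:", err)
		return
	}
	fmt.Println("store in use:", inUse)
}
```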
Hey @mastergaurang94. Gentle ping. We would like to get this PR merged and are wondering if you are going to finish it up
@Wondertan yes. Been open too long! Will aim to merge this week. Thank you 🙏🏽
@Wondertan anything else?
I think that's all. I already skimmed through the code and it's LGTM. The next step is to get another review from a team member
Haven't seen the approval in a long time 🥹 Thank you for lending your time & energy to this review @Wondertan! Had fun
cc @jcstein for visibility
Cross-posting from Slack:
This flag also appears:
- in the cel-key utility: https://docs.celestia.org/developers/celestia-node-key#steps-for-generating-node-keys
- for using a different directory than the standard dir: https://docs.celestia.org/nodes/celestia-node-troubleshooting#changing-the-location-of-your-node-store
these are the two that I found on a quick search
what impact does this have on cel-key utility or someone wishing to use a different directory than the default?
> what impact does this have on cel-key utility or someone wishing to use a different directory than the default?
@jcstein No impact on cel-key, and I think, as it stands, cel-key does not integrate automatic path resolution.
For non-default path users, they have to specify the path using the flag.
> For non-default path users, they have to specify the path using the flag.
Got it, thanks @Wondertan. If I were running with a non-default node store, making an RPC request, I would still use the node.store flag, correct?
> No impact on cel-key, and I think, as it stands, cel-key does not integrate automatic path resolution.
Correction after looking at the code: cel-key does have automatics, but they aren't as smart as in this PR, and this PR doesn't change those
> Got it, thanks @Wondertan. If I were running with a non-default node store, making an RPC request, I would still use the node.store flag, correct?
Yes
Thanks @vgonkivs & @walldiss! Also, PR will need to be labeled by someone who has access for merge
gm @mastergaurang94 - I'm working on the docs for this. Can you help me understand which network(s) you were running node(s) on for test cases 4-6?
> - Multiple networks running, will go to mainnet before mocha
> ❯ celestia blob get 1318129 0x42690c204d39600fddd3 0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY= { "result": "RPC client error: sendRequest failed: http status 401 Unauthorized unmarshaling response: EOF" }
I'm assuming you had a mocha node running and the request was sent to the mainnet node, but it looks like the call failed?
> - Multiple networks running, will go to mocha before arabica
> ❯ celestia blob get 1318129 0x42690c204d39600fddd3 0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY= { "result": "header: given height is from the future: networkHeight: 802095, requestedHeight: 1318129" }
I'm guessing you had mocha and arabica running?
> - Run node with rpc config Address: 0.0.0.1 and Port: 25231 -- accurately directs request
> ❯ celestia blob get 1318129 0x42690c204d39600fddd3 0MFhYKQUi2BU+U1jxPzG7QY2BVV1lb3kiU+zAK7nUiY= { "result": "RPC client error: sendRequest failed: Post \"http://0.0.0.1:25231\": dial tcp 0.0.0.1:25231: connect: no route to host" }
It looks like this request fails?
hey @jcstein, sure:
- mainnet full, mocha light. re: fail, I confirmed the request went to the right node, not the validity of the response. The 401 showed up in the mainnet node logs (neither that blob nor its namespace exists there)
- yes, both mocha full & arabica light. re: header in the future, I had deleted my node store, so the new one hadn't caught up yet.
main thing: node types are unique. These were the cases I listed, but any variation will work
- similar to the above, the request directs to http://0.0.0.1:25231. It failed because I wasn't running anything at that address; if I was, it would have gone to that node
awesome, thanks for the details here @mastergaurang94
could you please take a look at the corresponding docs PR?