Forest 🌲 testing plan and pipeline
Issue summary
Discuss and establish a fully automated testing plan for Forest 🌲 in preparation for beta release.
Refer to the discussion of the Forest Roadmap.
Task summary
to be determined
Acceptance Criteria
to be determined
@LesnyRumcajs @lemmih any thoughts about this? We can have a discussion here.
The goal is to get over 90% coverage. Where are we currently? My memory says around 60% from stand-up discussions.
IMO there are a couple of steps that have to be done to get to a good 90%. Not only would they bring us closer to this goal, but they would also solve other pain points.
1. Go through all Forest flags and subcommands, and understand and document them. This is crucial; without it, we can, at most, test that the process doesn't crash. That'd be a worthless 90%.
2. Perhaps we can remove some of the features that we no longer consider useful. That's one way of increasing the coverage. :D
3. Based on our understanding, we know what to expect from each flag and subcommand. Write it down in the test plan. At this stage, everything in the plan is manual.
4. Go through existing tests. Is a feature properly tested? If yes, mark it as automated.
5. If not, decide what to do with it. Can it be easily tested with a unit test? If not, we may need a component or integration test. For this, we may require additional tooling or a framework.
6. Prioritise by usage frequency, then by potential coverage increase.
7. Start picking them off one by one.
Here, the most crucial action point is 1. I am sure that just by trying to understand the features, we will uncover quite a lot of fiascoes. Plus, we will finally have some comprehensive documentation.
Most of the steps can be easily parallelised; the only bottleneck may come from the lack of a framework for testing a running node (i.e., spawning a local testnet or connecting to calibnet).
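For the node-testing bottleneck, even a subprocess-driven harness might be enough to start with. Here is a minimal sketch, assuming the `forest` and `forest-cli` binaries are built and on PATH; the `--chain` flag spelling, the fixed sleep, and the probe command are assumptions, not a finished framework:

```rust
use std::process::{Child, Command};
use std::{thread, time::Duration};

// Spawn a Forest daemon against calibnet (assumed flag; check `forest --help`).
fn spawn_calibnet_node() -> std::io::Result<Child> {
    Command::new("forest").args(["--chain", "calibnet"]).spawn()
}

fn main() -> std::io::Result<()> {
    let mut node = spawn_calibnet_node()?;
    // Crude: give the daemon time to bring up its RPC server before probing it.
    thread::sleep(Duration::from_secs(30));

    // Probe the running node through the CLI instead of calling RPC directly.
    let status = Command::new("forest-cli").args(["sync", "status"]).status()?;
    assert!(status.success(), "node did not answer `sync status`");

    node.kill()
}
```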
What do you guys think?
A lot of the subcommands are not intended for human consumption and are nearly impossible to use manually in a sensible manner. They often just expose part of the Filecoin blockchain to the miner.
So commands like `forest-cli send` make sense on a human level, and we can fairly easily test whether they work. But the RPC endpoint `STATE_MINER_FAULTS` does not make any sense for a human, and we're gonna have a difficult time testing it without running a miner.
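For illustration, the human-level commands can be smoke-tested with nothing more than a subprocess call. A sketch, assuming `forest-cli` is on PATH; the destination address is a placeholder and the argument order is an assumption (check `forest-cli send --help`):

```rust
use std::process::Command;

#[test]
fn send_prints_a_message_cid() {
    let output = Command::new("forest-cli")
        .args(["send", "<destination-address>", "0.001"])
        .output()
        .expect("failed to run forest-cli");

    assert!(output.status.success(), "send exited with an error");
    // A successful send should print the CID of the pushed message.
    assert!(!String::from_utf8_lossy(&output.stdout).trim().is_empty());
}
```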
I think we should consider removing the RPC endpoints that we cannot test without a miner. That would leave us with a set of endpoints (mostly related to wallet handling and chain status) that we can properly document and test.
Then, at some point in the future, we could set up a miner for calibnet and re-introduce the RPC endpoints required for mining. The minimum requirements for running a miner are quite steep (500GiB of memory, at least 1 fast GPU, 512-1024GiB of disk space), though.
Alternatively, we could test for compliance using Lotus as the gold standard (send the same commands to both Forest and Lotus, and check whether the responses are identical). Still, without running a miner, we're gonna have a difficult time testing the endpoints rigorously.
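A rough sketch of what that comparison could look like, assuming `reqwest` (with the `blocking` and `json` features) and `serde_json` as dependencies; the ports are common defaults and may differ per setup:

```rust
use serde_json::{json, Value};

// Send one JSON-RPC request and return the parsed response body.
fn rpc(endpoint: &str, method: &str, params: Value) -> reqwest::Result<Value> {
    reqwest::blocking::Client::new()
        .post(endpoint)
        .json(&json!({ "jsonrpc": "2.0", "id": 1, "method": method, "params": params }))
        .send()?
        .json()
}

fn main() -> reqwest::Result<()> {
    // `Filecoin.ChainHead` is exposed by both implementations.
    let forest = rpc("http://127.0.0.1:2345/rpc/v0", "Filecoin.ChainHead", json!([]))?;
    let lotus = rpc("http://127.0.0.1:1234/rpc/v0", "Filecoin.ChainHead", json!([]))?;

    // The two nodes won't be at exactly the same head, so a real harness would
    // compare response shapes/fields rather than require full equality.
    println!("forest: {forest}");
    println!("lotus:  {lotus}");
    Ok(())
}
```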
Rough RPC overview. An ❌ in the "Works?" column means the RPC endpoint is obviously broken; a ✅ means I was able to run it but not necessarily verify that the output is correct. The "Human usable" column tracks whether I think a human will ever use the endpoint directly.
| RPC | Works? | Human usable | Tested |
|---|---|---|---|
| auth api-info | ✅ | ✅ | ❌ |
| auth create-token | ✅ | ✅ | ❌ |
| chain block | ✅ | ❌ | ❌ |
| chain genesis | ✅ | ❌ | ❌ |
| chain head | ✅ | ❌ | ❌ |
| chain message | ❌ | ❌ | ❌ |
| chain read-obj | ✅ | ❌ | ❌ |
| chain tipset-hash | ✅ | ❌ | ❌ |
| chain validate-tipset-checkpoints | ✅ | ❌ | ✅ |
| config dump | ✅ | ✅ | ✅ |
| db clean | ✅ | ✅ | ❌ |
| db stats | ✅ | ✅ | ❌ |
| fetch-params | ✅ | - | ✅ |
| genesis add-miner | - | - | ❌ |
| genesis new-template | - | - | ❌ |
| mpool pending | ❌ | ❌ | ❌ |
| mpool stat | ❌ | ❌ | ❌ |
| net connect | - | - | ❌ |
| net disconnect | - | - | ❌ |
| net listen | - | - | ❌ |
| net peers | - | - | ❌ |
| send | ✅ | ✅ | ❌ |
| snapshot * | ✅ | ✅ | ✅ |
| state get-actor | ❌ | ❌ | ❌ |
| state list-miners | ❌ | ❌ | ❌ |
| state lookup | ❌ | ❌ | ❌ |
| state power | ❌ | ❌ | ❌ |
| state vesting-table | ❌ | ❌ | ❌ |
| sync check-bad | ✅ | ❌ | ❌ |
| sync mark-bad | - | ❌ | ❌ |
| sync status | ✅ | ✅ | ✅ |
| sync wait | ✅ | ✅ | ✅ |
| wallet balance | ✅ | ✅ | ❌ |
| wallet default | ✅ | ✅ | ❌ |
| wallet export | ✅ | ✅ | ❌ |
| wallet has | ✅ | ✅ | ❌ |
| wallet import | ✅ | ✅ | ❌ |
| wallet list | ✅ | ✅ | ❌ |
| wallet new | ✅ | ✅ | ❌ |
| wallet set-default | ✅ | ✅ | ❌ |
| wallet sign | ❌ | ✅ | ❌ |
| wallet verify | ❌ | ✅ | ❌ |
Given how far away we are from running a storage provider using Forest, I think we should disable all the RPC calls that are only usable for storage providers. We can then document and test the rest, focusing on the "storage & retrieval" milestone.
@LesnyRumcajs @lerajk thoughts?
@lemmih Thanks for the detailed table, it's really useful. What do you mean by disabling them? Deleting the commands and leaving dead code, deleting the code entirely, or just replacing all the immediate calls with a "disabled"/418 response?
All in all, I'm okay with focusing only on the part we want to deliver in the near future.
I would completely delete the code. It would be nice to have good error messages such as "Oh, Forest doesn't support storage providers yet. You can read more at this tracking issue {url}. For now, you'll have to use Lotus." But the Lotus miner isn't all that user-friendly, and I doubt it'll propagate our error messages.
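If we kept thin stubs instead of deleting everything, such a message could simply be returned as a JSON-RPC error. A sketch only; the error type and handler name below are hypothetical, not Forest's actual RPC internals:

```rust
// Hypothetical types, only to illustrate the error-message idea.
struct RpcError {
    code: i64,
    message: String,
}

fn state_miner_faults_stub() -> Result<(), RpcError> {
    Err(RpcError {
        // -32601 is the standard JSON-RPC "method not found" code.
        code: -32601,
        message: "Forest doesn't support storage providers yet. \
                  See the tracking issue for progress; for now, you'll have to use Lotus."
            .to_string(),
    })
}

fn main() {
    if let Err(e) = state_miner_faults_stub() {
        eprintln!("error {}: {}", e.code, e.message);
    }
}
```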
@lemmih Does it make sense to have it still open?
Closing. RPC testing will be done via `forest-tool api compare`.