bittensor
[feature] [BIT 578] speed up metagraph storage query
This PR speeds up the storage call made by subtensor.neurons via the new subtensor.use_neurons_fast function.
This feature works by bundling a nodejs binary with the polkadotjs API.
This binary is a CLI that implements the sync_and_save --filename <default:~/.bittensor/metagraph.json> --block_hash <default:latest> command.
This syncs the metagraph at the given block hash and saves it to a JSON file.
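A minimal sketch of how the Python side might shell out to the bundled binary and read back the saved metagraph. The binary name, the flag names, and the JSON shape are assumptions based on the description above, not the actual implementation:

```python
import json
import subprocess
from pathlib import Path

# Default save location, per the CLI description above.
DEFAULT_FILENAME = Path.home() / ".bittensor" / "metagraph.json"


def build_sync_command(binary: str, filename: Path = DEFAULT_FILENAME,
                       block_hash: str = "latest") -> list:
    """Build the CLI invocation for the bundled node binary.

    Flag names follow the sync_and_save description above;
    treat them as assumptions.
    """
    return [binary, "sync_and_save",
            "--filename", str(filename),
            "--block_hash", block_hash]


def sync_and_load(binary: str, filename: Path = DEFAULT_FILENAME,
                  block_hash: str = "latest") -> dict:
    """Run the binary to sync the metagraph, then load the saved JSON."""
    subprocess.run(build_sync_command(binary, filename, block_hash), check=True)
    with open(filename) as f:
        return json.load(f)
```

The Python process only waits on the subprocess and parses the resulting file, so the heavy storage query stays inside the node binary.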
The speed-up is quite significant; below is a test run comparing the manual sync without the fix, with the IPFS cache, and with the fix.

And below is the IPFS cache sync versus the manual sync (with the fix).

A pro of this is that it removes the need for a centralized IPFS cache of the metagraph.
A downside of this fix is that the binaries with nodejs bundled use ~50MB each (one Linux, one macOS).
There is currently no binary for Windows, but I'm not certain one should be included anyway, as we only support Linux/macOS.
Another pro of this fix is that it works on both nobunaga and nakamoto, and can be adapted to any network. It also leaves room for adding other large substrate queries and working further with the Polkadot.js API.
Very nice results @camfairchild !
Where do the binaries come from? It would be nice to have information about that and visibility into the code that generates those binaries. What do you think?
The binaries are from https://github.com/opentensor/subtensor-node-api Not sure where to write docs about this though
If we are getting the binaries from there, I would suggest documenting it somewhere so we can wire the components together and share the knowledge.
@camfairchild very nice job!!
Good idea. I'll add it in the setup.py file, though perhaps we should maintain a docs website in the future.
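A sketch of what that note in setup.py might look like. The package name and the bin/ directory are assumptions for illustration, not the repo's actual layout:

```python
# setup.py (excerpt)
#
# The bundled node binaries (linux/macos) are prebuilt from
# https://github.com/opentensor/subtensor-node-api
# and shipped alongside the Python package.
from setuptools import setup

setup(
    name="bittensor",
    # Directory name is an assumption; include the prebuilt
    # binaries in the installed package.
    package_data={"bittensor": ["bin/*"]},
    include_package_data=True,
)
```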
Perhaps I misunderstood your graphs, but it looks like the IPFS cache is still a little faster despite being centralized, is that right?
Also, you have conflicts ;)
Nope, you're right. The IPFS is faster, but only when it's up ;)
ouch. 🗡️
@camfairchild why is this still marked do not merge? any blockers?
Going to refactor this into an external PyPI package so the binary can be distributed more easily.