Tool to migrate IRI DB to Hornet DB
Description
We want a tool that allows us to take a Mainnet DB of IRI 1.8.5 or 1.8.6 and migrate it to the Hornet DB format.
Motivation
Allow partners and anyone who wants to retain DB history to migrate from IRI to Hornet.
Requirements
- A separate tool that I can feed an IRI 1.8.5 or 1.8.6 mainnet DB
- The tool spits out a DB that retains all data and can be used with Hornet 0.4.0
I suggest the tool reads in a RocksDB database and then acts like a peer to a Hornet node, because otherwise you have to start fiddling around with how the data structures look in Hornet when creating this tool. As a metric: synchronizing from Hornet to Hornet from the last global snapshot (summer 2019) takes about 2-3 days, so the tool should have about the same speed.
We can accomplish this by syncing iri to hornet. But because of the way iri currently handles transaction requests, the sync could take a long time. @luca-moser said a similar sync from iri to hornet took about 2-3 weeks. The same sync from hornet to hornet took about 3 days.
Implementing milestone request and transaction request in iri will speed up the process and give us a hornet-hornet kind of sync duration.
We have already drafted issues to implement STING: #1791.
For the purpose of this migration, we will need most of it except for heartbeat:
- #1793
- #1794
- #1795
- #1799
Then we can release this extension of iri to be used for the migration of the db to hornet. @jakubcech @galrogo
@luca-moser If we write a tool that uses hornet code itself as a dependency, won't we be able to abstract away the internal data structures? Or even which db hornet uses (badger/bolt)?
I just think writing a tool will always be faster than using the gossip layer. Those 3 days may turn into hours. Also, if for some reason people want data that wasn't approved by a milestone (I have no idea why), then gossip won't do for them.
Having said that, if we still go the STING way then we might as well have heartbeat, just because the code was already written in #1825.
@jakubcech should we care about unconfirmed txs?
We should care about unconfirmed transactions, yes.
I see I incorrectly wrote the opposite in a DM.
@luca-moser given that we need unconfirmed txs, syncing will not work. How feasible is it to map iri models to hornet?
Hornet has models that iri does not have and vice versa.
With the constraint that unconfirmed txs are required, here are the options that we have left.
- Use IRI broadcastTransactions to send txs to the hornet node. Transactions that are older than the local snapshot the hornet node is running from will not be accepted by the node. If the hornet node is started from a global snapshot, then all txs will pass. With this approach, we will migrate all transactions and let hornet worry about storing them. It would have been best to broadcast txs in their insertion order so that hornet has the txs it requires to solidify quickly, but this is not possible: we can't broadcast txs in any given order. (A rough sketch of this approach follows below.)
- A tool to read bytes from rocksdb and push them to hornet. It would depend on iota.go for its trinary package and call hornet DB operations directly. This involves messing around with hornet models to map iri data to hornet data structures. It's fast but complicated. Here is a draft of the mapping for some models: https://hackmd.io/@djOmrag0QyiRMeuLnDw7mA/HJP9JF-s8 and @luca-moser could confirm whether this would cover the db requirements. (A rough sketch of reading the rocksdb follows right after this list.)
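For option 2, here is a minimal sketch of iterating over the IRI RocksDB, assuming the tecbot/gorocksdb bindings. The column family names are my assumption based on IRI's RocksDBPersistenceProvider and should be verified against the IRI source; the actual decoding of the stored bytes into hornet models (via iota.go's trinary package, per the hackmd mapping draft) is deliberately left out.

```go
package main

import (
	"fmt"
	"log"

	"github.com/tecbot/gorocksdb"
)

// Column family names as (assumed to be) used by IRI's RocksDBPersistenceProvider.
var cfNames = []string{
	"default", "transaction", "transaction-metadata", "milestone",
	"stateDiff", "address", "approvee", "bundle", "obsoleteTag", "tag",
}

func main() {
	opts := gorocksdb.NewDefaultOptions()
	cfOpts := make([]*gorocksdb.Options, len(cfNames))
	for i := range cfOpts {
		cfOpts[i] = gorocksdb.NewDefaultOptions()
	}

	// Open the IRI database read-only so the original DB stays untouched.
	db, cfHandles, err := gorocksdb.OpenDbForReadOnlyColumnFamilies(
		opts, "/path/to/iri/mainnetdb", cfNames, cfOpts, false)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ro := gorocksdb.NewDefaultReadOptions()
	defer ro.Destroy()

	// Iterate over the "transaction" column family (index 1 in cfNames).
	it := db.NewIteratorCF(ro, cfHandles[1])
	defer it.Close()

	count := 0
	for it.SeekToFirst(); it.Valid(); it.Next() {
		key := it.Key()
		value := it.Value()

		// key.Data() is the tx hash in IRI's byte encoding, value.Data()
		// the packed transaction bytes. Converting these into trytes /
		// hornet models is the part the mapping draft would define; it is
		// omitted here.
		_ = key.Data()
		_ = value.Data()
		count++

		key.Free()
		value.Free()
	}
	fmt.Println("transactions read:", count)
}
```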
IMO, the easiest thing to do is option 1.
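To make option 1 a bit more concrete, here is a minimal sketch of pushing trytes to the hornet node via the standard broadcastTransactions API command. The node URL is a placeholder, and batching, retries, and the extraction of trytes from the IRI db are left out.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// broadcastRequest mirrors the JSON body of the legacy IRI/hornet
// "broadcastTransactions" API command.
type broadcastRequest struct {
	Command string   `json:"command"`
	Trytes  []string `json:"trytes"`
}

// broadcastBatch sends a batch of raw transaction trytes to the node.
func broadcastBatch(nodeURL string, trytes []string) error {
	body, err := json.Marshal(broadcastRequest{
		Command: "broadcastTransactions",
		Trytes:  trytes,
	})
	if err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodPost, nodeURL, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-IOTA-API-Version", "1")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("broadcastTransactions failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Placeholder usage: in the real tool the trytes would come from the IRI db.
	if err := broadcastBatch("http://localhost:14265", []string{ /* trytes */ }); err != nil {
		fmt.Println("broadcast failed:", err)
	}
}
```

Since we can't control the order in which txs come out of the IRI db, hornet's own request/solidification logic would have to fill in the gaps, which matches the insertion-order caveat above.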
First version of this tool here: https://github.com/acha-bill/iri-db-migration
I've tested this tool on iri dbs that are both older and newer than the global snapshot and it works fine. All txs were transferred to the hornet db and I could find them using the hornet explorer.
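Besides the explorer, a quick way to spot-check a migration is to ask the hornet node for a known hash via the legacy getTrytes command, which hornet also supports. A hedged sketch; the node URL and hash are placeholders:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// getTrytesRequest/Response mirror the legacy "getTrytes" API command.
type getTrytesRequest struct {
	Command string   `json:"command"`
	Hashes  []string `json:"hashes"`
}

type getTrytesResponse struct {
	Trytes []string `json:"trytes"`
}

func main() {
	nodeURL := "http://localhost:14265"
	// Placeholder: replace with a real 81-tryte tx hash from the migrated IRI db.
	hash := strings.Repeat("9", 81)

	body, _ := json.Marshal(getTrytesRequest{Command: "getTrytes", Hashes: []string{hash}})
	req, _ := http.NewRequest(http.MethodPost, nodeURL, bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-IOTA-API-Version", "1")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out getTrytesResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	// A non-empty, non-all-9s trytes string means the tx made it into the db.
	fmt.Println("trytes returned:", len(out.Trytes))
}
```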