`KeyManager` / `TransactionFactory` are leaky abstractions
KeyManager is overly complex. Part of that is because of how it tries to encrypt the master key: that's computationally expensive, so there's all sorts of compensation for it.
KeyManager tracks both the chain data and the label metadata, but that key metadata is a layer on top of key management. Private key management is a parallel concern that depends on additional user input. Hardened derivation also relies on metadata for address enumeration, but iirc it's not really viable with the current KeyManager approach. So by the time you build a tx, you should already be passing in the pubkeys to use, and you need the private key, potentially both before and after tx building, in order to add hardened-derivation outputs as well as to sign inputs.
I think that's why TransactionFactory (which isn't actually a factory) got to control KeyManager. Constructing the tx is building a message out of elements, not creating those elements, so these should be decoupled.
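As a rough sketch of that decoupling (hypothetical names and Python as neutral pseudocode, not the actual Wasabi types): the builder consumes already-derived pubkeys/scripts handed to it by the caller and never touches the key manager.

```python
# Hypothetical sketch, not the existing API: the builder only assembles
# a message out of elements it is handed; no key manager inside.
from dataclasses import dataclass

@dataclass(frozen=True)
class TxIn:
    outpoint: str          # "txid:vout" of the coin being spent
    script_pubkey: bytes   # needed later for signing, not for assembly

@dataclass(frozen=True)
class TxOut:
    script_pubkey: bytes   # derived from a pubkey the caller already chose
    value_sats: int

@dataclass(frozen=True)
class UnsignedTx:
    inputs: tuple[TxIn, ...]
    outputs: tuple[TxOut, ...]

def build_tx(inputs: list[TxIn], outputs: list[TxOut]) -> UnsignedTx:
    """Pure function: no derivation, no signing, no KeyManager access."""
    return UnsignedTx(tuple(inputs), tuple(outputs))
```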
I propose separating the interfaces into the following (sketched in code after the list):
- enumerate pubkeys for discovering chain data
- given chain data, track key usage state, which is monotone: derived but unused, revealed/shared (i.e. labeled), exists in a known tx, exists in a confirmed tx
- find an unused receive address
- find an unused change address
- find an unused self-spend address (like change, but for a different purpose)
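A minimal sketch of that split, assuming hypothetical interface names:

```python
from enum import IntEnum
from typing import Iterator, Protocol

class KeyUsageState(IntEnum):
    """Monotone: a key's state may only ever increase."""
    DERIVED_UNUSED = 0
    REVEALED = 1        # shared with someone, i.e. labeled
    IN_KNOWN_TX = 2
    IN_CONFIRMED_TX = 3

class PubKeyEnumerator(Protocol):
    def enumerate_pubkeys(self) -> Iterator[bytes]:
        """Yield pubkeys to scan the chain for."""
        ...

class KeyUsageTracker(Protocol):
    def observe(self, pubkey: bytes, state: KeyUsageState) -> None:
        """Record usage seen in chain data; must never move a key backwards."""
        ...

class AddressSource(Protocol):
    def unused_receive(self) -> bytes: ...
    def unused_change(self) -> bytes: ...
    def unused_self_spend(self) -> bytes: ...
```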
A cleaner abstraction may be a kind of two-phase commit approach:
- ask stateful key manager to prepare dependencies (commit-request: phase 1)
- construct a tx from those dependencies
- sign etc as needed (commit: phase 2)
A successfully constructed tx that has been saved, viewed, or broadcast commits the prepared data.
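A minimal sketch of that flow, with hypothetical names and a fake derivation stand-in:

```python
import itertools

class StatefulKeyManager:
    """Minimal stand-in for the stateful side of key management."""

    _counter = itertools.count()

    def __init__(self) -> None:
        self.reserved: set[bytes] = set()
        self.revealed: set[bytes] = set()

    def _derive_next(self) -> bytes:
        # stand-in for real key derivation
        return next(self._counter).to_bytes(4, "big")

    def prepare(self, n: int) -> "PreparedKeys":
        """Phase 1 (commit-request): reserve n fresh pubkeys."""
        pubkeys = [self._derive_next() for _ in range(n)]
        self.reserved.update(pubkeys)
        return PreparedKeys(self, pubkeys)

class PreparedKeys:
    """Result of phase 1; nothing is final until commit()."""

    def __init__(self, manager: StatefulKeyManager, pubkeys: list[bytes]):
        self._manager = manager
        self.pubkeys = pubkeys

    def commit(self) -> None:
        """Phase 2: the built tx was saved, viewed, or broadcast."""
        for pk in self.pubkeys:
            self._manager.reserved.discard(pk)
            self._manager.revealed.add(pk)

    def rollback(self) -> None:
        """Construction failed: the keys go back to the unused pool."""
        self._manager.reserved.difference_update(self.pubkeys)

# usage:
km = StatefulKeyManager()
prepared = km.prepare(2)   # phase 1: commit-request
# ... build the tx from prepared.pubkeys ...
prepared.commit()          # phase 2: after save/view/broadcast
```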
A data model that describes operations on the stateful aspect of key management in a transactional way is missing; right now KeyManager is like a SQL db in autocommit mode. That only makes sense for broadcast txs. For txs that are still being prepared/built, the implications of using the existing API are unclear.
The most obvious case is failed rounds: if the failed coinjoin is ambiguous and the client participates in blame rounds, it's actually better to reuse the prepared data, but only if the blame round still has most of the original inputs.
Decouple the state mutation operations from the procedural computation between states, which is complex.
It's useful to be able to consider something that might happen without knowing in advance whether it will. You can do that in the state representation itself, but then you have to encode this modality into the state data.
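To make the contrast concrete, a hypothetical sketch of the two options: encoding the "maybe" into the state itself versus scoping the speculation in a transaction-like object.

```python
from enum import Enum

# Option A: encode the modality into the state itself; every consumer
# of the state now has to understand "tentative".
class KeyState(Enum):
    UNUSED = "unused"
    TENTATIVELY_REVEALED = "tentatively revealed"  # the modality leaks in
    REVEALED = "revealed"

# Option B: keep the states simple and scope the speculation instead.
class SpeculativeUse:
    """Context manager: the 'maybe' lives in this object, not in KeyState."""

    def __init__(self, states: dict[bytes, KeyState], pubkeys: list[bytes]):
        self.states, self.pubkeys = states, pubkeys

    def __enter__(self) -> list[bytes]:
        return self.pubkeys

    def __exit__(self, exc_type, exc, tb) -> bool:
        if exc_type is None:  # the speculative thing actually happened
            for pk in self.pubkeys:
                self.states[pk] = KeyState.REVEALED
        return False  # on failure the state was never touched
```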
The svn vs. git data model is a nice example. Both are logical, but git ignores irrelevant history information when you only care about file data: history data and file data live on separate layers, and there's no "one true version". svn encodes some history data into the file layout and some of it into the linear ordering of versions. It took svn something like a decade before it could merge branches sanely, and yet git-svn exists and works great.
--
This Issue was originally authored by @nothingmuch on matrix chat ~ credit where credit is due