# Validate soundness with Model Based Testing
## Why
We are currently specifying the actual protocol we implemented (#448) and updating the security properties and pen-and-paper proofs, similar to the original research work.
To gain confidence that our implementation aligns with this non-mechanized formal specification, we want to exercise the model-based testing approach previously explored in #194.
Also, the previous work on this
- did not cover all properties,
- used a perfect-world model,
- and was not fully end-to-end.
## What
We think we would gain the most confidence by being sure "not to lose funds". In terms of security properties, the soundness and completeness properties cover this (from two different angles). Hence, within this work item we want to:
- implement & exercise the soundness property as defined in the HeadV1 spec (see #448 or drafts thereof)
- create a model with an active adversary as defined in the paper (pages 18/19, and later the HeadV1 spec); see the "experiment" description therein
- simulate the chain (similar to, but different from, the experiment description) by constructing transactions and validating them (at least phase 2 validation) - essentially running the validators
- as a slight extension to the `U_final` definition, we want to see it in a valid `fanout` transaction (a rough sketch of the intended shape follows this list)
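
To make the intended shape concrete, here is a minimal, purely illustrative QuickCheck sketch. All types and names in it (`TxIn`, `UTxO`, `Tx`, `confirm`, `fanoutOutputs`, `prop_soundness`) are toy stand-ins invented for this sketch, not the Hydra code base: in the real test, transactions would be cardano-api values, `fanoutOutputs` would construct an actual `fanout` transaction and run the head validators (phase-2 validation), and the comparison would be against `U_final` as defined in the spec.

```haskell
module SoundnessSketch where

import Data.List (foldl')
import Test.QuickCheck

-- Toy stand-ins for the real types; the actual test would use cardano-api
-- transactions and run the real head validators (phase-2 validation).
type TxIn = Int
type UTxO = [(TxIn, Integer)]

data Tx = Tx
  { txInputs  :: [TxIn]
  , txOutputs :: UTxO
  } deriving (Show)

instance Arbitrary Tx where
  arbitrary = Tx <$> arbitrary <*> arbitrary
  shrink (Tx is os) = [Tx is' os' | (is', os') <- shrink (is, os)]

-- A transaction is valid against a UTxO set if it only spends existing inputs.
isValid :: UTxO -> Tx -> Bool
isValid utxo tx = all (`elem` map fst utxo) (txInputs tx)

-- Apply one transaction: remove spent inputs, add produced outputs.
applyTx :: UTxO -> Tx -> UTxO
applyTx utxo tx =
  [o | o@(i, _) <- utxo, i `notElem` txInputs tx] ++ txOutputs tx

-- Off-chain view: honest parties confirm only valid transactions, yielding
-- U_final as the fold over the confirmed trace (invalid submissions are dropped).
confirm :: UTxO -> [Tx] -> UTxO
confirm = foldl' (\u tx -> if isValid u tx then applyTx u tx else u)

-- "On-chain" view: stands in for constructing a fanout transaction from the
-- closed head state and reading back its outputs. The real test would build
-- an actual fanout transaction and run the head validators on it.
fanoutOutputs :: UTxO -> UTxO
fanoutOutputs = id

-- Soundness, shape only: whatever trace is submitted, fanning out distributes
-- exactly the UTxO obtained by applying the confirmed transactions to U0.
prop_soundness :: UTxO -> [Tx] -> Property
prop_soundness u0 txs =
  fanoutOutputs (confirm u0 txs) === confirm u0 txs
```

The only point here is the overall structure: generate a trace of submitted transactions, compute `U_final` by folding the confirmed ones over `U0`, and check that a validated `fanout` transaction distributes exactly that UTxO set.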
## Open questions
- The experiment for the security definition in the paper starts with `U0` right away, which the active adversary may even pick.
  - How would that translate to our model with real transactions + validation?
- If we run the validators, would the adversary need to corrupt all parties to be in full control of `U0`?
  - Not really, it would be about the adversary picking UTxOs for still-uncorrupted parties.
  - Example: assuming there is an attack where an honest party may get cheated if they pick a manufactured / carefully crafted UTxO that breaks the protocol, e.g. an additional UTxO which the honest party would not care about, but which has a bad side effect.
- Are the init/commit phases of the protocol even missing from the security definition, and should we also add them to the analysis?
  - An example of a property dealing with these initial phases: a not-yet-open head can always be aborted.
- Can we even accommodate an active adversary in the model, i.e. allow injecting arbitrary `(i, xi)` as the experiment describes? (See the sketch below.)
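
To make the last question a bit more concrete, one purely illustrative way to accommodate an active adversary in such a model is to generate adversarial steps alongside honest ones, mirroring the `(i, xi)` injections of the experiment. This reuses the toy `Tx`/`UTxO` types (and the `Arbitrary Tx` instance) from the sketch above; `PartyId`, `Step` and the generators are hypothetical names for this sketch only:

```haskell
newtype PartyId = PartyId Int
  deriving (Show, Eq)

-- One step of a head "run": the adversary may pick U0 up front (only as the
-- very first step), or inject arbitrary (i, xi) on behalf of corrupted
-- parties, while honest parties only submit ledger-valid transactions.
data Step
  = AdversaryPickU0 UTxO
  | AdversaryInject PartyId Tx   -- no validity constraint at all
  | HonestNewTx PartyId Tx       -- constrained to ledger-valid transactions
  deriving (Show)

genParty :: Gen PartyId
genParty = PartyId <$> choose (1, 3)

-- Honest parties only produce transactions spending existing UTxO entries.
genValidTx :: UTxO -> Gen Tx
genValidTx utxo = Tx <$> sublistOf (map fst utxo) <*> arbitrary

-- Mix honest and adversarial steps; the weights are arbitrary for the sketch.
genStep :: UTxO -> Gen Step
genStep utxo =
  frequency
    [ (3, HonestNewTx <$> genParty <*> genValidTx utxo)
    , (1, AdversaryInject <$> genParty <*> arbitrary)
    ]
```

Whether such injections should then go through the real validators, and how corrupted vs. still-uncorrupted parties are distinguished, is exactly what the questions above need to settle.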