
tortoise: update opinion on the terminated layers within sliding window when received new data

Open dshulyak opened this issue 3 years ago • 7 comments

depends on https://github.com/spacemeshos/go-spacemesh/issues/3569

once a layer is verified we do not revisit the opinion on that layer unless a rerun is triggered. it was assumed that the syncer would trigger a rerun when it notices a good reason for one. the opinion may change in the following cases:

  • hare output changed within hdist
  • node downloaded atxs with additional ballots, so the previously counted ballots are no longer sufficient to cross the global threshold
  • ballots from higher layers voted differently from the previous layers and changed threshold for already terminated layer
  • counting delayed ballots changed opinion on already terminated layer

from the list above only the 2nd case can be tracked successfully by the syncer; in the other cases it is either impossible (4th) or inefficient (1st and 3rd) to trigger a full rerun.

implementation plan

revisit opinions on terminated layers if the changes are within the sliding window

epoch weight

it is already computed in the current implementation, but there are some changes:

// note: float64 is shown for clarity, it is not used in the actual code
func expectedWeight(epochs map[uint32]uint64, target, last layer) float64 {
  if target.epoch() == last.epoch() {
    weight := epochs[target.epoch()]
    return weight * (last.ordinal() - target.ordinal()) / target.epoch().length()
  }
  // remainder of the target epoch after the target layer
  weight := epochs[target.epoch()] * (target.epoch().length() - target.ordinal()) / target.epoch().length()
  // full weight of every epoch strictly between target and last
  for epoch := target.epoch() + 1; epoch < last.epoch(); epoch++ {
    weight += epochs[epoch]
  }
  // fraction of the last epoch up to the last layer
  weight += epochs[last.epoch()] * last.ordinal() / last.epoch().length()
  return weight
}
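
To make the computation above concrete, here is a self-contained sketch with layers as plain numbers and a fixed hypothetical epoch length; the names and the float64 weights are illustrative, not the real types:

```go
package main

import "fmt"

// hypothetical sketch: layers are plain numbers, every epoch has a fixed
// length, and epoch weights are kept in a map keyed by epoch number.
const epochLength = 10

func epochOf(layer uint32) uint32   { return layer / epochLength }
func ordinalOf(layer uint32) uint32 { return layer % epochLength }

// expectedWeight returns the weight expected to vote on the target layer,
// accumulated from the target layer up to the last known layer.
func expectedWeight(epochs map[uint32]float64, target, last uint32) float64 {
	if epochOf(target) == epochOf(last) {
		// both layers in the same epoch: take the covered fraction of that epoch
		return epochs[epochOf(target)] * float64(ordinalOf(last)-ordinalOf(target)) / epochLength
	}
	// remainder of the target epoch after the target layer
	weight := epochs[epochOf(target)] * float64(epochLength-ordinalOf(target)) / epochLength
	// full weight of every epoch strictly between target and last
	for epoch := epochOf(target) + 1; epoch < epochOf(last); epoch++ {
		weight += epochs[epoch]
	}
	// fraction of the last epoch up to the last layer
	weight += epochs[epochOf(last)] * float64(ordinalOf(last)) / epochLength
	return weight
}

func main() {
	epochs := map[uint32]float64{0: 1000, 1: 2000, 2: 4000}
	// same epoch: layers 2..6 cover 4/10 of epoch 0
	fmt.Println(expectedWeight(epochs, 2, 6)) // 400
	// across epochs: 5/10 of epoch 0 + all of epoch 1 + 3/10 of epoch 2
	fmt.Println(expectedWeight(epochs, 5, 23)) // 3700
}
```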

OnAtx handler

add an OnAtx handler to the tortoise. every time the tortoise receives a new atx, update the expected weight by adding the full atx weight or a fraction of it, using the same computation as above. OnAtx is expected to be called before the OnBallot that references such an atx.
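
A minimal sketch of such a handler, assuming a fixed epoch length and illustrative types; the fractional case covers atxs from the epoch in which the sliding window starts:

```go
package main

import "fmt"

// hypothetical sketch of the OnAtx handler described above: each new atx adds
// its full weight to the epoch total when the whole epoch is inside the
// window, or a fraction when the window starts mid-epoch. names are
// illustrative, not the real API.
const epochLength = 10

type atx struct {
	id     string
	epoch  uint32
	weight float64
}

type tortoise struct {
	windowStart uint32             // first layer of the sliding window
	atxs        map[string]float64 // weight of individual atxs in the window
	epochs      map[uint32]float64 // accumulated expected weight per epoch
}

// OnAtx must be called before any OnBallot that references the atx.
func (t *tortoise) OnAtx(a atx) {
	if _, exists := t.atxs[a.id]; exists {
		return // deduplicate
	}
	weight := a.weight
	first := a.epoch * epochLength
	if t.windowStart > first {
		// window starts mid-epoch: count only the fraction after windowStart
		weight = a.weight * float64(first+epochLength-t.windowStart) / epochLength
	}
	t.atxs[a.id] = a.weight
	t.epochs[a.epoch] += weight
}

func main() {
	tr := &tortoise{windowStart: 13, atxs: map[string]float64{}, epochs: map[uint32]float64{}}
	tr.OnAtx(atx{id: "a1", epoch: 1, weight: 1000}) // epoch 1 spans layers 10..19, 7/10 counted
	tr.OnAtx(atx{id: "a2", epoch: 2, weight: 1000}) // fully inside the window
	fmt.Println(tr.epochs[1], tr.epochs[2]) // 700 1000
}
```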

related tortoise state

type state struct {
  // weight of individual atxs in the sliding window
  atxs map[id]uint64
  // referenceWeight is computed by dividing the atx weight by the eligibility count when the reference ballot is received.
  // to get a ballot weight, the reference weight is multiplied by the number of eligibilities attached to the ballot
  referenceWeight map[id]float64
  // weight of all epochs (only atxs that were used within the sliding window)
  epochs map[uint32]uint64
  // if the missed weight is larger than 1/3 of the epoch weight - trigger a state reload (rerun from scratch).
  // this is specifically for atxs that weren't counted within the sliding window.
  // we rely on the p2p layer and the syncer to deduplicate this,
  // but even if they don't - it just triggers state reloading prematurely, which is not a huge problem
  missed map[uint32]uint64
}
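
The referenceWeight comment above can be illustrated with a small sketch; types and names are hypothetical:

```go
package main

import "fmt"

// hypothetical sketch of the referenceWeight computation from the state above:
// when a reference ballot arrives, its atx weight is divided by the eligibility
// count; later ballots get weight = referenceWeight * their eligibility count.
type state struct {
	atxs            map[string]uint64  // atx weight in the sliding window
	referenceWeight map[string]float64 // per-eligibility weight, keyed by reference ballot id
}

func (s *state) onReferenceBallot(ballotID, atxID string, eligibilities uint32) {
	s.referenceWeight[ballotID] = float64(s.atxs[atxID]) / float64(eligibilities)
}

func (s *state) ballotWeight(refBallotID string, eligibilities uint32) float64 {
	return s.referenceWeight[refBallotID] * float64(eligibilities)
}

func main() {
	s := &state{
		atxs:            map[string]uint64{"atx1": 1000},
		referenceWeight: map[string]float64{},
	}
	// reference ballot declares 50 eligibilities for its atx
	s.onReferenceBallot("ref1", "atx1", 50)
	// a later ballot with 2 eligibilities carries 2/50 of the atx weight
	fmt.Println(s.ballotWeight("ref1", 2)) // 40
}
```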

changes in the counting weight

count weight for layers past the verified layer within the whole sliding window. notify the mesh to perform a revert if the opinion changed.

the full tortoise doesn't need any changes, but remember to profile the implementation: how long does it take to count votes from a single ballot across 2,000 to 10,000 previous layers?

the verifying tortoise needs to be refactored to avoid changing the total good weight once a layer is verified.

type verifying struct {
  // total weight of all good ballots
  total float64
  // uncounted good weight specifies how much weight from total should not be counted
  // for the keyed layer
  // intuitively 'total' includes weight from all layers between [0, 100]
  // but from that weight all layers below 51 can't vote for layer 50
  // so uncountedGoodWeight for layer 50 is a sum of all good ballots in layers [0, 50]
  uncountedGoodWeight map[uint32]float64
}
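
A sketch of how the struct above could be queried, under the assumption that the weight able to vote on a layer is the total minus the uncounted weight for that layer; names are illustrative:

```go
package main

import "fmt"

// hypothetical sketch of querying the verifying struct above: the weight that
// can actually vote on a layer is the total good weight minus the good weight
// from layers at or below it.
type verifying struct {
	total               float64            // total weight of all good ballots
	uncountedGoodWeight map[uint32]float64 // good weight that cannot vote on the keyed layer
}

func (v *verifying) countable(layer uint32) float64 {
	return v.total - v.uncountedGoodWeight[layer]
}

func main() {
	v := &verifying{
		total: 1000,
		// good ballots in layers [0, 50] weigh 300, so they can't vote on layer 50
		uncountedGoodWeight: map[uint32]float64{50: 300},
	}
	fmt.Println(v.countable(50)) // 700
}
```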

dshulyak · Sep 20 '22 10:09

@countvonzero i think the existing interface for rerun was based on a misinformed design. we definitely can't have just that interface. and most likely that interface is not even needed, because we can't recover from a partition that lasted longer than the sliding window anyway

dshulyak · Sep 20 '22 12:09

@countvonzero i think the existing interface for rerun was based on a misinformed design. we definitely can't have just that interface. and most likely that interface is not even needed, because we can't recover from a partition that lasted longer than the sliding window anyway

i am not sure which interface you are referring to. do you mean this one?

func rerun(ctx context.Context) error

in my prototype, i changed this to

func rerun(ctx context.Context, from types.LayerID) error

my plan is:

for every ballot, we do the following, in order

  • OnAtx(ballot.atxid, ignore): tortoise keeps track of only those ignored ATXIDs
  • OnBallot(ballot): weight is set to 0 if its atx is ignored

and there are two scenarios that will trigger rerun()

  • when the syncer sees that its hash differs at layer N from a heavier-weighted peer's, and determines it should rerun from layer N
  • when the tortoise accumulated enough ignored ATX weight in memory, and determines it should rerun from the beginning of that epoch
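
The second trigger could be sketched as follows, reusing the 1/3-of-epoch-weight heuristic from the state struct above; names and the threshold are illustrative:

```go
package main

import "fmt"

// hypothetical sketch of the second rerun trigger described above: the
// tortoise tracks the weight of ignored atxs per epoch and requests a rerun
// once it crosses a fraction (here 1/3) of the counted epoch weight.
type tracker struct {
	epochs  map[uint32]uint64 // counted weight per epoch
	ignored map[uint32]uint64 // weight of atxs that were ignored
}

func (t *tracker) onAtx(epoch uint32, weight uint64, ignore bool) {
	if ignore {
		t.ignored[epoch] += weight
	} else {
		t.epochs[epoch] += weight
	}
}

// needsRerun reports whether ignored weight exceeds 1/3 of the counted weight.
func (t *tracker) needsRerun(epoch uint32) bool {
	return t.ignored[epoch]*3 > t.epochs[epoch]
}

func main() {
	tr := &tracker{epochs: map[uint32]uint64{}, ignored: map[uint32]uint64{}}
	tr.onAtx(1, 900, false)
	tr.onAtx(1, 200, true)
	fmt.Println(tr.needsRerun(1)) // false: 200*3 <= 900
	tr.onAtx(1, 200, true)
	fmt.Println(tr.needsRerun(1)) // true: 400*3 > 900
}
```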

countvonzero · Sep 20 '22 12:09

yes basically about rerun

func rerun(ctx context.Context, from types.LayerID) error

i don't understand how this interface can be supported with the current implementation. rerun can start either from genesis, or within the sliding window. and this issue describes what we need to change for it to work within the sliding window.

and there are two scenarios that will trigger rerun()

those two are not sufficient. as i mentioned, the opinion may change for other reasons

dshulyak · Sep 20 '22 14:09

when the syncer sees that its hash differs at layer N from a heavier-weighted peer's, and determines it should rerun from layer N

how do you know the weight? was it just communicated over rpc? i don't think we can use this; anybody can trigger a rerun by communicating a high weight and a different hash

dshulyak · Sep 20 '22 14:09

rerun can start either from genesis, or within the sliding window

if rerun can start from genesis (which i assume is outside the sliding window), why can't it start from a random layer in this range (genesis, windowStart)?

those two are not sufficient. as i mentioned, the opinion may change for other reasons

in the scenarios you listed

  • node downloaded atxs with additional ballots, so the previously counted ballots are no longer sufficient to cross the global threshold

i think this is consistent with the "tortoise accumulated enough ignored ATX weight" trigger.

  • ballots from higher layers voted differently from the previous layers and changed threshold for already terminated layer
  • counting delayed ballots changed opinion on already terminated layer

i don't understand how these could happen if the known ATX weight remains constant. i thought the security assumption is, once a block crosses the global threshold, it's final until more ATX weight is discovered.

when the syncer sees that its hash differs at layer N from a heavier-weighted peer's, and determines it should rerun from layer N

how do you know the weight? was it just communicated over rpc? i don't think we can use this; anybody can trigger a rerun by communicating a high weight and a different hash

so i'm adding a fetch API (direct P2P request) to support hash resolution. when the syncer finds that it has a different accumulated hash from a peer X, it requests the list of ATXs from X. the node needs to determine how much new ATX weight X introduced and whether that weight is enough for it to change opinions. whatever weight X claims it has, it needs to back it up with the actual ATXs.
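
A sketch of the weight check described above: claimed weight counts only if it is backed by actual atxs the node has not seen before. Names are hypothetical:

```go
package main

import "fmt"

// hypothetical sketch of the hash-resolution check described above: whatever
// weight the peer claims must be backed by actual atxs, so the node sums the
// weight of fetched atxs it hasn't seen before and only then decides whether
// that new weight justifies changing opinions.
type fetchedAtx struct {
	id     string
	weight uint64
}

// newWeight returns the total weight of fetched atxs unknown to the node.
func newWeight(known map[string]uint64, fetched []fetchedAtx) uint64 {
	var total uint64
	for _, a := range fetched {
		if _, ok := known[a.id]; !ok {
			total += a.weight
		}
	}
	return total
}

func main() {
	known := map[string]uint64{"a1": 100}
	fetched := []fetchedAtx{{"a1", 100}, {"a2", 250}, {"a3", 50}}
	fmt.Println(newWeight(known, fetched)) // 300
}
```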

note that the syncer currently downloads ATXs in two ways

  • during catch-up sync, it downloads ATXs before everything else (ballots/blocks) up to the current epoch.
  • at the first layer of every epoch, it downloads ATXs from the last epoch.

countvonzero · Sep 21 '22 02:09

if rerun can start from genesis (which i assume is outside the sliding window), why can't it start from a random layer in this range (genesis, windowStart)?

tortoise votes encoding is recursive. there might be a way to bootstrap the tortoise from an arbitrary layer, but there are many assumptions that need to be handled. currently such an interface is not supported

i don't understand how these could happen if the known ATX weight remains constant. i thought the security assumption is, once a block crosses the global threshold, it's final until more ATX weight is discovered.

there is no such assumption. there is an assumption that if the global threshold is crossed and adversarial weight is cancelled, the node still crosses the local threshold. but otherwise crossing the global threshold after counting votes from one layer doesn't guarantee any finality

every ballot votes on all previous layers, therefore the opinion may change because of voting even after the global threshold was crossed. moreover, the global threshold's total value depends on the number of layers that can vote on the layer that is supposed to be verified
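
A toy illustration of that point, assuming the global threshold is some fraction of the expected weight; numbers and names are made up:

```go
package main

import "fmt"

// hypothetical illustration of the point above: the global threshold is a
// fraction of the weight expected to vote on the layer, and that expected
// weight grows as more layers (and their ballots) arrive, so the threshold
// is a moving target.
func globalThreshold(expectedWeight, fraction float64) float64 {
	return expectedWeight * fraction
}

func main() {
	support := 700.0
	// with few layers past the target the expected weight is small,
	// so support crosses the threshold...
	fmt.Println(support > globalThreshold(1000, 0.6)) // true
	// ...but as more layers can vote, the threshold itself grows, so the
	// earlier crossing guarantees nothing unless support keeps growing too
	fmt.Println(support > globalThreshold(5000, 0.6)) // false
}
```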

dshulyak · Sep 21 '22 03:09

tortoise votes encoding is recursive. there might be a way to bootstrap the tortoise from an arbitrary layer, but there are many assumptions that need to be handled. currently such an interface is not supported

thanks for the explanation. do you think it can be simplified and supported once mutable ballots are possible?

every ballot votes on all previous layers, therefore the opinion may change because of voting even after the global threshold was crossed. moreover, the global threshold's total value depends on the number of layers that can vote on the layer that is supposed to be verified

right. my mental model keeps forgetting that the global threshold is a moving target as layers grow.

countvonzero · Sep 21 '22 04:09