
Peer storage doesn't scale

Open morehouse opened this issue 2 years ago • 3 comments

If SCBs get larger than 64KB, the peer_storage message sent to peers gets truncated. This means that later when the SCB is sent back in the your_peer_storage message, it fails the MAC check.

Based on typical scb_chan serialized sizes, peer storage starts to break at ~500 channels. At maximum scb_chan serialized sizes, things break at ~200 channels.

One potential solution is to packetize large SCBs for transfer to/from peers.
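A minimal sketch of what packetizing could look like, assuming a hypothetical 4-byte framing header (sequence number plus total count); the message limit of 65535 bytes comes from the Lightning wire format, but the framing itself is an illustration, not a spec:

```python
# Sketch: split a large SCB blob into chunks that each fit in a single
# Lightning message. The header layout here is hypothetical.
MAX_MSG_PAYLOAD = 65535
HEADER_SIZE = 4                      # assumed: 2-byte seq + 2-byte total
CHUNK_SIZE = MAX_MSG_PAYLOAD - HEADER_SIZE

def packetize(scb: bytes) -> list[bytes]:
    """Split an SCB into framed peer_storage packets."""
    chunks = [scb[i:i + CHUNK_SIZE] for i in range(0, len(scb), CHUNK_SIZE)]
    total = len(chunks)
    return [
        seq.to_bytes(2, "big") + total.to_bytes(2, "big") + chunk
        for seq, chunk in enumerate(chunks)
    ]

def reassemble(packets: list[bytes]) -> bytes:
    """Rejoin packets (ordered by sequence number) into the SCB."""
    ordered = sorted(packets, key=lambda p: int.from_bytes(p[:2], "big"))
    return b"".join(p[HEADER_SIZE:] for p in ordered)
```

The MAC would then be computed over the reassembled blob, so a truncated or missing chunk fails verification just as today.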

morehouse avatar Sep 07 '23 19:09 morehouse

One potential solution is to packetize large SCBs for transfer to/from peers.

I need to look at the protocol specification again, but this looks like a good option.

The only comment I have is that we may also want to impose a limit on the number of packets used to split a message. Alternatively, we could create a bLIP that allows nodes to specify the maximum size the receiver will accept.

I think one of the current limitations of the peer_storage proposal is that it supports only small nodes, because if you have 200 channels you certainly want to use another backup method. However, I think allowing a bigger peer_storage is useful in case of disaster.

vincenzopalazzo avatar Oct 14 '23 11:10 vincenzopalazzo

I agree this scheme doesn't scale for big node operators, because ideally they shouldn't be relying on it anyway. However, I have an idea to scale it further; please give your feedback on it.

What if we introduce new message types that append to the existing backup? This way, nodes could distribute more than 64 KB of data, similar to the *_continue messages in the commando plugin.

I know this could lead to spamming, but we can pre-specify the amount of data we'll be sending to the peer (in any pre-existing message, if they support the peer_storage feature), so they can safely ignore us if we send a larger amount.

For example: if my peer provides a rate of 1 sat/kb, I could send 200kb of data to the peer using 'peer_storage' and 'peer_storage_continue' (~4 packets). After receiving the acknowledgment ('your_peer_storage' & 'your_peer_storage_continue'), I would then pay them 200 satoshis.
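The arithmetic in that example can be sketched as follows; the function names and the per-kB rounding are my assumptions, since the comment doesn't pin down how fractional kilobytes would be billed:

```python
# Sketch of the fee and packet-count arithmetic above (names hypothetical).
import math

MAX_MSG_BYTES = 65535  # single Lightning message payload limit

def storage_fee_sats(data_len_bytes: int, rate_sat_per_kb: int) -> int:
    """Fee owed after the backup is acknowledged, rounding up per kB."""
    return math.ceil(data_len_bytes / 1000) * rate_sat_per_kb

def num_packets(data_len_bytes: int) -> int:
    """Number of peer_storage / peer_storage_continue messages needed."""
    return math.ceil(data_len_bytes / MAX_MSG_BYTES)
```

So 200 kB at 1 sat/kB costs 200 sats and needs 4 messages, matching the example.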

adi2011 avatar Nov 26 '23 11:11 adi2011

Well, your problem is actually the solution: having a lot of peers. Just don't deliver the same emergency recovery information to each of them.

Pick a partitioning of your data, then use error-correcting codes to encode the N backups into M packets, where N is the number of channels and M is the number of peers. Then give each peer its packet. The redundancy isn't quite as good as giving each peer a full copy, but combining enough shares will reconstruct the full backup. Each packet can even include hints about where to get the next segment of data.

cdecker avatar Mar 09 '24 10:03 cdecker