Results: 1846 comments by Steven Allen

> The Session may also want to keep track of peers it deliberately removed from the Session (for example because they sent too many DONT_HAVEs) so it doesn't bother trying...

So, my point here is that including more than one CID doesn't actually help us in the end. See: > Insight: Sessions tend to be highly correlated (by design) so...

It's a performance trade-off, but I don't think this scenario is all that common. The basic assumption of sessions is that _most_ blocks being requested through the session are correlated...

I'm fine with that, but I'd benchmark it. At the moment, it'll probably be faster as you'll be able to compare CIDs without allocating (while we currently end up allocating...

Let's spend some time digging through profiles together tomorrow. We'll likely need to refactor this, but for now, I think we're just falling behind. The machine that's having trouble has...

Are your units correct?

* 1 Mbps = 1 mega-_bit_ per second.
* 200 kB/s = 200 kilo-_bytes_ per second.

1 Mbps = 1/8 MB/s = 125 kB/s

---

On the...

> not sure I know exactly how UnixFS will chunk up a large file but maybe there is a limitation in terms of the round tripping for want lists? (especially...

You're probably right. We probably do store duplicate CIDs at points; however, most of these are likely ephemeral. All of our *caches* tend to work with blocks from disk so...

(but it shouldn't be difficult to investigate this)

For context: this isn't _necessarily_ malicious as the peer could have lost/removed the content. However, if that's the case, we probably don't want to ask them for anything in the...