Respect Priorities
We currently try to respect priorities; however, when we have active tasks, we try to complete 8 tasks in parallel, regardless of priority. Unfortunately, this means we can try to send up to 512KiB * 8 = 4MiB in parallel, in no particular order, even if the user really wants the first block right now.
We should probably try to avoid this.
@Stebalien I was thinking that one way to resolve this would be to order the wantlist by priority in `MessageReceived`. Thoughts? The ordering would occur prior to coalescing blocks into fixed-size tasks (see the code change below).
If this change sounds reasonable, I'd be happy to submit a pull request and/or share more details and discuss further.
```go
// MessageReceived performs book-keeping. Returns error if passed invalid
// arguments.
func (e *Engine) MessageReceived(p peer.ID, m bsmsg.BitSwapMessage) error {
	....
	var activeEntries []*wl.Entry
	for _, entry := range m.SortedWantlist() { // <--- Change is here
		....
	}
	if len(activeEntries) > 0 {
		e.peerRequestQueue.Push(p, activeEntries...)
	}
	for _, block := range m.Blocks() {
		log.Debugf("got block %s %d bytes", block, len(block.RawData()))
		l.ReceivedBytes(len(block.RawData()))
	}
	return nil
}
```
The peer request queue should already take care of this, mostly (it sorts by priority, IIRC). That said, given the new batching logic, we may need to pre-sort. I'm not sure, but it looks like that may be the case; patches welcome if it is.
However, the issue here is that when we process tasks, we process up to 8 at once. If all of those tasks belong to the same peer, we could try to send up to 4MiB to that peer all at once, which could take a while on some connections.