
Performance stuff

Open N-Wouda opened this issue 1 year ago • 10 comments

Discussion thread for ideas to improve PyVRP's performance. Could be low-level stuff, like better code optimisation, smaller data types, and/or caching. Or a smarter data structure, better algorithm, etc. Anything's in scope.

Edit: there's also perhaps some stuff in #181 that we might want to implement as part of this.

N-Wouda avatar Dec 11 '23 15:12 N-Wouda

One idea that might help a bit relates to how we skip move evaluation when nothing's changed since the last time we considered the client pairs $U$ and $V$:

https://github.com/PyVRP/PyVRP/blob/7604244262d342f14b649f1845ecd30eda56bc7e/pyvrp/cpp/search/LocalSearch.cpp#L92-L93

This works fine, but any change in the routes of $U$ and $V$ is now reason to re-evaluate, even when that change happens many nodes after $U$ or $V$ and has no consequences for the evaluation.

We might make this more efficient by taking into account the size of our move operators (e.g., 1, 2, 3, etc. nodes), and checking whether there has been a recent modification that affected nodes sufficiently near either $U$ or $V$ to warrant a new evaluation.

I think I know how to implement this efficiently, but I'm lacking the time right now to pursue it. This will probably have to wait until PyVRP 0.8.0.
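One way the finer-grained check could look, sketched in Python with purely hypothetical names (PyVRP's search code is C++, and none of these identifiers come from the actual codebase): each node position stores the step at which it was last modified, and a $(U, V)$ pair is re-evaluated only when some node within the operator's size of $U$ or $V$ changed after the pair was last tested.

```python
class Route:
    """Toy route that tracks, per position, when that position last changed."""

    def __init__(self, clients):
        self.clients = list(clients)
        self.modified_at = [0] * len(clients)  # per-position modification step

    def mark_modified(self, idx, step):
        self.modified_at[idx] = step

    def changed_near(self, idx, op_size, since):
        # Did anything within op_size positions of idx change after `since`?
        lo = max(0, idx - op_size)
        hi = min(len(self.clients), idx + op_size + 1)
        return any(t > since for t in self.modified_at[lo:hi])


def must_test(route_u, idx_u, route_v, idx_v, op_size, last_tested):
    """True if the (U, V) pair needs re-evaluation for a move operator that
    touches up to op_size nodes around each of U and V."""
    return (route_u.changed_near(idx_u, op_size, last_tested)
            or route_v.changed_near(idx_v, op_size, last_tested))
```

A modification far from both $U$ and $V$ then no longer triggers re-evaluation, which is exactly the locality assumption questioned later in this thread for time windows.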

N-Wouda avatar Dec 11 '23 15:12 N-Wouda

This works fine, but any change in the routes of U and V is now reason to re-evaluate. Even when that change happens many nodes after U or V, which has no consequences for the evaluation.

I like the idea, but I'm not sure if this is correct? At least with time windows, removing a node towards the end of the route may create slack, which enables inserting a customer early in the route that would previously have resulted in a large penalty, even if the affected parts of the route are far apart.

This situation may be hypothetical or may not happen frequently, so we could decide heuristically to skip those options, but that would be a functional change to the algorithm, not a more efficient implementation with the same result. Or am I missing something?
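The counterexample can be made concrete with a toy time-warp computation (all numbers invented for illustration; this is not PyVRP code). Removing a tightly-windowed node at the end of a route decides whether an insertion near the start is feasible:

```python
def time_warp(route, travel=1):
    """Total time-window violation along a route of (early, late) windows,
    with unit travel time between consecutive stops. Toy model only."""
    time, warp = 0, 0
    for early, late in route:
        time += travel
        if time > late:          # arrived too late: accumulate time warp
            warp += time - late
            time = late
        time = max(time, early)  # arrived too early: wait
    return warp


end_node = (10, 10)                  # tight window at the route's end
base = [(0, 100), (0, 100), end_node]
with_insert = [(8, 9)] + base        # insert a customer near the start
without_end = with_insert[:-1]       # ... after removing the end node
```

Here `time_warp(base) == 0` and `time_warp(without_end) == 0`, but `time_warp(with_insert) == 1`: the early insertion forces waiting until time 8, which delays arrival at the final stop past its deadline. So a change at the far end of the route (removing `end_node`) flips the early insertion from penalized to feasible.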

wouterkool avatar Dec 13 '23 10:12 wouterkool

This situation may be hypothetical or not happen frequently so we may decide heuristically to skip those options but that's a functional change of the algorithm, not a more efficient implementation with the same result. Or am I missing something?

Not really. This conclusion seems correct to me.

N-Wouda avatar Dec 13 '23 12:12 N-Wouda

Much more aggressive caching is also in scope here. Routes are queried at least two orders of magnitude more often than they are changed. Every time something between two indices is calculated, can we somehow store the result? Is that worthwhile?
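A minimal sketch of what such caching could look like (hypothetical API, not PyVRP's): results are keyed by (route version, start, end), and bumping the version on any modification implicitly invalidates stale entries.

```python
class CachedRoute:
    """Toy route that memoises segment distance queries between two indices."""

    def __init__(self, distances):
        self.dist = list(distances)  # dist[i] = distance from stop i to i + 1
        self.version = 0
        self._cache = {}

    def modify(self, idx, new_dist):
        self.dist[idx] = new_dist
        self.version += 1            # old cache entries become unreachable

    def segment_distance(self, start, end):
        key = (self.version, start, end)
        if key not in self._cache:
            self._cache[key] = sum(self.dist[start:end])
        return self._cache[key]
```

Versioned keys avoid explicit invalidation logic, at the cost of leaking stale entries; a real implementation would need to evict old versions or clear the cache on modification, and would have to weigh the lookup overhead against the cost of recomputation.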

N-Wouda avatar Feb 29 '24 10:02 N-Wouda

Also relevant stuff that @leonlan and I discussed earlier today:

  • Restarts: how often, and triggered by what (no improvement of the local best in this restart run, or of the global best over all restarts)?
  • SREX: can we limit the time spent in the subsequent local search? (see also #521)
  • Improve our neighbourhood strategy somehow? I know FILO has a much more advanced neighbourhood system, and we might be able to learn something from how that works exactly.
  • Really think about what route neighbours are in #126, and see if we can somehow use that as well.

N-Wouda avatar Mar 27 '24 16:03 N-Wouda

Improve our neighbourhood strategy somehow? I know FILO has a much more advanced neighbourhood system, and we might be able to learn something from how that works exactly.

Leaving out some details, FILO uses a granular neighborhood that is 1) node-based and 2) dynamic.

  1. Node-based means that the number of neighbors differs per node. Let $n_{\text{neighbors}}$ denote the maximum number of neighbors. The active number of neighbors lies in $[1, n_{\text{neighbors}}]$ and is updated dynamically.
  2. Dynamic means that the number of neighbors is updated based on whether local search moves involving a node are improving. This is done using a "sparsification factor" $\gamma_i \in [0, 1]$ for each node $i$, with $\gamma_i^0$ denoting the initial sparsification factor. The number of active neighbors at any time is $\gamma_i \cdot n_{\text{neighbors}}$.
  • If node $i$ was involved in an improving move/solution, reset $\gamma_i = \gamma_i^0$.
  • Otherwise, update $\gamma_i \leftarrow f \gamma_i$ with updating factor $f > 1$.

I don't fully understand yet what "node involved in an improving move" means exactly, but these two changes will likely improve the solver.

Implementation details:

  • LocalSearch.search should take a granular neighborhood as input rather than making it part of its state.
  • We may need something like a NeighbourhoodManager that manages the granular neighborhood.
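A rough sketch of what such a NeighbourhoodManager could look like, assuming the update rule described above (all names and default values here are hypothetical, not FILO's or PyVRP's actual ones):

```python
class NeighbourhoodManager:
    """Toy per-node sparsification: each node i keeps a factor gamma_i that
    scales its active neighbour count, reset on improvement and grown by a
    factor f > 1 otherwise."""

    def __init__(self, num_nodes, max_neighbours, gamma0=0.25, factor=2.0):
        self.max_neighbours = max_neighbours  # n_neighbors in the text above
        self.gamma0 = gamma0                  # initial sparsification factor
        self.factor = factor                  # updating factor f > 1
        self.gamma = [gamma0] * num_nodes

    def active_neighbours(self, node):
        # gamma_i * n_neighbors, clamped to [1, n_neighbors].
        return max(1, min(self.max_neighbours,
                          round(self.gamma[node] * self.max_neighbours)))

    def register_improvement(self, node):
        self.gamma[node] = self.gamma0        # reset on an improving move

    def register_no_improvement(self, node):
        self.gamma[node] = min(1.0, self.factor * self.gamma[node])
```

With `max_neighbours=40` and `gamma0=0.25`, a node starts with 10 active neighbours, doubles to 20 after an unproductive pass, and drops back to 10 once it participates in an improving move.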

leonlan avatar Mar 28 '24 07:03 leonlan

Do people actually use add_node_operator and add_route_operator to add custom node/route operators? Or do they just use whatever default operators we have selected? I suspect the latter. If that's the case we might want to investigate how much it'd help the compiler optimise further if we bring all operators back into the LocalSearch object.

That is probably worth investigating regardless of whether we want to do it.

N-Wouda avatar May 03 '24 18:05 N-Wouda

Do people actually use add_node_operator and add_route_operator to add custom node/route operators? Or do they just use whatever default operators we have selected? I suspect the latter. If that's the case we might want to investigate how much it'd help the compiler optimise further if we bring all operators back into the LocalSearch object.

That is probably worth investigating regardless of whether we want to do it.

I'll have a look at this sometime soon.

N-Wouda avatar Nov 17 '24 14:11 N-Wouda