
Optimize fallback segmentation data roundtrips

Open · fm3 opened this issue 2 years ago · 3 comments

In #6369, https://github.com/scalableminds/webknossos/pull/6367, and the existing Zarr streaming for volume annotations, the tracingstore requests the unchanged segmentation buckets from the datastore.

It may be worth inverting the API so that user requests go directly to the datastore (where most of the buckets are ultimately loaded from anyway). The datastore would then ask the tracingstore for each bucket, which the tracingstore serves only if it holds an annotated version of it; on an empty reply, the datastore serves the unchanged bucket itself.
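A minimal sketch of this inverted routing, assuming hypothetical names (BucketPosition, FallbackBucketProvider, and the two loader functions are illustrative, not the actual webknossos API):

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical bucket address; the real webknossos type looks different.
case class BucketPosition(x: Int, y: Int, z: Int, mag: Int)

class FallbackBucketProvider(
    askTracingstore: BucketPosition => Future[Option[Array[Byte]]],
    loadOwnLayer: BucketPosition => Future[Array[Byte]]
)(implicit ec: ExecutionContext) {

  // Probe the tracingstore first; it answers only for buckets it holds an
  // annotated version of. On an empty reply, serve the unchanged fallback
  // bucket from the datastore's own layer.
  def load(bucket: BucketPosition): Future[Array[Byte]] =
    askTracingstore(bucket).flatMap {
      case Some(annotated) => Future.successful(annotated)
      case None            => loadOwnLayer(bucket)
    }
}
```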

We would have to consider how much the datastore would need to know about annotations and permissions.

fm3 avatar Aug 04 '22 15:08 fm3

Not sure whether this belongs in this issue, too, but the title fits at least:

The front-end still sends (potentially) two requests per bucket for segmentation layers with fallback data: one to the tracingstore, and one to the datastore if the first request fails. Could/should we change this to a single request, now that the backend can do the fallback logic?
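For contrast, a minimal sketch of the single-request alternative asked about here: the front-end talks only to the tracingstore, which performs the fallback itself. All names are again hypothetical, not the real webknossos code:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical bucket address, as in the sketch above.
case class BucketPosition(x: Int, y: Int, z: Int, mag: Int)

class TracingstoreBucketService(
    loadLocal: BucketPosition => Future[Option[Array[Byte]]],
    askDatastore: BucketPosition => Future[Array[Byte]]
)(implicit ec: ExecutionContext) {

  // Serve annotated buckets from the tracingstore's own store and
  // transparently fetch unchanged buckets from the datastore, so the
  // front-end needs only one request per bucket.
  def load(bucket: BucketPosition): Future[Array[Byte]] =
    loadLocal(bucket).flatMap {
      case Some(annotated) => Future.successful(annotated)
      case None            => askDatastore(bucket) // unchanged fallback data
    }
}
```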

philippotto avatar Aug 08 '22 12:08 philippotto

I’m uncertain whether moving this to the back-end would actually improve performance, since the probing would then have to happen between the tracingstore and the datastore. That may be faster if the two are in the same network, or even just in a significantly better network than the front-end, but I would like to make measurements here first. But yes, this belongs here :)

fm3 avatar Aug 09 '22 12:08 fm3

> but I would like to make measurements here first.

Sure, good plan :) You are probably right that this depends on the setup, so in the end we would need to decide which strategy is "usually" faster. Supporting both strategies is probably not worth the hassle.

philippotto avatar Aug 09 '22 14:08 philippotto