Should we make int32's configurable?
Just noticing that we rely quite heavily on 32-bit integers in datashader (grepping for 'i4' and 'int32' turns up many hits). Should we make these configurable at some point? Either way, it would be nice to have more tests that exercise inputs exceeding 2.15B points and verify that we aren't hitting subtle overflow issues (probably easier said than done). What do you think?
Yes, we should, thanks. int32 is a good default for performance reasons, but it should be configurable. Note that not all int32 uses are tied to the dataset size: using int32 for an aggregate array cell, for example, doesn't limit the dataset to 2.15B data points, only to 2.15B data points per pixel. It still needs to be configurable, of course.
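For reference, here's a minimal NumPy sketch of the wraparound being discussed (not datashader code, just an illustration of what happens when an int32 aggregate cell passes its ceiling, and how a wider dtype avoids it):

```python
import numpy as np

# An int32 cell sitting at its maximum value (2_147_483_647, i.e. ~2.15B).
cell = np.int32(np.iinfo(np.int32).max)

# One more increment wraps around to the negative range instead of growing.
with np.errstate(over="ignore"):  # suppress NumPy's overflow warning
    overflowed = cell + np.int32(1)
print(overflowed)  # -2147483648

# Promoting to int64 (or any configurable wider dtype) counts correctly.
safe = np.int64(cell) + 1
print(safe)  # 2147483648
```

This is why a count aggregate with int32 cells is only limited per pixel, not per dataset: each cell counts independently, and the wraparound only occurs once a single pixel accumulates more than 2^31 - 1 points.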