
Avoid instantiating huge tensors as input to similarity functions

Open matt-gardner opened this issue 7 years ago • 4 comments

I'm not sure how this would work, really, but it takes a whole lot of memory to do it the way we currently do: tiling everything and then doing elementwise multiplication. There might be some way to make this work using some kind of batch_dot or dot.
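For context, here is a minimal sketch of the tiling-plus-elementwise-multiply pattern this issue is about. The shapes are made up for illustration and this is not deep_qa's actual code:

```python
import tensorflow as tf

# Illustrative shapes only -- not deep_qa's real inputs.
batch_size, num_queries, num_keys, dim = 32, 40, 400, 200
queries = tf.random.normal([batch_size, num_queries, dim])
keys = tf.random.normal([batch_size, num_keys, dim])

# Tile both inputs to a shared (batch, num_queries, num_keys, dim) shape,
# multiply elementwise, and sum out the embedding dimension.  Each tiled
# intermediate holds batch * num_queries * num_keys * dim floats, which is
# where the memory goes.
tiled_queries = tf.tile(tf.expand_dims(queries, 2), [1, 1, num_keys, 1])
tiled_keys = tf.tile(tf.expand_dims(keys, 1), [1, num_queries, 1, 1])
similarities = tf.reduce_sum(tiled_queries * tiled_keys, axis=-1)
# similarities has shape (batch, num_queries, num_keys).
```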

matt-gardner avatar Apr 20 '17 18:04 matt-gardner

It looks like tf.einsum might do the trick, at least for simple similarity functions. For more complicated ones, I'm not sure.
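A rough sketch of what that could look like for a plain dot product and a parameterized bilinear similarity; the shapes and the `weight` variable are illustrative, not deep_qa's code:

```python
import tensorflow as tf

queries = tf.random.normal([32, 40, 200])   # (batch, num_queries, dim)
keys = tf.random.normal([32, 400, 200])     # (batch, num_keys, dim)

# Plain dot-product similarity: contract over the embedding dimension
# without ever building the tiled rank-4 tensor.
dot_similarities = tf.einsum('bqd,bkd->bqk', queries, keys)

# A parameterized bilinear similarity (q^T W k) fits the same pattern;
# `weight` here is just an illustrative trainable matrix.
weight = tf.Variable(tf.random.normal([200, 200]))
bilinear_similarities = tf.einsum('bqd,de,bke->bqk', queries, weight, keys)
```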

matt-gardner avatar Apr 23 '17 05:04 matt-gardner

tf.matmul works well for generic dot-product-based similarities. It's probably a lot faster, too, since it calls directly into the optimized matrix routines.
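Something along these lines, again with illustrative shapes; the batched matmul produces the same (batch, num_queries, num_keys) matrix as the tiled version:

```python
import tensorflow as tf

queries = tf.random.normal([32, 40, 200])   # (batch, num_queries, dim)
keys = tf.random.normal([32, 400, 200])     # (batch, num_keys, dim)

# Batched matrix multiply over the shared embedding dimension; this
# dispatches straight to the optimized matmul kernels instead of
# materializing the tiled rank-4 intermediate.
similarities = tf.matmul(queries, keys, transpose_b=True)
```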

matt-peters avatar Apr 23 '17 19:04 matt-peters

The issue is that our similarity functions try to be fancy, letting you easily swap out different parameterized and non-parameterized functions when computing attentions. The trouble is that the way we make this easy consumes a whole lot of memory (see the sketch below). We need to re-think the API a bit.
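For example, a linear-style similarity such as w^T [q; k; q * k] (the shapes and names here are illustrative, not deep_qa's actual similarity classes) does seem to need the fully tiled inputs, which is roughly where the flexibility costs memory:

```python
import tensorflow as tf

batch_size, num_queries, num_keys, dim = 32, 40, 400, 200
queries = tf.random.normal([batch_size, num_queries, dim])
keys = tf.random.normal([batch_size, num_keys, dim])
weight_vector = tf.Variable(tf.random.normal([3 * dim]))

# Materialize every (query, key) pair explicitly.
tiled_queries = tf.tile(tf.expand_dims(queries, 2), [1, 1, num_keys, 1])
tiled_keys = tf.tile(tf.expand_dims(keys, 1), [1, num_queries, 1, 1])

# w^T [q; k; q * k]: the concatenation mixes features from both inputs,
# so there is no single matmul that obviously replaces the rank-4
# intermediate.
combined = tf.concat(
    [tiled_queries, tiled_keys, tiled_queries * tiled_keys], axis=-1)
similarities = tf.einsum('bqkd,d->bqk', combined, weight_vector)
```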

matt-gardner avatar Apr 23 '17 19:04 matt-gardner

I'm decreasing the priority of this, as the adaptive batch size and dynamic padding stuff makes this not too big of an issue anymore.

It'd still be a nice optimization, and would likely make runtimes faster, but it's not blocking anything anymore.

matt-gardner avatar May 10 '17 18:05 matt-gardner