Jordan Fix

52 comments by Jordan Fix

Sure, I think that's fine -- though FYI I think we have `scale = 0.f` in some other places too where we're using dummy quantization parameters, for example when we...
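
(For context on the "dummy parameters" idea above: a minimal sketch, not Glow's actual API — `QuantParams` and `quantizeF32` are hypothetical names. A `scale` of `0.f` works as a placeholder precisely because real quantization math divides by it, so an unresolved dummy shows up immediately.)

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical illustration: a scale/offset pair where scale = 0.f marks
// "not yet assigned" — a later pass is expected to fill in real values.
struct QuantParams {
  float scale = 0.f;  // 0.f acts as the dummy placeholder
  int32_t offset = 0; // zero point
};

// Quantize a float once real params exist; a zero scale is a bug here.
int8_t quantizeF32(float x, const QuantParams &qp) {
  assert(qp.scale != 0.f && "dummy quantization params were never replaced");
  float q = std::round(x / qp.scale) + static_cast<float>(qp.offset);
  return static_cast<int8_t>(std::clamp(q, -128.f, 127.f));
}
```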

@mciprian13 I think we intentionally allow some numerical changes because they can improve performance, under the assumption that the model can tolerate slightly different numerics. We have a...

@mciprian13 Sounds cool! As you expected, I don't have time to work on this right now. As inspiration, it sounds somewhat similar to how our quantization profiling works --...
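
(As a rough illustration of the profiling approach mentioned above — a minimal sketch, not Glow's implementation; `RangeProfile` is a hypothetical name. The idea is to record each tensor's observed min/max during float inference, then derive int8 scale/offset from that range.)

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical profiler: observe a tensor's values during float inference,
// then map the recorded range [min, max] onto the int8 range [-128, 127].
struct RangeProfile {
  float min = std::numeric_limits<float>::max();
  float max = std::numeric_limits<float>::lowest();

  void observe(const std::vector<float> &values) {
    for (float v : values) {
      min = std::min(min, v);
      max = std::max(max, v);
    }
  }

  void toQuantParams(float &scale, int32_t &offset) const {
    scale = (max - min) / 255.f;
    if (scale == 0.f) {
      scale = 1.f; // degenerate case: constant tensor
    }
    // Choose offset so that `min` quantizes to -128.
    offset = static_cast<int32_t>(std::round(-128.f - min / scale));
  }
};
```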

Hi @ezimmer9, we currently do not support these ops. However, they should be relatively easy to implement, depending on what precision you're talking about (we'd probably want to create `IntLookupTables`...
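
(A minimal sketch of the `IntLookupTables` idea, with hypothetical names — since an int8 input takes only 256 possible values, any elementwise op can be precomputed once into a 256-entry table: dequantize each possible input, apply the op in float, requantize the result, then execute with a single table load per element.)

```cpp
#include <array>
#include <cmath>
#include <cstdint>
#include <functional>

// Hypothetical builder: precompute f over all 256 int8 inputs.
std::array<int8_t, 256> buildLUT(const std::function<float(float)> &f,
                                 float inScale, int32_t inOffset,
                                 float outScale, int32_t outOffset) {
  std::array<int8_t, 256> lut;
  for (int i = -128; i <= 127; ++i) {
    float x = (i - inOffset) * inScale;             // dequantize input
    float y = f(x);                                 // apply the op in float
    float q = std::round(y / outScale) + outOffset; // requantize output
    q = std::max(-128.f, std::min(127.f, q));       // clamp to int8
    lut[i + 128] = static_cast<int8_t>(q);
  }
  return lut;
}

// Usage sketch, e.g. for a quantized sigmoid:
//   auto lut = buildLUT([](float x) { return 1.f / (1.f + std::exp(-x)); },
//                       inScale, inOffset, outScale, outOffset);
```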

Hi @anujgupt-github -- Thanks for updating this issue. As far as I recall, we are only using the LUT when the inputs and outputs are both quantized. This is where...

Hi @NinaPacifier -- I actually think @ayermolo is already working on this, and will probably upstream it soon. Just want to make sure you don't waste your time on it.

@mciprian13 Did you address comments from @opti-mix here?

@mciprian13 Were you thinking of this just as a debugging aid, or as a real feature? Does ASAN not work for checking this?
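
(For reference, a minimal repro of the kind of memory error ASAN catches out of the box — not specific to this issue:)

```cpp
// Build with AddressSanitizer enabled, e.g.:
//   clang++ -fsanitize=address -g repro.cpp -o repro && ./repro
int main() {
  int *p = new int[4];
  int x = p[4]; // heap-buffer-overflow: ASAN reports the file/line of this read
  delete[] p;
  return x;
}
```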

I'm not very familiar with Windows development, but there was a previous issue that was solved and might help: https://github.com/pytorch/glow/issues/4021. If not, I'd suggest trying a debug build if you haven't...