Will Constable
Yea, it should be OK to modify the codegen in the short term to call XLA's function for generating the IR for a scalar value. Longer term... I wonder if...
Yea, I shouldn't have said 'the compiler'. What I meant was that we want to deliver perf enhancements to users without them changing their code. We also sometimes...
Is there a formulaic way to apply quantization to a bunch of models? Or do we need expert help to make sure it's done correctly per model?
@vkuzo We currently run a [bunch of open-source models](https://github.com/pytorch/benchmark/tree/master/torchbenchmark/models) in a harness that lets us track CPU/GPU train/eval performance and do several things with the data. One is to...
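(For context, this is roughly how the harness drives each model; the exact `torchbenchmark` constructor and method signatures may differ from this sketch.)

```python
# Illustrative sketch of the torchbenchmark harness loop; treat the exact
# Model API (constructor args, train/eval methods) as approximate.
from torchbenchmark.models.resnet50 import Model

m = Model(device="cuda", jit=False)  # each model exposes a uniform constructor
m.train()  # run one timed training iteration
m.eval()   # run one timed eval iteration
```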
I see - so the calibration step is a post-processing alternative to QAT, and QAT is full training (or is it sometimes just a fine-tuning of a model pre-trained without QAT)? Note...
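(For reference, a minimal sketch of the calibration-based flow using PyTorch's eager-mode post-training static quantization API; the toy model and random calibration data are placeholders for a real model and loader.)

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # float -> quantized
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # quantized -> float

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = ToyModel().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)

# Calibration: push representative inputs through so the observers record
# activation ranges. This is post-processing only - no gradients, no training.
with torch.no_grad():
    for _ in range(10):  # stand-in for a real calibration loader
        prepared(torch.randn(1, 3, 32, 32))

# Swap observed modules for quantized ones using the recorded statistics.
quantized = torch.quantization.convert(prepared)
out = quantized(torch.randn(1, 3, 32, 32))
```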
No, I don't know. If git blame shows it as part of my initial copy to this repo, you could check blame on the old repo, which was pytorch/hub. There...
Oh, can you also fix the default device? And check off the box here: https://github.com/pytorch/benchmark/projects/1
And make sure to raise NotImplementedError for jit? It looks like jit is silently disabled for now.
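(Something like this in the model constructor would fail loudly instead of silently running eager; a sketch, since the actual constructor signature in this repo may differ.)

```python
class Model:
    def __init__(self, device=None, jit=False):
        # Reject the unsupported configuration explicitly rather than
        # silently falling back to eager mode.
        if jit:
            raise NotImplementedError("jit is not supported for this model yet")
        self.device = device
        self.jit = jit
```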
OK, deleting or commenting is fine, just wanted to ask. Oops, I sent the wrong link. Check the moco-specific 'nit' in this issue: https://github.com/pytorch/benchmark/issues/65
Also, I only counted 20 models with labels added, but we have 25 models. I'm not sure if all 5 of the missing models are ones we are disabling for...