brevitas
Brevitas: neural network quantization in PyTorch
Adding support for PyTorch/XLA would mean support for TPUs. Currently the main blocker is the lack of a non-binary STE backend (i.e. one based on Python APIs only), which was removed...
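For context, a Python-only (non-binary) STE backend of the kind mentioned above can be sketched with `torch.autograd.Function` alone, with no compiled extension involved. This is an illustrative sketch, not Brevitas code; the class name is hypothetical.

```python
import torch

class RoundSte(torch.autograd.Function):
    """Round-to-nearest with a straight-through estimator (STE):
    the backward pass treats rounding as the identity, so gradients
    flow through unchanged. Implemented with Python-level autograd
    APIs only, i.e. the kind of non-binary backend a tracing target
    like PyTorch/XLA could consume."""

    @staticmethod
    def forward(ctx, x):
        # Quantize in the forward pass.
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity gradient: pass the upstream gradient straight through.
        return grad_output
```

Usage: `RoundSte.apply(x)` rounds `x` in the forward pass while gradients reach `x` unchanged during backpropagation.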
Per title. https://dhall-lang.org/
Currently GitHub Actions doesn't support any standardized way to report test results back. It should be doable to do it manually by exporting to JUnit XML format and then calling the appropriate...
Add a cache for pip requirements (distinct from conda requirements) to cover Nox dependencies inside GitHub Actions.
The current naming clash between the actual scale factor and its learned component is a source of confusion. Rename it without breaking compatibility with pretrained models.
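One standard way to rename a learned parameter without breaking pretrained checkpoints is to remap the legacy key in `_load_from_state_dict`. The sketch below is illustrative; the module and parameter names (`scale`, `learned_value`) are placeholders, not the actual Brevitas identifiers.

```python
import torch
import torch.nn as nn

class ScaledModule(nn.Module):
    """Hypothetical module whose parameter was renamed from 'scale'
    (ambiguous: it clashed with the effective scale factor) to
    'learned_value'. Old checkpoints are remapped transparently on load."""

    def __init__(self):
        super().__init__()
        self.learned_value = nn.Parameter(torch.ones(1))

    def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs):
        # Remap the legacy key so pretrained state dicts keep working.
        old_key = prefix + 'scale'
        if old_key in state_dict:
            state_dict[prefix + 'learned_value'] = state_dict.pop(old_key)
        super()._load_from_state_dict(state_dict, prefix, *args, **kwargs)
```

With this hook in place, `load_state_dict` accepts both old-style and new-style checkpoints with `strict=True`.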
Clarify compiler dependencies on Windows when PyTorch >= 1.3.0.
Currently we are not checking PEP 8 compliance in any way, nor enforcing formatting. I really appreciate that Black enforces a single uniform style, but I cannot accept dangling...
Test and document all the various alternatives for switching between different scale factors.
A major feature currently missing from Brevitas is the ability to clamp an accumulator to a specified bit width during an operation such as conv2d or linear. This is...
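The behavior requested above can be modeled, in a simplified form, by saturating the output of an integer matmul/conv to the signed range of the target bit width. A minimal sketch, assuming a signed two's-complement accumulator; the function name and interface are illustrative, not Brevitas API:

```python
import torch

def clamp_accumulator(acc, bits):
    """Clamp an integer accumulator tensor to the signed range of
    `bits` bits, i.e. [-2**(bits-1), 2**(bits-1) - 1]. This models
    saturating arithmetic applied after an integer conv2d/linear."""
    lo = -(2 ** (bits - 1))
    hi = 2 ** (bits - 1) - 1
    return torch.clamp(acc, lo, hi)
```

For example, clamping to 8 bits maps any accumulated value outside [-128, 127] onto the nearest bound, which is what a hardware accumulator of that width would produce under saturation (as opposed to wraparound).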