Diego Fiori
Add nebullvm as a model backend, in a similar way to the ONNX Runtime backend
Now that we have implemented backends that work natively on PyTorch and TF, we could exploit them further by allowing TF models to be converted to Torch models and vice...
Follows up on [this discussion](https://github.com/nebuly-ai/nebullvm/pull/47#discussion_r870734669). We should analyse the impact of the AUTO flag on quantization performance. If no issues are found, we can safely switch from `"CPU"` to `"AUTO"`.
The ONNX-based MLIR dialect implemented [here](https://github.com/onnx/onnx-mlir) looks promising for integration into nebullvm.
The nebullvm code is already commented in a format compatible with building documentation directly from it. We need to 1. Select a documentation builder library (such as Sphinx) 2. Customize the...
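As a sketch of how the Sphinx route could look (the library choice is still open; these are Sphinx's standard commands, shown here only as an illustration):

```shell
# Install the builder and scaffold a docs/ directory (interactive prompts
# create conf.py and index.rst).
pip install sphinx
sphinx-quickstart docs

# Generate API reference pages from the existing docstrings in the package.
sphinx-apidoc -o docs/source nebullvm

# Build the HTML documentation.
make -C docs html
```

The `autodoc` extension (enabled in `conf.py`) is what pulls the documentation out of the existing code comments.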
The auto-installer is currently not supported on Windows. TVM and TensorRT in particular would need special care.
The auto-installer fails to install TVM correctly because LLVM cannot be installed on Ubuntu 18.04 systems. In fact, the command `sudo apt-get install llvm12` fails in Ubuntu...
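A possible workaround to evaluate (an assumption, not yet verified on 18.04): use the official apt.llvm.org installer script instead of the distro package, which registers the LLVM apt repository before installing:

```shell
# Fetch and run the upstream LLVM installer script for version 12.
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 12   # adds the apt.llvm.org repo and installs llvm-12
```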
The current TVM configuration does not seem to exploit the full potential of the TVM library, since performance is lower than theoretically expected (for TVM...
On Apple Silicon chips, TensorFlow cannot be installed with pip, so trying to install nebullvm directly gives an error. We should highlight the problem in the README and give a...
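One candidate workaround for the README (assuming an arm64 Python environment, e.g. from miniforge; whether nebullvm then installs cleanly is an open question) is Apple's own TensorFlow build:

```shell
# Install Apple's TensorFlow distribution for macOS on Apple Silicon.
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal   # optional: GPU acceleration plugin

# Then install nebullvm on top of it.
pip install nebullvm
```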
Add support for the forward gradient computation technique. This technique allows gradients to be estimated during the forward pass, without the need for an explicit backward computation. More information is...
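A minimal sketch of the idea (illustrative only; `Dual` and `forward_gradient` are hypothetical names, not part of nebullvm): a random tangent direction `v` is pushed through a single forward pass with dual numbers, and `(grad f . v) * v` gives an unbiased, if noisy, estimate of the gradient.

```python
import random

class Dual:
    """Minimal dual number: a value plus a tangent (directional derivative)."""
    def __init__(self, val, tan=0.0):
        self.val, self.tan = val, tan
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.tan + o.tan)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._wrap(o)
        # Product rule carried alongside the value in the same forward pass.
        return Dual(self.val * o.val, self.tan * o.val + self.val * o.tan)
    __rmul__ = __mul__

def forward_gradient(f, x):
    """One forward-gradient sample: draw a random direction v, compute the
    directional derivative (grad f . v) in a single forward pass via dual
    numbers, and return (grad f . v) * v — an unbiased gradient estimate."""
    v = [random.gauss(0.0, 1.0) for _ in x]
    jvp = f([Dual(xi, vi) for xi, vi in zip(x, v)]).tan
    return [jvp * vi for vi in v]

# Toy objective: f(x) = x0^2 + 3*x1, whose true gradient at (1, 2) is (2, 3).
f = lambda xs: xs[0] * xs[0] + 3 * xs[1]
estimate = forward_gradient(f, [1.0, 2.0])  # single noisy, unbiased sample
```

Averaging many samples converges to the true gradient; in practice each sample is used directly as a stochastic gradient, trading variance for skipping the backward pass.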