Sam_S
Check `ssd/config/defaults.py` [here](https://github.com/lufficc/SSD/blob/master/ssd/config/defaults.py) and change the learning rate on line 80, where `_C.SOLVER.LR = 1e-3` is set.
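If you'd rather not edit `defaults.py` in place, yacs-style configs (which this repo appears to use, judging by the `_C.SOLVER.LR` syntax) typically support runtime overrides with dotted keys, e.g. via `merge_from_list`. A minimal stdlib-only sketch of that override pattern — the helper below is illustrative, not the repo's actual code:

```python
# Illustrative sketch of a yacs-style dotted-key override; NOT the repo's
# own implementation, just the general pattern it follows.

def merge_overrides(cfg: dict, pairs: list) -> dict:
    """Apply ["SOLVER.LR", 1e-4, ...] style key/value overrides to a nested dict."""
    for key, value in zip(pairs[::2], pairs[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for parent in parents:
            # walk (or create) intermediate sections
            node = node.setdefault(parent, {})
        node[leaf] = value
    return cfg

# defaults mirroring the _C.SOLVER.LR = 1e-3 line mentioned above
defaults = {"SOLVER": {"LR": 1e-3, "MAX_ITER": 120000}}
merge_overrides(defaults, ["SOLVER.LR", 1e-4])
print(defaults["SOLVER"]["LR"])  # 0.0001
```

The upside of overriding instead of editing `defaults.py` is that the defaults stay intact for other experiments.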
Hi, has this issue been fixed? I still cannot run `from triton_python_backend_utils import Tensor` in `nvcr.io/nvidia/tritonserver:22.12-py3`.
I've had the same issue with torch==0.10.0. I had to use the latest version of torch and comment out the mentioned line with the pad assignment.
Maybe try some other saliency and feature-map visualization methods from this repo: https://github.com/jacobgil/pytorch-grad-cam
@ariefwijaya you can find it in my fork of the repo https://github.com/SamSamhuns/donut
Hello, it has been a long time since I worked with the custom fork of the Donut model (if that is what you are using), and many things might be out...
I've set up a PR for this at #256 and did an example run with the scribble generation. You can test it out.
8 GB of VRAM was not enough for me; I needed around 10 GB to run the scribble-to-image script. When the Docker image builds, the models are not copied directly into the image since these...
Did you notice any speedup when using PyTorch 2.0 and compiling the models?
Same here. My tokenizer is all messed up since I got a 403 mid-download.