Jonas Belouadi
I ran into a similar problem where I ended up with a dirty working directory, too. With packer.nvim I can just add `--autostash` to the update command to solve it....
Yeah, sounds like a very reasonable addition! I think we can implement this minimally, maybe by firing User autocmds. Then spinners etc. could be implemented independently based on the status line...
Hi @muhfaris, I don't use lazy.nvim, but maybe I can still help you. Could you please describe your specific problem in more detail?
Hi, apertium works for me; what curl error do you get? What sometimes happens is that the apertium beta endpoint is slow to respond and curl exits with a timeout...
Where does your checkpoint come from? If you start a fine-tune, checkpoints are created automatically, and if you then interrupt the fine-tuning and restart it (with the same `--output`...
For training, multiple GPUs should be picked up automatically. Just execute `examples/train.py` or run it with `torchrun --nproc_per_node gpu examples/train.py`.
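For reference, the two launch styles mentioned above would look like this (a sketch; `examples/train.py` and the `torchrun` flag come from the comment, everything else is standard `torchrun` usage):

```shell
# Option 1: plain launch; the training script picks up available GPUs itself.
python examples/train.py

# Option 2: explicit distributed launch; torchrun spawns one worker process
# per GPU ("gpu" tells torchrun to infer the local GPU count automatically).
torchrun --nproc_per_node gpu examples/train.py
```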
The models were trained on 4x A40 GPUs with 48 GB of memory each, but it should be possible to train them with less hardware, especially the 7b models. You could...
You can either use [infer.py](https://github.com/potamides/AutomaTikZ/blob/98b570c0e82fcfc6505c34335d4e413bb536be8b/examples/infer.py) for a CLI interface or run your own instance of the [webui](https://github.com/potamides/AutomaTikZ/tree/98b570c0e82fcfc6505c34335d4e413bb536be8b/examples/webui) (note that you would need to add your own model to the [model dict](https://github.com/potamides/AutomaTikZ/blob/98b570c0e82fcfc6505c34335d4e413bb536be8b/examples/webui/webui.py#L20-L25)...
I don't think I have encountered this error before. Did you make any changes to the training code?
If you need help please provide a [minimal, reproducible example](https://stackoverflow.com/help/minimal-reproducible-example).