NanoCode012

163 comments of NanoCode012

A6000 or L4. You may also try some alternative providers, as AWS is quite expensive.

@kallewoof, thanks for checking. Will close this for now. If the issue comes back, please comment or reopen.

Hm, I just tested a few times, and it seems to work for me.

Can you also provide screenshots of the out directory here for reference?

Hey, would you like to make a PR to fix this :)

> btw, is `CUDA_VISIBLE_DEVICES=""` necessary in doing `python -m axolotl.cli.preprocess examples/llama-2/ver2.0.yml`? I think I didn't when I preprocessed

Shouldn't be any issue. May I ask which model size you're running?...
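For context on the flag discussed above: setting `CUDA_VISIBLE_DEVICES` to an empty string hides all GPUs from the process, so a command like `python -m axolotl.cli.preprocess ...` runs on CPU only. A minimal sketch of the mechanism (not an axolotl invocation, just demonstrating the environment variable):

```python
import os
import subprocess
import sys

# An empty CUDA_VISIBLE_DEVICES hides every GPU from the child process;
# CUDA-aware frameworks (e.g. PyTorch) then report zero visible devices.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="")
out = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['CUDA_VISIBLE_DEVICES'] == '')"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # True
```

The same effect applies when prefixing any command, e.g. `CUDA_VISIBLE_DEVICES="" python -m axolotl.cli.preprocess ...` in a shell; as noted in the comment, preprocessing works either way, so the variable is optional.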

Thanks for pointing out the issue. `chat_template` is a newer addition, and this may have been missed. In the short term, it should be possible to just do `ds_cfg.type.startswith..` I...
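A sketch of the kind of prefix check suggested above. Note the config class and field names here are illustrative assumptions, not axolotl's actual config objects:

```python
from dataclasses import dataclass


@dataclass
class DatasetConfig:
    """Stand-in for a dataset config entry (hypothetical, for illustration)."""
    type: str  # e.g. "chat_template" or a suffixed variant


def uses_chat_template(ds_cfg: DatasetConfig) -> bool:
    # A prefix match covers both the bare "chat_template" type and any
    # dotted variants, which an exact equality check would miss.
    return ds_cfg.type.startswith("chat_template")


print(uses_chat_template(DatasetConfig("chat_template")))  # True
print(uses_chat_template(DatasetConfig("alpaca")))         # False
```

The design choice here is simply that `startswith` degrades gracefully as new `chat_template.*` subtypes are introduced, whereas an equality check would need updating each time.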

What do you mean by custom prompting?

Hey, as said here https://github.com/OpenAccess-AI-Collective/axolotl/discussions/1171, it's llama-based, so you can use the llama configs :)