Edd

Results: 22 comments of Edd

Yes, it's there! It should be the same as HuggingFace's default, which is 1.0 (https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig.temperature). Of course you can modify the temperature there too; just add `temperature=0.1`, for example.
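For context, temperature just rescales the logits before the softmax, so lower values make sampling peakier. A minimal standalone sketch of that effect (an illustration only, not Unsloth's or HuggingFace's actual implementation):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by temperature, then softmax; lower T sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 1.0))  # default, same as temperature=1.0
print(softmax_with_temperature(logits, 0.1))  # much peakier: top token dominates
```

With `temperature=0.1` the probability mass concentrates almost entirely on the highest logit, which is why low temperatures make generation more deterministic.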

The issue seems related to this one (https://github.com/state-spaces/mamba/issues/173), which states that your GPU is not supported by Triton, the library that powers Unsloth.

Hello, are you still having this problem? By the way, you can join our Discord at `discord.gg/unsloth`, where many people can help you :D

Related issue: https://github.com/unslothai/unsloth/issues/1731

Sorry for the very late reply; I was finally able to get back to Unsloth stuff. I don't think you can train a non-LoRA model using Unsloth in general. When I tested...

You need to downgrade `triton` to version `3.1.0` for now to work around this issue.
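In practice the pin looks like this (assuming a pip-managed environment; adjust for conda/uv if that's what you use):

```shell
# Pin triton to 3.1.0 until the incompatibility is fixed upstream
pip install "triton==3.1.0"
```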

That's a weird library, and I don't think we use it .-.

Good idea, thanks for the PR! I found that making a request to the endpoint lacks feedback for the user about what's going on, though (that's what happened when I tested)...

Yeah, you need to check whether `ollama` is running first. Oh no no, I meant combining it with `save_to_gguf`: you can just pass the model object and the tokenizer object...
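For the "is ollama running" check, a minimal sketch using only the standard library (a hypothetical helper, not Unsloth's actual code; it assumes Ollama's default local port 11434):

```python
import urllib.request
import urllib.error

def ollama_is_running(host="http://localhost:11434", timeout=2):
    """Return True if an HTTP server responds at `host` (Ollama's default port assumed)."""
    try:
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timed out: no server is listening there
        return False
```

A `save_to_gguf` wrapper could call this up front and give the user a clear error instead of failing mid-push.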

> Sure, just let me know if you want a bool parameter `push_to_ollama: bool` in `save_to_gguf` then we can directly add the `create_ollama_modelfile -> create the ollama model -> push...