[Docs] Added the automatic generation of examples documentation
Fine-tuning
- [x] Alignment Handbook
- [x] Axolotl
- [ ] QLoRA
Deployment
- [ ] PyTorch Distributed Inference
- [ ] Infinity
- [ ] LoRAX
- [ ] Ollama
- [ ] TGI
- [ ] vLLM
LLMs
- [ ] Llama 3.1
- [ ] Mistral
Hi @peterschmidt85, I have a question. I'd like to take up this examples issue. I see you've already done the examples for the Alignment Handbook and Axolotl, and partly QLoRA. Since those are essentially about fine-tuning with/on dstack, my understanding is that this examples section is also meant to cover other use cases beyond fine-tuning, such as running/serving models with/on dstack, hence items like Ollama and TGI. I'd like to confirm that this is the goal. If so, could you give a little detail on how you'd like such an example to look?
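To make the question concrete, here is a rough sketch of how I imagine a serving example might look, assuming a dstack `service` YAML configuration that runs TGI. The model ID, image tag, port, and resource values are placeholders I picked purely for illustration, and the exact fields may differ from what you have in mind:

```yaml
# Hypothetical sketch of a serving example (not a final config):
# a dstack service running TGI with a placeholder model.
type: service

image: ghcr.io/huggingface/text-generation-inference:latest
env:
  # Placeholder model; the real example could use Llama 3.1, Mistral, etc.
  - MODEL_ID=mistralai/Mistral-7B-Instruct-v0.2
commands:
  - text-generation-launcher --port 8000
port: 8000

resources:
  gpu: 24GB
```

If that's roughly the idea, I can follow the structure of the existing Alignment Handbook and Axolotl examples for the write-up.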
@peterschmidt85, is this relevant?
I suggest that we close it