Eliran Wong
Function calling makes a big difference. It would be perfect if Ollama could support function calling.
An approach to running multiple functions with Ollama: https://github.com/eliranwong/freegenius#approach-to-run-function-calling-equivalent-features-offline-with-common-hardwares I like Ollama and will use it as the default engine for the project.
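For reference, a minimal sketch of the prompt-and-parse idea behind that kind of workaround, assuming a local Ollama server on its default port and a hypothetical `get_weather` tool; this is an illustration, not the FreeGenius implementation itself:

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

# A hypothetical local "tool" the model can ask us to run.
def get_weather(city: str) -> str:
    return f"It is sunny in {city}."

TOOLS = {"get_weather": get_weather}

SYSTEM = (
    "You can call one function. Respond ONLY with JSON of the form "
    '{"function": "<name>", "arguments": {...}}. '
    "Available functions: get_weather(city: str) -> str."
)

def call_with_tools(model: str, user_query: str) -> str:
    # Ask the model to emit a JSON "function call" instead of free text.
    payload = {
        "model": model,
        "prompt": f"{SYSTEM}\n\nUser: {user_query}",
        "format": "json",   # ask Ollama to constrain the output to valid JSON
        "stream": False,
    }
    response = requests.post(OLLAMA_URL, json=payload, timeout=120).json()
    call = json.loads(response["response"])
    func = TOOLS[call["function"]]          # dispatch to the matching local function
    return func(**call.get("arguments", {}))

if __name__ == "__main__":
    print(call_with_tools("mistral", "What's the weather in London?"))
```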
It is helpful, much appreciated.
> I have been able to get function calling (OpenAI style) to work with dolphin-mistral:8x7b-v2.7-q8_0; it probably works with others too, but probably not all of them. Here's the prompt I send:...
This workaround seems to run faster: https://github.com/eliranwong/freegenius#approach-to-run-function-calling-equivalent-features-offline-with-common-hardwares I am testing it with "phi", "mistral", and "llama2" via Ollama; still in testing, though.
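As a usage note, the `call_with_tools` sketch above can be pointed at any locally pulled model just by changing the model name, e.g.:

```python
# Assumes the models have already been pulled with `ollama pull <name>`.
for model in ("phi", "mistral", "llama2"):
    print(model, "->", call_with_tools(model, "What's the weather in Paris?"))
```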
Does that mean LLaMA-Factory does not support AMD cards?
> > A question. Does LLaMA-Factory support AMD graphics cards?
>
> https://github.com/vosen/ZLUDA could work.

I also thought of ZLUDA. I will try both ROCm and ZLUDA in the coming weeks.
I shared the setup notes at https://github.com/eliranwong/MultiAMDGPU_AIDev_Ubuntu/blob/main/README.md and will update them when I test LLaMA-Factory with AMD cards.
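As a quick sanity check before testing LLaMA-Factory on AMD hardware, a small sketch that verifies a ROCm build of PyTorch can see the GPUs (assumes the ROCm wheel of torch is installed; `torch.version.hip` is only populated in ROCm builds):

```python
import torch

# On ROCm builds of PyTorch the CUDA API is backed by HIP,
# so torch.cuda.* still works while torch.version.hip is set.
print("HIP version:", getattr(torch.version, "hip", None))
print("GPU available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(f"Device {i}:", torch.cuda.get_device_name(i))
```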
