YaLM-100B
GGUF / MLX format?
Hello and thanks for open-sourcing the model!
As there don't seem to be any ready-to-use GGUF or MLX formats (for llama.cpp and macOS respectively), is there any chance you could give a hint on how to convert YaLM to one of them?
It would be a real help in enabling the model to run on non-NVIDIA hardware, such as modern PCs and mobile devices.
Thanks in advance!