llama.cpp
Feature Request: GLM-4 9B Support
Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the README.md.
- [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [X] I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
It would be really cool to have support for these models, which were released today. They post some very impressive benchmark results. I've also been trying the model out in Hugging Face Spaces myself and noticed it speaks many languages fluently and is knowledgeable on many topics. Thank you for your time.
Here are the download links:
Here is the English README: README_en.md
Motivation
The motivation for this feature is found in some of the technical highlights for these models:
- These models were trained on 10T tokens.
- GLM-4-9B-Chat models have 9B parameters.
- The GLM-4-9B-Chat-1M model supports a 1M context length and scored 100% on the needle-in-a-haystack challenge.
- GLM-4-9B models support 26 languages.
- A vision model (glm-4v-9b) is also available.
- Early impressions are very positive.
Here are some of the results (images from the release):
- Needle-in-a-haystack challenge results (image)
- LongBench results (image)
Possible Implementation
We might be able to use some of the code from: https://github.com/ggerganov/llama.cpp/pull/6999.
There is also chatglm.cpp, but it doesn't support GLM-4.
You can try chatllm.cpp, which supports GLM-4.
> You can try chatllm.cpp, which supports GLM-4.
Can confirm this works and is cool 😎
It would be good to get this functionality into llama.cpp too, if only for the GPU acceleration.
> You can try chatllm.cpp, which supports GLM-4.
Well, chatllm.cpp is CPU-only. Why not try the transformers version in fp16?
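For anyone who wants to try that route, here is a minimal sketch with transformers. The model id `THUDM/glm-4-9b-chat` is taken from the Hugging Face release; the prompt and generation settings are just illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat"  # chat model id from the Hugging Face release

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # fp16, as suggested above
    device_map="auto",           # place layers on the available GPU(s)
    trust_remote_code=True,      # GLM-4 ships custom modeling code
).eval()

# Build a chat prompt using the model's own chat template.
messages = [{"role": "user", "content": "Hello! How many languages do you speak?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```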
llama.cpp GPU support for GLM-4 would be great; quantized versions would then appear, which would be even easier to run.
GLM-4 looks comparable to or better than Llama 3, maybe even best-in-class for now.
We might have this feature soon: https://github.com/ggerganov/llama.cpp/pull/8031
This issue was closed because it has been inactive for 14 days since being marked as stale.
Any updates?
I saw it's merged, but does it work with llama-cpp-python, and how do I get the vision model working in GGUF?
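For the text model, loading a GGUF through llama-cpp-python should look roughly like this once the bindings pick up the merged support. This is only a sketch: the quantized file name is hypothetical, and GPU offload assumes a GPU-enabled build:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="glm-4-9b-chat.Q4_K_M.gguf",  # hypothetical quantized GGUF file
    n_ctx=8192,        # context window; the 1M variant would need far more
    n_gpu_layers=-1,   # offload all layers to GPU (requires a GPU-enabled build)
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```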