
Feature Request: GLM-4 9B Support

Open arch-btw opened this issue 1 year ago • 4 comments

Prerequisites

  • [X] I am running the latest code. Mention the version if possible as well.
  • [X] I carefully followed the README.md.
  • [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [X] I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

It would be really cool to have support for these models, which were released today. They have some very impressive benchmarks. I've also been trying the model out in Hugging Face Spaces myself and noticed that it speaks many languages fluently and is knowledgeable on many topics. Thank you for your time.

Here are the download links:

Here is the English README: README_en.md

Motivation

The motivation for this feature can be found in some of the technical highlights for this model:

  • These models were trained on 10T tokens.
  • GLM-4-9B-Chat models have 9B parameters.
  • The GLM-4-9B-Chat-1M model supports a 1M context length and scored 100% on the needle-in-a-haystack challenge.
  • GLM-4-9B models support 26 languages.
  • Has a vision model (glm-4v-9b).
  • Early impressions are very positive.

Here are some of the results:

Needle challenge:

[figure: eval_needle]

LongBench:

[figure: longbench]

Possible Implementation

We might be able to use some of the code from: https://github.com/ggerganov/llama.cpp/pull/6999.

There is also chatglm.cpp but it doesn't support GLM-4.

arch-btw avatar Jun 05 '24 20:06 arch-btw

You can try chatllm.cpp, which supports GLM-4.

foldl avatar Jun 07 '24 05:06 foldl

You can try chatllm.cpp, which supports GLM-4.

Can confirm this works and is cool 😎

It would be good to get this functionality into llama.cpp too, if only for the GPU acceleration.

jamfor999 avatar Jun 08 '24 10:06 jamfor999

You can try chatllm.cpp, which supports GLM-4.

Well, chatllm.cpp is CPU-only. Why not try the transformers version in fp16?
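
Something along these lines should work as an untested sketch (the THUDM/glm-4-9b-chat repo id and the trust_remote_code flag are taken from the model card; treat the exact arguments as assumptions):

```python
# Minimal sketch: run GLM-4-9B-Chat in bf16 with Hugging Face transformers.
# Assumes the repo id "THUDM/glm-4-9b-chat" and that the custom modeling
# code requires trust_remote_code=True, as described on the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 9B weights around 18 GB
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "What is GLM-4?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```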

GPU support for GLM-4 in llama.cpp would still be great; quantized versions would then follow, which would be even more convenient to run.

GLM-4 looks comparable to or better than Llama 3, maybe even best-in-class for now.

ELigoP avatar Jun 08 '24 19:06 ELigoP

We might have this feature soon: https://github.com/ggerganov/llama.cpp/pull/8031

matteoserva avatar Jun 20 '24 10:06 matteoserva

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Aug 05 '24 01:08 github-actions[bot]

Any updates?

yukiarimo avatar Sep 01 '24 21:09 yukiarimo

I saw it's merged, but does it work with llama-cpp-python, and how do I get the vision model working in GGUF?

yukiarimo avatar Sep 01 '24 21:09 yukiarimo
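
For reference, once llama-cpp-python ships a build against a llama.cpp version that includes the merged GLM-4 support, loading a converted GGUF should follow the usual pattern. A minimal, untested sketch (the model path is a placeholder, and this covers the text model only, not glm-4v):

```python
# Minimal sketch: load a GLM-4 GGUF with llama-cpp-python.
# Assumes llama-cpp-python is built against a llama.cpp version that includes
# the merged GLM-4 support; "glm-4-9b-chat-Q4_K_M.gguf" is a placeholder path.
from llama_cpp import Llama

llm = Llama(
    model_path="glm-4-9b-chat-Q4_K_M.gguf",
    n_ctx=8192,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the GLM-4 release."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```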