
[Feature request] Implement GPT-JT

Open · pablogranolabar opened this issue 2 years ago · 14 comments

e.g. https://www.together.xyz/blog/releasing-v1-of-gpt-jt-powered-by-open-source-ai

pablogranolabar · Dec 01 '22

Well, this looks like the same model as GPT-J, just with different weights. You should already be able to run it - just convert it to ggml format and use the gpt-j example.

ggerganov · Dec 01 '22

OK, I converted the GPT-JT weights and it generated the model file; however, I'm getting the following error when loading:

gptj_model_load: f16     = 1
gptj_model_load: ggml ctx size = 13334.86 MB
gptj_model_load: memory_size =  1792.00 MB, n_mem = 57344
gptj_model_load: unknown tensor '       W*ƍyC$B' in model file
main: failed to load model from './ggml-model.bin'

any ideas?

pablogranolabar · Dec 04 '22

The convert script assumes that the original weights are FP32 and converts to FP16 when necessary. However, in the new GPT-JT, the weights are in FP16 by default, so the script has to be adjusted.

Try changing the following:

https://github.com/ggerganov/ggml/blob/90ee5c6358a3f33a5615256a0b229aa793ff4b49/examples/gpt-j/convert-h5-to-ggml.py#L118-L121

to:

    # ftype == 0 -> float32, ftype == 1 -> float16
    ftype = 0
    if use_f16:
        if name[-7:] == ".weight" and n_dims == 2:
            print("  Converting to float16")
            data = data.astype(np.float16)
            ftype = 1
        else:
            print("  Converting to float32")
            data = data.astype(np.float32)
            ftype = 0
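
If it's unclear whether a given checkpoint needs this, here is a small sketch to check the source precision before converting (it assumes torch and transformers are installed, and that the Hugging Face repo id below is the right one for GPT-JT; adjust as needed):

    # Print the parameter dtypes of the original checkpoint.
    # "togethercomputer/GPT-JT-6B-v1" is assumed to be the GPT-JT repo id.
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "togethercomputer/GPT-JT-6B-v1", torch_dtype="auto"
    )
    dtypes = {p.dtype for p in model.state_dict().values()}
    print(dtypes)
    print("fp16 checkpoint" if torch.float16 in dtypes else "fp32 checkpoint")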

ggerganov · Dec 04 '22

Just tested and it works: ed09c7190ea26f68faf9adba57feb3c7f404a26d

Also fixed unicode support for the GPT-2 and GPT-J models in general

ggerganov · Dec 04 '22

@pablogranolabar Is there a noticeable difference in quality of output of GPT-JT compared to GPT-J?

trholding · Dec 09 '22

Yes and no. It's getting a lot of conflicting reviews because GPT-JT is fine-tuned for task-oriented work such as chain-of-thought reasoning. So for general causal LM tasks it can be worse in whatever you consider precision and accuracy, but with good prompt engineering those additional task abilities can be teased out during inference. The inevitable "it depends" applies here, depending on the target architecture, model-handler customization, and inference hyperparameters plus prompt design and optimization at inference time.

pablogranolabar · Dec 12 '22

> So for general causal LM tasks it can be worse in whatever you consider precision and accuracy, but with good prompt engineering those additional task abilities can be teased out during inference.

Would be awesome if you could share some sample outputs.

If there is a way to share large models, I'd be willing to convert it to ggml and share it. Maybe IPFS or a torrent; I still have to figure that out, since I have bandwidth caps on my server.

trholding · Dec 12 '22

@pablogranolabar Thanks for sharing the great idea about using GPT-JT

@ggerganov Thanks for the fix

I uploaded the model to Hugging Face so that it's easy for people to get hold of the GPT-JT ggml model variant without eating into your hosting bills:

https://huggingface.co/trholding/GPT-JT-6B-v1-ggml

    cd models
    mkdir gpt-jt-6B ; cd gpt-jt-6B
    wget https://huggingface.co/trholding/GPT-JT-6B-v1-ggml/resolve/main/ggml-model.bin
    cd ../..

    # Run the GPT-JT 6B v1 model (requires 12GB disk space and 16GB CPU RAM)
    ./bin/gpt-j -m models/gpt-jt-6B/ggml-model.bin -p "This is an example"

trholding · Dec 12 '22

Probably best suited for a new issue, but @ggerganov, what do you think about adding 8-bit inference? It would cut model memory consumption by a further 50% with only a nominal loss of precision, and it is now a supported option for transformers with bitsandbytes via Accelerate.
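
For reference, a minimal sketch of the underlying idea (plain per-row absmax quantization in NumPy; the actual bitsandbytes LLM.int8() scheme additionally keeps outlier features in FP16): each weight row is stored as int8 plus one scale, halving memory relative to FP16 at the cost of a small rounding error.

    import numpy as np

    def quantize_rows_absmax(w: np.ndarray):
        """Row-wise absmax quantization to int8 with one FP32 scale per row."""
        scale = np.maximum(np.abs(w).max(axis=1, keepdims=True), 1e-8) / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale.astype(np.float32)

    def dequantize_rows(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
        """Recover an approximate FP32 matrix from int8 values and scales."""
        return q.astype(np.float32) * scale

    # Toy round-trip check (illustration only).
    w = np.random.randn(4, 8).astype(np.float32)
    q, s = quantize_rows_absmax(w)
    print("max abs error:", np.abs(w - dequantize_rows(q, s)).max())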

pablogranolabar · Dec 13 '22

@pablogranolabar Hm, I'm probably missing something - the referenced repo is a CUDA wrapper. I cannot find any information about Apple Accelerate supporting 8-bit precision. Can you provide any reference?

ggerganov · Dec 13 '22

Yeah for example: https://github.com/huggingface/transformers/pull/17901

pablogranolabar · Dec 13 '22

Again, I might be missing something, but it seems this refers to the huggingface/accelerate framework, which is all CUDA and does not apply to Apple Accelerate.

Unless there is a way to use the Apple framework with direct 8-bit precision support, I think 8-bit support will be very low priority for ggml. It would mean implementing the quantization from scratch with NEON, and I'm not really sure how to do that at the moment. Even if I managed it, it would very likely be less performant than the existing mixed FP16/FP32 + Accelerate path, because we would lose the AMX coprocessor benefits that we currently have.

ggerganov · Dec 13 '22

Ah, sorry, I was referring to the Accelerate framework used with PyTorch. Here's a decent write-up of their 8-bit quantization methods: https://huggingface.co/blog/hf-bitsandbytes-integration
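
For context, this is roughly how that integration is used on the PyTorch/CUDA side (a sketch only: it assumes a recent transformers with accelerate and bitsandbytes installed, a CUDA GPU, and that the repo id below is the right GPT-JT checkpoint):

    # 8-bit loading via the transformers + bitsandbytes integration (CUDA only).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "togethercomputer/GPT-JT-6B-v1"  # assumed GPT-JT repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # let accelerate place the layers
        load_in_8bit=True,   # store weights as int8 via bitsandbytes
    )

    inputs = tokenizer("This is an example", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))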

pablogranolabar · Dec 13 '22

@trholding - your model link gives a 404. Is the GPT-JT 6B ggml model still available anywhere?

regstuff · Feb 28 '23