
Sentencepiece alternative

Open Muennighoff opened this issue 5 years ago • 27 comments

Amazing work! Currently the spmtrain call in build_tokenizer doesn't work, because I think it needs a local installation of sentencepiece to be able to run the command. Is there a specific reason you chose Google's sentencepiece over just:

import os

from tokenizers import SentencePieceBPETokenizer

tokenizer = SentencePieceBPETokenizer()
tokenizer.train(files=paths, vocab_size=3000, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.save(os.path.join(configs.data.path, "spiece.model"))
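
A quick sanity check of the trained tokenizer could look like this (the sample sentence is just a placeholder):

encoding = tokenizer.encode("hello world")
print(encoding.tokens, encoding.ids)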

Also, I was wondering, did fast attention work for you?

Muennighoff avatar Feb 22 '21 18:02 Muennighoff

Thanks for the feedback!

  1. Yes, a local installation of sentencepiece is required. Try pip install -r requirements.txt to update your environment.
  2. It's OK to use tokenizers.SentencePieceBPETokenizer, but both of them require the input sentences to be pretokenized for languages without spaces, such as Chinese or Japanese (see the sketch after this list).
  3. I forgot to add pretokenization in build_tokenizer.py; I will update it later.
  4. Fast attention is still not available in this repo; I hope norabelrose's work lands soon.
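
For languages without spaces, the pretokenization pass just inserts word boundaries before training. A minimal sketch, assuming jieba as the Chinese segmenter (any segmenter would do):

import jieba

def pretokenize(lines):
    # Insert spaces between words so the BPE trainer sees word boundaries.
    return [" ".join(jieba.cut(line)) for line in lines]

print(pretokenize(["今天天气真好"]))  # e.g. ['今天 天气 真好']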

mymusise avatar Feb 23 '21 06:02 mymusise

> Thanks for the feedback!
>
> 1. Yes, a local installation of sentencepiece is required. Try pip install -r requirements.txt to update your environment.
> 2. It's OK to use tokenizers.SentencePieceBPETokenizer, but both of them require the input sentences to be pretokenized for languages without spaces, such as Chinese or Japanese.
> 3. I forgot to add pretokenization in build_tokenizer.py; I will update it later.
> 4. Fast attention is still not available in this repo; I hope norabelrose's work lands soon.

1-3: Great, thanks for the info. 4: Yeah, that would be awesome. I've also been working on it in a fork of transformers, but it's still unable to converge somehow, so I was thinking of using the original tensorflow implementation from g-research as you did. But I guess it didn't work out for you?

Muennighoff avatar Feb 23 '21 06:02 Muennighoff

> so I was thinking of using the original tensorflow implementation from g-research as you did. But I guess it didn't work out for you?

I tried both of them two months ago but still had some problems. In my tests:

  • transformer.PerformerAttention works, but it trains and predicts slower than the normal Transformer.
  • Google's fast-attention trains and predicts faster, but the prediction results are bad. It's confusing that the accuracy plateaus at 66% even when training on a small corpus like dataset/test/raw.txt. Perhaps I built the model in the wrong way.

Hope it helps :) Waiting for your good news :rocket:

mymusise avatar Feb 24 '21 16:02 mymusise

Closing; reopen if needed.

mymusise avatar Mar 08 '21 16:03 mymusise

> Closing; reopen if needed.

BERT & DistilBERT Performers now work with pretrained transformers (https://colab.research.google.com/drive/1A9reiUZbA7DELuJ8keTo73sIQ4dJJVoT#scrollTo=F5k6jxicGf3E)

Still working on autoregressive models like GPT... but it seems the gains won't be that big for them; will keep you updated!

Muennighoff avatar Mar 09 '21 03:03 Muennighoff

Great work! @Muennighoff After trying your code, BERT & DistilBERT Performers do work better than the normal Transformer. I tried making GPT2 work with Performers a few days ago and got some clues. I think the reason the Performer performs well in non-causal models like BERT and T5 but has a negative effect in causal models like GPT may be that the matrix multiplication softmax(Q @ K) @ CausalMask @ V is faster than the loop multiplication in _headwise_causal_numerator.
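
For intuition, here's a toy torch sketch of the two formulations (it drops the softmax/feature-map details and just contrasts the masked matmul with a running-sum loop; all names are illustrative):

import torch

B, H, L, D = 2, 4, 8, 16
q = torch.randn(B, H, L, D)
k = torch.randn(B, H, L, D)
v = torch.randn(B, H, L, D)

# Masked matmul: quadratic in L, but one big batched matmul.
mask = torch.tril(torch.ones(L, L))
out_matmul = ((q @ k.transpose(-1, -2)) * mask) @ v

# Loop formulation: linear memory via a running sum of k_i v_i^T,
# but L sequential steps (this is what the causal numerator loops over).
s = torch.zeros(B, H, D, D)
outs = []
for i in range(L):
    s = s + k[:, :, i, :, None] * v[:, :, i, None, :]
    outs.append(torch.einsum('bhd,bhde->bhe', q[:, :, i], s))
out_loop = torch.stack(outs, dim=2)

assert torch.allclose(out_matmul, out_loop, atol=1e-4)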

mymusise avatar Mar 10 '21 06:03 mymusise

> Great work! @Muennighoff After trying your code, BERT & DistilBERT Performers do work better than the normal Transformer. I tried making GPT2 work with Performers a few days ago and got some clues. I think the reason the Performer performs well in non-causal models like BERT and T5 but has a negative effect in causal models like GPT may be that the matrix multiplication softmax(Q @ K) @ CausalMask @ V is faster than the loop multiplication in _headwise_causal_numerator.

Yeah, the problem seems to be the loop multiplication. We could try the causal multiplication from lucidrains; it shouldn't be too difficult to copy it over.

I'm also still running into shape errors when trying GPT2 Performer - did you get it to work?

Muennighoff avatar Mar 10 '21 06:03 Muennighoff

> Great work! @Muennighoff After trying your code, BERT & DistilBERT Performers do work better than the normal Transformer. I tried making GPT2 work with Performers a few days ago and got some clues. I think the reason the Performer performs well in non-causal models like BERT and T5 but has a negative effect in causal models like GPT may be that the matrix multiplication softmax(Q @ K) @ CausalMask @ V is faster than the loop multiplication in _headwise_causal_numerator.
>
> Yeah, the problem seems to be the loop multiplication. We could try the causal multiplication from lucidrains; it shouldn't be too difficult to copy it over.
>
> I'm also still running into shape errors when trying GPT2 Performer - did you get it to work?

It looks so cool! I'll try it and keep you updated!

mymusise avatar Mar 10 '21 06:03 mymusise

@Muennighoff Hi, I got an OOM error after using the causal_linear_attention_noncuda function from lucidrains. I think it's a bad idea to generate a context matrix with torch.einsum('...nd,...ne->...nde', k, v). If the shape of Q, K, V is [B, H, L, D], the context will have a huge size [B, H, L, D, D]. In many applications, the value of D is in the range 64-128. Maybe this deviates a little from the design of the Performer.

mymusise avatar Mar 10 '21 13:03 mymusise

> @Muennighoff Hi, I got an OOM error after using the causal_linear_attention_noncuda function from lucidrains. I think it's a bad idea to generate a context matrix with torch.einsum('...nd,...ne->...nde', k, v). If the shape of Q, K, V is [B, H, L, D], the context will have a huge size [B, H, L, D, D]. In many applications, the value of D is in the range 64-128. Maybe this deviates a little from the design of the Performer.

Yes, I think that's why he also has the other version above (https://github.com/lucidrains/performer-pytorch/blob/3bff14e39284e7dc82952153099a63dcd3561dc0/performer_pytorch/performer_pytorch.py#L142). Did you try that one as well?

Muennighoff avatar Mar 10 '21 16:03 Muennighoff

> Yes, I think that's why he also has the other version above (https://github.com/lucidrains/performer-pytorch/blob/3bff14e39284e7dc82952153099a63dcd3561dc0/performer_pytorch/performer_pytorch.py#L142). Did you try that one as well?

I didn't try it, but I think rewriting the loop multiplication with the low-level CUDA API may be the way to get the performance back, like fast_transformers.causal_product.CausalDotProduct does.
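
A minimal usage sketch, assuming the pytorch-fast-transformers package is installed (the shapes follow my reading of its convention; worth double-checking against its docs):

import torch
from fast_transformers.causal_product import causal_dot_product

# Q, K: (batch, heads, length, feature_dim); V: (batch, heads, length, value_dim).
Q = torch.rand(2, 4, 128, 32)
K = torch.rand(2, 4, 128, 32)
V = torch.rand(2, 4, 128, 64)

# The C++/CUDA kernel runs the causal running-sum loop for us.
out = causal_dot_product(Q, K, V)
print(out.shape)  # torch.Size([2, 4, 128, 64])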

mymusise avatar Mar 12 '21 08:03 mymusise

Hi all, I'm happy to join this conversation if there's anything to be done.

Is this too inefficient?

import numpy as np

# Toy example with L = 2 tokens: q0, k0 have shape (L, D) = (2, 3),
# v0 has shape (L, M) = (2, 4).
q0 = np.array([[1, 2, 3], [4, 5, 6]])
k0 = np.array([[2, 3, 4], [5, 6, 7]])
v0 = np.array([[3, 4, 5, 6], [7, 8, 9, 10]])

# Reshape so each position holds a row/column vector.
q = np.expand_dims(q0, axis=1)
k = np.expand_dims(k0, axis=2)
v = np.expand_dims(v0, axis=2).transpose(0, 2, 1)

assert q.shape == (2, 1, 3)
assert k.shape == (2, 3, 1)
assert v.shape == (2, 1, 4)

# Per-position outer products k_i v_i^T.
kv = (k @ v)

assert kv.shape == (2, 3, 4)

# Causal prefix sum over the sequence axis.
kv_sum = np.cumsum(kv, axis=0)

assert kv_sum.shape == (2, 3, 4)

# Contract each query with its prefix sum.
qkv = q @ kv_sum

assert qkv.shape == (2, 1, 4)

qkv = np.squeeze(qkv, axis=1)

# Matches masked (lower-triangular) attention without softmax.
assert (qkv == ((q0 @ k0.T) * np.array([[1, 0], [1, 1]])) @ v0).all()
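
The same prefix-sum idea for batched [B, H, L, D] tensors might look like this (a numpy sketch; note it materializes the full [B, H, L, D, M] context tensor, which is exactly the memory cost discussed below):

import numpy as np

B, H, L, D, M = 2, 4, 8, 16, 16
rng = np.random.default_rng(0)
q = rng.standard_normal((B, H, L, D))
k = rng.standard_normal((B, H, L, D))
v = rng.standard_normal((B, H, L, M))

# Outer products k_i v_i^T per position, then a causal prefix sum.
kv = np.einsum('bhld,bhlm->bhldm', k, v)
kv_cum = np.cumsum(kv, axis=2)
out = np.einsum('bhld,bhldm->bhlm', q, kv_cum)

# Reference: masked quadratic attention (no softmax).
scores = np.einsum('bhid,bhjd->bhij', q, k)
ref = np.einsum('bhij,bhjm->bhim', scores * np.tril(np.ones((L, L))), v)
assert np.allclose(out, ref)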

JamesDeAntonis avatar Mar 12 '21 21:03 JamesDeAntonis

Welcome, @JamesDeAntonis!

> Is this too inefficient?

No, I think it can speed up the operation, but it will use more memory. And I'm not sure: does the dimension of q0, (2, 3), mean (L, D)? A sequence with two tokens, where each token has 3 dimensions?

If so, I think the code above is equivalent to torch.einsum('...nd,...ne->...nde', k, v) in a way.

As I commented above, it will use a lot of memory.

> If the shape of Q, K, V is [B, H, L, D], the context will have a huge size [B, H, L, D, D]. In many applications, the value of D is in the range 64-128. Maybe this deviates a little from the design of the Performer.

mymusise avatar Mar 14 '21 09:03 mymusise

> And I'm not sure: does the dimension of q0, (2, 3), mean (L, D)? A sequence with two tokens, where each token has 3 dimensions?

Yes, that's what I meant.

> If the shape of Q, K, V is [B, H, L, D], the context will have a huge size [B, H, L, D, D]. In many applications, the value of D is in the range 64-128. Maybe this deviates a little from the design of the Performer.

Don't you iterate across heads in the code, to save memory? This would mean [B, 1, L, D, D], whereas the regular transformer has [B, H, L, L]. Generally speaking, I'm unsure what exactly you are trying to change.

JamesDeAntonis avatar Mar 15 '21 16:03 JamesDeAntonis

Does this mean CausalDotProduct in fast-attention is what we want?

JamesDeAntonis avatar Mar 15 '21 18:03 JamesDeAntonis

> Don't you iterate across heads in the code, to save memory?

Great, that's a good idea.

> This would mean [B, 1, L, D, D], whereas the regular transformer has [B, H, L, L]

Generally speaking, the value of D is in the range 64-128. If we have L = 2048 and D = 128, then L*D*D = 2048*128*128 is much bigger than the regular transformer's L*L = 2048*2048. But if H = 8, then H*L*L == L*D*D, which means we only start saving memory when L > 2048. And we still have to iterate over each head if we do what you mention above. I'm not sure I've understood your idea.
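
The break-even arithmetic, spelled out (plain Python; iterating heads one at a time keeps a single [L, D, D] context live, versus all [H, L, L] score matrices at once):

L, D, H = 2048, 128, 8
performer_context = L * D * D  # one head's running context
regular_scores = H * L * L     # all heads' attention matrices
print(performer_context, regular_scores)  # 33554432 33554432 -- equal
print(D * D // H)  # 2048: the sequence length where the two sides meet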

> I'm unsure what exactly you are trying to change.

Actually, I have no idea how to speed up the causal multiplication while saving memory. :mask:

mymusise avatar Mar 15 '21 18:03 mymusise

Isn't this only solvable by implementing the for-loop directly in a lower-level language? I imagine this is effectively what fast attention does.

JamesDeAntonis avatar Mar 15 '21 18:03 JamesDeAntonis

> Does this mean CausalDotProduct in fast-attention is what we want?

Yes, but I haven't tested it yet, because I can't find a tensorflow version and I haven't mastered C++. :joy:

mymusise avatar Mar 15 '21 18:03 mymusise

> I imagine this is effectively what fast attention does

Yes, I have the same guess.

mymusise avatar Mar 15 '21 18:03 mymusise

Can we talk over a call? I just emailed you

JamesDeAntonis avatar Mar 15 '21 18:03 JamesDeAntonis

I was just reading the fast attention code, and I think it does exactly what we want. Typing is really the only reason the C++ code is torch-specific; otherwise, all the logic should transfer directly to TF. My issue is that I don't know TF. I think we only need to change the #include line at the top of the C++ file, the typing in the C++ file, and the wrapper code in __init__.py, and then it will be TF compatible.
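
On the Python side, the TF wrapper might be as thin as this (a sketch with made-up file and op names, assuming the C++ kernel gets rebuilt as a TF custom op):

import tensorflow as tf

# Hypothetical: the rebuilt kernel compiled into a shared library.
causal_module = tf.load_op_library('./causal_dot_product_tf.so')

def causal_dot_product(q, k, v):
    # Hypothetical op name; it would be whatever REGISTER_OP declares in C++.
    return causal_module.causal_dot_product(q, k, v)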

JamesDeAntonis avatar Mar 15 '21 22:03 JamesDeAntonis

@JamesDeAntonis Cool! I haven't read the fast attention code carefully, but it sounds like it's worth trying.

mymusise avatar Mar 16 '21 05:03 mymusise

I don't think I'm the guy to do this (I don't use C++ or tensorflow), but I think this is a pretty easy problem for someone who at least knows tensorflow.

I started an implementation here. @mymusise, will you take a look?

JamesDeAntonis avatar Mar 16 '21 17:03 JamesDeAntonis

Cool! You are so efficient! @JamesDeAntonis Actually, before converting it to a TF version, I'd prefer to test fast-attention.CausalDotProduct in PyTorch first. At the same time, I will try to convert it based on your implementation.

Muennighoff seems to be working on the masking problem of the T5 decoder. @Muennighoff Did you try fast-attention.CausalDotProduct? Does it work?

mymusise avatar Mar 17 '21 04:03 mymusise

@ice-americano (who I work with) ran it, and it seemed to work to some degree. Compared to regular attention, he was getting significant improvements in memory usage, but a noticeable slowdown. We're not sure why the slowdown is occurring.

JamesDeAntonis avatar Mar 17 '21 15:03 JamesDeAntonis

@mymusise I ran some experiments with fast-attention.CausalDotProduct on the SST notebook (https://colab.research.google.com/drive/1A9reiUZbA7DELuJ8keTo73sIQ4dJJVoT#scrollTo=f_R7D-mZyRXm). The CausalDotProduct itself seems to work; however, I get a shape mismatch in the denominator/normalization, since the sequence length of the keys may not be the same as that of the queries in a T5 enc-dec: the keys come from the encoder, but the queries from the decoder input.

If I force the shapes to fit or just skip the normalization, it runs, but it always predicts the same value, giving 50% accuracy, whereas the equivalent transformer converges (check it under t5/pytorch/encdec in the notebook).

This might be unrelated to the CausalDotProduct, though, and could e.g. be caused by the attention mask or something else. @JamesDeAntonis, did @ice-americano get it to converge?
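
For reference, the normalizer in linear attention sums the feature-mapped keys over the key axis, so it should yield one scalar per query position even when L_q != L_k. A toy sketch of the shapes (names illustrative):

import torch

q_prime = torch.rand(2, 4, 10, 32)  # decoder queries: L_q = 10
k_prime = torch.rand(2, 4, 7, 32)   # encoder keys:    L_k = 7

# D_i = q'_i . sum_j k'_j -- the sum runs over the KEY length.
denom = torch.einsum('bhld,bhd->bhl', q_prime, k_prime.sum(dim=2))
print(denom.shape)  # torch.Size([2, 4, 10])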

Muennighoff avatar Mar 18 '21 06:03 Muennighoff

Yes, we got it to converge.

On our task, relative to regular attention, it was (i) a step worse in terms of loss, (ii) noticeably lower in memory utilization on long sequences, and (iii) slower (this last one is the most perplexing to us).

JamesDeAntonis avatar Mar 18 '21 16:03 JamesDeAntonis