Simon Hällqvist

4 comments by Simon Hällqvist

I'm interested in fine-tuning LLaMA to produce text embeddings. Does anyone have tips for doing this with the LLaMA architecture? Can I just add a pooling layer at...
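To make the question concrete, here is a minimal sketch of the kind of pooling I have in mind: mean pooling over the last hidden states of a Hugging Face LLaMA checkpoint, masking out padding. The checkpoint name is a placeholder, not a recommendation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; substitute whichever LLaMA weights you have access to.
model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# LLaMA ships without a pad token, so reuse EOS for batch padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

def embed(texts):
    """Mean-pool the last hidden states, ignoring padding positions."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    hidden = out.last_hidden_state                   # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)     # (batch, seq, 1)
    summed = (hidden * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1)
    return summed / counts                           # one vector per input text

vectors = embed(["a sentence to embed", "another sentence"])
```

Whether the pooled vectors are useful without contrastive finetuning is exactly what I'm unsure about.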

Hi @shamanez, I'm very interested in RETRO. How far along are you with an implementation?

I'll give this a go. Just so I understand the first part correctly:

- I should be able to do (full) finetuning with an existing adapter model as `base_model`...
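If I've understood the workflow correctly, it would look roughly like the sketch below, using peft's `merge_and_unload` to fold the adapter into the base weights before full finetuning. The model and adapter names are placeholders, not real repos.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Placeholder names; substitute your actual base checkpoint and adapter repo.
base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model = PeftModel.from_pretrained(base, "my-user/my-lora-adapter")

# Fold the LoRA deltas into the base weights; the result is a plain
# transformers model with every parameter trainable again.
model = model.merge_and_unload()
for p in model.parameters():
    p.requires_grad = True

# Continue with ordinary full finetuning from here (e.g. transformers.Trainer).
```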

Should the tokenizer be retrained with the new token added to the vocab, or is it enough to just add my custom token to the fine-tuning data (e.g. prepend text...
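For context, the prepend-only variant I'm asking about would rely on the tokenizer splitting the custom token into existing subword pieces. The alternative I know of is registering the token explicitly and resizing the embeddings, roughly like this (checkpoint and token name are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint and token name.
model_name = "huggyllama/llama-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Register the token so it maps to a single new id instead of being
# split into subword pieces, then grow the embedding matrix to match.
num_added = tokenizer.add_tokens(["<my-custom-token>"])
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))

# The new embedding row starts randomly initialized and is learned during
# finetuning from the examples that contain the token.
ids = tokenizer("<my-custom-token> the rest of the example", return_tensors="pt")
```

Either way the tokenizer itself isn't retrained; the open question is whether a single dedicated id is worth the extra embedding row.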