Enrico Shippole

155 comments by Enrico Shippole

Closing this until I can figure out the formatting errors. I am going to open a new one that handles the linting issues and adds support for all LLM providers.

Can you try setting max iterations to something like 2? `max_iterations=2`

Hi @biofoolgreen , Package management with pip will be implemented in the future. See the TODO and this [PR](https://github.com/conceptofmind/LaMDA-pytorch/pull/1). Best, Enrico

Hi @biofoolgreen , I am not currently receiving this error on my end when running a few test cases. I will work through a minimal reproducible example to see if...

I set up a minimal reproducible example in a Jupyter Notebook and it seems to be working fine. I will have to do a further review.

```python
tokenizer = GPT2Tokenizer(vocab_file='/token/vocab.json', merges_file='/token/merges.txt')...
```

Hi @biofoolgreen , I rebuilt the data loader to work locally: https://github.com/conceptofmind/LaMDA-pytorch/blob/main/lamda_pytorch/build_dataloader.py A few things you are going to have to take into consideration if you are going to use...
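Not the repository's actual loader (that lives in `build_dataloader.py` and uses PyTorch), but a minimal plain-Python sketch of the underlying batching pattern: chunk a flat token stream into fixed-length windows and group those into batches, dropping any trailing remainder. All names here are illustrative, not from the repo.

```python
from typing import Iterator, List, Sequence

def batch_iterator(token_ids: Sequence[int], seq_len: int,
                   batch_size: int) -> Iterator[List[List[int]]]:
    """Yield batches of fixed-length sequences from a flat token stream.

    A simplified stand-in for a language-model data loader: it slices the
    tokenized corpus into non-overlapping `seq_len` windows and groups them
    into batches of `batch_size`, dropping any incomplete trailing batch.
    """
    batch: List[List[int]] = []
    for start in range(0, len(token_ids) - seq_len + 1, seq_len):
        batch.append(list(token_ids[start:start + seq_len]))
        if len(batch) == batch_size:
            yield batch
            batch = []

# Example: 10 tokens, windows of 4, batches of 2 -> one full batch
batches = list(batch_iterator(list(range(10)), seq_len=4, batch_size=2))
print(batches)  # [[[0, 1, 2, 3], [4, 5, 6, 7]]]
```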

Hi @msaidbilgehan , What version of python are you using? I have been reading more into the error and it seems that typing annotations with dataclasses were changed later in...
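One likely culprit for version-dependent dataclass/typing errors is newer annotation syntax evaluated at runtime on an older interpreter. A minimal sketch (hypothetical config class, not from the repo) of the usual workaround, PEP 563's postponed evaluation of annotations:

```python
# With this future import, all annotations are stored as strings and never
# evaluated at class-definition time, so newer typing syntax (e.g. the
# built-in generic `list[int]`, which needs Python 3.9+ at runtime) can
# still be imported on Python 3.7/3.8.
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class TrainConfig:
    # Hypothetical fields for illustration only.
    layer_sizes: list[int] = field(default_factory=lambda: [512, 512])
    lr: float = 3e-4

cfg = TrainConfig()
print(cfg.lr)  # 0.0003
```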

> @conceptofmind i believe you said you were working on this?

Yes, actively working on this with a group of peers. We have successfully deployed inference with the 65B models....

Would have to think about how to handle the sizes of different models, though. I could see this becoming an issue for the end user...

4-bit may be plausible. 8-bit should be fine. The weights are already in fp16, from my understanding. I would have to evaluate this further.
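A back-of-the-envelope estimate (my own arithmetic, not from the thread) of weight memory alone shows why 8-bit and 4-bit matter for the 65B model discussed above:

```python
def model_memory_gib(n_params: float, bits_per_weight: int) -> float:
    """Rough weight-only memory estimate: params * bits / 8 bytes, in GiB.

    Ignores activations, KV cache, and optimizer state, so this is a
    lower bound on what inference actually needs.
    """
    return n_params * bits_per_weight / 8 / 1024**3

n = 65e9  # 65B parameters
for bits, label in [(16, "fp16"), (8, "int8"), (4, "int4")]:
    print(f"{label}: ~{model_memory_gib(n, bits):.0f} GiB")
# fp16: ~121 GiB, int8: ~61 GiB, int4: ~30 GiB
```

Halving the bits halves the footprint, which is what moves 65B from multi-GPU-only (fp16) toward a single large accelerator (int8/int4).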