MittelmanDaniel


@bryce13950 I received the standard CUDA OutOfMemoryError. For context, I have an RTX 3060 Laptop GPU.

I am running this script:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from transformer_lens import HookedTransformer

model_name = "google/gemma-2-2b-it"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
device = torch.device("cuda"...
```

I loaded the tokenizer externally just to show that I was only changing that one line. I was using transformer_lens 2.9.0.
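For scale, a rough back-of-the-envelope estimate of the weight memory alone (assuming roughly 2.6B parameters for gemma-2-2b-it, an approximate figure, and a 6 GB laptop GPU) suggests why full-precision loading runs out of memory, and why a half-precision load might fit:

```python
# Rough sketch: VRAM needed just to hold the model weights.
# ~2.6e9 parameters for gemma-2-2b-it is an assumed approximate figure.
params = 2.6e9
bytes_fp32 = 4  # float32: 4 bytes per parameter
bytes_fp16 = 2  # float16/bfloat16: 2 bytes per parameter

gb = 1024 ** 3
fp32_gb = params * bytes_fp32 / gb
fp16_gb = params * bytes_fp16 / gb

print(f"fp32 weights: {fp32_gb:.1f} GB")  # ~9.7 GB, over a 6 GB card
print(f"fp16 weights: {fp16_gb:.1f} GB")  # ~4.8 GB, potentially within 6 GB
```

This ignores activations, the KV cache, and any extra copies made while converting the checkpoint, so actual peak usage is higher.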