Andrew Lapp

Results 205 comments of Andrew Lapp

I got 13434 tweets

```shell
twitterscraper "@pnbindia" --lang en --output test55.json -bd 2019-03-31 -ed 2020-04-01
INFO: {'User-Agent': 'Mozilla/5.0 (compatible, MSIE 11, Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko', 'X-Requested-With': 'XMLHttpRequest'}...
```

Thanks. I have a second question. I see `_providers.py` is autogenerated. Would it be within the scope of this project to create a new module to allow for the inclusion...

I subclassed the `InteractiveParser`. These changes resulted in a ~10x faster `accepts()`. Pretty hacky, but might help.

```python
class FastParserState(ParserState):
    copy_memo = {}

    def __copy__(self):
        new_value_stack = []
        for value in...
```
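The pattern in the truncated snippet above can be sketched standalone. The `ParserState` stub here is an assumption standing in for lark's real class; the point is memoizing copies of value-stack entries by object identity, so repeated `__copy__` calls stop deep-copying the same objects:

```python
import copy

class ParserState:
    """Stub standing in for lark's ParserState (assumption for this sketch)."""
    def __init__(self, value_stack):
        self.value_stack = value_stack

    def __copy__(self):
        # Baseline behaviour: every copy deep-copies the whole stack.
        return type(self)(copy.deepcopy(self.value_stack))

class FastParserState(ParserState):
    # Cache copies keyed by object identity. This is only safe if entries
    # aren't mutated after being copied -- the "hacky" part.
    copy_memo = {}

    def __copy__(self):
        new_value_stack = []
        for value in self.value_stack:
            key = id(value)
            if key not in self.copy_memo:
                self.copy_memo[key] = copy.deepcopy(value)
            new_value_stack.append(self.copy_memo[key])
        return type(self)(new_value_stack)

state = FastParserState([[1, 2], [3, 4]])
first = copy.copy(state)
second = copy.copy(state)
# Both copies share the memoized entries instead of re-deep-copying them.
print(first.value_stack[0] is second.value_stack[0])
```

Sharing cached copies is what makes repeated `accepts()` probes cheap; the trade-off is that the memo grows unboundedly and assumes the cached entries are effectively immutable.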

@mano3-1, what does the traceback say if you run

```python
with torch.autograd.detect_anomaly():
    trainer.train()
```
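To show what that buys you, here is a minimal sketch with a toy tensor instead of the user's trainer (the tensors are assumptions): `detect_anomaly` makes `backward()` raise as soon as a backward function produces NaN, with a traceback pointing at the forward op responsible.

```python
import torch

x = torch.zeros(1, requires_grad=True)
with torch.autograd.detect_anomaly():
    # 0 * log(0) produces nan in the forward pass; the backward pass
    # through log then divides by zero and trips the anomaly check.
    loss = (x * torch.log(x)).sum()
    try:
        loss.backward()
        raised = False
    except RuntimeError:
        # e.g. "Function 'LogBackward0' returned nan values in its 0th output."
        raised = True
print("anomaly raised:", raised)
```

Note that anomaly mode slows training down considerably, so it's a debugging switch, not something to leave on.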

I'm running into issues with back-propagation in unsloth as well, though I'm using a custom loss function and Mistral instead of llama-3. It works fine with `AutoModelForCausalLM` & `get_peft_model`, but...

I'm not sure. Your backward step fails at a different layer of the model than mine; the only thing our scripts have in common is unsloth. How...

A requirements.txt isn't necessarily the same as `pip freeze` output: `pip3 freeze` lists the exact installed version of every package.
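For a reproducible environment dump (the output filename here is just an example):

```shell
# Capture the exact version of every installed package as name==version,
# something a hand-written requirements.txt usually doesn't guarantee.
pip3 freeze > frozen-requirements.txt
# Spot-check the pinned format:
head -n 3 frozen-requirements.txt
```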

Thanks @danielhanchen. Here is my reproduction script as well, run on a 4090 with CUDA 12.1. @mano3-1 has a standard SFT script, so his is probably worth looking at first....

Sorry about my confusion, @mano3-1. I reviewed and compared our installed packages. Nothing noteworthy in the shared dependencies, other than that the issue is perhaps related to the use of xformers....

I have a similar issue in Docker on some machines, using `local/llama.cpp:full-cuda`. After an `strace`, it turned out `/server` couldn't find `libcublas.so.11`, even though I have it at `/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcublas.so.11`. Perhaps...
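The workaround I'd try first is pointing the dynamic loader at that directory. This is a sketch: the CUDA path is the one from my container, and you'd adjust it to wherever `libcublas.so.11` actually lives in your image.

```shell
# Prepend the CUDA library directory so the loader can resolve
# libcublas.so.11 when /server starts (path is from my setup):
export LD_LIBRARY_PATH="/usr/local/cuda-11.7/targets/x86_64-linux/lib:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

A more permanent fix would be dropping the directory into `/etc/ld.so.conf.d/` and running `ldconfig` in the image build.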