Clément Dumas
**Describe the bug**
No output when a segfault occurs during a test.

**Expected behavior**
Output.

**Reproduction steps**
```cpp
TEST_CASE("test") {
  std::cout
```

- OS: **Linux**
- Compiler+version: **GCC v13.2**
- Catch version: **v3.4.0**

**Additional...
Hello, the CodinGame game engine, which allows people to create games for its platform, uses your sanitizer on the game documentation. When I tried to implement overflow on tables,...
**Is your feature request related to a problem? Please describe.** Sometimes a word can be translated into several different words depending on the context. **Describe the solution you'd like**...
- The output format of this one seems weird: https://github.com/epfl-dlab/llm-latent-language/blob/1be3cd6eaf14c7408e8f03703d1c1905a6b00c44/Translation.ipynb#L242 E.g. some columns are called "de" but contain French words.
- This line should use `lang_latent` and not `'en'`: https://github.com/epfl-dlab/llm-latent-language/blob/1be3cd6eaf14c7408e8f03703d1c1905a6b00c44/Translation.ipynb#L316
It's really hard to adjust the size of the top level on mobile, as you can see in [this video](https://youtu.be/LFmhSkTSp08)
```py
nn_model = LanguageModel("meta-llama/Llama-2-70b-hf", token=token)
```

Fails because `token` is not passed to the tokenizer initialization.
Reproduced on my local setup and on Colab

```py
!pip install git+https://github.com/EleutherAI/elk/
import elk
```

```
----> 2 import elk
[/usr/local/lib/python3.10/dist-packages/elk/__init__.py](https://localhost:8080/#) in
----> 1 from .evaluation import Eval
      2 from...
```
Using `.token[0]` returns the padding token

```py
from nnsight import LanguageModel

model = LanguageModel("gpt2", device_map="cpu")
probs = model.trace("a zd zdb", trace=False).logits
with model.trace(["ab dfez zd", "a", "b"]):
    inp = model.input.save()...
```
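The likely mechanism: when a batch mixes prompts of different lengths, tokenizers commonly left-pad the shorter sequences so every row has the same length, which puts a pad token at index 0 of the short prompts. A minimal pure-Python sketch of that behavior (toy token IDs and a hypothetical `left_pad` helper, not nnsight's actual tokenizer):

```python
# Toy illustration of left-padding: shorter sequences get pad tokens
# prepended so all rows have equal length, so index 0 of a short row
# is the pad token rather than the first real token.
PAD_ID = 0

def left_pad(batch, pad_id=PAD_ID):
    """Left-pad a batch of token-ID lists to the longest row's length."""
    max_len = max(len(seq) for seq in batch)
    return [[pad_id] * (max_len - len(seq)) + seq for seq in batch]

batch = [[5, 6, 7], [9], [4]]      # e.g. ["ab dfez zd", "a", "b"]
padded = left_pad(batch)
print(padded)                       # [[5, 6, 7], [0, 0, 9], [0, 0, 4]]
print([row[0] for row in padded])   # [5, 0, 0] -> index 0 is PAD for short rows
```

This is why indexing position 0 across a left-padded batch returns the padding token for all but the longest prompt.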
This kind of silent failure can make nnsight very hard to debug:

```py
import torch as th
from nnsight import LanguageModel

nn_model = LanguageModel("gpt2", device_map="cpu")
# The patching fails silently...
```