Fabian
It seems the meanings of `SYSCTL_POWER_V33` and `SYSCTL_POWER_V18` are reversed. If you set `SYSCTL_POWER_V18`, you get 3.3V, and if you set `SYSCTL_POWER_V33`, you get 1.8V. EDIT: Forget it, my circuit...
I investigated a bit further and I'm pretty sure the I/O voltage is not field-programmable. You "choose" the I/O voltage by supplying the desired voltage to the corresponding VCC pins....
I had the same issue. It would be cool to merge this (after fixing indentation).
I could reproduce the issue with a minimal huggingface/transformers example, so I created an issue there: https://github.com/huggingface/transformers/issues/22550. Leaving this open for tracking. It seems to work on a MacBook with...
Thanks for the quick answer! Not holding my breath for a fix, though. It's one of 10K+ open issues in PyTorch...
Actually running LLaMA was my goal; I was just trying something simpler first. Now I tried LLaMA using the following:

```python
from transformers import AutoTokenizer, LlamaForCausalLM, pipeline

model = LlamaForCausalLM.from_pretrained("/path/to/models/llama-7b/")
```
...
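The comment is cut off above. For context, here is a minimal sketch of how a script with that set of imports typically continues; the model path, prompt, and generation length are placeholders I'm assuming for illustration, not taken from the original comment.

```python
from transformers import AutoTokenizer, LlamaForCausalLM, pipeline

# Placeholder path; point this at a local LLaMA checkpoint (assumption).
model_path = "/path/to/models/llama-7b/"

model = LlamaForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Wrap model and tokenizer in a text-generation pipeline.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Placeholder prompt and max_new_tokens, chosen only for illustration.
print(generator("Hello, my name is", max_new_tokens=20)[0]["generated_text"])
```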
I have the same problem on my MacBook M1 14". Every time I switch models, the entire `hf-diffusion-models` directory disappears from `~/Library/Application Support` and models are re-downloaded. The very same...
I have the same problem. It works at first, but when the SSH connection drops and is reestablished, it keeps complaining about `/usr/bin/cmake` being a bad executable. `/usr/bin/cmake` does indeed...