Settordici
If you are using pins other than the default ones, take a look at issue #127. I have an ESP32 too, and while receiving is mostly fine, transmitting is...
`The architecture "MixtralForCausalLM" is not supported yet.` You can see the supported architectures [here](https://hub.docker.com/r/ollama/quantize).
> @Settordici which then raises the question of how the mixtral, dolphin-mixtral and notux models in ollama.ai/library were converted and quantized. The original models are all MixtralForCausalLM.
>
> @Jas0nxlee...