David Hinrichs

Results: 9 comments by David Hinrichs

what is the current state on this? happy to contribute!

had the same issue on web just now, logging out and back in fixed that for me

gonna piggyback off this: I had a very similar error that was also about the model.config ` File "/opt/conda/lib/python3.10/runpy.py", line 187, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error)...

Fails the same way for me on a GCP VM with `docker run --gpus all --shm-size 1g -p 8888:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:2.0 --model-id $model --speculate 3 --num-shard 2` > File...

Oh, thank you so much! Glad the example was of service

Learning Burn right now and I happen to have an M4 on macOS 26, so I tried to reproduce this (from what I can gather) with a simple norm.forward on wgpu with a...

main.rs
```rust
use burn::backend::wgpu::{self, Wgpu, WgpuDevice};
use burn::nn::LayerNormConfig;
use burn::tensor::Tensor;

type Backend = Wgpu;

fn main() {
    let device = WgpuDevice::default();
    // generic argument appears stripped by the page renderer; AutoGraphicsApi assumed here
    wgpu::init_setup::<wgpu::AutoGraphicsApi>(&device, Default::default());
    let norm = LayerNormConfig::new(2).init::<Backend>(&device);
    let input = ...
```
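The preview cuts the snippet off at `let input =`. A minimal sketch of how it might continue inside `fn main` from the block above, assuming a 2-feature input to match `LayerNormConfig::new(2)`; the tensor values and the final print are placeholders, not the original comment's code:

```rust
    // Hypothetical continuation: build a small 2-feature batch and run the
    // norm.forward call the comment refers to.
    let input = Tensor::<Backend, 2>::from_floats([[1.0, 2.0], [3.0, 4.0]], &device);
    let output = norm.forward(input);
    println!("{output}");
}
```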

very cool, just read the muon paper a couple of days ago

ran into the same problem, thanks for posting the fix here!