Guillaume Lagrange

Results 243 comments of Guillaume Lagrange

With the discussions we had on Discord, it wasn't too difficult to figure out an easy way to draft this :) I understand your point of view a bit better...

Ok, perhaps I misunderstood your initial request 🙂 I thought you wanted something a bit in between, but in this case it's simply differentiating high-level types.

Closing this draft as it's out of date. If anyone feels strongly about this, feel free to fork with your suggestions.

Yep, looks like the checks are also failing locally for a single broadcast test with wgpu. I opened the PR before they completed; I'll check it out. /edit: Uh, looks...

Once the linked cubecl PR is merged, I'll update the dependency and the tests in this PR should pass.

Thanks for reporting this with relevant info! Related: #715. Many things have changed since the discussion in the linked issue, but this was never officially supported. We're seeing more RL...

I forgot we also have an [mpsc channel implementation](https://github.com/tracel-ai/burn/blob/main/crates/burn-autodiff/src/runtime/mspc.rs) under the `"async"` feature flag in `burn-autodiff`, though it's not really used much. The channel doesn't block on sends, so maybe...
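
For illustration only, here is a minimal sketch of the non-blocking send semantics using `std::sync::mpsc` (this is not the `burn-autodiff` implementation linked above): the standard channel is unbounded, so `send` returns immediately and only errors once the receiver has been dropped.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<u32>();

    // The sending side returns immediately, even if nothing is consuming yet.
    let producer = thread::spawn(move || {
        for i in 0..4 {
            tx.send(i).expect("receiver dropped");
        }
    });

    producer.join().unwrap();

    // The receiver drains whatever was queued; `recv` returns Err once the
    // queue is empty and all senders have been dropped.
    while let Ok(value) = rx.recv() {
        println!("got {value}");
    }
}
```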

Yeah, I figured you'd probably face the same issue at the server level... 😅

That's because `keepdims` is set to false, and that's not currently supported when converting the model to Burn code. https://github.com/tracel-ai/burn/blob/dd628ec91c8dafa7f5767d85f822d46dec8f4707/crates/burn-import/src/onnx/op_configuration.rs#L1465-L1468 In Burn, `tensor.max_dim(dim)` returns a tensor of the same rank...
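
For context, a rough sketch of the difference (assuming the current `Tensor` API with `max_dim` and `squeeze`; exact signatures may vary between versions):

```rust
use burn::tensor::{backend::Backend, Tensor};

// `max_dim` keeps the reduced dimension with size 1, i.e. the ONNX
// ReduceMax(keepdims=1) behaviour: the output rank equals the input rank.
fn reduce_max_keepdims<B: Backend>(x: Tensor<B, 3>, dim: usize) -> Tensor<B, 3> {
    x.max_dim(dim)
}

// Emulating keepdims=0 would need an extra squeeze of that size-1 dimension,
// which changes the rank of the output tensor.
fn reduce_max_no_keepdims<B: Backend>(x: Tensor<B, 3>, dim: usize) -> Tensor<B, 2> {
    x.max_dim(dim).squeeze::<2>(dim)
}
```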

That document was [moved to the contributor book](https://github.com/tracel-ai/burn/blob/main/contributor-book/src/guides/onnx-to-burn-conversion-tool.md), but I don't think it would be relevant in that context 😅