Results: 243 comments by Guillaume Lagrange

Thanks for breaking down the pros and cons! I think this is a good place to start if you want to tackle this change 🙂

> Is it correct to assume "Sequential LR schedulers" is just changing the scheduler used depending on the epoch?

Yeah! It's just a way to combine different LR schedules based...
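A sequential scheduler of that sort can be sketched in plain Rust. The trait and type names below (`LrScheduler`, `SequentialLrScheduler`, `ConstantLr`) are hypothetical stand-ins for illustration, not Burn's actual API; this sketch also restarts the second scheduler at its own epoch 0 when the milestone is reached, which is one possible answer to the follow-up question about reinitializing the learning rate.

```rust
// Hypothetical minimal sketch; names are illustrative, not Burn's API.
trait LrScheduler {
    fn step(&mut self, epoch: usize) -> f64;
}

struct ConstantLr(f64);

impl LrScheduler for ConstantLr {
    fn step(&mut self, _epoch: usize) -> f64 {
        self.0
    }
}

/// Uses `first` until `milestone`, then switches to `second`.
struct SequentialLrScheduler<A, B> {
    first: A,
    second: B,
    milestone: usize,
}

impl<A: LrScheduler, B: LrScheduler> LrScheduler for SequentialLrScheduler<A, B> {
    fn step(&mut self, epoch: usize) -> f64 {
        if epoch < self.milestone {
            self.first.step(epoch)
        } else {
            // Offset the epoch so the second scheduler starts fresh
            // from its own epoch 0 (i.e. its LR is "reinitialized").
            self.second.step(epoch - self.milestone)
        }
    }
}

fn main() {
    let mut sched = SequentialLrScheduler {
        first: ConstantLr(1e-3),
        second: ConstantLr(1e-4),
        milestone: 5,
    };
    for epoch in 0..8 {
        println!("epoch {epoch}: lr = {}", sched.step(epoch));
    }
}
```

Passing the current learning rate forward instead of resetting would just mean constructing the second scheduler from the first one's last value at the switch point.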

> Gotcha gotcha thank you for the clarification. One more thing would we want to reinitialize the learning rate when the scheduler is changed or just pass the current learning...

I'm sorry, there is no roadmap for this feature at this time. We have a lot of other things we're working on improving and adding, like quantization. Would love to...

Ahhh sorry, I completely overlooked that part of your comment. The sync when retrieving the data from wgpu to get it to ndarray is unavoidable at this time, but just...

The `Tensor` struct is generic on the backend, so I'm not sure how this would fit for ndarray methods. Maybe under a new flag for ndarray (since the `burn-tensor` crate...
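To illustrate the design question, here is a minimal sketch of a tensor type generic over a backend, with backend-specific methods added in an inherent impl on the concrete backend type. All names (`Backend`, `NdArrayBackend`, `as_slice`) are hypothetical simplifications, not the real `burn-tensor` API.

```rust
// Hypothetical sketch: a tensor generic over its backend.
trait Backend {
    type Primitive;
}

struct NdArrayBackend;

impl Backend for NdArrayBackend {
    // A plain Vec stands in for the real ndarray storage.
    type Primitive = Vec<f64>;
}

struct Tensor<B: Backend> {
    primitive: B::Primitive,
}

// In practice this impl could sit behind `#[cfg(feature = "ndarray")]`,
// so the extra methods only exist when the ndarray flag is enabled.
impl Tensor<NdArrayBackend> {
    /// Backend-specific accessor, available only for the ndarray backend.
    fn as_slice(&self) -> &[f64] {
        &self.primitive
    }
}

fn main() {
    let t = Tensor::<NdArrayBackend> { primitive: vec![1.0, 2.0] };
    println!("{:?}", t.as_slice());
}
```

The point is that such methods cannot live on the generic `Tensor<B>` itself; they are only well-defined for the one concrete backend, hence the idea of gating them behind a feature flag.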

Hey @kushalkolar 👋 I think there is definitely some interest, but we have also been limited by time so our efforts have been focused elsewhere. If you decide to move...

Accidentally closed this thinking we had full remainder support with the linked PRs, but actually this was only implemented for `tensor % scalar`. We still need element-wise remainder support.
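The distinction can be sketched with plain slices standing in for tensors: `tensor % scalar` applies one divisor everywhere, while element-wise remainder takes a divisor per element. This uses `rem_euclid` (always non-negative result) purely for illustration; the exact sign convention Burn settles on is a separate question.

```rust
/// tensor % scalar: one divisor for every element (already supported).
fn rem_scalar(lhs: &[f64], rhs: f64) -> Vec<f64> {
    lhs.iter().map(|a| a.rem_euclid(rhs)).collect()
}

/// Element-wise tensor % tensor: divisors vary per element (still needed).
fn rem_elementwise(lhs: &[f64], rhs: &[f64]) -> Vec<f64> {
    // No broadcasting in this sketch: shapes must match exactly.
    assert_eq!(lhs.len(), rhs.len(), "shapes must match");
    lhs.iter().zip(rhs).map(|(a, b)| a.rem_euclid(*b)).collect()
}

fn main() {
    println!("{:?}", rem_scalar(&[5.0, 7.0], 3.0));
    println!("{:?}", rem_elementwise(&[5.0, 7.0], &[3.0, 4.0]));
}
```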

The `input_tensor.rank` is incorrectly inferred as 0 here. This happened because of a preceding `Reshape` node. ![Image](https://github.com/user-attachments/assets/cada6eac-d361-4377-905d-fb5be4a82de3) `Reshape` takes the output shape as input, but with the current state...
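For context, the ONNX `Reshape` op receives its target shape as a second input tensor, so the output rank is simply the number of elements in that shape input, even when one dimension is left as `-1` to be inferred. A tiny sketch of that rule (the function name is illustrative, not burn-import's actual code):

```rust
/// Infer the output rank of an ONNX Reshape node from its shape input.
/// e.g. a shape input of `[2, -1, 4]` yields rank 3, regardless of
/// which dimension (if any) is marked `-1` for runtime inference.
fn reshape_output_rank(shape_input: &[i64]) -> usize {
    shape_input.len()
}

fn main() {
    // The -1 dimension is unknown, but the rank is still statically known.
    println!("rank = {}", reshape_output_rank(&[2, -1, 4]));
}
```

Shape inference that instead derives the rank from the data input (or drops it entirely) would explain the bogus rank of 0 propagated to the following node.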