Andreas Nachtmann
Hey! Please excuse my rather cautious approach. I'm a little bit burned by my corporate environment, where even minor changes lead to people screaming at you 😅 If you...
Please excuse the delay. Here is my proposed solution for #2924. In the end I did not change the return type of the backend trait methods and chose to panic instead, since...
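Since the comment names the design choice without showing it, here is a generic sketch of the trade-off described: keeping a trait method's signature unchanged and panicking on unsupported input, rather than changing every implementor to return a `Result`. The `Backend` trait, `op` method, and `Cpu` type here are hypothetical stand-ins, not burn's actual API.

```rust
// Hypothetical sketch: panic on invalid input instead of changing the
// trait's return type to Result across all backend implementations.
trait Backend {
    fn op(&self, x: i32) -> i32;
}

struct Cpu;

impl Backend for Cpu {
    fn op(&self, x: i32) -> i32 {
        // Panicking here keeps the signature stable for every backend.
        assert!(x >= 0, "op: negative input is not supported");
        x * 2
    }
}

fn main() {
    assert_eq!(Cpu.op(3), 6);
}
```

The upside is that callers on the happy path stay untouched; the downside is that the failure becomes a runtime panic rather than a compile-time-visible error channel.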
Thank you for taking the time to review my PR, and sorry for that sloppy oversight! I'll try to address those issues as soon as possible.
@laggui I tried to address all issues/comments, and the pipeline seems to be happy as well, so it should be ready for another review. Thanks in advance!
Hey all, I seem to have the same issue. Every now and then, and seemingly at random, I get NaN as the loss. Here is an example that caused the loss...
```rust
use burn::{
    backend::{Autodiff, Wgpu},
    nn::loss::BinaryCrossEntropyLossConfig,
    tensor::{Int, Tensor, TensorData},
};

#[test]
fn test_binary_cross_entropy_preds_almost_correct() {
    type MyBackend = Wgpu;
    type MyAutodiffBackend = Autodiff<MyBackend>;

    let device = burn::backend::wgpu::WgpuDevice::default();
    let preds = Tensor::<...>::from_floats(...
```
Currently it is.

```
thread 'test_binary_cross_entropy_preds_almost_correct' panicked at crates\sketchy_pix2pix\tests\bce_test.rs:23:17:
Tensors are not approx eq:
  => Position 0: NaN != 0.000000849366756483505 | difference NaN > tolerance 0.00000010000000000000004
stack backtrace:
   0: std::panicking::begin_panic_handler...
```
> Ahhh ok I can reproduce on 0.16
>
> ```
> ---- test_binary_cross_entropy_preds_almost_correct stdout ----
>
> thread 'test_binary_cross_entropy_preds_almost_correct' panicked at src\main.rs:23:17:
> Tensors are not approx eq:
> ...
> ```
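Since the thread is about where the NaN comes from, here is a plain-Rust sketch (not burn's internals, just the textbook BCE formula) of one common way binary cross-entropy produces NaN: when a prediction saturates to exactly 0.0 or 1.0 in f32, one of the log terms becomes `ln(0) = -inf`, and the zero-weighted term turns into `0 * inf = NaN`. The `eps` value and the clamp-based fix are assumptions for illustration, not what burn actually does.

```rust
// Textbook BCE: -(t * ln(p) + (1 - t) * ln(1 - p)).
fn bce(pred: f32, target: f32) -> f32 {
    -(target * pred.ln() + (1.0 - target) * (1.0 - pred).ln())
}

// Common mitigation: clamp predictions away from exactly 0 and 1.
fn bce_clamped(pred: f32, target: f32) -> f32 {
    let eps = 1e-7_f32; // illustrative epsilon, not burn's choice
    bce(pred.clamp(eps, 1.0 - eps), target)
}

fn main() {
    // A sigmoid output very close to 1 can round to exactly 1.0 in f32.
    let saturated = 1.0_f32;
    println!("naive:   {}", bce(saturated, 1.0)); // NaN: 0 * ln(0)
    println!("clamped: {}", bce_clamped(saturated, 1.0)); // small finite value
}
```

If burn's loss does something equivalent internally, the NaN may instead come from elsewhere (e.g. upstream gradients), so this is only one plausible mechanism matching the symptom.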
I have the same issue in my nvim setup using the default configuration provided by the [official repo](https://github.com/neovim/nvim-lspconfig/blob/master/lua/lspconfig/configs/taplo.lua)

```
[ ERROR][2025-06-17 18:03:41] ...p/_transport.lua:36  "rpc"  "taplo"  "stderr"  ERROR taplo:completion:possible_schemas_from: failed to...
```
First of all: thanks for the response!

> Hmm, I'm not sure this is the cause of your issue. If the ID is not present, then `self.tensors.get(id)?` would propagate `None`...
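For readers less familiar with the `?`-on-`Option` behavior the quote refers to, here is a minimal self-contained sketch. The `Store` struct, the `u64` key, and `Vec<f32>` value are hypothetical stand-ins for the real types in the discussion; the point is only that `get(id)?` returns `None` early instead of panicking when the key is missing.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the structure being discussed.
struct Store {
    tensors: HashMap<u64, Vec<f32>>,
}

impl Store {
    fn get_tensor(&self, id: u64) -> Option<&Vec<f32>> {
        // `?` propagates None to the caller if `id` is absent; no panic.
        let t = self.tensors.get(&id)?;
        Some(t)
    }
}

fn main() {
    let store = Store { tensors: HashMap::new() };
    assert!(store.get_tensor(42).is_none()); // missing id -> None
}
```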