Dilshod Tadjibaev
@finnkauski no, I didn't. I am using the Burn framework, which now supports the WebGPU and Candle CPU backends, both of which compile to WASM. Here is an example I created: https://github.com/tracel-ai/burn/tree/main/examples/image-classification-web
I came across this as well. It has to do with the overhead of copying data to the GPU and running it there. I was testing a small model (MNIST) and it...
> ```
> [nix-shell:~/git/tch-rs]$ python -c 'import torch; print(torch.__version__)'
> 2.0.1
> ```

@n8henrie There was an issue with the exact version, 2.0.0 vs 2.0.1. Please see an issue ticket...
Sorry, I missed this flag, so I am not sure if it is related. I am still looking into this.
@simlay Does this mean it's possible to build and link the torch lib statically? That is what I originally filed this ticket to find out: https://github.com/LaurentMazare/tch-rs/issues/647
@rekka Can you please merge this update and push a new release? We are getting a build warning saying that a future version of Rust will reject the current code:

```
Finished...
```
I have implemented `into_scalar` for bool tensors as well. I don't remember which PR it was part of, but it's on main currently.
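For anyone landing here, a minimal usage sketch of what that enables; the `NdArray` backend choice and the exact `from_data` signature (taking a device) are assumptions about a recent Burn version, not necessarily the API as of that PR:

```rust
use burn::backend::NdArray;
use burn::tensor::{Bool, Tensor};

fn main() {
    let device = Default::default();
    // Build a single-element bool tensor, then collapse it to a Rust scalar.
    let t = Tensor::<NdArray, 1, Bool>::from_data([true], &device);
    let flag: bool = t.into_scalar(); // now supported for bool tensors too
    assert!(flag);
}
```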
The backend type already carries the Device type information: [`burn-tensor/src/tensor/backend/base.rs`](https://github.com/burn-rs/burn/blob/9aca1837c205feff85b0df457fb8e2f7aa507134/burn-tensor/src/tensor/backend/base.rs#L4). What problem did you come across? Can you give a working example? It'd be easier to understand...
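To make the point concrete, here is a simplified sketch of the pattern in that file; the bounds and names are trimmed down for illustration and are not the actual definition:

```rust
// The backend trait exposes the device as an associated type, so generic
// code can name `B::Device` without threading a separate type parameter.
pub trait Backend: Sized + 'static {
    type Device: Clone + Default + core::fmt::Debug;
    // ...elem types, tensor primitives, and ops are elided here.
}

// Generic code learns the device type from the backend type alone.
fn clone_device<B: Backend>(device: &B::Device) -> B::Device {
    device.clone()
}
```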
> For example, running the following code yields an error that the two tensors are on different devices when trying to add them together. It would be great if this...
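A minimal reproduction sketch of that situation; the `WgpuDevice` variants and the device-taking constructors here are my assumptions about the Burn API, not the reporter's exact code:

```rust
use burn::backend::wgpu::{Wgpu, WgpuDevice};
use burn::tensor::Tensor;

fn main() {
    // Two distinct devices; the variants are placeholders for whatever
    // hardware is actually present on the machine.
    let gpu = WgpuDevice::DiscreteGpu(0);
    let cpu = WgpuDevice::Cpu;

    let a = Tensor::<Wgpu, 1>::from_floats([1.0, 2.0], &gpu);
    let b = Tensor::<Wgpu, 1>::from_floats([3.0, 4.0], &cpu);

    // `a + b` errors here because the operands live on different devices;
    // moving one of them to the other's device first sidesteps the problem.
    let c = a + b.to_device(&gpu);
    println!("{c:?}");
}
```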
I like option 2 as well. I would be happy to see if we could achieve this without a lot of disruption. If it only affects the backend types...