dfdx
Deep learning in Rust, with shape checked tensors and neural networks
Please add a method for splitting on a given axis, returning an iterator over subtensors selected in that axis. For example, suppose `mytensor` is this `2x3` tensor: ```rust [ [1,...
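A hedged sketch of the behaviour the request describes, written against a plain 2x3 nested array rather than the dfdx tensor types; `split_axis` is a hypothetical name, the results are collected into `Vec`s for simplicity, and the example values extend the truncated `[ [1,...` above:

```rust
// Hypothetical illustration of the requested behaviour (not the dfdx API):
// selecting subtensors of a 2x3 value along a chosen axis.
fn split_axis(t: &[[f32; 3]; 2], axis: usize) -> Vec<Vec<f32>> {
    match axis {
        // axis 0: each row becomes a length-3 subtensor
        0 => t.iter().map(|row| row.to_vec()).collect(),
        // axis 1: each column becomes a length-2 subtensor
        1 => (0..3).map(|c| t.iter().map(|row| row[c]).collect()).collect(),
        _ => panic!("a 2x3 tensor only has axes 0 and 1"),
    }
}

fn main() {
    let t = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]];
    assert_eq!(split_axis(&t, 0), vec![vec![1.0, 2.0, 3.0], vec![4.0, 5.0, 6.0]]);
    assert_eq!(split_axis(&t, 1), vec![vec![1.0, 4.0], vec![2.0, 5.0], vec![3.0, 6.0]]);
}
```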
For some generic types that implement `Dim`, their values need to be subtracted from. Note: I haven't run any form of stress test on this, so this PR should be considered...
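As a rough illustration only (a hypothetical `DimLike` trait and `Const` type, not dfdx's actual `Dim`): subtracting from a runtime dimension is plain arithmetic, while a const-generic dimension carries its size in the type and needs extra generic machinery to shrink.

```rust
// Hypothetical sketch: two kinds of dimension, runtime and compile-time.
trait DimLike {
    fn size(&self) -> usize;
}

// A runtime dimension is just a usize; shrinking it is plain arithmetic.
impl DimLike for usize {
    fn size(&self) -> usize {
        *self
    }
}

// A compile-time dimension carries its size in the type.
struct Const<const N: usize>;

impl<const N: usize> DimLike for Const<N> {
    fn size(&self) -> usize {
        N
    }
}

// Subtracting from a runtime dim is trivial...
fn shrink_runtime(dim: usize, k: usize) -> usize {
    dim - k
}

fn main() {
    assert_eq!(shrink_runtime(5, 2), 3);
    // ...but shrinking Const<5> by 2 would have to produce a Const<3> *type*,
    // which is the sort of bookkeeping generic dimension code has to handle.
    assert_eq!(Const::<5>.size() - 2, 3);
}
```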
From https://github.com/rust-lang/stdarch/pull/1454, it seems that calling `_MM_SET_FLUSH_ZERO_MODE` results in "immediate Undefined Behavior", and as a result these functions were deprecated in 1.75.0. This PR removes the functions that include calls...
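For context, a sketch of the kind of call being removed, gated to x86_64 and using the `core::arch` intrinsic named above (the wrapper function name is made up for illustration):

```rust
#[cfg(target_arch = "x86_64")]
#[allow(deprecated)]
fn enable_flush_to_zero() {
    use core::arch::x86_64::{_MM_FLUSH_ZERO_ON, _MM_SET_FLUSH_ZERO_MODE};
    // Flush denormal results to zero by mutating MXCSR; this is the mutation
    // the stdarch PR flags as UB, hence the 1.75.0 deprecation.
    unsafe { _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON) };
}

fn main() {
    #[cfg(target_arch = "x86_64")]
    enable_flush_to_zero();
}
```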
This is weird since the exact opposite thing happened in #762, but it appears that now `__hmin` and `__hmax` are not being found. I tested driver versions 525 and 535...
1. Adds an empty `.rustfmt.toml` so formatting for this project will be respected in workspaces with different formatting. 2. Adds `unstack`, which is the inverse of `stack`. Note: I've never...
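To make the `stack`/`unstack` relationship concrete, a small illustration with plain `Vec`s rather than the dfdx tensor API (the function names here are stand-ins, not the real methods):

```rust
// Stack N length-M vectors into one flat [N * M] buffer plus the item length M.
fn stack(parts: &[Vec<f32>]) -> (Vec<f32>, usize) {
    let m = parts.first().map_or(0, |p| p.len());
    let data = parts.iter().flat_map(|p| p.iter().copied()).collect();
    (data, m)
}

// Unstack splits the buffer back into the N original pieces.
fn unstack(data: &[f32], m: usize) -> Vec<Vec<f32>> {
    data.chunks(m).map(|c| c.to_vec()).collect()
}

fn main() {
    let parts = vec![vec![1.0, 2.0, 3.0], vec![4.0, 5.0, 6.0]];
    let (stacked, m) = stack(&parts);
    // Round trip: unstack(stack(x)) == x, which is what "inverse of stack" means.
    assert_eq!(unstack(&stacked, m), parts);
}
```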
# TapeGlobal Here's another tape-tracking API to consider, as implemented in client code: ```rust type FloatInner = f32; type DfdxDevice = Cpu; thread_local! { static TAPE_GLOBAL: once_cell::sync::Lazy< Mutex, > =...
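A simplified, self-contained sketch of the same thread-local global-tape pattern, using `RefCell` and a placeholder op type in place of the `once_cell`/`Mutex` machinery and dfdx types from the excerpt:

```rust
use std::cell::RefCell;

// Placeholder for a recorded backward operation.
type TapeOp = Box<dyn FnOnce()>;

thread_local! {
    // One tape per thread, so client code never threads a tape value
    // through every call explicitly.
    static TAPE_GLOBAL: RefCell<Vec<TapeOp>> = RefCell::new(Vec::new());
}

fn record(op: TapeOp) {
    TAPE_GLOBAL.with(|tape| tape.borrow_mut().push(op));
}

fn drain_tape() -> Vec<TapeOp> {
    TAPE_GLOBAL.with(|tape| tape.borrow_mut().drain(..).collect())
}

fn main() {
    record(Box::new(|| println!("backward op 1")));
    record(Box::new(|| println!("backward op 2")));
    // Replay in reverse order, as a backward pass would.
    for op in drain_tape().into_iter().rev() {
        op();
    }
}
```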
Hi, I'm unable to build this crate without changes for an old GPU (Tesla M40, compute capability 5.2, driver 462). Most of the problems are related to FP16 and the fact...
# What This PR Does Starts a general binary op implementation for `Webgpu`. This code is based on an [old, in-progress WGPU branch](https://github.com/DonIsaac/dfdx/tree/don/feat/wgpu2) I made a while ago. I've (mostly)...
I have two models, identical in structure, in which one is meant to be updated periodically from the other. Prior to #854 I did so using `TensorCollection::iter_tensors` as follows: ```rust...
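For illustration of the use case only (plain `Vec` parameters, not the `TensorCollection` visitor API), the goal is a periodic copy of every parameter from one model into its structurally identical twin:

```rust
// Stand-in model: one Vec per parameter tensor.
struct Model {
    params: Vec<Vec<f32>>,
}

// Overwrite the target's parameters with the source's; the two models share
// the same structure, so the parameter lists line up one-to-one.
fn sync_target(source: &Model, target: &mut Model) {
    for (src, dst) in source.params.iter().zip(target.params.iter_mut()) {
        dst.copy_from_slice(src);
    }
}

fn main() {
    let source = Model { params: vec![vec![1.0, 2.0], vec![3.0]] };
    let mut target = Model { params: vec![vec![0.0, 0.0], vec![0.0]] };
    for step in 1..=100 {
        // ...training updates `source` here...
        if step % 10 == 0 {
            sync_target(&source, &mut target);
        }
    }
    assert_eq!(target.params, source.params);
}
```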
Re-opening per #215: I was working on implementing AlexNet in dfdx per https://github.com/LaurentMazare/tch-rs/blob/main/src/vision/alexnet.rs, but have gotten a bit blocked by not having this layer type available. Would it be possible...