Hermes Trismegistus
### Feature description
There is a call for a feature that bundles the device a tensor is on into its type. This would ensure that device compatibilities become compile-time...
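The idea above can be sketched with a zero-cost phantom type parameter that carries the device at the type level, so mixing devices fails to compile rather than erroring at runtime. Everything below (`Cpu`, `Gpu`, `Tensor`, `add`) is a hypothetical illustration in plain Rust, not Burn's actual API:

```rust
use std::marker::PhantomData;

// Hypothetical device marker types (illustrative names, not Burn's).
#[allow(dead_code)]
struct Cpu;
#[allow(dead_code)]
struct Gpu;

// A tensor whose device is part of its type; PhantomData costs nothing at runtime.
struct Tensor<D> {
    data: Vec<f32>,
    _device: PhantomData<D>,
}

impl<D> Tensor<D> {
    fn new(data: Vec<f32>) -> Self {
        Tensor { data, _device: PhantomData }
    }
}

// An op only compiles when both operands share the same device type D.
fn add<D>(a: &Tensor<D>, b: &Tensor<D>) -> Tensor<D> {
    let data = a.data.iter().zip(&b.data).map(|(x, y)| x + y).collect();
    Tensor::new(data)
}

fn main() {
    let a: Tensor<Cpu> = Tensor::new(vec![1.0, 2.0]);
    let b: Tensor<Cpu> = Tensor::new(vec![3.0, 4.0]);
    let c = add(&a, &b);
    assert_eq!(c.data, vec![4.0, 6.0]);
    // let g: Tensor<Gpu> = Tensor::new(vec![0.0]);
    // add(&a, &g); // would be rejected by the compiler: device mismatch
}
```

The commented-out call shows the payoff: a CPU/GPU mismatch is caught at compile time instead of surfacing as a runtime device error.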
### Feature description
It would be great if burn-wgpu supported f16. Is there a timeline for this?
### Feature motivation
Large models such as Stable Diffusion exceed wgpu's maximum buffer...
### Feature description
Support deriving `Module` for enums to enable code such as
```
#[derive(Module, Debug, Clone)]
enum Layer {
    Slow(LayerBig),
    Fast(LayerSmall),
}
```
### Feature motivation
This feature would...
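For context on what such a derive would need to generate, here is a hand-written sketch of match-based dispatch over the enum's variants. `LayerBig` and `LayerSmall` are stand-ins with a made-up `forward`, not Burn modules, and the `impl` shows the shape of generated code, not Burn's actual expansion:

```rust
// Illustrative stand-ins for the modules named in the request.
#[derive(Debug, Clone)]
struct LayerBig { scale: f32 }
#[derive(Debug, Clone)]
struct LayerSmall { scale: f32 }

#[derive(Debug, Clone)]
enum Layer {
    Slow(LayerBig),
    Fast(LayerSmall),
}

impl Layer {
    // A derived Module impl would emit match-based dispatch like this
    // for forward passes, parameter visiting, serialization, etc.
    fn forward(&self, x: f32) -> f32 {
        match self {
            Layer::Slow(l) => x * l.scale,
            Layer::Fast(l) => x * l.scale,
        }
    }
}

fn main() {
    let layer = Layer::Fast(LayerSmall { scale: 2.0 });
    assert_eq!(layer.forward(3.0), 6.0);
}
```

The enum lets callers pick a variant at runtime while keeping a single concrete type, which is exactly the dispatch a struct-only `Module` derive cannot express.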
# Summary
Caching in whisper-burn becomes corrupted when the Whisper logits output is not cloned before computing log probabilities. This issue occurs on both CUDA and CPU when using burn-tch,...
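The hazard reduces to a small aliasing sketch: if log-probabilities are computed in place on the cached logits buffer, the cache is silently corrupted for the next decoding step, whereas cloning first leaves it intact. This is illustrative plain Rust with a log-softmax-style transform, not whisper-burn's actual code:

```rust
// In-place log-softmax over a logits vector: x_i -> x_i - ln(sum_j exp(x_j)).
fn log_probs_in_place(logits: &mut Vec<f32>) {
    let sum: f32 = logits.iter().map(|x| x.exp()).sum();
    for x in logits.iter_mut() {
        *x -= sum.ln();
    }
}

fn main() {
    // Pretend these logits live in a decoding cache and will be reused.
    let cached_logits = vec![0.0_f32, 0.0];

    // Correct: operate on a clone so the cached values survive.
    let mut working = cached_logits.clone();
    log_probs_in_place(&mut working);

    assert_eq!(cached_logits, vec![0.0, 0.0]); // cache is unchanged
    // Two equal logits -> each log-probability is ln(0.5).
    assert!((working[0] - (0.5_f32).ln()).abs() < 1e-6);
}
```

Had `log_probs_in_place` been called on the cached buffer directly, the second assertion's values would have overwritten the cache, which is the corruption pattern the summary describes.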
Add `no_grad` to the sampling function to improve inference efficiency.
I get the following error message on bend version 0.2.7 when running `bend run test.bend`:
```
attempt to divide by zero
note: run with `RUST_BACKTRACE=1` environment variable to display a...
```
### Reproducing the behavior
Running `bend run test.bend` on
```bend
def get_string():
  return "Hello, World!"

def main():
  val = get_string()
  return val
```
yields `Result: get_string` rather than `Result: "Hello,...
When I added `candle-flash-attn` to my .toml file, the build process seems to hang on
```
Building [=======================> ] 114/118: candle-flash-attn(build)
```
and the compilation doesn't proceed. My .toml file is
```
[package]...
```