Benjamin Warner

Results: 10 issues by Benjamin Warner

Update the Optimizer docs to clarify differences from PyTorch optimizers. Fix the SGD with Momentum Weight Decay comparison test/example. Move the `OptimWrapper` usage example to directly after the `OptimWrapper` definition.
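For context on the weight decay comparison, here is a hedged, framework-free sketch (plain Python floats, hypothetical values) of the two conventions such docs compare: PyTorch-style L2 weight decay folded into the gradient versus decoupled (AdamW-style) weight decay. For a single SGD step without momentum the two coincide; they diverge once momentum accumulates the decay term.

```python
# Hypothetical scalar example: one SGD step, no momentum.
lr, wd = 0.1, 0.01   # learning rate, weight decay
w, grad = 2.0, 0.5   # weight and its gradient

# L2-style decay: wd * w is added to the gradient (PyTorch SGD default)
w_l2 = w - lr * (grad + wd * w)

# Decoupled decay: the weight is shrunk directly (AdamW-style)
w_decoupled = w * (1 - lr * wd) - lr * grad

# Without momentum both updates are algebraically identical; with
# momentum, the L2 decay term also enters the momentum buffer and the
# two schemes produce different trajectories.
```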

`show_doc` errors out when using Python 3.10 union sub-type hints. This occurs with both `typing.List` (`List[Tensor|int]`) and the built-in `list` (`list[Tensor|int]`). Minimal reproducible code and error: ```python from __future__ import annotations from nbdev.showdoc...

`show_doc` currently has rendering issues with TorchScript functions. ```python import torch from nbdev.showdoc import * @torch.jit.script def test(p:torch.Tensor): "Test torchscript function" return p show_doc(test) ``` results in the following output,...


Some fastai loss functions, such as `LabelSmoothingCrossEntropyFlat`, are not picklable, and thus not exportable via `Learner.export`. But most, including `LabelSmoothingCrossEntropy` and other flattened losses, appear to pickle without issue. This minimal...
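One common reason a wrapped loss fails to pickle, sketched with a hypothetical stand-in (an illustration of the failure class, not necessarily the actual fastai cause): pickle serializes instances by reference to an importable class, so instances of classes created inside a function body cannot be pickled.

```python
import pickle

def make_flat_loss(base):
    # Hypothetical stand-in: a loss class created at call time, so
    # its qualified name contains '<locals>' and pickle cannot look
    # it up as a module attribute
    class FlatLoss:
        def __init__(self, base):
            self.base = base
        def __call__(self, x):
            return self.base(x)
    return FlatLoss(base)

loss = make_flat_loss(abs)
try:
    pickle.dumps(loss)
    picklable = True
except (pickle.PicklingError, AttributeError):
    picklable = False
```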

The Saturation transform converts the input image to grayscale using RGB Luma 601-2 when the input image is already in logit space. For an accurate conversion, the image would either...
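Why the order matters, in a hedged stdlib sketch with scalar channels and hypothetical values: applying the Rec. 601 luma weights to logit-space values does not give the logit of the grayscale pixel, because `logit` is nonlinear.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def logit(p):
    return math.log(p / (1 - p))

def luma_601(r, g, b):
    # ITU-R BT.601 luma weights (RGB Luma 601-2)
    return 0.299 * r + 0.587 * g + 0.114 * b

r, g, b = 0.2, 0.6, 0.8  # hypothetical pixel in image (sigmoid) space

# Grayscale taken directly on logit-space values (the inaccurate path)
in_logit_space = luma_601(logit(r), logit(g), logit(b))

# Grayscale in image space, then mapped back to logit space
correct = logit(luma_601(r, g, b))

gap = abs(in_logit_space - correct)  # nonzero: the two disagree
```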

Adding a `delegates` decorator to a method prevents `TypeDispatch` from creating the correct dispatching table. Without `delegates`, the dispatch for `TensorAudio.create` is correct: ```python class TensorAudio(TensorBase): @classmethod def create(cls, fn:str|Path,...
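A stdlib illustration of the underlying hazard (not fastcore's actual `delegates` implementation): dispatch tables built from `__annotations__` break when a decorator returns a wrapper that does not carry the original annotations forward.

```python
import functools

def naive_delegates(fn):
    # A wrapper that forgets to copy metadata: annotations vanish,
    # so anything dispatching on type hints sees an untyped callable
    def inner(*args, **kwargs):
        return fn(*args, **kwargs)
    return inner

def careful_delegates(fn):
    # functools.wraps copies __annotations__, so a dispatch table
    # reading type hints still sees the original parameter types
    @functools.wraps(fn)
    def inner(*args, **kwargs):
        return fn(*args, **kwargs)
    return inner

def create(fn: str) -> int:
    return len(fn)

lost = naive_delegates(create).__annotations__
kept = careful_delegates(create).__annotations__
```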

This PR modifies the FFCV `Loader` and `EpochIterator` to create one set of CUDA streams and reuse them each epoch instead of creating new CUDA streams every epoch. The current...
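The create-once-and-reuse pattern, sketched without CUDA (the `Stream` placeholder and `Loader` shape here are hypothetical; in the PR the cached objects would be `torch.cuda.Stream` instances):

```python
class Stream:
    """Placeholder for torch.cuda.Stream."""

class Loader:
    def __init__(self, num_streams):
        self.num_streams = num_streams
        self._streams = None  # created lazily, exactly once

    def get_streams(self):
        # Reuse the same streams every epoch instead of allocating
        # a fresh set per epoch
        if self._streams is None:
            self._streams = [Stream() for _ in range(self.num_streams)]
        return self._streams

loader = Loader(num_streams=2)
epoch1 = loader.get_streams()
epoch2 = loader.get_streams()  # same objects, no reallocation
```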

# Items to be completed for full Tensor Datasets & DataLoaders integration:
- [x] Datasets
- [x] DataLoaders
- [ ] Reversing transforms (i.e. from 6 to the proper string...

This PR adds support for compiling models with dynamic shapes (`dynamic=True`) to almost all models with SDPAttention implementations that currently do not support dynamic shapes. #30442 added support for Llama,...

This PR replaces the fastxtend optimizers with [optimi](https://github.com/warner-benjamin/optimi) optimizers while retaining support for bitsandbytes 8-bit optimizers. `optimizer.utils` adds a mixin class `` to assist in converting any PyTorch optimizer into...
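The mixin pattern being described, in a hedged sketch (all names here are hypothetical; the actual mixin name is elided above): a small class placed ahead of an existing optimizer in the MRO so it can wrap `step` via cooperative `super()` without rewriting the optimizer itself.

```python
class ConvertMixin:
    """Hypothetical mixin: hooks into step() of whatever optimizer
    it is mixed into, via cooperative super()."""
    def step(self):
        self.pre_step()        # extra behavior the mixin adds
        return super().step()  # then the optimizer's own step

    def pre_step(self):
        self.prepped = True

class PlainSGD:
    """Stand-in for an existing PyTorch-style optimizer."""
    def __init__(self):
        self.steps = 0
    def step(self):
        self.steps += 1
        return self.steps

# Mixing in requires no changes to PlainSGD itself
class ConvertedSGD(ConvertMixin, PlainSGD):
    pass

opt = ConvertedSGD()
result = opt.step()
```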