Christoph Heindl
```
dataset -> model -> loss

model:
    input: BxIxT
    input_transform: fn(BxKxT) -> BxIxT
    condition: BxCxT
    output: BxQxT

def loss(inputs, outputs):
    if 't' in batch:
        targets = batch['t'][..., 1:]  # BxQxT...
```
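To make the idea concrete, here is a minimal sketch of how such a loss could shift targets by one step; the batch keys (`'x'`, `'t'`), the exact shapes, and the stand-in model are assumptions for illustration, not the repository's actual API:

```
import torch
import torch.nn.functional as F

def loss(model, batch):
    # Hypothetical batch layout: 'x' is BxIxT encoded input, 't' is BxQxT one-hot targets.
    x = batch['x']
    logits = model(x[..., :-1])        # BxQx(T-1): next-step predictions
    targets = batch['t'][..., 1:]      # BxQx(T-1): targets shifted by one step
    return F.cross_entropy(logits, targets.argmax(dim=1))  # class indices along Q

# Toy usage with a 1x1 convolution standing in for the model
model = torch.nn.Conv1d(8, 16, kernel_size=1)
batch = {
    'x': torch.randn(2, 8, 64),
    't': F.one_hot(torch.randint(0, 16, (2, 64)), 16).permute(0, 2, 1).float(),
}
print(loss(model, batch))
```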
Would that also work for different model output interpretations such as #24?
https://amaarora.github.io/2020/07/24/SeNet.html
Hey, yes, you are right, local conditioning is not fully supported right now. Let me explain the limitations:

- [in training](https://github.com/cheind/autoregressive/blob/main/autoregressive/wave.py#L408) you would need to check if the conditioning is...
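For illustration, here is a minimal sketch of the kind of check/alignment I mean, assuming the local conditioning arrives at a lower rate than the audio; the helper name and the interpolation choice are hypothetical, not code from the repo:

```
import torch
import torch.nn.functional as F

def align_local_condition(c: torch.Tensor, num_samples: int) -> torch.Tensor:
    # Hypothetical helper: upsample a BxCxT' local-conditioning signal to BxCxT,
    # so it can be added per time step inside the dilated layers during training.
    if c.shape[-1] == num_samples:
        return c
    return F.interpolate(c, size=num_samples, mode='linear', align_corners=False)
```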
Also add a bits-per-dim metric that measures how many bits are required to encode pixel intensities. See E.2 of https://arxiv.org/pdf/1705.07057.pdf and https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial11/NF_image_modeling.html
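For discrete outputs this boils down to converting the mean negative log-likelihood from nats to bits. A minimal sketch (the function name and tensor layout are assumptions):

```
import math
import torch
import torch.nn.functional as F

def bits_per_dim(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # logits: BxQx... over Q intensity levels, targets: Bx... integer class indices.
    # cross_entropy with reduction='mean' gives the average NLL in nats per dimension.
    nll_nats = F.cross_entropy(logits, targets, reduction='mean')
    return nll_nats / math.log(2)  # nats -> bits per dimension
```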
See also https://github.com/Rayhane-mamah/Tacotron-2/issues/155 for how to optimize based on the CDF.
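The gist there is to evaluate the CDF half a quantization bin above and below each target and take the difference as the probability mass. A rough sketch with a single logistic distribution (edge bins at the value range boundaries and other numerical details omitted; the function name is made up):

```
import torch

def discretized_logistic_log_prob(x, mean, log_scale, num_classes=256):
    # x, mean, log_scale: broadcastable tensors; x holds values scaled to [-1, 1].
    # Probability of the discrete level = CDF(x + half_bin) - CDF(x - half_bin).
    inv_scale = torch.exp(-log_scale)
    half_bin = 1.0 / (num_classes - 1)
    cdf_plus = torch.sigmoid(inv_scale * (x + half_bin - mean))
    cdf_minus = torch.sigmoid(inv_scale * (x - half_bin - mean))
    return torch.log((cdf_plus - cdf_minus).clamp_min(1e-12))
```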
My earlier post got lost, sorry: I do not own the PyPI package, so I cannot update it. Installing from GitHub is easy though:

```
pip install git+https://github.com/cheind/pydantic-numpy.git
```

which...
I updated the README and bumped the version to 1.4.0.
@bendichter thanks for your suggestions. Maybe we can work around the missing shape parameter using `typing.Annotated`; see https://stackoverflow.com/a/72585748/242594
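Roughly what I have in mind; the `Shape` marker and `validate_shapes` helper are made up for illustration and are not part of pydantic-numpy:

```
from dataclasses import dataclass
from typing import Annotated, get_type_hints

import numpy as np

@dataclass(frozen=True)
class Shape:
    # Hypothetical marker object carrying the expected shape as Annotated metadata.
    dims: tuple

Vector3 = Annotated[np.ndarray, Shape((3,))]

def validate_shapes(obj) -> None:
    # Read Annotated metadata from the fields and check the array shapes.
    hints = get_type_hints(type(obj), include_extras=True)
    for name, hint in hints.items():
        for meta in getattr(hint, '__metadata__', ()):
            if isinstance(meta, Shape):
                value = getattr(obj, name)
                if value.shape != meta.dims:
                    raise ValueError(f"{name}: expected shape {meta.dims}, got {value.shape}")

@dataclass
class Pose:
    position: Vector3

validate_shapes(Pose(position=np.zeros(3)))  # passes
```

Hooking the same metadata into pydantic's own validation would be the next step.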
Hey! I honestly don't know; I tested mostly on inward-facing synthetic scenes. You could try, but I guess you'd encounter some glitches.